[ { "msg_contents": "Hi there,\n\nI have a simple aggregate query: SELECT count(\"PK_ID\") AS \"b1\" FROM \"tbA\" \nWHERE \"PK_ID\" > \"f1\"( 'c1' ), which has the following execution plan:\n\"Aggregate (cost=2156915.42..2156915.43 rows=1 width=4)\"\n\" -> Seq Scan on \"tbA\" (cost=0.00..2137634.36 rows=7712423 width=4)\"\n\" Filter: (\"PK_ID\" > \"f1\"('c1'::character varying))\"\n\nI tried to get the same result with the following query:\nSELECT (\n SELECT count(\"PK_ID\") AS \"b1\" FROM \"tbA\" ) -\n (\n SELECT count(\"PK_ID\") AS \"b1\"\n FROM \"tbA\"\n WHERE \"PK_ID\" <= \"f1\"( 'c1' )\n )\nwith the execution plan:\n\"Result (cost=248952.95..248952.96 rows=1 width=0)\"\n\" InitPlan\"\n\" -> Aggregate (cost=184772.11..184772.12 rows=1 width=4)\"\n\" -> Seq Scan on \"tbA\" (cost=0.00..165243.49 rows=7811449 \nwidth=4)\"\n\" -> Aggregate (cost=64180.81..64180.82 rows=1 width=4)\"\n\" -> Index Scan using \"tbA_pkey\" on \"tbA\" (cost=0.25..63933.24 \nrows=99026 width=4)\"\n\" Index Cond: (\"PK_ID\" <= \"f1\"('c1'::character varying))\"\n\nHow do you explain the cost is about ten times lower in the 2nd query than \nthe first ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Tue, 13 Apr 2010 15:32:57 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "count is ten times faster" }, { "msg_contents": "\n> How do you explain the cost is about ten times lower in the 2nd query \n> than the first ?\n\nFunction call cost ?\n\nCan you EXPLAIN ANALYZE ?\n", "msg_date": "Tue, 13 Apr 2010 20:09:27 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count is ten times faster" }, { "msg_contents": "\"Sabin Coanda\" <[email protected]> wrote:\n \n> How do you explain the cost is about ten times lower in the 2nd\n> query than the first ?\n \nTo elaborate on Pierre's answer:\n \nIn the first query, you scan the entire table and execute the \"f1\"\nfunction on each row. In the second query you pass the entire table\njust counting visible tuples and then run the \"f1\" function once,\nand use the resulting value to scan an index on which it expects to\nfind one row. \n \nIt estimates the cost of running the \"f1\" function 7.7 million times\nas being roughly ten times the cost of scanning the table. Now,\nthis is all just estimates; if they don't really reflect the\nrelative cost of *running* the two queries, you might want to adjust\ncosts factors -- perhaps the estimated cost of the \"f1\" function.\n \n-Kevin\n", "msg_date": "Wed, 14 Apr 2010 09:16:16 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count is ten times faster" } ]
[ { "msg_contents": "I have a lot of centos servers which are running postgres. Postgres isn't used\nthat heavily on any of them, but lately, the stats collector process keeps\ncausing tons of IO load. It seems to happen only on servers with centos 5.\nThe versions of postgres that are running are:\n\n8.1.18\n8.2.6\n8.3.1\n8.3.5\n8.3.6\n8.3.7\n8.3.8\n8.3.9\n8.4.2\n8.4.3\n\nI've tried turning off everything under RUNTIME STATISTICS in postgresql.conf\nexcept track_counts (since auto vacuum says it needs it), but it seems to have\nlittle affect on the IO caused by the stats collector.\n\nHas anyone else noticed this? Have there been recent kernel changes\nthat could cause this that anyone knows about? Since we haven't touched\npostgres on these boxes since they were setup initially, I'm a bit baffled as\nto what might be causing the problem, and why I can't make it go away short of\nkill -STOP.\n\nAny suggestions would be much appreciated!\n", "msg_date": "Tue, 13 Apr 2010 11:55:18 -0400", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "stats collector suddenly causing lots of IO" } ]
[ { "msg_contents": "I have a lot of centos servers which are running postgres. Postgres isn't used\nthat heavily on any of them, but lately, the stats collector process keeps\ncausing tons of IO load. It seems to happen only on servers with centos 5.\nThe versions of postgres that are running are:\n\n8.1.18\n8.2.6\n8.3.1\n8.3.5\n8.3.6\n8.3.7\n8.3.8\n8.3.9\n8.4.2\n8.4.3\n\nI've tried turning off everything under RUNTIME STATISTICS in postgresql.conf\nexcept track_counts (since auto vacuum says it needs it), but it seems to have\nlittle affect on the IO caused by the stats collector.\n\nHas anyone else noticed this? Have there been recent kernel changes\nthat could cause this that anyone knows about? Since we haven't touched\npostgres on these boxes since they were setup initially, I'm a bit baffled as\nto what might be causing the problem, and why I can't make it go away short of\nkill -STOP.\n\nAny suggestions would be much appreciated!\n\n", "msg_date": "Tue, 13 Apr 2010 13:01:06 -0400", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "stats collector suddenly causing lots of IO" }, { "msg_contents": "2010/4/13 Chris <[email protected]>:\n> I have a lot of centos servers which are running postgres.  Postgres isn't used\n> that heavily on any of them, but lately, the stats collector process keeps\n> causing tons of IO load.  It seems to happen only on servers with centos 5.\n> The versions of postgres that are running are:\n>\n> 8.1.18\n> 8.2.6\n> 8.3.1\n> 8.3.5\n> 8.3.6\n> 8.3.7\n> 8.3.8\n> 8.3.9\n> 8.4.2\n> 8.4.3\n>\n> I've tried turning off everything under RUNTIME STATISTICS in postgresql.conf\n> except track_counts (since auto vacuum says it needs it), but it seems to have\n> little affect on the IO caused by the stats collector.\n>\n> Has anyone else noticed this?  Have there been recent kernel changes\n> that could cause this that anyone knows about?  Since we haven't touched\n> postgres on these boxes since they were setup initially, I'm a bit baffled as\n> to what might be causing the problem, and why I can't make it go away short of\n> kill -STOP.\n>\n> Any suggestions would be much appreciated!\n\nstats file is writed to disk every 500ms (can be change while building\npostgres) but it have been improved in 8.4 and should be write only if\nneeded.\n\nIn 8.4 you can change the directory where to write the stat file with\nthe config param : stats_temp_directory Perhaps have a test and\nchange the filesystem (you might want to try a ramdisk and another fs\n- ext3 -XFS-ext4 depending of your kernel) and see if it does change\nsomething in your IO load.\n\nAnyway it looks like it is centos 5 relative so what is your curernt\nrunning kernel ? (and what FS )\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Tue, 13 Apr 2010 20:13:50 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Chris wrote:\n> I have a lot of centos servers which are running postgres. Postgres isn't used\n> that heavily on any of them, but lately, the stats collector process keeps\n> causing tons of IO load. 
It seems to happen only on servers with centos 5.\n\nDoes this correlate to an increase in size of the pgstat.stat file?\nMaybe you could try resetting stats, so that the file goes back to an\ninitial size and is slowly repopulated. I'd suggest monitoring the size\nof the stats file, just in case there's something abnormal with it.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 13 Apr 2010 17:55:22 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Chris <[email protected]> writes:\n> I have a lot of centos servers which are running postgres. Postgres isn't used\n> that heavily on any of them, but lately, the stats collector process keeps\n> causing tons of IO load. It seems to happen only on servers with centos 5.\n> The versions of postgres that are running are:\n\n> 8.1.18\n> 8.2.6\n> 8.3.1\n> 8.3.5\n> 8.3.6\n> 8.3.7\n> 8.3.8\n> 8.3.9\n> 8.4.2\n> 8.4.3\n\nDo these different server versions really all show the problem to the\nsame extent? I'd expect 8.4.x in particular to be cheaper than the\nolder branches. Are their pgstat.stat files all of similar sizes?\n(Note that 8.4.x keeps pgstat.stat under $PGDATA/pg_stat_tmp/ whereas\nin earlier versions it was under $PGDATA/global/.)\n\nIf your applications create/use/drop a lot of tables (perhaps temp\ntables) then bloat of the pgstat.stat file is to be expected, but\nit should get cleaned up by vacuum (including autovacuum). What is\nyour vacuuming policy on these servers ... do you use autovacuum?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2010 18:21:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "On Thu, Apr 15, 2010 at 6:31 PM, Tom Lane <[email protected]> wrote:\n> Chris <[email protected]> writes:\n>> I have a lot of centos servers which are running postgres.  Postgres isn't used\n>> that heavily on any of them, but lately, the stats collector process keeps\n>> causing tons of IO load.  It seems to happen only on servers with centos 5.\n>\n> Say, I just realized that both of you are complaining about stats\n> collector overhead on centos 5 servers.  I hadn't been thinking in terms\n> of OS-specific causes, but maybe that is what we need to consider.\n> Can you tell me the exact kernel versions you are seeing these problems\n> with?\n\nuname -a says \"... 2.6.18-92.1.13.el5 #1 SMP ... x86_64\", and it's CentOS 5.2.\n\nI'm not sure whether this is related to the stats collector problems\non this machine, but I noticed alarming table bloat in the catalog\ntables pg_attribute, pg_attrdef, pg_depend, and pg_type. Perhaps this\nhas happened slowly over the past few months, but I discovered the\nbloat when I ran the query from:\nhttp://pgsql.tapoueh.org/site/html/news/20080131.bloat.html\n\non the most-active database on this server (OID 16389 from the\npgstat.stat I sent in). See attached table_bloat.txt. 
The autovacuum\nsettings for this server haven't been tweaked from the default; they\nprobably should have been, given the heavy bulk updates/inserts done.\nMaybe there's another cause for this extreme catalog bloat, besides\nthe weak autovacuum settings, though.\n\nTable sizes, according to pg_size_pretty(pg_total_relation_size(...)):\n * pg_attribute: 145 GB\n * pg_attrdef: 85 GB\n * pg_depend: 38 GB\n * pg_type: 3465 MB\n\nI'll try to send in strace outputs later today.\n\nJosh", "msg_date": "Fri, 16 Apr 2010 10:39:07 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Josh Kupershmidt <[email protected]> writes:\n> I'm not sure whether this is related to the stats collector problems\n> on this machine, but I noticed alarming table bloat in the catalog\n> tables pg_attribute, pg_attrdef, pg_depend, and pg_type.\n\nHmm. That makes me wonder if autovacuum is functioning properly at all.\nWhat does pg_stat_all_tables show for the last vacuum and analyze times\nof those tables? Try something like\n\nselect relname,n_live_tup,n_dead_tup,last_vacuum,last_autovacuum,last_analyze,last_autoanalyze from pg_stat_all_tables where schemaname = 'pg_catalog' order by 1;\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 11:23:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "Chris <[email protected]> writes:\n> After the file was made larger and I stopped the vacuum process, I started\n> seeing the problem. All other postgress processes were quiet, but the stats\n> collector was constantly causing anywhere from 20-60 of the IO on the server.\n> Since all the other postgres processes weren't really doing anything, and it is\n> a busy web server which is predominately MySQL, I'm fairly curious as to what\n> it is doing.\n\nYeah, the stats collector rewrites the stats file every half second, if\nthere have been any changes since last time --- so the bigger the file,\nthe more overhead. (8.4 is smarter about this, but that doesn't help\nyou on 8.3.)\n\n> I straced the stats collector process. I wasn't sure what else to trace as\n> there wasn't a single other postgres process doing anything.\n\nThat strace doesn't really prove much; it's what I'd expect. Here's\nwhat to do: start a PG session, and strace that session's backend *and*\nthe stats collector while you manually do VACUUM some-small-table.\nThe VACUUM command should try to send some messages to the stats collector\nprocess. I'm wondering if those get dropped somehow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 11:29:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "On Fri, Apr 16, 2010 at 11:23 AM, Tom Lane <[email protected]> wrote:\n> Josh Kupershmidt <[email protected]> writes:\n>> I'm not sure whether this is related to the stats collector problems\n>> on this machine, but I noticed alarming table bloat in the catalog\n>> tables pg_attribute, pg_attrdef, pg_depend, and pg_type.\n>\n> Hmm.  That makes me wonder if autovacuum is functioning properly at all.\n> What does pg_stat_all_tables show for the last vacuum and analyze times\n> of those tables?  
Try something like\n>\n> select relname,n_live_tup,n_dead_tup,last_vacuum,last_autovacuum,last_analyze,last_autoanalyze from pg_stat_all_tables where schemaname = 'pg_catalog' order by 1;\n>\n\nOutput attached. Note that I ran pg_stat_reset() a few days ago.\nJosh", "msg_date": "Fri, 16 Apr 2010 11:30:27 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Josh Kupershmidt <[email protected]> writes:\n> On Fri, Apr 16, 2010 at 11:23 AM, Tom Lane <[email protected]> wrote:\n>> Hmm. �That makes me wonder if autovacuum is functioning properly at all.\n>> What does pg_stat_all_tables show for the last vacuum and analyze times\n>> of those tables? �Try something like\n>> \n>> select relname,n_live_tup,n_dead_tup,last_vacuum,last_autovacuum,last_analyze,last_autoanalyze from pg_stat_all_tables where schemaname = 'pg_catalog' order by 1;\n\n> Output attached. Note that I ran pg_stat_reset() a few days ago.\n\nWow. Well, we have a smoking gun here: for some reason, autovacuum\nisn't running, or isn't doing its job if it is. If it's not running\nat all, that would explain failure to prune the stats collector's file\ntoo.\n\nIs there anything in the postmaster log that would suggest autovac\ndifficulties?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 11:41:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "Josh Kupershmidt <[email protected]> writes:\n> I made a small half-empty table like this:\n> CREATE TABLE test_vacuum (i int PRIMARY KEY);\n> INSERT INTO test_vacuum (i) SELECT a FROM generate_series(1,500000) AS a;\n> DELETE FROM test_vacuum WHERE RANDOM() < 0.5;\n\n> and then ran:\n> VACUUM test_vacuum;\n\n> while an strace of the stats collector process was running. Then after\n> a few seconds, found the PID of the VACUUM process, and ran strace on\n> it. I killed them after the VACUUM finished. Outputs attached.\n\nHuh. The VACUUM strace clearly shows a boatload of TABPURGE messages\nbeing sent:\n\nsendto(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0\\324\\206<\\24\\321uC\\24\\320\\350)\\24\\225\\345,\\24\"..., 1000, 0, NULL, 0) = 1000\nsendto(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0C\\274?\\24\\365\\323?\\24\\241N@\\24\\217\\0309\\24\"..., 1000, 0, NULL, 0) = 1000\nsendto(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0\\375Z2\\24\\211\\f@\\0241\\3047\\24\\357mH\\24\"..., 1000, 0, NULL, 0) = 1000\nsendto(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0\\242\\3529\\24\\234K\\'\\24\\17\\227)\\24\\300\\22+\\24\"..., 1000, 0, NULL, 0) = 1000\n\nand the stats collector is receiving them:\n\nrecvfrom(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0\\324\\206<\\24\\321uC\\24\\320\\350)\\24\\225\\345,\\24\"..., 1000, 0, NULL, NULL) = 1000\nrecvfrom(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0C\\274?\\24\\365\\323?\\24\\241N@\\24\\217\\0309\\24\"..., 1000, 0, NULL, NULL) = 1000\nrecvfrom(7, \"\\2\\0\\0\\0\\350\\3\\0\\0\\5@\\0\\0\\366\\0\\0\\0\\375Z2\\24\\211\\f@\\0241\\3047\\24\\357mH\\24\"..., 1000, 0, NULL, NULL) = 1000\n\nSo this *should* have resulted in the stats file shrinking. 
Did you\nhappen to notice if it did, after you did this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 12:25:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "On Fri, Apr 16, 2010 at 11:41 AM, Tom Lane <[email protected]> wrote:\n> Wow.  Well, we have a smoking gun here: for some reason, autovacuum\n> isn't running, or isn't doing its job if it is.  If it's not running\n> at all, that would explain failure to prune the stats collector's file\n> too.\n\nHrm, well autovacuum is at least trying to do work: it's currently\nstuck on those bloated pg_catalog tables, of course. Another developer\nkilled an autovacuum of pg_attribute (or maybe it was pg_attrdef)\nafter it had been running for two weeks. See current pg_stat_activity\noutput attached, which shows the three autovacuum workers running plus\ntwo manual VACUUM ANALYZEs I started yesterday.\n\n> Is there anything in the postmaster log that would suggest autovac\n> difficulties?\n\nYup, there are logs from April 1st which I just grepped through. I\nattached the redacted output, and I see a few warnings about \"[table]\ncontains more than \"max_fsm_pages\" pages with useful free space\", as\nwell as \"ERROR: canceling autovacuum task\".\n\nPerhaps bumping up max_fsm_pages and making autovacuum settings more\naggressive will help me? I was also planning to run a CLUSTER of those\nfour bloated pg_catalog tables -- is this safe, particularly for\ntables like pg_attrdef which rely on OIDs?\n\nJosh", "msg_date": "Fri, 16 Apr 2010 12:31:51 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "I wrote:\n> So this *should* have resulted in the stats file shrinking. Did you\n> happen to notice if it did, after you did this?\n\nOh, never mind that --- I can see that it did shrink, just from counting\nthe write() calls in the collector's strace. So what we have here is a\ndemonstration that the tabpurge mechanism does work for you, when it's\ninvoked. Which is further evidence that for some reason autovacuum is\nnot running for you.\n\nWhat I'd suggest at this point is cranking up log_min_messages to DEBUG2\nor so in postgresql.conf, restarting the postmaster, and keeping an eye\non the log to see if you can spot anything about why autovac isn't\nworking.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 12:39:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "Josh Kupershmidt <[email protected]> writes:\n> On Fri, Apr 16, 2010 at 11:41 AM, Tom Lane <[email protected]> wrote:\n>> Wow. �Well, we have a smoking gun here: for some reason, autovacuum\n>> isn't running, or isn't doing its job if it is. �If it's not running\n>> at all, that would explain failure to prune the stats collector's file\n>> too.\n\n> Hrm, well autovacuum is at least trying to do work: it's currently\n> stuck on those bloated pg_catalog tables, of course. Another developer\n> killed an autovacuum of pg_attribute (or maybe it was pg_attrdef)\n> after it had been running for two weeks. See current pg_stat_activity\n> output attached, which shows the three autovacuum workers running plus\n> two manual VACUUM ANALYZEs I started yesterday.\n\nTwo weeks? 
What have you got the autovacuum cost delays set to?\n\nOnce you're up to three AV workers, no new ones can get launched until\none of those finishes or is killed. So that would explain failure to\nprune the stats collector's tables (the tabpurge code is only run during\nAV worker launch). So what we need to figure out is why it's taking so\nobscenely long to vacuum these tables ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 12:48:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "\nOn Apr 16, 2010, at 9:48 AM, Tom Lane wrote:\n\n> Josh Kupershmidt <[email protected]> writes:\n>> On Fri, Apr 16, 2010 at 11:41 AM, Tom Lane <[email protected]> wrote:\n>>> Wow. Well, we have a smoking gun here: for some reason, autovacuum\n>>> isn't running, or isn't doing its job if it is. If it's not running\n>>> at all, that would explain failure to prune the stats collector's file\n>>> too.\n> \n>> Hrm, well autovacuum is at least trying to do work: it's currently\n>> stuck on those bloated pg_catalog tables, of course. Another developer\n>> killed an autovacuum of pg_attribute (or maybe it was pg_attrdef)\n>> after it had been running for two weeks. See current pg_stat_activity\n>> output attached, which shows the three autovacuum workers running plus\n>> two manual VACUUM ANALYZEs I started yesterday.\n> \n> Two weeks? What have you got the autovacuum cost delays set to?\n> \n> Once you're up to three AV workers, no new ones can get launched until\n> one of those finishes or is killed. So that would explain failure to\n> prune the stats collector's tables (the tabpurge code is only run during\n> AV worker launch). So what we need to figure out is why it's taking so\n> obscenely long to vacuum these tables ...\n> \n\nOn any large system with good I/O I have had to significantly increase the aggressiveness of autovacuum.\nEven with the below settings, it doesn't interfere with other activity (~2200iops random, ~900MB/sec sequential capable I/O).\n\nMy relevant autovacuum parameters are (from 'show *'):\n autovacuum | on | Starts the autovacuum subprocess.\n autovacuum_analyze_scale_factor | 0.1 | Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples.\n autovacuum_analyze_threshold | 50 | Minimum number of tuple inserts, updates or deletes prior to analyze.\n autovacuum_freeze_max_age | 200000000 | Age at which to autovacuum a table to prevent transaction ID wraparound.\n autovacuum_max_workers | 3 | Sets the maximum number of simultaneously running autovacuum worker processes.\n autovacuum_naptime | 1min | Time to sleep between autovacuum runs.\n autovacuum_vacuum_cost_delay | 20ms | Vacuum cost delay in milliseconds, for autovacuum.\n autovacuum_vacuum_cost_limit | 2000 | Vacuum cost amount available before napping, for autovacuum.\n autovacuum_vacuum_scale_factor | 0.2 | Number of tuple updates or deletes prior to vacuum as a fraction of reltuples.\n autovacuum_vacuum_threshold | 50 \n\n\n\n\nFor what it is worth, I just went onto one of my systems -- one with lots of partition tables and temp table creation/destruction -- and looked at the system tables in question there.\n\nPostgres 8.4, using dt+ (trimmed result below to interesting tables)\n\n Schema | Name | Type | Owner | Size | Description \n------------+-------------------------+-------+----------+------------+-------------\n pg_catalog | pg_attrdef | table | postgres | 195 MB | \n 
pg_catalog | pg_attribute | table | postgres | 1447 MB | \n pg_catalog | pg_class | table | postgres | 1694 MB | \n pg_catalog | pg_constraint | table | postgres | 118 MB | \n pg_catalog | pg_depend | table | postgres | 195 MB | \n pg_catalog | pg_statistic | table | postgres | 2300 MB | \n pg_catalog | pg_type | table | postgres | 181 MB | \n\n\nSo, I did a vacuum full; reindex table; analyze; sequence on each of these. I wish I could just CLUSTER them but the above works.\n\nnow the tables are:\n Schema | Name | Type | Owner | Size | Description \n------------+-------------------------+-------+----------+------------+-------------\n pg_catalog | pg_attrdef | table | postgres | 44 MB | \n pg_catalog | pg_attribute | table | postgres | 364 MB | \n pg_catalog | pg_class | table | postgres | 1694 MB | \n pg_catalog | pg_constraint | table | postgres | 118 MB | \n pg_catalog | pg_depend | table | postgres | 195 MB | \n pg_catalog | pg_statistic | table | postgres | 656 MB | \n pg_catalog | pg_type | table | postgres | 45 MB | \n\n\nI've learned to accept about 50% bloat (2x the compacted size) in postgres as just the way it usually is on a busy table, but the 3x and 4x bloat of statistic, attrdef, and attribute have me wondering.\n\nI have had some 'idle in transaction' connections hanging out from time to time that have caused issues on this machine that could explain the above perma-bloat. That is one thing that could affect the case reported here as well. The worst thing about those, is you can't even force kill those connections from within postgres (pg_cancel_backend doesn't work on them, and killing them via the OS bounces postgres ...) so you have to hunt down the offending client.\n\n\n> \t\t\tregards, tom lane\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 16 Apr 2010 10:20:54 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "> \n> I have had some 'idle in transaction' connections hanging out from time to time that have caused issues on this machine that could explain the above perma-bloat. That is one thing that could affect the case reported here as well. The worst thing about those, is you can't even force kill those connections from within postgres (pg_cancel_backend doesn't work on them, and killing them via the OS bounces postgres ...) so you have to hunt down the offending client.\n> \n\nOoh, I just noticed pg_terminate_backend() ... maybe this will let me kill annoying idle in transaction clients. I guess this arrived in 8.4? 
Hopefully this won't cause the whole thing to bounce and close all other backends....\n\n\n> \n>> \t\t\tregards, tom lane\n>> \n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 16 Apr 2010 10:24:08 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "On Fri, Apr 16, 2010 at 12:48 PM, Tom Lane <[email protected]> wrote:\n> Josh Kupershmidt <[email protected]> writes:\n>> Hrm, well autovacuum is at least trying to do work: it's currently\n>> stuck on those bloated pg_catalog tables, of course. Another developer\n>> killed an autovacuum of pg_attribute (or maybe it was pg_attrdef)\n>> after it had been running for two weeks. See current pg_stat_activity\n>> output attached, which shows the three autovacuum workers running plus\n>> two manual VACUUM ANALYZEs I started yesterday.\n>\n> Two weeks?  What have you got the autovacuum cost delays set to?\n\nSELECT name, current_setting(name), source FROM pg_settings WHERE\nsource != 'default' AND name ILIKE '%vacuum%';\n name | current_setting | source\n----------------------+-----------------+--------------------\n vacuum_cost_delay | 200ms | configuration file\n vacuum_cost_limit | 100 | configuration file\n vacuum_cost_page_hit | 6 | configuration file\n(3 rows)\n\nI'm guessing these values and the default autovacuum configuration\nvalues need to be cranked significantly to make vacuum much more\naggressive :-(\n\n> Once you're up to three AV workers, no new ones can get launched until\n> one of those finishes or is killed.  So that would explain failure to\n> prune the stats collector's tables (the tabpurge code is only run during\n> AV worker launch).  So what we need to figure out is why it's taking so\n> obscenely long to vacuum these tables ...\n>\n\nHopefully changing those three vacuum_cost_* params will speed up the\nmanual- and auto-vacuums.. it'll take me a few days to see any\nresults, since I still need to do something about the bloat that's\nalready there.\n\nJosh\n", "msg_date": "Fri, 16 Apr 2010 13:43:17 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Josh Kupershmidt wrote:\n> SELECT name, current_setting(name), source FROM pg_settings WHERE\n> source != 'default' AND name ILIKE '%vacuum%';\n> name | current_setting | source\n> ----------------------+-----------------+--------------------\n> vacuum_cost_delay | 200ms | configuration file\n> vacuum_cost_limit | 100 | configuration file\n> vacuum_cost_page_hit | 6 | configuration file\n>\n> \n> Hopefully changing those three vacuum_cost_* params will speed up the\n> manual- and auto-vacuums..\n\nThose only impact manual VACUUM statements. There's a different set \nwith names like autovacuum_vacuum_cost_delay that control the daemon. \nYou can set those to \"-1\" in order to match the regular VACUUM, but \nthat's not the default.\n\nYou really need to sort out the max_fsm_pages setting too, because until \nthat issue goes away these tables are unlikely to ever stop growing. 
\nAnd, no, you can't use CLUSTER on the system tables to clean those up.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 16 Apr 2010 14:14:53 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "On Fri, Apr 16, 2010 at 2:14 PM, Greg Smith <[email protected]> wrote:\n> Josh Kupershmidt wrote:\n>>\n>> SELECT name, current_setting(name), source FROM pg_settings WHERE\n>> source != 'default' AND name ILIKE '%vacuum%';\n>>         name         | current_setting |       source\n>> ----------------------+-----------------+--------------------\n>>  vacuum_cost_delay    | 200ms           | configuration file\n>>  vacuum_cost_limit    | 100             | configuration file\n>>  vacuum_cost_page_hit | 6               | configuration file\n>>\n>>  Hopefully changing those three vacuum_cost_* params will speed up the\n>> manual- and auto-vacuums..\n>\n> Those only impact manual VACUUM statements.  There's a different set with\n> names like autovacuum_vacuum_cost_delay that control the daemon.  You can\n> set those to \"-1\" in order to match the regular VACUUM, but that's not the\n> default.\n\nIt looks like the default which I have of autovacuum_vacuum_cost_limit\n= -1, which means it's inheriting the vacuum_cost_limit of 100 I had\nset. I'll try bumping vacuum_cost_limit up to 1000 or so.\n\n> You really need to sort out the max_fsm_pages setting too, because until\n> that issue goes away these tables are unlikely to ever stop growing.  And,\n> no, you can't use CLUSTER on the system tables to clean those up.\n\nI have max_fsm_pages = 524288 , but from the hints in the logfiles\nthis obviously needs to go up much higher. And it seems the only way\nto compact the pg_catalog tables is VACUUM FULL + REINDEX on 8.3 -- I\nhad tried the CLUSTER on my 9.0 machine and wrongly assumed it would\nwork on 8.3, too.\n\nJosh\n", "msg_date": "Fri, 16 Apr 2010 14:35:56 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Josh Kupershmidt wrote:\n> And it seems the only way\n> to compact the pg_catalog tables is VACUUM FULL + REINDEX on 8.3 -- I\n> had tried the CLUSTER on my 9.0 machine and wrongly assumed it would\n> work on 8.3, too.\n> \n\nRight; that just got implemented a couple of months ago. See the news \nfrom http://www.postgresql.org/community/weeklynews/pwn20100214 for a \nsummary of how the code was gyrated around to support that. This is a \ntough situation to get out of in <9.0 because VACUUM FULL is slow and \ntakes an exclusive lock on the table. 
That tends to lead toward an \nunpredictable window for required downtime, which is never good.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 16 Apr 2010 14:53:10 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" }, { "msg_contents": "Josh Kupershmidt <[email protected]> writes:\n> name | current_setting | source\n> ----------------------+-----------------+--------------------\n> vacuum_cost_delay | 200ms | configuration file\n> vacuum_cost_limit | 100 | configuration file\n> vacuum_cost_page_hit | 6 | configuration file\n> \n> It looks like the default which I have of autovacuum_vacuum_cost_limit\n> = -1, which means it's inheriting the vacuum_cost_limit of 100 I had\n> set. I'll try bumping vacuum_cost_limit up to 1000 or so.\n\nActually I think the main problem is that cost_delay value, which is\nprobably an order of magnitude too high. The way to limit vacuum's\nI/O impact on other stuff is to make it take frequent short delays,\nnot have it run full speed and then sleep a long time. In any case,\nyour current settings have got it sleeping way too much. Two WEEKS !!!??\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 15:22:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO " }, { "msg_contents": "On Fri, Apr 16, 2010 at 3:22 PM, Tom Lane <[email protected]> wrote:\n> Josh Kupershmidt <[email protected]> writes:\n>>         name         | current_setting |       source\n>> ----------------------+-----------------+--------------------\n>>  vacuum_cost_delay    | 200ms           | configuration file\n>>  vacuum_cost_limit    | 100             | configuration file\n>>  vacuum_cost_page_hit | 6               | configuration file\n>>\n>> It looks like the default which I have of autovacuum_vacuum_cost_limit\n>> = -1, which means it's inheriting the vacuum_cost_limit of 100 I had\n>> set. I'll try bumping vacuum_cost_limit up to 1000 or so.\n>\n> Actually I think the main problem is that cost_delay value, which is\n> probably an order of magnitude too high.  The way to limit vacuum's\n> I/O impact on other stuff is to make it take frequent short delays,\n> not have it run full speed and then sleep a long time.  In any case,\n> your current settings have got it sleeping way too much.  Two WEEKS !!!??\n\nYup, I was going to turn vacuum_cost_delay down to 20. The two weeks\nwas for the pg_catalog table which has bloated to 145 GB, I think. One\nof those manual VACUUMs I kicked off just finished, after 48 hours --\nand that table was only 25 GB or so. I wasn't the one who set up this\npostgresql.conf, but I am stuck fixing things :/\n", "msg_date": "Fri, 16 Apr 2010 15:40:51 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats collector suddenly causing lots of IO" } ]
[ { "msg_contents": "Hi foilks\n\nI am using PG 8.3 from Java. I am considering a performance tweak which will\ninvolve holding about 150 java.sql.PreparedStatment objects open against a\nsingle PGSQL connection. Is this safe?\n\nI know that MySQL does not support prepared statements *per se*, and so\ntheir implementation of PreparedStatement is nothing more than some\nclient-side convenience code that knows how to escape and format constants\nfor you. Is this the case for PG, or does the PG JDBC driver do the real\nthing? I'm assuming if it's just a client side constant escaper that there\nwon't be an issue.\n\nCheers\nDave\n\nHi foilksI am using PG 8.3 from Java. I am considering a performance tweak which will involve holding about 150 java.sql.PreparedStatment objects open against a single PGSQL connection. Is this safe?I know that MySQL does not support prepared statements per se, and so their implementation of PreparedStatement is nothing more than some client-side convenience code that knows how to escape and format constants for you. Is this the case for PG, or does the PG JDBC driver do the real thing? I'm assuming if it's just a client side constant escaper that there won't be an issue.\nCheersDave", "msg_date": "Wed, 14 Apr 2010 15:49:16 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "JDBC question for PG 8.3.9" }, { "msg_contents": "On 15/04/10 04:49, Dave Crooke wrote:\n> Hi foilks\n>\n> I am using PG 8.3 from Java. I am considering a performance tweak which\n> will involve holding about 150 java.sql.PreparedStatment objects open\n> against a single PGSQL connection. Is this safe?\n>\n> I know that MySQL does not support prepared statements /per se/, and so\n> their implementation of PreparedStatement is nothing more than some\n> client-side convenience code that knows how to escape and format\n> constants for you. Is this the case for PG, or does the PG JDBC driver\n> do the real thing?\n\nPg supports real server-side prepared statements, as does the JDBC driver.\n\nIIRC (and I can't say this with 100% certainty without checking the \nsources or a good look at TFM) the PostgreSQL JDBC driver initially does \nonly a client-side prepare. However, if the PreparedStatement is re-used \nmore than a certain number of times (five by default?) it switches to \nserver-side prepared statements.\n\nThis has actually caused a bunch of performance complaints on the jdbc \nlist, because the query plan may change at that switch-over point, since \nwith a server-side prepared statement Pg no longer has a specific value \nfor each parameter and may pick a more generic plan.\n\nAgain only IIRC there's a configurable threshold for prepared statement \nswitch-over. I thought all this was in the PgJDBC documentation and/or \njavadoc - if it's not, it needs to be.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 15 Apr 2010 07:10:28 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JDBC question for PG 8.3.9" }, { "msg_contents": "Mine is a single record INSERT, so no issues with plans :-) Little Java ETL\njob.\n\nIs there any setting I'd need to tweak assuming I'm using 150-200 of these\nat once?\n\nCheers\nDave\n\nOn Wed, Apr 14, 2010 at 6:10 PM, Craig Ringer\n<[email protected]>wrote:\n\n> On 15/04/10 04:49, Dave Crooke wrote:\n>\n>> Hi foilks\n>>\n>> I am using PG 8.3 from Java. 
I am considering a performance tweak which\n>> will involve holding about 150 java.sql.PreparedStatment objects open\n>> against a single PGSQL connection. Is this safe?\n>>\n>> I know that MySQL does not support prepared statements /per se/, and so\n>> their implementation of PreparedStatement is nothing more than some\n>> client-side convenience code that knows how to escape and format\n>> constants for you. Is this the case for PG, or does the PG JDBC driver\n>> do the real thing?\n>>\n>\n> Pg supports real server-side prepared statements, as does the JDBC driver.\n>\n> IIRC (and I can't say this with 100% certainty without checking the sources\n> or a good look at TFM) the PostgreSQL JDBC driver initially does only a\n> client-side prepare. However, if the PreparedStatement is re-used more than\n> a certain number of times (five by default?) it switches to server-side\n> prepared statements.\n>\n> This has actually caused a bunch of performance complaints on the jdbc\n> list, because the query plan may change at that switch-over point, since\n> with a server-side prepared statement Pg no longer has a specific value for\n> each parameter and may pick a more generic plan.\n>\n> Again only IIRC there's a configurable threshold for prepared statement\n> switch-over. I thought all this was in the PgJDBC documentation and/or\n> javadoc - if it's not, it needs to be.\n>\n> --\n> Craig Ringer\n>\n\nMine is a single record INSERT, so no issues with plans :-) Little Java ETL job.Is there any setting I'd need to tweak assuming I'm using 150-200 of these at once?CheersDave\nOn Wed, Apr 14, 2010 at 6:10 PM, Craig Ringer <[email protected]> wrote:\nOn 15/04/10 04:49, Dave Crooke wrote:\n\nHi foilks\n\nI am using PG 8.3 from Java. I am considering a performance tweak which\nwill involve holding about 150 java.sql.PreparedStatment objects open\nagainst a single PGSQL connection. Is this safe?\n\nI know that MySQL does not support prepared statements /per se/, and so\ntheir implementation of PreparedStatement is nothing more than some\nclient-side convenience code that knows how to escape and format\nconstants for you. Is this the case for PG, or does the PG JDBC driver\ndo the real thing?\n\n\nPg supports real server-side prepared statements, as does the JDBC driver.\n\nIIRC (and I can't say this with 100% certainty without checking the sources or a good look at TFM) the PostgreSQL JDBC driver initially does only a client-side prepare. However, if the PreparedStatement is re-used more than a certain number of times (five by default?) it switches to server-side prepared statements.\n\nThis has actually caused a bunch of performance complaints on the jdbc list, because the query plan may change at that switch-over point, since with a server-side prepared statement Pg no longer has a specific value for each parameter and may pick a more generic plan.\n\nAgain only IIRC there's a configurable threshold for prepared statement switch-over. I thought all this was in the PgJDBC documentation and/or javadoc - if it's not, it needs to be.\n\n--\nCraig Ringer", "msg_date": "Wed, 14 Apr 2010 22:03:09 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JDBC question for PG 8.3.9" }, { "msg_contents": "On Wed, Apr 14, 2010 at 7:10 PM, Craig Ringer\n<[email protected]> wrote:\n> On 15/04/10 04:49, Dave Crooke wrote:\n>>\n>> Hi foilks\n>>\n>> I am using PG 8.3 from Java. 
I am considering a performance tweak which\n>> will involve holding about 150 java.sql.PreparedStatment objects open\n>> against a single PGSQL connection. Is this safe?\n>>\n>> I know that MySQL does not support prepared statements /per se/, and so\n>> their implementation of PreparedStatement is nothing more than some\n>> client-side convenience code that knows how to escape and format\n>> constants for you. Is this the case for PG, or does the PG JDBC driver\n>> do the real thing?\n>\n> Pg supports real server-side prepared statements, as does the JDBC driver.\n>\n> IIRC (and I can't say this with 100% certainty without checking the sources\n> or a good look at TFM) the PostgreSQL JDBC driver initially does only a\n> client-side prepare. However, if the PreparedStatement is re-used more than\n> a certain number of times (five by default?) it switches to server-side\n> prepared statements.\n>\nThis is partially true. The driver uses an unnamed prepared statement\non the server.\n\n> This has actually caused a bunch of performance complaints on the jdbc list,\n> because the query plan may change at that switch-over point, since with a\n> server-side prepared statement Pg no longer has a specific value for each\n> parameter and may pick a more generic plan.\n\nThis is a limitation of the server, not the driver\n", "msg_date": "Thu, 15 Apr 2010 06:59:22 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JDBC question for PG 8.3.9" } ]
[ { "msg_contents": "Hello,\n\nI am struggling to understand why for certain criteria that i supply for a\nquery alters the the query plan. In my \"good\" case i can see that an index\nis used, in my bad case where i only change the text value of the criteria,\nbut not the criteria itslef (ie change/add the conditions) a hbitmap heap\nscan of the table is performed.\n\nRefer attached Good/Bad query plans.\n\nThe basic query is:\n\nSELECT * FROM highrate_log_entry\nWHERE\ntest_seq_number > 26668670\nand udc = '2424'\nAND (test_signal_number = 'D2030'\n)\nORDER BY test_seq_number LIMIT 11\n\ntest_seq_number is the pk and is generated by a sequence.\n\nThe D2030 is the only thing that i vary between good/bad runs. The issue is\npossibly related to the data spead is for the test-signal_number is not\nuniform, but there does not appear to be that much difference in difference\nbetween the first sequence number and the last sequence number (to achieve\nthe 11 results), when compared between the test_seq_number that yield good\nor bad results.\n\nI dont believe that the issue is to do with re-writing the query, but how\nthe planner chooses its path.\n\nI am using Postgres 8.4 on windows with default postgres.conf. I have tried\nchanging(increasing) shared_buffers, work_mem and effective_cache_size\nwithout success.\n\nAny suggestions would be appreciated.\n\nThanks\n\nJason", "msg_date": "Thu, 15 Apr 2010 11:15:45 +0930", "msg_from": "JmH <[email protected]>", "msg_from_op": true, "msg_subject": "Good/Bad query plans based on text criteria" }, { "msg_contents": "JmH <[email protected]> writes:\n> I am struggling to understand why for certain criteria that i supply for a\n> query alters the the query plan. In my \"good\" case i can see that an index\n> is used, in my bad case where i only change the text value of the criteria,\n> but not the criteria itslef (ie change/add the conditions) a hbitmap heap\n> scan of the table is performed.\n\nI think you're jumping to conclusions. The second plan is processing\nabout 100 times as many rows, because the WHERE conditions are much less\nselective. A change in plan is entirely appropriate.\n\nIt might be that you need to change planner parameters (particularly\nrandom_page_cost/seq_page_cost) to more nearly approximate the operating\nconditions of your database, but I'd recommend being very cautious about\ndoing so on the basis of a small number of example queries. In\nparticular it's easy to fall into the trap of optimizing for\nfully-cached scenarios because repeatedly trying the same example\nresults in touching only already-cached data --- but that might or might\nnot be reflective of your whole workload.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Apr 2010 22:42:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good/Bad query plans based on text criteria " } ]
[ { "msg_contents": "Hi,\n\nwe are seeing latency spikes in the 2-3 second range (sometimes 8-10s) for \nqueries that usually take 3-4ms on our systems and I am running out of things to \ntry to get rid of them. Perhaps someone here has more ideas - here's a \ndescription of the systems and what I've tried with no impact at all:\n\n2 x 6-core Opterons (2431)\n32GB RAM\n2 SATA disks (WD1500HLFS) in software RAID-1\nLinux 2.6.26 64 bit (Debian kernel)\nPostgreSQL 8.3.9 (Debian package)\nFS mounted with option noatime\nvm.dirty_ratio = 80\n\n3 DB clusters, 2 of which are actively used, all on the same RAID-1 FS\nfsync=off\nshared_buffers=5GB (database size is ~4.7GB on disk right now)\ntemp_buffers=50MB\nwork_mem=500MB\nwal_buffers=256MB (*)\ncheckpoint_segments=256 (*)\ncommit_delay=100000 (*)\nautovacuum=off (*)\n\n(*) added while testing, no change w.r.t. the spikes seen at all\n\nThe databases have moderate read load (no burst load, typical web backend) and \nsomewhat regular write load (updates in batches, always single-row \nupdate/delete/inserts using the primary key, 90% updates, a few 100s to 1000s \nrows together, without explicit transactions/locking).\n\nThis is how long the queries take (seen from the client):\nThu Apr 15 18:16:14 CEST 2010 real 0m0.004s\nThu Apr 15 18:16:15 CEST 2010 real 0m0.004s\nThu Apr 15 18:16:16 CEST 2010 real 0m0.003s\nThu Apr 15 18:16:17 CEST 2010 real 0m0.005s\nThu Apr 15 18:16:18 CEST 2010 real 0m0.068s\nThu Apr 15 18:16:19 CEST 2010 real 0m0.004s\nThu Apr 15 18:16:20 CEST 2010 real 0m0.005s\nThu Apr 15 18:16:21 CEST 2010 real 0m0.235s\nThu Apr 15 18:16:22 CEST 2010 real 0m0.005s\nThu Apr 15 18:16:23 CEST 2010 real 0m3.006s <== !\nThu Apr 15 18:16:27 CEST 2010 real 0m0.004s\nThu Apr 15 18:16:28 CEST 2010 real 0m0.084s\nThu Apr 15 18:16:29 CEST 2010 real 0m0.003s\nThu Apr 15 18:16:30 CEST 2010 real 0m0.005s\nThu Apr 15 18:16:32 CEST 2010 real 0m0.038s\nThu Apr 15 18:16:33 CEST 2010 real 0m0.005s\nThu Apr 15 18:16:34 CEST 2010 real 0m0.005s\n\nThe spikes aren't periodic, i.e. not every 10,20,30 seconds or 5 minutes etc, \nthey seem completely random... 
PostgreSQL also reports (due to \nlog_min_duration_statement=1000) small bursts of queries that take much longer \nthan they should:\n\n[nothing for a few minutes]\n2010-04-15 16:50:03 CEST LOG: duration: 8995.934 ms statement: select ...\n2010-04-15 16:50:04 CEST LOG: duration: 3383.780 ms statement: select ...\n2010-04-15 16:50:04 CEST LOG: duration: 3328.523 ms statement: select ...\n2010-04-15 16:50:05 CEST LOG: duration: 1120.108 ms statement: select ...\n2010-04-15 16:50:05 CEST LOG: duration: 1079.879 ms statement: select ...\n[nothing for a few minutes]\n(explain analyze yields 5-17ms for the above queries)\n\nThings I've tried apart from the PostgreSQL parameters above:\n- switching from ext3 with default journal settings to data=writeback\n- switching to ext2\n- vm.dirty_background_ratio set to 1, 10, 20, 60\n- vm.dirty_expire_centisecs set to 3000 (default), 8640000 (1 day)\n- fsync on\n- some inofficial Debian 2.6.32 kernel and ext3 with data=writeback (because of \nhttp://lwn.net/Articles/328363/ although it seems to address fsync latency and \nnot read latency)\n- running irqbalance\n\nAll these had no visible impact on the latency spikes.\n\nI can also exclude faulty hardware with some certainty (since we have 12 \nidentical systems with this problem).\n\nI am suspecting some strange software RAID or kernel problem, unless the default \nbgwriter settings can actually cause selects to get stuck for so long when there \nare too many dirty buffers (I hope not). Unless I'm missing something, I only \nhave a non-RAID setup or ramdisks (tmpfs), or SSDs left to try to get rid of \nthese, so any suggestion will be greatly appreciated. Generally, I'd be very \ninterested in hearing how people tune their databases and their hardware/Linux \nfor consistently low query latency (esp. when everything should fit in memory).\n\nRegards,\n Marinos\n", "msg_date": "Thu, 15 Apr 2010 18:46:00 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "8.3.9 - latency spikes with Linux (and tuning for consistently low\n\tlatency)" }, { "msg_contents": "Marinos Yannikos <[email protected]> writes:\n> we are seeing latency spikes in the 2-3 second range (sometimes 8-10s) for \n> queries that usually take 3-4ms on our systems and I am running out of things to \n> try to get rid of them.\n\nHave you checked whether the spikes correlate with checkpoints? Turn\non log_checkpoints and watch for awhile. If so, fooling with the\ncheckpoint parameters might give some relief. However, 8.3 already has\nthe spread-checkpoint code so I'm not sure how much more win can be had\nthere.\n\nMore generally, you should watch vmstat/iostat output and see if you\ncan correlate the spikes with I/O activity, CPU peaks, etc.\n\nA different line of thought is that maybe the delays have to do with lock\ncontention --- log_lock_waits might help you identify that.\n\n> fsync=off\n\nThat's pretty scary.\n\n> work_mem=500MB\n\nYipes. I don't think you have enough RAM for that to be safe.\n\n> commit_delay=100000 (*)\n\nThis is probably not a good idea.\n\n> autovacuum=off (*)\n\nNor this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2010 13:38:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.3.9 - latency spikes with Linux (and tuning for consistently\n\tlow latency)" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Have you checked whether the spikes correlate with checkpoints? 
\n> Turn on log_checkpoints and watch for awhile. If so, fooling with\n> the checkpoint parameters might give some relief.\n \nIf that by itself doesn't do it, I've found that making the\nbackground writer more aggressive can help. We've had good luck\nwith:\n \nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4.0\n \nIf you still have checkpoint-related latency issues, you could try\nscaling back shared_buffers, letting the OS cache handle more of the\ndata.\n \nAlso, if you have a RAID controller with a battery-backed RAM cache,\nmake sure it is configured for write-back.\n \n-Kevin\n", "msg_date": "Thu, 15 Apr 2010 12:47:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.3.9 - latency spikes with Linux (and tuning\n\tfor consistently low latency)" }, { "msg_contents": "Marinos Yannikos wrote:\n> vm.dirty_ratio = 80\n\nThis is tuned the opposite direction of what you want. The default \ntuning in the generation of kernels you're using is:\n\n/proc/sys/vm/dirty_ratio = 10\n/proc/sys/vm/dirty_background_ratio = 5\n\nAnd those should be considered upper limits if you want to tune for latency.\n\nUnfortunately, even 5% will still allow 1.6GB of dirty data to queue up \nwithout being written given 32GB of RAM, which is still plenty to lead \nto a multi-second pause at times.\n\n> 3 DB clusters, 2 of which are actively used, all on the same \n> [software] RAID-1 FS\n\nSo your basic problem here is that you don't have enough disk I/O to \nsupport this load. You can tune it all day and that fundamental issue \nwill never go away. You'd need a battery-backed write controller \ncapable of hardware RAID to even have a shot at supporting a system with \nthis much RAM without long latency pauses. I'd normally break out the \nWAL onto a separate volume too.\n\n> [nothing for a few minutes]\n> 2010-04-15 16:50:03 CEST LOG: duration: 8995.934 ms statement: \n> select ...\n> 2010-04-15 16:50:04 CEST LOG: duration: 3383.780 ms statement: \n> select ...\n> 2010-04-15 16:50:04 CEST LOG: duration: 3328.523 ms statement: \n> select ...\n> 2010-04-15 16:50:05 CEST LOG: duration: 1120.108 ms statement: \n> select ...\n> 2010-04-15 16:50:05 CEST LOG: duration: 1079.879 ms statement: \n> select ...\n> [nothing for a few minutes]\n\nGuessing five minutes each time? You should turn on checkpoint_logs to \nbe sure, but I'd bet money that's the interval, and that these are \ncheckpoint spikes. If the checkpoing log shows up at about the same \ntime as all these queries that were blocking behind it, that's what \nyou've got.\n\n> shared_buffers=5GB (database size is ~4.7GB on disk right now)\n\nThe best shot you have at making this problem a little better just with \nsoftware tuning is to reduce this to something much smaller; 128MB - \n256MB would be my starting suggestion. Make sure checkpoint_segments is \nstill set to a high value.\n\nThe other thing you could try is to tune like this:\n\ncheckpoint_segments=256MB\ncheckpoint_timeout=20min\n\nWhich would get you 4X as much checkpoint spreading as you have now.\n\n> fsync=off\n\nThis is just generally a bad idea.\n\n> work_mem=500MB\n> wal_buffers=256MB (*)\n> commit_delay=100000 (*)\n\nThat's way too big a value for work_mem; there's no sense making \nwal_buffers bigger than 16MB; and you shouldn't ever adjust \ncommit_delay. It's a mostly broken feature that might even introduce \nlatency issues in your situation. 
None of these are likely related to \nyour problem today though.\n\n> I am suspecting some strange software RAID or kernel problem, unless \n> the default bgwriter settings can actually cause selects to get stuck \n> for so long when there are too many dirty buffers (I hope not).\n\nThis fairly simple: your kernel is configured to allow the system to \ncache hundreds of megabytes, if not gigabytes, of writes. There is no \nway to make that go completely away because the Linux kernel has an \nunfortunate design in terms of being low latency. I've written two \npapers in this area:\n\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nAnd I doubt I could get the worst case on these tuned down to under a \nsecond using software RAID without a proper disk controller. \nPeriodically, the database must get everything in RAM flushed out to \ndisk, and the only way to make that happen instantly is for there to be \na hardware write cache to dump it into, and the most common way to get \none of those is to buy a hardware RAID card.\n\n> Unless I'm missing something, I only have a non-RAID setup or ramdisks \n> (tmpfs), or SSDs left to try to get rid of these\n\nBattery-backed write caching controller, and then re-tune afterwards. \nNothing else will improve your situation very much. SSDs have their own \nissues under heavy writes and the RAID has nothing to do with your \nproblem. If this is disposable data and you can run from a RAM disk, \nnow that would work, but now you've got some serious work to do in order \nto make that persistent.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 15 Apr 2010 18:45:03 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.3.9 - latency spikes with Linux (and tuning for consistently\n\tlow latency)" } ]
[ { "msg_contents": "Hey folks\n\nI am trying to do a full table scan on a large table from Java, using a\nstraightforward \"select * from foo\". I've run into these problems:\n\n1. By default, the PG JDBC driver attempts to suck the entire result set\ninto RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool, in\nfact I consider it a serious bug (even MySQL gets this right ;-) I am only\ntesting with a 9GB result set, but production needs to scale to 200GB or\nmore, so throwing hardware at is is not feasible.\n\n2. I tried using the official taming method, namely *\njava.sql.Statement.setFetchSize(1000)* and this makes it blow up entirely\nwith an error I have no context for, as follows (the number C_10 varies,\ne.g. C_12 last time) ...\n\norg.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n at\norg.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n at\norg.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n at\norg.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n at\norg.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n\nThis is definitely a bug :-)\n\n\nIs there a known workaround for this ... will updating to a newer version of\nthe driver fix this?\n\nIs there a magic incation of JDBC calls that will tame it?\n\nCan I cast the objects to PG specific types and access a hidden API to turn\noff this behaviour?\n\nIf the only workaround is to explicitly create a cursor in PG, is there a\ngood example of how to do this from Java?\n\nCheers\nDave\n\nHey folksI am trying to do a full table scan on a large table from Java, using a straightforward \"select * from foo\". I've run into these problems:1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at is is not feasible.\n2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ... \norg.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n    at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)This is definitely a bug :-)\nIs there a known workaround for this ... will updating to a newer version of the driver fix this? Is there a magic incation of JDBC calls that will tame it?Can I cast the objects to PG specific types and access a hidden API to turn off this behaviour?\nIf the only workaround is to explicitly create a cursor in PG, is there a good example of how to do this from Java?CheersDave", "msg_date": "Thu, 15 Apr 2010 14:42:51 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "HELP: How to tame the 8.3.x JDBC driver with a biq guery result set" }, { "msg_contents": "I have followed the instructions below to no avail .... 
any thoughts?\n\nhttp://jdbc.postgresql.org/documentation/83/query.html#query-with-cursor\n\nThis is what happens when I reduce the fetch_size to 50 ... stops after\nabout 950msec and 120 fetches (6k rows) ....\n\n13:59:56,054 [PerfDataMigrator] ERROR\ncom.hyper9.storage.sample.persistence.PersistenceManager:3216 - Unexpected\nerror while migrating sample data: 6000\norg.postgresql.util.PSQLException: ERROR: portal \"C_14\" does not exist\n at\norg.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n at\norg.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n at\norg.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n at\norg.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n at\norg.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)\n at\norg.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)\n at\ncom.hyper9.storage.sample.persistence.PersistenceManager$Migrator.run(PersistenceManager.java:3156)\n at java.lang.Thread.run(Thread.java:619)\n\n\nCheers\nDave\n\n\nOn Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n\n> Hey folks\n>\n> I am trying to do a full table scan on a large table from Java, using a\n> straightforward \"select * from foo\". I've run into these problems:\n>\n> 1. By default, the PG JDBC driver attempts to suck the entire result set\n> into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool,\n> in fact I consider it a serious bug (even MySQL gets this right ;-) I am\n> only testing with a 9GB result set, but production needs to scale to 200GB\n> or more, so throwing hardware at is is not feasible.\n>\n> 2. I tried using the official taming method, namely *\n> java.sql.Statement.setFetchSize(1000)* and this makes it blow up entirely\n> with an error I have no context for, as follows (the number C_10 varies,\n> e.g. C_12 last time) ...\n>\n> org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n> at\n> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n> at\n> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n> at\n> org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n> at\n> org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n>\n> This is definitely a bug :-)\n>\n>\n> Is there a known workaround for this ... will updating to a newer version\n> of the driver fix this?\n>\n> Is there a magic incation of JDBC calls that will tame it?\n>\n> Can I cast the objects to PG specific types and access a hidden API to turn\n> off this behaviour?\n>\n> If the only workaround is to explicitly create a cursor in PG, is there a\n> good example of how to do this from Java?\n>\n> Cheers\n> Dave\n>\n>\n>\n>\n>\n>\n\nI have followed the instructions below to no avail .... any thoughts?http://jdbc.postgresql.org/documentation/83/query.html#query-with-cursor\nThis is what happens when I reduce the fetch_size to 50 ... 
stops after about 950msec and 120 fetches (6k rows) ....13:59:56,054 [PerfDataMigrator] ERROR com.hyper9.storage.sample.persistence.PersistenceManager:3216 - Unexpected error while migrating sample data: 6000\norg.postgresql.util.PSQLException: ERROR: portal \"C_14\" does not exist    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n    at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)    at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)\n    at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)    at com.hyper9.storage.sample.persistence.PersistenceManager$Migrator.run(PersistenceManager.java:3156)    at java.lang.Thread.run(Thread.java:619)\nCheersDaveOn Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\nHey folksI am trying to do a full table scan on a large table from Java, using a straightforward \"select * from foo\". I've run into these problems:1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at is is not feasible.\n2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ... \norg.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n\n    at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)This is definitely a bug :-)\n\nIs there a known workaround for this ... will updating to a newer version of the driver fix this? Is there a magic incation of JDBC calls that will tame it?Can I cast the objects to PG specific types and access a hidden API to turn off this behaviour?\nIf the only workaround is to explicitly create a cursor in PG, is there a good example of how to do this from Java?CheersDave", "msg_date": "Thu, 15 Apr 2010 15:01:55 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery result\n\tset" }, { "msg_contents": "On Apr 15, 2010, at 1:01 PM, Dave Crooke wrote:\n> On Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n> Hey folks\n> \n> I am trying to do a full table scan on a large table from Java, using a straightforward \"select * from foo\". I've run into these problems:\n> \n> 1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... 
this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at is is not feasible.\n> \n\nFor scrolling large result sets you have to do the following to prevent it from loading the whole thing into memory:\n\n\nUse forward-only, read-only result scrolling and set the fetch size. Some of these may be the default depending on what the connection pool is doing, but if set otherwise it may cause the whole result set to load into memory. I regularly read several GB result sets with ~10K fetch size batches.\n\nSomething like:\nStatement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY)\nst.setFetchSize(FETCH_SIZE);\n\n\n\n> 2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ... \n> \n> org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n> at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n> at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n> at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n> at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n> \n> This is definitely a bug :-)\n> \n> \n\nI have no idea what that is.\n\n> Is there a known workaround for this ... will updating to a newer version of the driver fix this? \n> \n> Is there a magic incation of JDBC calls that will tame it?\n> \n> Can I cast the objects to PG specific types and access a hidden API to turn off this behaviour?\n> \n> If the only workaround is to explicitly create a cursor in PG, is there a good example of how to do this from Java?\n> \n> Cheers\n> Dave\n> \n> \n> \n> \n> \n> \n\n", "msg_date": "Mon, 19 Apr 2010 11:05:59 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: HELP: How to tame the 8.3.x JDBC driver with a\n\tbiq guery result \tset" } ]
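A note on what setFetchSize buys on the server side: with autocommit off, the driver pulls the result in batches through a backend portal, which behaves much like an SQL-level cursor. A rough SQL sketch of the same idea (illustration only, reusing the "foo" table from the example above, not the driver's literal protocol traffic):

BEGIN;
DECLARE big_scan NO SCROLL CURSOR FOR SELECT * FROM foo;
FETCH FORWARD 1000 FROM big_scan;   -- repeat until a fetch returns no rows
FETCH FORWARD 1000 FROM big_scan;
CLOSE big_scan;
COMMIT;

Like the driver's portal, a cursor declared this way only lives as long as the surrounding transaction, which is why anything that ends the transaction mid-scan produces errors like the "portal ... does not exist" one quoted above.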
[ { "msg_contents": "All,\n\nWe're having a very strange problem where autovacuum does not complete\non a Postgres 8.3.8/Solaris 5.10 system. The reason I say strange is:\nthis is one of a twin pair of identical systems,and the other system\ndoes not have this issue.\n\nBasically, vacuuming of a table which normally takes about 20 minutes\ninteractively with vacuum_cost_delay set to 20 had not completed after\n14 hours. When I trussed it, I saw activity which indicated to me that\nautovacuum was doing a pollsys, presumably for cost_limit, every data page.\n\nAutovacuum was running with vacuum_cost_limit = 200 and\nautovacuum_vacuum_cost_delay = 20, which I believe is the default for 8.3.\n\nTruss output:\n\npollsys(0xFFFFFD7FFFDF83E0, 0, 0xFFFFFD7FFFDF8470, 0x00000000) = 0\nlseek(4, 0x2AD9C000, SEEK_SET) = 0x2AD9C000\nwrite(4, \" L\\v\\0\\010F0F8 a01\\0\\0\\0\".., 8192) = 8192\nlseek(4, 0x2ADDC000, SEEK_SET) = 0x2ADDC000\nread(4, \" L\\v\\0\\0108FFD a01\\0\\0\\0\".., 8192) = 8192\npollsys(0xFFFFFD7FFFDF83E0, 0, 0xFFFFFD7FFFDF8470, 0x00000000) = 0\nlseek(4, 0x2AD9E000, SEEK_SET) = 0x2AD9E000\nwrite(4, \" L\\v\\0\\0 X15F9 a01\\0\\0\\0\".., 8192) = 8192\nlseek(4, 0x2ADDE000, SEEK_SET) = 0x2ADDE000\nread(4, \" L\\v\\0\\080B0FD a01\\0\\0\\0\".., 8192) = 8192\npollsys(0xFFFFFD7FFFDF83E0, 0, 0xFFFFFD7FFFDF8470, 0x00000000) = 0\nlseek(4, 0x2ADA0000, SEEK_SET) = 0x2ADA0000\nwrite(4, \" L\\v\\0\\0D0 6F9 a01\\0\\0\\0\".., 8192) = 8192\nlseek(4, 0x2ADE0000, SEEK_SET) = 0x2ADE0000\nread(4, \" L\\v\\0\\0F8D1FD a01\\0\\0\\0\".., 8192) = 8192\n\nNote that this is VERY different from the truss output for a manual\nvacuum on the same machine (although I think the above is an index and\nthe below is the main table):\n\npollsys(0xFFFFFD7FFFDF88C0, 0, 0xFFFFFD7FFFDF8950, 0x00000000) = 0\nread(14, \" (\\v\\0\\010\\v 19501\\001\\0\".., 8192) = 8192\nread(14, \" !\\v\\0\\0B8 qFF9701\\001\\0\".., 8192) = 8192\nread(14, \" -\\v\\0\\08895 WBC01\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0\\b I 19501\\001\\0\".., 8192) = 8192\nread(14, \" :\\v\\0\\0 ( ;BCCD01\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0 @89 19501\\001\\0\".., 8192) = 8192\nread(14, \" D\\v\\0\\0B0 7 e l01\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0B0C7 19501\\001\\0\".., 8192) = 8192\nread(14, \" -\\v\\0\\0B8 5 XBC01\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0 ( 3 29501\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0 X R 29501\\001\\0\".., 8192) = 8192\nread(14, \" :\\v\\0\\0 CEBCCD01\\001\\0\".., 8192) = 8192\nread(14, \" !\\v\\0\\0D8A0 9801\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0C0C6 29501\\001\\0\".., 8192) = 8192\nread(14, \"1C\\v\\0\\0D0 u g [01\\001\\0\".., 8192) = 8192\nread(14, \" !\\v\\0\\0A0 `81 [01\\001\\0\".., 8192) = 8192\nread(14, \" -\\v\\0\\0 0ED XBC01\\001\\0\".., 8192) = 8192\nread(14, \" 7\\v\\0\\0C8 UECD901\\001\\0\".., 8192) = 8192\nread(14, \"1A\\v\\0\\0107F W z01\\001\\0\".., 8192) = 8192\nread(14, \" !\\v\\0\\0 p ZB5A401\\001\\0\".., 8192) = 8192\nread(14, \" -\\v\\0\\0A0D5 YBC01\\001\\0\".., 8192) = 8192\nread(14, \" z\\v\\0\\0 81AFB9A01\\001\\0\".., 8192) = 8192\nread(14, \"1A\\v\\0\\080 { X z01\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\080 ] 39501\\001\\0\".., 8192) = 8192\nread(14, \" (\\v\\0\\0A8 | 39501\\001\\0\".., 8192) = 8192\nread(14, \" :\\v\\0\\0\\09ABDCD01\\001\\0\".., 8192) = 8192\n\nIdeas on where to look next, anyone?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 15 Apr 2010 12:44:31 -0700", "msg_from": "Josh Berkus 
<[email protected]>", "msg_from_op": true, "msg_subject": "Autovaccum with cost_delay does not complete on one solaris 5.10\n\tmachine" }, { "msg_contents": "Josh Berkus wrote:\n\n> Basically, vacuuming of a table which normally takes about 20 minutes\n> interactively with vacuum_cost_delay set to 20 had not completed after\n> 14 hours. When I trussed it, I saw activity which indicated to me that\n> autovacuum was doing a pollsys, presumably for cost_limit, every data page.\n> \n> Autovacuum was running with vacuum_cost_limit = 200 and\n> autovacuum_vacuum_cost_delay = 20, which I believe is the default for 8.3.\n> \n> Truss output:\n> \n> pollsys(0xFFFFFD7FFFDF83E0, 0, 0xFFFFFD7FFFDF8470, 0x00000000) = 0\n\nSo what is it polling? Please try \"truss -v pollsys\"; is there a way in\nSolaris to report what each file descriptor is pointing to? (In linux\nI'd look at /proc/<pid>/fd)\n\nWe don't call pollsys anywhere. Something in Solaris must be doing it\nunder the hood.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 15 Apr 2010 16:00:56 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> We don't call pollsys anywhere. Something in Solaris must be doing it\n> under the hood.\n\npg_usleep calls select(), and some googling indicates that select() is\nimplemented as pollsys() on recent Solaris versions. So Josh's\nassumption that those are delay calls seems plausible. But it shouldn't\nbe sleeping after each page with normal cost_delay parameters, should it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2010 16:14:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one solaris 5.10\n\tmachine" }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > We don't call pollsys anywhere. Something in Solaris must be doing it\n> > under the hood.\n> \n> pg_usleep calls select(), and some googling indicates that select() is\n> implemented as pollsys() on recent Solaris versions. So Josh's\n> assumption that those are delay calls seems plausible. But it shouldn't\n> be sleeping after each page with normal cost_delay parameters, should it?\n\nCertainly not ... The only explanation would be that the cost balance\ngets over the limit very frequently. So one of the params would have to\nbe abnormally high (vacuum_cost_page_hit, vacuum_cost_page_miss,\nvacuum_cost_page_dirty).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 15 Apr 2010 16:24:05 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" }, { "msg_contents": "\n>> pg_usleep calls select(), and some googling indicates that select() is\n>> implemented as pollsys() on recent Solaris versions. So Josh's\n>> assumption that those are delay calls seems plausible.\n\nIt's certainly the behavior I'm seeing otherwise. In \"normal\noperation\", the number of pages read between pollsys is consistent with\nthe vacuum delay settings.\n\n>> But it shouldn't\n>> be sleeping after each page with normal cost_delay parameters, should it?\n\nRight, that's why I find this puzzling. 
If the problem was easier to\nreproduce it would be easier to analyze.\n\n> Certainly not ... The only explanation would be that the cost balance\n> gets over the limit very frequently. So one of the params would have to\n> be abnormally high (vacuum_cost_page_hit, vacuum_cost_page_miss,\n> vacuum_cost_page_dirty).\n\nNope, all defaults. And, all identical to the other server which is\nbehaving normally.\n\nHonestly, I mostly posted this in hopes that some other Solaris user\nwould speak up to having seen the same thing.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 15 Apr 2010 15:17:22 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> But it shouldn't\n>> be sleeping after each page with normal cost_delay parameters, should it?\n\n> Right, that's why I find this puzzling. If the problem was easier to\n> reproduce it would be easier to analyze.\n\nThe behavior would be explained if VacuumCostLimit were getting set to\nzero (or some unreasonably small value) in the autovac worker process.\nI looked at the autovac code that manages that, and it seems complicated\nenough that a bug wouldn't surprise me in the least.\n\nI especially note that wi_cost_limit is explicitly initialized to zero,\nrather than something sane; and that table_recheck_autovac falls back to\nsetting vac_cost_limit from the previous value of VacuumCostLimit\n... which is NOT constant but in general is left over from the\npreviously processed table. One should also keep in mind that SIGHUP\nprocessing might reload VacuumCostLimit from GUC values. So I think\nthat area needs a closer look.\n\nJosh, are you sure that both servers are identical in terms of both\nGUC-related and per-table autovacuum settings?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Apr 2010 18:48:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one solaris 5.10\n\tmachine" }, { "msg_contents": "\n> Josh, are you sure that both servers are identical in terms of both\n> GUC-related and per-table autovacuum settings?\n\nI should check per-table. GUC, yes, because the company has source\nmanagement for config files.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 15 Apr 2010 15:52:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" }, { "msg_contents": "Tom,\n\nNeither database has and per-table autovacuum settings.\n\nHowever, since this is a production database, I had to try something, \nand set vacuum_cost_limit up to 1000. 
The issue with vacuuming one page \nat a time went away, or at least I have not seen it repeat in the last \n16 hours.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 16 Apr 2010 09:39:34 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" }, { "msg_contents": "Josh Berkus wrote:\n> Tom,\n> \n> Neither database has and per-table autovacuum settings.\n> \n> However, since this is a production database, I had to try\n> something, and set vacuum_cost_limit up to 1000. The issue with\n> vacuuming one page at a time went away, or at least I have not seen\n> it repeat in the last 16 hours.\n\nHow many autovac workers are there?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 16 Apr 2010 12:55:14 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" }, { "msg_contents": "\n> How many autovac workers are there?\n\nMax_workers is set to 3. However, I've never seen more than one active \nat a time.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 16 Apr 2010 09:59:16 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovaccum with cost_delay does not complete on one\n\tsolaris 5.10 machine" } ]
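A quick way to answer the per-table question raised in that thread on an 8.3 server, sketched with only standard catalogs (8.3 still keeps per-table autovacuum overrides in the pg_autovacuum system catalog; 8.4 moved them to relation storage parameters, so adjust there):

-- Cost-based vacuum settings as the running server sees them
SELECT name, setting
FROM pg_settings
WHERE name IN ('vacuum_cost_delay', 'vacuum_cost_limit',
               'autovacuum_vacuum_cost_delay', 'autovacuum_vacuum_cost_limit',
               'vacuum_cost_page_hit', 'vacuum_cost_page_miss',
               'vacuum_cost_page_dirty');

-- Any rows here mean some table has its own autovacuum/cost overrides
SELECT * FROM pg_autovacuum;

Running both on the well-behaved twin and on the problem machine and diffing the output is a cheap way to rule out a configuration difference before digging into the autovacuum worker code.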
[ { "msg_contents": "When a connection is used for both reading and writing, a commit() also\ndestroys any open cursors. Simple workaround - use two connections.\n\nSee full discussion on JDBC list.\n\nCheers\nDave\n\nOn Thu, Apr 15, 2010 at 3:01 PM, Dave Crooke <[email protected]> wrote:\n\n> I have followed the instructions below to no avail .... any thoughts?\n>\n>\n> http://jdbc.postgresql.org/documentation/83/query.html#query-with-cursor\n>\n> This is what happens when I reduce the fetch_size to 50 ... stops after\n> about 950msec and 120 fetches (6k rows) ....\n>\n>\n> 13:59:56,054 [PerfDataMigrator] ERROR\n> com.hyper9.storage.sample.persistence.PersistenceManager:3216 - Unexpected\n> error while migrating sample data: 6000\n> org.postgresql.util.PSQLException: ERROR: portal \"C_14\" does not exist\n>\n> at\n> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n> at\n> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n> at\n> org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n> at\n> org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n> at\n> org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)\n> at\n> org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)\n> at\n> com.hyper9.storage.sample.persistence.PersistenceManager$Migrator.run(PersistenceManager.java:3156)\n> at java.lang.Thread.run(Thread.java:619)\n>\n>\n> Cheers\n> Dave\n>\n>\n>\n> On Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n>\n>> Hey folks\n>>\n>> I am trying to do a full table scan on a large table from Java, using a\n>> straightforward \"select * from foo\". I've run into these problems:\n>>\n>> 1. By default, the PG JDBC driver attempts to suck the entire result set\n>> into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool,\n>> in fact I consider it a serious bug (even MySQL gets this right ;-) I am\n>> only testing with a 9GB result set, but production needs to scale to 200GB\n>> or more, so throwing hardware at is is not feasible.\n>>\n>> 2. I tried using the official taming method, namely *\n>> java.sql.Statement.setFetchSize(1000)* and this makes it blow up entirely\n>> with an error I have no context for, as follows (the number C_10 varies,\n>> e.g. C_12 last time) ...\n>>\n>> org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n>> at\n>> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n>> at\n>> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n>> at\n>> org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n>> at\n>> org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n>>\n>> This is definitely a bug :-)\n>>\n>>\n>> Is there a known workaround for this ... will updating to a newer version\n>> of the driver fix this?\n>>\n>> Is there a magic incation of JDBC calls that will tame it?\n>>\n>> Can I cast the objects to PG specific types and access a hidden API to\n>> turn off this behaviour?\n>>\n>> If the only workaround is to explicitly create a cursor in PG, is there a\n>> good example of how to do this from Java?\n>>\n>> Cheers\n>> Dave\n>>\n>>\n>>\n>>\n>>\n>>\n>\n\nWhen a connection is used for both reading and writing, a commit() also destroys any open cursors. 
Simple workaround - use two connections.See full discussion on JDBC list.CheersDave\nOn Thu, Apr 15, 2010 at 3:01 PM, Dave Crooke <[email protected]> wrote:\nI have followed the instructions below to no avail .... any thoughts?http://jdbc.postgresql.org/documentation/83/query.html#query-with-cursor\nThis is what happens when I reduce the fetch_size to 50 ... stops after about 950msec and 120 fetches (6k rows) ....13:59:56,054 [PerfDataMigrator] ERROR com.hyper9.storage.sample.persistence.PersistenceManager:3216 - Unexpected error while migrating sample data: 6000\n\norg.postgresql.util.PSQLException: ERROR: portal \"C_14\" does not exist    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n\n    at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)    at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)\n\n    at org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:169)    at com.hyper9.storage.sample.persistence.PersistenceManager$Migrator.run(PersistenceManager.java:3156)    at java.lang.Thread.run(Thread.java:619)\nCheersDaveOn Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n\nHey folksI am trying to do a full table scan on a large table from Java, using a straightforward \"select * from foo\". I've run into these problems:1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at is is not feasible.\n2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ... \norg.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n\n\n    at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)This is definitely a bug :-)\n\n\nIs there a known workaround for this ... will updating to a newer version of the driver fix this? Is there a magic incation of JDBC calls that will tame it?Can I cast the objects to PG specific types and access a hidden API to turn off this behaviour?\nIf the only workaround is to explicitly create a cursor in PG, is there a good example of how to do this from Java?CheersDave", "msg_date": "Thu, 15 Apr 2010 18:39:37 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "SOLVED: Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" } ]
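The server-side behaviour behind that fix can be seen from plain SQL as well; this is just an illustrative sketch (the "foo" table is a placeholder, not something posted in the thread):

BEGIN;
DECLARE c1 CURSOR FOR SELECT * FROM foo;
FETCH 10 FROM c1;
COMMIT;              -- ends the transaction, so the non-holdable cursor is gone
FETCH 10 FROM c1;    -- fails: cursor "c1" does not exist

DECLARE ... WITH HOLD would survive the commit, but it materializes the remaining rows at commit time, which is unattractive for a multi-gigabyte result set; hence the two-connection workaround described above.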
[ { "msg_contents": "Hello.\n\nI have a query that performs very poor because there is a limit on join\ncolumn that is not applied to other columns:\n\nselect * from company this_ left outer join company_tag this_1_ on\nthis_.id=this_1_.company_id left outer join company_measures companymea2_ on\nthis_.id=companymea2_.company_id left outer join company_descr ces3_ on\nthis_.id=ces3_.company_id where this_1_.tag_id = 7 and this_.id>50000000\nand this_1_.company_id>50000000\norder by this_.id asc limit 1000;\n\n(plan1.txt)\nTotal runtime: 7794.692 ms\n\nAt the same time if I apply the limit (>50000000) to other columns in query\nitself it works like a charm:\n\nselect * from company this_ left outer join company_tag this_1_ on\nthis_.id=this_1_.company_id left outer join company_measures companymea2_ on\nthis_.id=companymea2_.company_id left outer join company_descr ces3_ on\nthis_.id=ces3_.company_id where this_1_.tag_id = 7 and this_.id>50000000\nand this_1_.company_id>50000000\nand companymea2_.company_id>50000000 and ces3_.company_id>50000000\norder by this_.id asc limit 1000;\n\n(plan2.txt)\nTotal runtime: 27.547 ms\n\nI've thought and someone in this list've told me that this should be done\nautomatically. But I have pretty recent server:\nPostgreSQL 8.4.2 on amd64-portbld-freebsd8.0, compiled by GCC cc (GCC) 4.2.1\n20070719 [FreeBSD], 64-bit\nand it still do not work\n\nDo I misunderstand something or this feature don't work in such a query?\n\nBest regards, Vitalii Tymchyshyn", "msg_date": "Fri, 16 Apr 2010 11:02:06 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Planner not using column limit specified for one column for another\n\tcolumn equal to first" }, { "msg_contents": "On Fri, 2010-04-16 at 11:02 +0300, Віталій Тимчишин wrote:\n> Hello.\n> \n> \n> I have a query that performs very poor because there is a limit on\n> join column that is not applied to other columns:\n> \n> \n> select * from company this_ left outer join company_tag this_1_ on\n> this_.id=this_1_.company_id left outer join company_measures\n> companymea2_ on this_.id=companymea2_.company_id left outer join\n> company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id\n> = 7 and this_.id>50000000 \n> and this_1_.company_id>50000000\n> order by this_.id asc limit 1000;\n> \n> \n> (plan1.txt)\n> Total runtime: 7794.692 ms\n> \n> \n> At the same time if I apply the limit (>50000000) to other columns in\n> query itself it works like a charm:\n> \n> \n> select * from company this_ left outer join company_tag this_1_ on\n> this_.id=this_1_.company_id left outer join company_measures\n> companymea2_ on this_.id=companymea2_.company_id left outer join\n> company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id\n> = 7 and this_.id>50000000 \n> and this_1_.company_id>50000000\n> and companymea2_.company_id>50000000 and ces3_.company_id>50000000\n> order by this_.id asc limit 1000;\n\nThe queries are not the same.\n\n2nd variant will not return the rows where there are no matching rows\ninthis_1_ , companymea2_ or ces3_.company_id\n\nA query equivalent to first one would be:\n\n\nselect * from company this_ \n left outer join company_tag this_1_ \n on (this_.id=this_1_.company_id \n\t and this_1_.company_id>50000000)\n left outer join company_measures companymea2_ \n on (this_.id=companymea2_.company_id \n\t and companymea2_.company_id>50000000)\n left outer join company_descr ces3_ \n on (this_.id=ces3_.company_id \n\t and 
ces3_.company_id>50000000)\n where this_1_.tag_id = 7 \n and this_.id>50000000 \n order by this_.id asc \n limit 1000;\n\n\nI'm not sure that planner considers the above form of plan rewrite, nor\nthat it would make much sense to do so unless there was a really small\nnumber of rows where x_.company_id>50000000 \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Fri, 16 Apr 2010 11:25:25 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one\n\tcolumn for another column equal to first" }, { "msg_contents": "Віталій Тимчишин wrote:\n> Hello.\n>\n> I have a query that performs very poor because there is a limit on \n> join column that is not applied to other columns:\n>\n> select * from company this_ left outer join company_tag this_1_ on \n> this_.id=this_1_.company_id left outer join company_measures \n> companymea2_ on this_.id=companymea2_.company_id left outer join \n> company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id \n> = 7 and this_.id>50000000 \n> and this_1_.company_id>50000000\n> order by this_.id asc limit 1000;\n>\n> (plan1.txt)\n> Total runtime: 7794.692 ms\n>\n> At the same time if I apply the limit (>50000000) to other columns in \n> query itself it works like a charm:\n>\n> select * from company this_ left outer join company_tag this_1_ on \n> this_.id=this_1_.company_id left outer join company_measures \n> companymea2_ on this_.id=companymea2_.company_id left outer join \n> company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id \n> = 7 and this_.id>50000000 \n> and this_1_.company_id>50000000\n> and companymea2_.company_id>50000000 and ces3_.company_id>50000000\n> order by this_.id asc limit 1000;\n>\n> (plan2.txt)\n> Total runtime: 27.547 ms\n>\n> I've thought and someone in this list've told me that this should be \n> done automatically.\nYes, if you have in a query a=b and b=c, then the optimizer figures out \nthat a=c as well. (a,b and c are then member of the same equivalence class).\n\nHowever both queries are not the same, since the joins you're using are \nouter joins. In the first it's possible that records are returned for \ncompany records with no matching ces3_ records, the ces3_ records is \nnull in that case. In the second query no NULL ces3_ information may be \nreturned.\n\nAnother thing is it seems that the number of rows guessed is far off \nfrom the actual number of rows, is the number 5000000 artificial or are \nyou're statistics old or too small histogram/mcv's?\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 16 Apr 2010 10:31:06 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "16 квітня 2010 р. 
11:31 Yeb Havinga <[email protected]> написав:\n\n> Віталій Тимчишин wrote:\n>\n>> Hello.\n>>\n>> I have a query that performs very poor because there is a limit on join\n>> column that is not applied to other columns:\n>>\n>> select * from company this_ left outer join company_tag this_1_ on\n>> this_.id=this_1_.company_id left outer join company_measures companymea2_ on\n>> this_.id=companymea2_.company_id left outer join company_descr ces3_ on\n>> this_.id=ces3_.company_id where this_1_.tag_id = 7 and this_.id>50000000 and\n>> this_1_.company_id>50000000\n>> order by this_.id asc limit 1000;\n>>\n>> (plan1.txt)\n>> Total runtime: 7794.692 ms\n>>\n>> At the same time if I apply the limit (>50000000) to other columns in\n>> query itself it works like a charm:\n>>\n>> select * from company this_ left outer join company_tag this_1_ on\n>> this_.id=this_1_.company_id left outer join company_measures companymea2_ on\n>> this_.id=companymea2_.company_id left outer join company_descr ces3_ on\n>> this_.id=ces3_.company_id where this_1_.tag_id = 7 and this_.id>50000000 and\n>> this_1_.company_id>50000000\n>> and companymea2_.company_id>50000000 and ces3_.company_id>50000000\n>> order by this_.id asc limit 1000;\n>>\n>> (plan2.txt)\n>> Total runtime: 27.547 ms\n>>\n>> I've thought and someone in this list've told me that this should be done\n>> automatically.\n>>\n> Yes, if you have in a query a=b and b=c, then the optimizer figures out\n> that a=c as well. (a,b and c are then member of the same equivalence class).\n>\n> However both queries are not the same, since the joins you're using are\n> outer joins. In the first it's possible that records are returned for\n> company records with no matching ces3_ records, the ces3_ records is null in\n> that case. In the second query no NULL ces3_ information may be returned.\n>\n\nOK, but when I move limit to join condition the query is still fast:\n\nselect * from company this_ left outer join company_tag this_1_ on\nthis_.id=this_1_.company_id\nleft outer join company_measures companymea2_ on\nthis_.id=companymea2_.company_id and companymea2_.company_id>50000000\nleft outer join company_descr ces3_ on this_.id=ces3_.company_id and\nces3_.company_id>50000000\nwhere this_1_.tag_id = 7 and this_.id>50000000\nand this_1_.company_id>50000000\norder by this_.id asc limit 1000;\n\n(plan3.txt),\nTotal runtime: 26.327 ms\nBTW: Changing slow query to inner joins do not make it fast\n\n\n>\n> Another thing is it seems that the number of rows guessed is far off from\n> the actual number of rows, is the number 5000000 artificial or are you're\n> statistics old or too small histogram/mcv's?\n>\n\nNope, I suppose this is because of limit. If I remove the limit, the\nestimations are quite correct. There are ~6 millions of row in each table.", "msg_date": "Fri, 16 Apr 2010 15:49:45 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "16 квітня 2010 р. 
11:25 Hannu Krosing <[email protected]> написав:\n\n> On Fri, 2010-04-16 at 11:02 +0300, Віталій Тимчишин wrote:\n> > Hello.\n> >\n> >\n> > I have a query that performs very poor because there is a limit on\n> > join column that is not applied to other columns:\n> >\n> >\n> > select * from company this_ left outer join company_tag this_1_ on\n> > this_.id=this_1_.company_id left outer join company_measures\n> > companymea2_ on this_.id=companymea2_.company_id left outer join\n> > company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id\n> > = 7 and this_.id>50000000\n> > and this_1_.company_id>50000000\n> > order by this_.id asc limit 1000;\n> >\n> >\n> > (plan1.txt)\n> > Total runtime: 7794.692 ms\n> >\n> >\n> > At the same time if I apply the limit (>50000000) to other columns in\n> > query itself it works like a charm:\n> >\n> >\n> > select * from company this_ left outer join company_tag this_1_ on\n> > this_.id=this_1_.company_id left outer join company_measures\n> > companymea2_ on this_.id=companymea2_.company_id left outer join\n> > company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id\n> > = 7 and this_.id>50000000\n> > and this_1_.company_id>50000000\n> > and companymea2_.company_id>50000000 and ces3_.company_id>50000000\n> > order by this_.id asc limit 1000;\n>\n> The queries are not the same.\n>\n> 2nd variant will not return the rows where there are no matching rows\n> inthis_1_ , companymea2_ or ces3_.company_id\n>\n> A query equivalent to first one would be:\n>\n>\n> select * from company this_\n> left outer join company_tag this_1_\n> on (this_.id=this_1_.company_id\n> and this_1_.company_id>50000000)\n> left outer join company_measures companymea2_\n> on (this_.id=companymea2_.company_id\n> and companymea2_.company_id>50000000)\n> left outer join company_descr ces3_\n> on (this_.id=ces3_.company_id\n> and ces3_.company_id>50000000)\n> where this_1_.tag_id = 7\n> and this_.id>50000000\n> order by this_.id asc\n> limit 1000;\n>\n\nAnd it's still fast (see plan in another mail), while \"inner join\" variant\nof original query is still slow.\n\n\n>\n>\n> I'm not sure that planner considers the above form of plan rewrite, nor\n> that it would make much sense to do so unless there was a really small\n> number of rows where x_.company_id>50000000\n>\n> Actually no,\nselect id > 50000000, count(*) from company group by 1\nf,1096042\nt,5725630\n\nI don't know why the planner wishes to perform few merges of 1000 to a\nmillion of records (and the merges is the thing that takes time) instead of\ntaking a 1000 of records from main table and then doing a nested loop. 
And\nit must read all the records that DO NOT match the criteria for secondary\ntables before getting to correct records if it does not filter secondary\ntables with index on retrieve.\n\nset enable_mergejoin=false helps original query, but this is another problem\nand first solution is simpler and can be used by planner automatically,\nwhile second requires rethinking/rewrite of LIMIT estimation logic\n(Plan of nested loop attached)", "msg_date": "Fri, 16 Apr 2010 15:59:50 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "Віталій Тимчишин wrote:\n>\n> BTW: Changing slow query to inner joins do not make it fast\nI'm interested to see the query and plan of the slow query with inner joins.\n \n>\n>\n> Another thing is it seems that the number of rows guessed is far\n> off from the actual number of rows, is the number 5000000\n> artificial or are you're statistics old or too small histogram/mcv's?\n>\n>\n> Nope, I suppose this is because of limit. If I remove the limit, the \n> estimations are quite correct. There are ~6 millions of row in each table.\nYes, that makes sense.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 16 Apr 2010 15:21:30 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "16 квітня 2010 р. 16:21 Yeb Havinga <[email protected]> написав:\n\n> Віталій Тимчишин wrote:\n>\n>>\n>> BTW: Changing slow query to inner joins do not make it fast\n>>\n> I'm interested to see the query and plan of the slow query with inner joins.\n>\n>\n> Here you are. The query:\n\nselect * from company this_ inner join company_tag this_1_ on\nthis_.id=this_1_.company_id\ninner join company_measures companymea2_ on\nthis_.id=companymea2_.company_id\ninner join company_descr ces3_ on this_.id=ces3_.company_id\nwhere this_1_.tag_id = 7 and this_.id>50000000\norder by this_.id asc\nlimit 1000\n;\nTotal runtime: 14088.942 ms\n(plan is attached)\n\nBest regards, Vitalii Tymchyshyn", "msg_date": "Fri, 16 Apr 2010 16:58:22 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> I've thought and someone in this list've told me that this should be done\n> automatically.\n\nNo, that's not true. We do make deductions about transitive equalities,\nie, given WHERE a=b AND b=c the planner will infer a=c and use that if\nit's helpful. 
We don't make deductions about inequalities such as a>c.\nIn theory there's enough information available to do so, but overall\ntrying to do that would probably waste more cycles than it would save.\nYou'd need a lot of expensive new planner infrastructure, and in the\nvast majority of queries it wouldn't produce anything very helpful.\n\nAs was pointed out, even if we had such logic it wouldn't apply in this\nexample, because the equality conditions aren't real equalities but\nOUTER JOIN conditions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 10:19:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one column for\n\tanother column equal to first" }, { "msg_contents": "Tom Lane wrote:\n> =?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> \n>> I've thought and someone in this list've told me that this should be done\n>> automatically.\n>> \n>\n> No, that's not true. We do make deductions about transitive equalities,\n> ie, given WHERE a=b AND b=c the planner will infer a=c and use that if\n> it's helpful. We don't make deductions about inequalities such as a>c.\n> In theory there's enough information available to do so, but overall\n> trying to do that would probably waste more cycles than it would save.\n> You'd need a lot of expensive new planner infrastructure, and in the\n> vast majority of queries it wouldn't produce anything very helpful.\n> \nNew expensive planner infrastructure to support from a>b and b>c infer \na>c, yes.\n\nBut I wonder if something like Leibniz's principle of identity holds for \nmembers of the same equivalence class, e.g. like if x,y are both members \nof the same EC, then for every predicate P, P(x) iff P(y). Probably not \nfor every predicate (like varno = 2 or attname='x'), but for the query \nevaluation, the object denoted by the variables are the same, since that \nis the standard meaning of the = operator. I cannot think of any \nstandard (btree) operator where 'Leibniz' would fail in this case.\n\nregards,\nYeb Havinga\n\n\n", "msg_date": "Fri, 16 Apr 2010 17:33:56 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "Yeb Havinga <[email protected]> writes:\n> New expensive planner infrastructure to support from a>b and b>c infer \n> a>c, yes.\n\n> But I wonder if something like Leibniz's principle of identity holds for \n> members of the same equivalence class, e.g. like if x,y are both members \n> of the same EC, then for every predicate P, P(x) iff P(y).\n\nThis could only be assumed to apply for predicates constructed from\noperators that are in the equivalence operator's btree opfamily.\nNow, that would certainly be a large enough set of cases to sometimes\ngive useful results --- but I stand by the opinion that it wouldn't\nwin often enough to justify the added planner overhead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Apr 2010 11:45:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one column for\n\tanother column equal to first" }, { "msg_contents": "16 квітня 2010 р. 
17:19 Tom Lane <[email protected]> написав:\n\n> =?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> > I've thought and someone in this list've told me that this should be done\n> > automatically.\n>\n> As was pointed out, even if we had such logic it wouldn't apply in this\n> example, because the equality conditions aren't real equalities but\n> OUTER JOIN conditions.\n>\n>\nIn this case you can copy condition to \"ON\" condition, not to where cause\nand this would work correct, e.g. \"select something from a join b on a.x=b.y\nwhere a.x > n\" <=> \"select something from a join b on a.x=b.y and b.y > n\nwhere a.x > n\".\n\nAs of making planner more clever, may be it is possible to introduce\ndivision on \"fast queries\" and \"long queries\", so that if after fast\nplanning cost is greater then some configurable threshold, advanced planning\ntechniques (or settings) are used. As far as I have seen in this list, many\ntechniques are not used simply because they are too complex and could make\nplanning take too much time for really fast queries, but they are vital for\nlong ones.\nAlso same (or similar) threshold could be used to enable replanning for each\nrun of prepared query - also an often complaint is that planned query is not\nthat fast as is could be.\n\n-- \nBest regards,\nVitalii Tymchyshyn\n\n16 квітня 2010 р. 17:19 Tom Lane <[email protected]> написав:\n=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> I've thought and someone in this list've told me that this should be done\n> automatically.\nAs was pointed out, even if we had such logic it wouldn't apply in this\nexample, because the equality conditions aren't real equalities but\nOUTER JOIN conditions.In this case you can copy condition to \"ON\" condition, not to where cause and this would work correct, e.g. \"select something from a join b on a.x=b.y where a.x > n\" <=> \"select something from a join b on a.x=b.y and b.y > n where a.x > n\".\nAs of making planner more clever, may be it is possible to introduce division on \"fast queries\" and \"long queries\", so that if after fast planning cost is greater then some configurable threshold, advanced planning techniques (or settings) are used. As far as I have seen in this list, many techniques are not used simply because they are too complex and could make planning take too much time for really fast queries, but they are vital for long ones.\nAlso same (or similar) threshold could be used to enable replanning for each run of prepared query - also an often complaint is that planned query is not that fast as is could be. -- Best regards,\n Vitalii Tymchyshyn", "msg_date": "Sat, 17 Apr 2010 07:12:18 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" }, { "msg_contents": "On Sat, 17 Apr 2010, Віталій Тимчишин wrote:\n> As of making planner more clever, may be it is possible to introduce\n> division on \"fast queries\" and \"long queries\", so that if after fast\n> planning cost is greater then some configurable threshold, advanced planning\n> techniques (or settings) are used. As far as I have seen in this list, many\n> techniques are not used simply because they are too complex and could make\n> planning take too much time for really fast queries, but they are vital for\n> long ones.\n\n+1. That's definitely a good idea in my view. 
The query optimiser I wrote \n(which sits on top of Postgres and makes use of materialised views to \nspeed up queries) uses a similar approach - it expends effort proportional \nto the estimated cost of the query, as reported by EXPLAIN.\n\nMatthew\n\n-- \n To most people, solutions mean finding the answers. But to chemists,\n solutions are things that are still all mixed up.", "msg_date": "Mon, 19 Apr 2010 10:47:37 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner not using column limit specified for one column\n\tfor another column equal to first" } ]
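For anyone wanting to try the nested-loop alternative mentioned earlier in that thread without flipping the setting globally, a sketch (table and column names copied from the queries above; SET LOCAL reverts automatically when the transaction ends):

BEGIN;
SET LOCAL enable_mergejoin = off;
EXPLAIN ANALYZE
select * from company this_
left outer join company_tag this_1_ on this_.id=this_1_.company_id
left outer join company_measures companymea2_ on this_.id=companymea2_.company_id
left outer join company_descr ces3_ on this_.id=ces3_.company_id
where this_1_.tag_id = 7 and this_.id>50000000
and this_1_.company_id>50000000
order by this_.id asc limit 1000;
ROLLBACK;

That keeps the experiment confined to a single session and transaction, which matters when the tables in question live on a production box.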
[ { "msg_contents": "Scott - I tried to post a SOLVED followup to the JDBC list but it was\nrejected :-!\n\nI now have the opposite problem of getting rid of the cursor :-)\nResultSet.close() does not work. I am trying to do a DROP TABLE from the\nother Connection, to whack the table I just finished the ETL on, but it just\nhangs indefintiely, and pg_locks shows the shared read lock still sitting\nthere.\n\nI am trying a Statement.close() and Connection.close() now, but I fear I may\nhave to do something slightly ugly, as I have Apache DBCP sitting in between\nme and the actual PG JDBC driver.\n\nI am hoping the slightly ugly thing is only closing the underlying\nconnection, and does not have to be */etc/init.d/postgresql8.3 restart* :-)\nIs there a backdoor way to forcibly get rid of a lock you don't need any\nmore?\n\nCheers\nDave\n\nOn Mon, Apr 19, 2010 at 1:05 PM, Scott Carey <[email protected]>wrote:\n\n> On Apr 15, 2010, at 1:01 PM, Dave Crooke wrote:\n> > On Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n> > Hey folks\n> >\n> > I am trying to do a full table scan on a large table from Java, using a\n> straightforward \"select * from foo\". I've run into these problems:\n> >\n> > 1. By default, the PG JDBC driver attempts to suck the entire result set\n> into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in\n> fact I consider it a serious bug (even MySQL gets this right ;-) I am only\n> testing with a 9GB result set, but production needs to scale to 200GB or\n> more, so throwing hardware at is is not feasible.\n> >\n>\n> For scrolling large result sets you have to do the following to prevent it\n> from loading the whole thing into memory:\n>\n>\n> Use forward-only, read-only result scrolling and set the fetch size. Some\n> of these may be the default depending on what the connection pool is doing,\n> but if set otherwise it may cause the whole result set to load into memory.\n> I regularly read several GB result sets with ~10K fetch size batches.\n>\n> Something like:\n> Statement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,\n> java.sql.ResultSet.CONCUR_READ_ONLY)\n> st.setFetchSize(FETCH_SIZE);\n>\n\nThat's what I''m using, albeit without any args to createStatement, and it\nnow works.\n\n\n>\n>\n>\n> > 2. I tried using the official taming method, namely\n> java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely\n> with an error I have no context for, as follows (the number C_10 varies,\n> e.g. C_12 last time) ...\n> >\n> > org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n> > at\n> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n> > at\n> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n> > at\n> org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n> > at\n> org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n> >\n> > This is definitely a bug :-)\n> >\n> >\n>\n> I have no idea what that is.\n>\n\nIt was because I was also writing to the same Connection ... when you call\nConnection.commit() with the PG JDBC driver, it also kills all your open\ncursors.\n\nI think this is a side effect of the PG internal design where it does MVCC\nwithin a table (rows have multiple versions with min and max transaction\nids) ... even a query in PG has a notional virtual transaction ID, whereas\nin e.g. 
Oracle, a query has a start time and visibility horizon, and as long\nas you have enough undo tablespace, it has an existence which is totally\nindependent of any transactions going on around it even on the same JDBC\nconnection.\n\nScott - I tried to post a SOLVED followup to the JDBC list but it was rejected :-!I now have the opposite problem of getting rid of the cursor :-) ResultSet.close() does not work. I am trying to do a DROP TABLE from the other Connection, to whack the table I just finished the ETL on, but it just hangs indefintiely, and pg_locks shows the shared read lock still sitting there.\nI am trying a Statement.close() and Connection.close() now, but I fear I may have to do something slightly ugly, as I have Apache DBCP sitting in between me and the actual PG JDBC driver.I am hoping the slightly ugly thing is only closing the underlying connection, and does not have to be /etc/init.d/postgresql8.3 restart :-) Is there a backdoor way to forcibly get rid of a lock you don't need any more?\nCheersDaveOn Mon, Apr 19, 2010 at 1:05 PM, Scott Carey <[email protected]> wrote:\nOn Apr 15, 2010, at 1:01 PM, Dave Crooke wrote:\n> On Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n> Hey folks\n>\n> I am trying to do a full table scan on a large table from Java, using a straightforward \"select * from foo\". I've run into these problems:\n>\n> 1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at is is not feasible.\n\n>\n\nFor scrolling large result sets you have to do the following to prevent it from loading the whole thing into memory:\n\n\nUse forward-only, read-only result scrolling and set the fetch size.  Some of these may be the default depending on what the connection pool is doing, but if set otherwise it may cause the whole result set to load into memory.  I regularly read several GB result sets with ~10K fetch size batches.\n\nSomething like:\nStatement st =  conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY)\nst.setFetchSize(FETCH_SIZE);That's what I''m using, albeit without any args to createStatement, and it now works. \n\n\n\n> 2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ...\n\n>\n> org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n>     at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n>     at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n>     at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n>     at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n>\n> This is definitely a bug :-)\n>\n>\n\nI have no idea what that is.It was because I was also writing to the same Connection ... when you call Connection.commit() with the PG JDBC driver, it also kills all your open cursors. \nI think this is a side effect of the PG internal design where it does MVCC within a table (rows have multiple versions with min and max transaction ids) ... 
", "msg_date": "Mon, 19 Apr 2010 18:28:49 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to\n\ttame the 8.3.x JDBC driver with a biq guery result set" }
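A minimal, self-contained sketch of the streaming-read pattern this thread settles on, assuming an 8.3-era PostgreSQL JDBC driver used directly (no DBCP wrapper in between); the connection URL, credentials and the table name big_table are illustrative placeholders, not values taken from the thread. Cursor-based fetching only happens when autocommit is off, the statement is forward-only and read-only, and a non-zero fetch size is set; the commit at the end is what actually releases the locks discussed above.

import java.sql.*;

public class BigTableScan {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        conn.setAutoCommit(false);                  // required, otherwise the driver buffers the whole result set
        Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                            ResultSet.CONCUR_READ_ONLY);
        st.setFetchSize(10000);                     // rows fetched per round trip, not a cap on the result
        ResultSet rs = st.executeQuery("SELECT * FROM big_table");
        try {
            while (rs.next()) {
                // process one row at a time; only one fetch batch is held in client memory
            }
        } finally {
            rs.close();     // queues the portal close with the driver; locks are still held here
            st.close();
            conn.commit();  // per the discussion above, this is what releases the locks
            conn.close();
        }
    }
}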
[ { "msg_contents": "Statement.close() appears to get the job done (in my envrionment, PG's\ndriver never sees a Connection.close() because of DBCP).\n\nI'd consider the fact that ResultSet.close() does not release the implicit\ncursor to be something of a bug, but it may well have been fixed already.\n\nCheers\nDave\n\nOn Mon, Apr 19, 2010 at 6:28 PM, Dave Crooke <[email protected]> wrote:\n\n> Scott - I tried to post a SOLVED followup to the JDBC list but it was\n> rejected :-!\n>\n> I now have the opposite problem of getting rid of the cursor :-)\n> ResultSet.close() does not work. I am trying to do a DROP TABLE from the\n> other Connection, to whack the table I just finished the ETL on, but it just\n> hangs indefintiely, and pg_locks shows the shared read lock still sitting\n> there.\n>\n> I am trying a Statement.close() and Connection.close() now, but I fear I\n> may have to do something slightly ugly, as I have Apache DBCP sitting in\n> between me and the actual PG JDBC driver.\n>\n> I am hoping the slightly ugly thing is only closing the underlying\n> connection, and does not have to be */etc/init.d/postgresql8.3 restart*:-) Is there a backdoor way to forcibly get rid of a lock you don't need any\n> more?\n>\n> Cheers\n> Dave\n>\n> On Mon, Apr 19, 2010 at 1:05 PM, Scott Carey <[email protected]>wrote:\n>\n>> On Apr 15, 2010, at 1:01 PM, Dave Crooke wrote:\n>> > On Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n>> > Hey folks\n>> >\n>> > I am trying to do a full table scan on a large table from Java, using a\n>> straightforward \"select * from foo\". I've run into these problems:\n>> >\n>> > 1. By default, the PG JDBC driver attempts to suck the entire result set\n>> into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in\n>> fact I consider it a serious bug (even MySQL gets this right ;-) I am only\n>> testing with a 9GB result set, but production needs to scale to 200GB or\n>> more, so throwing hardware at is is not feasible.\n>> >\n>>\n>> For scrolling large result sets you have to do the following to prevent it\n>> from loading the whole thing into memory:\n>>\n>>\n>> Use forward-only, read-only result scrolling and set the fetch size. Some\n>> of these may be the default depending on what the connection pool is doing,\n>> but if set otherwise it may cause the whole result set to load into memory.\n>> I regularly read several GB result sets with ~10K fetch size batches.\n>>\n>> Something like:\n>> Statement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,\n>> java.sql.ResultSet.CONCUR_READ_ONLY)\n>> st.setFetchSize(FETCH_SIZE);\n>>\n>\n> That's what I''m using, albeit without any args to createStatement, and it\n> now works.\n>\n>\n>>\n>>\n>>\n>> > 2. I tried using the official taming method, namely\n>> java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely\n>> with an error I have no context for, as follows (the number C_10 varies,\n>> e.g. 
C_12 last time) ...\n>> >\n>> > org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n>> > at\n>> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n>> > at\n>> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n>> > at\n>> org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n>> > at\n>> org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n>> >\n>> > This is definitely a bug :-)\n>> >\n>> >\n>>\n>> I have no idea what that is.\n>>\n>\n> It was because I was also writing to the same Connection ... when you call\n> Connection.commit() with the PG JDBC driver, it also kills all your open\n> cursors.\n>\n> I think this is a side effect of the PG internal design where it does MVCC\n> within a table (rows have multiple versions with min and max transaction\n> ids) ... even a query in PG has a notional virtual transaction ID, whereas\n> in e.g. Oracle, a query has a start time and visibility horizon, and as long\n> as you have enough undo tablespace, it has an existence which is totally\n> independent of any transactions going on around it even on the same JDBC\n> connection.\n>\n>\n>\n>\n>\n\nStatement.close() appears to get the job done (in my envrionment, PG's driver never sees a Connection.close() because of DBCP).I'd consider the fact that ResultSet.close() does not release the implicit cursor to be something of a bug, but it may well have been fixed already.\nCheersDaveOn Mon, Apr 19, 2010 at 6:28 PM, Dave Crooke <[email protected]> wrote:\nScott - I tried to post a SOLVED followup to the JDBC list but it was rejected :-!I now have the opposite problem of getting rid of the cursor :-) ResultSet.close() does not work. I am trying to do a DROP TABLE from the other Connection, to whack the table I just finished the ETL on, but it just hangs indefintiely, and pg_locks shows the shared read lock still sitting there.\nI am trying a Statement.close() and Connection.close() now, but I fear I may have to do something slightly ugly, as I have Apache DBCP sitting in between me and the actual PG JDBC driver.I am hoping the slightly ugly thing is only closing the underlying connection, and does not have to be /etc/init.d/postgresql8.3 restart :-) Is there a backdoor way to forcibly get rid of a lock you don't need any more?\nCheersDaveOn Mon, Apr 19, 2010 at 1:05 PM, Scott Carey <[email protected]> wrote:\n\nOn Apr 15, 2010, at 1:01 PM, Dave Crooke wrote:\n> On Thu, Apr 15, 2010 at 2:42 PM, Dave Crooke <[email protected]> wrote:\n> Hey folks\n>\n> I am trying to do a full table scan on a large table from Java, using a straightforward \"select * from foo\". I've run into these problems:\n>\n> 1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at is is not feasible.\n\n\n>\n\nFor scrolling large result sets you have to do the following to prevent it from loading the whole thing into memory:\n\n\nUse forward-only, read-only result scrolling and set the fetch size.  Some of these may be the default depending on what the connection pool is doing, but if set otherwise it may cause the whole result set to load into memory.  
I regularly read several GB result sets with ~10K fetch size batches.\n\nSomething like:\nStatement st =  conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY)\nst.setFetchSize(FETCH_SIZE);That's what I''m using, albeit without any args to createStatement, and it now works. \n\n\n\n> 2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ...\n\n\n>\n> org.postgresql.util.PSQLException: ERROR: portal \"C_10\" does not exist\n>     at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)\n>     at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)\n>     at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)\n>     at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)\n>\n> This is definitely a bug :-)\n>\n>\n\nI have no idea what that is.It was because I was also writing to the same Connection ... when you call Connection.commit() with the PG JDBC driver, it also kills all your open cursors. \nI think this is a side effect of the PG internal design where it does MVCC within a table (rows have multiple versions with min and max transaction ids) ... even a query in PG has a notional virtual transaction ID, whereas in e.g. Oracle, a query has a start time and visibility horizon, and as long as you have enough undo tablespace, it has an existence which is totally independent of any transactions going on around it even on the same JDBC connection.", "msg_date": "Mon, 19 Apr 2010 18:33:24 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM]\n\tRe: HELP: How to tame the 8.3.x JDBC driver with a biq guery result\n\tset" }, { "msg_contents": "Dave Crooke <[email protected]> wrote:\n \n> I'd consider the fact that ResultSet.close() does not release the\n> implicit cursor to be something of a bug\n \nWhat's your reasoning on that? The definitions of cursors in the\nspec, if memory serves, allow a cursor to be closed and re-opened;\nwhy would this be treated differently?\n \n-Kevin\n", "msg_date": "Tue, 20 Apr 2010 09:28:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "SOLVED ... Re: Getting rid of a cursor from JDBC ....\n\tRe: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver\n\twith a biq guery result set" }, { "msg_contents": "AFAICT from the Java end, ResultSet.close() is supposed to be final. There\nis no way I know of in JDBC to get a handle back to the cursor on the server\nside once you have made this call - in fact, its sole purpose is to inform\nthe server in a timely fashion that this cursor is no longer required, since\nthe ResultSet itself is a Java object and thus subject to garbage collection\nand finalizer hooks.\n\nAt a pragmatic level, the PGSQL JDBC driver has a lot of odd behaviours\nwhich, while they may or may not be in strict compliance with the letter of\nthe standard, are very different from any other mainstream database that I\nhave accessed from Java .... 
what I'd consider as normative behaviour, using\nregular JDBC calls without the need to jump through all these weird hoops,\nis exhibited by all of the following: Oracle, SQL Server, DB2, MySQL, Apache\nDerby and JET (MS-Access file-based back end, the .mdb format)\n\nIn practce, this places PGSQL as the odd one out, which is a bit of a\nturn-off to expereinced Java people who are PG newbies for what is otherwise\nan excellent database.\n\nAt my current post, I came into a shop that had PG as the only real\ndatabase, so I have learned to love it, and de-supported Derby and the other\ntoy back ends we used to use. And to be fair, from a back end perspective,\nPG is better than MySQL in terms of manageability .... I am running 250GB\ndatabases on small systems with no issues.\n\nAt my previous shop, we built a couple of database-backed apps from scratch,\nand despite a desire to use PG due to there being more certainty over its\nfuture licensing (it was just after Sun had bought MySQL AG), I ended up\nswitching from PG to MySQL 5.0.47 (last open source version) because of the\ndifficulties I was having with the PG driver.\n\nI consider part of the acme of great FOSS is to make it easy to use for\nnewbies and thus attract a larger user base, but that is just my $0.02\nworth.\n\nCheers\nDave\n\nOn Tue, Apr 20, 2010 at 9:28 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> Dave Crooke <[email protected]> wrote:\n>\n> > I'd consider the fact that ResultSet.close() does not release the\n> > implicit cursor to be something of a bug\n>\n> What's your reasoning on that? The definitions of cursors in the\n> spec, if memory serves, allow a cursor to be closed and re-opened;\n> why would this be treated differently?\n>\n> -Kevin\n>\n\nAFAICT from the Java end, ResultSet.close() is supposed to be final. There is no way I know of in JDBC to get a handle back to the cursor on the server side once you have made this call - in fact, its sole purpose is to inform the server in a timely fashion that this cursor is no longer required, since the ResultSet itself is a Java object and thus subject to garbage collection and finalizer hooks.\nAt a pragmatic level, the PGSQL JDBC driver has a lot of odd behaviours which, while they may or may not be in strict compliance with the letter of the standard, are very different from any other mainstream database that I have accessed from Java .... what I'd consider as normative behaviour, using regular JDBC calls without the need to jump through all these weird hoops, is exhibited by all of the following: Oracle, SQL Server, DB2, MySQL, Apache Derby and JET (MS-Access file-based back end, the .mdb format)\nIn practce, this places PGSQL as the odd one out, which is a bit of a turn-off to expereinced Java people who are PG newbies for what is otherwise an excellent database. At my current post, I came into a shop that had PG as the only real database, so I have learned to love it, and de-supported Derby and the other toy back ends we used to use. And to be fair, from a back end perspective, PG is better than MySQL in terms of manageability .... 
I am running 250GB databases on small systems with no issues.\nAt my previous shop, we built a couple of database-backed apps from scratch, and despite a desire to use PG due to there being more certainty over its future licensing (it was just after Sun had bought MySQL AG), I ended up switching from PG to MySQL 5.0.47 (last open source version) because of the difficulties I was having with the PG driver.\nI consider part of the acme of great FOSS is to make it easy to use for newbies and thus attract a larger user base, but that is just my $0.02 worth.CheersDaveOn Tue, Apr 20, 2010 at 9:28 AM, Kevin Grittner <[email protected]> wrote:\nDave Crooke <[email protected]> wrote:\n\n> I'd consider the fact that ResultSet.close() does not release the\n> implicit cursor to be something of a bug\n\nWhat's your reasoning on that?  The definitions of cursors in the\nspec, if memory serves, allow a cursor to be closed and re-opened;\nwhy would this be treated differently?\n\n-Kevin", "msg_date": "Tue, 20 Apr 2010 10:47:25 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": "\n\nOn Mon, 19 Apr 2010, Dave Crooke wrote:\n\n> Statement.close() appears to get the job done (in my envrionment, PG's\n> driver never sees a Connection.close() because of DBCP).\n> \n> I'd consider the fact that ResultSet.close() does not release the implicit\n> cursor to be something of a bug, but it may well have been fixed already.\n\nPG doesn't release the locks acquired by the query until transaction end. \nSo closing a cursor will release some backend memory, but it won't release \nthe locks. The way the driver implements ResultSet.close() is to put \nthe close message into a queue so that the next time a message is sent to \nthe backend we'll also send the cursor close message. This avoids an \nextra network roundtrip for the close action.\n\nIn any case Statement.close isn't helping you here either. It's really \nConnection.commit/rollback that's releasing the locks.\n\nKris Jurka\n", "msg_date": "Tue, 20 Apr 2010 12:07:48 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC ....\n\tRe: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq\n\tguery result set" }, { "msg_contents": "Dave Crooke <[email protected]> wrote:\n \n> AFAICT from the Java end, ResultSet.close() is supposed to be\n> final.\n \nFor that ResultSet. That doesn't mean a ResultSet defines a cursor.\nSuch methods as setCursorName, setFetchSize, and setFetchDirection\nare associated with a Statement. Think of the ResultSet as the\nresult of a cursor *scan* generated by opening the cursor defined by\nthe Statement.\n \nhttp://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#close%28%29\n \nNotice that the ResultSet is automatically closed if the Statement\nthat generated it is re-executed. 
That is very much consistent with\nStatement as the equivalent of a cursor, and not very consistent\nwith a ResultSet as the equivalent of a cursor.\n \n> There is no way I know of in JDBC to get a handle back to the\n> cursor on the server side once you have made this call - in fact,\n> its sole purpose is to inform the server in a timely fashion that\n> this cursor is no longer required, since the ResultSet itself is a\n> Java object and thus subject to garbage collection and finalizer\n> hooks.\n \nAgain, you're talking about the *results* from *opening* the cursor.\n \n> At a pragmatic level, the PGSQL JDBC driver has a lot of odd\n> behaviours which, while they may or may not be in strict\n> compliance with the letter of the standard, are very different\n> from any other mainstream database that I have accessed from Java\n> .... what I'd consider as normative behaviour, using regular JDBC\n> calls without the need to jump through all these weird hoops, is\n> exhibited by all of the following: Oracle, SQL Server, DB2, MySQL,\n> Apache Derby and JET (MS-Access file-based back end, the .mdb\n> format)\n \nAre you talking about treating the Statement object as representing\na cursor and the ResultSet representing the results from opening\nthe cursor, or are you thinking of something else here?\n \n> In practce, this places PGSQL as the odd one out, which is a bit\n> of a turn-off to expereinced Java people who are PG newbies for\n> what is otherwise an excellent database.\n \nHuh. I dropped PostgreSQL into an environment with hundreds of\ndatabases, and the applications pretty much \"just worked\" for us.\nOf course, we were careful to write to the SQL standard and the JDBC\nAPI, not to some other product's implementation of them. \n \nThere were a few bugs we managed to hit which hadn't previously been\nnoticed, but those were promptly fixed. As I recall, about the only\nother things which caused me problems were:\n \n(1) Needing to setFetchSize to avoid materializing the entire\nresult set in RAM on the client.\n \n(2) Fixing a race condition in our software which was benign in\nother products, but clearly my own bug.\n \n(3) Working around the fact that COALESCE(NULL, NULL) can't be used\neverywhere NULL can.\n \n> At my previous shop, we built a couple of database-backed apps\n> from scratch, and despite a desire to use PG due to there being\n> more certainty over its future licensing (it was just after Sun\n> had bought MySQL AG), I ended up switching from PG to MySQL 5.0.47\n> (last open source version) because of the difficulties I was\n> having with the PG driver.\n \nJust out of curiosity, did you discuss that on the PostgreSQL lists?\nCan you reference the thread(s)?\n \n> I consider part of the acme of great FOSS is to make it easy to\n> use for newbies and thus attract a larger user base, but that is\n> just my $0.02 worth.\n \nSure, but I would consider it a step away from that to follow\nMySQL's interpretation of cursors rather than the standard's.\nYMMV, of course.\n \n-Kevin\n", "msg_date": "Tue, 20 Apr 2010 11:32:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC\n\t.... 
Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC\n\tdriver with a biq guery result set" }, { "msg_contents": "I don't want to get into a big debate about standards, but I will clarify a\ncouple of things inline below.\n\nMy key point is that the PG JDBC driver resets people's expecations who have\nused JDBC with other databases, and that is going to reflect negatively on\nPostgres if Postgres is in the minority, standards nothwithstanding, and I\nfeel badly about that, because PG rocks!\n\nCheers\nDave\n\nOn Tue, Apr 20, 2010 at 11:32 AM, Kevin Grittner <\[email protected]> wrote:\n\n> Dave Crooke <[email protected]> wrote:\n>\n> > AFAICT from the Java end, ResultSet.close() is supposed to be\n> > final.\n>\n> For that ResultSet. That doesn't mean a ResultSet defines a cursor.\n> Such methods as setCursorName, setFetchSize, and setFetchDirection\n> are associated with a Statement. Think of the ResultSet as the\n> result of a cursor *scan* generated by opening the cursor defined by\n> the Statement.\n>\n> http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#close%28%29\n>\n> Notice that the ResultSet is automatically closed if the Statement\n> that generated it is re-executed. That is very much consistent with\n> Statement as the equivalent of a cursor, and not very consistent\n> with a ResultSet as the equivalent of a cursor.\n>\n\nTrue, but mechanically there is no other choice - the ResultSet is created\nby Statement.executeQuery() and by then it's already in motion .... in the\ncase of Postgres with default settings, the JVM blows out before that call\nreturns.\n\nI am not explicitly creating any cursors, all I'm doing is running a query\nwith a very large ResultSet.\n\n\n Again, you're talking about the *results* from *opening* the cursor.\n>\n> > At a pragmatic level, the PGSQL JDBC driver has a lot of odd\n> > behaviours which, while they may or may not be in strict\n> > compliance with the letter of the standard, are very different\n> > from any other mainstream database that I have accessed from Java\n> > .... what I'd consider as normative behaviour, using regular JDBC\n> > calls without the need to jump through all these weird hoops, is\n> > exhibited by all of the following: Oracle, SQL Server, DB2, MySQL,\n> > Apache Derby and JET (MS-Access file-based back end, the .mdb\n> > format)\n>\n> Are you talking about treating the Statement object as representing\n> a cursor and the ResultSet representing the results from opening\n> the cursor, or are you thinking of something else here?\n>\n\nSpecific examples:\n\na. the fact that Statement.executeQuery(\"select * from huge_table\") works\nout of the box with every one of those databases, but results in\njava.langOutOfMemory with PG without special setup. Again, this is to the\nletter of the standard, it's just not very user friendly.\n\nb. The fact that with enterprise grade commercital databases, you can mix\nreads and writes on the same Connection, whereas with PG Connection.commit()\nkills open cursors.\n\nThe fact that I've been using JDBC for 12 years with half a dozen database\nproducts, in blissful ignorance of these fine distinctions in the standard\nuntil I had to deal with them with PG, is kinda what my point is :-)\n\nI understand the reasons for some of these limitations, but by no means all\nof them.\n\n\n> Huh. 
I dropped PostgreSQL into an environment with hundreds of\n> databases, and the applications pretty much \"just worked\" for us.\n> Of course, we were careful to write to the SQL standard and the JDBC\n> API, not to some other product's implementation of them.\n>\n\nTrue, but not everyone can hire every developer to be a JDBC / SQL language\nlawyer. All of our SQL is either ANSI or created by the Hibernate PGSQL\nadapter, with the exception of a daily \"VACUUM ANALYSE\" which I added ;-)\n\nI do believe that when there are two ways to implement a standard, the \"it\njust works\" way is far preferable to the \"well, I know you probably think\nthis is a bug, because 90% of the client code out there chokes on it, but\nactually we are standards compliant, it's everyone else who is doing it\nwrong\" way.\n\nI used to work at a storage startup that did exactly the latter, using an\nobscure HTTP/1.1 standard feature that absolutely none of the current\nbrowsers or HTTP libraries supports, and so it was a constant source of\nfrustration for customers and tech support alike. I no longer work there ;-)\n\nIt's kinda like making stuff that has to work with Windows - you know\nMicrosoft doesn't follow it's own standards, but you gotta make our code\nwork with theirs, so you play ball with their rules.\n\n\n> (1) Needing to setFetchSize to avoid materializing the entire\n> result set in RAM on the client.\n>\n\nI don't understand the rationale for why PG, unlike every other database,\ndoesn't make this a sensible default, e.g, 10,000 rows ... maybe because the\nlocks stay in place until you call Connection.close() or Connection.commit()\n? ;-)\n\n\n>\n> (2) Fixing a race condition in our software which was benign in\n> other products, but clearly my own bug.\n>\n\nBeen there and done that with code developed on single-threaded DB's (JET /\nDerby) ... not what I'm griping about here though, the base code with no\nextra JDBC setup calls works perfectly against Oracle.\n\n\n> Just out of curiosity, did you discuss that on the PostgreSQL lists?\n> Can you reference the thread(s)?\n>\n\nNo, I was in a hurry, and the \"just works\" model was available with both\nMySQL and Berkeley DB, so I didn't see the point in engaging. I felt the in\nhouse paranoia about the MySQL licensing (our CFO) was not justified, and it\nwas the devil I knew, I was taking a look at PG which was then foreign to me\nas a \"genius of the and\" alternative.\n\n\n>\n> Sure, but I would consider it a step away from that to follow\n> MySQL's interpretation of cursors rather than the standard's.\n> YMMV, of course.\n>\n\nI wouldn't hold MySQL up to be a particularly good implmentation of\nanything, other than speed (MyISAM) and usability (the CLI) .... I find\nOracle's JDBC implmentation to be both user friendly and (largely) standards\ncompliant.\n\nYMMV too :-)\n\nI hope this can be taken in the amicable spirit of gentlemanly debate in\nwhich it is offered, and in the context that we all want to see PG grow and\ncontinue to succeed.\n\nCheers\nDave\n\nI don't want to get into a big debate about standards, but I will clarify a couple of things inline below. 
My key point is that the PG JDBC driver resets people's expecations who have used JDBC with other databases, and that is going to reflect negatively on Postgres if Postgres is in the minority, standards nothwithstanding, and I feel badly about that, because PG rocks!\nCheersDaveOn Tue, Apr 20, 2010 at 11:32 AM, Kevin Grittner <[email protected]> wrote:\nDave Crooke <[email protected]> wrote:\n\n> AFAICT from the Java end, ResultSet.close() is supposed to be\n> final.\n\nFor that ResultSet.  That doesn't mean a ResultSet defines a cursor.\nSuch methods as setCursorName, setFetchSize, and setFetchDirection\nare associated with a Statement.  Think of the ResultSet as the\nresult of a cursor *scan* generated by opening the cursor defined by\nthe Statement.\n\nhttp://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#close%28%29\n\nNotice that the ResultSet is automatically closed if the Statement\nthat generated it is re-executed.  That is very much consistent with\nStatement as the equivalent of a cursor, and not very consistent\nwith a ResultSet as the equivalent of a cursor.True, but mechanically there is no other choice - the ResultSet is created by Statement.executeQuery() and by then it's already in motion .... in the case of Postgres with default settings, the JVM blows out before that call returns.\nI am not explicitly creating any cursors, all I'm doing is running a query with a very large ResultSet. \n\nAgain, you're talking about the *results* from *opening* the cursor.\n\n> At a pragmatic level, the PGSQL JDBC driver has a lot of odd\n> behaviours which, while they may or may not be in strict\n> compliance with the letter of the standard, are very different\n> from any other mainstream database that I have accessed from Java\n> .... what I'd consider as normative behaviour, using regular JDBC\n> calls without the need to jump through all these weird hoops, is\n> exhibited by all of the following: Oracle, SQL Server, DB2, MySQL,\n> Apache Derby and JET (MS-Access file-based back end, the .mdb\n> format)\n\nAre you talking about treating the Statement object as representing\na cursor and the ResultSet representing the results from opening\nthe cursor, or are you thinking of something else here?Specific examples: a. the fact that Statement.executeQuery(\"select * from huge_table\") works out of the box with every one of those databases, but results in java.langOutOfMemory with PG without special setup. Again, this is to the letter of the standard, it's just not very user friendly.\nb. The fact that with enterprise grade commercital databases, you can mix reads and writes on the same Connection, whereas with PG Connection.commit() kills open cursors.The fact that I've been using JDBC for 12 years with half a dozen database products, in blissful ignorance of these fine distinctions in the standard until I had to deal with them with PG, is kinda what my point is :-)\nI understand the reasons for some of these limitations, but by no means all of them. \nHuh.  I dropped PostgreSQL into an environment with hundreds of\ndatabases, and the applications pretty much \"just worked\" for us.\nOf course, we were careful to write to the SQL standard and the JDBC\nAPI, not to some other product's implementation of them.True, but not everyone can hire every developer to be a JDBC / SQL language lawyer. 
All of our SQL is either ANSI or created by the Hibernate PGSQL adapter, with the exception of a daily \"VACUUM ANALYSE\" which I added ;-)\nI do believe that when there are two ways to implement a standard, the \"it just works\" way is far preferable to the \"well, I know you probably think this is a bug, because 90% of the client code out there chokes on it, but actually we are standards compliant, it's everyone else who is doing it wrong\" way. \nI used to work at a storage startup that did exactly the latter, using an obscure HTTP/1.1 standard feature that absolutely none of the current browsers or HTTP libraries supports, and so it was a constant source of frustration for customers and tech support alike. I no longer work there ;-)\nIt's kinda like making stuff that has to work with Windows - you know Microsoft doesn't follow it's own standards, but you gotta make our code work with theirs, so you play ball with their rules.\n\n(1)  Needing to setFetchSize to avoid materializing the entire\nresult set in RAM on the client.I don't understand the rationale for why PG, unlike every other database, doesn't make this a sensible default, e.g, 10,000 rows ... maybe because the locks stay in place until you call Connection.close() or Connection.commit() ? ;-)\n \n\n(2)  Fixing a race condition in our software which was benign in\nother products, but clearly my own bug.Been there and done that with code developed on single-threaded DB's (JET / Derby) ... not what I'm griping about here though, the base code with no extra JDBC setup calls works perfectly against Oracle.\n Just out of curiosity, did you discuss that on the PostgreSQL lists?\nCan you reference the thread(s)?No, I was in a hurry, and the \"just works\" model was available with both MySQL and Berkeley DB, so I didn't see the point in engaging. I felt the in house paranoia about the MySQL licensing (our CFO) was not justified, and it was the devil I knew, I was taking a look at PG which was then foreign to me as a \"genius of the and\" alternative.\n \nSure, but I would consider it a step away from that to follow\nMySQL's interpretation of cursors rather than the standard's.\nYMMV, of course.I wouldn't hold MySQL up to be a particularly good implmentation of anything, other than speed (MyISAM) and usability (the CLI) .... I find Oracle's JDBC implmentation to be both user friendly and (largely) standards compliant. \nYMMV too :-) I hope this can be taken in the amicable spirit of gentlemanly debate in which it is offered, and in the context that we all want to see PG grow and continue to succeed.Cheers\nDave", "msg_date": "Tue, 20 Apr 2010 14:29:28 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": "On Tue, Apr 20, 2010 at 3:29 PM, Dave Crooke <[email protected]> wrote:\n>\n> I wouldn't hold MySQL up to be a particularly good implmentation of\n> anything, other than speed (MyISAM) and usability (the CLI) .... I find\n> Oracle's JDBC implmentation to be both user friendly and (largely) standards\n> compliant.\n>\n\nDave,\n\nI've been following along at home and agree with you right up until you\nmention the MySQL CLI being usable. I work with the thing every day. The\nplain, vanilla install on my Ubuntu laptop lacks proper readline support.\n Hitting ctrl-c will sometimes kill the running query and sometimes kill the\nCLI. 
Its far from a paragon of usability. That last time I used psql it\ndidn't have any of those issues.\n\nFull disclosure: mysql does have proper readline support on a Centos\nmachine I have access to. ctrl-c still kills the shell.\n\nYour other points are good though.\n\n--Nik\n\nOn Tue, Apr 20, 2010 at 3:29 PM, Dave Crooke <[email protected]> wrote:\nI wouldn't hold MySQL up to be a particularly good implmentation of anything, other than speed (MyISAM) and usability (the CLI) .... I find Oracle's JDBC implmentation to be both user friendly and (largely) standards compliant. \nDave,I've been following along at home and agree with you right up until you mention the MySQL CLI being usable.  I work with the thing every day.  The plain, vanilla install on my Ubuntu laptop lacks proper readline support.  Hitting ctrl-c will sometimes kill the running query and sometimes kill the CLI.  Its far from a paragon of usability.  That last time I used psql it didn't have any of those issues.\nFull disclosure:  mysql does have proper readline support on a Centos machine I have access to.  ctrl-c still kills the shell.Your other points are good though.\n--Nik", "msg_date": "Tue, 20 Apr 2010 15:57:18 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": "Dave Crooke <[email protected]> wrote:\n \n> a. the fact that Statement.executeQuery(\"select * from\n> huge_table\") works out of the box with every one of those\n> databases, but results in java.langOutOfMemory with PG without\n> special setup. Again, this is to the letter of the standard, it's\n> just not very user friendly.\n \nThe way I read it, it's *allowed* by the standard, but not\n*required* by the standard. I agree it's not very friendly\nbehavior. I made some noise about it early in my use of PostgreSQL,\nbut let it go once I had it covered for my own shop. I agree it's a\nbarrier to conversion -- it often comes up here with new PostgreSQL\nusers, and who knows how many people give up on PostgreSQL without\ncoming here when they hit it?\n \nIt's not just an issue in JDBC, either; it's generally the default\nin PostgreSQL interfaces. That seems to be by design, with the\nrationale that it prevents returning some part of a result set and\nthen throwing an error. Anyone coming from another database\nprobably already handles that, so they won't tend to be impressed by\nthat argument, but it would be hard to change that as a default\nbehavior in PostgreSQL without breaking a lot of existing code for\nPostgreSQL users at this point. :-(\n \n> b. The fact that with enterprise grade commercital databases, you\n> can mix reads and writes on the same Connection, whereas with PG\n> Connection.commit() kills open cursors.\n \nWell, I know that with Sybase ASE (and therefore it's probably also\ntrue of Microsoft SQL Server, since last I saw they both use TDS\nprotocol), unless you're using a cursor, if you execute another\nstatement through JDBC on the same connection which has a pending\nResultSet, it reads the rest of the ResultSet into RAM (the behavior\nyou don't like), before executing the new statement. So at least\nfor those databases you can't really claim *both* a and b as points.\n \nOops -- I just noticed you said \"enterprise grade\". 
;-)\n \n> The fact that I've been using JDBC for 12 years with half a dozen\n> database products, in blissful ignorance of these fine\n> distinctions in the standard until I had to deal with them with\n> PG, is kinda what my point is :-)\n \nOK, point taken.\n \n> I understand the reasons for some of these limitations, but by no\n> means all of them.\n \nWell, one of the cool things about open source is that users have\nthe opportunity to \"scratch their own itches\". The JDBC\nimplementation is 100% Java, so if changing something there would be\nhelpful to you, you can do so. If you're careful about it, you may\nbe able to contribute it back to the community to save others the\npain. If you want to take a shot at some of this, I'd be willing to\nhelp a bit. If nothing else, the attempt may give you better\nperspective on the reasons for some of the limitations. ;-)\n \n>> (1) Needing to setFetchSize to avoid materializing the entire\n>> result set in RAM on the client.\n> \n> I don't understand the rationale for why PG, unlike every other\n> database, doesn't make this a sensible default, e.g, 10,000 rows\n \nI took a bit of a look at this, years ago. My recollection is that,\nbased on the nature of the data stream, you would need to do\nsomething similar to databases using TDS -- you could read as you go\nas long as no other statement is executed on the connection; but\nyou'd need to add code to recognize the exceptional circumstance and\nsuck the rest of the result set down the wire to RAM should it be\nnecessary to \"clear the way\" for another statement.\n \nIf you give it a shot, you might want to see whether it's possible\nto avoid an irritating implementation artifact of the TDS JDBC\ndrivers: if you close a ResultSet or a Statement with an open\nResultSet without first invoking Statement.cancel, they would suck\nback the rest of the results (and ignore them) -- making for a big\ndelay sometimes on a close invocation. As I recall, the\njustification was that for executions involving multiple result\nsets, they needed to do this to get at the next one cleanly;\nalthough some forms of execute don't support multiple results, and\nit doesn't do you a lot of good on Statement close, so you'd think\nthese could have been optimized.\n \n> I find Oracle's JDBC implmentation to be both user friendly and\n> (largely) standards compliant.\n \nWhere there are issues with usability or standards compliance with\nPostgreSQL, especially for something which works well for you in\nother products, I hope you raise them on these lists. Perhaps there\nare already ways to deal with them, perhaps we need to better\ndocument something, and perhaps some change can be made to\naccommodate the issue. Even if no action is taken at the time it is\nhelpful to the project, because the number of people raising an\nissue is often taken into consideration when deciding whether to\nchange something. Also, someone running into the issue later may\nfind the discussion on a search and gain helpful information.\n \n> I hope this can be taken in the amicable spirit of gentlemanly\n> debate in which it is offered, and in the context that we all want\n> to see PG grow and continue to succeed.\n \nSure -- and I hope my posts haven't been taken in any other light.\n \n-Kevin\n", "msg_date": "Tue, 20 Apr 2010 15:22:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC\n\t.... 
Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC\n\tdriver with a biq guery result set" }, { "msg_contents": "I digest this down to \"this is the best that can be achieved on a connection\nthat's single threaded\"\n\nI think the big difference with Oracle is this:\n\ni. in Oracle, a SELECT does not have to be a transaction, in the sense that\nPG's SELECT does ... but in Oracle, a SELECT can fail mid-stream if you wait\ntoo long and the UNDO tablespace wraps (ORA-600), i.e. Oracle does not lock\non SELECT. Oracle is optimized for lots of small transactions that typically\ncommit, PG supports arbitrary transaction mixes of any size, but is less\nefficient at the workload for which Oracle is specialized.\n\nii. SELECT always creates an implicit cursor in Oracle, but access to these\ncursors can be interleaved arbitrarily on one connection both with each\nother and transactions (writes)\n\nAfter consiering the context you offered, I'd recommend the following two\nminor changes to the PG driver ....\n\na. Make setFetchSize(10000) the default\n\nb. If someone does call rs.close() before the end of the ResultSet, and has\nnot created an explicit cursor at the JDBC level, flag the query / lock /\nvirtual transaction in some way in the JDBC driver that tells it that it can\njust dump the cursor on a subsequent stmt.close(), conn.commit() or\nconn.close() call without sucking down the rest of the data.\n\nAFAICT, this will make the behaviour more like other DB's without sacrifcing\nanything, but I don't know what default behaviour expectations might be out\nthere in PG land.\n\nCheers\nDave\n\nOn Tue, Apr 20, 2010 at 3:22 PM, Kevin Grittner <[email protected]\n> wrote:\n(Lots of good explanatory stuff)\n\nI digest this down to \"this is the best that can be achieved on a connection that's single threaded\"I think the big difference with Oracle is this:i. in Oracle, a SELECT does not have to be a transaction, in the sense that PG's SELECT does ... but in Oracle, a SELECT can fail mid-stream if you wait too long and the UNDO tablespace wraps (ORA-600), i.e. Oracle does not lock on SELECT. Oracle is optimized for lots of small transactions that typically commit, PG supports arbitrary transaction mixes of any size, but is less efficient at the workload for which Oracle is specialized.\nii. SELECT always creates an implicit cursor in Oracle, but access to these cursors can be interleaved arbitrarily on one connection both with each other and transactions (writes)After consiering the context you offered, I'd recommend the following two minor changes to the PG driver ....\na. Make setFetchSize(10000) the defaultb. If someone does call rs.close() before the end of the ResultSet, and has not created an explicit cursor at the JDBC level, flag the query / lock / virtual transaction in some way in the JDBC driver that tells it that it can just dump the cursor on a subsequent stmt.close(), conn.commit() or conn.close() call without sucking down the rest of the data.\nAFAICT, this will make the behaviour more like other DB's without sacrifcing anything, but I don't know what default behaviour expectations might be out there in PG land.CheersDave\nOn Tue, Apr 20, 2010 at 3:22 PM, Kevin Grittner <[email protected]> wrote:(Lots of good explanatory stuff)", "msg_date": "Tue, 20 Apr 2010 15:40:14 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... 
Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": "\n\nOn Tue, 20 Apr 2010, Dave Crooke wrote:\n\n> a. Make setFetchSize(10000) the default\n\nThe reason this is not done is that the mechanism used for fetching a \npiece of the results at a time can change the query plan used if using a \nPreparedStatement. There are three ways to plan a PreparedStatement:\n\na) Using the exact parameter values by substituting them directly into the \nquery. This isn't really \"planned\" as you can't re-use it at all. This \nis only available using the V2 protocol.\n\nb) Using the parameter values for statistics, but not making any stronger\nguarantees about them. So the parameters will be used for evaluating the \nselectivity, but not to perform other optimizations like \ncontraint_exclusion or transforming a LIKE operation to a range query. \nThis is the default plan type the JDBC driver uses.\n\nc) Planning the query with no regard for the parameters passed to it. \nThis is the plan type the JDBC driver uses when it sees the same \nPreparedStatement being re-used multiple times or when it is respecting \nsetFetchSize and allowing for partial results.\n\nWe must use (c) for partial results instead of (b) because of some \nlimitations of the server. Currently you cannot have two statements of \ntype (b) open on the same connection. So since the driver can't know if \nthe user will issue another query before fetching the remainder of the \nfirst query's results, it must setup the first query to be of type (c) so \nthat multiple statements can exist simultaneously.\n\nSwitching the default plan type to (c) will cause a significant number of \ncomplaints as performance on some queries will go into the tank. Perhaps \nwe could have a default fetchSize for plain Statements as it won't affect \nthe plan. I could also see making this a URL parameter though so it could \nbe set as the default with only a configuration, not a code change.\n\n> b. If someone does call rs.close() before the end of the ResultSet, and has\n> not created an explicit cursor at the JDBC level, flag the query / lock /\n> virtual transaction in some way in the JDBC driver that tells it that it can\n> just dump the cursor on a subsequent stmt.close(), conn.commit() or\n> conn.close() call without sucking down the rest of the data.\n\nThis is already true. The JDBC driver only asks the server for more of \nthe ResultSet when a next() call requires it. So the server isn't \nconstantly spewing out rows that the driver must deal with, the driver \nonly gets the rows it asks for. Once the ResultSet is closed, it won't \nask for any more.\n\nKris Jurka\n\n", "msg_date": "Tue, 20 Apr 2010 17:05:54 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC ....\n\tRe: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq\n\tguery result set" }, { "msg_contents": "On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n> The reason this is not done is that the mechanism used for fetching a piece\n> of the results at a time can change the query plan used if using a\n> PreparedStatement.  There are three ways to plan a PreparedStatement:\n>\n> a) Using the exact parameter values by substituting them directly into the\n> query.  This isn't really \"planned\" as you can't re-use it at all.  
This is\n> only available using the V2 protocol.\n>\n> b) Using the parameter values for statistics, but not making any stronger\n> guarantees about them.  So the parameters will be used for evaluating the\n> selectivity, but not to perform other optimizations like contraint_exclusion\n> or transforming a LIKE operation to a range query. This is the default plan\n> type the JDBC driver uses.\n\nHmm. I didn't think this was possible. How are you doing this?\n\n> c) Planning the query with no regard for the parameters passed to it. This\n> is the plan type the JDBC driver uses when it sees the same\n> PreparedStatement being re-used multiple times or when it is respecting\n> setFetchSize and allowing for partial results.\n>\n> We must use (c) for partial results instead of (b) because of some\n> limitations of the server.  Currently you cannot have two statements of type\n> (b) open on the same connection.  So since the driver can't know if the user\n> will issue another query before fetching the remainder of the first query's\n> results, it must setup the first query to be of type (c) so that multiple\n> statements can exist simultaneously.\n>\n> Switching the default plan type to (c) will cause a significant number of\n> complaints as performance on some queries will go into the tank.  Perhaps we\n> could have a default fetchSize for plain Statements as it won't affect the\n> plan.  I could also see making this a URL parameter though so it could be\n> set as the default with only a configuration, not a code change.\n\n...Robert\n", "msg_date": "Wed, 21 Apr 2010 10:41:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": "On Wed, Apr 21, 2010 at 10:41 AM, Robert Haas <[email protected]> wrote:\n\n> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n> > The reason this is not done is that the mechanism used for fetching a\n> piece\n> > of the results at a time can change the query plan used if using a\n> > PreparedStatement. There are three ways to plan a PreparedStatement:\n> >\n> > a) Using the exact parameter values by substituting them directly into\n> the\n> > query. This isn't really \"planned\" as you can't re-use it at all. This\n> is\n> > only available using the V2 protocol.\n> >\n> > b) Using the parameter values for statistics, but not making any stronger\n> > guarantees about them. So the parameters will be used for evaluating the\n> > selectivity, but not to perform other optimizations like\n> contraint_exclusion\n> > or transforming a LIKE operation to a range query. This is the default\n> plan\n> > type the JDBC driver uses.\n>\n> Hmm. I didn't think this was possible. How are you doing this?\n\n\nMore to the point is there some option that can shift you into method a?\n I'm thinking of warehousing type applications where you want to re-plan a\ngood portion of your queries.\n\nOn Wed, Apr 21, 2010 at 10:41 AM, Robert Haas <[email protected]> wrote:\nOn Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n> The reason this is not done is that the mechanism used for fetching a piece\n> of the results at a time can change the query plan used if using a\n> PreparedStatement.  There are three ways to plan a PreparedStatement:\n>\n> a) Using the exact parameter values by substituting them directly into the\n> query.  
This isn't really \"planned\" as you can't re-use it at all.  This is\n> only available using the V2 protocol.\n>\n> b) Using the parameter values for statistics, but not making any stronger\n> guarantees about them.  So the parameters will be used for evaluating the\n> selectivity, but not to perform other optimizations like contraint_exclusion\n> or transforming a LIKE operation to a range query. This is the default plan\n> type the JDBC driver uses.\n\nHmm.  I didn't think this was possible.  How are you doing this?More to the point is there some option that can shift you into method a?  I'm thinking of warehousing type applications where you want to re-plan a good portion of your queries.", "msg_date": "Wed, 21 Apr 2010 11:07:11 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": ">> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n>>> ... There are three ways to plan a PreparedStatement:\n\nFWIW, I think there is some consensus to experiment (in the 9.1 cycle)\nwith making the server automatically try replanning of parameterized\nqueries with the actual parameter values substituted. It'll keep doing\nso if it finds that that produces a significantly better plan than the\ngeneric parameterized plan; which is what you'd expect if there's a\nchance to optimize a LIKE search, eliminate partitions, etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Apr 2010 11:30:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a\n\tbiq guery result set" }, { "msg_contents": "On Wed, Apr 21, 2010 at 11:30 AM, Tom Lane <[email protected]> wrote:\n\n> >> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n> >>> ... There are three ways to plan a PreparedStatement:\n>\n> FWIW, I think there is some consensus to experiment (in the 9.1 cycle)\n> with making the server automatically try replanning of parameterized\n> queries with the actual parameter values substituted. It'll keep doing\n> so if it finds that that produces a significantly better plan than the\n> generic parameterized plan; which is what you'd expect if there's a\n> chance to optimize a LIKE search, eliminate partitions, etc.\n>\n> regards, tom lane\n>\n\nThat'd be wonderful.\n\nOn Wed, Apr 21, 2010 at 11:30 AM, Tom Lane <[email protected]> wrote:\n>> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n>>> ... There are three ways to plan a PreparedStatement:\n\nFWIW, I think there is some consensus to experiment (in the 9.1 cycle)\nwith making the server automatically try replanning of parameterized\nqueries with the actual parameter values substituted.  It'll keep doing\nso if it finds that that produces a significantly better plan than the\ngeneric parameterized plan; which is what you'd expect if there's a\nchance to optimize a LIKE search, eliminate partitions, etc.\n\n                        regards, tom lane\nThat'd be wonderful.", "msg_date": "Wed, 21 Apr 2010 11:43:52 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC .... 
Re:\n\t[PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery\n\tresult set" }, { "msg_contents": "\n\nOn Wed, 21 Apr 2010, Robert Haas wrote:\n\n> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka <[email protected]> wrote:\n>>\n>> b) Using the parameter values for statistics, but not making any stronger\n>> guarantees about them.  So the parameters will be used for evaluating the\n>> selectivity, but not to perform other optimizations like contraint_exclusion\n>> or transforming a LIKE operation to a range query. This is the default plan\n>> type the JDBC driver uses.\n>\n> Hmm. I didn't think this was possible. How are you doing this?\n\nThis is only possible at the protocol level, it's not available using SQL \ncommands only. You do this by creating an unnamed instead of a named \nstatement:\n\nhttp://www.postgresql.org/docs/8.4/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY\n\n \tQuery planning for named prepared-statement objects occurs when\n \tthe Parse message is processed. If a query will be repeatedly\n \texecuted with different parameters, it might be beneficial to send\n \ta single Parse message containing a parameterized query, followed\n \tby multiple Bind and Execute messages. This will avoid replanning\n \tthe query on each execution.\n\n \tThe unnamed prepared statement is likewise planned during Parse\n \tprocessing if the Parse message defines no parameters. But if\n \tthere are parameters, query planning occurs during Bind processing\n \tinstead. This allows the planner to make use of the actual values\n \tof the parameters provided in the Bind message when planning the\n \tquery.\n\n\nKris Jurka", "msg_date": "Wed, 21 Apr 2010 13:58:51 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC ....\n\tRe: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq\n\tguery result set" }, { "msg_contents": "\n\nOn Wed, 21 Apr 2010, Nikolas Everett wrote:\n\n> More to the point is there some option that can shift you into method a? \n>  I'm thinking of warehousing type applications where you want to re-plan \n> a good portion of your queries.\n>\n\nThis can be done by connecting to the database using the V2 protocol (use \nURL option protocolVersion=2). This does remove some functionality of \nthe driver that is only available for V3 protocol, but will work just \nfine for query execution.\n\nKris Jurka\n", "msg_date": "Wed, 21 Apr 2010 14:00:53 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED ... Re: Getting rid of a cursor from JDBC ....\n\tRe: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq\n\tguery result set" } ]
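A small sketch of the driver-level knobs touched on at the end of this thread, assuming the 8.x-era PostgreSQL JDBC driver. The protocolVersion=2 URL option is the one named above for getting plans built from the literal parameter values; prepareThreshold and the org.postgresql.PGStatement cast are the driver's extension points for controlling when a PreparedStatement switches to a named, parameter-agnostic server-side plan. The URL, credentials, query and threshold value are illustrative placeholders, not recommendations from the thread, and a pooled connection (e.g. DBCP) may hand back wrapper objects that need to be unwrapped before the cast will work.

import java.sql.*;
import org.postgresql.PGStatement;

public class PlannerKnobs {
    public static void main(String[] args) throws SQLException {
        // To always plan with the literal parameter values (plan type "a" above),
        // connect with the V2 protocol instead:
        //   jdbc:postgresql://localhost/test?protocolVersion=2
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test?prepareThreshold=5", "user", "secret");

        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM t WHERE name LIKE ?");   // LIKE is one of the cases discussed above
        // The same setting is available per statement via the driver's extension interface.
        ((PGStatement) ps).setPrepareThreshold(5);

        ps.setString(1, "abc%");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            // consume rows
        }
        rs.close();
        ps.close();
        conn.close();
    }
}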
[ { "msg_contents": "Hi,\n \nI am using dbt2 on Linux 64 (CentOS release 5.3 (Final)) . I have compiled latest postgresql-8.4.3 code on the machine and run dbt2 against it. I am little confused about the results. I ran dbt2 with the following configuration i.e.\n \nDBT2 Options :\n    WAREHOUSES=75\n    DB_CONNECTIONS=20\n    REGRESS_DURATION=1 #HOURS\n    REGRESS_DURATION_SEC=$((60*60*$REGRESS_DURATION))\n \nDBT2 Command :\n        ./dbt2-pgsql-create-db\n        ./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES\n        ./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d $REGRESS_DURATION_SEC -w $WAREHOUSES -o $OUTPUT_DIR\n        ./dbt2-pgsql-stop-db\n \nI am not able to understand the sar related graphs. Iostat,mpstat and vmstat results are similar but sar results are strange. I tried to explore the dbt2 source code to find out the how graphs are drawn and why sar results differ.DBT2.pm : 189 reads sar.out and parse it and consider 1 minute elapsed time between each record i.e.\n    ActivePerl-5.10.1.1007-i686-linux-glibc-2.3.2-291969/inst/lib/Test/Parser/Sar.pm : 266\n        elapsed_time is a counter, with every record it increment to 1 (++$elapsed_time)\n \n    Sar.out shows the following results i.e.\n \n    08:54:47 PM   cswch/s\n    ..\n    ..\n    09:21:47 PM   1809.46\n    09:22:47 PM   2251.26\n    09:23:47 PM   2151.27\n    09:24:47 PM   2217.33\n    09:27:01 PM   2189.83\n    09:29:02 PM   2155.13\n    09:30:02 PM   2048.04\n    09:32:19 PM   2033.16\n    09:34:20 PM   2032.47\n    09:36:20 PM   2006.02\n    09:37:20 PM   1966.03\n    09:39:35 PM   1974.77\n    09:41:37 PM   1973.88\n    09:42:37 PM   1960.65\n    09:44:56 PM   1993.15\n    09:45:56 PM   1989.46\n    09:47:57 PM   2430.77\n    09:48:57 PM   2416.64\n    09:51:08 PM   2330.02\n    09:53:19 PM   1738.46\n    09:54:19 PM   2182.27\n    09:55:19 PM   2221.31\n    09:56:19 PM   2131.81\n    09:57:19 PM   2183.47\n    09:59:31 PM   2156.70\n    10:01:32 PM   2114.38\n    10:02:32 PM   2030.05\n    10:04:51 PM   2059.56\n    10:05:51 PM   1995.06\n    10:08:09 PM   1355.43\n    10:09:09 PM    218.73\n    10:10:09 PM    175.13\n    10:11:09 PM    168.30\n    10:12:09 PM    168.58\n    ..\n    ..\nIt shows that sar results for each record is not after every 1 minute duration, it varies.  Is it expected or there are some bugs in CentOS default sar package (sysstat-7.0.2-3.el5). I tried latest package sysstat-9.0.6.1 from but it behaving the same. Systat utilities depends on procfs, is there something wrong with the system ?. Thanks.\n \nBest Regards,\nAsif Naeem\n", "msg_date": "Tue, 20 Apr 2010 18:38:27 +0600", "msg_from": "MUHAMMAD ASIF <[email protected]>", "msg_from_op": true, "msg_subject": "Dbt2 with postgres issues on CentOS-5.3" }, { "msg_contents": "2010/4/20 MUHAMMAD ASIF <[email protected]>:\n> Hi,\n>\n> I am using dbt2 on Linux 64 (CentOS release 5.3 (Final)) . I have compiled\n> latest postgresql-8.4.3 code on the machine and run dbt2 against it. I am\n> little confused about the results. I ran dbt2 with the following\n> configuration i.e.\n>\n> DBT2 Options :\n>     WAREHOUSES=75\n>     DB_CONNECTIONS=20\n>     REGRESS_DURATION=1 #HOURS\n>     REGRESS_DURATION_SEC=$((60*60*$REGRESS_DURATION))\n>\n> DBT2 Command :\n>         ./dbt2-pgsql-create-db\n>         ./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES\n>         ./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d\n> $REGRESS_DURATION_SEC -w $WAREHOUSES -o $OUTPUT_DIR\n>         ./dbt2-pgsql-stop-db\n>\n> I am not able to understand the sar related graphs. Iostat,mpstat and vmstat\n> results are similar but\n> sar results are strange. I tried to explore the dbt2 source code to find\n> out the how graphs are drawn and why sar results differ.DBT2.pm : 189 reads\n> sar.out and parse it and consider 1 minute elapsed time between each record\n> i.e.\n\nThat is certainly a weakness in the logic of the perl modules in\nplotting the charts accurately. 
I wouldn't be surprised if the other\nstat tools suffer the same problem.\n\nRegards,\nMark\n", "msg_date": "Wed, 21 Apr 2010 18:10:35 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Dbt2_with_postgres_issues_on_CentOS=2D5=2E?=\n\t=?UTF-8?Q?3=E2=80=8F?=" }, { "msg_contents": "I am facing sar related issues on Redhat Enterprise Linux64 5.4 too (60G Ram, No Swap space, Xeon Processor).\n\nsar -o /var/dbt2_data/PG/Output/driver/dbt2-sys1/sar_raw.out 60 204\n |___ sadc 60 205 -z /var/dbt2_data/PG/Output/driver/dbt2-sys1/sar_raw.out\n\nIt generates following sar data i.e.\nï؟½.\nï؟½.\n03:52:43 AM 2.31\n03:53:43 AM 2.31\n03:54:43 AM 2.28\n03:55:43 AM 2.31\n03:56:43 AM 1.67\n03:57:43 AM 0.29\n03:58:43 AM 0.29\n04:00:43 AM 0.30\n04:04:00 AM 3.52\n04:07:07 AM 0.30\n04:09:36 AM 0.23\n04:12:04 AM 0.36\n04:14:25 AM 0.23\n04:16:45 AM 0.26\n04:19:10 AM 0.24\n04:21:30 AM 0.38\n04:23:55 AM 0.24\n04:26:25 AM 0.35\n04:28:48 AM 0.24\n04:31:10 AM 0.27\n04:33:40 AM 0.33\n04:36:45 AM 0.41\n04:39:12 AM 0.27\n04:41:41 AM 0.26\n04:44:11 AM 0.33\n04:46:35 AM 0.25\n04:49:06 AM 0.33\n04:51:27 AM 0.27\n04:53:56 AM 0.23\n04:56:19 AM 0.36\n04:58:43 AM 0.24\n05:01:10 AM 0.35\n05:03:43 AM 0.33\n05:06:53 AM 0.29\n05:09:25 AM 0.23\nï؟½.\nï؟½.\n\nTo fix this issue I have modified the sysstat-9.1.2/sadc.c and replaced signal based pause (That is not real time) with \"select\" based pause. That fixed the issue. Thanks.\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n\nsadc.c.patch\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n--- sadc.c.org 2010-06-14 21:44:18.000000000 +0500\n+++ sadc.c 2010-06-14 22:52:51.693211184 +0500\n@@ -33,6 +33,10 @@\n #include <sys/stat.h>\n #include <sys/utsname.h>\n \n+#include <sys/types.h> \n+#include <sys/time.h> \n+#include <time.h> \n+\n #include \"version.h\"\n #include \"sa.h\"\n #include \"rd_stats.h\"\n@@ -792,6 +796,15 @@\n }\n }\n \n+void pause_new( void )\n+{\n+ struct timeval tvsel; \n+ tvsel.tv_sec = interval; \n+ tvsel.tv_usec = 0; \n+\n+ select( 0, NULL, NULL, NULL, &tvsel );\n+}\n+\n /*\n ***************************************************************************\n * Main loop: Read stats from the relevant sources and display them.\n@@ -899,7 +912,7 @@\n }\n \n if (count) {\n- pause();\n+ pause_new();\n }\n \n /* Rotate activity file if necessary */\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n\nBest Regards,\nAsif Naeem\n\n> Date: Wed, 21 Apr 2010 18:10:35 -0700\n> Subject: Re: [PERFORM] Dbt2 with postgres issues on CentOS-5.3ï؟½\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> \n> 2010/4/20 MUHAMMAD ASIF <[email protected]>:\n> > Hi,\n> >\n> > I am using dbt2 on Linux 64 (CentOS release 5.3 (Final)) . I have compiled\n> > latest postgresql-8.4.3 code on the machine and run dbt2 against it. I am\n> > little confused about the results. 
I ran dbt2 with the following\n> > configuration i.e.\n> >\n> > DBT2 Options :\n> >     WAREHOUSES=75\n> >     DB_CONNECTIONS=20\n> >     REGRESS_DURATION=1 #HOURS\n> >     REGRESS_DURATION_SEC=$((60*60*$REGRESS_DURATION))\n> >\n> > DBT2 Command :\n> >         ./dbt2-pgsql-create-db\n> >         ./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES\n> >         ./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d\n> > $REGRESS_DURATION_SEC -w $WAREHOUSES -o $OUTPUT_DIR\n> >         ./dbt2-pgsql-stop-db\n> >\n> > I am not able to understand the sar related graphs. Iostat,mpstat and vmstat\n> > results are similar but\n> > sar results are strange. I tried to explore the dbt2 source code to find\n> > out the how graphs are drawn and why sar results differ.DBT2.pm : 189 reads\n> > sar.out and parse it and consider 1 minute elapsed time between each record\n> > i.e.\n> \n> That is certainly a weakness in the logic of the perl modules in\n> plotting the charts accurately. I wouldn't be surprised if the other\n> stat tools suffer the same problem.\n> \n> Regards,\n> Mark\n", "msg_date": "Tue, 15 Jun 2010 00:28:01 +0600", "msg_from": "MUHAMMAD ASIF <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [PERFORM] Dbt2 with postgres issues on CentOS-5.3" } ]
[ { "msg_contents": "Howdy all,\n\nI've got a huge server running just postgres. It's got 48 cores and 256GB of ram. Redhat 5.4, Postgres 8.3.9.\n64bit OS. No users currently.\n\nI've got a J2EE app that loads data into the DB, it's got logic behind it so it's not a simple bulk load, so\ni don't think we can use copy.\n\nBased on the tuning guides, it set my effective_cache_size to 128GB (1/2 the available memory) on the box.\n\nWhen I ran my load, it took aproximately 15 hours to do load 20 million records. I thought this was odd because\non a much smaller machine I was able to do that same amount of records in 6 hours.\n\nMy initial thought was hardware issues so we got sar, vmstat, etc all running on the box and they didn't give\nany indication that we had resource issues.\n\nSo I decided to just make the 2 PG config files look the same. (the only change was dropping effective_cache_size \nfrom 128GB to 2GB).\n\nNow the large box performs the same as the smaller box. (which is fine).\n\nincidentally, both tests were starting from a blank database.\n\nIs this expected? \n\nThanks!\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 10:39:36 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, 2010-04-20 at 10:39 -0700, David Kerr wrote:\n> Howdy all,\n> \n> I've got a huge server running just postgres. It's got 48 cores and 256GB of ram. Redhat 5.4, Postgres 8.3.9.\n> 64bit OS. No users currently.\n> \n> I've got a J2EE app that loads data into the DB, it's got logic behind it so it's not a simple bulk load, so\n> i don't think we can use copy.\n> \n> Based on the tuning guides, it set my effective_cache_size to 128GB (1/2 the available memory) on the box.\n> \n> When I ran my load, it took aproximately 15 hours to do load 20 million records. I thought this was odd because\n> on a much smaller machine I was able to do that same amount of records in 6 hours.\n> \n> My initial thought was hardware issues so we got sar, vmstat, etc all running on the box and they didn't give\n> any indication that we had resource issues.\n> \n> So I decided to just make the 2 PG config files look the same. (the only change was dropping effective_cache_size \n> from 128GB to 2GB).\n> \n> Now the large box performs the same as the smaller box. (which is fine).\n> \n> incidentally, both tests were starting from a blank database.\n> \n> Is this expected? \n\nWithout a more complete picture of the configuration, this post doesn't\nmean a whole lot. Further, effective_cash_size is not likely to effect a\nbulk load at all.\n\nJoshua D. Drake\n\n\n\n> \n> Thanks!\n> \n> Dave\n> \n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\n\n\n", "msg_date": "Tue, 20 Apr 2010 10:41:36 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 1:39 PM, David Kerr <[email protected]> wrote:\n> Howdy all,\n>\n> I've got a huge server running just postgres. It's got 48 cores and 256GB of ram. Redhat 5.4, Postgres 8.3.9.\n> 64bit OS. 
No users currently.\n>\n> I've got a J2EE app that loads data into the DB, it's got logic behind it so it's not a simple bulk load, so\n> i don't think we can use copy.\n>\n> Based on the tuning guides, it set my effective_cache_size to 128GB (1/2 the available memory) on the box.\n>\n> When I ran my load, it took aproximately 15 hours to do load 20 million records. I thought this was odd because\n> on a much smaller machine I was able to do that same amount of records in 6 hours.\n>\n> My initial thought was hardware issues so we got sar, vmstat, etc all running on the box and they didn't give\n> any indication that we had resource issues.\n>\n> So I decided to just make the 2 PG config files look the same. (the only change was dropping effective_cache_size\n> from 128GB to 2GB).\n>\n> Now the large box performs the same as the smaller box. (which is fine).\n>\n> incidentally, both tests were starting from a blank database.\n>\n> Is this expected?\n\nLowering effective_cache_size tends to discourage the planner from\nusing a nested-loop-with-inner-indexscan plan - that's it.\n\nWhat may be happening is that you may be loading data into some tables\nand then running a query against those tables before the autovacuum\ndaemon has a chance to analyze them. I suspect that if you enable\nsome logging you'll find that one of those queries is really, really\nslow, and that (by happy coincidence) discouraging it from using the\nindex it thinks it should use happens to produce a better plan. What\nyou should probably do is, for each table that you bulk load and then\nquery, insert a manual ANALYZE between the two.\n\n...Robert\n", "msg_date": "Tue, 20 Apr 2010 13:44:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 01:44:18PM -0400, Robert Haas wrote:\n- On Tue, Apr 20, 2010 at 1:39 PM, David Kerr <[email protected]> wrote:\n- > My initial thought was hardware issues so we got sar, vmstat, etc all running on the box and they didn't give\n- > any indication that we had resource issues.\n- >\n- > So I decided to just make the 2 PG config files look the same. (the only change was dropping effective_cache_size\n- > from 128GB to 2GB).\n- >\n- > Now the large box performs the same as the smaller box. (which is fine).\n- >\n- > incidentally, both tests were starting from a blank database.\n- >\n- > Is this expected?\n- \n- Lowering effective_cache_size tends to discourage the planner from\n- using a nested-loop-with-inner-indexscan plan - that's it.\n- \n- What may be happening is that you may be loading data into some tables\n- and then running a query against those tables before the autovacuum\n- daemon has a chance to analyze them. I suspect that if you enable\n- some logging you'll find that one of those queries is really, really\n- slow, and that (by happy coincidence) discouraging it from using the\n- index it thinks it should use happens to produce a better plan. What\n- you should probably do is, for each table that you bulk load and then\n- query, insert a manual ANALYZE between the two.\n- \n- ...Robert\n- \n\nthat thought occured to me while I was testing this. 
I ran a vacuumdb -z \non my database during the load and it didn't impact performance at all.\n\nIncidentally the code is written to work like this :\n\nwhile (read X lines in file){\nProcess those lines.\nwrite lines to DB.\n}\n\nSo i would generally expect to get the benefits of the updated staticis \nonce the loop ended. no? (would prepared statements affect that possibly?)\n\nAlso, while I was debugging the problem, I did load a 2nd file into the DB\nontop of one that had been loaded. So the statistics almost certinaly should\nhave been decent at that point. \n\nI did turn on log_min_duration_statement but that caused performance to be unbearable,\nbut i could turn it on again if it would help.\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 11:03:51 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n\n> that thought occured to me while I was testing this. I ran a vacuumdb -z\n> on my database during the load and it didn't impact performance at all.\n>\n> Incidentally the code is written to work like this :\n>\n> while (read X lines in file){\n> Process those lines.\n> write lines to DB.\n> }\n>\n> So i would generally expect to get the benefits of the updated staticis\n> once the loop ended. no? (would prepared statements affect that possibly?)\n>\n> Also, while I was debugging the problem, I did load a 2nd file into the DB\n> ontop of one that had been loaded. So the statistics almost certinaly\n> should\n> have been decent at that point.\n>\n> I did turn on log_min_duration_statement but that caused performance to be\n> unbearable,\n> but i could turn it on again if it would help.\n>\n> Dave\n\n\nYou can absolutely use copy if you like but you need to use a non-standard\njdbc driver: kato.iki.fi/sw/db/postgresql/jdbc/copy/. I've used it in the\npast and it worked.\n\nIs the whole thing going in in one transaction? I'm reasonably sure\nstatistics aren't kept for uncommited transactions.\n\nFor inserts the prepared statements can only help. For selects they can\nhurt because eventually the JDBC driver will turn them into back end\nprepared statements that are only planned once. The price here is that that\nplan may not be the best plan for the data that you throw at it.\n\nWhat was log_min_duration_statement logging that it killed performance?\n\n--Nik\n\nOn Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\nthat thought occured to me while I was testing this. I ran a vacuumdb -z\non my database during the load and it didn't impact performance at all.\n\nIncidentally the code is written to work like this :\n\nwhile (read X lines in file){\nProcess those lines.\nwrite lines to DB.\n}\n\nSo i would generally expect to get the benefits of the updated staticis\nonce the loop ended. no?  (would prepared statements affect that possibly?)\n\nAlso, while I was debugging the problem, I did load a 2nd file into the DB\nontop of one that had been loaded. So the statistics almost certinaly should\nhave been decent at that point.\n\nI did turn on log_min_duration_statement but that caused performance to be unbearable,\nbut i could turn it on again if it would help.\n\nDaveYou can absolutely use copy if you like but you need to use a non-standard jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the past and it worked.\nIs the whole thing going in in one transaction?  
I'm reasonably sure statistics aren't kept for uncommited transactions.For inserts the prepared statements can only help.  For selects they can hurt because eventually the JDBC driver will turn them into back end prepared statements that are only planned once.  The price here is that that plan may not be the best plan for the data that you throw at it.\nWhat was log_min_duration_statement logging that it killed performance?--Nik", "msg_date": "Tue, 20 Apr 2010 14:12:15 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n> that thought occured to me while I was testing this. I ran a vacuumdb -z\n> on my database during the load and it didn't impact performance at all.\n\nThe window to run ANALYZE usefully is pretty short. If you run it\nbefore the load is complete, your stats will be wrong. If you run it\nafter the select statements that hit the table are planned, the\nupdated stats won't arrive in time to do any good.\n\n> I did turn on log_min_duration_statement but that caused performance to be unbearable,\n> but i could turn it on again if it would help.\n\nI think you need to find a way to identify exactly which query is\nrunning slowly. You could sit there and run \"select * from\npg_stat_activity\", or turn on log_min_duration_statement, or have your\napplication print out timestamps at key points, or some other\nmethod...\n\n...Robert\n", "msg_date": "Tue, 20 Apr 2010 14:15:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 11:39 AM, David Kerr <[email protected]> wrote:\n> Howdy all,\n>\n> I've got a huge server running just postgres. It's got 48 cores and 256GB of ram. Redhat 5.4, Postgres 8.3.9.\n> 64bit OS. No users currently.\n\nWhat's your IO subsystem look like? What did vmstat actually say?\n", "msg_date": "Tue, 20 Apr 2010 12:15:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "David Kerr <[email protected]> wrote:\n \n> Incidentally the code is written to work like this :\n> \n> while (read X lines in file){\n> Process those lines.\n> write lines to DB.\n> }\n \nUnless you're selecting from multiple database tables in one query,\neffective_cache_size shouldn't make any difference. There's\nprobably some other reason for the difference.\n \nA couple wild shots in the dark:\n \nAny chance the source files were cached the second time, but not the\nfirst?\n \nDo you have a large checkpoint_segments setting, and did the second\nrun without a new initdb?\n \n-Kevin\n", "msg_date": "Tue, 20 Apr 2010 13:17:02 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse\n\t performance?" }, { "msg_contents": "\n\nOn Tue, 20 Apr 2010, Nikolas Everett wrote:\n\n> You can absolutely use copy if you like but you need to use a non-standard\n> jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  
I've used it in the\n> past and it worked.\n\nCopy support has been added to the 8.4 driver.\n\nKris Jurka\n", "msg_date": "Tue, 20 Apr 2010 14:19:52 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 12:15 PM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Apr 20, 2010 at 11:39 AM, David Kerr <[email protected]> wrote:\n>> Howdy all,\n>>\n>> I've got a huge server running just postgres. It's got 48 cores and 256GB of ram. Redhat 5.4, Postgres 8.3.9.\n>> 64bit OS. No users currently.\n>\n> What's your IO subsystem look like?  What did vmstat actually say?\n\nNote that on a 48 core machine, if vmstat shows 2% wait and 98% idle\nthen you'd be 100% io bound, because it's % of total CPU. iostat -x\n10 will give a better view of how hard your disks are working, and if\nthey're the issue.\n", "msg_date": "Tue, 20 Apr 2010 12:20:18 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:\n- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n- \n- > that thought occured to me while I was testing this. I ran a vacuumdb -z\n- > on my database during the load and it didn't impact performance at all.\n- >\n- > Incidentally the code is written to work like this :\n- >\n- > while (read X lines in file){\n- > Process those lines.\n- > write lines to DB.\n- > }\n- >\n- > So i would generally expect to get the benefits of the updated staticis\n- > once the loop ended. no? (would prepared statements affect that possibly?)\n- >\n- > Also, while I was debugging the problem, I did load a 2nd file into the DB\n- > ontop of one that had been loaded. So the statistics almost certinaly\n- > should\n- > have been decent at that point.\n- >\n- > I did turn on log_min_duration_statement but that caused performance to be\n- > unbearable,\n- > but i could turn it on again if it would help.\n- >\n- > Dave\n- \n- \n- You can absolutely use copy if you like but you need to use a non-standard\n- jdbc driver: kato.iki.fi/sw/db/postgresql/jdbc/copy/. I've used it in the\n- past and it worked.\n- \n- Is the whole thing going in in one transaction? I'm reasonably sure\n- statistics aren't kept for uncommited transactions.\n- \n- For inserts the prepared statements can only help. For selects they can\n- hurt because eventually the JDBC driver will turn them into back end\n- prepared statements that are only planned once. The price here is that that\n- plan may not be the best plan for the data that you throw at it.\n- \n- What was log_min_duration_statement logging that it killed performance?\n- \n- --Nik\n\nGood to know about the jdbc-copy. but this is a huge project and the load is \njust one very very tiny component, I don't think we could introduce anything\nnew to assist that.\n\nIt's not all in one tx. I don't have visibility to the code to determine how \nit's broken down, but most likely each while loop is a tx.\n\nI set it to log all statements (i.e., = 0.). that doubled the load time from \n~15 to ~30 hours. I could, of course, be more granular if it would be helpful.\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 11:20:30 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" 
}, { "msg_contents": "On Tue, Apr 20, 2010 at 12:20 PM, David Kerr <[email protected]> wrote:\n> On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:\n> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n> -\n> - > that thought occured to me while I was testing this. I ran a vacuumdb -z\n> - > on my database during the load and it didn't impact performance at all.\n> - >\n> - > Incidentally the code is written to work like this :\n> - >\n> - > while (read X lines in file){\n> - > Process those lines.\n> - > write lines to DB.\n> - > }\n> - >\n> - > So i would generally expect to get the benefits of the updated staticis\n> - > once the loop ended. no?  (would prepared statements affect that possibly?)\n> - >\n> - > Also, while I was debugging the problem, I did load a 2nd file into the DB\n> - > ontop of one that had been loaded. So the statistics almost certinaly\n> - > should\n> - > have been decent at that point.\n> - >\n> - > I did turn on log_min_duration_statement but that caused performance to be\n> - > unbearable,\n> - > but i could turn it on again if it would help.\n> - >\n> - > Dave\n> -\n> -\n> - You can absolutely use copy if you like but you need to use a non-standard\n> - jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the\n> - past and it worked.\n> -\n> - Is the whole thing going in in one transaction?  I'm reasonably sure\n> - statistics aren't kept for uncommited transactions.\n> -\n> - For inserts the prepared statements can only help.  For selects they can\n> - hurt because eventually the JDBC driver will turn them into back end\n> - prepared statements that are only planned once.  The price here is that that\n> - plan may not be the best plan for the data that you throw at it.\n> -\n> - What was log_min_duration_statement logging that it killed performance?\n> -\n> - --Nik\n>\n> Good to know about the jdbc-copy. but this is a huge project and the load is\n> just one very very tiny component, I don't think we could introduce anything\n> new to assist that.\n>\n> It's not all in one tx. I don't have visibility to the code to determine how\n> it's broken down, but most likely each while loop is a tx.\n>\n> I set it to log all statements (i.e., = 0.). that doubled the load time from\n> ~15 to ~30 hours. I could, of course, be more granular if it would be helpful.\n\nSo are you logging to the same drive that has pg_xlog and your\ndata/base directory on this machine?\n", "msg_date": "Tue, 20 Apr 2010 12:23:51 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:\n- On Tue, Apr 20, 2010 at 12:20 PM, David Kerr <[email protected]> wrote:\n- > On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:\n- > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n- > -\n- > - You can absolutely use copy if you like but you need to use a non-standard\n- > - jdbc driver: �kato.iki.fi/sw/db/postgresql/jdbc/copy/. �I've used it in the\n- > - past and it worked.\n- > -\n- > - Is the whole thing going in in one transaction? �I'm reasonably sure\n- > - statistics aren't kept for uncommited transactions.\n- > -\n- > - For inserts the prepared statements can only help. �For selects they can\n- > - hurt because eventually the JDBC driver will turn them into back end\n- > - prepared statements that are only planned once. 
�The price here is that that\n- > - plan may not be the best plan for the data that you throw at it.\n- > -\n- > - What was log_min_duration_statement logging that it killed performance?\n- > -\n- > - --Nik\n- >\n- > Good to know about the jdbc-copy. but this is a huge project and the load is\n- > just one very very tiny component, I don't think we could introduce anything\n- > new to assist that.\n- >\n- > It's not all in one tx. I don't have visibility to the code to determine how\n- > it's broken down, but most likely each while loop is a tx.\n- >\n- > I set it to log all statements (i.e., = 0.). that doubled the load time from\n- > ~15 to ~30 hours. I could, of course, be more granular if it would be helpful.\n- \n- So are you logging to the same drive that has pg_xlog and your\n- data/base directory on this machine?\n- \n\nthe db, xlog and logs are all on separate areas of the SAN.\n\nseparate I/O controllers, etc on the SAN. it's setup well, I wouldn't expect\ncontention there.\n\nI'm logging via syslog, I've had trouble with that before. when i moved to syslog-ng\non my dev environments that mostly resoved the probelm for me. but these machines\nstill have vanilla syslog.\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 11:28:32 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 12:28 PM, David Kerr <[email protected]> wrote:\n>\n> I'm logging via syslog, I've had trouble with that before. when i moved to syslog-ng\n> on my dev environments that mostly resoved the probelm for me. but these machines\n> still have vanilla syslog.\n\nYea, I almost always log directly via stdout on production machines\nbecause of that.\n", "msg_date": "Tue, 20 Apr 2010 12:30:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 12:30:14PM -0600, Scott Marlowe wrote:\n- On Tue, Apr 20, 2010 at 12:28 PM, David Kerr <[email protected]> wrote:\n- >\n- > I'm logging via syslog, I've had trouble with that before. when i moved to syslog-ng\n- > on my dev environments that mostly resoved the probelm for me. but these machines\n- > still have vanilla syslog.\n- \n- Yea, I almost always log directly via stdout on production machines\n- because of that.\n- \n\nAh well good to know i'm not the only one =)\n\nI'll get the query info. I've got a twin system that I can use and abuse.\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 11:32:15 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 01:17:02PM -0500, Kevin Grittner wrote:\n- David Kerr <[email protected]> wrote:\n- \n- > Incidentally the code is written to work like this :\n- > \n- > while (read X lines in file){\n- > Process those lines.\n- > write lines to DB.\n- > }\n- \n- Unless you're selecting from multiple database tables in one query,\n- effective_cache_size shouldn't make any difference. 
There's\n- probably some other reason for the difference.\n- \n- A couple wild shots in the dark:\n- \n- Any chance the source files were cached the second time, but not the\n- first?\n- \n- Do you have a large checkpoint_segments setting, and did the second\n- run without a new initdb?\n- \n- -Kevin\n\nno i don't think the files would be cached the 2nd time. I ran it multiple times\nand got the same performance each time. It wasn't until i changed the parameter\nthat performance got better.\n\nI've got checkpoint_segments = 300\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 11:46:14 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:\n- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n- > that thought occured to me while I was testing this. I ran a vacuumdb -z\n- > on my database during the load and it didn't impact performance at all.\n- \n- The window to run ANALYZE usefully is pretty short. If you run it\n- before the load is complete, your stats will be wrong. If you run it\n- after the select statements that hit the table are planned, the\n- updated stats won't arrive in time to do any good.\n\nright, but i'm loading 20 million records in 1000 record increments. so\nthe analyze should affect all subsequent increments, no?\n\n- > I did turn on log_min_duration_statement but that caused performance to be unbearable,\n- > but i could turn it on again if it would help.\n- \n- I think you need to find a way to identify exactly which query is\n- running slowly. You could sit there and run \"select * from\n- pg_stat_activity\", or turn on log_min_duration_statement, or have your\n- application print out timestamps at key points, or some other\n- method...\n\nI'm on it.\n\nDave\n", "msg_date": "Tue, 20 Apr 2010 11:47:27 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 12:47 PM, David Kerr <[email protected]> wrote:\n> On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:\n> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n> - > that thought occured to me while I was testing this. I ran a vacuumdb -z\n> - > on my database during the load and it didn't impact performance at all.\n> -\n> - The window to run ANALYZE usefully is pretty short.  If you run it\n> - before the load is complete, your stats will be wrong.  If you run it\n> - after the select statements that hit the table are planned, the\n> - updated stats won't arrive in time to do any good.\n>\n> right, but i'm loading 20 million records in 1000 record increments. so\n> the analyze should affect all subsequent increments, no?\n\nI keep thinking FK checks are taking a long time because they aren't\ncached because in import they went through the ring buffer in pg or\nsome other way aren't in a buffer but large effective cache size says\nit's 99.99% chance or better that it's in cache, and chooses a poor\nplan to look them up. Just a guess.\n", "msg_date": "Tue, 20 Apr 2010 13:22:36 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" 
}, { "msg_contents": "On Tue, Apr 20, 2010 at 12:28 PM, David Kerr <[email protected]> wrote:\n> On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:\n> - So are you logging to the same drive that has pg_xlog and your\n> - data/base directory on this machine?\n> -\n>\n> the db, xlog and logs are all on separate areas of the SAN.\n>\n> separate I/O controllers, etc on the SAN. it's setup well, I wouldn't expect\n> contention there.\n\nSame xkb/s gigabit connection?\n", "msg_date": "Tue, 20 Apr 2010 13:24:34 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "David Kerr wrote:\n> the db, xlog and logs are all on separate areas of the SAN.\n> separate I/O controllers, etc on the SAN. it's setup well, I wouldn't expect\n> contention there.\n> \n\nJust because you don't expect it doesn't mean it's not there. \nParticularly something as complicated as a SAN setup, presuming anything \nwithout actually benchmarking it is a recipe for fuzzy diagnostics when \nproblems pop up. If you took anyone's word that your SAN has good \nperformance without confirming it yourself, that's a path that's lead \nmany to trouble.\n\nAnyway, as Robert already stated, effective_cache_size only impacts how \nsome very specific types of queries are executed; that's it. If there's \nsome sort of query behavior involved in your load, maybe that has \nsomething to do with your slowdown, but it doesn't explain general slow \nperformance. Other possibilities include that something else changed \nwhen you reloaded the server as part of that, or it's a complete \ncoincidence--perhaps autoanalyze happened to finish at around the same \ntime and it lead to new plans.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 20 Apr 2010 16:26:52 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "On Tue, Apr 20, 2010 at 04:26:52PM -0400, Greg Smith wrote:\n- David Kerr wrote:\n- >the db, xlog and logs are all on separate areas of the SAN.\n- >separate I/O controllers, etc on the SAN. it's setup well, I wouldn't \n- >expect\n- >contention there.\n- > \n- \n- Just because you don't expect it doesn't mean it's not there. \n- Particularly something as complicated as a SAN setup, presuming anything \n- without actually benchmarking it is a recipe for fuzzy diagnostics when \n- problems pop up. If you took anyone's word that your SAN has good \n- performance without confirming it yourself, that's a path that's lead \n- many to trouble.\n\nthat's actually what I'm doing, performance testing this environment.\neverything's on the table for me at this point. \n\n- Anyway, as Robert already stated, effective_cache_size only impacts how \n- some very specific types of queries are executed; that's it. If there's \n- some sort of query behavior involved in your load, maybe that has \n- something to do with your slowdown, but it doesn't explain general slow \n- performance. Other possibilities include that something else changed \n- when you reloaded the server as part of that, or it's a complete \n- coincidence--perhaps autoanalyze happened to finish at around the same \n- time and it lead to new plans.\n\nOk that's good to know. 
I didn't think it would have any impact, and was\nsurprised when it appeared to.\n\nI just finished running the test on another machine and wasn't able to \nreproduce the problem, so that's good news in some ways. But now i'm back \nto the drawing board.\n\nI don't think it's anything in the Db that's causing it. ( drop and re-create\nthe db between tests) I actually suspect a hardware issue somewhere. \n\nDave\n", "msg_date": "Tue, 20 Apr 2010 13:39:19 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "David Kerr wrote:\n> I don't think it's anything in the Db that's causing it. ( drop and re-create\n> the db between tests) I actually suspect a hardware issue somewhere. \n> \n\nYou might find my \"Database Hardware Benchmarking\" talk, available at \nhttp://projects.2ndquadrant.com/talks , useful to help sort out what's \ngood and bad on each server, and correspondingly what'd different \nbetween the two. Many of the ideas there came from fighting with SAN \nhardware that didn't do what I expected.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 21 Apr 2010 02:45:55 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" }, { "msg_contents": "\nOn Apr 20, 2010, at 12:22 PM, Scott Marlowe wrote:\n\n> On Tue, Apr 20, 2010 at 12:47 PM, David Kerr <[email protected]> wrote:\n>> On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:\n>> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr <[email protected]> wrote:\n>> - > that thought occured to me while I was testing this. I ran a vacuumdb -z\n>> - > on my database during the load and it didn't impact performance at all.\n>> -\n>> - The window to run ANALYZE usefully is pretty short. If you run it\n>> - before the load is complete, your stats will be wrong. If you run it\n>> - after the select statements that hit the table are planned, the\n>> - updated stats won't arrive in time to do any good.\n>> \n>> right, but i'm loading 20 million records in 1000 record increments. so\n>> the analyze should affect all subsequent increments, no?\n> \n> I keep thinking FK checks are taking a long time because they aren't\n> cached because in import they went through the ring buffer in pg or\n> some other way aren't in a buffer but large effective cache size says\n> it's 99.99% chance or better that it's in cache, and chooses a poor\n> plan to look them up. Just a guess.\n> \n\nYeah, I was thinking the same thing.\n\nIf possible make sure the table either has no indexes and FK's or only the minimum required (PK?) while doing the load, then add the indexes and FK's later.\nWhether this is possible depends on what the schema is and what must be known by the app to load the data, but if you can do it its a huge win.\n\nOf course, if its not all in one transaction and there is any other concurrency going on that could be a bad idea. Or, if this is not a load on a fresh table but an append/update it may not be possible to drop some of the indexes first.\n\nGenerally speaking, a load on a table without an index followed by index creation is at least twice as fast, and often 5x as fast or more. 
This is less true if each row is an individual insert and batching or 'insert into foo values (a, b, c, ...), (a2, b2, c2, ...)' multiple row syntax is not used.\n\n\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 21 Apr 2010 18:21:10 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very high effective_cache_size == worse performance?" } ]
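A rough JDBC sketch of the loading pattern recommended in the thread above: batched inserts in an explicit transaction, followed by a manual ANALYZE before the freshly loaded table is queried, as Robert Haas suggests. The table and column names are invented for illustration; COPY via the 8.4 driver's CopyManager, which Kris Jurka mentions, would be the faster path if the loader could be changed.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class ChunkLoader {
    // Loads one pre-processed chunk inside a single transaction, then refreshes
    // statistics so queries planned after the load see realistic row counts.
    public static void loadChunk(Connection conn, List<String[]> rows) throws SQLException {
        conn.setAutoCommit(false);
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO staging_records (id, payload) VALUES (?, ?)");
        try {
            for (String[] r : rows) {
                ps.setLong(1, Long.parseLong(r[0]));
                ps.setString(2, r[1]);
                ps.addBatch();          // batched round trips instead of row-by-row
            }
            ps.executeBatch();
            conn.commit();
        } finally {
            ps.close();
        }

        // Manual ANALYZE between loading and querying, per the thread's advice.
        Statement st = conn.createStatement();
        try {
            st.execute("ANALYZE staging_records");
            conn.commit();
        } finally {
            st.close();
        }
    }
}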
[ { "msg_contents": "Hi,\n\nI have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have \nnoticed that on the 8.4.* versions, a lot of our code is either taking \nmuch longer to complete, or never completing. I think I have isolated \nthe problem to queries using in(), not in() or not exists(). I've put \ntogether a test case with one particular query that demonstrates the \nproblem.\n\nselect count(*) from traderhank.vendor_catalog = 147,352\n\nselect count(*) from traderhank.xc_products = 8,610\n\nThe sub query (select vc1.th_sku from traderhank.vendor_catalog vc1\njoin traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\nand vc2.th_sku != vc1.th_sku\nwhere vc1.cutoff_date is null and vc2.cutoff_date is null\ngroup by vc1.th_sku\n) yields 54,390 rows\n\nThe sub query (select vc_th_Sku from traderhank.xc_products where \nvc_th_sku is not null) yields 5,132 rows\n\nThese 2 tables have been loaded from a pg_dump on all servers, vacuum \nanalyze run after load.\n\n1st case: pg 8.3.1 using left join finishes the query in about 3.5 seconds\n\nexplain analyze\nselect vc.* from traderhank.vendor_catalog vc\nleft join\n(\nselect vc1.th_sku from traderhank.vendor_catalog vc1\njoin traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\nand vc2.th_sku != vc1.th_sku\nwhere vc1.cutoff_date is null and vc2.cutoff_date is null\ngroup by vc1.th_sku\n) vcj on vcj.th_sku = vc.th_sku\nleft join traderhank.xc_products xc on xc.vc_th_sku = vc.th_sku\nwhere vcj.th_sku is null\nand xc.vc_th_sku is null\n\n\"Merge Left Join (cost=71001.53..72899.35 rows=36838 width=310) (actual \ntime=9190.446..10703.509 rows=78426 loops=1)\"\n\" Merge Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)\"\n\" Filter: (vc1.th_sku IS NULL)\"\n\" -> Merge Left Join (cost=19362.72..20201.46 rows=73676 width=310) \n(actual time=917.947..1784.593 rows=141962 loops=1)\"\n\" Merge Cond: ((vc.th_sku)::text = (xc.vc_th_sku)::text)\"\n\" Filter: (xc.vc_th_sku IS NULL)\"\n\" -> Sort (cost=17630.88..17999.26 rows=147352 width=310) \n(actual time=871.130..1114.453 rows=147352 loops=1)\"\n\" Sort Key: vc.th_sku\"\n\" Sort Method: quicksort Memory: 45285kB\"\n\" -> Seq Scan on vendor_catalog vc (cost=0.00..4981.52 \nrows=147352 width=310) (actual time=0.020..254.023 rows=147352 loops=1)\"\n\" -> Sort (cost=1731.84..1753.37 rows=8610 width=8) (actual \ntime=46.783..62.347 rows=9689 loops=1)\"\n\" Sort Key: xc.vc_th_sku\"\n\" Sort Method: quicksort Memory: 734kB\"\n\" -> Seq Scan on xc_products xc (cost=0.00..1169.10 \nrows=8610 width=8) (actual time=0.013..25.490 rows=8610 loops=1)\"\n\" -> Sort (cost=51638.80..51814.57 rows=70309 width=32) (actual \ntime=8272.483..8382.258 rows=66097 loops=1)\"\n\" Sort Key: vc1.th_sku\"\n\" Sort Method: quicksort Memory: 4086kB\"\n\" -> HashAggregate (cost=44572.25..45275.34 rows=70309 width=8) \n(actual time=7978.928..8080.317 rows=54390 loops=1)\"\n\" -> Merge Join (cost=27417.09..42493.30 rows=831580 \nwidth=8) (actual time=1317.874..6380.928 rows=810012 loops=1)\"\n\" Merge Cond: ((vc1.short_desc_75)::text = \n(vc2.short_desc_75)::text)\"\n\" Join Filter: ((vc2.th_sku)::text <> \n(vc1.th_sku)::text)\"\n\" -> Sort (cost=13708.55..13970.22 rows=104669 \nwidth=27) (actual time=661.319..834.131 rows=104624 loops=1)\"\n\" Sort Key: vc1.short_desc_75\"\n\" Sort Method: quicksort Memory: 11235kB\"\n\" -> Seq Scan on vendor_catalog vc1 \n(cost=0.00..4981.52 rows=104669 width=27) (actual time=0.010..268.552 \nrows=104624 loops=1)\"\n\" Filter: (cutoff_date IS NULL)\"\n\" -> 
Sort (cost=13708.55..13970.22 rows=104669 \nwidth=27) (actual time=656.447..2130.290 rows=914636 loops=1)\"\n\" Sort Key: vc2.short_desc_75\"\n\" Sort Method: quicksort Memory: 11235kB\"\n\" -> Seq Scan on vendor_catalog vc2 \n(cost=0.00..4981.52 rows=104669 width=27) (actual time=0.015..266.926 \nrows=104624 loops=1)\"\n\" Filter: (cutoff_date IS NULL)\"\n\"Total runtime: 10837.005 ms\"\n\n\nThis query returns same set of rows, in about 2.8 seconds:\n\nexplain analyze\nselect vc.* from traderhank.vendor_catalog vc\nwhere vc.th_sku not in\n(\nselect vc1.th_sku from traderhank.vendor_catalog vc1\njoin traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\nand vc2.th_sku != vc1.th_sku\nwhere vc1.cutoff_date is null and vc2.cutoff_date is null and vc1.th_sku \nis not null\ngroup by vc1.th_sku\n)\nand vc.th_sku not in\n(select vc_th_Sku from traderhank.xc_products where vc_th_sku is not null)\n\n\n\"Seq Scan on vendor_catalog vc (cost=46633.03..52351.31 rows=36838 \nwidth=310) (actual time=8216.197..8506.825 rows=78426 loops=1)\"\n\" Filter: ((NOT (hashed subplan)) AND (NOT (hashed subplan)))\"\n\" SubPlan\"\n\" -> Seq Scan on xc_products (cost=0.00..1169.10 rows=5129 width=8) \n(actual time=0.026..16.907 rows=5132 loops=1)\"\n\" Filter: (vc_th_sku IS NOT NULL)\"\n\" -> HashAggregate (cost=44572.25..45275.34 rows=70309 width=8) \n(actual time=7973.792..8076.297 rows=54390 loops=1)\"\n\" -> Merge Join (cost=27417.09..42493.30 rows=831580 width=8) \n(actual time=1325.988..6377.197 rows=810012 loops=1)\"\n\" Merge Cond: ((vc1.short_desc_75)::text = \n(vc2.short_desc_75)::text)\"\n\" Join Filter: ((vc2.th_sku)::text <> (vc1.th_sku)::text)\"\n\" -> Sort (cost=13708.55..13970.22 rows=104669 \nwidth=27) (actual time=669.237..841.978 rows=104624 loops=1)\"\n\" Sort Key: vc1.short_desc_75\"\n\" Sort Method: quicksort Memory: 11235kB\"\n\" -> Seq Scan on vendor_catalog vc1 \n(cost=0.00..4981.52 rows=104669 width=27) (actual time=0.014..272.037 \nrows=104624 loops=1)\"\n\" Filter: ((cutoff_date IS NULL) AND (th_sku \nIS NOT NULL))\"\n\" -> Sort (cost=13708.55..13970.22 rows=104669 \nwidth=27) (actual time=656.638..2130.440 rows=914636 loops=1)\"\n\" Sort Key: vc2.short_desc_75\"\n\" Sort Method: quicksort Memory: 11235kB\"\n\" -> Seq Scan on vendor_catalog vc2 \n(cost=0.00..4981.52 rows=104669 width=27) (actual time=0.016..266.767 \nrows=104624 loops=1)\"\n\" Filter: (cutoff_date IS NULL)\"\n\"Total runtime: 8631.652 ms\"\n\n\nSo far, so good.\n\nSame 2 queries on 8.4.2:\n\nLeft join version will return same rows in about 42 seconds\n\nexplain analyze\nselect vc.* from traderhank.vendor_catalog vc\nleft join\n(\nselect vc1.th_sku from traderhank.vendor_catalog vc1\njoin traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\nand vc2.th_sku != vc1.th_sku\nwhere vc1.cutoff_date is null and vc2.cutoff_date is null\ngroup by vc1.th_sku\n) vcj on vcj.th_sku = vc.th_sku\nleft join traderhank.xc_products xc on xc.vc_th_sku = vc.th_sku\nwhere vcj.th_sku is null\nand xc.vc_th_sku is null\n\n\"Hash Anti Join (cost=142357.84..167341.98 rows=140877 width=309) \n(actual time=42455.615..44244.251 rows=78426 loops=1)\"\n\" Hash Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)\"\n\" -> Hash Anti Join (cost=1829.48..12853.64 rows=141143 width=309) \n(actual time=62.380..1049.863 rows=141962 loops=1)\"\n\" Hash Cond: ((vc.th_sku)::text = (xc.vc_th_sku)::text)\"\n\" -> Seq Scan on vendor_catalog vc (cost=0.00..8534.52 \nrows=147352 width=309) (actual time=0.009..351.005 rows=147352 
loops=1)\"\n\" -> Hash (cost=1716.99..1716.99 rows=8999 width=8) (actual \ntime=62.348..62.348 rows=5132 loops=1)\"\n\" -> Seq Scan on xc_products xc (cost=0.00..1716.99 \nrows=8999 width=8) (actual time=0.009..45.818 rows=8610 loops=1)\"\n\" -> Hash (cost=139067.10..139067.10 rows=75541 width=32) (actual \ntime=42393.149..42393.149 rows=54390 loops=1)\"\n\" -> Group (cost=134997.43..138311.69 rows=75541 width=8) \n(actual time=35987.418..42264.948 rows=54390 loops=1)\"\n\" -> Sort (cost=134997.43..136654.56 rows=662853 width=8) \n(actual time=35987.407..40682.275 rows=810012 loops=1)\"\n\" Sort Key: vc1.th_sku\"\n\" Sort Method: external merge Disk: 14256kB\"\n\" -> Merge Join (cost=39600.73..52775.08 \nrows=662853 width=8) (actual time=5762.785..13763.041 rows=810012 loops=1)\"\n\" Merge Cond: ((vc1.short_desc_75)::text = \n(vc2.short_desc_75)::text)\"\n\" Join Filter: ((vc2.th_sku)::text <> \n(vc1.th_sku)::text)\"\n\" -> Sort (cost=19800.37..20062.75 \nrows=104954 width=27) (actual time=2884.012..3604.405 rows=104624 loops=1)\"\n\" Sort Key: vc1.short_desc_75\"\n\" Sort Method: external merge Disk: 3776kB\"\n\" -> Seq Scan on vendor_catalog vc1 \n(cost=0.00..8534.52 rows=104954 width=27) (actual time=0.009..395.976 \nrows=104624 loops=1)\"\n\" Filter: (cutoff_date IS NULL)\"\n\" -> Materialize (cost=19800.37..21112.29 \nrows=104954 width=27) (actual time=2878.550..5291.205 rows=914636 loops=1)\"\n\" -> Sort (cost=19800.37..20062.75 \nrows=104954 width=27) (actual time=2878.538..3607.201 rows=104624 loops=1)\"\n\" Sort Key: vc2.short_desc_75\"\n\" Sort Method: external merge \nDisk: 3776kB\"\n\" -> Seq Scan on vendor_catalog \nvc2 (cost=0.00..8534.52 rows=104954 width=27) (actual \ntime=0.018..392.270 rows=104624 loops=1)\"\n\" Filter: (cutoff_date IS NULL)\"\n\"Total runtime: 45145.977 ms\"\n\n\n\non any version from 8.3.8 on, this query has never returned, and explain \nanalyze never returns, so I am only posting explain output\n\nexplain --analyze\nselect vc.* from traderhank.vendor_catalog vc\nwhere vc.th_sku not in\n(\nselect vc1.th_sku from traderhank.vendor_catalog vc1\njoin traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\nand vc2.th_sku != vc1.th_sku\nwhere vc1.cutoff_date is null and vc2.cutoff_date is null and vc1.th_sku \nis not null\ngroup by vc1.th_sku\n)\nand vc.th_sku not in\n(select vc_th_Sku from traderhank.xc_products where vc_th_sku is not null)\n\n\n\"Seq Scan on vendor_catalog vc (cost=140413.05..91527264.28 rows=36838 \nwidth=309)\"\n\" Filter: ((NOT (hashed SubPlan 2)) AND (NOT (SubPlan 1)))\"\n\" SubPlan 2\"\n\" -> Seq Scan on xc_products (cost=0.00..1716.99 rows=5132 width=8)\"\n\" Filter: (vc_th_sku IS NOT NULL)\"\n\" SubPlan 1\"\n\" -> Materialize (cost=138683.23..139734.64 rows=75541 width=8)\"\n\" -> Group (cost=134997.43..138311.69 rows=75541 width=8)\"\n\" -> Sort (cost=134997.43..136654.56 rows=662853 width=8)\"\n\" Sort Key: vc1.th_sku\"\n\" -> Merge Join (cost=39600.73..52775.08 \nrows=662853 width=8)\"\n\" Merge Cond: ((vc1.short_desc_75)::text = \n(vc2.short_desc_75)::text)\"\n\" Join Filter: ((vc2.th_sku)::text <> \n(vc1.th_sku)::text)\"\n\" -> Sort (cost=19800.37..20062.75 \nrows=104954 width=27)\"\n\" Sort Key: vc1.short_desc_75\"\n\" -> Seq Scan on vendor_catalog vc1 \n(cost=0.00..8534.52 rows=104954 width=27)\"\n\" Filter: ((cutoff_date IS NULL) \nAND (th_sku IS NOT NULL))\"\n\" -> Materialize (cost=19800.37..21112.29 \nrows=104954 width=27)\"\n\" -> Sort (cost=19800.37..20062.75 \nrows=104954 width=27)\"\n\" Sort Key: 
vc2.short_desc_75\"\n\" -> Seq Scan on vendor_catalog \nvc2 (cost=0.00..8534.52 rows=104954 width=27)\"\n\" Filter: (cutoff_date IS \nNULL)\"\n\n\n\n\nI've also tried changing the code to not exists, but that query never \ncomes back on any version I have available:\n\nexplain --analyze\nselect vc.* from traderhank.vendor_catalog vc\nwhere not exists\n(\nselect 1 from traderhank.vendor_catalog vc1\njoin traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\nand vc2.th_sku != vc1.th_sku\nwhere vc1.cutoff_date is null and vc2.cutoff_date is null and vc1.th_sku \n= vc.th_sku\ngroup by vc1.th_sku\n)\nand not exists\n(select 1 from traderhank.xc_products where vc_th_sku is not null and \nvc_th_sku = vc.th_sku)\n\n\"Nested Loop Anti Join (cost=63650.74..93617.53 rows=1 width=309)\"\n\" Join Filter: ((xc_products.vc_th_sku)::text = (vc.th_sku)::text)\"\n\" -> Hash Anti Join (cost=63650.74..91836.39 rows=1 width=309)\"\n\" Hash Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)\"\n\" -> Seq Scan on vendor_catalog vc (cost=0.00..8534.52 \nrows=147352 width=309)\"\n\" -> Hash (cost=52775.08..52775.08 rows=662853 width=8)\"\n\" -> Merge Join (cost=39600.73..52775.08 rows=662853 \nwidth=8)\"\n\" Merge Cond: ((vc1.short_desc_75)::text = \n(vc2.short_desc_75)::text)\"\n\" Join Filter: ((vc2.th_sku)::text <> \n(vc1.th_sku)::text)\"\n\" -> Sort (cost=19800.37..20062.75 rows=104954 \nwidth=27)\"\n\" Sort Key: vc1.short_desc_75\"\n\" -> Seq Scan on vendor_catalog vc1 \n(cost=0.00..8534.52 rows=104954 width=27)\"\n\" Filter: (cutoff_date IS NULL)\"\n\" -> Materialize (cost=19800.37..21112.29 \nrows=104954 width=27)\"\n\" -> Sort (cost=19800.37..20062.75 \nrows=104954 width=27)\"\n\" Sort Key: vc2.short_desc_75\"\n\" -> Seq Scan on vendor_catalog vc2 \n(cost=0.00..8534.52 rows=104954 width=27)\"\n\" Filter: (cutoff_date IS NULL)\"\n\" -> Seq Scan on xc_products (cost=0.00..1716.99 rows=5132 width=8)\"\n\" Filter: (xc_products.vc_th_sku IS NOT NULL)\"\n\n\n\nSo, my question is, do I need to re-write all of my in() and not in () \nqueries to left joins, is this something that might get resolved in \nanother release in the future?\n\nThanks for any help.\n\nRoger Ging\n\n\n", "msg_date": "Tue, 20 Apr 2010 11:38:04 -0700", "msg_from": "Roger Ging <[email protected]>", "msg_from_op": true, "msg_subject": "performance change from 8.3.1 to later releases" }, { "msg_contents": "On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging <[email protected]> wrote:\n> Hi,\n>\n> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3.  I have\n> noticed that on the 8.4.* versions, a lot of our code is either taking much\n> longer to complete, or never completing.  I think I have isolated the\n> problem to queries using in(), not in() or not exists().  
I've put together\n> a test case with one particular query that demonstrates the problem.\n>\n> select count(*) from traderhank.vendor_catalog = 147,352\n>\n> select count(*) from traderhank.xc_products = 8,610\n>\n> The sub query (select vc1.th_sku from traderhank.vendor_catalog vc1\n> join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\n> and vc2.th_sku != vc1.th_sku\n> where vc1.cutoff_date is null and vc2.cutoff_date is null\n> group by vc1.th_sku\n> )  yields 54,390 rows\n>\n> The sub query (select vc_th_Sku from traderhank.xc_products where vc_th_sku\n> is not null) yields 5,132 rows\n>\n> These 2 tables have been loaded from a pg_dump on all servers, vacuum\n> analyze run after load.\n>\n> 1st case: pg 8.3.1 using left join finishes the query in about 3.5 seconds\n>\n> explain analyze\n> select vc.* from traderhank.vendor_catalog vc\n> left join\n> (\n> select vc1.th_sku from traderhank.vendor_catalog vc1\n> join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\n> and vc2.th_sku != vc1.th_sku\n> where vc1.cutoff_date is null and vc2.cutoff_date is null\n> group by vc1.th_sku\n> ) vcj on vcj.th_sku = vc.th_sku\n> left join traderhank.xc_products xc on xc.vc_th_sku = vc.th_sku\n> where vcj.th_sku is null\n> and xc.vc_th_sku is null\n>\n> \"Merge Left Join  (cost=71001.53..72899.35 rows=36838 width=310) (actual\n> time=9190.446..10703.509 rows=78426 loops=1)\"\n> \"  Merge Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)\"\n> \"  Filter: (vc1.th_sku IS NULL)\"\n> \"  ->  Merge Left Join  (cost=19362.72..20201.46 rows=73676 width=310)\n> (actual time=917.947..1784.593 rows=141962 loops=1)\"\n> \"        Merge Cond: ((vc.th_sku)::text = (xc.vc_th_sku)::text)\"\n> \"        Filter: (xc.vc_th_sku IS NULL)\"\n> \"        ->  Sort  (cost=17630.88..17999.26 rows=147352 width=310) (actual\n> time=871.130..1114.453 rows=147352 loops=1)\"\n> \"              Sort Key: vc.th_sku\"\n> \"              Sort Method:  quicksort  Memory: 45285kB\"\n> \"              ->  Seq Scan on vendor_catalog vc  (cost=0.00..4981.52\n> rows=147352 width=310) (actual time=0.020..254.023 rows=147352 loops=1)\"\n> \"        ->  Sort  (cost=1731.84..1753.37 rows=8610 width=8) (actual\n> time=46.783..62.347 rows=9689 loops=1)\"\n> \"              Sort Key: xc.vc_th_sku\"\n> \"              Sort Method:  quicksort  Memory: 734kB\"\n> \"              ->  Seq Scan on xc_products xc  (cost=0.00..1169.10 rows=8610\n> width=8) (actual time=0.013..25.490 rows=8610 loops=1)\"\n> \"  ->  Sort  (cost=51638.80..51814.57 rows=70309 width=32) (actual\n> time=8272.483..8382.258 rows=66097 loops=1)\"\n> \"        Sort Key: vc1.th_sku\"\n> \"        Sort Method:  quicksort  Memory: 4086kB\"\n\nSo here we get a hash agg in ~4M memory:\n\n> \"        ->  HashAggregate  (cost=44572.25..45275.34 rows=70309 width=8)\n> (actual time=7978.928..8080.317 rows=54390 loops=1)\"\n\nAnd the row estimate is similar.\n\n(much deleted)\n\n> on any version from 8.3.8 on, this query has never returned, and explain\n> analyze never returns, so I am only posting explain output\n\nWe get a Seq Scan with a huge cost, and no hash agg or quick sort. Is\nthe work_mem the same or similar? I'd crank it up for testing just to\nsee if it helps. 
16Meg is pretty safe on a low traffic machine.\n\n> \"Seq Scan on vendor_catalog vc  (cost=140413.05..91527264.28 rows=36838\n> width=309)\"\n> \"  Filter: ((NOT (hashed SubPlan 2)) AND (NOT (SubPlan 1)))\"\n> \"  SubPlan 2\"\n> \"    ->  Seq Scan on xc_products  (cost=0.00..1716.99 rows=5132 width=8)\"\n> \"          Filter: (vc_th_sku IS NOT NULL)\"\n> \"  SubPlan 1\"\n> \"    ->  Materialize  (cost=138683.23..139734.64 rows=75541 width=8)\"\n> \"          ->  Group  (cost=134997.43..138311.69 rows=75541 width=8)\"\n> \"                ->  Sort  (cost=134997.43..136654.56 rows=662853 width=8)\"\n> \"                      Sort Key: vc1.th_sku\"\n> \"                      ->  Merge Join  (cost=39600.73..52775.08 rows=662853\n> width=8)\"\n> \"                            Merge Cond: ((vc1.short_desc_75)::text =\n> (vc2.short_desc_75)::text)\"\n> \"                            Join Filter: ((vc2.th_sku)::text <>\n> (vc1.th_sku)::text)\"\n> \"                            ->  Sort  (cost=19800.37..20062.75 rows=104954\n> width=27)\"\n> \"                                  Sort Key: vc1.short_desc_75\"\n> \"                                  ->  Seq Scan on vendor_catalog vc1\n>  (cost=0.00..8534.52 rows=104954 width=27)\"\n> \"                                        Filter: ((cutoff_date IS NULL) AND\n> (th_sku IS NOT NULL))\"\n> \"                            ->  Materialize  (cost=19800.37..21112.29\n> rows=104954 width=27)\"\n> \"                                  ->  Sort  (cost=19800.37..20062.75\n> rows=104954 width=27)\"\n> \"                                        Sort Key: vc2.short_desc_75\"\n> \"                                        ->  Seq Scan on vendor_catalog vc2\n>  (cost=0.00..8534.52 rows=104954 width=27)\"\n> \"                                              Filter: (cutoff_date IS\n> NULL)\"\n>\n>\n>\n>\n> I've also tried changing the code to not exists, but that query never comes\n> back on any version I have available:\n>\n> explain --analyze\n> select vc.* from traderhank.vendor_catalog vc\n> where not exists\n> (\n> select 1 from traderhank.vendor_catalog vc1\n> join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75\n> and vc2.th_sku != vc1.th_sku\n> where vc1.cutoff_date is null and vc2.cutoff_date is null and vc1.th_sku =\n> vc.th_sku\n> group by vc1.th_sku\n> )\n> and not exists\n> (select 1 from traderhank.xc_products where vc_th_sku is not null and\n> vc_th_sku = vc.th_sku)\n>\n> \"Nested Loop Anti Join  (cost=63650.74..93617.53 rows=1 width=309)\"\n> \"  Join Filter: ((xc_products.vc_th_sku)::text = (vc.th_sku)::text)\"\n> \"  ->  Hash Anti Join  (cost=63650.74..91836.39 rows=1 width=309)\"\n> \"        Hash Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)\"\n> \"        ->  Seq Scan on vendor_catalog vc  (cost=0.00..8534.52 rows=147352\n> width=309)\"\n> \"        ->  Hash  (cost=52775.08..52775.08 rows=662853 width=8)\"\n> \"              ->  Merge Join  (cost=39600.73..52775.08 rows=662853\n> width=8)\"\n> \"                    Merge Cond: ((vc1.short_desc_75)::text =\n> (vc2.short_desc_75)::text)\"\n> \"                    Join Filter: ((vc2.th_sku)::text <>\n> (vc1.th_sku)::text)\"\n> \"                    ->  Sort  (cost=19800.37..20062.75 rows=104954\n> width=27)\"\n> \"                          Sort Key: vc1.short_desc_75\"\n> \"                          ->  Seq Scan on vendor_catalog vc1\n>  (cost=0.00..8534.52 rows=104954 width=27)\"\n> \"                                Filter: (cutoff_date IS NULL)\"\n> \"                    ->  Materialize 
 (cost=19800.37..21112.29 rows=104954\n> width=27)\"\n> \"                          ->  Sort  (cost=19800.37..20062.75 rows=104954\n> width=27)\"\n> \"                                Sort Key: vc2.short_desc_75\"\n> \"                                ->  Seq Scan on vendor_catalog vc2\n>  (cost=0.00..8534.52 rows=104954 width=27)\"\n> \"                                      Filter: (cutoff_date IS NULL)\"\n> \"  ->  Seq Scan on xc_products  (cost=0.00..1716.99 rows=5132 width=8)\"\n> \"        Filter: (xc_products.vc_th_sku IS NOT NULL)\"\n>\n>\n>\n> So, my question is, do I need  to re-write all of my in() and not in ()\n> queries to left joins, is this something that might get resolved in another\n> release in the future?\n>\n> Thanks for any help.\n>\n> Roger Ging\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nWhen fascism comes to America, it will be intolerance sold as diversity.\n", "msg_date": "Tue, 20 Apr 2010 13:33:32 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance change from 8.3.1 to later releases" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging <[email protected]> wrote:\n>> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. �I have\n>> noticed that on the 8.4.* versions, a lot of our code is either taking much\n>> longer to complete, or never completing. �I think I have isolated the\n>> problem to queries using in(), not in() or not exists(). �I've put together\n>> a test case with one particular query that demonstrates the problem.\n\n> We get a Seq Scan with a huge cost, and no hash agg or quick sort. Is\n> the work_mem the same or similar?\n\nIt looks to me like it's not. The 8.4 plan is showing sorts spilling to\ndisk for amounts of data that the 8.3 plan is perfectly willing to hold\nin memory. I'm also wondering if the 8.4 server is on comparable\nhardware, because it seems to be only about half as fast for the plain\nseqscan steps, which surely ought to be no worse than before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Apr 2010 23:47:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance change from 8.3.1 to later releases " } ]
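A quick way to check the work_mem theory raised in the replies above is to bump it for a single session and re-run the slow query from the thread. This is only a minimal sketch: the query text is the NOT IN version already posted, and the 16MB figure is just the test value suggested in the discussion, not a tuned recommendation.

    SET work_mem = '16MB';   -- session-local test value only
    EXPLAIN ANALYZE
    SELECT vc.* FROM traderhank.vendor_catalog vc
    WHERE vc.th_sku NOT IN (
        SELECT vc1.th_sku
        FROM traderhank.vendor_catalog vc1
        JOIN traderhank.vendor_catalog vc2
          ON vc2.short_desc_75 = vc1.short_desc_75
         AND vc2.th_sku != vc1.th_sku
        WHERE vc1.cutoff_date IS NULL
          AND vc2.cutoff_date IS NULL
          AND vc1.th_sku IS NOT NULL
    )
    AND vc.th_sku NOT IN (
        SELECT vc_th_sku FROM traderhank.xc_products
        WHERE vc_th_sku IS NOT NULL
    );
    RESET work_mem;          -- back to the server default

If the 8.4 plan then shows hashed SubPlans and in-memory quicksorts rather than Materialize nodes and 'Sort Method: external merge Disk', the difference between the servers was most likely work_mem rather than a planner regression.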
[ { "msg_contents": "I have a DB with small and large tables that can go up to 15G.\nFor performance benefits, it appears that analyze has much less cost\nthan vacuum, but the same benefits?\nI can’t find any clear recommendations for frequencies and am\nconsidering these parameters:\n\nAutovacuum_vacuum_threshold = 50000\nAutovacuum_analyze_threshold = 10000\nAutovacuum_vacuum_scale_factor = 0.01\nAutovacuum_analyze_scale_factor = 0.005\n\nThis appears it will result in table analyzes occurring around 10,000\nto 85,000 dead tuples and vacuum occuring around 50,000 to 200,000,\ndepending on the table sizes.\n\nCan anyone comment on whether this is the right strategy and targets\nto use?\n", "msg_date": "Wed, 21 Apr 2010 08:06:11 -0700 (PDT)", "msg_from": "Rick <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum strategy / parameters" }, { "msg_contents": "On Wed, Apr 21, 2010 at 11:06 AM, Rick <[email protected]> wrote:\n> I have a DB with small and large tables that can go up to 15G.\n> For performance benefits, it appears that analyze has much less cost\n> than vacuum, but the same benefits?\n\nErr, no. ANALYZE gathers statistics for the query planner; VACUUM\nclears out old, dead tuples so that space can be reused by the\ndatabase system.\n\n> I can’t find any clear recommendations for frequencies and am\n> considering these parameters:\n>\n> Autovacuum_vacuum_threshold = 50000\n> Autovacuum_analyze_threshold = 10000\n> Autovacuum_vacuum_scale_factor = 0.01\n> Autovacuum_analyze_scale_factor = 0.005\n>\n> This appears it will result in table analyzes occurring around 10,000\n> to 85,000 dead tuples and vacuum occuring around 50,000 to 200,000,\n> depending on the table sizes.\n>\n> Can anyone comment on whether this is the right strategy and targets\n> to use?\n\nI'm not that familiar with tuning these parameters but increasing the\ndefault thesholds by a thousand-fold doesn't seem like a good idea.\nSmall tables will never get vacuumed or analyzed at all.\n\n...Robert\n", "msg_date": "Thu, 22 Apr 2010 14:55:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Apr 22, 2:55 pm, [email protected] (Robert Haas) wrote:\n> On Wed, Apr 21, 2010 at 11:06 AM, Rick <[email protected]> wrote:\n> > I have a DB with small and large tables that can go up to 15G.\n> > For performance benefits, it appears that analyze has much less cost\n> > than vacuum, but the same benefits?\n>\n> Err, no.  
ANALYZE gathers statistics for the query planner; VACUUM\n> clears out old, dead tuples so that space can be reused by the\n> database system.\n>\n> > I can’t find any clear recommendations for frequencies and am\n> > considering these parameters:\n>\n> > Autovacuum_vacuum_threshold = 50000\n> > Autovacuum_analyze_threshold = 10000\n> > Autovacuum_vacuum_scale_factor = 0.01\n> > Autovacuum_analyze_scale_factor = 0.005\n>\n> > This appears it will result in table analyzes occurring around 10,000\n> > to 85,000 dead tuples and vacuum occuring around 50,000 to 200,000,\n> > depending on the table sizes.\n>\n> > Can anyone comment on whether this is the right strategy and targets\n> > to use?\n>\n> I'm not that familiar with tuning these parameters but increasing the\n> default thesholds by a thousand-fold doesn't seem like a good idea.\n> Small tables will never get vacuumed or analyzed at all.\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nThe problem is with the autovacuum formula:\n\nIn a loop, autovacuum checks to see if number of dead tuples >\n((number of live tuples * autovacuum_vacuum_scale_factor) +\nautovacuum_vacuum_threshold), and if\nso, it runs VACUUM. If not, it sleeps. It works the same way for\nANALYZE.\n\nSo, in a large table, the scale_factor is the dominant term. In a\nsmall\ntable, the threshold is the dominant term. But both are taken into\naccount.\n\nThe default values are set for small tables; it is not being run for\nlarge tables.\nThe question boils down to exactly what is the max number of dead\ntuples that should be allowed to accumulate before running analyze?\nSince vacuum just recovers space, that doesn't seem to be nearly as\ncritical for performance?\n\n-Rick\n", "msg_date": "Thu, 22 Apr 2010 13:42:41 -0700 (PDT)", "msg_from": "Rick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Rick wrote:\n\n> So, in a large table, the scale_factor is the dominant term. In a\n> small\n> table, the threshold is the dominant term. But both are taken into\n> account.\n\nCorrect.\n\n> The default values are set for small tables; it is not being run for\n> large tables.\n\nSo decrease the scale factor and leave threshold alone.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 26 Apr 2010 12:58:22 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Thu, Apr 22, 2010 at 4:42 PM, Rick <[email protected]> wrote:\n> On Apr 22, 2:55 pm, [email protected] (Robert Haas) wrote:\n>> On Wed, Apr 21, 2010 at 11:06 AM, Rick <[email protected]> wrote:\n>> > I have a DB with small and large tables that can go up to 15G.\n>> > For performance benefits, it appears that analyze has much less cost\n>> > than vacuum, but the same benefits?\n>>\n>> Err, no.  
ANALYZE gathers statistics for the query planner; VACUUM\n>> clears out old, dead tuples so that space can be reused by the\n>> database system.\n>>\n>> > I can’t find any clear recommendations for frequencies and am\n>> > considering these parameters:\n>>\n>> > Autovacuum_vacuum_threshold = 50000\n>> > Autovacuum_analyze_threshold = 10000\n>> > Autovacuum_vacuum_scale_factor = 0.01\n>> > Autovacuum_analyze_scale_factor = 0.005\n>>\n>> > This appears it will result in table analyzes occurring around 10,000\n>> > to 85,000 dead tuples and vacuum occuring around 50,000 to 200,000,\n>> > depending on the table sizes.\n>>\n>> > Can anyone comment on whether this is the right strategy and targets\n>> > to use?\n>>\n>> I'm not that familiar with tuning these parameters but increasing the\n>> default thesholds by a thousand-fold doesn't seem like a good idea.\n>> Small tables will never get vacuumed or analyzed at all.\n>>\n>> ...Robert\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n>\n> The problem is with the autovacuum formula:\n>\n> In a loop, autovacuum checks to see if number of dead tuples >\n> ((number of live tuples * autovacuum_vacuum_scale_factor) +\n> autovacuum_vacuum_threshold), and if\n> so, it runs VACUUM. If not, it sleeps. It works the same way for\n> ANALYZE.\n>\n> So, in a large table, the scale_factor is the dominant term. In a\n> small\n> table, the threshold is the dominant term. But both are taken into\n> account.\n>\n> The default values are set for small tables; it is not being run for\n> large tables.\n> The question boils down to exactly what is the max number of dead\n> tuples that should be allowed to accumulate before running analyze?\n> Since vacuum just recovers space, that doesn't seem to be nearly as\n> critical for performance?\n\nThat doesn't really match my experience. Without regular vacuuming,\ntables and indices end up being larger than they ought to be and\ncontain large amounts of dead space that slows things down. How much\nof an impact that ends up having depends on how badly bloated they are\nand what you're trying to do, but it can get very ugly.\n\nMy guess is that the reason we run ANALYZE more frequently than vacuum\n(with the default settings) is that ANALYZE is pretty cheap. In many\ncases, if the statistical distribution of the data hasn't changed\nmuch, then it's not really necessary, but it doesn't cost much either.\n And for certain types of usage patterns, like time series (where the\nmaximum value keeps increasing) it's REALLY important to analyze\nfrequently.\n\nBut having said that, on the systems I've worked with, I've only\nrarely seen a problem caused by not analyzing frequently enough. On\nthe other hand, I've seen MANY problems caused by not vacuuming\nenough. Someone runs a couple of big queries that rewrite a large\nportion of a table several times over and, boom, problems. 8.3 and\nhigher are better about this because of an optimization called HOT,\nbut there can still be problems.\n\nOther people's experiences may not match mine, but the bottom line is\nthat you need to do both of these things, and you need to make sure\nthey happen regularly. 
In most cases, the CPU and I/O time they\nconsume will be amply repaid in improved query performance.\n\n...Robert\n", "msg_date": "Tue, 27 Apr 2010 20:55:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Robert Haas <[email protected]> wrote:\n> Rick <[email protected]> wrote:\n\n>> Since vacuum just recovers space, that doesn't seem to be nearly\n>> as critical for performance?\n> \n> That doesn't really match my experience. Without regular\n> vacuuming, tables and indices end up being larger than they ought\n> to be and contain large amounts of dead space that slows things\n> down. How much of an impact that ends up having depends on how\n> badly bloated they are and what you're trying to do, but it can\n> get very ugly.\n \nThat has been my experience, too. When we first started using\nPostgreSQL, we noticed a performance hit when some small tables\nwhich were updated very frequently were vacuumed. Our knee-jerk\nreaction was to tune autovacuum to be less aggressive, so that we\ndidn't get hit with the pain as often. Of course, things just got\nworse, because every access to that table, when vacuum hadn't been\nrun recently, had to examine all versions of the desired row, and\ntest visibility for each version, to find the current one. So\nperformance fell off even worse. So we went to much more aggressive\nsettings for autovacuum (although only slightly more aggressive than\nwhat has since become the default) and the problems disappeared.\n \nBasically, as long as small tables are not allowed to bloat,\nvacuuming them is so fast that you never notice it.\n \n> 8.3 and higher are better about this because of an optimization\n> called HOT, but there can still be problems.\n \nAgreed on both counts.\n \n-Kevin\n", "msg_date": "Wed, 28 Apr 2010 08:53:36 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Rick, 22.04.2010 22:42:\n>\n> So, in a large table, the scale_factor is the dominant term. In a\n> small table, the threshold is the dominant term. But both are taken into\n> account.\n>\n> The default values are set for small tables; it is not being run for\n> large tables.\n\nWith 8.4 you can adjust the autovacuum settings per table...\n\n\n\n", "msg_date": "Wed, 28 Apr 2010 16:20:53 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Hi -\n don't want to side track the discussion. We have 8.4, which of\nAUTOVACUUM PARAMETERS can be set to handle individual table? I ran into\nbloat with small table only. Now the issue is being resolved.\n\nRegards\nOn Wed, Apr 28, 2010 at 10:20 AM, Thomas Kellerer <[email protected]>wrote:\n\n> Rick, 22.04.2010 22:42:\n>\n>\n>> So, in a large table, the scale_factor is the dominant term. In a\n>> small table, the threshold is the dominant term. But both are taken into\n>> account.\n>>\n>> The default values are set for small tables; it is not being run for\n>> large tables.\n>>\n>\n> With 8.4 you can adjust the autovacuum settings per table...\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi -\n           don't want to side track the discussion. 
We have 8.4, which of AUTOVACUUM PARAMETERS can be set to handle individual table?  I ran into bloat with small table only. Now the issue is being resolved. \n \nRegards\nOn Wed, Apr 28, 2010 at 10:20 AM, Thomas Kellerer <[email protected]> wrote:\nRick, 22.04.2010 22:42: \n\nSo, in a large table, the scale_factor is the dominant term. In asmall table, the threshold is the dominant term. But both are taken into\naccount.The default values are set for small tables; it is not being run forlarge tables.With 8.4 you can adjust the autovacuum settings per table... \n\n\n-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Apr 2010 10:37:35 -0400", "msg_from": "akp geek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Check out the manual:\n\nhttp://www.postgresql.org/docs/8.4/static/routine-vacuuming.html#AUTOVACUUM\n\nCheers,\nKen\n\nOn Wed, Apr 28, 2010 at 10:37:35AM -0400, akp geek wrote:\n> Hi -\n> don't want to side track the discussion. We have 8.4, which of\n> AUTOVACUUM PARAMETERS can be set to handle individual table? I ran into\n> bloat with small table only. Now the issue is being resolved.\n> \n> Regards\n> On Wed, Apr 28, 2010 at 10:20 AM, Thomas Kellerer <[email protected]>wrote:\n> \n> > Rick, 22.04.2010 22:42:\n> >\n> >\n> >> So, in a large table, the scale_factor is the dominant term. In a\n> >> small table, the threshold is the dominant term. But both are taken into\n> >> account.\n> >>\n> >> The default values are set for small tables; it is not being run for\n> >> large tables.\n> >>\n> >\n> > With 8.4 you can adjust the autovacuum settings per table...\n> >\n> >\n> >\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n", "msg_date": "Wed, 28 Apr 2010 09:45:36 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "akp geek, 28.04.2010 16:37:\n> We have 8.4, which of AUTOVACUUM PARAMETERS can be set to handle individual table?\n\nAll documented here:\nhttp://www.postgresql.org/docs/current/static/sql-createtable.html\n\n\n", "msg_date": "Wed, 28 Apr 2010 16:45:57 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "\n> My guess is that the reason we run ANALYZE more frequently than vacuum\n> (with the default settings) is that ANALYZE is pretty cheap. In many\n> cases, if the statistical distribution of the data hasn't changed\n> much, then it's not really necessary, but it doesn't cost much either.\n> And for certain types of usage patterns, like time series (where the\n> maximum value keeps increasing) it's REALLY important to analyze\n> frequently.\n> \n> But having said that, on the systems I've worked with, I've only\n> rarely seen a problem caused by not analyzing frequently enough. On\n> the other hand, I've seen MANY problems caused by not vacuuming\n> enough. \n\nWhich is the opposite of my experience; currently we have several\nclients who have issues which required more-frequent analyzes on\nspecific tables. Before 8.4, vacuuming more frequently, especially on\nlarge tables, was very costly; vacuum takes a lot of I/O and CPU. 
Even\nwith 8.4 it's not something you want to increase without thinking about\nthe tradeoffs.\n\nSince I'm responsible for the current defaults, I though I'd explain the\nreasoning behind them. I developed and tested them while at Greenplum,\nso they are *not* designed for small databases.\n\n#autovacuum_vacuum_threshold = 50\n#autovacuum_analyze_threshold = 50\n\nThese two are set to the minimum threshold to avoid having small tables\nget vacuum/analyzed continuously, but to make sure that small tables do\nget vacuumed & analyzed sometimes.\n\n#autovacuum_vacuum_scale_factor = 0.2\n\nThis is set because in my experience, 20% bloat is about the level at\nwhich bloat starts affecting performance; thus, we want to vacuum at\nthat level but not sooner. This does mean that very large tables which\nnever have more than 10% updates/deletes don't get vacuumed at all until\nfreeze_age; this is a *good thing*. VACUUM on large tables is expensive;\nyou don't *want* to vacuum a billion-row table which has only 100,000\nupdates.\n\n#autovacuum_analyze_scale_factor = 0.1\n\nThe 10% threshold for analyze is there because (a) analyze is cheap, and\n(b) 10% changes to a table can result in very bad plans if the changes\nare highly skewed towards a specific range, such as additions onto the\nend of a time-based table.\n\nThe current postgres defaults were tested on DBT2 as well as pgbench,\nand in my last 2 years of consulting I've seldom found reason to touch\nthem except on *specific* tables. So I still feel that they are good\ndefaults.\n\nIt would be worth doing a DBT2/DBT5 test run with different autovac\nsettings post-8.4 so see if we should specifically change the vacuum\nthreshold. Pending that, though, I think the current defaults are good\nenough.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 30 Apr 2010 15:50:54 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Josh Berkus escribi�:\n\n> #autovacuum_vacuum_scale_factor = 0.2\n> \n> This is set because in my experience, 20% bloat is about the level at\n> which bloat starts affecting performance; thus, we want to vacuum at\n> that level but not sooner. This does mean that very large tables which\n> never have more than 10% updates/deletes don't get vacuumed at all until\n> freeze_age; this is a *good thing*. 
VACUUM on large tables is expensive;\n> you don't *want* to vacuum a billion-row table which has only 100,000\n> updates.\n\nHmm, now that we have partial vacuum, perhaps we should revisit this.\n\n\n> It would be worth doing a DBT2/DBT5 test run with different autovac\n> settings post-8.4 so see if we should specifically change the vacuum\n> threshold.\n\nRight.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 30 Apr 2010 23:08:20 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Fri, Apr 30, 2010 at 6:50 PM, Josh Berkus <[email protected]> wrote:\n> Which is the opposite of my experience; currently we have several\n> clients who have issues which required more-frequent analyzes on\n> specific tables.\n\nThat's all fine, but probably not too relevant to the original\ncomplaint - the OP backed off the default settings by several orders\nof magnitude, which might very well cause a problem with both VACUUM\nand ANALYZE.\n\nI don't have a stake in the ground on what the right settings are, but\nI think it's fair to say that if you vacuum OR analyze much less\nfrequently than what we recommend my default, it might break.\n\n...Robert\n", "msg_date": "Sat, 1 May 2010 07:39:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Wed, Apr 28, 2010 at 8:20 AM, Thomas Kellerer <[email protected]> wrote:\n> Rick, 22.04.2010 22:42:\n>>\n>> So, in a large table, the scale_factor is the dominant term. In a\n>> small table, the threshold is the dominant term. But both are taken into\n>> account.\n>>\n>> The default values are set for small tables; it is not being run for\n>> large tables.\n>\n> With 8.4 you can adjust the autovacuum settings per table...\n\nYou can as well with 8.3, but it's not made by alter table but by\npg_autovacuum table entries.\n", "msg_date": "Sat, 1 May 2010 10:08:36 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus <[email protected]> wrote:\n> Which is the opposite of my experience; currently we have several\n> clients who have issues which required more-frequent analyzes on\n> specific tables.   Before 8.4, vacuuming more frequently, especially on\n> large tables, was very costly; vacuum takes a lot of I/O and CPU.  Even\n> with 8.4 it's not something you want to increase without thinking about\n> the tradeoff\n\nActually I would think that statement would be be that before 8.3\nvacuum was much more expensive. The changes to vacuum for 8.4 mostly\nhad to do with moving FSM to disk, making seldom vacuumed tables\neasier to keep track of, and making autovac work better in the\npresence of long running transactions. 
The ability to tune IO load\netc was basically unchanged in 8.4.\n", "msg_date": "Sat, 1 May 2010 10:13:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Robert Haas wrote:\n> I don't have a stake in the ground on what the right settings are, but\n> I think it's fair to say that if you vacuum OR analyze much less\n> frequently than what we recommend my default, it might break.\n> \n\nI think the default settings are essentially minimum recommended \nfrequencies. They aren't too terrible for the giant data warehouse case \nJosh was suggesting they came from--waiting until there's 20% worth of \ndead stuff before kicking off an intensive vacuum is OK when vacuum is \nexpensive and you're mostly running big queries anyway. And for smaller \ntables, the threshold helps it kick in a little earlier. It's unlikely \nanyone wants to *increase* those, so that autovacuum runs even less; out \nof the box it's not tuned to run very often at all.\n\nIf anything, I'd expect people to want to increase how often it runs, \nfor tables where much less than 20% dead is a problem. The most common \nsituation I've seen where that's the case is when you have a hotspot of \nheavily updated rows in a large table, and this may match some of the \nsituations that Robert was alluding to seeing. Let's say you have a big \ntable where 0.5% of the users each update their respective records \nheavily, averaging 30 times each. That's only going to result in 15% \ndead rows, so no autovacuum. But latency for those users will suffer \ngreatly, because they might have to do lots of seeking around to get \ntheir little slice of the data.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 01 May 2010 13:11:05 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> If anything, I'd expect people to want to increase how often it runs, \n> for tables where much less than 20% dead is a problem. The most common \n> situation I've seen where that's the case is when you have a hotspot of \n> heavily updated rows in a large table, and this may match some of the \n> situations that Robert was alluding to seeing. Let's say you have a big \n> table where 0.5% of the users each update their respective records \n> heavily, averaging 30 times each. That's only going to result in 15% \n> dead rows, so no autovacuum. But latency for those users will suffer \n> greatly, because they might have to do lots of seeking around to get \n> their little slice of the data.\n\nWith a little luck, HOT will alleviate that case, since HOT updates can\nbe reclaimed without running vacuum per se. 
I agree there's a risk\nthere though.\n\nNow that partial vacuum is available, it'd be a real good thing to\nrevisit these numbers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 May 2010 13:25:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters " }, { "msg_contents": "On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus <[email protected]> wrote:\n>> Which is the opposite of my experience; currently we have several\n>> clients who have issues which required more-frequent analyzes on\n>> specific tables.   Before 8.4, vacuuming more frequently, especially on\n>> large tables, was very costly; vacuum takes a lot of I/O and CPU.  Even\n>> with 8.4 it's not something you want to increase without thinking about\n>> the tradeoff\n>\n> Actually I would think that statement would be be that before 8.3\n> vacuum was much more expensive.  The changes to vacuum for 8.4 mostly\n> had to do with moving FSM to disk, making seldom vacuumed tables\n> easier to keep track of, and making autovac work better in the\n> presence of long running transactions.  The ability to tune IO load\n> etc was basically unchanged in 8.4.\n\nWhat about http://www.postgresql.org/docs/8.4/static/storage-vm.html ?\n\n...Robert\n", "msg_date": "Sat, 1 May 2010 15:08:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Sat, May 1, 2010 at 1:08 PM, Robert Haas <[email protected]> wrote:\n> On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe <[email protected]> wrote:\n>> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus <[email protected]> wrote:\n>>> Which is the opposite of my experience; currently we have several\n>>> clients who have issues which required more-frequent analyzes on\n>>> specific tables.   Before 8.4, vacuuming more frequently, especially on\n>>> large tables, was very costly; vacuum takes a lot of I/O and CPU.  Even\n>>> with 8.4 it's not something you want to increase without thinking about\n>>> the tradeoff\n>>\n>> Actually I would think that statement would be be that before 8.3\n>> vacuum was much more expensive.  The changes to vacuum for 8.4 mostly\n>> had to do with moving FSM to disk, making seldom vacuumed tables\n>> easier to keep track of, and making autovac work better in the\n>> presence of long running transactions.  The ability to tune IO load\n>> etc was basically unchanged in 8.4.\n>\n> What about http://www.postgresql.org/docs/8.4/static/storage-vm.html ?\n\nThat really only has an effect no tables that aren't updated very\noften. Unless you've got a whole bunch of those, it's not that big of\na deal.\n", "msg_date": "Sat, 1 May 2010 13:17:12 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Sat, May 1, 2010 at 1:17 PM, Scott Marlowe <[email protected]> wrote:\n> On Sat, May 1, 2010 at 1:08 PM, Robert Haas <[email protected]> wrote:\n>> On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe <[email protected]> wrote:\n>>> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus <[email protected]> wrote:\n>>>> Which is the opposite of my experience; currently we have several\n>>>> clients who have issues which required more-frequent analyzes on\n>>>> specific tables.   
Before 8.4, vacuuming more frequently, especially on\n>>>> large tables, was very costly; vacuum takes a lot of I/O and CPU.  Even\n>>>> with 8.4 it's not something you want to increase without thinking about\n>>>> the tradeoff\n>>>\n>>> Actually I would think that statement would be be that before 8.3\n>>> vacuum was much more expensive.  The changes to vacuum for 8.4 mostly\n>>> had to do with moving FSM to disk, making seldom vacuumed tables\n>>> easier to keep track of, and making autovac work better in the\n>>> presence of long running transactions.  The ability to tune IO load\n>>> etc was basically unchanged in 8.4.\n>>\n>> What about http://www.postgresql.org/docs/8.4/static/storage-vm.html ?\n>\n> That really only has an effect no tables that aren't updated very\n> often.  Unless you've got a whole bunch of those, it's not that big of\n> a deal.\n\nsigh, s/ no / on /\n\nAnyway, my real point was that the big improvements that made vacuum\nso much better came in 8.3, with HOT updates and multi-threaded vacuum\n(that might have shown up in 8.2 even) 8.3 was a huge improvement and\ncompelling upgrade from 8.1 for me.\n", "msg_date": "Sat, 1 May 2010 13:19:47 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" }, { "msg_contents": "On Sat, May 1, 2010 at 1:11 PM, Greg Smith <[email protected]> wrote:\n> Robert Haas wrote:\n>>\n>> I don't have a stake in the ground on what the right settings are, but\n>> I think it's fair to say that if you vacuum OR analyze much less\n>> frequently than what we recommend my default, it might break.\n>>\n>\n> I think the default settings are essentially minimum recommended\n> frequencies.  They aren't too terrible for the giant data warehouse case\n> Josh was suggesting they came from--waiting until there's 20% worth of dead\n> stuff before kicking off an intensive vacuum is OK when vacuum is expensive\n> and you're mostly running big queries anyway.  And for smaller tables, the\n> threshold helps it kick in a little earlier.  It's unlikely anyone wants to\n> *increase* those, so that autovacuum runs even less; out of the box it's not\n> tuned to run very often at all.\n>\n> If anything, I'd expect people to want to increase how often it runs, for\n> tables where much less than 20% dead is a problem.  The most common\n> situation I've seen where that's the case is when you have a hotspot of\n> heavily updated rows in a large table, and this may match some of the\n> situations that Robert was alluding to seeing.  Let's say you have a big\n> table where 0.5% of the users each update their respective records heavily,\n> averaging 30 times each.  That's only going to result in 15% dead rows, so\n> no autovacuum.  But latency for those users will suffer greatly, because\n> they might have to do lots of seeking around to get their little slice of\n> the data.\n\nFor me it's more that my applications are typically really fast, and\nwhen they run at half-speed people think \"oh, it's slow today\" but\nthey can still work and attribute the problem to their computer, or\nthe network, or something. When they slow down by like 10x then they\nfile a bug. 
I'm typically dealing with a situation where the whole\ndatabase can be easily cached in RAM and the CPU is typically 90%\nidle, which cushions the blow quite a bit.\n\nA few months ago someone reported that \"the portal was slow\" and the\nproblem turned out to be that the database was bloated by in excess of\na factor a factor of 10 due to having blown out the free space map. I\nwasn't a regular user of that system at that time so hadn't had the\nopportunity to notice myself.\n\n...Robert\n", "msg_date": "Sat, 1 May 2010 15:20:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum strategy / parameters" } ]
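Several replies above point at per-table autovacuum settings without spelling out the syntax. A minimal sketch for 8.4, where these are ordinary storage parameters; the table name and numbers here are placeholders, not recommendations, and on 8.3 the equivalent is a row in the pg_autovacuum system catalog rather than ALTER TABLE.

    -- Make autovacuum/autoanalyze more aggressive on one hot table (8.4+).
    -- Vacuum fires when dead tuples > threshold + scale_factor * reltuples.
    ALTER TABLE my_hot_table SET (
        autovacuum_vacuum_scale_factor  = 0.05,
        autovacuum_vacuum_threshold     = 100,
        autovacuum_analyze_scale_factor = 0.02,
        autovacuum_analyze_threshold    = 100
    );

    -- Fall back to the global postgresql.conf defaults later:
    ALTER TABLE my_hot_table RESET (
        autovacuum_vacuum_scale_factor,
        autovacuum_vacuum_threshold,
        autovacuum_analyze_scale_factor,
        autovacuum_analyze_threshold
    );

This keeps the global defaults untouched for everything else while letting a small, heavily updated table get vacuumed well before the 20% bloat point discussed above.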
[ { "msg_contents": "I have previously discussed my very long PL/PGSQL stored procedure on this\nlist. However, without getting into too many details, I have another\nperformance-related question.\n\nThe procedure currently uses cursors to return multiple result sets to the\nprogram executing the procedure. Basically, I do this:\n\nBEGIN;\nSELECT * FROM stored_proc();\nFETCH ALL FROM cursor1;\nFETCH ALL FROM cursor2;\nFETCH ALL FROM cursor3;\netc.\nCOMMIT;\n\nHowever, there are some cases in the stored procedure where some of the\nresult sets returned by these cursors are also needed as inputs to\nadditional queries. To use them, I am currently doing:\n\nFOR temp IN cursorX LOOP\n -- Some code that pushes the current temp record onto the end of an array\nEND LOOP;\nOPEN cursorX;\nMOVE FIRST FROM cursorX;\n\nThen, when I need to use the results in a query, I do something like:\n\nSELECT * FROM table1 INNER JOIN (SELECT * FROM unnest(result_array)) AS\ntable2 ON ( blah blah ) WHERE blah\n\nThis seems extremely inefficient to me. First, I'm not sure of the penalty\nfor unnesting an array into a SET OF object. Second, the set of records\nreturned from unnesting would not be indexed for the join which means a\nsequential scan. Third, building the array in the first place using\narray_append seems extremely inefficient. Fourth, opening the cursor twice\nseems like it would execute the query twice, though given the placement and\ncontext, it's probably got it cached somewhere (hopefully). I'm sure there\nare probably other things I am overlooking.\n\nInstead of doing things this way, I think using temporary tables is really\nwhat I want. I am thinking that instead of doing this cursor BS, I can do\nsomething like:\n\nCREATE TEMPORARY TABLE table2 WITH (OIDS=FALSE) ON COMMIT DROP AS (\n SELECT * FROM blah blah blah -- whatever the cursor is defined as doing\n);\nALTER TABLE table2 ADD PRIMARY KEY (id);\nCREATE INDEX table2_blah_idx ON table2 USING btree (blah);\nANALYZE table2;\n\nThen, when I need to use the results in another query, I could do:\n\nSELECT * FROM table1 INNER JOIN table2 ON ( blah blah ) WHERE blah\n\nThis would use the indexes and the primary key appropriately. I could also\nensure that the order of the information in the temporary table is such that\nit facilitates any joining, where clauses, or order by clauses on the\nadditional queries. Finally, to get results into my application, I would\nthen do:\n\nBEGIN;\nSELECT * FROM stored_procedure();\nSELECT * FROM temp_table1;\nSELECT * FROM temp_table2;\nSELECT * FROM temp_table3;\netc\nCOMMIT;\n\nHowever, this is a fairly major re-write of how things are done. Before I\nspend the time to do all that re-writing, can anyone share some insight on\nwhere / how I might expect to gain performance from this conversion and also\nspeak to some of the overhead (if any) in using temporary tables like this\n(building them, creating indexes on them, analyzing them, then dropping them\non commit)? It is worth mentioning that the data being stored in these\ntemporary tables is probably <1,000 records for all tables involved. Most\nwill probably be <100 records. Some of these temporary tables will be joined\nto other tables up to 4 more times throughout the rest of the stored\nprocedure. Most will be generated and then retrieved only from outside the\nstored procedure. 
Obviously, I would not create indexes on or analyze the\ntemporary tables being retrieved only from outside the stored procedure.\nIndexes and primary keys will only be created on the tables that are joined\nto other tables and have WHERE conditions applied to them.\n\nI have done a lot of Googling on temporary tables and cursors in PostgreSQL,\nbut I have found only very limited discussion as to performance differences\nwith respect to how I'm planning on using them, and I am unsure about the\nquality of the information given that most of it is 4+ years out of date and\nposted on various expert exchanges and not on this pgsql-performance list.\n\nOne final question:\n\nIn this conversion to temporary table use, there are a couple of cases where\nI would prefer to do something like:\n\nprepare blah(blah blah) as select blah;\n\nThen, I want to call this prepared statement multiple times, passing a\ndifferent argument value each time. The only reason to do this would be to\nsave writing code and to ensure that updating the select statement in once\nplace covers all places where it is used. However, I am concerned it might\nincur a performance hit by re-preparing the query since I assume that having\nthis inside the PL/PGSQL procedure means it is already prepared once. Can\nanyone speak to this? I know that I could put it in a separate stored\nprocedure, but then the question becomes, does that add extra overhead? Or,\nin different words, is it similar to the difference between an inlined\nfunction and a non-inlined function in C?\n\nI would greatly appreciate any insights to these questions/issues.\n\nThanks in advance for any assistance anyone can provide.\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nI have previously discussed my very long PL/PGSQL stored procedure on this list. However, without getting into too many details, I have another performance-related question.The procedure currently uses cursors to return multiple result sets to the program executing the procedure. Basically, I do this:\nBEGIN;SELECT * FROM stored_proc();FETCH ALL FROM cursor1;FETCH ALL FROM cursor2;FETCH ALL FROM cursor3;etc.COMMIT;However, there are some cases in the stored procedure where some of the result sets returned by these cursors are also needed as inputs to additional queries. To use them, I am currently doing:\nFOR temp IN cursorX LOOP  -- Some code that pushes the current temp record onto the end of an arrayEND LOOP;OPEN cursorX;MOVE FIRST FROM cursorX;Then, when I need to use the results in a query, I do something like:\nSELECT * FROM table1 INNER JOIN (SELECT * FROM unnest(result_array)) AS table2 ON ( blah blah ) WHERE blahThis seems extremely inefficient to me. First, I'm not sure of the penalty for unnesting an array into a SET OF object. Second, the set of records returned from unnesting would not be indexed for the join which means a sequential scan. Third, building the array in the first place using array_append seems extremely inefficient. Fourth, opening the cursor twice seems like it would execute the query twice, though given the placement and context, it's probably got it cached somewhere (hopefully). 
I'm sure there are probably other things I am overlooking. \nInstead of doing things this way, I think using temporary tables is really what I want. I am thinking that instead of doing this cursor BS, I can do something like:CREATE TEMPORARY TABLE table2 WITH (OIDS=FALSE) ON COMMIT DROP AS (\n  SELECT * FROM blah blah blah -- whatever the cursor is defined as doing);ALTER TABLE table2 ADD PRIMARY KEY (id);CREATE INDEX table2_blah_idx ON table2 USING btree (blah);ANALYZE table2;Then, when I need to use the results in another query, I could do:\nSELECT * FROM table1 INNER JOIN table2 ON ( blah blah ) WHERE blahThis would use the indexes and the primary key appropriately. I could also ensure that the order of the information in the temporary table is such that it facilitates any joining, where clauses, or order by clauses on the additional queries. Finally, to get results into my application, I would then do:\nBEGIN;SELECT * FROM stored_procedure();SELECT * FROM temp_table1;SELECT * FROM temp_table2;SELECT * FROM temp_table3;etcCOMMIT;However, this is a fairly major re-write of how things are done. Before I spend the time to do all that re-writing, can anyone share some insight on where / how I might expect to gain performance from this conversion and also speak to some of the overhead (if any) in using temporary tables like this (building them, creating indexes on them, analyzing them, then dropping them on commit)? It is worth mentioning that the data being stored in these temporary tables is probably <1,000 records for all tables involved. Most will probably be <100 records. Some of these temporary tables will be joined to other tables up to 4 more times throughout the rest of the stored procedure. Most will be generated and then retrieved only from outside the stored procedure. Obviously, I would not create indexes on or analyze the temporary tables being retrieved only from outside the stored procedure. Indexes and primary keys will only be created on the tables that are joined to other tables and have WHERE conditions applied to them.\nI have done a lot of Googling on temporary tables and cursors in PostgreSQL, but I have found only very limited discussion as to performance differences with respect to how I'm planning on using them, and I am unsure about the quality of the information given that most of it is 4+ years out of date and posted on various expert exchanges and not on this pgsql-performance list. \nOne final question:In this conversion to temporary table use, there are a couple of cases where I would prefer to do something like:prepare blah(blah blah) as select blah;Then, I want to call this prepared statement multiple times, passing a different argument value each time. The only reason to do this would be to save writing code and to ensure that updating the select statement in once place covers all places where it is used. However, I am concerned it might incur a performance hit by re-preparing the query since I assume that having this inside the PL/PGSQL procedure means it is already prepared once. Can anyone speak to this? I know that I could put it in a separate stored procedure, but then the question becomes, does that add extra overhead? 
Or, in different words, is it similar to the difference between an inlined function and a non-inlined function in C?\nI would greatly appreciate any insights to these questions/issues.Thanks in advance for any assistance anyone can provide.-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \n\"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Wed, 21 Apr 2010 16:16:18 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Replacing Cursors with Temporary Tables" }, { "msg_contents": "I think it's really tough to say how this is going to perform. I'd\nrecommend constructing a couple of simplified test cases and\nbenchmarking the heck out of it. One of the problems with temporary\ntables is that every time you create a temporary table, it creates a\n(temporary) record in pg_class; that can get to be a problem if you do\nit a lot. Another is that for non-trivial queries you may need to do\na manual ANALYZE on the table to get good stats for the rest of the\nquery to perform well. But on the flip side, as you say, nesting and\nunnesting of arrays and function calls are not free either. I am\ngoing to hazard a SWAG that the array implementation is faster UNLESS\nthe lack of good stats on the contents of the arrays is hosing the\nperformance somewhere down the road. But that is really just a total\nshot in the dark.\n\nAnother possible implementation might be to have a couple of permanent\ntables where you store the results. Give each such table a \"batch id\"\ncolumn, and return the batch id from your stored procedure. This\nwould probably avoid a lot of the overhead associated with temp tables\nwhile retaining many of the benefits.\n\n...Robert\n", "msg_date": "Wed, 21 Apr 2010 21:21:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "\nOn Apr 21, 2010, at 1:16 PM, Eliot Gable wrote:\n\n> I have previously discussed my very long PL/PGSQL stored procedure on this list. However, without getting into too many details, I have another performance-related question.\n> \n> The procedure currently uses cursors to return multiple result sets to the program executing the procedure. Basically, I do this:\n> \n> CREATE TEMPORARY TABLE table2 WITH (OIDS=FALSE) ON COMMIT DROP AS (\n> SELECT * FROM blah blah blah -- whatever the cursor is defined as doing\n> );\n> ALTER TABLE table2 ADD PRIMARY KEY (id);\n> CREATE INDEX table2_blah_idx ON table2 USING btree (blah);\n> ANALYZE table2;\n> \n> Then, when I need to use the results in another query, I could do:\n> \n> SELECT * FROM table1 INNER JOIN table2 ON ( blah blah ) WHERE blah\n> \n> This would use the indexes and the primary key appropriately. I could also ensure that the order of the information in the temporary table is such that it facilitates any joining, where clauses, or order by clauses on the additional queries. 
Finally, to get results into my application, I would then do:\n\nI have had good luck with temp tables, but beware -- there isn't anything special performance wise about them -- they do as much I/O as a real table without optimizations that know that it will be dropped on commit so it doesn't have to be as fail-safe as ordinary ones. Even so, a quick \nCREATE TABLE foo ON COMMIT DROP AS (SELECT ...); \nANALYZE foo;\nSELECT FROM foo JOIN bar ... ;\ncan be very effective for performance.\n\nHowever, creating the indexes above is going to slow it down a lot. Most likely, the join with a seqscan will be faster than an index build followed by the join. After all, in order to build the index it has to seqscan! If you are consuming these tables for many later select queries rather than just one or two, building the index might help. Otherwise its just a lot of extra work.\n\nI suggest you experiment with the performance differences using psql on a specific use case on real data.\n\n\n> One final question:\n> \n> In this conversion to temporary table use, there are a couple of cases where I would prefer to do something like:\n> \n> prepare blah(blah blah) as select blah;\n> \n> Then, I want to call this prepared statement multiple times, passing a different argument value each time. The only reason to do this would be to save writing code and to ensure that updating the select statement in once place covers all places where it is used. However, I am concerned it might incur a performance hit by re-preparing the query since I assume that having this inside the PL/PGSQL procedure means it is already prepared once. Can anyone speak to this? I know that I could put it in a separate stored procedure, but then the question becomes, does that add extra overhead? Or, in different words, is it similar to the difference between an inlined function and a non-inlined function in C?\n\nI can't speak for the details in your question, but it brings up a different issue I can speak to:\nPrepared statements usually cause the planner to create a generic query plan for all possible inputs. For some queries where the parameters can significantly influence the query plan, this can be a big performance drop. For other queries (particularly inserts or simple selects on PK's) the cached plan saves time.\n\n> I would greatly appreciate any insights to these questions/issues.\n> \n> Thanks in advance for any assistance anyone can provide.\n> \n> \n> -- \n> Eliot Gable\n> \n> \"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \n> \n> \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n> \n> \"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero\n\n", "msg_date": "Wed, 21 Apr 2010 20:13:16 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Wed, Apr 21, 2010 at 4:16 PM, Eliot Gable\n<[email protected]> wrote:\n> I have previously discussed my very long PL/PGSQL stored procedure on this\n> list. However, without getting into too many details, I have another\n> performance-related question.\n>\n> The procedure currently uses cursors to return multiple result sets to the\n> program executing the procedure. 
Basically, I do this:\n>\n> BEGIN;\n> SELECT * FROM stored_proc();\n> FETCH ALL FROM cursor1;\n> FETCH ALL FROM cursor2;\n> FETCH ALL FROM cursor3;\n> etc.\n> COMMIT;\n>\n> However, there are some cases in the stored procedure where some of the\n> result sets returned by these cursors are also needed as inputs to\n> additional queries. To use them, I am currently doing:\n>\n> FOR temp IN cursorX LOOP\n>   -- Some code that pushes the current temp record onto the end of an array\n> END LOOP;\n> OPEN cursorX;\n> MOVE FIRST FROM cursorX;\n>\n> Then, when I need to use the results in a query, I do something like:\n>\n> SELECT * FROM table1 INNER JOIN (SELECT * FROM unnest(result_array)) AS\n> table2 ON ( blah blah ) WHERE blah\n>\n> This seems extremely inefficient to me. First, I'm not sure of the penalty\n> for unnesting an array into a SET OF object. Second, the set of records\n> returned from unnesting would not be indexed for the join which means a\n> sequential scan. Third, building the array in the first place using\n> array_append seems extremely inefficient. Fourth, opening the cursor twice\n> seems like it would execute the query twice, though given the placement and\n> context, it's probably got it cached somewhere (hopefully). I'm sure there\n> are probably other things I am overlooking.\n\n*) don't use temp tables unless there is no other way (for example, if\nthe set is quite large)\n*) unnest is cheap unless the array is large\n*) Don't build arrays thay way:\n\ndeclare a_cursor for a_query\n\nbecomes\nCREATE FUNCTION get_foos(out foo[]) RETURNS record AS -- foo is a\ntable or composite type\n$$\nBEGIN\n select array (a_query) into foos;\n [...]\n$$ language plpgsql;\n\nIn 8.4, we will manipulate the results typically like this:\n\nWITH f AS (select unnest(foos) as foo)\nSELECT * from f join bar on (f).foo.bar_id= bar.bar_id [...]\n\nor this:\nWITH f AS (select (foo).* from (select unnest(foos) as foo) q)\nSELECT * from f join bar on f.bar_id= bar.bar_id [...]\n\n\nThis will use an index on bar.bar_id if it exists. Obviously, any\nindexes on foo are not used after creating the array but doesn't\nmatter much as long as the right side is indexed. Your cursor method\ndoes do any better in this regard. You can create an index on a temp\ntable but the cost of building the index will probably be more than\nany savings you get unless this is some type of special case, for\nexample if the left (temp table) side is big and you need to have it\nsorted from that side.\n\nmerlin\n", "msg_date": "Thu, 22 Apr 2010 08:14:42 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Thu, Apr 22, 2010 at 8:14 AM, Merlin Moncure <[email protected]> wrote:\n> This will use an index on bar.bar_id if it exists.  Obviously, any\n> indexes on foo are not used after creating the array but doesn't\n> matter much as long as the right side is indexed.  Your cursor method\n> does do any better in this regard.  You can create an index on a temp\n\ner, meant to say: 'doesn't do any better'.\n\nmerlin\n", "msg_date": "Thu, 22 Apr 2010 08:17:00 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Wed, Apr 21, 2010 at 4:16 PM, Eliot Gable\n<[email protected]> wrote:\n> I have previously discussed my very long PL/PGSQL stored procedure on this\n> list. 
However, without getting into too many details, I have another\n> performance-related question.\n\nok, here's a practical comparion:\n-- test data\ncreate table foo(foo_id int primary key);\ninsert into foo select generate_series(1, 1000) v;\ncreate table bar(bar_id int, foo_id int references foo);\ncreate index bar_foo_id_idx on bar(foo_id);\ninsert into bar select v, (v % 1000) + 1 from generate_series(1, 1000000) v;\n\n-- arrays\ncreate or replace function get_foobars(_foo_id int, _foo out foo,\n_bars out bar[]) returns record as\n$$\n begin\n select * from foo where foo_id = _foo_id into _foo;\n\n select array(select bar from bar where foo_id = _foo_id) into _bars;\n end;\n$$ language plpgsql;\n\nselect (unnest(_bars)).* from get_foobars(6); -- ~ 4ms on my box\n\n-- temp table\n\ncreate or replace function get_foobars(_foo_id int) returns void as\n$$\n begin\n create temp table bars on commit drop as select * from bar where\nfoo_id = _foo_id;\n end;\n$$ language plpgsql;\n\nbegin;\nselect get_foobars(6); -- ~ 3ms\nselect * from bars; -- 1.6ms\ncommit; -- 1ms\n\nThe timings are similar, but the array returning case:\n*) runs in a single statement. If this is executed from the client\nthat means less round trips\n*) can be passed around as a variable between functions. temp table\nrequires re-query\n*) make some things easier/cheap such as counting the array -- you get\nto call the basically free array_upper()\n*) makes some things harder. specifically dealing with arrays on the\nclient is a pain UNLESS you expand the array w/unnest() or use\nlibpqtypes\n*) can nest. you can trivially nest complicated sets w/arrays\n*) does not require explicit transaction mgmt\n\nmerlin\n", "msg_date": "Thu, 22 Apr 2010 10:11:59 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Thu, Apr 22, 2010 at 10:11 AM, Merlin Moncure <[email protected]> wrote:\n> The timings are similar, but the array returning case:\n> *)  runs in a single statement.  If this is executed from the client\n> that means less round trips\n> *) can be passed around as a variable between functions.  temp table\n> requires re-query\n> *) make some things easier/cheap such as counting the array -- you get\n> to call the basically free array_upper()\n> *) makes some things harder.  specifically dealing with arrays on the\n> client is a pain UNLESS you expand the array w/unnest() or use\n> libpqtypes\n> *) can nest. you can trivially nest complicated sets w/arrays\n> *) does not require explicit transaction mgmt\n\n\nI neglected to mention perhaps the most important point about the array method:\n*) does not rely on any temporary resources.\n\nIf you write a lot of plpsql, you will start to appreciate the\ndifference in execution time between planned and unplanned functions.\nThe first time you run a function in a database session, it has to be\nparsed and planned. The planning time in particular for large-ish\nfunctions that touch a lot of objects can exceed the execution time of\nthe function. Depending on _any_ temporary resources causes plan mgmt\nissues because the database detects that a table in the old plan is\ngone ('on commit drop') and has to re-plan. 
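If you absolutely must use a temp table, one hedge is to touch it only through dynamic SQL so that nothing referencing it lands in the function's saved plans -- a rough sketch, reusing the bar table from the test data above:

create or replace function use_scratch(_foo_id int) returns bigint as
$$
declare
  n bigint;
begin
  execute 'create temporary table scratch on commit drop as
           select * from bar where foo_id = ' || _foo_id;
  -- dynamic SQL is planned at execution time, so the come-and-go
  -- temp table never invalidates a plan cached by this function
  execute 'select count(*) from scratch' into n;
  return n;
end;
$$ language plpgsql;

You pay parse/plan cost for those two statements on every call, but the rest of the function keeps its cached plans.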
If your functions are\ncomplex/long and you are counting milliseconds, then that alone should\nbe enough to dump any approach that depends on temp tables.\n\nmerlin\n", "msg_date": "Thu, 22 Apr 2010 10:42:42 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "I appreciate all the comments.\n\nI will perform some benchmarking before doing the rewrite to be certain of\nhow it will impact performance. At the very least, I think can say for\nnear-certain now that the indexes are not going to help me given the\nparticular queries I am dealing with and limited number of records the temp\ntables will have combined with the limited number of times I will re-use\nthem.\n\n\nOn Thu, Apr 22, 2010 at 10:42 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Apr 22, 2010 at 10:11 AM, Merlin Moncure <[email protected]>\n> wrote:\n> > The timings are similar, but the array returning case:\n> > *) runs in a single statement. If this is executed from the client\n> > that means less round trips\n> > *) can be passed around as a variable between functions. temp table\n> > requires re-query\n> > *) make some things easier/cheap such as counting the array -- you get\n> > to call the basically free array_upper()\n> > *) makes some things harder. specifically dealing with arrays on the\n> > client is a pain UNLESS you expand the array w/unnest() or use\n> > libpqtypes\n> > *) can nest. you can trivially nest complicated sets w/arrays\n> > *) does not require explicit transaction mgmt\n>\n>\n> I neglected to mention perhaps the most important point about the array\n> method:\n> *) does not rely on any temporary resources.\n>\n> If you write a lot of plpsql, you will start to appreciate the\n> difference in execution time between planned and unplanned functions.\n> The first time you run a function in a database session, it has to be\n> parsed and planned. The planning time in particular for large-ish\n> functions that touch a lot of objects can exceed the execution time of\n> the function. Depending on _any_ temporary resources causes plan mgmt\n> issues because the database detects that a table in the old plan is\n> gone ('on commit drop') and has to re-plan. If your functions are\n> complex/long and you are counting milliseconds, then that alone should\n> be enough to dump any approach that depends on temp tables.\n>\n> merlin\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nI appreciate all the comments.I will perform some benchmarking before doing the rewrite to be certain of how it will impact performance. At the very least, I think can say for near-certain now that the indexes are not going to help me given the particular queries I am dealing with and limited number of records the temp tables will have combined with the limited number of times I will re-use them.\nOn Thu, Apr 22, 2010 at 10:42 AM, Merlin Moncure <[email protected]> wrote:\nOn Thu, Apr 22, 2010 at 10:11 AM, Merlin Moncure <[email protected]> wrote:\n> The timings are similar, but the array returning case:\n> *)  runs in a single statement.  
If this is executed from the client\n> that means less round trips\n> *) can be passed around as a variable between functions.  temp table\n> requires re-query\n> *) make some things easier/cheap such as counting the array -- you get\n> to call the basically free array_upper()\n> *) makes some things harder.  specifically dealing with arrays on the\n> client is a pain UNLESS you expand the array w/unnest() or use\n> libpqtypes\n> *) can nest. you can trivially nest complicated sets w/arrays\n> *) does not require explicit transaction mgmt\n\n\nI neglected to mention perhaps the most important point about the array method:\n*) does not rely on any temporary resources.\n\nIf you write a lot of plpsql, you will start to appreciate the\ndifference in execution time between planned and unplanned functions.\nThe first time you run a function in a database session, it has to be\nparsed and planned.  The planning time in particular for large-ish\nfunctions that touch a lot of objects can exceed the execution time of\nthe function.  Depending on _any_ temporary resources causes plan mgmt\nissues because the database detects that a table in the old plan is\ngone ('on commit drop') and has to re-plan.   If your functions are\ncomplex/long and you are counting milliseconds, then that alone should\nbe enough to dump any approach that depends on temp tables.\n\nmerlin\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Thu, 22 Apr 2010 16:57:08 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "To answer the question of whether calling a stored procedure adds any\nsignificant overhead, I built a test case and the short answer is that it\nseems that it does:\n\nCREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n$BODY$\nDECLARE\n temp INTEGER;\nBEGIN\n FOR i IN 1..1000 LOOP\n SELECT 1 AS id INTO temp;\n END LOOP;\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n$BODY$\nDECLARE\nBEGIN\n RETURN QUERY SELECT 1 AS id;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS\n$BODY$\nDECLARE\n temp INTEGER;\nBEGIN\n FOR i IN 1..1000 LOOP\n temp := Test2A();\n END LOOP;\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\n\nEXPLAIN ANALYZE SELECT * FROM Test1();\n\"Function Scan on test1 (cost=0.00..0.26 rows=1 width=4) (actual\ntime=6.568..6.569 rows=1 loops=1)\"\n\"Total runtime: 6.585 ms\"\n\n\nEXPLAIN ANALYZE SELECT * FROM Test2B();\n\"Function Scan on test2b (cost=0.00..0.26 rows=1 width=4) (actual\ntime=29.006..29.007 rows=1 loops=1)\"\n\"Total runtime: 29.020 ms\"\n\n\nSo, when chasing milliseconds, don't call sub functions if it can\nrealistically and easily be avoided. I only have one operation/algorithm\nbroken out into another stored procedure because I call it in about 8\ndifferent places and it is 900+ lines long. While everything else could be\nbroken out into different stored procedures to make it easier to analyze the\nwhole set of code and probably make it easier to maintain, it does not make\nsense from a performance perspective. 
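One caveat I should note: this overhead appears to be mostly a PL/pgSQL property. A trivial helper written in SQL instead, such as:

CREATE OR REPLACE FUNCTION Test2C() RETURNS INTEGER AS
$BODY$
    SELECT 1;
$BODY$
LANGUAGE sql IMMUTABLE;

can apparently be inlined by the planner when it is simple enough (a single SELECT with compatible volatility), which should remove most of the per-call cost. I have not benchmarked that variant, so treat it as untested.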
Each different logical group of\nactions that would be in its own stored procedure is only ever used once in\nthe whole algorithm, so there is no good code re-use going on. Further,\nsince the containing stored procedure gets called by itself hundreds or even\nthousands of times per second on a production system, the nested calls to\nindividual sub-stored procedures would just add extra overhead for no real\ngain. And, from these tests, it would be significant overhead.\n\n\n\nOn Thu, Apr 22, 2010 at 4:57 PM, Eliot Gable <\[email protected] <egable%[email protected]>>wrote:\n\n> I appreciate all the comments.\n>\n> I will perform some benchmarking before doing the rewrite to be certain of\n> how it will impact performance. At the very least, I think can say for\n> near-certain now that the indexes are not going to help me given the\n> particular queries I am dealing with and limited number of records the temp\n> tables will have combined with the limited number of times I will re-use\n> them.\n>\n>\n> On Thu, Apr 22, 2010 at 10:42 AM, Merlin Moncure <[email protected]>wrote:\n>\n>> On Thu, Apr 22, 2010 at 10:11 AM, Merlin Moncure <[email protected]>\n>> wrote:\n>> > The timings are similar, but the array returning case:\n>> > *) runs in a single statement. If this is executed from the client\n>> > that means less round trips\n>> > *) can be passed around as a variable between functions. temp table\n>> > requires re-query\n>> > *) make some things easier/cheap such as counting the array -- you get\n>> > to call the basically free array_upper()\n>> > *) makes some things harder. specifically dealing with arrays on the\n>> > client is a pain UNLESS you expand the array w/unnest() or use\n>> > libpqtypes\n>> > *) can nest. you can trivially nest complicated sets w/arrays\n>> > *) does not require explicit transaction mgmt\n>>\n>>\n>> I neglected to mention perhaps the most important point about the array\n>> method:\n>> *) does not rely on any temporary resources.\n>>\n>> If you write a lot of plpsql, you will start to appreciate the\n>> difference in execution time between planned and unplanned functions.\n>> The first time you run a function in a database session, it has to be\n>> parsed and planned. The planning time in particular for large-ish\n>> functions that touch a lot of objects can exceed the execution time of\n>> the function. Depending on _any_ temporary resources causes plan mgmt\n>> issues because the database detects that a table in the old plan is\n>> gone ('on commit drop') and has to re-plan. If your functions are\n>> complex/long and you are counting milliseconds, then that alone should\n>> be enough to dump any approach that depends on temp tables.\n>>\n>> merlin\n>>\n>\n>\n>\n> --\n> Eliot Gable\n>\n> \"We do not inherit the Earth from our ancestors: we borrow it from our\n> children.\" ~David Brower\n>\n> \"I decided the words were too conservative for me. We're not borrowing from\n> our children, we're stealing from them--and it's not even considered to be a\n> crime.\" ~David Brower\n>\n> \"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live;\n> not live to eat.) ~Marcus Tullius Cicero\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. 
We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nTo answer the question of whether calling a stored procedure adds any significant overhead, I built a test case and the short answer is that it seems that it does:CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n$BODY$DECLARE    temp INTEGER;BEGIN    FOR i IN 1..1000 LOOP        SELECT 1 AS id INTO temp;    END LOOP;    RETURN 1;END;$BODY$LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n$BODY$DECLAREBEGIN    RETURN QUERY SELECT 1 AS id;END;$BODY$LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS$BODY$DECLARE    temp INTEGER;BEGIN    FOR i IN 1..1000 LOOP\n        temp := Test2A();    END LOOP;    RETURN 1;END;$BODY$LANGUAGE plpgsql;EXPLAIN ANALYZE SELECT * FROM Test1();\"Function Scan on test1  (cost=0.00..0.26 rows=1 width=4) (actual time=6.568..6.569 rows=1 loops=1)\"\n\"Total runtime: 6.585 ms\"EXPLAIN ANALYZE SELECT * FROM Test2B();\"Function Scan on test2b  (cost=0.00..0.26 rows=1 width=4) (actual time=29.006..29.007 rows=1 loops=1)\"\"Total runtime: 29.020 ms\"\nSo, when chasing milliseconds, don't call sub functions if it can realistically and easily be avoided. I only have one operation/algorithm broken out into another stored procedure because I call it in about 8 different places and it is 900+ lines long. While everything else could be broken out into different stored procedures to make it easier to analyze the whole set of code and probably make it easier to maintain, it does not make sense from a performance perspective. Each different logical group of actions that would be in its own stored procedure is only ever used once in the whole algorithm, so there is no good code re-use going on. Further, since the containing stored procedure gets called by itself hundreds or even thousands of times per second on a production system, the nested calls to individual sub-stored procedures would just add extra overhead for no real gain. And, from these tests, it would be significant overhead. \nOn Thu, Apr 22, 2010 at 4:57 PM, Eliot Gable <[email protected]> wrote:\nI appreciate all the comments.I will perform some benchmarking before doing the rewrite to be certain of how it will impact performance. At the very least, I think can say for near-certain now that the indexes are not going to help me given the particular queries I am dealing with and limited number of records the temp tables will have combined with the limited number of times I will re-use them.\n\nOn Thu, Apr 22, 2010 at 10:42 AM, Merlin Moncure <[email protected]> wrote:\nOn Thu, Apr 22, 2010 at 10:11 AM, Merlin Moncure <[email protected]> wrote:\n> The timings are similar, but the array returning case:\n> *)  runs in a single statement.  If this is executed from the client\n> that means less round trips\n> *) can be passed around as a variable between functions.  temp table\n> requires re-query\n> *) make some things easier/cheap such as counting the array -- you get\n> to call the basically free array_upper()\n> *) makes some things harder.  specifically dealing with arrays on the\n> client is a pain UNLESS you expand the array w/unnest() or use\n> libpqtypes\n> *) can nest. 
you can trivially nest complicated sets w/arrays\n> *) does not require explicit transaction mgmt\n\n\nI neglected to mention perhaps the most important point about the array method:\n*) does not rely on any temporary resources.\n\nIf you write a lot of plpsql, you will start to appreciate the\ndifference in execution time between planned and unplanned functions.\nThe first time you run a function in a database session, it has to be\nparsed and planned.  The planning time in particular for large-ish\nfunctions that touch a lot of objects can exceed the execution time of\nthe function.  Depending on _any_ temporary resources causes plan mgmt\nissues because the database detects that a table in the old plan is\ngone ('on commit drop') and has to re-plan.   If your functions are\ncomplex/long and you are counting milliseconds, then that alone should\nbe enough to dump any approach that depends on temp tables.\n\nmerlin\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \n\"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Fri, 23 Apr 2010 16:42:57 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable\n<[email protected]> wrote:\n> To answer the question of whether calling a stored procedure adds any\n> significant overhead, I built a test case and the short answer is that it\n> seems that it does:\n>\n> CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n>     temp INTEGER;\n> BEGIN\n>     FOR i IN 1..1000 LOOP\n>         SELECT 1 AS id INTO temp;\n>     END LOOP;\n>     RETURN 1;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n> $BODY$\n> DECLARE\n> BEGIN\n>     RETURN QUERY SELECT 1 AS id;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n>     temp INTEGER;\n> BEGIN\n>     FOR i IN 1..1000 LOOP\n>         temp := Test2A();\n>     END LOOP;\n>     RETURN 1;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM Test1();\n> \"Function Scan on test1  (cost=0.00..0.26 rows=1 width=4) (actual\n> time=6.568..6.569 rows=1 loops=1)\"\n> \"Total runtime: 6.585 ms\"\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM Test2B();\n> \"Function Scan on test2b  (cost=0.00..0.26 rows=1 width=4) (actual\n> time=29.006..29.007 rows=1 loops=1)\"\n> \"Total runtime: 29.020 ms\"\n\nThat's not a fair test. test2a() is a SRF which has higher overhead\nthan regular function. 
Try it this way and the timings will level\nout:\n\nCREATE OR REPLACE FUNCTION Test2A() RETURNS INTEGER AS\n$BODY$\nDECLARE\nBEGIN\n RETURN 1 ;\nEND;\n$BODY$\nLANGUAGE plpgsql ;\n\nmerlin\n", "msg_date": "Fri, 23 Apr 2010 17:01:21 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable\n<[email protected]> wrote:\n> And, from these tests, it would be significant overhead.\n\nYeah, I've been very disappointed by the size of the function-call\noverhead on many occasions. It might be worth putting some effort\ninto seeing if there's anything that can be done about this, but I\nhaven't. :-)\n\n...Robert\n", "msg_date": "Fri, 23 Apr 2010 18:08:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "That's a good point. However, even after changing it, it is still 12ms with\nthe function call verses 6ms without the extra function call. Though, it is\nworth noting that if you can make the function call be guaranteed to return\nthe same results when used with the same input parameters, it ends up being\nfaster (roughly 3ms in my test case) due to caching -- at least when\nexecuting it multiple times in a row like this. Unfortunately, I cannot take\nadvantage of that, because in my particular use case, the chances of it\nbeing called again with the same input values within the cache lifetime of\nthe results is close to zero. Add to that the fact that the function queries\ntables that could change between transactions (meaning the function is\nvolatile) and it's a moot point. However, it is worth noting that for those\npeople using a non-volatile function call multiple times in the same\ntransaction with the same input values, there is no need to inline the\nfunction call.\n\n\nOn Fri, Apr 23, 2010 at 5:01 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable\n> <[email protected] <egable%[email protected]>>\n> wrote:\n> > To answer the question of whether calling a stored procedure adds any\n> > significant overhead, I built a test case and the short answer is that it\n> > seems that it does:\n> >\n> > CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n> > $BODY$\n> > DECLARE\n> > temp INTEGER;\n> > BEGIN\n> > FOR i IN 1..1000 LOOP\n> > SELECT 1 AS id INTO temp;\n> > END LOOP;\n> > RETURN 1;\n> > END;\n> > $BODY$\n> > LANGUAGE plpgsql;\n> >\n> > CREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n> > $BODY$\n> > DECLARE\n> > BEGIN\n> > RETURN QUERY SELECT 1 AS id;\n> > END;\n> > $BODY$\n> > LANGUAGE plpgsql;\n> >\n> > CREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS\n> > $BODY$\n> > DECLARE\n> > temp INTEGER;\n> > BEGIN\n> > FOR i IN 1..1000 LOOP\n> > temp := Test2A();\n> > END LOOP;\n> > RETURN 1;\n> > END;\n> > $BODY$\n> > LANGUAGE plpgsql;\n> >\n> >\n> > EXPLAIN ANALYZE SELECT * FROM Test1();\n> > \"Function Scan on test1 (cost=0.00..0.26 rows=1 width=4) (actual\n> > time=6.568..6.569 rows=1 loops=1)\"\n> > \"Total runtime: 6.585 ms\"\n> >\n> >\n> > EXPLAIN ANALYZE SELECT * FROM Test2B();\n> > \"Function Scan on test2b (cost=0.00..0.26 rows=1 width=4) (actual\n> > time=29.006..29.007 rows=1 loops=1)\"\n> > \"Total runtime: 29.020 ms\"\n>\n> That's not a fair test. test2a() is a SRF which has higher overhead\n> than regular function. 
Try it this way and the timings will level\n> out:\n>\n> CREATE OR REPLACE FUNCTION Test2A() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n> BEGIN\n> RETURN 1 ;\n> END;\n> $BODY$\n> LANGUAGE plpgsql ;\n>\n> merlin\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nThat's a good point. However, even after changing it, it is still 12ms with the function call verses 6ms without the extra function call. Though, it is worth noting that if you can make the function call be guaranteed to return the same results when used with the same input parameters, it ends up being faster (roughly 3ms in my test case) due to caching -- at least when executing it multiple times in a row like this. Unfortunately, I cannot take advantage of that, because in my particular use case, the chances of it being called again with the same input values within the cache lifetime of the results is close to zero. Add to that the fact that the function queries tables that could change between transactions (meaning the function is volatile) and it's a moot point. However, it is worth noting that for those people using a non-volatile function call multiple times in the same transaction with the same input values, there is no need to inline the function call.\nOn Fri, Apr 23, 2010 at 5:01 PM, Merlin Moncure <[email protected]> wrote:\nOn Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable\n<[email protected]> wrote:\n> To answer the question of whether calling a stored procedure adds any\n> significant overhead, I built a test case and the short answer is that it\n> seems that it does:\n>\n> CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n>     temp INTEGER;\n> BEGIN\n>     FOR i IN 1..1000 LOOP\n>         SELECT 1 AS id INTO temp;\n>     END LOOP;\n>     RETURN 1;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n> $BODY$\n> DECLARE\n> BEGIN\n>     RETURN QUERY SELECT 1 AS id;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n>     temp INTEGER;\n> BEGIN\n>     FOR i IN 1..1000 LOOP\n>         temp := Test2A();\n>     END LOOP;\n>     RETURN 1;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM Test1();\n> \"Function Scan on test1  (cost=0.00..0.26 rows=1 width=4) (actual\n> time=6.568..6.569 rows=1 loops=1)\"\n> \"Total runtime: 6.585 ms\"\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM Test2B();\n> \"Function Scan on test2b  (cost=0.00..0.26 rows=1 width=4) (actual\n> time=29.006..29.007 rows=1 loops=1)\"\n> \"Total runtime: 29.020 ms\"\n\nThat's not a fair test.  test2a() is a SRF which has higher overhead\nthan regular function.  Try it this way and the timings will level\nout:\n\nCREATE OR REPLACE FUNCTION Test2A() RETURNS  INTEGER AS\n$BODY$\nDECLARE\nBEGIN\n    RETURN  1 ;\nEND;\n$BODY$\nLANGUAGE plpgsql ;\n\nmerlin\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. 
We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Fri, 23 Apr 2010 20:39:13 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "More benchmarking results are in with a comparison between cursors, arrays,\nand temporary tables for storing, using, and accessing data outside the\nstored procedure:\n\nCREATE OR REPLACE FUNCTION Test_Init() RETURNS INTEGER AS\n$BODY$\nDECLARE\n temp INTEGER;\nBEGIN\n DROP TABLE IF EXISTS test_table1 CASCADE;\n CREATE TABLE test_table1 (\n id SERIAL NOT NULL PRIMARY KEY,\n junk_field1 INTEGER,\n junk_field2 INTEGER,\n junk_field3 INTEGER\n ) WITH (OIDS=FALSE);\n DROP INDEX IF EXISTS test_table1_junk_field1_idx CASCADE;\n DROP INDEX IF EXISTS test_table1_junk_field2_idx CASCADE;\n DROP INDEX IF EXISTS test_table1_junk_field3_idx CASCADE;\n FOR i IN 1..10000 LOOP\n INSERT INTO test_table1 (junk_field1, junk_field2, junk_field3) VALUES\n (i%10, i%20, i%30);\n END LOOP;\n CREATE INDEX test_table1_junk_field1_idx ON test_table1 USING btree\n(junk_field1);\n CREATE INDEX test_table1_junk_field2_idx ON test_table1 USING btree\n(junk_field2);\n CREATE INDEX test_table1_junk_field3_idx ON test_table1 USING btree\n(junk_field3);\n DROP TABLE IF EXISTS test_table2 CASCADE;\n CREATE TABLE test_table2 (\n id SERIAL NOT NULL PRIMARY KEY,\n junk_field1 INTEGER,\n junk_field2 INTEGER,\n junk_field3 INTEGER\n ) WITH (OIDS=FALSE);\n DROP INDEX IF EXISTS test_table2_junk_field1_idx CASCADE;\n DROP INDEX IF EXISTS test_table2_junk_field2_idx CASCADE;\n DROP INDEX IF EXISTS test_table2_junk_field3_idx CASCADE;\n FOR i IN 1..10000 LOOP\n INSERT INTO test_table2 (junk_field1, junk_field2, junk_field3) VALUES\n (i%15, i%25, i%35);\n END LOOP;\n CREATE INDEX test_table2_junk_field1_idx ON test_table2 USING btree\n(junk_field1);\n CREATE INDEX test_table2_junk_field2_idx ON test_table2 USING btree\n(junk_field2);\n CREATE INDEX test_table2_junk_field3_idx ON test_table2 USING btree\n(junk_field3);\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nSELECT * FROM Test_Init();\n\nDROP TYPE IF EXISTS test_row_type CASCADE;\nCREATE TYPE test_row_type AS (\n junk_field1 INTEGER,\n junk_field2 INTEGER,\n junk_field3 INTEGER\n);\n\nCREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n$BODY$\nDECLARE\n temp_row test_row_type;\n cursresults test_row_type[];\n curs SCROLL CURSOR IS\n SELECT * FROM test_table1 WHERE junk_field1=8;\nBEGIN\n FOR temp IN curs LOOP\n temp_row := temp;\n cursresults := array_append(cursresults, temp_row);\n END LOOP;\n OPEN curs;\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION Test2() RETURNS INTEGER AS\n$BODY$\nDECLARE\ncursresults test_row_type[];\n cur SCROLL CURSOR IS\n SELECT * FROM unnest(cursresults);\nBEGIN\n cursresults := array(SELECT (junk_field1, junk_field2,\njunk_field3)::test_row_type AS rec FROM test_table1 WHERE junk_field1=8);\n OPEN cur;\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION Test3() RETURNS INTEGER AS\n$BODY$\nDECLARE\nBEGIN\n CREATE TEMPORARY TABLE results WITH (OIDS=FALSE) ON COMMIT DROP AS (\n SELECT junk_field1, junk_field2, junk_field3 FROM test_table1 WHERE\njunk_field1=8\n );\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION Test4() RETURNS INTEGER 
AS\n$BODY$\nDECLARE\n cur SCROLL CURSOR IS\n SELECT * FROM results;\nBEGIN\n CREATE TEMPORARY TABLE results WITH (OIDS=FALSE) ON COMMIT DROP AS (\n SELECT junk_field1, junk_field2, junk_field3 FROM test_table1 WHERE\njunk_field1=8\n );\n OPEN cur;\n RETURN 1;\nEND;\n$BODY$\nLANGUAGE plpgsql;\n\nEXPLAIN ANALYZE SELECT * FROM Test1();\n\"Function Scan on test1 (cost=0.00..0.26 rows=1 width=4) (actual\ntime=17.701..17.701 rows=1 loops=1)\"\n\"Total runtime: 17.714 ms\" -- Ouch\n\n\nEXPLAIN ANALYZE SELECT * FROM Test2();\n\"Function Scan on test2 (cost=0.00..0.26 rows=1 width=4) (actual\ntime=1.137..1.137 rows=1 loops=1)\"\n\"Total runtime: 1.153 ms\" -- Wow\n\n\nEXPLAIN ANALYZE SELECT * FROM Test3();\n\"Function Scan on test3 (cost=0.00..0.26 rows=1 width=4) (actual\ntime=2.033..2.034 rows=1 loops=1)\"\n\"Total runtime: 2.050 ms\"\n\n\nEXPLAIN ANALYZE SELECT * FROM Test4();\n\"Function Scan on test4 (cost=0.00..0.26 rows=1 width=4) (actual\ntime=2.001..2.001 rows=1 loops=1)\"\n\"Total runtime: 2.012 ms\"\n\n\nIn each case, the results are available outside the stored procedure by\neither fetching from the cursor or selecting from the temporary table.\nClearly, the temporary table takes a performance hit compared using arrays.\nBuilding an array with array append is horrendously inefficient. Unnesting\nan array is surprisingly efficient. As can be seen from Test3 and Test4,\ncursors have no detectable overhead for opening the cursor (at least in this\nexample with 1000 result rows). It is unclear whether there is any\ndifference at all from Test3 and Test4 for retrieving the data as I have no\neasy way right now to measure that accurately. However, since arrays+cursors\nare more efficient than anything having to do with temp tables, that is the\nway I will go. With the number of rows I am dealing with (which should\nalways be less than 1,000 in the final returned result set), unnesting an\narray is much faster than building a temp table and selecting from it.\n\nIf anyone thinks I may have missed some important item in this testing,\nplease let me know.\n\n\nOn Fri, Apr 23, 2010 at 8:39 PM, Eliot Gable <\[email protected] <egable%[email protected]>>wrote:\n\n> That's a good point. However, even after changing it, it is still 12ms with\n> the function call verses 6ms without the extra function call. Though, it is\n> worth noting that if you can make the function call be guaranteed to return\n> the same results when used with the same input parameters, it ends up being\n> faster (roughly 3ms in my test case) due to caching -- at least when\n> executing it multiple times in a row like this. Unfortunately, I cannot take\n> advantage of that, because in my particular use case, the chances of it\n> being called again with the same input values within the cache lifetime of\n> the results is close to zero. Add to that the fact that the function queries\n> tables that could change between transactions (meaning the function is\n> volatile) and it's a moot point. 
However, it is worth noting that for those\n> people using a non-volatile function call multiple times in the same\n> transaction with the same input values, there is no need to inline the\n> function call.\n>\n>\n> On Fri, Apr 23, 2010 at 5:01 PM, Merlin Moncure <[email protected]>wrote:\n>\n>> On Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable\n>> <[email protected]<egable%[email protected]>>\n>> wrote:\n>> > To answer the question of whether calling a stored procedure adds any\n>> > significant overhead, I built a test case and the short answer is that\n>> it\n>> > seems that it does:\n>> >\n>> > CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n>> > $BODY$\n>> > DECLARE\n>> > temp INTEGER;\n>> > BEGIN\n>> > FOR i IN 1..1000 LOOP\n>> > SELECT 1 AS id INTO temp;\n>> > END LOOP;\n>> > RETURN 1;\n>> > END;\n>> > $BODY$\n>> > LANGUAGE plpgsql;\n>> >\n>> > CREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n>> > $BODY$\n>> > DECLARE\n>> > BEGIN\n>> > RETURN QUERY SELECT 1 AS id;\n>> > END;\n>> > $BODY$\n>> > LANGUAGE plpgsql;\n>> >\n>> > CREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS\n>> > $BODY$\n>> > DECLARE\n>> > temp INTEGER;\n>> > BEGIN\n>> > FOR i IN 1..1000 LOOP\n>> > temp := Test2A();\n>> > END LOOP;\n>> > RETURN 1;\n>> > END;\n>> > $BODY$\n>> > LANGUAGE plpgsql;\n>> >\n>> >\n>> > EXPLAIN ANALYZE SELECT * FROM Test1();\n>> > \"Function Scan on test1 (cost=0.00..0.26 rows=1 width=4) (actual\n>> > time=6.568..6.569 rows=1 loops=1)\"\n>> > \"Total runtime: 6.585 ms\"\n>> >\n>> >\n>> > EXPLAIN ANALYZE SELECT * FROM Test2B();\n>> > \"Function Scan on test2b (cost=0.00..0.26 rows=1 width=4) (actual\n>> > time=29.006..29.007 rows=1 loops=1)\"\n>> > \"Total runtime: 29.020 ms\"\n>>\n>> That's not a fair test. test2a() is a SRF which has higher overhead\n>> than regular function. Try it this way and the timings will level\n>> out:\n>>\n>> CREATE OR REPLACE FUNCTION Test2A() RETURNS INTEGER AS\n>> $BODY$\n>> DECLARE\n>> BEGIN\n>> RETURN 1 ;\n>> END;\n>> $BODY$\n>> LANGUAGE plpgsql ;\n>>\n>> merlin\n>>\n>\n>\n>\n> --\n> Eliot Gable\n>\n> \"We do not inherit the Earth from our ancestors: we borrow it from our\n> children.\" ~David Brower\n>\n> \"I decided the words were too conservative for me. We're not borrowing from\n> our children, we're stealing from them--and it's not even considered to be a\n> crime.\" ~David Brower\n>\n> \"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live;\n> not live to eat.) ~Marcus Tullius Cicero\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) 
~Marcus Tullius Cicero\n\nMore benchmarking results are in with a comparison between cursors, arrays, and temporary tables for storing, using, and accessing data outside the stored procedure:CREATE OR REPLACE FUNCTION Test_Init() RETURNS INTEGER AS\n$BODY$DECLARE   temp INTEGER;BEGIN   DROP TABLE IF EXISTS test_table1 CASCADE;   CREATE TABLE test_table1 ( \t    id SERIAL NOT NULL PRIMARY KEY, \t    junk_field1 INTEGER, \t    junk_field2 INTEGER,\n \t    junk_field3 INTEGER   ) WITH (OIDS=FALSE);   DROP INDEX IF EXISTS test_table1_junk_field1_idx CASCADE;   DROP INDEX IF EXISTS test_table1_junk_field2_idx CASCADE;   DROP INDEX IF EXISTS test_table1_junk_field3_idx CASCADE; \n   FOR i IN 1..10000 LOOP \t    INSERT INTO test_table1 (junk_field1, junk_field2, junk_field3) VALUES \t\t      (i%10, i%20, i%30);   END LOOP;   CREATE INDEX test_table1_junk_field1_idx ON test_table1 USING btree (junk_field1);\n   CREATE INDEX test_table1_junk_field2_idx ON test_table1 USING btree (junk_field2);   CREATE INDEX test_table1_junk_field3_idx ON test_table1 USING btree (junk_field3);   DROP TABLE IF EXISTS test_table2 CASCADE;\n   CREATE TABLE test_table2 ( \t    id SERIAL NOT NULL PRIMARY KEY, \t    junk_field1 INTEGER, \t    junk_field2 INTEGER, \t    junk_field3 INTEGER   ) WITH (OIDS=FALSE);   DROP INDEX IF EXISTS test_table2_junk_field1_idx CASCADE;\n   DROP INDEX IF EXISTS test_table2_junk_field2_idx CASCADE;   DROP INDEX IF EXISTS test_table2_junk_field3_idx CASCADE;    FOR i IN 1..10000 LOOP \t    INSERT INTO test_table2 (junk_field1, junk_field2, junk_field3) VALUES\n \t\t        (i%15, i%25, i%35);   END LOOP;   CREATE INDEX test_table2_junk_field1_idx ON test_table2 USING btree (junk_field1);   CREATE INDEX test_table2_junk_field2_idx ON test_table2 USING btree (junk_field2);\n   CREATE INDEX test_table2_junk_field3_idx ON test_table2 USING btree (junk_field3);   RETURN 1;END;$BODY$LANGUAGE plpgsql;SELECT * FROM Test_Init();DROP TYPE IF EXISTS test_row_type CASCADE;\nCREATE TYPE test_row_type AS (   junk_field1 INTEGER,   junk_field2 INTEGER,   junk_field3 INTEGER);CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS$BODY$DECLARE   temp_row test_row_type;\n   cursresults test_row_type[];   curs SCROLL CURSOR IS \t    SELECT * FROM test_table1 WHERE junk_field1=8;BEGIN    FOR temp IN curs LOOP \t     temp_row := temp; \t     cursresults := array_append(cursresults, temp_row);\n    END LOOP;    OPEN curs;    RETURN 1;END;$BODY$LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION Test2() RETURNS INTEGER AS$BODY$DECLARE\tcursresults test_row_type[];   cur SCROLL CURSOR IS\n \t   SELECT * FROM unnest(cursresults);BEGIN   cursresults := array(SELECT (junk_field1, junk_field2, junk_field3)::test_row_type AS rec FROM test_table1 WHERE junk_field1=8);   OPEN cur;   RETURN 1;END;\n$BODY$LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION Test3() RETURNS INTEGER AS$BODY$DECLAREBEGIN   CREATE TEMPORARY TABLE results WITH (OIDS=FALSE) ON COMMIT DROP AS ( \t    SELECT junk_field1, junk_field2, junk_field3 FROM test_table1 WHERE junk_field1=8\n   );   RETURN 1;END;$BODY$LANGUAGE plpgsql;CREATE OR REPLACE FUNCTION Test4() RETURNS INTEGER AS$BODY$DECLARE   cur SCROLL CURSOR IS \t    SELECT * FROM results;BEGIN   CREATE TEMPORARY TABLE results WITH (OIDS=FALSE) ON COMMIT DROP AS (\n \t    SELECT junk_field1, junk_field2, junk_field3 FROM test_table1 WHERE junk_field1=8   );   OPEN cur;   RETURN 1;END;$BODY$LANGUAGE plpgsql;EXPLAIN ANALYZE SELECT * FROM Test1();\n\"Function Scan on test1 (cost=0.00..0.26 rows=1 width=4) (actual 
time=17.701..17.701 rows=1 loops=1)\"\"Total runtime: 17.714 ms\" -- OuchEXPLAIN ANALYZE SELECT * FROM Test2();\n\"Function Scan on test2 (cost=0.00..0.26 rows=1 width=4) (actual time=1.137..1.137 rows=1 loops=1)\"\"Total runtime: 1.153 ms\" -- WowEXPLAIN ANALYZE SELECT * FROM Test3();\n\"Function Scan on test3 (cost=0.00..0.26 rows=1 width=4) (actual time=2.033..2.034 rows=1 loops=1)\"\"Total runtime: 2.050 ms\"EXPLAIN ANALYZE SELECT * FROM Test4();\n\"Function Scan on test4 (cost=0.00..0.26 rows=1 width=4) (actual time=2.001..2.001 rows=1 loops=1)\"\"Total runtime: 2.012 ms\"In each case, the results are available outside the stored procedure by either fetching from the cursor or selecting from the temporary table. Clearly, the temporary table takes a performance hit compared using arrays. Building an array with array append is horrendously inefficient. Unnesting an array is surprisingly efficient. As can be seen from Test3 and Test4, cursors have no detectable overhead for opening the cursor (at least in this example with 1000 result rows). It is unclear whether there is any difference at all from Test3 and Test4 for retrieving the data as I have no easy way right now to measure that accurately. However, since arrays+cursors are more efficient than anything having to do with temp tables, that is the way I will go. With the number of rows I am dealing with (which should always be less than 1,000 in the final returned result set), unnesting an array is much faster than building a temp table and selecting from it. \nIf anyone thinks I may have missed some important item in this testing, please let me know.On Fri, Apr 23, 2010 at 8:39 PM, Eliot Gable <[email protected]> wrote:\nThat's a good point. However, even after changing it, it is still 12ms with the function call verses 6ms without the extra function call. Though, it is worth noting that if you can make the function call be guaranteed to return the same results when used with the same input parameters, it ends up being faster (roughly 3ms in my test case) due to caching -- at least when executing it multiple times in a row like this. Unfortunately, I cannot take advantage of that, because in my particular use case, the chances of it being called again with the same input values within the cache lifetime of the results is close to zero. Add to that the fact that the function queries tables that could change between transactions (meaning the function is volatile) and it's a moot point. 
However, it is worth noting that for those people using a non-volatile function call multiple times in the same transaction with the same input values, there is no need to inline the function call.\nOn Fri, Apr 23, 2010 at 5:01 PM, Merlin Moncure <[email protected]> wrote:\n\nOn Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable\n<[email protected]> wrote:\n> To answer the question of whether calling a stored procedure adds any\n> significant overhead, I built a test case and the short answer is that it\n> seems that it does:\n>\n> CREATE OR REPLACE FUNCTION Test1() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n>     temp INTEGER;\n> BEGIN\n>     FOR i IN 1..1000 LOOP\n>         SELECT 1 AS id INTO temp;\n>     END LOOP;\n>     RETURN 1;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION Test2A() RETURNS SETOF INTEGER AS\n> $BODY$\n> DECLARE\n> BEGIN\n>     RETURN QUERY SELECT 1 AS id;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION Test2B() RETURNS INTEGER AS\n> $BODY$\n> DECLARE\n>     temp INTEGER;\n> BEGIN\n>     FOR i IN 1..1000 LOOP\n>         temp := Test2A();\n>     END LOOP;\n>     RETURN 1;\n> END;\n> $BODY$\n> LANGUAGE plpgsql;\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM Test1();\n> \"Function Scan on test1  (cost=0.00..0.26 rows=1 width=4) (actual\n> time=6.568..6.569 rows=1 loops=1)\"\n> \"Total runtime: 6.585 ms\"\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM Test2B();\n> \"Function Scan on test2b  (cost=0.00..0.26 rows=1 width=4) (actual\n> time=29.006..29.007 rows=1 loops=1)\"\n> \"Total runtime: 29.020 ms\"\n\nThat's not a fair test.  test2a() is a SRF which has higher overhead\nthan regular function.  Try it this way and the timings will level\nout:\n\nCREATE OR REPLACE FUNCTION Test2A() RETURNS  INTEGER AS\n$BODY$\nDECLARE\nBEGIN\n    RETURN  1 ;\nEND;\n$BODY$\nLANGUAGE plpgsql ;\n\nmerlin\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \n\"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Fri, 23 Apr 2010 22:31:16 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "\nFYI, I had a query like this :\n\n(complex search query ORDER BY foo LIMIT X)\nLEFT JOIN objects_categories oc\nLEFT JOIN categories c\nGROUP BY ...\n(more joins)\nORDER BY foo LIMIT X\n\nHere, we do a search on \"objects\" (i'm not gonna give all the details, \nthey're not interesting for the problem at hand).\nPoint is that these objects can belong to several categories, so I need to \nperform a GROUP BY with array_agg() somewhere unless I want the JOIN to \nreturn several rows per object, which is not what I want. This makes the \nquery quite complicated...\n\nI ended up rewriting it like this :\n\n(complex search query ORDER BY foo LIMIT X)\nLEFT JOIN\n(SELECT .. 
FROM objects_categories oc\n LEFT JOIN categories c\n GROUP BY ...\n) ON ...\n(more joins)\nORDER BY foo LIMIT X\n\nBasically moving the aggregates into a separate query. It is easier to \nhandle.\n\nI tried to process it like this, in a stored proc :\n\n- do the (complex search query ORDER BY foo LIMIT X) alone and stuff it in \na cursor\n- extract the elements needed into arrays (mostly object_id)\n- get the other information as separate queries like :\n\nSELECT object_id, category_id, category_name\n FROM objects_categories JOIN categories ON ...\nWHERE object_id =ANY( my_array );\n\nand return the results into cursors, too.\n\nOr like this (using 2 cursors) :\n\nSELECT object_id, array_agg(category_id) FROM objects_categories WHERE \nobject_id =ANY( my_array );\n\nSELECT category_id, category_name, ...\n FROM categories WHERE category_id IN (\n SELECT category_id FROM objects_categories WHERE object_id =ANY( my_array \n));\n\nI found it to be quite faster, and it also simplifies my PHP code. From \nPHP's point of view, it is simpler to get a cursor that returns the \nobjects, and separate cursors that can be used to build an in-memory PHP \nhashtable of only the categories we're going to display. Also, it avoids \nretrieving lots of data multiple times, since many objects will belong to \nthe same categories. With the second example, I can use my ORM to \ninstantiate only one copy of each.\n\nIt would be quite useful if we could SELECT from a cursor, or JOIN a \ncursor to an existing table...\n", "msg_date": "Sat, 24 Apr 2010 11:26:58 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Fri, Apr 23, 2010 at 10:31 PM, Eliot Gable\n<[email protected]> wrote:\n> In each case, the results are available outside the stored procedure by\n> either fetching from the cursor or selecting from the temporary table.\n> Clearly, the temporary table takes a performance hit compared using arrays.\n> Building an array with array append is horrendously inefficient. Unnesting\n> an array is surprisingly efficient. As can be seen from Test3 and Test4,\n> cursors have no detectable overhead for opening the cursor (at least in this\n> example with 1000 result rows). It is unclear whether there is any\n> difference at all from Test3 and Test4 for retrieving the data as I have no\n> easy way right now to measure that accurately. However, since arrays+cursors\n> are more efficient than anything having to do with temp tables, that is the\n> way I will go. With the number of rows I am dealing with (which should\n> always be less than 1,000 in the final returned result set), unnesting an\n> array is much faster than building a temp table and selecting from it.\n> If anyone thinks I may have missed some important item in this testing,\n> please let me know.\n\nWell, you missed the most important part: not using cursors at all.\nInstead of declaring a cursor and looping it to build the array, build\nit with array(). 
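Applied to the earlier benchmark (same test_table1 and test_row_type), a no-cursor version is just:

create or replace function test5() returns test_row_type[] as
$$
begin
  return array(
    select (junk_field1, junk_field2, junk_field3)::test_row_type
    from test_table1
    where junk_field1 = 8);
end;
$$ language plpgsql;

select * from unnest(test5());  -- caller expands it in one statement, nothing to FETCH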
That's what I've been saying: arrays can completely\ndisplace both temp tables _and_ cursors when passing small sets around\nfunctions.\n\nmerlin\n", "msg_date": "Sat, 24 Apr 2010 09:23:40 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "On Sat, Apr 24, 2010 at 2:23 PM, Merlin Moncure <[email protected]> wrote:\n\n>\n> Well, you missed the most important part: not using cursors at all.\n> Instead of declaring a cursor and looping it to build the array, build\n> it with array(). That's what I've been saying: arrays can completely\n> displace both temp tables _and_ cursors when passing small sets around\n> functions.\n>\n> with huge emphasis on the word small.\n\n\n-- \nGJ\n\nOn Sat, Apr 24, 2010 at 2:23 PM, Merlin Moncure <[email protected]> wrote:\n\nWell, you missed the most important part: not using cursors at all.\nInstead of declaring a cursor and looping it to build the array, build\nit with array().  That's what I've been saying: arrays can completely\ndisplace both temp tables _and_ cursors when passing small sets around\nfunctions.\nwith huge emphasis on the word small.-- GJ", "msg_date": "Sat, 24 Apr 2010 15:38:02 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" }, { "msg_contents": "2010/4/24 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> On Sat, Apr 24, 2010 at 2:23 PM, Merlin Moncure <[email protected]> wrote:\n>>\n>> Well, you missed the most important part: not using cursors at all.\n>> Instead of declaring a cursor and looping it to build the array, build\n>> it with array().  That's what I've been saying: arrays can completely\n>> displace both temp tables _and_ cursors when passing small sets around\n>> functions.\n>>\n> with huge emphasis on the word small.\n\nThe rule of thumb I use is 10000 narrow records (scalars, or very\nsmall composites) or 1000 wide/complex records. I routinely pass\nextremely complex (3-4 levels nesting) nested composite arrays to the\nclient for processing -- it is extremely efficient and clean. This of\ncourse is going to depend on hardware and other factors.\n\nmerlin\n", "msg_date": "Sat, 24 Apr 2010 11:20:12 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replacing Cursors with Temporary Tables" } ]
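To make the array-building approach above concrete, a minimal hypothetical sketch (the items table, its id column and the limit values are illustrative only, not taken from the thread; it assumes the result set stays small, per the rule of thumb quoted above):

CREATE OR REPLACE FUNCTION get_item_ids(p_limit integer) RETURNS integer[] AS
$BODY$
BEGIN
    -- Collect the small result set straight into an array in one pass;
    -- no cursor loop, no temporary table.
    RETURN ARRAY(SELECT id FROM items ORDER BY id LIMIT p_limit);
END;
$BODY$
LANGUAGE plpgsql;

-- A caller can expand the array back into a row set where rows are needed:
SELECT unnest(get_item_ids(1000)) AS id;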
[ { "msg_contents": "Please do this small optimization if it is possible. It seem that the \noptimizer have the all information to create a fast plan but it does not \ndo that.\n\ncreate temp table t1 (id bigint, t bigint);\n\ninsert into t1 values (1, 1);\ninsert into t1 values (2, 2);\ninsert into t1 values (2, 3);\ninsert into t1 values (2, 4);\ninsert into t1 values (3, 5);\n\ncreate temp table t2 (id bigint, t bigint);\n\ninsert into t2 (id, t)\nselect g, 2\nfrom generate_series(1, 200) g;\n\ninsert into t2 (id, t)\nselect g, 3\nfrom generate_series(201, 300) g;\n\ninsert into t2 (id, t)\nselect g, 4\nfrom generate_series(301, 400) g;\n\ninsert into t2 (id, t)\nselect g, 1\nfrom generate_series(401, 100000) g;\n\ninsert into t2 (id, t)\nselect g, 5\nfrom generate_series(100001, 100100) g;\n\ncreate index t_idx on t2(t);\n\nanalyze t1;\nanalyze t2;\n\nexplain analyze\nselect *\nfrom t2\n join t1 on t1.t = t2.t\nwhere t1.t = 2\n\nexplain analyze\nselect *\nfrom t2\n join t1 on t1.t = t2.t\nwhere t1.id = 3\n\nexplain analyze\nselect *\nfrom t2\nwhere t2.t in (2, 3, 4)\n\n\nThese two queries are completely equal and optimizator should know it as \nI see from the plans:\n\n\"Hash Join (cost=1.09..2667.09 rows=75000 width=32) (actual \ntime=0.026..100.207 rows=400 loops=1)\"\n\" Hash Cond: (t2.t = t1.t)\"\n\" -> Seq Scan on t2 (cost=0.00..1541.00 rows=100000 width=16) (actual \ntime=0.007..47.083 rows=100000 loops=1)\"\n\" -> Hash (cost=1.05..1.05 rows=3 width=16) (actual time=0.011..0.011 \nrows=3 loops=1)\"\n\" -> Seq Scan on t1 (cost=0.00..1.05 rows=3 width=16) (actual \ntime=0.005..0.008 rows=3 loops=1)\" <-- HERE IS THE PROBLEM. IF THE \nESTIMATED COUNT = 1 OPTIMIZER BUILDS THE CORRECT FAST PLAN, BUT IF THE \nESTIMATION IS GREATER THAN 1 WE HAVE A PROBLEM\n\" Filter: (id = 2)\"\n\"Total runtime: 100.417 ms\"\n\n\"Nested Loop (cost=0.00..1024.46 rows=20020 width=32) (actual \ntime=0.030..0.222 rows=100 loops=1)\"\n\" -> Seq Scan on t1 (cost=0.00..1.05 rows=1 width=16) (actual \ntime=0.008..0.009 rows=1 loops=1)\"\n\" Filter: (id = 3)\"\n\" -> Index Scan using t_idx on t2 (cost=0.00..773.16 rows=20020 \nwidth=16) (actual time=0.016..0.078 rows=100 loops=1)\"\n\" Index Cond: (t2.t = t1.t)\"\n\"Total runtime: 0.296 ms\"\n\n\"Bitmap Heap Scan on t2 (cost=16.09..556.80 rows=429 width=16) (actual \ntime=0.067..0.256 rows=400 loops=1)\"\n\" Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))\"\n\" -> Bitmap Index Scan on t_idx (cost=0.00..15.98 rows=429 width=0) \n(actual time=0.056..0.056 rows=400 loops=1)\"\n\" Index Cond: (t = ANY ('{2,3,4}'::bigint[]))\"\n\"Total runtime: 0.458 ms\"\n\nAn ugly workaround is to add the column t1(t) in the table t2.\n", "msg_date": "Thu, 22 Apr 2010 18:25:46 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Optimization idea" }, { "msg_contents": "Vlad Arkhipov wrote:\n> Please do this small optimization if it is possible. It seem that the \n> optimizer have the all information to create a fast plan but it does \n> not do that.\n\nThis isn't strictly an optimization problem; it's an issue with \nstatistics the optimizer has to work with, the ones ANALYZE computes. \nYou noticed this yourself:\n\n> HERE IS THE PROBLEM. IF THE ESTIMATED COUNT = 1 OPTIMIZER BUILDS THE \n> CORRECT FAST PLAN, BUT IF THE ESTIMATION IS GREATER THAN 1 WE HAVE A \n> PROBLEM\n\nSee http://www.postgresql.org/docs/current/static/planner-stats.html for \nan intro to this area.\n\nYou didn't mention your PostgreSQL version. 
If you're running 8.3 or \nearlier, an increase to default_statistics_target might be in order to \nget more data about the distribution of data in the table, to reduce the \nodds of what you're seeing happening.\n\nI can't replicate your problem on the current development 9.0; all three \nplans come back with results quickly when I just tried it:\n\n Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual \ntime=0.049..0.959 rows=200 loops=1)\n -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual \ntime=0.013..0.016 rows=1 loops=1)\n Filter: (t = 2)\n -> Index Scan using t_idx on t2 (cost=0.00..47.66 rows=204 \nwidth=16) (actual time=0.029..0.352 rows=200 loops=1)\n Index Cond: (t2.t = 2)\n Total runtime: 1.295 ms\n\n Nested Loop (cost=0.00..1042.77 rows=20020 width=32) (actual \ntime=0.042..0.437 rows=100 loops=1)\n -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual \ntime=0.013..0.015 rows=1 loops=1)\n Filter: (id = 3)\n -> Index Scan using t_idx on t2 (cost=0.00..791.45 rows=20020 \nwidth=16) (actual time=0.022..0.164 rows=100 loops=1)\n Index Cond: (t2.t = t1.t)\n Total runtime: 0.608 ms\n\n Bitmap Heap Scan on t2 (cost=16.11..558.73 rows=433 width=16) (actual \ntime=0.095..0.674 rows=400 loops=1)\n Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))\n -> Bitmap Index Scan on t_idx (cost=0.00..16.00 rows=433 width=0) \n(actual time=0.075..0.075 rows=400 loops=1)\n Index Cond: (t = ANY ('{2,3,4}'::bigint[]))\n Total runtime: 1.213 ms\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 22 Apr 2010 08:37:32 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "Greg Smith пишет:\n> Vlad Arkhipov wrote:\n>> Please do this small optimization if it is possible. It seem that the \n>> optimizer have the all information to create a fast plan but it does \n>> not do that.\n>\n> This isn't strictly an optimization problem; it's an issue with \n> statistics the optimizer has to work with, the ones ANALYZE computes. 
\n> You noticed this yourself:\n>\nI don't think this is just an issue with statistics, because the same \nproblem arises when I try executing a query like this:\n\nexplain analyze\nselect *\nfrom t2\nwhere t2.t in (select 2 union select 3 union select 4) /* It works well \nif there is only one row in the subquery */\n\n\"Hash Semi Join (cost=0.17..2474.10 rows=60060 width=16) (actual \ntime=0.032..103.034 rows=400 loops=1)\"\n\" Hash Cond: (t2.t = (2))\"\n\" -> Seq Scan on t2 (cost=0.00..1543.00 rows=100100 width=16) (actual \ntime=0.007..47.856 rows=100100 loops=1)\"\n\" -> Hash (cost=0.13..0.13 rows=3 width=4) (actual time=0.019..0.019 \nrows=3 loops=1)\"\n\" -> HashAggregate (cost=0.07..0.10 rows=3 width=0) (actual \ntime=0.013..0.015 rows=3 loops=1)\"\n\" -> Append (cost=0.00..0.06 rows=3 width=0) (actual \ntime=0.001..0.007 rows=3 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.001..0.001 rows=1 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.000..0.000 rows=1 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.000..0.000 rows=1 loops=1)\"\n\"Total runtime: 103.244 ms\"\n\nvs\n\nexplain analyze\nselect *\nfrom t2\nwhere t2.t in (2, 3, 4)\n\n\"Bitmap Heap Scan on t2 (cost=15.53..527.91 rows=357 width=16) (actual \ntime=0.068..0.255 rows=400 loops=1)\"\n\" Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))\"\n\" -> Bitmap Index Scan on t_idx (cost=0.00..15.44 rows=357 width=0) \n(actual time=0.056..0.056 rows=400 loops=1)\"\n\" Index Cond: (t = ANY ('{2,3,4}'::bigint[]))\"\n\"Total runtime: 0.445 ms\"\n\nI also tried setting columns' statistics to 10000, nothing happened. \nPostgreSQL version is 8.4.2. It sounds good that there is no such issue \non PostgreSQL 9.0, i'll try it on the weekend.\n", "msg_date": "Fri, 23 Apr 2010 11:37:57 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "Greg Smith пишет:\n> I can't replicate your problem on the current development 9.0; all \n> three plans come back with results quickly when I just tried it:\n>\n> Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual \n> time=0.049..0.959 rows=200 loops=1)\n> -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual \n> time=0.013..0.016 rows=1 loops=1)\n> Filter: (t = 2)\n> -> Index Scan using t_idx on t2 (cost=0.00..47.66 rows=204 \n> width=16) (actual time=0.029..0.352 rows=200 loops=1)\n> Index Cond: (t2.t = 2)\n> Total runtime: 1.295 ms\n>\n> Nested Loop (cost=0.00..1042.77 rows=20020 width=32) (actual \n> time=0.042..0.437 rows=100 loops=1)\n> -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual \n> time=0.013..0.015 rows=1 loops=1)\n> Filter: (id = 3)\n> -> Index Scan using t_idx on t2 (cost=0.00..791.45 rows=20020 \n> width=16) (actual time=0.022..0.164 rows=100 loops=1)\n> Index Cond: (t2.t = t1.t)\n> Total runtime: 0.608 ms\n>\n> Bitmap Heap Scan on t2 (cost=16.11..558.73 rows=433 width=16) (actual \n> time=0.095..0.674 rows=400 loops=1)\n> Recheck Cond: (t = ANY ('{2,3,4}'::bigint[]))\n> -> Bitmap Index Scan on t_idx (cost=0.00..16.00 rows=433 width=0) \n> (actual time=0.075..0.075 rows=400 loops=1)\n> Index Cond: (t = ANY ('{2,3,4}'::bigint[]))\n> Total runtime: 1.213 ms\n>\n\nJust noticed a mistype in the first query. 
Here are the correct queries:\n\ncreate temp table t1 (id bigint, t bigint);\n\ninsert into t1 values (1, 1);\ninsert into t1 values (2, 2);\ninsert into t1 values (2, 3);\ninsert into t1 values (2, 4);\ninsert into t1 values (3, 5);\n\ncreate temp table t2 (id bigint, t bigint);\n\ninsert into t2 (id, t)\nselect g, 2\nfrom generate_series(1, 200) g;\n\ninsert into t2 (id, t)\nselect g, 3\nfrom generate_series(201, 300) g;\n\ninsert into t2 (id, t)\nselect g, 4\nfrom generate_series(301, 400) g;\n\ninsert into t2 (id, t)\nselect g, 1\nfrom generate_series(401, 100000) g;\n\ninsert into t2 (id, t)\nselect g, 5\nfrom generate_series(100001, 100100) g;\n\ncreate index t_idx on t2(t);\n\nanalyze t1;\nanalyze t2;\n\nexplain analyze\nselect *\nfrom t2\n join t1 on t1.t = t2.t\nwhere t1.id = 2;\n\nexplain analyze\nselect *\nfrom t2\n join t1 on t1.t = t2.t\nwhere t1.id = 3;\n\nexplain analyze\nselect *\nfrom t2\nwhere t2.t in (2, 3, 4);\n\nI've just tried these queries on PostgreSQL 9.0alpha4, nothing differs \nfrom PostgreSQL 8.4.\n", "msg_date": "Fri, 23 Apr 2010 13:13:49 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]> wrote:\n> I don't think this is just an issue with statistics, because the same\n> problem arises when I try executing a query like this:\n\nI'm not sure how you think this proves that it isn't a problem with\nstatistics, but I think what you should be focusing on here, looking\nback to your original email, is that the plans that are actually much\nfaster have almost as much estimated cost as the slower one. Since\nall your data is probably fully cached, at a first cut, I might try\nsetting random_page_cost and seq_page_cost to 0.005 or so, and\nadjusting effective_cache_size to something appropriate.\n\n...Robert\n", "msg_date": "Fri, 23 Apr 2010 07:05:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/4/23 Robert Haas <[email protected]>:\n> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]> wrote:\n>> I don't think this is just an issue with statistics, because the same\n>> problem arises when I try executing a query like this:\n>\n> I'm not sure how you think this proves that it isn't a problem with\n> statistics, but I think what you should be focusing on here, looking\n> back to your original email, is that the plans that are actually much\n> faster have almost as much estimated cost as the slower one.  
Since\n> all your data is probably fully cached, at a first cut, I might try\n> setting random_page_cost and seq_page_cost to 0.005 or so, and\n> adjusting effective_cache_size to something appropriate.\n\nthat will help worrect the situation, but the planner is loosing here I think.\n\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Fri, 23 Apr 2010 15:09:34 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain\n<[email protected]> wrote:\n> 2010/4/23 Robert Haas <[email protected]>:\n>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]> wrote:\n>>> I don't think this is just an issue with statistics, because the same\n>>> problem arises when I try executing a query like this:\n>>\n>> I'm not sure how you think this proves that it isn't a problem with\n>> statistics, but I think what you should be focusing on here, looking\n>> back to your original email, is that the plans that are actually much\n>> faster have almost as much estimated cost as the slower one.  Since\n>> all your data is probably fully cached, at a first cut, I might try\n>> setting random_page_cost and seq_page_cost to 0.005 or so, and\n>> adjusting effective_cache_size to something appropriate.\n>\n> that will help worrect the situation, but the planner is loosing here I think.\n\nWell, what do you think the planner should do differently?\n\n...Robert\n", "msg_date": "Fri, 23 Apr 2010 09:36:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "Cᅵdric Villemain<[email protected]> wrote:\n> 2010/4/23 Robert Haas <[email protected]>:\n \n>> Since all your data is probably fully cached, at a first cut, I\n>> might try setting random_page_cost and seq_page_cost to 0.005 or\n>> so, and adjusting effective_cache_size to something appropriate.\n> \n> that will help worrect the situation, but the planner is loosing\n> here I think.\n \nThe planner produces a lot of possible plans to produce the\nrequested results, and then calculates a cost for each. The lowest\ncost plan which will produce the correct results is the one chosen. \nIf your costing factors don't represent the reality of your\nenvironment, it won't pick the best plan for your environment.\n \n-Kevin\n", "msg_date": "Fri, 23 Apr 2010 08:41:01 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/4/23 Robert Haas <[email protected]>:\n> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain\n> <[email protected]> wrote:\n>> 2010/4/23 Robert Haas <[email protected]>:\n>>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]> wrote:\n>>>> I don't think this is just an issue with statistics, because the same\n>>>> problem arises when I try executing a query like this:\n>>>\n>>> I'm not sure how you think this proves that it isn't a problem with\n>>> statistics, but I think what you should be focusing on here, looking\n>>> back to your original email, is that the plans that are actually much\n>>> faster have almost as much estimated cost as the slower one.  
Since\n>>> all your data is probably fully cached, at a first cut, I might try\n>>> setting random_page_cost and seq_page_cost to 0.005 or so, and\n>>> adjusting effective_cache_size to something appropriate.\n>>\n>> that will help worrect the situation, but the planner is loosing here I think.\n>\n> Well, what do you think the planner should do differently?\n\nHere the planner just divide the number of rows in the t2 table by the\nnumber of distinct value of t1.t. this is the rows=20200 we can see in\nthe explains.\nIt seems it is normal, but it also looks to me that it can be improved.\nWhen estimating the rowcount to just num_rows/n_distinct, it *knows*\nthat this is wrong because the most_common_freqs of t2.t say that of\nthe 99600 rows have the value 1, or less than 200 in all other case.\nSo in every case the planner make (perhaps good) choice, but being\nsure its estimation are wrong.\nI wonder if we can improve the planner here.\n\nIn this case where the number of rows is lower than the stats\ntarget(in t1.t), perhaps the planner can improve its decision by going\na bit ahead and trying plan for each n_distinct values corresponding\nin t2.t .\n\nI haven't a very clear idea of how to do that, but it may be better if\nthe planner estimate if its plan is 100%(or lower, just an idea) sure\nto hapen and that's fine, else try another plan.\n\nin this test case, if the query is :\nselect *\nfrom t2\njoin t1 on t1.t = t2.t\nwhere t1.id = X;\n\nif X=1 then the planner has 20% of chance that the rowcount=99600 and\n80% that rowcount=200 or less, by providing a rowcount=20200 how can\nit find the good plan anyway ? Is it beter to start with bad\nestimation and perhaps find a good plan, or start with estimation\nwhich may be bad but lead to a good plan in more than XX% of the\ncases.\n\nSo, currently, the planner do as expected, but can we try another\napproach for those corner cases ?\n\n>\n> ...Robert\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Fri, 23 Apr 2010 21:22:08 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "On Fri, Apr 23, 2010 at 3:22 PM, Cédric Villemain\n<[email protected]> wrote:\n> 2010/4/23 Robert Haas <[email protected]>:\n>> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain\n>> <[email protected]> wrote:\n>>> 2010/4/23 Robert Haas <[email protected]>:\n>>>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]> wrote:\n>>>>> I don't think this is just an issue with statistics, because the same\n>>>>> problem arises when I try executing a query like this:\n>>>>\n>>>> I'm not sure how you think this proves that it isn't a problem with\n>>>> statistics, but I think what you should be focusing on here, looking\n>>>> back to your original email, is that the plans that are actually much\n>>>> faster have almost as much estimated cost as the slower one.  Since\n>>>> all your data is probably fully cached, at a first cut, I might try\n>>>> setting random_page_cost and seq_page_cost to 0.005 or so, and\n>>>> adjusting effective_cache_size to something appropriate.\n>>>\n>>> that will help worrect the situation, but the planner is loosing here I think.\n>>\n>> Well, what do you think the planner should do differently?\n>\n> Here the planner just divide the number of rows in the t2 table by the\n> number of distinct value of t1.t. 
this is the rows=20200 we can see in\n> the explains.\n> It seems it is normal, but it also looks to me that it can be improved.\n> When estimating the rowcount to just num_rows/n_distinct, it *knows*\n> that this is wrong because the most_common_freqs of t2.t say that of\n> the 99600 rows have the value 1, or less than 200 in all other case.\n> So in every case the planner make (perhaps good) choice, but being\n> sure its estimation are wrong.\n> I wonder if we can improve the planner here.\n>\n> In this case where the number of rows is lower than the stats\n> target(in t1.t), perhaps the planner can improve its decision by going\n> a bit ahead and trying plan for each n_distinct values corresponding\n> in t2.t .\n>\n> I haven't a very clear idea of how to do that, but it may be better if\n> the planner estimate if its plan is 100%(or lower, just an idea) sure\n> to hapen and that's fine, else  try another plan.\n>\n> in this test case, if the query is :\n> select *\n> from t2\n> join t1 on t1.t = t2.t\n> where t1.id = X;\n>\n> if X=1 then the planner has 20% of chance that the rowcount=99600 and\n> 80% that rowcount=200 or less, by providing a rowcount=20200 how can\n> it find the good plan anyway ? Is it beter to start with bad\n> estimation and perhaps find a good plan, or start with estimation\n> which may be bad but lead to a good plan in more than XX% of the\n> cases.\n>\n> So, currently, the planner do as expected, but can we try another\n> approach for those corner cases ?\n\nHmm. We currently have a heuristic that we don't record a value as an\nMCV unless it's more frequent than the average frequency. When the\nnumber of MCVs is substantially smaller than the number of distinct\nvalues in the table this is probably a good heuristic, since it\nprevents us from bothering with the recording of some values that are\nprobably only marginally more interesting than other values we don't\nhave space to record. But if ndistinct is less than the stats target\nwe could in theory record every value we find in the MCVs table and\nleave the histogram empty. Not sure if that would be better in\ngeneral, or not, but it's a thought.\n\n...Robert\n", "msg_date": "Fri, 23 Apr 2010 18:06:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Hmm. We currently have a heuristic that we don't record a value as an\n> MCV unless it's more frequent than the average frequency. When the\n> number of MCVs is substantially smaller than the number of distinct\n> values in the table this is probably a good heuristic, since it\n> prevents us from bothering with the recording of some values that are\n> probably only marginally more interesting than other values we don't\n> have space to record. But if ndistinct is less than the stats target\n> we could in theory record every value we find in the MCVs table and\n> leave the histogram empty.\n\nWhich, in fact, is exactly what we do. Cf analyze.c lines 2414ff\n(as of CVS HEAD). The heuristic you mention only gets applied after\nwe determine that a complete MCV list won't fit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Apr 2010 18:53:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea " }, { "msg_contents": "On Fri, Apr 23, 2010 at 6:53 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Hmm.  
We currently have a heuristic that we don't record a value as an\n>> MCV unless it's more frequent than the average frequency.  When the\n>> number of MCVs is substantially smaller than the number of distinct\n>> values in the table this is probably a good heuristic, since it\n>> prevents us from bothering with the recording of some values that are\n>> probably only marginally more interesting than other values we don't\n>> have space to record.  But if ndistinct is less than the stats target\n>> we could in theory record every value we find in the MCVs table and\n>> leave the histogram empty.\n>\n> Which, in fact, is exactly what we do.  Cf analyze.c lines 2414ff\n> (as of CVS HEAD).  The heuristic you mention only gets applied after\n> we determine that a complete MCV list won't fit.\n\nOh, hrmm. I guess I need to go try to understand this example again, then.\n\n...Robert\n", "msg_date": "Fri, 23 Apr 2010 19:09:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "\n> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]> wrote:\n> \n>> I don't think this is just an issue with statistics, because the same\n>> problem arises when I try executing a query like this:\n>> \n>\n> I'm not sure how you think this proves that it isn't a problem with\n> statistics, but I think what you should be focusing on here, looking\n> back to your original email, is that the plans that are actually much\n> faster have almost as much estimated cost as the slower one. Since\n> all your data is probably fully cached, at a first cut, I might try\n> setting random_page_cost and seq_page_cost to 0.005 or so, and\n> adjusting effective_cache_size to something appropriate.\n>\n> ...Robert\n>\n> \n\nOk. I thougth it's quite obvious because of these two queries. 
I can't\nunderstand why the estimated rows count is 40040 in the first plan.\n\ntest=# explain analyze select * from t2 join t1 on t1.t = t2.t where\nt1.t in (2,3,4);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\nHash Join (cost=1.09..2319.87 rows=40040 width=32) (actual\ntime=0.050..356.269 rows=400 loops=1)\n Hash Cond: (t2.t = t1.t)\n -> Seq Scan on t2 (cost=0.00..1543.00 rows=100100 width=16) (actual\ntime=0.013..176.087 rows=100100 loops=1)\n -> Hash (cost=1.07..1.07 rows=2 width=16) (actual time=0.023..0.023\nrows=3 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Seq Scan on t1 (cost=0.00..1.07 rows=2 width=16) (actual\ntime=0.006..0.014 rows=3 loops=1)\n Filter: (t = ANY ('{2,3,4}'::bigint[]))\nTotal runtime: 356.971 ms\n(8 rows)\n\ntest=# explain analyze select * from t2 join t1 on t1.t = t2.t where\nt1.t = 2 union all select * from t2 join t1 on t1.t = t2.t where t1.t =\n3 union all select * from t2 join t1 on t1.t = t2.t where t1.t = 4;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\nAppend (cost=0.00..112.42 rows=407 width=32) (actual time=0.048..3.487\nrows=400 loops=1)\n -> Nested Loop (cost=0.00..47.51 rows=197 width=32) (actual\ntime=0.045..1.061 rows=200 loops=1)\n -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual\ntime=0.011..0.014 rows=1 loops=1)\n Filter: (t = 2)\n -> Index Scan using t_idx on t2 (cost=0.00..44.48 rows=197\nwidth=16) (actual time=0.026..0.382 rows=200 loops=1)\n Index Cond: (pg_temp_2.t2.t = 2)\n -> Nested Loop (cost=0.00..32.67 rows=117 width=32) (actual\ntime=0.019..0.599 rows=100 loops=1)\n -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual\ntime=0.003..0.006 rows=1 loops=1)\n Filter: (t = 3)\n -> Index Scan using t_idx on t2 (cost=0.00..30.43 rows=117\nwidth=16) (actual time=0.010..0.211 rows=100 loops=1)\n Index Cond: (pg_temp_2.t2.t = 3)\n -> Nested Loop (cost=0.00..28.17 rows=93 width=32) (actual\ntime=0.017..0.534 rows=100 loops=1)\n -> Seq Scan on t1 (cost=0.00..1.06 rows=1 width=16) (actual\ntime=0.005..0.008 rows=1 loops=1)\n Filter: (t = 4)\n -> Index Scan using t_idx on t2 (cost=0.00..26.18 rows=93\nwidth=16) (actual time=0.007..0.187 rows=100 loops=1)\n Index Cond: (pg_temp_2.t2.t = 4)\nTotal runtime: 4.190 ms\n(17 rows)\n\n", "msg_date": "Mon, 26 Apr 2010 11:52:23 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/4/26 Vlad Arkhipov <[email protected]>:\n>\n>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov <[email protected]>\n>> wrote:\n>>\n>>>\n>>> I don't think this is just an issue with statistics, because the same\n>>> problem arises when I try executing a query like this:\n>>>\n>>\n>> I'm not sure how you think this proves that it isn't a problem with\n>> statistics, but I think what you should be focusing on here, looking\n>> back to your original email, is that the plans that are actually much\n>> faster have almost as much estimated cost as the slower one.  Since\n>> all your data is probably fully cached, at a first cut, I might try\n>> setting random_page_cost and seq_page_cost to 0.005 or so, and\n>> adjusting effective_cache_size to something appropriate.\n>>\n>> ...Robert\n>>\n>>\n>\n> Ok. I thougth it's quite obvious because of these two queries. 
I can't\n> understand why the estimated rows count is 40040 in the first plan.\n\nIn the first query, the planner doesn't use the information of the 2,3,4.\nIt just does a : I'll bet I'll have 2 rows in t1 (I think it should\nsay 3, but it doesn't)\nSo it divide the estimated number of rows in the t2 table by 5\n(different values) and multiply by 2 (rows) : 40040.\n\nIn the second query the planner use a different behavior : it did\nexpand the value of t1.t to t2.t for each join relation and find a\ncostless plan. (than the one using seqscan on t2)\n\nWe are here in corner case situation where n_distinc valuest <\nstatistics on the column and where we might be able to improve the\nplanner decision. I believe I have already read something on this\ntopic on -hackers...\n\n>\n> test=# explain analyze select * from t2 join t1 on t1.t = t2.t where\n> t1.t in (2,3,4);\n>                                                   QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------\n> Hash Join  (cost=1.09..2319.87 rows=40040 width=32) (actual\n> time=0.050..356.269 rows=400 loops=1)\n>  Hash Cond: (t2.t = t1.t)\n>  ->  Seq Scan on t2  (cost=0.00..1543.00 rows=100100 width=16) (actual\n> time=0.013..176.087 rows=100100 loops=1)\n>  ->  Hash  (cost=1.07..1.07 rows=2 width=16) (actual time=0.023..0.023\n> rows=3 loops=1)\n>        Buckets: 1024  Batches: 1  Memory Usage: 1kB\n>        ->  Seq Scan on t1  (cost=0.00..1.07 rows=2 width=16) (actual\n> time=0.006..0.014 rows=3 loops=1)\n>              Filter: (t = ANY ('{2,3,4}'::bigint[]))\n> Total runtime: 356.971 ms\n> (8 rows)\n>\n> test=# explain analyze select * from t2 join t1 on t1.t = t2.t where\n> t1.t = 2 union all select * from t2 join t1 on t1.t = t2.t where t1.t =\n> 3 union all select * from t2 join t1 on t1.t = t2.t where t1.t = 4;\n>                                                        QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Append  (cost=0.00..112.42 rows=407 width=32) (actual time=0.048..3.487\n> rows=400 loops=1)\n>  ->  Nested Loop  (cost=0.00..47.51 rows=197 width=32) (actual\n> time=0.045..1.061 rows=200 loops=1)\n>        ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual\n> time=0.011..0.014 rows=1 loops=1)\n>              Filter: (t = 2)\n>        ->  Index Scan using t_idx on t2  (cost=0.00..44.48 rows=197\n> width=16) (actual time=0.026..0.382 rows=200 loops=1)\n>              Index Cond: (pg_temp_2.t2.t = 2)\n>  ->  Nested Loop  (cost=0.00..32.67 rows=117 width=32) (actual\n> time=0.019..0.599 rows=100 loops=1)\n>        ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual\n> time=0.003..0.006 rows=1 loops=1)\n>              Filter: (t = 3)\n>        ->  Index Scan using t_idx on t2  (cost=0.00..30.43 rows=117\n> width=16) (actual time=0.010..0.211 rows=100 loops=1)\n>              Index Cond: (pg_temp_2.t2.t = 3)\n>  ->  Nested Loop  (cost=0.00..28.17 rows=93 width=32) (actual\n> time=0.017..0.534 rows=100 loops=1)\n>        ->  Seq Scan on t1  (cost=0.00..1.06 rows=1 width=16) (actual\n> time=0.005..0.008 rows=1 loops=1)\n>              Filter: (t = 4)\n>        ->  Index Scan using t_idx on t2  (cost=0.00..26.18 rows=93\n> width=16) (actual time=0.007..0.187 rows=100 loops=1)\n>              Index Cond: (pg_temp_2.t2.t = 4)\n> Total runtime: 4.190 ms\n> (17 rows)\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email 
protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Mon, 26 Apr 2010 11:33:29 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain\n<[email protected]> wrote:\n> In the first query, the planner doesn't use the information of the 2,3,4.\n> It just does a : I'll bet I'll have 2 rows in t1 (I think it should\n> say 3, but it doesn't)\n> So it divide the estimated number of rows in the t2 table by 5\n> (different values) and multiply by 2 (rows) : 40040.\n\nI think it's doing something more complicated. See scalararraysel().\n\n> In the second query the planner use a different behavior : it did\n> expand the value of t1.t to t2.t for each join relation and find a\n> costless plan. (than the one using seqscan on t2)\n\nI think the problem here is one we've discussed before: if the query\nplanner knows that something is true of x (like, say, x =\nANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that\nthe same thing holds of y (i.e. y = ANY('{2,3,4}') unless the thing\nthat is known to be true of x is that x is equal to some constant.\nTom doesn't think it would be worth the additional CPU time that it\nwould take to make these sorts of deductions. I'm not sure I believe\nthat, but I haven't tried to write the code, either.\n\n...Robert\n", "msg_date": "Tue, 27 Apr 2010 20:46:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/4/28 Robert Haas <[email protected]>:\n> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain\n> <[email protected]> wrote:\n>> In the first query, the planner doesn't use the information of the 2,3,4.\n>> It just does a : I'll bet I'll have 2 rows in t1 (I think it should\n>> say 3, but it doesn't)\n>> So it divide the estimated number of rows in the t2 table by 5\n>> (different values) and multiply by 2 (rows) : 40040.\n>\n> I think it's doing something more complicated.  See scalararraysel().\n\nThank you for driving me to the right function, Robert.\nIt is in fact more complicated :)\n\n>\n>> In the second query the planner use a different behavior : it did\n>> expand the value of t1.t to t2.t for each join relation and find a\n>> costless plan. (than the one using seqscan on t2)\n>\n> I think the problem here is one we've discussed before: if the query\n> planner knows that something is true of x (like, say, x =\n> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that\n> the same thing holds of y (i.e. y = ANY('{2,3,4}') unless the thing\n> that is known to be true of x is that x is equal to some constant.\n> Tom doesn't think it would be worth the additional CPU time that it\n> would take to make these sorts of deductions.  I'm not sure I believe\n> that, but I haven't tried to write the code, either.\n\nIf I understand correctly, I did have some issues with\nexclusion_constraint= ON for complex queries in datamining where the\nplanner failled to understand it must use only one partition because\nthe where clause where not enough 'explicit'. But it's long time ago\nand I don't have my use case.\n\nWe probably need to find some real case where the planner optimisation\nmake sense. 
But I don't want usual queries to see their CPU time\nincrease...\n<joke>Do we need real Planner Hints ?</joke>\n\n-- \nCédric Villemain\n", "msg_date": "Wed, 28 Apr 2010 09:49:05 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "\n> 2010/4/28 Robert Haas <[email protected]>:\n> \n>> On Mon, Apr 26, 2010 at 5:33 AM, C�dric Villemain\n>> <[email protected]> wrote:\n>> \n>>> In the first query, the planner doesn't use the information of the 2,3,4.\n>>> It just does a : I'll bet I'll have 2 rows in t1 (I think it should\n>>> say 3, but it doesn't)\n>>> So it divide the estimated number of rows in the t2 table by 5\n>>> (different values) and multiply by 2 (rows) : 40040.\n>>> \n>> I think it's doing something more complicated. See scalararraysel().\n>> \n>\n> Thank you for driving me to the right function, Robert.\n> It is in fact more complicated :)\n>\n> \n>>> In the second query the planner use a different behavior : it did\n>>> expand the value of t1.t to t2.t for each join relation and find a\n>>> costless plan. (than the one using seqscan on t2)\n>>> \n>> I think the problem here is one we've discussed before: if the query\n>> planner knows that something is true of x (like, say, x =\n>> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that\n>> the same thing holds of y (i.e. y = ANY('{2,3,4}') unless the thing\n>> that is known to be true of x is that x is equal to some constant.\n>> Tom doesn't think it would be worth the additional CPU time that it\n>> would take to make these sorts of deductions. I'm not sure I believe\n>> that, but I haven't tried to write the code, either.\n>> \n>\n> If I understand correctly, I did have some issues with\n> exclusion_constraint= ON for complex queries in datamining where the\n> planner failled to understand it must use only one partition because\n> the where clause where not enough 'explicit'. But it's long time ago\n> and I don't have my use case.\n>\n> We probably need to find some real case where the planner optimisation\n> make sense. But I don't want usual queries to see their CPU time\n> increase...\n> <joke>Do we need real Planner Hints ?</joke>\n>\n> \nEven if it will be done it does not solve the original issue. If I\nunderstood you right there is now no any decent way of speeding up the query\n\nselect *\nfrom t2\njoin t1 on t1.t = t2.t\nwhere t1.id = X;\n\nexcept of the propagating the t1.id value to the table t2 and createing\nand index for this column? Then the query will look like\n\nselect *\nfrom t2\nwhere t1_id = X;\n\n", "msg_date": "Wed, 28 Apr 2010 18:38:20 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov <[email protected]> wrote:\n> Even if it will be done it does not solve the original issue. 
If I\n> understood you right there is now no any decent way of speeding up the query\n>\n> select *\n> from t2\n> join t1 on t1.t = t2.t\n> where t1.id = X;\n>\n> except of the propagating the t1.id value to the table t2 and createing and\n> index for this column?\n\nNo, what I'm saying is that if X is any ANY() expression, you can get\na faster plan in this case by writing:\n\nSELECT * FROM t2 JOIN t1 ON t1.t = t2.t WHERE t2.id = X;\n\nFor me this is about 8x faster.\n\n...Robert\n", "msg_date": "Wed, 28 Apr 2010 23:17:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/4/29 Robert Haas <[email protected]>:\n> On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov <[email protected]> wrote:\n>> Even if it will be done it does not solve the original issue. If I\n>> understood you right there is now no any decent way of speeding up the query\n>>\n>> select *\n>> from t2\n>> join t1 on t1.t = t2.t\n>> where t1.id = X;\n>>\n>> except of the propagating the t1.id value to the table t2 and createing and\n>> index for this column?\n>\n> No, what I'm saying is that if X is any ANY() expression, you can get\n> a faster plan in this case by writing:\n>\n> SELECT * FROM t2 JOIN t1 ON t1.t = t2.t WHERE t2.id = X;\n\nSELECT * FROM t2 JOIN t1 ON t1.t = t2.t WHERE t2.t = X;\n\nside note : You might want/need to improve statistics in the column\nt2.t (in situation/distribution like this one)\n\n>\n> For me this is about 8x faster.\n>\n> ...Robert\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Thu, 29 Apr 2010 10:21:13 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/4/28 Robert Haas <[email protected]>:\n> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain\n> <[email protected]> wrote:\n>> In the first query, the planner doesn't use the information of the 2,3,4.\n>> It just does a : I'll bet I'll have 2 rows in t1 (I think it should\n>> say 3, but it doesn't)\n>> So it divide the estimated number of rows in the t2 table by 5\n>> (different values) and multiply by 2 (rows) : 40040.\n>\n> I think it's doing something more complicated.  See scalararraysel().\n>\n>> In the second query the planner use a different behavior : it did\n>> expand the value of t1.t to t2.t for each join relation and find a\n>> costless plan. (than the one using seqscan on t2)\n>\n> I think the problem here is one we've discussed before: if the query\n> planner knows that something is true of x (like, say, x =\n> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that\n> the same thing holds of y (i.e. y = ANY('{2,3,4}') unless the thing\n> that is known to be true of x is that x is equal to some constant.\n> Tom doesn't think it would be worth the additional CPU time that it\n> would take to make these sorts of deductions.  
I'm not sure I believe\n> that, but I haven't tried to write the code, either.\n\nRelative to this too :\nhttp://archives.postgresql.org/pgsql-general/2010-05/msg00009.php ?\n\n>\n> ...Robert\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Sat, 1 May 2010 12:52:35 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" }, { "msg_contents": "2010/5/1 Cédric Villemain <[email protected]>:\n> 2010/4/28 Robert Haas <[email protected]>:\n>> On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain\n>> <[email protected]> wrote:\n>>> In the first query, the planner doesn't use the information of the 2,3,4.\n>>> It just does a : I'll bet I'll have 2 rows in t1 (I think it should\n>>> say 3, but it doesn't)\n>>> So it divide the estimated number of rows in the t2 table by 5\n>>> (different values) and multiply by 2 (rows) : 40040.\n>>\n>> I think it's doing something more complicated.  See scalararraysel().\n>>\n>>> In the second query the planner use a different behavior : it did\n>>> expand the value of t1.t to t2.t for each join relation and find a\n>>> costless plan. (than the one using seqscan on t2)\n>>\n>> I think the problem here is one we've discussed before: if the query\n>> planner knows that something is true of x (like, say, x =\n>> ANY('{2,3,4}')) and it also knows that x = y, it doesn't infer that\n>> the same thing holds of y (i.e. y = ANY('{2,3,4}') unless the thing\n>> that is known to be true of x is that x is equal to some constant.\n>> Tom doesn't think it would be worth the additional CPU time that it\n>> would take to make these sorts of deductions.  I'm not sure I believe\n>> that, but I haven't tried to write the code, either.\n>\n> Relative to this too :\n> http://archives.postgresql.org/pgsql-general/2010-05/msg00009.php  ?\n\nnot, sorry ,misread about prepared statement in the other thread ...\n\n>\n>>\n>> ...Robert\n>>\n>\n>\n>\n> --\n> Cédric Villemain\n>\n\n\n\n-- \nCédric Villemain\n", "msg_date": "Sat, 1 May 2010 14:00:32 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization idea" } ]
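Drawing the thread's two workarounds together against its t1/t2 test tables, a hedged sketch (the statistics target of 1000 is an arbitrary example value, not a figure given by the posters):

-- Let ANALYZE keep a fuller per-value frequency list for the skewed column:
ALTER TABLE t2 ALTER COLUMN t SET STATISTICS 1000;
ANALYZE t2;

-- Filtering on t2.t directly, as suggested near the end of the thread,
-- lets the planner use those per-value frequencies and the index on t2(t):
SELECT *
FROM t2
JOIN t1 ON t1.t = t2.t
WHERE t2.t IN (2, 3, 4);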
[ { "msg_contents": "Hello, everybody!\n\nI'm using PostgreSQL 8.4.3, compiled by Visual C++ build 1400, 32-bit on\nWindows XP SP3.\nI use following data model for issue reproducing.\n\nCREATE TABLE test1\n(\n id integer NOT NULL,\n \"value\" double precision,\n CONSTRAINT test1_pkey PRIMARY KEY (id)\n);\n\nCREATE INDEX test1_value_idx ON test1(value);\n\nCREATE TABLE test2\n(\n id integer NOT NULL,\n id1 integer NOT NULL REFERENCES test2 (id),\n \"value\" double precision,\n CONSTRAINT test2_pkey PRIMARY KEY (id)\n);\n\nCREATE INDEX test2_id1_value_idx ON test2(id1, value);\n\nFollowing statements generate 200 rows of test data into test1 table and\n1000000 rows of test data into test2 table.\n\nINSERT INTO test1 (id, value) (SELECT x, random() from\ngenerate_series(1,200) x);\n\nINSERT INTO test2 (id, id1, value) (SELECT x, (random()*200)::int + 1,\nrandom() from generate_series(1,1000000) x);\n\nThe following statements return top 10 rows from joining of two tables\nordered by table1.value and table2.value. The\n\nSELECT t1.value AS value1, t2.value AS value2 FROM test1 t1 JOIN test2 t2 ON\nt2.id1 = t1.id ORDER BY t1.value, t2.value\nLIMIT 10\n\n value1 | value2\n---------------------+---------------------\n 0.00562104489654303 | 0.00039379671216011\n 0.00562104489654303 | 0.000658359378576279\n 0.00562104489654303 | 0.000668979249894619\n 0.00562104489654303 | 0.000768951140344143\n 0.00562104489654303 | 0.00121330376714468\n 0.00562104489654303 | 0.00122168939560652\n 0.00562104489654303 | 0.00124016962945461\n 0.00562104489654303 | 0.00134057039394975\n 0.00562104489654303 | 0.00169069319963455\n 0.00562104489654303 | 0.00171623658388853\n(10 rows)\n\nThe statement plan doesn't use indexes. So the statement it is slow.\n\nLimit (cost=50614.88..50614.91 rows=10 width=16) (actual\ntime=8388.331..8388.382 rows=10 loops=1)\n -> Sort (cost=50614.88..53102.45 rows=995025 width=16) (actual\ntime=8388.324..8388.340 rows=10 loops=1)\n Sort Key: t1.value, t2.value\n Sort Method: top-N heapsort Memory: 17kB\n -> Hash Join (cost=6.50..29112.75 rows=995025 width=16) (actual\ntime=0.982..6290.516 rows=997461 loops=1)\n Hash Cond: (t2.id1 = t1.id)\n -> Seq Scan on test2 t2 (cost=0.00..15406.00 rows=1000000\nwidth=12) (actualtime=0.088..2047.910 rows=1000000 loops=1)\n -> Hash (cost=4.00..4.00 rows=200 width=12) (actual\ntime=0.870..0.870 rows=200 loops=1)\n -> Seq Scan on test1 t1 (cost=0.00..4.00 rows=200\nwidth=12) (actual time=0.010..0.428 rows=200 loops=1)\nTotal runtime: 8388.473 ms\n\nI can remove ordering by test2.value.\n\nSELECT t1.value AS value1, t2.value AS value2 FROM test1 t1 JOIN test2 t2 ON\nt2.id1 = t1.id ORDER BY t1.value LIMIT 10\n\nThen the result is the same.\n\n value1 | value2\n---------------------+---------------------\n 0.00562104489654303 | 0.00039379671216011\n 0.00562104489654303 | 0.000658359378576279\n 0.00562104489654303 | 0.000668979249894619\n 0.00562104489654303 | 0.000768951140344143\n 0.00562104489654303 | 0.00121330376714468\n 0.00562104489654303 | 0.00122168939560652\n 0.00562104489654303 | 0.00124016962945461\n 0.00562104489654303 | 0.00134057039394975\n 0.00562104489654303 | 0.00169069319963455\n 0.00562104489654303 | 0.00171623658388853\n(10 rows)\n\nThe statement plan uses indexes and statement runs fast. 
This plan is\nexactly what I need.\n\nLimit (cost=0.00..0.62 rows=10 width=16) (actual time=0.049..0.148 rows=10\nloops=1)\n -> Nested Loop (cost=0.00..62109.86 rows=995025 width=16) (actual\ntime=0.044..0.107 rows=10 loops=1)\n -> Index Scan using test1_value_idx on test1 t1 (cost=0.00..19.19\nrows=200 width=12) (actual time=0.017..0.017 rows=1 loops=1)\n -> Index Scan using test2_id1_value_idx on test2 t2\n(cost=0.00..248.27 rows=4975 width=12) (actual time=0.013..0.042 rows=10\nloops=1)\n Index Cond: (t2.id1 = t1.id)\nTotal runtime: 0.224 ms\n\nSo PostgreSQL planner can produce the plan I need but it doesn't produce\nthis plan when I specify particular second ordering column. So is there any\nway to make planner produce desired plan when particular second ordering\ncolumn is specified?\n\nWith best regards,\nKorotkov Alexander.\n", "msg_date": "Sun, 25 Apr 2010 22:22:29 +0400", "msg_from": "=?KOI8-R?B?68/Sz9TLz9cg4czFy9PBzsTS?= <[email protected]>", "msg_from_op": true, "msg_subject": "Planner issue on sorting joining of two tables with limit" },
{ "msg_contents": "=?KOI8-R?B?68/Sz9TLz9cg4czFy9PBzsTS?= <[email protected]> writes:\n> So PostgreSQL planner can produce the plan I need but it doesn't produce\n> this plan when I specify particular second ordering column.\n\nWell, no, because that plan wouldn't produce the specified ordering;\nor at least it would be a lucky coincidence if it did. It's only\nsorting on t1.value.\n\n> So is there any\n> way to make planner produce desired plan when particular second ordering\n> column is specified?\n\nNot when the ordering columns come from two different tables. (If they\nwere in the same table then scanning a two-column index could produce\nthe demanded sort order.) 
I don't see any way to satisfy this query\nwithout an explicit sort step, which means it has to look at the whole\njoin output.\n\nIf you're willing to make assumptions like \"the required 10 rows will be\nwithin the first 100 t1.value rows\" then you could nest an ORDER BY\nt1.value LIMIT 100 query inside something that did an ORDER BY with both\ncolumns. But this fails if you have too many duplicate t1.value values,\nand your test case suggests that you might have a lot of them. In any\ncase it would stop being fast if you make the inner LIMIT very large.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Apr 2010 12:51:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue on sorting joining of two tables with limit " }, { "msg_contents": ">\n> Well, no, because that plan wouldn't produce the specified ordering;\n> or at least it would be a lucky coincidence if it did. It's only\n> sorting on t1.value.\n>\nI just don't find why it is coincidence. I think that such plan will always\nproduce result ordered by two columns, because such nested index scan always\nproduce this result.\nLet's consider some simple example in order to illustrate how this plan\nworks.\n\nt1\nid | value\n---+------\n1 | 0.1\n2 | 0.3\n3 | 0.2\n\nt2\nid | id1 | value\n---+-----+------\n1 | 2 | 0.2\n2 | 1 | 0.9\n3 | 2 | 0.6\n4 | 1 | 0.7\n5 | 1 | 0.4\n6 | 3 | 0.2\n\n1) The outer index scan will find the row of t1 with least value using\ntest1_value_idx. It will be row (1, 0.1)\n2) The inner index scan will find all the rows in t2 where id1 = 1 using\ntest2_id1_value_idx. But index test2_id1_value_idx have second order by\nvalue column and the index scan result will be ordered by value. That's why\ninner index scan will find rows (5, 1, 0.4), (4, 1, 0.7) and (2, 1, 0.9).\nAnd following query output will be produced:\nvalue1 | value2\n--------+-------\n0.1 | 0.4\n0.1 | 0.7\n0.1 | 0.9\n3) The outer index scan will find the row of t1 with the second value using\ntest1_value_idx. It will be row (3, 0.2)\n4) The inner index scan will find all the rows in t2 where id1 = 3 using\ntest2_id1_value_idx. This row is (6, 3, 0.2). The following query output\nwill be produced:\nvalue1 | value2\n--------+-------\n0.2 | 0.2\n5) The outer index scan will find the row of t1 with the third value using\ntest1_value_idx. It will be row (2, 0.3)\n6) The inner index scan will find all the rows in t2 where id1 = 2 using\ntest2_id1_value_idx. These rows are (1, 2, 0.2) and (3, 2, 0.6). 
The\nfollowing query output will be produced:\nvalue1 | value2\n--------+-------\n0.3 | 0.2\n0.3 | 0.6\n\nAnd the whole query result is:\nvalue1 | value2\n--------+-------\n0.1 | 0.4\n0.1 | 0.7\n0.1 | 0.9\n0.2 | 0.2\n0.3 | 0.2\n0.3 | 0.6\n\nAnd this result is really ordered by t1.value, t2.value.\nI can't find error in my reasoning :)\n\nThe query without limit produce similar plan.\n\nEXPLAIN SELECT t1.value AS value1, t2.value AS value2 FROM test1 t1 JOIN\ntest2 t2 ON t2.id1 = t1.id ORDER BY t1.value\n\nNested Loop (cost=0.00..62109.86 rows=995025 width=16)\n -> Index Scan using test1_value_idx on test1 t1 (cost=0.00..19.19\nrows=200 width=12)\n -> Index Scan using test2_id1_value_idx on test2 t2 (cost=0.00..248.27\nrows=4975 width=12)\n Index Cond: (t2.id1 = t1.id)\n\nAnd I checked that the result is ordered by t1.value and t2.value.\n\n2010/4/26 Tom Lane <[email protected]>\n\n> =?KOI8-R?B?68/Sz9TLz9cg4czFy9PBzsTS?= <[email protected]> writes:\n> > So PostgreSQL planner can produce the plan I need but it doesn't produce\n> > this plan when I specify particular second ordering column.\n>\n> Well, no, because that plan wouldn't produce the specified ordering;\n> or at least it would be a lucky coincidence if it did. It's only\n> sorting on t1.value.\n>\n> > So is there any\n> > way to make planner produce desired plan when particular second ordering\n> > column is specified?\n>\n> Not when the ordering columns come from two different tables. (If they\n> were in the same table then scanning a two-column index could produce\n> the demanded sort order.) I don't see any way to satisfy this query\n> without an explicit sort step, which means it has to look at the whole\n> join output.\n>\n> If you're willing to make assumptions like \"the required 10 rows will be\n> within the first 100 t1.value rows\" then you could nest an ORDER BY\n> t1.value LIMIT 100 query inside something that did an ORDER BY with both\n> columns. But this fails if you have too many duplicate t1.value values,\n> and your test case suggests that you might have a lot of them. In any\n> case it would stop being fast if you make the inner LIMIT very large.\n>\n> regards, tom lane\n>\n", "msg_date": "Mon, 3 May 2010 17:57:06 +0400", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue on sorting joining of two tables with\n\tlimit" },
{ "msg_contents": "I found my mistake. My supposition is working only if value column in t1\ntable is unique. But if I replace the index by unique one then plan is the\nsame.\n\nOn Mon, May 3, 2010 at 5:57 PM, Alexander Korotkov <[email protected]>wrote:\n\n> Well, no, because that plan wouldn't produce the specified ordering;\n>> or at least it would be a lucky coincidence if it did. It's only\n>> sorting on t1.value.\n>>\n> I just don't find why it is coincidence. 
I think that such plan will always\n> produce result ordered by two columns, because such nested index scan always\n> produce this result.\n>\n> Let's consider some simple example in order to illustrate how this plan\n> works.\n>\n> t1\n> id | value\n> ---+------\n> 1 | 0.1\n> 2 | 0.3\n> 3 | 0.2\n>\n> t2\n> id | id1 | value\n> ---+-----+------\n> 1 | 2 | 0.2\n> 2 | 1 | 0.9\n> 3 | 2 | 0.6\n> 4 | 1 | 0.7\n> 5 | 1 | 0.4\n> 6 | 3 | 0.2\n>\n> 1) The outer index scan will find the row of t1 with least value using\n> test1_value_idx. It will be row (1, 0.1)\n> 2) The inner index scan will find all the rows in t2 where id1 = 1 using\n> test2_id1_value_idx. But index test2_id1_value_idx have second order by\n> value column and the index scan result will be ordered by value. That's why\n> inner index scan will find rows (5, 1, 0.4), (4, 1, 0.7) and (2, 1, 0.9).\n> And following query output will be produced:\n>\n> value1 | value2\n> --------+-------\n> 0.1 | 0.4\n> 0.1 | 0.7\n> 0.1 | 0.9\n> 3) The outer index scan will find the row of t1 with the second value using\n> test1_value_idx. It will be row (3, 0.2)\n> 4) The inner index scan will find all the rows in t2 where id1 = 3 using\n> test2_id1_value_idx. This row is (6, 3, 0.2). The following query output\n> will be produced:\n>\n> value1 | value2\n> --------+-------\n> 0.2 | 0.2\n> 5) The outer index scan will find the row of t1 with the third value using\n> test1_value_idx. It will be row (2, 0.3)\n> 6) The inner index scan will find all the rows in t2 where id1 = 2 using\n> test2_id1_value_idx. These rows are (1, 2, 0.2) and (3, 2, 0.6). The\n> following query output will be produced:\n>\n> value1 | value2\n> --------+-------\n> 0.3 | 0.2\n> 0.3 | 0.6\n>\n> And the whole query result is:\n> value1 | value2\n> --------+-------\n> 0.1 | 0.4\n> 0.1 | 0.7\n> 0.1 | 0.9\n> 0.2 | 0.2\n> 0.3 | 0.2\n> 0.3 | 0.6\n>\n> And this result is really ordered by t1.value, t2.value.\n> I can't find error in my reasoning :)\n>\n> The query without limit produce similar plan.\n>\n> EXPLAIN SELECT t1.value AS value1, t2.value AS value2 FROM test1 t1 JOIN\n> test2 t2 ON t2.id1 = t1.id ORDER BY t1.value\n>\n> Nested Loop (cost=0.00..62109.86 rows=995025 width=16)\n> -> Index Scan using test1_value_idx on test1 t1 (cost=0.00..19.19\n> rows=200 width=12)\n> -> Index Scan using test2_id1_value_idx on test2 t2 (cost=0.00..248.27\n> rows=4975 width=12)\n> Index Cond: (t2.id1 = t1.id)\n>\n> And I checked that the result is ordered by t1.value and t2.value.\n>\n> 2010/4/26 Tom Lane <[email protected]>\n>\n>> =?KOI8-R?B?68/Sz9TLz9cg4czFy9PBzsTS?= <[email protected]> writes:\n>>\n>> > So PostgreSQL planner can produce the plan I need but it doesn't produce\n>> > this plan when I specify particular second ordering column.\n>>\n>> Well, no, because that plan wouldn't produce the specified ordering;\n>> or at least it would be a lucky coincidence if it did. It's only\n>> sorting on t1.value.\n>>\n>> > So is there any\n>> > way to make planner produce desired plan when particular second ordering\n>> > column is specified?\n>>\n>> Not when the ordering columns come from two different tables. (If they\n>> were in the same table then scanning a two-column index could produce\n>> the demanded sort order.) 
I don't see any way to satisfy this query\n>> without an explicit sort step, which means it has to look at the whole\n>> join output.\n>>\n>> If you're willing to make assumptions like \"the required 10 rows will be\n>> within the first 100 t1.value rows\" then you could nest an ORDER BY\n>> t1.value LIMIT 100 query inside something that did an ORDER BY with both\n>> columns. But this fails if you have too many duplicate t1.value values,\n>> and your test case suggests that you might have a lot of them. In any\n>> case it would stop being fast if you make the inner LIMIT very large.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nI found my mistake. My supposition is working only if value column in t1 table is unique. But if I replace the index by unique one then plan is the same.On Mon, May 3, 2010 at 5:57 PM, Alexander Korotkov <[email protected]> wrote:\n\n\nWell, no, because that plan wouldn't produce \nthe specified ordering;\n\nor at least it would be a lucky coincidence if it did.  It's only\nsorting on t1.value.I just don't find why it is coincidence. I think that such \nplan will \nalways produce result ordered by two columns, because such nested index \nscan always produce this result.\nLet's consider some simple example in order to illustrate how this plan \nworks.t1id | value---+------1  | 0.12  | 0.33 \n\n | 0.2t2id | id1 | value---+-----+------1  |  2  | \n0.2\n2  |  1  | 0.93  |  2  | 0.64  |  1  | 0.75  |  1  | 0.46 \n\n |  3  | 0.21) The outer index scan will find the row of t1 with\n least value using test1_value_idx. It will be row (1, 0.1)2) The \ninner index scan will find all the rows in t2 where id1 = 1 using \ntest2_id1_value_idx. But index test2_id1_value_idx have second order by \nvalue column and the index scan result will be ordered by value. That's \nwhy inner index scan will find rows (5, 1, 0.4), (4, 1, 0.7) and (2, 1, \n0.9). And following query output will be produced:\nvalue1  | value2--------+-------0.1     | 0.40.1     | 0.70.1    \n\n | 0.93) The outer index scan will find the row of t1 with the \nsecond value using test1_value_idx. It will be row (3, 0.2)4) The \ninner index scan will find all the rows in t2 where id1 = 3 using \ntest2_id1_value_idx. This row is (6, 3, 0.2). The following query output\n will be produced:\nvalue1  | value2--------+-------0.2     | 0.25) The outer \nindex scan will find the row of t1 with the third value using \ntest1_value_idx. It will be row (2, 0.3)6) The inner index scan will\n find all the rows in t2 where id1 = 2 using test2_id1_value_idx. These \nrows are (1, 2, 0.2) and (3, 2, 0.6). 
The following query output will be\n produced:\nvalue1  | value2--------+-------0.3     | 0.20.3     | 0.6And\n\n the whole query result is:value1  | value2--------+-------0.1    \n\n | 0.40.1     | 0.70.1     | 0.90.2     | 0.20.3     | \n0.2\n0.3     | 0.6And this result is really ordered by t1.value, \nt2.value.I can't find error in my reasoning :)The query \nwithout limit produce similar plan.EXPLAIN SELECT \nt1.value AS \nvalue1, t2.value AS value2 FROM test1 t1 JOIN test2 t2 ON t2.id1 = t1.id ORDER BY t1.value\nNested Loop  (cost=0.00..62109.86 rows=995025 width=16)  ->  Index Scan using test1_value_idx on test1 t1  \n(cost=0.00..19.19 rows=200 width=12) \n ->  \nIndex Scan using test2_id1_value_idx on test2 t2  (cost=0.00..248.27 \nrows=4975 width=12)\n        Index Cond: (t2.id1 = t1.id)And\nI checked that the result is ordered by t1.value and t2.value.2010/4/26 Tom Lane <[email protected]>\n\n=?KOI8-R?B?68/Sz9TLz9cg4czFy9PBzsTS?= <[email protected]> writes:\n> So PostgreSQL planner can produce the plan I need but it doesn't produce\n> this plan when I specify particular second ordering column.\n\nWell, no, because that plan wouldn't produce the specified ordering;\nor at least it would be a lucky coincidence if it did.  It's only\nsorting on t1.value.\n\n> So is there any\n> way to make planner produce desired plan when particular second ordering\n> column is specified?\n\nNot when the ordering columns come from two different tables.  (If they\nwere in the same table then scanning a two-column index could produce\nthe demanded sort order.)  I don't see any way to satisfy this query\nwithout an explicit sort step, which means it has to look at the whole\njoin output.\n\nIf you're willing to make assumptions like \"the required 10 rows will be\nwithin the first 100 t1.value rows\" then you could nest an ORDER BY\nt1.value LIMIT 100 query inside something that did an ORDER BY with both\ncolumns.  But this fails if you have too many duplicate t1.value values,\nand your test case suggests that you might have a lot of them.  In any\ncase it would stop being fast if you make the inner LIMIT very large.\n\n                        regards, tom lane", "msg_date": "Thu, 6 May 2010 10:54:01 +0400", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue on sorting joining of two tables with\n\tlimit" }, { "msg_contents": "Alexander Korotkov <[email protected]> wrote:\n> Alexander Korotkov <[email protected]> wrote:\n \n>>> Well, no, because that plan wouldn't produce the specified\n>>> ordering; or at least it would be a lucky coincidence if it did.\n>>> It's only sorting on t1.value.\n>>>\n>> I just don't find why it is coincidence. I think that such plan\n>> will always produce result ordered by two columns, because such\n>> nested index scan always produce this result.\n \nAssuming a nested index scan, or any particular plan, is unwise. \nNew data or just the \"luck of the draw\" on your next ANALYZE could\nresult in a totally different plan which wouldn't produce the same\nordering unless specified.\n \n> I found my mistake. My supposition is working only if value column\n> in t1 table is unique. But if I replace the index by unique one\n> then plan is the same.\n \nYeah, maybe, for the moment. When you have ten times the quantity\nof data, a completely different plan may be chosen. If you want a\nparticular order, ask for it. 
The planner will even take the\nrequested ordering into account when choosing a plan, so the cutoff\nfor switching to an in-memory hash table or a bitmap index scan\nmight shift a bit based on the calculated cost of sorting data.\n \n-Kevin\n", "msg_date": "Fri, 07 May 2010 10:27:02 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue on sorting joining of two tables\n\t with limit" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Alexander Korotkov <[email protected]> wrote:\n>>> I just don't find why it is coincidence. I think that such plan\n>>> will always produce result ordered by two columns, because such\n>>> nested index scan always produce this result.\n \n> Assuming a nested index scan, or any particular plan, is unwise. \n\nI think he's proposing that the planner should recognize that a plan\nof this type produces a result sorted by the additional index columns.\nI'm not convinced either that the sortedness property really holds,\nor that it would be worth the extra planning effort to check for;\nbut it's not a fundamentally misguided proposal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 May 2010 11:35:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue on sorting joining of two tables with limit " }, { "msg_contents": "On Fri, May 7, 2010 at 11:35 AM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Alexander Korotkov <[email protected]> wrote:\n>>>> I just don't find why it is coincidence. I think that such plan\n>>>> will always produce result ordered by two columns, because such\n>>>> nested index scan always produce this result.\n>\n>> Assuming a nested index scan, or any particular plan, is unwise.\n>\n> I think he's proposing that the planner should recognize that a plan\n> of this type produces a result sorted by the additional index columns.\n> I'm not convinced either that the sortedness property really holds,\n> or that it would be worth the extra planning effort to check for;\n> but it's not a fundamentally misguided proposal.\n\nI took a slightly different point - a nested loop will be ordered by\nthe ordering of the outer side and then, within that, the ordering of\nthe inner side, presuming (not quite sure how to phrase this) that the\nouter side is \"unique enough\" with respect to the ordering. I'm not\ntoo sure whether there's anything useful we can do with this\ninformation in a reasonable number of CPU cycles, but it is something\nI've noticed before while reading the code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Sun, 16 May 2010 08:21:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner issue on sorting joining of two tables with\n\tlimit" } ]
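A minimal SQL sketch of the workaround Tom Lane describes in the thread above, using the test1/test2 tables and indexes from Alexander's test case. The inner LIMIT of 100 is an arbitrary illustrative value, and Tom's caveat stands: this is only correct if the required 10 rows really do fall within the first 100 t1.value rows, and it stops paying off with many duplicate t1.value values or a very large inner LIMIT.

    -- Inner query: cheap index-ordered prefix of test1.
    -- Outer query: join that small prefix to test2 and sort on both columns.
    SELECT sub.value AS value1, t2.value AS value2
    FROM (
        SELECT id, value
        FROM test1
        ORDER BY value
        LIMIT 100          -- assumption: the answer lies within these rows
    ) sub
    JOIN test2 t2 ON t2.id1 = sub.id
    ORDER BY sub.value, t2.value
    LIMIT 10;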
[ { "msg_contents": "I have a 16G box and tmpfs is configured to use 8G for tmpfs .\n\nIs a lot of memory being wasted that can be used for Postgres ? (I am\nnot seeing any performance issues, but I am not clear how Linux uses\nthe tmpfs and how Postgres would be affected by the reduction in\nmemory)\n\nSriram\n", "msg_date": "Mon, 26 Apr 2010 16:24:14 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "tmpfs and postgres memory" }, { "msg_contents": "On Mon, Apr 26, 2010 at 5:24 PM, Anj Adu <[email protected]> wrote:\n> I have a 16G box and tmpfs is configured to use 8G for tmpfs .\n>\n> Is a lot of memory being wasted that can be used for Postgres ? (I am\n> not seeing any performance issues, but I am not clear how Linux uses\n> the tmpfs and how Postgres would be affected by the reduction in\n> memory)\n\nLike Solaris, tmpfs is from swap and swap is both memory and disk so\nthere is no guarantee when you're using it that it will be the fast\nmemory based file system you're looking for.\n\nWhat you may be wanting is ramfs. Unlike tmpfs, it is 100% memory.\nAnother difference is though you may mount a ramfs file system\nspecifying a size, no real size is enforced. If you have 2GB of\nmemory and attempt to copy 2GB of files to a ramfs mount point the\nsystem will do it until all the space, i.e. memory, is gone.\n\nBoth tmpfs and ramfs come with a price, that is at the flick of a\nswitch, loss of power or other cause that resets or reboots the system\nall data is lost. That reason doesn't necessarily mean you can't use\na memory based file system it just limits it's applications.\n\nPersonally, I'd find a way to tell PostgreSQL about the memory before\ntoying with tmpfs or ramfs but I'm sure our applications are\ndifferent.\n\n-Greg\n", "msg_date": "Mon, 26 Apr 2010 21:59:50 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tmpfs and postgres memory" }, { "msg_contents": "On Mon, Apr 26, 2010 at 7:24 PM, Anj Adu <[email protected]> wrote:\n> I have a 16G box and tmpfs is configured to use 8G for tmpfs .\n>\n> Is a lot of memory being wasted that can be used for Postgres ? (I am\n> not seeing any performance issues, but I am not clear how Linux uses\n> the tmpfs and how Postgres would be affected by the reduction in\n> memory)\n\nWhat does the output of \"free -m\" look like when the database is as\nheavily used as it typically gets?\n\n...Robert\n", "msg_date": "Tue, 27 Apr 2010 20:56:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tmpfs and postgres memory" } ]
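Following Greg's suggestion to make sure PostgreSQL itself knows about the available memory before reworking tmpfs, a quick way to see what the server is currently allowed to use is to query pg_settings. This is a sketch only; the parameters listed are just the usual memory-related ones, not anything taken from the poster's configuration.

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size',
                   'work_mem', 'maintenance_work_mem');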
[ { "msg_contents": "Hello,\n\nI'm about to embark on a partitioning project to improve read performance on some of our tables:\n\ndb=# select relname,n_live_tup,pg_size_pretty(pg_relation_size(relid)) from pg_stat_all_tables where schemaname = 'public' order by n_live_tup desc limit 10;\n relname | n_live_tup | pg_size_pretty \n-------------------------------------+------------+----------------\n objects | 125255895 | 11 GB\n papers | 124213085 | 14 GB\n stats | 124202261 | 9106 MB\n exclusions | 53090902 | 3050 MB\n marks | 42467477 | 4829 MB\n student_class | 31491181 | 1814 MB\n users | 19906017 | 3722 MB\n view_stats | 12031074 | 599 MB\n highlights | 10884380 | 629 MB\n\nProblem is, I have foreign keys that link almost all of our tables together (as a business requirement/IT policy). However, I know (er, I have a gut feeling) that many people out there have successfully deployed table partitioning, so I'm hoping to solicit some advice with respect to this. I've looked at documentation, tried creating a prototype, etc...looks like foreign keys have to go. But do they? What have other people out there done to get their tables partitioned?\n\nAny input would be much appreciated.\n\nThanks!\n--Richard", "msg_date": "Wed, 5 May 2010 13:25:46 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "partioning tips?" }, { "msg_contents": "\"Because it's policy\" is rarely a good design decision :-) Lose the FK\nconstraints, and make up for them with integrity checking queries.\n\nI just did a major refactor and shard on our PG schema and the performance\nimprovement was dramatic ... a big plus for PG, if it is e.g. time-series\ndata is to shard by time and make the tables write-once. The same applies to\nany record id that doesn't get re-used. PG doesn't do in-place record\nupdates, so tables with lots of row changes can get order-fragmented.\n\nIf not, also check out the \"cluster table on index\" command.\n\nCheers\nDave\n\nOn Wed, May 5, 2010 at 3:25 PM, Richard Yen <[email protected]> wrote:\n\n> Hello,\n>\n> I'm about to embark on a partitioning project to improve read performance\n> on some of our tables:\n>\n> db=# select relname,n_live_tup,pg_size_pretty(pg_relation_size(relid)) from\n> pg_stat_all_tables where schemaname = 'public' order by n_live_tup desc\n> limit 10;\n> relname | n_live_tup | pg_size_pretty\n> -------------------------------------+------------+----------------\n> objects | 125255895 | 11 GB\n> papers | 124213085 | 14 GB\n> stats | 124202261 | 9106 MB\n> exclusions | 53090902 | 3050 MB\n> marks | 42467477 | 4829 MB\n> student_class | 31491181 | 1814 MB\n> users | 19906017 | 3722 MB\n> view_stats | 12031074 | 599 MB\n> highlights | 10884380 | 629 MB\n>\n> Problem is, I have foreign keys that link almost all of our tables together\n> (as a business requirement/IT policy). However, I know (er, I have a gut\n> feeling) that many people out there have successfully deployed table\n> partitioning, so I'm hoping to solicit some advice with respect to this.\n> I've looked at documentation, tried creating a prototype, etc...looks like\n> foreign keys have to go. But do they? 
What have other people out there\n> done to get their tables partitioned?\n>\n> Any input would be much appreciated.\n>\n> Thanks!\n> --Richard\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 5 May 2010 16:31:40 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partioning tips?" }, { "msg_contents": "On 05/05/2010 01:25 PM, Richard Yen wrote:\n> Problem is, I have foreign keys that link almost all of our tables together (as a business requirement/IT policy). However, I know (er, I have a gut feeling) that many people out there have successfully deployed table partitioning, so I'm hoping to solicit some advice with respect to this. I've looked at documentation, tried creating a prototype, etc...looks like foreign keys have to go. But do they? What have other people out there done to get their tables partitioned?\n\nWell, it's possible to work around the limitation on FKs, but probably \nnot worth it. 
In general, the reasons you want to partition (being able \nto cheaply drop segments, no scans against the whole table, ever) are \nreasons why you wouldn't want an FK to a partition table in any case.\n\nThe specific cases where it works to have FKs anyway are:\n\n1) if you're making FKs between two partitioned tables whose partition \nranges match exactly. In this case, you can just FK the individual \npartitions (there is a TODO, and some draft code from Aster, to make \nthis happen automatically).\n\n2) If the partitioned table has very wide rows, and it's large for that \nreason rather than because of having many rows. In this case, you can \ncreate an FK join table containing only the SKs for creating FKs to, \njust like a many-to-many join table.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 07 May 2010 10:48:30 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partioning tips?" }, { "msg_contents": "Now that it's time to buy a new computer, Dell has changed their RAID models from the Perc6 to Perc H200 and such. Does anyone know what's inside these? I would hope they've stuck with the Megaraid controller...\n\nAlso, I can't find any info on Dell's site about how these devices can be configured. I was thinking of ten disks, as\n\n OS: RAID1\n WAL: RAID1\n Database: RAID10 using 6 disks\n\nBut it's not clear to me if these RAID controllers can handle multible arrays, or if you need a separate controller for each array. We're a small shop and I only get to do this every year or so, and everything changes in between purchases!\n\nThanks,\nCraig\n", "msg_date": "Fri, 07 May 2010 14:11:02 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Dell Perc HX00 RAID controllers: What's inside?" }, { "msg_contents": "Craig James wrote:\n> Now that it's time to buy a new computer, Dell has changed their RAID \n> models from the Perc6 to Perc H200 and such. Does anyone know what's \n> inside these? I would hope they've stuck with the Megaraid controller...\n\nThe H700 and H800 are both based on the LSI 2180 chipset: \nhttp://support.dell.com/support/edocs/storage/Storlink/H700H800/en/UG/HTML/chapterb.htm\n\nI'm not sure what's in the H200, but since it does not have a write \ncache you don't want one of those anyway.\n\nNote that early versions of these cards shipped such that you could not \nuse non-Dell drives with them. Customer feedback was so overwhelmingly \nnegative that last month they announced that the next firmware update \nwill remove that restriction: \nhttp://en.community.dell.com/support-forums/servers/f/906/p/19324790/19689719.aspx#19689719\n\nIf I were you, I'd tell Dell that you refuse to make your purchase until \nthat firmware release is actually available, such that your system ships \nwithout that restriction. That's the right thing to do for the \nprotection of your company, and it sends the right message to their \nsales team too: this sort of nonsense only reduces their sales.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 09 May 2010 17:20:28 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc HX00 RAID controllers: What's inside?" 
}, { "msg_contents": "On Sun, May 9, 2010 at 3:20 PM, Greg Smith <[email protected]> wrote:\n> Craig James wrote:\n>>\n>> Now that it's time to buy a new computer, Dell has changed their RAID\n>> models from the Perc6 to Perc H200 and such.  Does anyone know what's inside\n>> these?  I would hope they've stuck with the Megaraid controller...\n>\n> The H700 and H800 are both based on the LSI 2180 chipset:\n>  http://support.dell.com/support/edocs/storage/Storlink/H700H800/en/UG/HTML/chapterb.htm\n>\n> I'm not sure what's in the H200, but since it does not have a write cache\n> you don't want one of those anyway.\n>\n> Note that early versions of these cards shipped such that you could not use\n> non-Dell drives with them.  Customer feedback was so overwhelmingly negative\n> that last month they announced that the next firmware update will remove\n> that restriction:\n>  http://en.community.dell.com/support-forums/servers/f/906/p/19324790/19689719.aspx#19689719\n>\n> If I were you, I'd tell Dell that you refuse to make your purchase until\n> that firmware release is actually available, such that your system ships\n> without that restriction.  That's the right thing to do for the protection\n> of your company, and it sends the right message to their sales team too:\n>  this sort of nonsense only reduces their sales.\n\nWell, it's the attitude that really matters, and Dell has shown how\nlittle they think of the people who buy their machines with this move.\n I gave up on them when they screwed me royally over 8 quad core cpus\nthat they couldn't even tell me which of my 1950's could take them,\nand they have all the config codes for them. At least if I buy\nsomething with generic mobos in it I can go look it up myself.\n\nAlso, going back to the PE16xx series and the adaptec based Perc3 (DI?\n not sure which one was) had lockup problems in windows AND linux.\nDell never would take responsibility and ship us different RAID\ncontrollers for some 300 machines we bought. We wound up buying a\nhandful of LSI based Perc 3 (DC? still not sure of the name) and just\npulling the RAID controller on all the rest to get reliable machines.\nNever. Again.\n", "msg_date": "Sun, 9 May 2010 20:39:24 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc HX00 RAID controllers: What's inside?" } ]
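Returning to the partitioning question at the top of this thread, here is a minimal sketch of Josh's first workaround: declaring the foreign keys between individual partitions when two inheritance-partitioned tables share exactly the same partition ranges. All table and column names below are hypothetical, not taken from the original schema.

    -- Hypothetical parents; no FK is declared between the parents themselves.
    CREATE TABLE papers_base (id bigint NOT NULL, created_at date NOT NULL);
    CREATE TABLE stats_base  (id bigint NOT NULL, paper_id bigint NOT NULL,
                              created_at date NOT NULL);

    -- Matching monthly partitions; the FK points only at the matching partition.
    CREATE TABLE papers_2010_05 (
        CHECK (created_at >= DATE '2010-05-01' AND created_at < DATE '2010-06-01')
    ) INHERITS (papers_base);
    ALTER TABLE papers_2010_05 ADD PRIMARY KEY (id);

    CREATE TABLE stats_2010_05 (
        CHECK (created_at >= DATE '2010-05-01' AND created_at < DATE '2010-06-01')
    ) INHERITS (stats_base);
    ALTER TABLE stats_2010_05
        ADD FOREIGN KEY (paper_id) REFERENCES papers_2010_05 (id);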
[ { "msg_contents": "Hi,\n\nI am trying to have synchronous master-master replication in PostgreSQL8.4 using PgPool II. I am not able to configure PgPool on the system, it gives me an error, libpq is not installed or libpq is old.\n\nI have tried the command , ./configure -with-pgsql = PostgreSQL dir -with-pgsql-libdir = PostgreSQL dir/lib/\n\nBut still could not resolve the issue.\n\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThanks & Regards,\nNeha Mehta\n\n\n\n________________________________\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n\n______________________________________________________________________", "msg_date": "Thu, 6 May 2010 09:47:48 +0530", "msg_from": "Neha Mehta <[email protected]>", "msg_from_op": true, "msg_subject": "PgPool II configuration with PostgreSQL 8.4" }, { "msg_contents": "El 06/05/2010 6:17, Neha Mehta escribi�:\n>\n> Hi,\n>\n> I am trying to have synchronous master-master replication in \n> PostgreSQL8.4 using PgPool II. I am not able to configure PgPool on \n> the system, it gives me an error, libpq is not installed or libpq is old.\n>\n> I have tried the command , ./configure --with-pgsql = PostgreSQL dir \n> --with-pgsql-libdir = PostgreSQL dir/lib/\n>\n> But still could not resolve the issue.\n>\n> /~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~/\n>\n> /Thanks & Regards,/\n>\n> /Neha Mehta/\n>\n>\n> ------------------------------------------------------------------------\n> This Email may contain confidential or privileged information for the \n> intended recipient (s) If you are not the intended recipient, please \n> do not use or disseminate the information, notify the sender and \n> delete it from your system.\n>\n> ______________________________________________________________________\nPgPool-II is a master-slave system, if you want to use a Master-Master \nSystem yo can take a look to Bucardo(http://www.bucardo.org)\nWhich is the error?\nHave you installed all dependencies of PgPool? libpq5 for example.\nWhich is you operating system?\n\nWe need all this information to help to find a certain solution.\n\nRegards", "msg_date": "Thu, 06 May 2010 08:29:51 +0200", "msg_from": "Marcos Ortiz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool II configuration with PostgreSQL 8.4" }, { "msg_contents": "> I am trying to have synchronous master-master replication in PostgreSQL8.4 using PgPool II. I am not able to configure PgPool on the system, it gives me an error, libpq is not installed or libpq is old.\n> \n> I have tried the command , ./configure -with-pgsql = PostgreSQL dir -with-pgsql-libdir = PostgreSQL dir/lib/\n> \n> But still could not resolve the issue.\n\nWhat are the exact error messages? What kind of platform are you\nusing? What pgpool-II version?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n", "msg_date": "Thu, 06 May 2010 16:09:25 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool II configuration with PostgreSQL 8.4" }, { "msg_contents": "On Wed, May 5, 2010 at 10:17 PM, Neha Mehta <[email protected]>wrote:\n\n> I am trying to have synchronous master-master replication in PostgreSQL8.4\n> using PgPool II. 
I am not able to configure PgPool on the system, it gives me an error, libpq is not installed or libpq is old.\n", "msg_date": "Thu, 6 May 2010 16:28:10 -0600", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool II configuration with PostgreSQL 8.4" } ]
[ { "msg_contents": "El 07/05/2010 15:37, Mark Stosberg escribi�:\n> Hello,\n>\n> We've been a satified user of PostgreSQL for several years, and use it\n> to power a national pet adoption website: http://www.adoptapet.com/\n>\n> Recently we've had a regularly-timed middle-of-the-night problem where\n> database handles are exhausted for a very brief period.\n>\n> In tracking it down, I have found that the event seems to correspond to\n> a time when a cron script is deleting from a large logging table, but\n> I'm not certain if this is the cause or a correlation.\n>\n> We are deleting about 5 million rows from a time-based logging table\n> that is replicated by Slony. We are currently using a single delete\n> statement, which takes about 15 minutes to run. There is no RI on the\n> table, but the use of Slony means that a trigger call and action is made\n> for every row deleted, which causes a corresponding insertion in another\n> table so the deletion can be replicated to the slave.\n>\n> My questions:\n>\n> - Could this kind of activity lead to an upward spiral in database\n> handle usage?\n>\n> - Would it be advisable to use several small DELETE statements instead,\n> to delete rows in batches of 1,000. We could use the recipe for this\n> that was posted earlier to this list:\n>\n> delete from table where pk in\n> (select pk from table where delete_condition limit X);\n>\n> Partitions seems attractive here, but aren't easy to use Slony. Perhaps\n> once we migrate to PostgreSQL 9.0 and the hot standby feature we can\n> consider that.\n>\n> Thanks for your help!\n>\n> Mark\n>\n> . . . . . . . . . . . . . . . . . . . . . . . . . . .\n> Mark Stosberg Principal Developer\n> [email protected] Summersault, LLC\n> 765-939-9301 ext 202 database driven websites\n> . . . . . http://www.summersault.com/ . . . . . . . .\n>\n>\n>\n> \nYou can use TRUNCATE instead DELETE. TRUNCATE is more efficient and \nfaster that DELETE.\nNow, we need more information about your system to give you a certain \nsolution:\nAre you using a RAID controller for you data?\nDo you have separated the xlog directory from the data directory?\nWhich is your Operating System?\nWhich is you architecture?\n\nRegards\n", "msg_date": "Fri, 07 May 2010 09:44:09 +0200", "msg_from": "Marcos Ortiz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: debugging handle exhaustion and 15 min/ 5mil row delete" }, { "msg_contents": "El 07/05/2010 16:10, Mark Stosberg escribi�:\n> \n>> You can use TRUNCATE instead DELETE. TRUNCATE is more efficient and\n>> faster that DELETE.\n>> \n> Thanks for the suggestion. However, TRUNCATE is not compatible with\n> Slony, and we also have some rows which remain in table.\n>\n> \n>> Now, we need more information about your system to give you a certain\n>> solution:\n>> Are you using a RAID controller for you data?\n>> \n> Yes.\n>\n> \n>> Do you have separated the xlog directory from the data directory?\n>> \n> No.\n>\n> \n>> Which is your Operating System?\n>> \n> FreeBSD.\n>\n> \n>> Which is you architecture?\n>> \n> i386.\n>\n> Thanks for the feedback. 
I'm going to try batching the deletes for now,\n> which is approach was worked well for some of our other long-running\n> deletes.\n>\n> Mark\n>\n> \nHave you valorated to use a 64 bits version of FreeBSD for that?\nThe 64 bits OS can help you very much on large databases because yo can \nuse actually all available RAM that you have on the server.\n\nMany experts on this list recommende to separate the xlog directory on a \nRAID 1 configuration and the data directory on RAID 10 to obtain a \nbetter performance.\nThe filesystems are very diverse, but I �ve seen that ZFS is very useful \non these cases.\n\nWhich version of Slony-I are you using?\n\nRegards\n", "msg_date": "Fri, 07 May 2010 15:22:58 +0200", "msg_from": "Marcos Ortiz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: debugging handle exhaustion and 15 min/ 5mil row delete" }, { "msg_contents": "\nHello,\n\nWe've been a satified user of PostgreSQL for several years, and use it\nto power a national pet adoption website: http://www.adoptapet.com/\n\nRecently we've had a regularly-timed middle-of-the-night problem where\ndatabase handles are exhausted for a very brief period.\n\nIn tracking it down, I have found that the event seems to correspond to\na time when a cron script is deleting from a large logging table, but\nI'm not certain if this is the cause or a correlation.\n\nWe are deleting about 5 million rows from a time-based logging table\nthat is replicated by Slony. We are currently using a single delete\nstatement, which takes about 15 minutes to run. There is no RI on the\ntable, but the use of Slony means that a trigger call and action is made\nfor every row deleted, which causes a corresponding insertion in another\ntable so the deletion can be replicated to the slave.\n\nMy questions:\n\n- Could this kind of activity lead to an upward spiral in database\n handle usage?\n\n- Would it be advisable to use several small DELETE statements instead,\n to delete rows in batches of 1,000. We could use the recipe for this\n that was posted earlier to this list:\n\n delete from table where pk in\n (select pk from table where delete_condition limit X);\n\nPartitions seems attractive here, but aren't easy to use Slony. Perhaps\nonce we migrate to PostgreSQL 9.0 and the hot standby feature we can\nconsider that.\n\nThanks for your help!\n\n Mark\n\n . . . . . . . . . . . . . . . . . . . . . . . . . . . \n Mark Stosberg Principal Developer \n [email protected] Summersault, LLC \n 765-939-9301 ext 202 database driven websites\n . . . . . http://www.summersault.com/ . . . . . . . .\n\n\n", "msg_date": "Fri, 7 May 2010 09:37:42 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": false, "msg_subject": "debugging handle exhaustion and 15 min/ 5mil row delete" }, { "msg_contents": "On Fri, May 07, 2010 at 09:37:42AM -0400, Mark Stosberg wrote:\n> \n> Hello,\n> \n> We've been a satified user of PostgreSQL for several years, and use it\n> to power a national pet adoption website: http://www.adoptapet.com/\n> \n> Recently we've had a regularly-timed middle-of-the-night problem where\n> database handles are exhausted for a very brief period.\n> \n> In tracking it down, I have found that the event seems to correspond to\n> a time when a cron script is deleting from a large logging table, but\n> I'm not certain if this is the cause or a correlation.\n> \n> We are deleting about 5 million rows from a time-based logging table\n> that is replicated by Slony. 
We are currently using a single delete\n> statement, which takes about 15 minutes to run. There is no RI on the\n> table, but the use of Slony means that a trigger call and action is made\n> for every row deleted, which causes a corresponding insertion in another\n> table so the deletion can be replicated to the slave.\n> \n> My questions:\n> \n> - Could this kind of activity lead to an upward spiral in database\n> handle usage?\nYes.\n> \n> - Would it be advisable to use several small DELETE statements instead,\n> to delete rows in batches of 1,000. We could use the recipe for this\n> that was posted earlier to this list:\nYes, that is the method we use in several cases to avoid this behavior.\nDeletion is a more intensive process in PostgreSQL, so batching it will\nkeep from dragging down other queries which results in your out-of-handles\nerror.\n\nRegards,\nKen\n", "msg_date": "Fri, 7 May 2010 08:42:15 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: debugging handle exhaustion and 15 min/ 5mil row\n\tdelete" }, { "msg_contents": "\n> You can use TRUNCATE instead DELETE. TRUNCATE is more efficient and \n> faster that DELETE.\n\nThanks for the suggestion. However, TRUNCATE is not compatible with\nSlony, and we also have some rows which remain in table. \n\n> Now, we need more information about your system to give you a certain \n> solution:\n> Are you using a RAID controller for you data? \n\nYes.\n\n> Do you have separated the xlog directory from the data directory?\n\nNo.\n\n> Which is your Operating System?\n\nFreeBSD.\n\n> Which is you architecture?\n\ni386.\n\nThanks for the feedback. I'm going to try batching the deletes for now,\nwhich is approach was worked well for some of our other long-running\ndeletes.\n\n Mark\n", "msg_date": "Fri, 7 May 2010 10:10:31 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: debugging handle exhaustion and 15 min/ 5mil row delete" } ]
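A sketch of the batching recipe discussed in the thread above. The table name, column names, and retention interval are made up for illustration; the LIMIT matches the batch size of 1,000 mentioned by Mark. The cron job would simply re-run the statement until it reports zero rows deleted, which keeps each transaction (and the Slony trigger work it generates) small.

    DELETE FROM activity_log
    WHERE id IN (
        SELECT id
        FROM activity_log
        WHERE logged_at < now() - interval '30 days'   -- hypothetical delete_condition
        LIMIT 1000
    );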
[ { "msg_contents": "Jignesh, All:\n\nMost of our Solaris users have been, I think, following Jignesh's advice\nfrom his benchmark tests to set ZFS page size to 8K for the data zpool.\n However, I've discovered that this is sometimes a serious problem for\nsome hardware.\n\nFor example, having the recordsize set to 8K on a Sun 4170 with 8 drives\nrecently gave me these appalling Bonnie++ results:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 4 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\ndb111 24G 260044 33 62110 17 89914 15\n1167 25\nLatency 6549ms 4882ms 3395ms\n107ms\n\nI know that's hard to read. What it's saying is:\n\nSeq Writes: 260mb/s combined\nSeq Reads: 89mb/s combined\nRead Latency: 3.3s\n\nBest guess is that this is a result of overloading the array/drives with\ncommands for all those small blocks; certainly the behavior observed\n(stuttering I/O, latency) is in line with that issue.\n\nAnyway, since this is a DW-like workload, we just bumped the recordsize\nup to 128K and the performance issues went away ... reads up over 300mb/s.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 07 May 2010 17:09:45 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "8K recordsize bad on ZFS?" }, { "msg_contents": "On Fri, May 7, 2010 at 8:09 PM, Josh Berkus <[email protected]> wrote:\n> Jignesh, All:\n>\n> Most of our Solaris users have been, I think, following Jignesh's advice\n> from his benchmark tests to set ZFS page size to 8K for the data zpool.\n>  However, I've discovered that this is sometimes a serious problem for\n> some hardware.\n>\n> For example, having the recordsize set to 8K on a Sun 4170 with 8 drives\n> recently gave me these appalling Bonnie++ results:\n>\n> Version  1.96       ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> db111           24G           260044  33 62110  17           89914  15\n> 1167  25\n> Latency                        6549ms    4882ms              3395ms\n> 107ms\n>\n> I know that's hard to read.  What it's saying is:\n>\n> Seq Writes: 260mb/s combined\n> Seq Reads: 89mb/s combined\n> Read Latency: 3.3s\n>\n> Best guess is that this is a result of overloading the array/drives with\n> commands for all those small blocks; certainly the behavior observed\n> (stuttering I/O, latency) is in line with that issue.\n>\n> Anyway, since this is a DW-like workload, we just bumped the recordsize\n> up to 128K and the performance issues went away ... reads up over 300mb/s.\n>\n> --\n>                                  -- Josh Berkus\n>                                     PostgreSQL Experts Inc.\n>                                     http://www.pgexperts.com\n>\n\nHi Josh,\n\nThe 8K recommendation is for OLTP Applications.. So if you seen\nsomewhere to use it for DSS/DW workload then I need to change it. DW\nWorkloads require throughput and if they use 8K then they are limited\nby 8K x max IOPS which with 8 disk is about 120 (typical) x 8 SAS\ndrives which is roughly about 8MB/sec.. (Prefetching with read drives\nand other optimizations can help it to push to about 24-30MB/sec with\n8K on 12 disk arrays).. 
So yes that advice is typically bad for DSS..\nAnd I believe I generally recommend them to use 128KB for DSS.So if\nyou have seen the 8K for DSS let me know and hopefully if I still have\naccess to it I can change it. However for OLTP you are generally want\nmore IOPS with low latency which is what 8K provides (The smallest\nblocksize in ZFS).\n\nHope this clarifies.\n\n-Jignesh\n", "msg_date": "Sat, 8 May 2010 17:39:02 -0400", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8K recordsize bad on ZFS?" }, { "msg_contents": "Josh,\n\nit'll be great if you explain how did you change the records size to\n128K? - as this size is assigned on the file creation and cannot be\nchanged later - I suppose that you made a backup of your data and then\nprocess a full restore.. is it so?\n\nRgds,\n-Dimitri\n\n\nOn 5/8/10, Josh Berkus <[email protected]> wrote:\n> Jignesh, All:\n>\n> Most of our Solaris users have been, I think, following Jignesh's advice\n> from his benchmark tests to set ZFS page size to 8K for the data zpool.\n> However, I've discovered that this is sometimes a serious problem for\n> some hardware.\n>\n> For example, having the recordsize set to 8K on a Sun 4170 with 8 drives\n> recently gave me these appalling Bonnie++ results:\n>\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 4 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> db111 24G 260044 33 62110 17 89914 15\n> 1167 25\n> Latency 6549ms 4882ms 3395ms\n> 107ms\n>\n> I know that's hard to read. What it's saying is:\n>\n> Seq Writes: 260mb/s combined\n> Seq Reads: 89mb/s combined\n> Read Latency: 3.3s\n>\n> Best guess is that this is a result of overloading the array/drives with\n> commands for all those small blocks; certainly the behavior observed\n> (stuttering I/O, latency) is in line with that issue.\n>\n> Anyway, since this is a DW-like workload, we just bumped the recordsize\n> up to 128K and the performance issues went away ... reads up over 300mb/s.\n>\n> --\n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 9 May 2010 10:45:18 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8K recordsize bad on ZFS?" }, { "msg_contents": "On 5/9/10 1:45 AM, Dimitri wrote:\n> Josh,\n> \n> it'll be great if you explain how did you change the records size to\n> 128K? - as this size is assigned on the file creation and cannot be\n> changed later - I suppose that you made a backup of your data and then\n> process a full restore.. is it so?\n\nYou can change the recordsize of the zpool dynamically, then simply copy\nthe data directory (with PostgreSQL shut down) to a new directory on\nthat zpool. This assumes that you have enough space on the zpool, of\ncourse.\n\nWe didn't test how it would work to let the files in the Postgres\ninstance get gradually replaced by \"natural\" updating.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 10 May 2010 11:39:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8K recordsize bad on ZFS?" 
}, { "msg_contents": "On 05/10/10 20:39, Josh Berkus wrote:\n> On 5/9/10 1:45 AM, Dimitri wrote:\n>> Josh,\n>>\n>> it'll be great if you explain how did you change the records size to\n>> 128K? - as this size is assigned on the file creation and cannot be\n>> changed later - I suppose that you made a backup of your data and then\n>> process a full restore.. is it so?\n>\n> You can change the recordsize of the zpool dynamically, then simply copy\n> the data directory (with PostgreSQL shut down) to a new directory on\n> that zpool. This assumes that you have enough space on the zpool, of\n> course.\n\nOther things could have influenced your result - 260 MB/s vs 300 MB/s is \nclose enough to be influenced by data position on (some of) the drives. \n(I'm not saying anything about the original question.)\n\n", "msg_date": "Mon, 10 May 2010 21:13:54 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8K recordsize bad on ZFS?" }, { "msg_contents": "As I said, the record size is applied on the file creation :-)\nso by copying your data from one directory to another one you've made\nthe new record size applied on the newly created files :-) (equal to\nbackup restore if there was not enough space)..\n\nDid you try to redo the same but still keeping record size equal 8K ? ;-)\n\nI think the problem you've observed is simply related to the\ncopy-on-write nature of ZFS - if you bring any modification to the\ndata your sequential order of pages was broken with a time, and\nfinally the sequential read was transformed to the random access.. But\nonce you've re-copied your files again - the right order was applied\nagain.\n\nBTW, 8K is recommended for OLTP workloads, but for DW you may stay\nwith 128K without problem.\n\nRgds,\n-Dimitri\n\n\nOn 5/10/10, Josh Berkus <[email protected]> wrote:\n> On 5/9/10 1:45 AM, Dimitri wrote:\n>> Josh,\n>>\n>> it'll be great if you explain how did you change the records size to\n>> 128K? - as this size is assigned on the file creation and cannot be\n>> changed later - I suppose that you made a backup of your data and then\n>> process a full restore.. is it so?\n>\n> You can change the recordsize of the zpool dynamically, then simply copy\n> the data directory (with PostgreSQL shut down) to a new directory on\n> that zpool. This assumes that you have enough space on the zpool, of\n> course.\n>\n> We didn't test how it would work to let the files in the Postgres\n> instance get gradually replaced by \"natural\" updating.\n>\n> --\n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 10 May 2010 21:26:21 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8K recordsize bad on ZFS?" }, { "msg_contents": "Ivan,\n\n> Other things could have influenced your result - 260 MB/s vs 300 MB/s is\n> close enough to be influenced by data position on (some of) the drives.\n> (I'm not saying anything about the original question.)\n\nYou misread my post. It's *87mb/s* vs. 300mb/s. I kinda doubt that's\nposition on the drive.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 10 May 2010 12:30:27 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8K recordsize bad on ZFS?" 
}, { "msg_contents": "On Mon, May 10, 2010 at 8:30 PM, Josh Berkus <[email protected]> wrote:\n> Ivan,\n>\n>> Other things could have influenced your result - 260 MB/s vs 300 MB/s is\n>> close enough to be influenced by data position on (some of) the drives.\n>> (I'm not saying anything about the original question.)\n>\n> You misread my post.  It's *87mb/s* vs. 300mb/s.  I kinda doubt that's\n> position on the drive.\n\nThat still is consistent with it being caused by the files being\ndiscontiguous. Copying them moved all the blocks to be contiguous and\nsequential on disk and might have had the same effect even if you had\nleft the settings at 8kB blocks. You described it as \"overloading the\narray/drives with commands\" which is probably accurate but sounds less\nexotic if you say \"the files were fragmented causing lots of seeks so\nour drives we saturated the drives' iops capacity\". How many iops were\nyou doing before and after anyways?\n\nThat said that doesn't change very much. The point remains that with\n8kB blocks ZFS is susceptible to files becoming discontinuous and\nsequential i/o performing poorly whereas with 128kB blocks hopefully\nthat would happen less. Of course with 128kB blocks updates become a\nwhole lot more expensive too.\n\n\n-- \ngreg\n", "msg_date": "Mon, 10 May 2010 21:01:03 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8K recordsize bad on ZFS?" }, { "msg_contents": "\n> That still is consistent with it being caused by the files being\n> discontiguous. Copying them moved all the blocks to be contiguous and\n> sequential on disk and might have had the same effect even if you had\n> left the settings at 8kB blocks. You described it as \"overloading the\n> array/drives with commands\" which is probably accurate but sounds less\n> exotic if you say \"the files were fragmented causing lots of seeks so\n> our drives we saturated the drives' iops capacity\". How many iops were\n> you doing before and after anyways?\n\nDon't know. This was a client system and once we got the target\nnumbers, they stopped wanting me to run tests on in. :-(\n\nNote that this was a brand-new system, so there wasn't much time for\nfragmentation to occur.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 10 May 2010 14:16:46 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8K recordsize bad on ZFS?" }, { "msg_contents": "\n> Sure, but bulk load + reandom selects is going to *guarentee*\n> fragmentatioon on a COW system (like ZFS, BTRFS, etc) as the selects\n> start to write out all the hint-bit-dirtied blocks in random orders...\n> \n> i.e. it doesn't take long to make an originally nicely continuous block\n> random....\n\nI'm testing with DD and Bonnie++, though, which create their own files.\n\nFor that matter, running an ETL procedure with a newly created database\non both recordsizes was notably (2.5x) faster on the 128K system.\n\nSo I don't think fragmentation is the difference.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 11 May 2010 18:01:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8K recordsize bad on ZFS?" } ]
[ { "msg_contents": "Hi all!\n\nWe moved from MySQL to Postgresql for some of our projects. So far\nwe're very impressed with the performance (especially INSERTs and\nUPDATEs), except for a strange problem with the following bulk delete\nquery:\n\nDELETE FROM table1 WHERE table2_id = ?\n\nI went through these Wiki pages, trying to solve the problem:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions and\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\nbut unfortunately without much luck.\n\nOur application is doing batch jobs. On every batch run, we must\ndelete approx. 1M rows in table1 and recreate these entries. The\ninserts are very fast, but deletes are not. We cannot make updates,\nbecause there's no identifying property in the objects of table1.\n\nThis is what EXPLAIN is telling me:\n\nEXPLAIN ANALYZE DELETE FROM table1 WHERE table2_id = 11242939\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using sr_index on table1 (cost=0.00..8.56 rows=4 width=6)\n(actual time=0.111..0.154 rows=4 loops=1)\n Index Cond: (table2_id = 11242939)\n Total runtime: 0.421 ms\n(3 rows)\n\nThis seems to be very fast (using the index), but running this query\nfrom JDBC takes up to 20ms each. For 1M rows this sum up to several\nhours. When I have a look at pg_top psql uses most of the time for the\ndeletes. CPU usage is 100% (for the core used by postgresql). So it\nseems that postgresql is doing some sequential scanning or constraint\nchecks.\n\nThis is the table structure:\n\nid\tbigint\t (primary key)\ntable2_id\tbigint\t (foreign key constraint to table 2, *indexed*)\ntable3_id\tbigint\t (foreign key constraint to table 3, *indexed*)\nsome non-referenced text and boolean fields\n\nMy server settings (Potgresql 8.4.2):\n\nshared_buffers = 1024MB\neffective_cache_size = 2048MB\nwork_mem = 128MB\nwal_buffers = 64MB\ncheckpoint_segments = 32\ncheckpoint_timeout = 15min\ncheckpoint_completion_target = 0.9\n\nIt would be very nice to give me a hint to solve the problem. It\ndrives me crazy ;-)\n\nIf you need more details please feel free to ask!\n\nThanks in advance for your help!\n\nKind regards\n\nThilo\n", "msg_date": "Sat, 8 May 2010 04:39:58 -0700 (PDT)", "msg_from": "thilo <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Bulk Delete" }, { "msg_contents": "On 05/08/2010 06:39 AM, thilo wrote:\n> Hi all!\n>\n> We moved from MySQL to Postgresql for some of our projects. So far\n> we're very impressed with the performance (especially INSERTs and\n> UPDATEs), except for a strange problem with the following bulk delete\n> query:\n>\n> DELETE FROM table1 WHERE table2_id = ?\n>\n> I went through these Wiki pages, trying to solve the problem:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions and\n> http://wiki.postgresql.org/wiki/Performance_Optimization\n>\n> but unfortunately without much luck.\n>\n> Our application is doing batch jobs. On every batch run, we must\n> delete approx. 1M rows in table1 and recreate these entries. The\n> inserts are very fast, but deletes are not. 
We cannot make updates,\n> because there's no identifying property in the objects of table1.\n>\n> This is what EXPLAIN is telling me:\n>\n> EXPLAIN ANALYZE DELETE FROM table1 WHERE table2_id = 11242939\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Index Scan using sr_index on table1 (cost=0.00..8.56 rows=4 width=6)\n> (actual time=0.111..0.154 rows=4 loops=1)\n> Index Cond: (table2_id = 11242939)\n> Total runtime: 0.421 ms\n> (3 rows)\n>\n> This seems to be very fast (using the index), but running this query\n> from JDBC takes up to 20ms each. For 1M rows this sum up to several\n> hours. When I have a look at pg_top psql uses most of the time for the\n> deletes. CPU usage is 100% (for the core used by postgresql). So it\n> seems that postgresql is doing some sequential scanning or constraint\n> checks.\n>\n> This is the table structure:\n>\n> id\tbigint\t (primary key)\n> table2_id\tbigint\t (foreign key constraint to table 2, *indexed*)\n> table3_id\tbigint\t (foreign key constraint to table 3, *indexed*)\n> some non-referenced text and boolean fields\n>\n> My server settings (Potgresql 8.4.2):\n>\n> shared_buffers = 1024MB\n> effective_cache_size = 2048MB\n> work_mem = 128MB\n> wal_buffers = 64MB\n> checkpoint_segments = 32\n> checkpoint_timeout = 15min\n> checkpoint_completion_target = 0.9\n>\n> It would be very nice to give me a hint to solve the problem. It\n> drives me crazy ;-)\n>\n> If you need more details please feel free to ask!\n>\n> Thanks in advance for your help!\n>\n> Kind regards\n>\n> Thilo\n\n\nI am going to guess the slow part is sending 1M different queries back and forth from client to server. You could try batching them together:\n\nDELETE FROM table1 WHERE table2_id in (11242939, 1,2,3,4,5...., 42);\n\nAlso are you preparing the query?\n\n-Andy\n", "msg_date": "Sat, 08 May 2010 08:17:13 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "Hi Andy!\n\nThanks a lot for your hints!\n\nIndeed the problem was on my side. Some Hibernate tuning solved the\nproblem (and I was able to speedup the query using IN). The real\nproblem was that Hibernate using unprepared queries if you create a\nnative query, but prepares the query if you use JP-QL (very odd\nbehavior). Thanks anyway for your help!\n\nKind regards\n\nThilo\n\n\n> I am going to guess the slow part is sending 1M different queries back and forth from client to server.  You could try batching them together:\n>\n> DELETE FROM table1 WHERE table2_id in (11242939, 1,2,3,4,5...., 42);\n>\n> Also are you preparing the query?\n>\n> -Andy\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sun, 9 May 2010 14:26:28 -0700 (PDT)", "msg_from": "thilo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "Thilo,\n\nJust a few of thoughts off the top of my head:\n\n1. If you know the ids of the rows you want to delete beforhand, insert them in a table, then run the delete based on a join with this table.\n\n2. Better yet, insert the ids into a table using COPY, then use a join to create a new table with the rows you want to keep from the first table. Drop the original source table, truncate the id table, rename the copied table and add indexes and constraints.\n\n3. 
See if you can partition the table somehow so the rows you want to delete are in a single partitioned child table. When its time to delete them just drop the child table.\n\nOf course, if the 1M rows you need to delete is very small compared to the total overall size of the original table the first two techniques might now buy you anything, but its worth a try.\n\nGood luck!\n\nBob Lunney\n\n--- On Sat, 5/8/10, thilo <[email protected]> wrote:\n\n> From: thilo <[email protected]>\n> Subject: [PERFORM] Slow Bulk Delete\n> To: [email protected]\n> Date: Saturday, May 8, 2010, 7:39 AM\n> Hi all!\n> \n> We moved from MySQL to Postgresql for some of our projects.\n> So far\n> we're very impressed with the performance (especially\n> INSERTs and\n> UPDATEs), except for a strange problem with the following\n> bulk delete\n> query:\n> \n> DELETE FROM table1 WHERE table2_id = ?\n> \n> I went through these Wiki pages, trying to solve the\n> problem:\n> \n> http://wiki.postgresql.org/wiki/SlowQueryQuestions and\n> http://wiki.postgresql.org/wiki/Performance_Optimization\n> \n> but unfortunately without much luck.\n> \n> Our application is doing batch jobs. On every batch run, we\n> must\n> delete approx. 1M rows in table1 and recreate these\n> entries. The\n> inserts are very fast, but deletes are not. We cannot make\n> updates,\n> because there's no identifying property in the objects of\n> table1.\n> \n> This is what EXPLAIN is telling me:\n> \n> EXPLAIN ANALYZE DELETE FROM table1 WHERE table2_id =\n> 11242939\n>                \n>                \n>                \n>          QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Index Scan using sr_index on table1  (cost=0.00..8.56\n> rows=4 width=6)\n> (actual time=0.111..0.154 rows=4 loops=1)\n>    Index Cond: (table2_id = 11242939)\n> Total runtime: 0.421 ms\n> (3 rows)\n> \n> This seems to be very fast (using the index), but running\n> this query\n> from JDBC takes up to 20ms each. For 1M rows this sum up to\n> several\n> hours. When I have a look at pg_top psql uses most of the\n> time for the\n> deletes. CPU usage is 100% (for the core used by\n> postgresql). So it\n> seems that postgresql is doing some sequential scanning or\n> constraint\n> checks.\n> \n> This is the table structure:\n> \n> id   \n> bigint     (primary key)\n> table2_id   \n> bigint     (foreign key constraint\n> to table 2, *indexed*)\n> table3_id   \n> bigint     (foreign key constraint\n> to table 3, *indexed*)\n> some non-referenced text and boolean fields\n> \n> My server settings (Potgresql 8.4.2):\n> \n> shared_buffers = 1024MB\n> effective_cache_size = 2048MB\n> work_mem = 128MB\n> wal_buffers = 64MB\n> checkpoint_segments = 32\n> checkpoint_timeout = 15min\n> checkpoint_completion_target = 0.9\n> \n> It would be very nice to give me a hint to solve the\n> problem. 
It\n> drives me crazy ;-)\n> \n> If you need more details please feel free to ask!\n> \n> Thanks in advance for your help!\n> \n> Kind regards\n> \n> Thilo\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Wed, 12 May 2010 20:13:42 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "> DELETE FROM table1 WHERE table2_id = ?\n\nFor bulk deletes, try :\n\nDELETE FROM table1 WHERE table2_id IN (list of a few thousands ids)\n\n- or use a JOIN delete with a virtual VALUES table\n- or fill a temp table with ids and use a JOIN DELETE\n\nThis will save cliet/server roundtrips.\n\nNow, something that can make a DELETE very slow is a non-indexed ON DELETE \nCASCADE foreign key : when you DELETE FROM table1 and it cascades to a \nDELETE on table2, and you forget the index on table2. Also check the time \nspent in triggers. Do you have a GIN index ?\n", "msg_date": "Mon, 17 May 2010 12:10:31 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "On Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n> - or use a JOIN delete with a virtual VALUES table\n> - or fill a temp table with ids and use a JOIN DELETE\n\nWhat is a virtual VALUES table? Can you give me an example of using a\nvirtual table with selects, joins, and also deletes?\n\n-- \nJon\n", "msg_date": "Mon, 17 May 2010 06:54:26 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "2010/5/17 Jon Nelson <[email protected]<jnelson%[email protected]>\n>\n\n> On Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n> > - or use a JOIN delete with a virtual VALUES table\n> > - or fill a temp table with ids and use a JOIN DELETE\n>\n> What is a virtual VALUES table? Can you give me an example of using a\n> virtual table with selects, joins, and also deletes?\n>\n>\n>\ndelete from a using (values (1),(2),(5),(8)) b(x) where a.id=b.x\n\nSee http://www.postgresql.org/docs/8.4/static/sql-values.html\n\n-- \nBest regards,\nVitalii Tymchyshyn\n\n2010/5/17 Jon Nelson <[email protected]>\nOn Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n> - or use a JOIN delete with a virtual VALUES table\n> - or fill a temp table with ids and use a JOIN DELETE\n\nWhat is a virtual VALUES table? Can you give me an example of using a\nvirtual table with selects, joins, and also deletes?\ndelete from a using (values (1),(2),(5),(8)) b(x) where a.id=b.xSee http://www.postgresql.org/docs/8.4/static/sql-values.html\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Mon, 17 May 2010 15:07:17 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "On Mon, May 17, 2010 at 12:54 PM, Jon Nelson <[email protected]> wrote:\n> On Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n>> - or use a JOIN delete with a virtual VALUES table\n>> - or fill a temp table with ids and use a JOIN DELETE\n>\n> What is a virtual VALUES table? 
Can you give me an example of using a\n> virtual table with selects, joins, and also deletes?\n>\n\nI think he refers to the way you pass values in insert, and alike:\nINSERT INTO foo(a,b) VALUES(1,2), (2,3), (3,4);\n", "msg_date": "Mon, 17 May 2010 13:15:52 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "In response to Jon Nelson :\n> On Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n> > - or use a JOIN delete with a virtual VALUES table\n> > - or fill a temp table with ids and use a JOIN DELETE\n> \n> What is a virtual VALUES table? Can you give me an example of using a\n> virtual table with selects, joins, and also deletes?\n\nSomething like this:\n\ntest=# select * from foo;\n c1\n----\n 1\n 2\n 3\n 4\n(4 rows)\n\ntest=*# delete from foo using (values (1),(2) ) as bla where foo.c1=bla.column1;\nDELETE 2\ntest=*# select * from foo;\n c1\n----\n 3\n 4\n(2 rows)\n\n\n\nvalues (1), (2) as bla -> returns a 'virtual table' bla with one\ncolumn column1.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Mon, 17 May 2010 14:28:16 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "2010/5/17 Віталій Тимчишин <[email protected]>:\n>\n>\n> 2010/5/17 Jon Nelson <[email protected]>\n>>\n>> On Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n>> > - or use a JOIN delete with a virtual VALUES table\n>> > - or fill a temp table with ids and use a JOIN DELETE\n>>\n>> What is a virtual VALUES table? Can you give me an example of using a\n>> virtual table with selects, joins, and also deletes?\n>>\n>>\n>\n> delete from a using (values (1),(2),(5),(8)) b(x) where a.id=b.x\n> See http://www.postgresql.org/docs/8.4/static/sql-values.html\n\nThis syntax I'm familiar with. The author of the previous message\n(Pierre C) indicated that there is a concept of a virtual table which\ncould be joined to. I'd like to know what this virtual table thing\nis, specifically in the context of joins.\n\n\n-- \nJon\n", "msg_date": "Mon, 17 May 2010 07:28:38 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "again VALUES(1,2), (2,3), ....; is a 'virtual table', as he calls it.\nIt really is not a table to postgresql. I guess he is just using that\nnaming convention.\n", "msg_date": "Mon, 17 May 2010 13:33:09 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" }, { "msg_contents": "On Mon, May 17, 2010 at 7:28 AM, A. Kretschmer\n<[email protected]> wrote:\n> In response to Jon Nelson :\n>> On Mon, May 17, 2010 at 5:10 AM, Pierre C <[email protected]> wrote:\n>> > - or use a JOIN delete with a virtual VALUES table\n>> > - or fill a temp table with ids and use a JOIN DELETE\n>>\n>> What is a virtual VALUES table? Can you give me an example of using a\n>> virtual table with selects, joins, and also deletes?\n>\n> Something like this:\n...\n\ndelete from foo using (values (1),(2) ) as bla where foo.c1=bla.column1;\n...\n\nAha! Cool. 
That's not quite what I envisioned when you said virtual\ntable, but it surely clarifies things.\nThanks!\n\n-- \nJon\n", "msg_date": "Mon, 17 May 2010 07:35:35 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Bulk Delete" } ]
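The batching advice in this thread reduces to a few statement shapes. The sketch below is only an illustration against the thread's schema: table1, table2_id, and the id 11242939 come from the posted EXPLAIN, while the extra id values and the ids_to_delete temp table (and its column name) are invented for the example.

-- 1) One IN list per batch (a few thousand ids per statement) instead of
--    one DELETE per id:
DELETE FROM table1
 WHERE table2_id IN (11242939, 11242940, 11242941);

-- 2) Join-delete against an inline VALUES list, the "virtual table"
--    discussed above:
DELETE FROM table1
 USING (VALUES (11242939), (11242940), (11242941)) AS b(x)
 WHERE table1.table2_id = b.x;

-- 3) Load the ids into a temp table (COPY is the fastest way to fill it for
--    large lists) and delete with a join:
CREATE TEMP TABLE ids_to_delete (table2_id bigint PRIMARY KEY);
INSERT INTO ids_to_delete VALUES (11242939), (11242940), (11242941);
DELETE FROM table1
 USING ids_to_delete d
 WHERE table1.table2_id = d.table2_id;
DROP TABLE ids_to_delete;

Whichever form is used, the other checks raised in the thread still apply: any ON DELETE CASCADE foreign key referencing the table needs an index on the referencing column, and the client should send prepared or batched statements rather than one unprepared DELETE per row, which turned out to be the actual problem here.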
[ { "msg_contents": "I have a message posted in pgsql-general that outlines what I thought\nwas an indexing problem - it's not, so I'm bringing it here.\n\nI dumped the table from our production system and stuffed it into a test\nmachine, then started refining and playing with the query until I was\nable to get it to the \"de-minimus\" that showed the issue. Note that the\nactual query is frequently MUCH more complicated, but without the LIMIT\nshown below the planner seems to do a decent job of figuring out how to\n\"get it done.\"\n\nThe actual table in question has ~2m rows totaling several gigabytes of\nspace.\n\nHere's an abstract of the schema:\n\n Table \"public.post\"\n Column | Type | \nModifiers \n-----------+--------------------------+--------------------------------------------------------\nsubject | text |\n message | text |\n inserted | timestamp with time zone |\n modified | timestamp with time zone |\n replied | timestamp with time zone |\n ordinal | integer | not null default\nnextval('post_ordinal_seq'::regclass)\n \nIndexes:\n \"post_pkey\" PRIMARY KEY, btree (ordinal)\n \"idx_message\" gin (to_tsvector('english'::text, message))\n \"idx_subject\" gin (to_tsvector('english'::text, subject))\n \nThere's a bunch of other stuff in the table and many more indices, plus\nforeign references, but stripping the table down to JUST THIS shows the\nproblem.\n\nticker=# explain analyze select * from post where to_tsvector('english',\nmessage) @@ to_tsquery('violence') order by modified desc;\n QUERY\nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=31795.16..31819.68 rows=9808 width=436) (actual\ntime=14.222..17.213 rows=3421 loops=1)\n Sort Key: modified\n Sort Method: quicksort Memory: 3358kB\n -> Bitmap Heap Scan on post (cost=1418.95..31144.90 rows=9808\nwidth=436) (actual time=1.878..7.514 rows=3421 loops=1)\n Recheck Cond: (to_tsvector('english'::text, message) @@\nto_tsquery('violence'::text))\n -> Bitmap Index Scan on idx_message (cost=0.00..1416.49\nrows=9808 width=0) (actual time=1.334..1.334 rows=3434 loops=1)\n Index Cond: (to_tsvector('english'::text, message) @@\nto_tsquery('violence'::text))\n Total runtime: 20.547 ms\n(8 rows)\n\nOk, very nice. 20ms. I like that.\n\nNow lets limit the return to 100 items:\n\nticker=# explain analyze select * from post where to_tsvector('english',\nmessage) @@ to_tsquery('violence') order by modified desc limit 100;\n \nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..5348.69 rows=100 width=436) (actual\ntime=198.047..2607.077 rows=100 loops=1)\n -> Index Scan Backward using post_modified on post \n(cost=0.00..524599.31 rows=9808 width=436) (actual\ntime=198.043..2606.864 rows=100 loops=1)\n Filter: (to_tsvector('english'::text, message) @@\nto_tsquery('violence'::text))\n Total runtime: 2607.231 ms\n(4 rows)\n\nBad. Notice that the optimizer decided it was going to do an index scan\nwith an internal filter on it! 
That's BACKWARD; what I want is for the\nplanner to first execute the index scan on the GIN index, then order the\nreturn and limit the returned data set.\n\nBut it gets much worse - let's use something that's NOT in the message\nbase (the table in question has some ~2m rows by the way and consumes\nseveral gigabytes on disk - anything that actually READS the table is\ninstant \"bad news!\")\n\n\nticker=# explain analyze select * from post where to_tsvector('english',\nmessage) @@ to_tsquery('hosehead') order by modified;\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=31795.16..31819.68 rows=9808 width=436) (actual\ntime=0.407..0.407 rows=0 loops=1)\n Sort Key: modified\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on post (cost=1418.95..31144.90 rows=9808\nwidth=436) (actual time=0.402..0.402 rows=0 loops=1)\n Recheck Cond: (to_tsvector('english'::text, message) @@\nto_tsquery('hosehead'::text))\n -> Bitmap Index Scan on idx_message (cost=0.00..1416.49\nrows=9808 width=0) (actual time=0.399..0.399 rows=0 loops=1)\n Index Cond: (to_tsvector('english'::text, message) @@\nto_tsquery('hosehead'::text))\n Total runtime: 0.441 ms\n(8 rows)\n\n\nVery fast, as you'd expect - it returned nothing. Now let's try it with\na \"LIMIT\":\n\nticker=# explain analyze select * from post where to_tsvector('english',\nmessage) @@ to_tsquery('hosehead') order by modified limit 100;\nNOTICE: word is too long to be indexed\nDETAIL: Words longer than 2047 characters are ignored.\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..5348.69 rows=100 width=436) (actual\ntime=254217.850..254217.850 rows=0 loops=1)\n -> Index Scan using post_modified on post (cost=0.00..524599.31\nrows=9808 width=436) (actual time=254217.847..254217.847 rows=0 loops=1)\n Filter: (to_tsvector('english'::text, message) @@\nto_tsquery('hosehead'::text))\n Total runtime: 254217.891 ms\n(4 rows)\n\nticker=#\n\nOh crap. It actually went through and looked at the entire freaking\ntable - one message at a time.\n\nAn attempt to re-write the query into something that FORCES the planner\nto do the right thing fails too. 
For example:\n\nticker=# explain analyze select * from post where ordinal in (select\nordinal from post where to_tsvector('english', message) @@\nto_tsquery('hosehead')) order by modified; \n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=94886.44..94910.96 rows=9808 width=436) (actual\ntime=0.884..0.884 rows=0 loops=1)\n Sort Key: public.post.modified\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=31173.42..94236.19 rows=9808 width=436)\n(actual time=0.879..0.879 rows=0 loops=1)\n -> HashAggregate (cost=31173.42..31271.50 rows=9808 width=4)\n(actual time=0.877..0.877 rows=0 loops=1)\n -> Bitmap Heap Scan on post (cost=1422.95..31148.90\nrows=9808 width=4) (actual time=0.850..0.850 rows=0 loops=1)\n Recheck Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('hosehead'::text))\n -> Bitmap Index Scan on idx_message \n(cost=0.00..1420.50 rows=9808 width=0) (actual time=0.848..0.848 rows=0\nloops=1)\n Index Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('hosehead'::text))\n -> Index Scan using post_ordinal on post (cost=0.00..6.41\nrows=1 width=436) (never executed)\n Index Cond: (public.post.ordinal = public.post.ordinal)\n Total runtime: 0.985 ms\n(12 rows)\n\nFast, if convoluted.\n\n\nticker=# explain analyze select * from post where ordinal in (select\nordinal from post where to_tsvector('english', message) @@\nto_tsquery('hosehead')) order by modified limit 100;\nNOTICE: word is too long to be indexed\nDETAIL: Words longer than 2047 characters are ignored.\n \nQUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..19892.88 rows=100 width=436) (actual\ntime=270563.091..270563.091 rows=0 loops=1)\n -> Nested Loop Semi Join (cost=0.00..1951093.77 rows=9808\nwidth=436) (actual time=270563.088..270563.088 rows=0 loops=1)\n -> Index Scan using post_modified on post \n(cost=0.00..509887.63 rows=1961557 width=436) (actual\ntime=0.015..3427.627 rows=1953674 loops=1)\n -> Index Scan using post_ordinal on post (cost=0.00..0.73\nrows=1 width=4) (actual time=0.134..0.134 rows=0 loops=1953674)\n Index Cond: (public.post.ordinal = public.post.ordinal)\n Filter: (to_tsvector('english'::text,\npublic.post.message) @@ to_tsquery('hosehead'::text))\n Total runtime: 270563.147 ms\n(7 rows)\n\nticker=#\n\nOk, that didn't work either.\n\nInterestingly enough, if I crank up the limit to 500, it starts behaving!\n\nticker=# explain analyze select * from post where ordinal in (select\nordinal from post where to_tsvector('english', message) @@\nto_tsquery('hosehead')) order by modified limit 500;\n \nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=94724.91..94726.16 rows=500 width=436) (actual\ntime=1.475..1.475 rows=0 loops=1)\n -> Sort (cost=94724.91..94749.43 rows=9808 width=436) (actual\ntime=1.473..1.473 rows=0 loops=1)\n Sort Key: public.post.modified\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=31173.43..94236.19 rows=9808 width=436)\n(actual time=1.468..1.468 rows=0 loops=1)\n -> HashAggregate (cost=31173.43..31271.51 rows=9808\nwidth=4) (actual time=1.466..1.466 rows=0 loops=1)\n -> Bitmap Heap Scan on post \n(cost=1422.95..31148.91 rows=9808 width=4) (actual 
time=1.440..1.440\nrows=0 loops=1)\n Recheck Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('hosehead'::text))\n -> Bitmap Index Scan on idx_message \n(cost=0.00..1420.50 rows=9808 width=0) (actual time=1.438..1.438 rows=0\nloops=1)\n Index Cond:\n(to_tsvector('english'::text, message) @@ to_tsquery('hosehead'::text))\n -> Index Scan using post_ordinal on post \n(cost=0.00..6.41 rows=1 width=436) (never executed)\n Index Cond: (public.post.ordinal = public.post.ordinal)\n Total runtime: 1.600 ms\n(13 rows)\n\nWhy is the planner \"taking into consideration\" the LIMIT (I know the\ndocs say it does) and choosing to sequentially scan a table of nearly 2\nmillion rows?! I don't see how that makes sense.... irrespective of the\nquery being LIMITed.\n\nIf it matters setting enable_seqscan OFF does not impact the results.\n\n-- Karl", "msg_date": "Sat, 08 May 2010 15:35:19 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Ugh - bad plan with LIMIT in a complex SELECT, any way to fix this?" }, { "msg_contents": "Overal comment.. Try reading hrough these old threads most of your\nproblem is the same issue:\n\nhttp://article.gmane.org/gmane.comp.db.postgresql.performance/22395/match=gin\nhttp://thread.gmane.org/gmane.comp.db.postgresql.performance/22331/focus=22434\n\n\n> Table \"public.post\"\n> Column | Type |\n> Modifiers\n> -----------+--------------------------+--------------------------------------------------------\n> subject | text |\n> message | text |\n> inserted | timestamp with time zone |\n> modified | timestamp with time zone |\n> replied | timestamp with time zone |\n> ordinal | integer | not null default\n> nextval('post_ordinal_seq'::regclass)\n>\n> Indexes:\n> \"post_pkey\" PRIMARY KEY, btree (ordinal)\n> \"idx_message\" gin (to_tsvector('english'::text, message))\n> \"idx_subject\" gin (to_tsvector('english'::text, subject))\n>\n> There's a bunch of other stuff in the table and many more indices, plus\n> foreign references, but stripping the table down to JUST THIS shows the\n> problem.\n>\n> ticker=# explain analyze select * from post where to_tsvector('english',\n> message) @@ to_tsquery('violence') order by modified desc;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=31795.16..31819.68 rows=9808 width=436) (actual\n> time=14.222..17.213 rows=3421 loops=1)\n> Sort Key: modified\n> Sort Method: quicksort Memory: 3358kB\n> -> Bitmap Heap Scan on post (cost=1418.95..31144.90 rows=9808\n> width=436) (actual time=1.878..7.514 rows=3421 loops=1)\n> Recheck Cond: (to_tsvector('english'::text, message) @@\n> to_tsquery('violence'::text))\n> -> Bitmap Index Scan on idx_message (cost=0.00..1416.49\n> rows=9808 width=0) (actual time=1.334..1.334 rows=3434 loops=1)\n> Index Cond: (to_tsvector('english'::text, message) @@\n> to_tsquery('violence'::text))\n> Total runtime: 20.547 ms\n> (8 rows)\n>\n> Ok, very nice. 20ms. 
I like that.\n>\n> Now lets limit the return to 100 items:\n>\n> ticker=# explain analyze select * from post where to_tsvector('english',\n> message) @@ to_tsquery('violence') order by modified desc limit 100;\n>\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..5348.69 rows=100 width=436) (actual\n> time=198.047..2607.077 rows=100 loops=1)\n> -> Index Scan Backward using post_modified on post\n> (cost=0.00..524599.31 rows=9808 width=436) (actual\n> time=198.043..2606.864 rows=100 loops=1)\n> Filter: (to_tsvector('english'::text, message) @@\n> to_tsquery('violence'::text))\n> Total runtime: 2607.231 ms\n> (4 rows)\n>\n> Bad. Notice that the optimizer decided it was going to do an index scan\n> with an internal filter on it! That's BACKWARD; what I want is for the\n> planner to first execute the index scan on the GIN index, then order the\n> return and limit the returned data set.\n>\n> But it gets much worse - let's use something that's NOT in the message\n> base (the table in question has some ~2m rows by the way and consumes\n> several gigabytes on disk - anything that actually READS the table is\n> instant \"bad news!\")\n> \n\nThe one problem is that the query-planner doesn't have any\nspecific knowlege about the cost of the gin-index search. Thats\nmentioned in one of the above threads.\n\nThe other problem is that the cost of \"to_tsvector\" and \"ts_match_vq\"\nare set way to conservative in the default installation. Bumping those\nup will increase your amount of correct plans, but it doesnt solve all \nof it\nsince the above problem is also interferring. But try upping the cost\nof those two functions significantly.\n\nalter function ts_match_vq(tsvector,tsquery) cost 500\n(upping the cost times 500 for that one). I've I've got it right it is \"more in\nthe correct ballpark\" it more or less translates to \"how much more expensive the function\nis compared to really simple operators\").\n\nAnother thing you can do, that favours the running time of the queries\nusing to_tsvector() is to specifically store the tsvector in the table and\ncreate an index on that. That will at run-time translate into fewer\ncalls (0 to be precise) of to_tsvector and only costing the ts_match_vq\nat run-time.\n\n> Why is the planner \"taking into consideration\" the LIMIT (I know the\n> docs say it does) and choosing to sequentially scan a table of nearly 2\n> million rows?! I don't see how that makes sense.... irrespective of the\n> query being LIMITed.\n>\n> If it matters setting enable_seqscan OFF does not impact the results.\n> \n\nNo, because you end up in index-scans on non-gin indexes in that\nsitutaion.. so turning seqscan off has no effect.\n\n\n-- \nJesper\n\n>\n>\n> \n\n", "msg_date": "Sun, 09 May 2010 08:53:49 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ugh - bad plan with LIMIT in a complex SELECT, any\n\tway to fix this?" }, { "msg_contents": "Karl,\n\nestimation for gin indexes currently is very-very bad, so I don't surprise\nyour result. We'll discuss on pgcon our new ginconstetimate function,\nwhich could improve planner. 
Can you provide us dump of your db (with\nrelevant columns) and test queries, so we could check our new patch.\n\nOleg\nOn Sat, 8 May 2010, Karl Denninger wrote:\n\n> I have a message posted in pgsql-general that outlines what I thought\n> was an indexing problem - it's not, so I'm bringing it here.\n>\n> I dumped the table from our production system and stuffed it into a test\n> machine, then started refining and playing with the query until I was\n> able to get it to the \"de-minimus\" that showed the issue. Note that the\n> actual query is frequently MUCH more complicated, but without the LIMIT\n> shown below the planner seems to do a decent job of figuring out how to\n> \"get it done.\"\n>\n> The actual table in question has ~2m rows totaling several gigabytes of\n> space.\n>\n> Here's an abstract of the schema:\n>\n> Table \"public.post\"\n> Column | Type |\n> Modifiers\n> -----------+--------------------------+--------------------------------------------------------\n> subject | text |\n> message | text |\n> inserted | timestamp with time zone |\n> modified | timestamp with time zone |\n> replied | timestamp with time zone |\n> ordinal | integer | not null default\n> nextval('post_ordinal_seq'::regclass)\n>\n> Indexes:\n> \"post_pkey\" PRIMARY KEY, btree (ordinal)\n> \"idx_message\" gin (to_tsvector('english'::text, message))\n> \"idx_subject\" gin (to_tsvector('english'::text, subject))\n>\n> There's a bunch of other stuff in the table and many more indices, plus\n> foreign references, but stripping the table down to JUST THIS shows the\n> problem.\n>\n> ticker=# explain analyze select * from post where to_tsvector('english',\n> message) @@ to_tsquery('violence') order by modified desc;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=31795.16..31819.68 rows=9808 width=436) (actual\n> time=14.222..17.213 rows=3421 loops=1)\n> Sort Key: modified\n> Sort Method: quicksort Memory: 3358kB\n> -> Bitmap Heap Scan on post (cost=1418.95..31144.90 rows=9808\n> width=436) (actual time=1.878..7.514 rows=3421 loops=1)\n> Recheck Cond: (to_tsvector('english'::text, message) @@\n> to_tsquery('violence'::text))\n> -> Bitmap Index Scan on idx_message (cost=0.00..1416.49\n> rows=9808 width=0) (actual time=1.334..1.334 rows=3434 loops=1)\n> Index Cond: (to_tsvector('english'::text, message) @@\n> to_tsquery('violence'::text))\n> Total runtime: 20.547 ms\n> (8 rows)\n>\n> Ok, very nice. 20ms. I like that.\n>\n> Now lets limit the return to 100 items:\n>\n> ticker=# explain analyze select * from post where to_tsvector('english',\n> message) @@ to_tsquery('violence') order by modified desc limit 100;\n>\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..5348.69 rows=100 width=436) (actual\n> time=198.047..2607.077 rows=100 loops=1)\n> -> Index Scan Backward using post_modified on post\n> (cost=0.00..524599.31 rows=9808 width=436) (actual\n> time=198.043..2606.864 rows=100 loops=1)\n> Filter: (to_tsvector('english'::text, message) @@\n> to_tsquery('violence'::text))\n> Total runtime: 2607.231 ms\n> (4 rows)\n>\n> Bad. Notice that the optimizer decided it was going to do an index scan\n> with an internal filter on it! 
That's BACKWARD; what I want is for the\n> planner to first execute the index scan on the GIN index, then order the\n> return and limit the returned data set.\n>\n> But it gets much worse - let's use something that's NOT in the message\n> base (the table in question has some ~2m rows by the way and consumes\n> several gigabytes on disk - anything that actually READS the table is\n> instant \"bad news!\")\n>\n>\n> ticker=# explain analyze select * from post where to_tsvector('english',\n> message) @@ to_tsquery('hosehead') order by modified;\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=31795.16..31819.68 rows=9808 width=436) (actual\n> time=0.407..0.407 rows=0 loops=1)\n> Sort Key: modified\n> Sort Method: quicksort Memory: 25kB\n> -> Bitmap Heap Scan on post (cost=1418.95..31144.90 rows=9808\n> width=436) (actual time=0.402..0.402 rows=0 loops=1)\n> Recheck Cond: (to_tsvector('english'::text, message) @@\n> to_tsquery('hosehead'::text))\n> -> Bitmap Index Scan on idx_message (cost=0.00..1416.49\n> rows=9808 width=0) (actual time=0.399..0.399 rows=0 loops=1)\n> Index Cond: (to_tsvector('english'::text, message) @@\n> to_tsquery('hosehead'::text))\n> Total runtime: 0.441 ms\n> (8 rows)\n>\n>\n> Very fast, as you'd expect - it returned nothing. Now let's try it with\n> a \"LIMIT\":\n>\n> ticker=# explain analyze select * from post where to_tsvector('english',\n> message) @@ to_tsquery('hosehead') order by modified limit 100;\n> NOTICE: word is too long to be indexed\n> DETAIL: Words longer than 2047 characters are ignored.\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..5348.69 rows=100 width=436) (actual\n> time=254217.850..254217.850 rows=0 loops=1)\n> -> Index Scan using post_modified on post (cost=0.00..524599.31\n> rows=9808 width=436) (actual time=254217.847..254217.847 rows=0 loops=1)\n> Filter: (to_tsvector('english'::text, message) @@\n> to_tsquery('hosehead'::text))\n> Total runtime: 254217.891 ms\n> (4 rows)\n>\n> ticker=#\n>\n> Oh crap. It actually went through and looked at the entire freaking\n> table - one message at a time.\n>\n> An attempt to re-write the query into something that FORCES the planner\n> to do the right thing fails too. 
For example:\n>\n> ticker=# explain analyze select * from post where ordinal in (select\n> ordinal from post where to_tsvector('english', message) @@\n> to_tsquery('hosehead')) order by modified;\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=94886.44..94910.96 rows=9808 width=436) (actual\n> time=0.884..0.884 rows=0 loops=1)\n> Sort Key: public.post.modified\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=31173.42..94236.19 rows=9808 width=436)\n> (actual time=0.879..0.879 rows=0 loops=1)\n> -> HashAggregate (cost=31173.42..31271.50 rows=9808 width=4)\n> (actual time=0.877..0.877 rows=0 loops=1)\n> -> Bitmap Heap Scan on post (cost=1422.95..31148.90\n> rows=9808 width=4) (actual time=0.850..0.850 rows=0 loops=1)\n> Recheck Cond: (to_tsvector('english'::text,\n> message) @@ to_tsquery('hosehead'::text))\n> -> Bitmap Index Scan on idx_message\n> (cost=0.00..1420.50 rows=9808 width=0) (actual time=0.848..0.848 rows=0\n> loops=1)\n> Index Cond: (to_tsvector('english'::text,\n> message) @@ to_tsquery('hosehead'::text))\n> -> Index Scan using post_ordinal on post (cost=0.00..6.41\n> rows=1 width=436) (never executed)\n> Index Cond: (public.post.ordinal = public.post.ordinal)\n> Total runtime: 0.985 ms\n> (12 rows)\n>\n> Fast, if convoluted.\n>\n>\n> ticker=# explain analyze select * from post where ordinal in (select\n> ordinal from post where to_tsvector('english', message) @@\n> to_tsquery('hosehead')) order by modified limit 100;\n> NOTICE: word is too long to be indexed\n> DETAIL: Words longer than 2047 characters are ignored.\n>\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..19892.88 rows=100 width=436) (actual\n> time=270563.091..270563.091 rows=0 loops=1)\n> -> Nested Loop Semi Join (cost=0.00..1951093.77 rows=9808\n> width=436) (actual time=270563.088..270563.088 rows=0 loops=1)\n> -> Index Scan using post_modified on post\n> (cost=0.00..509887.63 rows=1961557 width=436) (actual\n> time=0.015..3427.627 rows=1953674 loops=1)\n> -> Index Scan using post_ordinal on post (cost=0.00..0.73\n> rows=1 width=4) (actual time=0.134..0.134 rows=0 loops=1953674)\n> Index Cond: (public.post.ordinal = public.post.ordinal)\n> Filter: (to_tsvector('english'::text,\n> public.post.message) @@ to_tsquery('hosehead'::text))\n> Total runtime: 270563.147 ms\n> (7 rows)\n>\n> ticker=#\n>\n> Ok, that didn't work either.\n>\n> Interestingly enough, if I crank up the limit to 500, it starts behaving!\n>\n> ticker=# explain analyze select * from post where ordinal in (select\n> ordinal from post where to_tsvector('english', message) @@\n> to_tsquery('hosehead')) order by modified limit 500;\n>\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=94724.91..94726.16 rows=500 width=436) (actual\n> time=1.475..1.475 rows=0 loops=1)\n> -> Sort (cost=94724.91..94749.43 rows=9808 width=436) (actual\n> time=1.473..1.473 rows=0 loops=1)\n> Sort Key: public.post.modified\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=31173.43..94236.19 rows=9808 width=436)\n> (actual time=1.468..1.468 rows=0 loops=1)\n> -> HashAggregate (cost=31173.43..31271.51 rows=9808\n> width=4) (actual time=1.466..1.466 rows=0 
loops=1)\n> -> Bitmap Heap Scan on post\n> (cost=1422.95..31148.91 rows=9808 width=4) (actual time=1.440..1.440\n> rows=0 loops=1)\n> Recheck Cond: (to_tsvector('english'::text,\n> message) @@ to_tsquery('hosehead'::text))\n> -> Bitmap Index Scan on idx_message\n> (cost=0.00..1420.50 rows=9808 width=0) (actual time=1.438..1.438 rows=0\n> loops=1)\n> Index Cond:\n> (to_tsvector('english'::text, message) @@ to_tsquery('hosehead'::text))\n> -> Index Scan using post_ordinal on post\n> (cost=0.00..6.41 rows=1 width=436) (never executed)\n> Index Cond: (public.post.ordinal = public.post.ordinal)\n> Total runtime: 1.600 ms\n> (13 rows)\n>\n> Why is the planner \"taking into consideration\" the LIMIT (I know the\n> docs say it does) and choosing to sequentially scan a table of nearly 2\n> million rows?! I don't see how that makes sense.... irrespective of the\n> query being LIMITed.\n>\n> If it matters setting enable_seqscan OFF does not impact the results.\n>\n> -- Karl\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sun, 9 May 2010 13:35:59 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ugh - bad plan with LIMIT in a complex SELECT, any\n\tway to fix this?" } ]
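The two workarounds suggested in this thread can be sketched against the post table as follows. The COST figure of 500 is the example value from the thread, not a tuned number, and the message_tsv column, index, and trigger names are invented for illustration; as noted above, this does not fix the planner's GIN selectivity estimates, it only stops the row-by-row filter plan from looking artificially cheap.

-- 1) Tell the planner that full-text matching is expensive, so the backward
--    scan on post_modified with a per-row filter is costed more realistically:
ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 500;
ALTER FUNCTION to_tsvector(regconfig, text) COST 500;

-- 2) Store the tsvector once instead of recomputing it per row, and put the
--    GIN index on the stored column:
ALTER TABLE post ADD COLUMN message_tsv tsvector;
UPDATE post SET message_tsv = to_tsvector('english', message);
CREATE INDEX idx_message_tsv ON post USING gin (message_tsv);

-- Keep the stored column current on writes:
CREATE TRIGGER post_message_tsv_update
    BEFORE INSERT OR UPDATE ON post
    FOR EACH ROW
    EXECUTE PROCEDURE tsvector_update_trigger(message_tsv, 'pg_catalog.english', message);

-- Queries then match the stored column directly, so to_tsvector() is not
-- re-evaluated at query time:
SELECT ordinal, modified
  FROM post
 WHERE message_tsv @@ to_tsquery('violence')
 ORDER BY modified DESC
 LIMIT 100;

After the bulk UPDATE the table should be vacuumed and analyzed; whether the LIMIT query then prefers the GIN index still depends on the estimated match count, which is the estimation problem Oleg refers to.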
[ { "msg_contents": "Hello all,\n\nA query ran twice in succession performs VERY poorly the first time as it \niterates through the nested loop. The second time, it rips. Please see SQL, \nSLOW PLAN and FAST PLAN below.\n\nI don't know why these nested loops are taking so long to execute.\n\" -> Nested Loop (cost=0.00..42866.98 rows=77 width=18) (actual \ntime=126.354..26301.027 rows=9613 loops=1)\"\n\" -> Nested Loop (cost=0.00..42150.37 rows=122 width=18) (actual \ntime=117.369..15349.533 rows=13247 loops=1)\"\n\nThe loop members appear to be finished quickly. I suspect that the results \nfor the function aren't really as fast as reported, and are actually taking \nmuch longer to comeplete returning results.\n\" -> Function Scan on zips_in_mile_range (cost=0.00..52.50 \nrows=67 width=40) (actual time=104.196..104.417 rows=155 loops=1)\"\n\" Filter: (zip > ''::text)\"\n\nIs this possible? I can't see what other delay there could be.\n\nThe second time the query runs, the loops are fast:\n\" -> Nested Loop (cost=0.00..42866.98 rows=77 width=18) (actual \ntime=97.073..266.826 rows=9613 loops=1)\"\n\" -> Nested Loop (cost=0.00..42150.37 rows=122 width=18) (actual \ntime=97.058..150.172 rows=13247 loops=1)\"\n\nSince it is fast the second time, I wonder if this is related at all to the \nfunction being IMMUTABLE? (Even though it's IMMUTABLE it reads a very static \ntable)\n\nThis DB is a copy of another DB, on the same server host, same drive but \ndifferent tablespace. The original query has good performance, and is hit \noften by the live web server. With the copy - which performs poorly - the \nquery is hit infrequently.\n\nIs there any evidence for why the nested loop is slow?\n\nCode and plans follow - regards and thanks!\n\nCarlo\n\nSQL:\nselect\n pp.provider_practice_id,\n p.provider_id,\n distance,\n pp.is_principal,\n p.provider_id as sort_order\n from mdx_core.provider as p\n join mdx_core.provider_practice as pp\n on pp.provider_id = p.provider_id\n join (select * from mdx_core.zips_in_mile_range('75203', 15::numeric)\nwhere zip > '') as nearby\n on nearby.zip = substr(pp.default_postal_code, 1, 5)\n where\n pp.default_country_code = 'US'\n and p.provider_status_code = 'A' and p.is_visible = 'Y'\n and pp.is_principal = 'Y'\n and coalesce(pp.record_status, 'A') = 'A'\n order by sort_order, distance\n\nSLOW PLAN:\n\"Sort (cost=42869.40..42869.59 rows=77 width=18) (actual \ntime=26316.495..26322.102 rows=9613 loops=1)\"\n\" Sort Key: p.provider_id, zips_in_mile_range.distance\"\n\" Sort Method: quicksort Memory: 1136kB\"\n\" -> Nested Loop (cost=0.00..42866.98 rows=77 width=18) (actual \ntime=126.354..26301.027 rows=9613 loops=1)\"\n\" -> Nested Loop (cost=0.00..42150.37 rows=122 width=18) (actual \ntime=117.369..15349.533 rows=13247 loops=1)\"\n\" -> Function Scan on zips_in_mile_range (cost=0.00..52.50 \nrows=67 width=40) (actual time=104.196..104.417 rows=155 loops=1)\"\n\" Filter: (zip > ''::text)\"\n\" -> Index Scan using \nprovider_practice_default_base_zip_country_idx on provider_practice pp \n(cost=0.00..628.30 rows=2 width=19) (actual time=1.205..98.231 rows=85 \nloops=155)\"\n\" Index Cond: ((pp.default_country_code = 'US'::bpchar) \nAND (substr((pp.default_postal_code)::text, 1, 5) = zips_in_mile_range.zip) \nAND (pp.is_principal = 'Y'::bpchar))\"\n\" Filter: (COALESCE(pp.record_status, 'A'::bpchar) = \n'A'::bpchar)\"\n\" -> Index Scan using provider_provider_id_provider_status_code_idx \non provider p (cost=0.00..5.86 rows=1 width=4) (actual time=0.823..0.824 \nrows=1 
loops=13247)\"\n\" Index Cond: ((p.provider_id = pp.provider_id) AND \n(p.provider_status_code = 'A'::bpchar))\"\n\" Filter: (p.is_visible = 'Y'::bpchar)\"\n\"Total runtime: 26327.329 ms\"\n\nFAST PLAN:\n\"Sort (cost=42869.40..42869.59 rows=77 width=18) (actual \ntime=278.722..284.326 rows=9613 loops=1)\"\n\" Sort Key: p.provider_id, zips_in_mile_range.distance\"\n\" Sort Method: quicksort Memory: 1136kB\"\n\" -> Nested Loop (cost=0.00..42866.98 rows=77 width=18) (actual \ntime=97.073..266.826 rows=9613 loops=1)\"\n\" -> Nested Loop (cost=0.00..42150.37 rows=122 width=18) (actual \ntime=97.058..150.172 rows=13247 loops=1)\"\n\" -> Function Scan on zips_in_mile_range (cost=0.00..52.50 \nrows=67 width=40) (actual time=97.013..97.161 rows=155 loops=1)\"\n\" Filter: (zip > ''::text)\"\n\" -> Index Scan using \nprovider_practice_default_base_zip_country_idx on provider_practice pp \n(cost=0.00..628.30 rows=2 width=19) (actual time=0.017..0.236 rows=85 \nloops=155)\"\n\" Index Cond: ((pp.default_country_code = 'US'::bpchar) \nAND (substr((pp.default_postal_code)::text, 1, 5) = zips_in_mile_range.zip) \nAND (pp.is_principal = 'Y'::bpchar))\"\n\" Filter: (COALESCE(pp.record_status, 'A'::bpchar) = \n'A'::bpchar)\"\n\" -> Index Scan using provider_provider_id_provider_status_code_idx \non provider p (cost=0.00..5.86 rows=1 width=4) (actual time=0.006..0.007 \nrows=1 loops=13247)\"\n\" Index Cond: ((p.provider_id = pp.provider_id) AND \n(p.provider_status_code = 'A'::bpchar))\"\n\" Filter: (p.is_visible = 'Y'::bpchar)\"\n\"Total runtime: 289.582 ms\"\n\n", "msg_date": "Tue, 11 May 2010 01:32:28 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Function scan/Index scan to nested loop" }, { "msg_contents": "On Mon, May 10, 2010 at 11:32 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Hello all,\n>\n> A query ran twice in succession performs VERY poorly the first time as it\n> iterates through the nested loop. The second time, it rips. Please see SQL,\n> SLOW PLAN and FAST PLAN below.\n\nThis is almost always due to caching. First time the data aren't in\nthe cache, second time they are.\n\n> I don't know why these nested loops are taking so long to execute.\n> \"  ->  Nested Loop  (cost=0.00..42866.98 rows=77 width=18) (actual\n> time=126.354..26301.027 rows=9613 loops=1)\"\n> \"        ->  Nested Loop  (cost=0.00..42150.37 rows=122 width=18) (actual\n> time=117.369..15349.533 rows=13247 loops=1)\"\n\nYour row estimates are WAY off. A nested loop might now be the best choice.\n\nAlso note that some platforms add a lot of time to some parts of an\nexplain analyze due to slow time function response. Compare the run\ntime of the first run with and without explain analyze.\n", "msg_date": "Tue, 11 May 2010 01:07:26 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Function scan/Index scan to nested loop" }, { "msg_contents": "On 11/05/10 13:32, Carlo Stonebanks wrote:\n> Hello all,\n> \n> A query ran twice in succession performs VERY poorly the first time as\n> it iterates through the nested loop. The second time, it rips. Please\n> see SQL, SLOW PLAN and FAST PLAN below.\n\nI haven't looked at the details, but the comment you made about it being\nfast on the live server which hits this query frequently tends to\nsuggest that this is a caching issue.\n\nMost likely, the first time Pg has to read the data from disk. 
The\nsecond time, it's in memory-based disk cache or even in Pg's\nshared_buffers, so it can be accessed vastly quicker.\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n", "msg_date": "Tue, 11 May 2010 15:14:08 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Function scan/Index scan to nested loop" }, { "msg_contents": "Thanks Scott,\n\n>> This is almost always due to caching. First time the data aren't in the \n>> cache, second time they are.\n<<\n\nI had assumed that it was caching, but I don't know from where because of \nthe inexplicable delay. Hardware? O/S (Linux)? DB? From the function, which \nis IMMUTABLE?\n\nI am concerned that there is such a lag between all the index and function \nscans start/complete times and and the nested loops starting. I have \nreformatted the SLOW PLAN results below to make them easier to read. Can you \ntell me if this makes any sense to you?\n\nI can understand that EXPLAIN might inject some waste, but the delay being \nshown here is equivalent to the delay in real query times - I don't think \nEXPLAIN components would inject 15 second waits... would they?\n\n>> Your row estimates are WAY off. A nested loop might now be the best \n>> choice.\n<<\n\nI tried to run this with set enable_nestloop to off and it built this truly \nimpressively complex plan! However, the cache had already spun up. The thing \nthat makes testing so difficult is that once the caches are loaded, you have \nto flail around trying to find query parameters that DON'T hit the cache, \nmaking debugging difficult.\n\nThe row estimates being off is a chronic problem with our DB. I don't think \nthe 3000 row ANALYZE is getting a proper sample set and would love to change \nthe strategy, even if at the expense of speed of execution of ANALYZE. 
I \ndon't know what it is about our setup that makes our PG servers so hard to \ntune, but I think its time to call the cavalry (gotta find serious PG server \ntuning experts in NJ).\n\nCarlo\n\n\nSLOW PLAN\nSort (cost=42869.40..42869.59 rows=77 width=18) (actual \ntime=26316.495..26322.102 rows=9613 loops=1)\n Sort Key: p.provider_id, zips_in_mile_range.distance\n Sort Method: quicksort Memory: 1136kB\n -> Nested Loop\n (cost=0.00..42866.98 rows=77 width=18)\n (actual time=126.354..26301.027 rows=9613 loops=1)\n -> Nested Loop\n (cost=0.00..42150.37 rows=122 width=18)\n (actual time=117.369..15349.533 rows=13247 loops=1)\n -> Function Scan on zips_in_mile_range\n (cost=0.00..52.50 rows=67 width=40)\n (actual time=104.196..104.417 rows=155 loops=1)\n Filter: (zip > ''::text)\n -> Index Scan using \nprovider_practice_default_base_zip_country_idx on provider_practice pp\n (cost=0.00..628.30 rows=2 width=19)\n (actual time=1.205..98.231 rows=85 loops=155)\n Index Cond: ((pp.default_country_code = 'US'::bpchar)\n AND (substr((pp.default_postal_code)::text, 1, 5) = \nzips_in_mile_range.zip)\n AND (pp.is_principal = 'Y'::bpchar))\n Filter: (COALESCE(pp.record_status, 'A'::bpchar) = \n'A'::bpchar)\n -> Index Scan using provider_provider_id_provider_status_code_idx \non provider p\n (cost=0.00..5.86 rows=1 width=4)\n (actual time=0.823..0.824 rows=1 loops=13247)\n Index Cond: ((p.provider_id = pp.provider_id)\n AND (p.provider_status_code = 'A'::bpchar))\n Filter: (p.is_visible = 'Y'::bpchar)\n\n\n", "msg_date": "Tue, 11 May 2010 14:00:43 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Function scan/Index scan to nested loop" }, { "msg_contents": "On Tue, May 11, 2010 at 2:00 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> I am concerned that there is such a lag between all the index and function\n> scans start/complete times and and the nested loops starting. I have\n> reformatted the SLOW PLAN results below to make them easier to read. Can you\n> tell me if this makes any sense to you?\n\nI think you want to run EXPLAIN ANALYZE on the queries that are being\nexecuted BY mdx_core.zips_in_mile_range('75203', 15::numeric) rather\nthan the query that calls that function. You should be able to see\nthe same caching effect there and looking at that plan might give you\na better idea what is really happening.\n\n(Note that you might need to use PREPARE and EXPLAIN EXECUTE to get\nthe same plan the function is generating internally, rather than just\nEXPLAIN.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 26 May 2010 00:13:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Function scan/Index scan to nested loop" } ]
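Following the suggestion at the end of this thread, the next step is to time the statement the function runs internally, and to do it through a prepared statement so the plan is closer to the one the function generates. The body of zips_in_mile_range() is not shown here, so the SELECT below is only a stand-in; mdx_core.zip_distance and its columns are invented names, while the '75203' / 15::numeric arguments are the ones from the slow call.

-- A prepared statement is planned at PREPARE time, without the concrete
-- parameter values, which is closer to the plan built inside the function
-- than a plain EXPLAIN with literals:
PREPARE zips_probe (text, numeric) AS
    SELECT zip, distance
      FROM mdx_core.zip_distance      -- placeholder for the function's real query
     WHERE origin_zip = $1
       AND distance <= $2;

-- Run it twice: the first execution shows the cold-cache behavior the
-- thread describes, the second the warm-cache behavior.
EXPLAIN ANALYZE EXECUTE zips_probe('75203', 15::numeric);
EXPLAIN ANALYZE EXECUTE zips_probe('75203', 15::numeric);

DEALLOCATE zips_probe;

If the probe shows the same minutes-cold / milliseconds-warm pattern as the outer query, the delay is ordinary I/O on the underlying zip table rather than anything about the IMMUTABLE marking, and the remaining work is on the caching and row-estimate side discussed earlier in the thread.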
[ { "msg_contents": "Hi all,\n In my database application, I've a table whose records can reach 10M\nand insertions can happen at a faster rate like 100 insertions per second in\nthe peak times. I configured postgres to do auto vacuum on hourly basis. I\nhave frontend GUI application in CGI which displays the data from the\ndatabase. When I try to get the last twenty records from the database, it\ntakes around 10-15 mins to complete the operation.This is the query which\nis used:\n\n* select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\ne.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\ne.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE\ns.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp <\n'1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\n*\nCan any one suggest me a better solution to improve the performance.\n\nPlease let me know if you've any further queries.\n\n\nThank you,\nVenu\n\nHi all,       In my database application, I've a table whose\nrecords can reach 10M and insertions can happen at a faster rate like\n100 insertions per second in the peak times. I configured postgres to\ndo auto vacuum on hourly basis. I have frontend GUI application in CGI\nwhich displays the data from the database. When I try to get the last\ntwenty records from the database, it takes around 10-15  mins to\ncomplete the operation.This is the query which is used:\n\nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\ne.wifi_addr_2, e.view_status, bssid  FROM event e, signature s WHERE\ns.sig_id = e.signature   AND e.timestamp >= '1270449180' AND\ne.timestamp < '1273473180'  ORDER BY e.cid DESC,  e.cid DESC limit\n21 offset 10539780;\n\n\nCan any one suggest me a better solution to improve the performance.\nPlease let me know if you've any further queries.\n\n\nThank you,\nVenu", "msg_date": "Tue, 11 May 2010 14:17:57 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues when the number of records are around 10 Million" }, { "msg_contents": "First, are you sure you are getting autovacuum to run hourly? Autovacuum will only vacuum when certain configuration thresholds are reached. You can set it to only check for those thresholds every so often, but no vacuuming or analyzing will be done unless they are hit, regardless of how often autovacuum checks the tables. Whenever you are dealing with time series, the default thresholds are often insufficient, especially when you are especially interested in the last few records on a large table. \n \nWhat are your autovacuum configuration parameters?\nWhen were the two tables last autovacuum and analyzed, according to pg_stat_user_tables?\nCould you post the output of explain analyze of your query?\nWhich default statistic collection parameters do you use? Have you changed them specifically for the tables you are using?\nWhich version of Postgres are you running? Which OS? \n \n \n\n>>> venu madhav <[email protected]> 05/11/10 3:47 AM >>>\nHi all,\nIn my database application, I've a table whose records can reach 10M and insertions can happen at a faster rate like 100 insertions per second in the peak times. I configured postgres to do auto vacuum on hourly basis. I have frontend GUI application in CGI which displays the data from the database. 
When I try to get the last twenty records from the database, it takes around 10-15 mins to complete the operation.This is the query which is used:\n\nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\ne.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\n\nCan any one suggest me a better solution to improve the performance.\n\nPlease let me know if you've any further queries.\n\n\nThank you,\nVenu \n\n\n\n\n\nFirst, are you sure you are getting autovacuum to run hourly? Autovacuum will only vacuum when certain configuration thresholds are reached. You can set it to only check for those thresholds every so often, but no vacuuming or analyzing will be done unless they are hit, regardless of how often autovacuum checks the tables. Whenever you are dealing with time series, the default thresholds are often insufficient, especially when you are especially interested in the last few records on a large table. \n \nWhat are your autovacuum configuration parameters?\nWhen were the two tables last autovacuum and analyzed, according to pg_stat_user_tables?\nCould you post the output of explain analyze of your query?Which default statistic collection parameters do you use? Have you changed them specifically for the tables you are using?\nWhich version of Postgres are you running? Which OS? \n \n \n>>> venu madhav <[email protected]> 05/11/10 3:47 AM >>>Hi all,In my database application, I've a table whose records can reach 10M and insertions can happen at a faster rate like 100 insertions per second in the peak times. I configured postgres to do auto vacuum on hourly basis. I have frontend GUI application in CGI which displays the data from the database. When I try to get the last twenty records from the database, it takes around 10-15 mins to complete the operation.This is the query which is used:select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;Can any one suggest me a better solution to improve the performance.Please let me know if you've any further queries.Thank you,Venu", "msg_date": "Tue, 11 May 2010 16:47:38 -0500", "msg_from": "\"Jorge Montero\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records\n\tare around 10 Million" }, { "msg_contents": "venu madhav <[email protected]> wrote:\n \n> When I try to get the last twenty records from the database, it\n> takes around 10-15 mins to complete the operation.\n \nMaking this a little easier to read (for me, at least) I get this:\n \nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\n e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\n e.wifi_addr_2, e.view_status, bssid\n FROM event e,\n signature s\n WHERE s.sig_id = e.signature\n AND e.timestamp >= '1270449180'\n AND e.timestamp < '1273473180'\n ORDER BY\n e.cid DESC,\n e.cid DESC\n limit 21\n offset 10539780\n;\n \nWhy the timestamp range, the order by, the limit, *and* the offset?\nOn the face of it, that seems a bit confused. 
Not to mention that\nyour ORDER BY has the same column twice.\n \nPerhaps that OFFSET is not needed? It is telling PostgreSQL that\nwhatever results are generated based on the rest of the query, read\nthrough and ignore the first ten and a half million. Since you said\nyou had about ten million rows, you wanted the last 20, and the\nORDER by is DESCending, you're probably not going to get what you\nwant.\n \nWhat, exactly, *is* it you want again?\n \n-Kevin\n", "msg_date": "Tue, 11 May 2010 16:50:19 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records\n\tare around 10 Million" }, { "msg_contents": "Venu,\n\nFor starters,\n\n1) You have used the e.cid twice in ORDER BY clause.\n2) If you want last twenty records in the table matching the criteria of timestamp, why do you need the offset?\n3) Do you have indexes on sig_id, signature and timestamp fields?\n\nIf you do not get a good response after that, please post the EXPLAIN ANALYZE for the query.\n\nThanks,\n\nShrirang Chitnis\nSr. Manager, Applications Development\nHOV Services\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee. The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited. If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of venu madhav\nSent: Tuesday, May 11, 2010 2:18 PM\nTo: [email protected]\nSubject: [PERFORM] Performance issues when the number of records are around 10 Million\n\nHi all,\n In my database application, I've a table whose records can reach 10M and insertions can happen at a faster rate like 100 insertions per second in the peak times. I configured postgres to do auto vacuum on hourly basis. I have frontend GUI application in CGI which displays the data from the database. 
When I try to get the last twenty records from the database, it takes around 10-15 mins to complete the operation.This is the query which is used:\n\nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\ne.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\n\nCan any one suggest me a better solution to improve the performance.\n\nPlease let me know if you've any further queries.\n\n\nThank you,\nVenu\n", "msg_date": "Tue, 11 May 2010 17:52:09 -0400", "msg_from": "Shrirang Chitnis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" }, { "msg_contents": "\n> * select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\n> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\n> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE\n> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp\n> < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\n\nAnything with an offset that high is going to result in a sequential\nscan of most of the table.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 11 May 2010 18:04:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records are around\n\t10 Million" }, { "msg_contents": "On Wed, May 12, 2010 at 3:17 AM, Jorge Montero <\[email protected]> wrote:\n\n> First, are you sure you are getting autovacuum to run hourly? Autovacuum\n> will only vacuum when certain configuration thresholds are reached. You can\n> set it to only check for those thresholds every so often, but no vacuuming\n> or analyzing will be done unless they are hit, regardless of how often\n> autovacuum checks the tables. Whenever you are dealing with time series, the\n> default thresholds are often insufficient, especially when you are\n> especially interested in the last few records on a large table.\n>\n>\n[Venu] Yes, autovacuum is running every hour. I could see in the log\nmessages. All the configurations for autovacuum are disabled except that it\nshould run for every hour. This application runs on an embedded box, so\ncan't change the parameters as they effect the other applications running on\nit. 
Can you please explain what do you mean by default parameters.\n\n\n> What are your autovacuum configuration parameters?\n>\n[Venu] Except these all others are disabled.\n #---------------------------------------------------------------------------\n\n# AUTOVACUUM\nPARAMETERS\n#---------------------------------------------------------------------------\n\n\n\nautovacuum = on # enable autovacuum\nsubprocess?\nautovacuum_naptime = 3600 # time between autovacuum runs, in\nsecs\n\nWhen were the two tables last autovacuum and analyzed, according to\n> pg_stat_user_tables?\n>\n[Venu] This is the content of pg_stat_user_tables for the two tables I am\nusing in that query.\n* relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan\n| idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n-------+------------+------------------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 41188 | public | event | 117 | 1201705723 | 998\n| 2824 | 28 | 0 | 0\n 41209 | public | signature | 153 | 5365 | 2\n| 72 | 1 | 0 | 0\n*\n\n> Could you post the output of explain analyze of your query?\n>\n snort=# *EXPLAIN ANALYZE select e.cid, timestamp, s.sig_class,\ns.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config,\ne.wifi_addr_1, e.wifi_addr_2, e.view_status, bssid FROM event e, signature\ns WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND\ne.timestamp < '1273473180' ORDER BY e.cid DESC,\ne.cid DESC limit 21 offset 10539780; *\n QUERY\nPLAN\n---------------------------------------------------------------------------\n\n------------------------------------------------------------------\n Limit (cost=7885743.98..7885743.98 rows=1 width=287) (actual\ntime=1462193.060..1462193.083 rows=14 loops=1)\n -> Sort (cost=7859399.66..7885743.98 rows=10537727 width=287)\n(actual time=1349648.207..1456496.334 rows=10539794 loops=1)\n Sort Key: e.cid\n -> Hash Join (cost=2.44..645448.31 rows=10537727 width=287)\n(actual time=0.182..139745.001 rows=10539794 loops=1)\n Hash Cond: (\"outer\".signature = \"inner\".sig_id)\n -> Seq Scan on event e (cost=0.00..487379.97\nrows=10537727 width=104) (actual time=0.012..121595.257 rows=10539794\nloops=1)\n Filter: ((\"timestamp\" >= 1270449180::bigint) AND\n(\"timestamp\" < 1273473180::bigint))\n -> Hash (cost=2.35..2.35 rows=35 width=191) (actual\ntime=0.097..0.097 rows=36 loops=1)\n -> Seq Scan on signature s (cost=0.00..2.35\nrows=35 width=191) (actual time=0.005..0.045 rows=36 loops=1)\n Total runtime: 1463829.145 ms\n(10 rows)\n\n> Which default statistic collection parameters do you use? Have you changed\n> them specifically for the tables you are using?\n>\n[Venu] These are the statistic collection parameters:\n* # - Query/Index Statistics Collector -\n\nstats_start_collector = on\nstats_command_string = on\n#stats_block_level = off\nstats_row_level = on\n#stats_reset_on_server_start = off*\nPlease let me know if you are referring to something else.\n\n> Which version of Postgres are you running? Which OS?\n>\n[Venu] Postgres Version 8.1 and Cent OS 5.1 is the Operating System.\n\nThank you,\nVenu\n\n>\n>\n>\n> >>> venu madhav <[email protected]> 05/11/10 3:47 AM >>>\n>\n> Hi all,\n> In my database application, I've a table whose records can reach 10M and\n> insertions can happen at a faster rate like 100 insertions per second in the\n> peak times. I configured postgres to do auto vacuum on hourly basis. 
I have\n> frontend GUI application in CGI which displays the data from the database.\n> When I try to get the last twenty records from the database, it takes around\n> 10-15 mins to complete the operation.This is the query which is used:\n>\n> *select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\n> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\n> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE\n> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp <\n> '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\n> *\n> Can any one suggest me a better solution to improve the performance.\n>\n> Please let me know if you've any further queries.\n>\n>\n> Thank you,\n> Venu\n>\n\nOn Wed, May 12, 2010 at 3:17 AM, Jorge Montero <[email protected]> wrote:\n\nFirst, are you sure you are getting autovacuum to run hourly? Autovacuum will only vacuum when certain configuration thresholds are reached. You can set it to only check for those thresholds every so often, but no vacuuming or analyzing will be done unless they are hit, regardless of how often autovacuum checks the tables. Whenever you are dealing with time series, the default thresholds are often insufficient, especially when you are especially interested in the last few records on a large table. \n [Venu] Yes, autovacuum is running every hour. I could see in the log messages. All the configurations for autovacuum are disabled except that it should run for every hour. This application runs on an embedded box, so can't change the parameters as they effect the other applications running on it. Can you please explain what do you mean by default parameters.\n \nWhat are your autovacuum configuration parameters?[Venu] Except these all others are disabled.  #---------------------------------------------------------------------------    \n# AUTOVACUUM PARAMETERS                                                         #---------------------------------------------------------------------------                                                                                    \nautovacuum = on                         # enable autovacuum subprocess?         autovacuum_naptime = 3600               # time between autovacuum runs, in secs \n\nWhen were the two tables last autovacuum and analyzed, according to pg_stat_user_tables?[Venu] This is the content of pg_stat_user_tables for the two tables I am using in that query.\n relid | schemaname |     relname      | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del -------+------------+------------------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 41188 | public     | event            |      117 |   1201705723 |      998 |          2824 |        28 |         0 |         0 41209 | public     | signature        |      153 |         5365 |        2 |            72 |         1 |         0 |         0\n\nCould you post the output of explain analyze of your query? 
snort=# EXPLAIN ANALYZE select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,  e.wifi_addr_2, e.view_status, bssid  FROM event e, signature s WHERE s.sig_id = e.signature   AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180'  ORDER BY e.cid DESC, \n e.cid DESC limit 21 offset 10539780;                                                                  QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------- \n  Limit  (cost=7885743.98..7885743.98 rows=1 width=287) (actual time=1462193.060..1462193.083 rows=14 loops=1)    ->  Sort  (cost=7859399.66..7885743.98 rows=10537727 width=287) (actual time=1349648.207..1456496.334 rows=10539794 loops=1) \n          Sort Key: e.cid          ->  Hash Join  (cost=2.44..645448.31 rows=10537727 width=287) (actual time=0.182..139745.001 rows=10539794 loops=1)                Hash Cond: (\"outer\".signature = \"inner\".sig_id) \n                ->  Seq Scan on event e  (cost=0.00..487379.97 rows=10537727 width=104) (actual time=0.012..121595.257 rows=10539794 loops=1)                      Filter: ((\"timestamp\" >= 1270449180::bigint) AND \n (\"timestamp\" < 1273473180::bigint))                ->  Hash  (cost=2.35..2.35 rows=35 width=191) (actual time=0.097..0.097 rows=36 loops=1)                      ->  Seq Scan on signature s  (cost=0.00..2.35 \n rows=35 width=191) (actual time=0.005..0.045 rows=36 loops=1)  Total runtime: 1463829.145 ms (10 rows) \nWhich default statistic collection parameters do you use? Have you changed them specifically for the tables you are using?\n[Venu] These are the statistic collection parameters: # - Query/Index Statistics Collector -                                                                                         stats_start_collector = on            \nstats_command_string = on                                                   #stats_block_level = off              stats_row_level = on                                                        #stats_reset_on_server_start = off\nPlease let me know if you are referring to something else.\nWhich version of Postgres are you running? Which OS? [Venu] Postgres Version 8.1 and Cent OS 5.1 is the Operating System. Thank you,Venu \n\n \n \n>>> venu madhav <[email protected]> 05/11/10 3:47 AM >>>Hi all,In my database application, I've a table whose records can reach 10M and insertions can happen at a faster rate like 100 insertions per second in the peak times. I configured postgres to do auto vacuum on hourly basis. I have frontend GUI application in CGI which displays the data from the database. 
When I try to get the last twenty records from the database, it takes around 10-15 mins to complete the operation.This is the query which is used:\nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\nCan any one suggest me a better solution to improve the performance.Please let me know if you've any further queries.Thank you,Venu", "msg_date": "Wed, 12 May 2010 11:15:53 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" }, { "msg_contents": "On Wed, May 12, 2010 at 3:22 AM, Shrirang Chitnis <\[email protected]> wrote:\n\n> Venu,\n>\n> For starters,\n>\n> 1) You have used the e.cid twice in ORDER BY clause.\n>\n[Venu] Actually the second cid acts as a secondary sort order if any other\ncolumn in the table is used for sorting. In the query since the primary\nsorting key was also cid, we are seeing it twice. I can remove it.\n\n> 2) If you want last twenty records in the table matching the criteria of\n> timestamp, why do you need the offset?\n>\n[Venu] It is part of an UI application where a user can ask for date\nbetween any dates. It has the options to browse through the data retrieved\nbetween those intervals.\n\n> 3) Do you have indexes on sig_id, signature and timestamp fields?\n>\n[Venu] Yes, I do have indexes on those three.\n\n\n> If you do not get a good response after that, please post the EXPLAIN\n> ANALYZE for the query.\n>\nsnort=# EXPLAIN ANALYZE select e.cid, timestamp, s.sig_class,\ns.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config,\ne.wifi_addr_1, e.wifi_addr_2, e.view_status, bssid FROM event e, signature\ns WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND\ne.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset\n10539780;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=7885743.98..7885743.98 rows=1 width=287) (actual\ntime=1462193.060..1462193.083 rows=14 loops=1)\n -> Sort (cost=7859399.66..7885743.98 rows=10537727 width=287) (actual\ntime=1349648.207..1456496.334 rows=10539794 loops=1)\n Sort Key: e.cid\n -> Hash Join (cost=2.44..645448.31 rows=10537727 width=287)\n(actual time=0.182..139745.001 rows=10539794 loops=1)\n Hash Cond: (\"outer\".signature = \"inner\".sig_id)\n -> Seq Scan on event e (cost=0.00..487379.97 rows=10537727\nwidth=104) (actual time=0.012..121595.257 rows=10539794 loops=1)\n Filter: ((\"timestamp\" >= 1270449180::bigint) AND\n(\"timestamp\" < 1273473180::bigint))\n -> Hash (cost=2.35..2.35 rows=35 width=191) (actual\ntime=0.097..0.097 rows=36 loops=1)\n -> Seq Scan on signature s (cost=0.00..2.35 rows=35\nwidth=191) (actual time=0.005..0.045 rows=36 loops=1)\n *Total runtime: 1463829.145 ms*\n(10 rows)\nThank you,\nVenu Madhav.\n\n>\n> Thanks,\n>\n> Shrirang Chitnis\n> Sr. 
Manager, Applications Development\n> HOV Services\n> Office: (866) 808-0935 Ext: 39210\n> [email protected]\n> www.hovservices.com\n>\n>\n> The information contained in this message, including any attachments, is\n> attorney privileged and/or confidential information intended only for the\n> use of the individual or entity named as addressee. The review,\n> dissemination, distribution or copying of this communication by or to anyone\n> other than the intended addressee is strictly prohibited. If you have\n> received this communication in error, please immediately notify the sender\n> by replying to the message and destroy all copies of the original message.\n>\n> From: [email protected] [mailto:\n> [email protected]] On Behalf Of venu madhav\n> Sent: Tuesday, May 11, 2010 2:18 PM\n> To: [email protected]\n> Subject: [PERFORM] Performance issues when the number of records are around\n> 10 Million\n>\n> Hi all,\n> In my database application, I've a table whose records can reach 10M\n> and insertions can happen at a faster rate like 100 insertions per second in\n> the peak times. I configured postgres to do auto vacuum on hourly basis. I\n> have frontend GUI application in CGI which displays the data from the\n> database. When I try to get the last twenty records from the database, it\n> takes around 10-15 mins to complete the operation.This is the query which\n> is used:\n>\n> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\n> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\n> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE\n> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp <\n> '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780;\n>\n> Can any one suggest me a better solution to improve the performance.\n>\n> Please let me know if you've any further queries.\n>\n>\n> Thank you,\n> Venu\n>\n\nOn Wed, May 12, 2010 at 3:22 AM, Shrirang Chitnis <[email protected]> wrote:\nVenu,\n\nFor starters,\n\n1) You have used the e.cid twice in ORDER BY clause.[Venu] Actually the second cid acts as a secondary sort order if any other column in the table is used for sorting. In the query since the primary sorting key was also  cid, we are seeing it twice. I can remove it.\n\n2) If you want last twenty records in the table matching the criteria of timestamp, why do you need the offset?[Venu] It is part of an UI  application where a user can ask for date between any dates. It has the options to browse through the data retrieved between those intervals.\n\n3) Do you have indexes on sig_id, signature and timestamp fields?[Venu] Yes, I do have indexes on those three.  
\n\nIf you do not get a good response after that, please post the EXPLAIN ANALYZE for the query.snort=# EXPLAIN ANALYZE select e.cid, timestamp, s.sig_class,\ns.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config,\ne.wifi_addr_1,  e.wifi_addr_2, e.view_status, bssid  FROM event e,\nsignature s WHERE s.sig_id = e.signature   AND e.timestamp >=\n'1270449180' AND e.timestamp < '1273473180'  ORDER BY e.cid DESC, \ne.cid DESC limit 21 offset 10539780;\n\n\n\n                                                                 QUERY PLAN                                                                  ---------------------------------------------------------------------------------------------------------------------------------------------\n\n\n\n\n Limit  (cost=7885743.98..7885743.98 rows=1 width=287) (actual time=1462193.060..1462193.083 rows=14 loops=1)  \n->  Sort  (cost=7859399.66..7885743.98 rows=10537727 width=287)\n(actual time=1349648.207..1456496.334 rows=10539794 loops=1)\n\n\n\n         Sort Key: e.cid         ->  Hash Join  (cost=2.44..645448.31 rows=10537727 width=287) (actual time=0.182..139745.001 rows=10539794 loops=1)               Hash Cond: (\"outer\".signature = \"inner\".sig_id)\n\n               ->  Seq Scan on event e  (cost=0.00..487379.97\nrows=10537727 width=104) (actual time=0.012..121595.257 rows=10539794\nloops=1)                     Filter: ((\"timestamp\" >= 1270449180::bigint) AND (\"timestamp\" < 1273473180::bigint))\n\n\n\n               ->  Hash  (cost=2.35..2.35 rows=35 width=191) (actual time=0.097..0.097 rows=36 loops=1)                    \n->  Seq Scan on signature s  (cost=0.00..2.35 rows=35 width=191)\n(actual time=0.005..0.045 rows=36 loops=1)\n\n\n\n Total runtime: 1463829.145 ms(10 rows) Thank you,Venu Madhav.\n\nThanks,\n\nShrirang Chitnis\nSr. Manager, Applications Development\nHOV Services\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee.  The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited.  If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of venu madhav\n\nSent: Tuesday, May 11, 2010 2:18 PM\nTo: [email protected]\nSubject: [PERFORM] Performance issues when the number of records are around 10 Million\n\nHi all,\n      In my database application, I've a table whose records can reach 10M and insertions can happen at a faster rate like 100 insertions per second in the peak times. I configured postgres to do auto vacuum on hourly basis. I have frontend GUI application in CGI which displays the data from the database. 
When I try to get the last twenty records from the database, it takes around 10-15  mins to complete the operation.This is the query which is used:\n\nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\ne.wifi_addr_2, e.view_status, bssid  FROM event e, signature s WHERE s.sig_id = e.signature   AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180'  ORDER BY e.cid DESC,  e.cid DESC limit 21 offset 10539780;\n\nCan any one suggest me a better solution to improve the performance.\n\nPlease let me know if you've any further queries.\n\n\nThank you,\nVenu", "msg_date": "Wed, 12 May 2010 12:12:33 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" }, { "msg_contents": "On Wed, May 12, 2010 at 3:20 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> venu madhav <[email protected]> wrote:\n>\n> > When I try to get the last twenty records from the database, it\n> > takes around 10-15 mins to complete the operation.\n>\n> Making this a little easier to read (for me, at least) I get this:\n>\n> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\n> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\n> e.wifi_addr_2, e.view_status, bssid\n> FROM event e,\n> signature s\n> WHERE s.sig_id = e.signature\n> AND e.timestamp >= '1270449180'\n> AND e.timestamp < '1273473180'\n> ORDER BY\n> e.cid DESC,\n> e.cid DESC\n> limit 21\n> offset 10539780\n> ;\n>\n> Why the timestamp range, the order by, the limit, *and* the offset?\n> On the face of it, that seems a bit confused. Not to mention that\n> your ORDER BY has the same column twice.\n>\n[Venu] The second column acts as a secondary key for sorting if the primary\nsorting key is a different column. For this query both of them are same.\nThis query is part of an application which allows user to select time ranges\nand retrieve the data in that interval. Hence the time stamp. To have it in\nsome particular order we're doing order by. If the records are more in the\ninterval, we display in sets of 20/30 etc. The user also has the option to\nbrowse through any of those records hence the limit and offset.\n\n>\n> Perhaps that OFFSET is not needed? It is telling PostgreSQL that\n> whatever results are generated based on the rest of the query, read\n> through and ignore the first ten and a half million. Since you said\n> you had about ten million rows, you wanted the last 20, and the\n> ORDER by is DESCending, you're probably not going to get what you\n> want.\n>\n> What, exactly, *is* it you want again?\n>\n> [Venu] As explain above this query is part of the application where user\nwishes to see the records from the database between any start and end times.\nThey get rendered as a HTML page with pagination links to traverse through\nthe data. The user has option to go to any set of records. When the user\nasks for the last set of 20 records, this query gets executed.\nHope it is clear now. 
Please let me know if you need any further info.\n\nThank you,\nVenu\n\n> -Kevin\n>\n\nOn Wed, May 12, 2010 at 3:20 AM, Kevin Grittner <[email protected]> wrote:\nvenu madhav <[email protected]> wrote:\n\n> When I try to get the last twenty records from the database, it\n> takes around 10-15  mins to complete the operation.\n\nMaking this a little easier to read (for me, at least) I get this:\n\nselect e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,\n    e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,\n    e.wifi_addr_2, e.view_status, bssid\n  FROM event e,\n       signature s\n  WHERE s.sig_id = e.signature\n    AND e.timestamp >= '1270449180'\n    AND e.timestamp <  '1273473180'\n  ORDER BY\n    e.cid DESC,\n    e.cid DESC\n  limit 21\n  offset 10539780\n;\n\nWhy the timestamp range, the order by, the limit, *and* the offset?\nOn the face of it, that seems a bit confused.  Not to mention that\nyour ORDER BY has the same column twice.[Venu] The second column acts as a secondary key for sorting if the primary sorting key is a different column. For this query both of them are same. This query is part of an application which allows user to select time ranges and retrieve the data in that interval. Hence the time stamp. To have it in some particular order we're doing order by. If the records are more in the interval, we display in sets of 20/30 etc. The user also has  the option to browse through any of those records hence the limit and offset.\n\n\nPerhaps that OFFSET is not needed?  It is telling PostgreSQL that\nwhatever results are generated based on the rest of the query, read\nthrough and ignore the first ten and a half million.  Since you said\nyou had about ten million rows, you wanted the last 20, and the\nORDER by is DESCending, you're probably not going to get what you\nwant.\n\nWhat, exactly, *is* it you want again?\n[Venu] As explain above this query is part of the application where user wishes to see the records from the database between any start and end times. They get rendered as a HTML page with pagination links to traverse through the data. The user has option to go to any set of records. When the user asks for the last set of 20 records, this query gets executed. \nHope it is clear now. Please let me know if you need any further info.Thank you,Venu \n\n-Kevin", "msg_date": "Wed, 12 May 2010 12:29:11 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" }, { "msg_contents": "On Wed, May 12, 2010 at 1:45 AM, venu madhav <[email protected]> wrote:\n> [Venu] Yes, autovacuum is running every hour. I could see in the log\n> messages. All the configurations for autovacuum are disabled except that it\n> should run for every hour. This application runs on an embedded box, so\n> can't change the parameters as they effect the other applications running on\n> it. Can you please explain what do you mean by default parameters.\n> autovacuum = on                         # enable autovacuum\n> subprocess?\n> autovacuum_naptime = 3600               # time between autovacuum runs, in\n> secs\n\nThe default value for autovacuum_naptime is a minute. Why would you\nwant to increase it by a factor of 60? 
That seems likely to result in\nI/O spikes, table bloat, and generally poor performance.\n\nThere are dramatic performance improvements in PostgreSQL 8.3 and 8.4.\n Upgrading would probably help, a lot.\n\nThe points already made about LIMIT <some huge value> are also right on target.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 26 May 2010 00:25:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" } ]
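For comparison with the settings quoted above, a sketch of the same autovacuum block with the naptime back at its one-minute default. The scale-factor lines are illustrative values rather than measured recommendations, and whether the embedded box can absorb the extra background vacuum I/O has to be judged on that hardware:

    #---------------------------------------------------------------------------
    # AUTOVACUUM PARAMETERS (sketch)
    #---------------------------------------------------------------------------
    autovacuum = on
    autovacuum_naptime = 60                  # back to the default: check tables every minute
    autovacuum_vacuum_scale_factor = 0.2     # example value: vacuum after ~20% of a table has changed
    autovacuum_analyze_scale_factor = 0.1    # example value: re-analyze after ~10%, keeps planner stats current

At the quoted 100 inserts per second the event table grows by roughly 360,000 rows an hour, so how often analyze actually runs feeds directly into how good the row estimates in the plans above are.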
[ { "msg_contents": "venu madhav wrote:\n \n>>> AND e.timestamp >= '1270449180'\n>>> AND e.timestamp < '1273473180'\n>>> ORDER BY.\n>>> e.cid DESC,\n>>> e.cid DESC\n>>> limit 21\n>>> offset 10539780\n \n> The second column acts as a secondary key for sorting if the\n> primary sorting key is a different column. For this query both of\n> them are same.\n \nAny chance you could just leave the second one off in that case?\n \n> This query is part of an application which allows user to select\n> time ranges and retrieve the data in that interval. Hence the time\n> stamp.\n \nWhich, of course, is going to affect the number of rows. Which\nleaves me wondering how you know that once you select and sequence\nthe result set you need to read past and ignore exactly 10539780\nrows to get to the last page.\n \n> To have it in some particular order we're doing order by.\n \nWhich will affect which rows are at any particular offset.\n \n> If the records are more in the interval,\n \nHow do you know that before you run your query?\n \n> we display in sets of 20/30 etc. The user also has the option to\n> browse through any of those records hence the limit and offset.\n \nHave you considered alternative techniques for paging? You might\nuse values at the edges of the page to run a small query (limit, no\noffset) when they page. You might generate all the pages on the\nfirst pass and cache them for a while.\n \n> When the user asks for the last set of 20 records, this query gets\n> executed.\n \nThe DESC on the ORDER BY makes it look like you're trying to use the\nORDER BY to get to the end, but then your offset tells PostgreSQL to\nskip the 10.5 million result rows with the highest keys. Is the\n\"last page\" the one with the highest or lowest values for cid?\n \n-Kevin\n\n\n", "msg_date": "Wed, 12 May 2010 06:55:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues when the number of records\n\tare around 10 Million" }, { "msg_contents": "On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> venu madhav wrote:\n>\n> >>> AND e.timestamp >= '1270449180'\n> >>> AND e.timestamp < '1273473180'\n> >>> ORDER BY.\n> >>> e.cid DESC,\n> >>> e.cid DESC\n> >>> limit 21\n> >>> offset 10539780\n>\n> > The second column acts as a secondary key for sorting if the\n> > primary sorting key is a different column. For this query both of\n> > them are same.\n>\n> Any chance you could just leave the second one off in that case?\n>\n[Venu] Yes, that can be ignored. But am not sure that removing it would\nreduce the time drastically.\n\n>\n> > This query is part of an application which allows user to select\n> > time ranges and retrieve the data in that interval. Hence the time\n> > stamp.\n>\n> Which, of course, is going to affect the number of rows. Which\n> leaves me wondering how you know that once you select and sequence\n> the result set you need to read past and ignore exactly 10539780\n> rows to get to the last page.\n>\n[Venu]For Ex: My database has 10539793 records. My application first\ncalculates the count of number of records in that interval. 
And then based\non user request to display 10/20/30/40 records in one page, it calculates\nhow many records to be displayed when the last link is clicked.\n\n>\n> > To have it in some particular order we're doing order by.\n>\n> Which will affect which rows are at any particular offset.\n>\n[Venu]Yes, by default it has the primary key for order by.\n\n>\n> > If the records are more in the interval,\n>\n> How do you know that before you run your query?\n>\n [Venu] I calculate the count first.\n\n>\n> > we display in sets of 20/30 etc. The user also has the option to\n> > browse through any of those records hence the limit and offset.\n>\n> Have you considered alternative techniques for paging? You might\n> use values at the edges of the page to run a small query (limit, no\n> offset) when they page. You might generate all the pages on the\n> first pass and cache them for a while.\n>\n> [Venu] If generate all the pages at once, to retrieve all the 10 M records\nat once, it would take much longer time and since the request from the\nbrowser, there is a chance of browser getting timed out.\n\n> > When the user asks for the last set of 20 records, this query gets\n> > executed.\n>\n> The DESC on the ORDER BY makes it look like you're trying to use the\n> ORDER BY to get to the end, but then your offset tells PostgreSQL to\n> skip the 10.5 million result rows with the highest keys. Is the\n> \"last page\" the one with the highest or lowest values for cid?\n>\n> [Venu] The last page contains the lowest values of cid. By default we get\nthe records in the decreasing order of cid and then get the last 10/20.\n\nThank you,\nVenu.\n\n> -Kevin\n>\n>\n>\n\nOn Wed, May 12, 2010 at 5:25 PM, Kevin Grittner <[email protected]> wrote:\nvenu madhav  wrote:\n\n>>> AND e.timestamp >= '1270449180'\n>>> AND e.timestamp < '1273473180'\n>>> ORDER BY.\n>>> e.cid DESC,\n>>> e.cid DESC\n>>> limit 21\n>>> offset 10539780\n\n> The second column acts as a secondary key for sorting if the\n> primary sorting key is a different column. For this query both of\n> them are same.\n\nAny chance you could just leave the second one off in that case?[Venu] Yes, that can be ignored. But am not sure that removing it would reduce the time drastically. \n\n> This query is part of an application which allows user to select\n> time ranges and retrieve the data in that interval. Hence the time\n> stamp.\n\nWhich, of course, is going to affect the number of rows.  Which\nleaves me wondering how you know that once you select and sequence\nthe result set you need to read past and ignore exactly 10539780\nrows to get to the last page.[Venu]For Ex:  My database has 10539793 records. My application first calculates the count of number of records in that interval. And then based on user request to display 10/20/30/40 records in one page, it calculates how many records to be displayed when the last link is clicked.\n\n\n> To have it in some particular order we're doing order by.\n\nWhich will affect which rows are at any particular offset.[Venu]Yes, by default it has the primary key for order by. \n\n> If the records are more in the interval,\n\nHow do you know that before you run your query? [Venu] I calculate the count first.\n\n> we display in sets of 20/30 etc. The user also has the option to\n> browse through any of those records hence the limit and offset.\n\nHave you considered alternative techniques for paging?  You might\nuse values at the edges of the page to run a small query (limit, no\noffset) when they page.  
You might generate all the pages on the\nfirst pass and cache them for a while.\n[Venu] If generate all the pages at once, to retrieve all the 10 M records at once, it would take much longer time and since the request from the browser, there is a chance of browser getting timed out. \n\n> When the user asks for the last set of 20 records, this query gets\n> executed.\n\nThe DESC on the ORDER BY makes it look like you're trying to use the\nORDER BY to get to the end, but then your offset tells PostgreSQL to\nskip the 10.5 million result rows with the highest keys.  Is the\n\"last page\" the one with the highest or lowest values for cid?\n[Venu] The last page contains the lowest values of cid. By default we get the records in the decreasing order of cid and then get the last 10/20.Thank you,Venu.\n\n-Kevin", "msg_date": "Wed, 12 May 2010 17:44:55 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" }, { "msg_contents": "venu madhav <[email protected]> wrote:\n \n>> > If the records are more in the interval,\n>>\n>> How do you know that before you run your query?\n>>\n> I calculate the count first.\n \nThis and other comments suggest that the data is totally static\nwhile this application is running. Is that correct?\n \n> If generate all the pages at once, to retrieve all the 10 M\n> records at once, it would take much longer time\n \nAre you sure of that? It seems to me that it's going to read all\nten million rows once for the count and again for the offset. It\nmight actually be faster to pass them just once and build the pages.\n \nAlso, you didn't address the issue of storing enough information on\nthe page to read off either edge in the desired sequence with just a\nLIMIT and no offset. \"Last page\" or \"page up\" would need to reverse\nthe direction on the ORDER BY. This would be very fast if you have\nappropriate indexes. Your current technique can never be made very\nfast.\n \n-Kevin\n", "msg_date": "Wed, 12 May 2010 08:56:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues when the number of records\n\tare around 10 Million" }, { "msg_contents": "On 5/12/10 4:55 AM, Kevin Grittner wrote:\n> venu madhav wrote:\n>> we display in sets of 20/30 etc. The user also has the option to\n>> browse through any of those records hence the limit and offset.\n>\n> Have you considered alternative techniques for paging? You might\n> use values at the edges of the page to run a small query (limit, no\n> offset) when they page. You might generate all the pages on the\n> first pass and cache them for a while.\n\nKevin is right. You need to you \"hitlists\" - a semi-temporary table that holds the results of your initial query. You're repeating a complex, expensive query over and over, once for each page of data that the user wants to see. Instead, using a hitlist, your initial query looks something like this:\n\ncreate table hitlist_xxx(\n objectid integer,\n sortorder integer default nextval('hitlist_seq')\n);\n\ninsert into hitlist_xxx (objectid)\n (select ... your original query ... order by ...)\n\nYou store some object ID or primary key in the \"hitlist\" table, and the sequence records your original order.\n\nThen when your user asks for page 1, 2, 3 ... N, all you have to do is join your hitlist to your original data:\n\n select ... 
from mytables join hitlist_xxx on (...)\n where sortorder >= 100 and sortorder < 120;\n\nwhich would instantly return page 5 of your data.\n\nTo do this, you need a way to know when a user is finished so that you can discard the hitlist.\n\nCraig\n", "msg_date": "Wed, 12 May 2010 07:08:20 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records\t are\n\taround 10 Million" }, { "msg_contents": "On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> venu madhav <[email protected]> wrote:\n>\n> >> > If the records are more in the interval,\n> >>\n> >> How do you know that before you run your query?\n> >>\n> > I calculate the count first.\n>\n> This and other comments suggest that the data is totally static\n> while this application is running. Is that correct?\n>\n[Venu] No, the data gets added when the application is running. As I've\nmentioned before it could be as faster as 100-400 records per second. And it\nis an important application which will be running 24/7.\n\n>\n> > If generate all the pages at once, to retrieve all the 10 M\n> > records at once, it would take much longer time\n>\n> Are you sure of that? It seems to me that it's going to read all\n> ten million rows once for the count and again for the offset. It\n> might actually be faster to pass them just once and build the pages.\n>\n[Venu] Even if the retrieval is faster, the client which is viewing the\ndatabase and the server where the data gets logged can be any where on the\nglobe. So, it is not feasible to get all the 1 or 10 M records at once from\nthe server to client.\n\n\n>\n> Also, you didn't address the issue of storing enough information on\n> the page to read off either edge in the desired sequence with just a\n> LIMIT and no offset. \"Last page\" or \"page up\" would need to reverse\n> the direction on the ORDER BY. This would be very fast if you have\n> appropriate indexes. Your current technique can never be made very\n> fast.\n>\n[Venu] I actually didn't understand what did you mean when you said \"storing\nenough information on the page to read off either edge in the desired\nsequence with just a\nLIMIT and no offset\". What kind of information can we store to improve the\nperformance. Reversing the order by is one thing, I am trying to figure out\nhow fast it is. Thanks a lot for this suggestion.\n\nThank you,\nVenu.\n\n>\n> -Kevin\n>\n\nOn Wed, May 12, 2010 at 7:26 PM, Kevin Grittner <[email protected]> wrote:\nvenu madhav <[email protected]> wrote:\n\n>> > If the records are more in the interval,\n>>\n>> How do you know that before you run your query?\n>>\n> I calculate the count first.\n\nThis and other comments suggest that the data is totally static\nwhile this application is running.  Is that correct?[Venu] No, the data gets added when the application is running. As I've mentioned before it could be as faster as 100-400 records per second. And it is an important application which will be running 24/7.  \n\n\n> If generate all the pages at once, to retrieve all the 10 M\n> records at once, it would take much longer time\n\nAre you sure of that?  It seems to me that it's going to read all\nten million rows once for the count and again for the offset.  It\nmight actually be faster to pass them just once and build the pages.[Venu] Even if the retrieval is faster, the client which is viewing the database and the server where the data gets logged can be any where on the globe. 
So, it is not feasible to get all the 1 or 10 M records at once from the server to client.\n\n\n \n\nAlso, you didn't address the issue of storing enough information on\nthe page to read off either edge in the desired sequence with just a\nLIMIT and no offset.  \"Last page\" or \"page up\" would need to reverse\nthe direction on the ORDER BY.  This would be very fast if you have\nappropriate indexes.  Your current technique can never be made very\nfast.[Venu] I actually didn't understand what did you mean when you said \"storing enough information on the page to read off either edge in the desired sequence with just a\nLIMIT and no offset\". What kind of information can we store to improve the performance.  Reversing the order by is one thing, I am trying to figure out how fast it is. Thanks a lot for this suggestion.Thank you,\n\nVenu.\n\n-Kevin", "msg_date": "Thu, 13 May 2010 10:11:46 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues when the number of records are\n\taround 10 Million" }, { "msg_contents": "venu madhav <[email protected]> wrote:\n> Kevin Grittner <[email protected] wrote:\n \n>> > I calculate the count first.\n>>\n>> This and other comments suggest that the data is totally static\n>> while this application is running. Is that correct?\n>>\n> No, the data gets added when the application is running. As I've\n> mentioned before it could be as faster as 100-400 records per\n> second. And it is an important application which will be running\n> 24/7.\n \nThen how can you trust that the count you run before selecting is\naccurate when you run the SELECT? Are they both in the same\nREPEATABLE READ or SERIALIZABLE transaction?\n \n>> Also, you didn't address the issue of storing enough information\n>> on the page to read off either edge in the desired sequence with\n>> just a LIMIT and no offset. \"Last page\" or \"page up\" would need\n>> to reverse the direction on the ORDER BY. This would be very\n>> fast if you have appropriate indexes. Your current technique can\n>> never be made very fast.\n>>\n> I actually didn't understand what did you mean when you said\n> \"storing enough information on the page to read off either edge in\n> the desired sequence with just a LIMIT and no offset\". What kind\n> of information can we store to improve the performance.\n \nWell, for starters, it's entirely possible that the \"hitlist\"\napproach posted by Craig James will work better for you than what\nI'm about to describe. Be sure to read this post carefully:\n \nhttp://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php\n \nThe reason that might work better than the idea I was suggesting is\nthat the combination of selecting on timestamp and ordering by\nsomething else might make it hard to use reasonable indexes to\nposition and limit well enough for the technique I was suggesting to\nperform well. It's hard to say without testing.\n \nFor what I was describing, you must use an ORDER BY which guarantees\na consistent sequence for the result rows. I'm not sure whether you\nalways have that currently; if not, that's another nail in the\ncoffin of your current technique, since the same OFFSET into the\nresult might be different rows from one time to the next, even if\ndata didn't change. 
If your ORDER BY can't guarantee a unique set\nof ordering values for every row in the result set, you need to add\nany missing columns from a unique index (usually the primary key) to\nthe ORDER BY clause.\n \nAnyway, once you are sure you have an ORDER BY which is\ndeterministic, you make sure your software remembers the ORDER BY\nvalues for the first and last entries on the page. Then you can do\nsomething like (abstractly):\n \nSELECT x, y, z\n FROM a, b\n WHERE ts BETWEEN m AND n\n AND a.x = b.a_x\n AND (x, y) > (lastx, lasty)\n ORDER BY x, y\n LIMIT 20;\n \nWith the right indexes, data distributions, selection criteria, and\nORDER BY columns -- that *could* be very fast. If not, look at\nCraig's post.\n \n-Kevin\n", "msg_date": "Thu, 13 May 2010 08:56:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues when the number of records\n\tare around 10 Million" } ]
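Spelled out against the tables from this thread, the keyset approach in the message above might look roughly like the following. The literal 12345 stands for the smallest cid on the page the user is currently looking at (the application would substitute the real value); this is a sketch, not something tested against the actual schema:

    -- "next page" while walking down in cid: no OFFSET, just start below the previous page
    SELECT e.cid, e.timestamp, s.sig_class, s.sig_priority, s.sig_name,
           e.sniff_ip, e.sniff_channel, s.sig_config,
           e.wifi_addr_1, e.wifi_addr_2, e.view_status, bssid
      FROM event e
      JOIN signature s ON s.sig_id = e.signature
     WHERE e.timestamp >= 1270449180
       AND e.timestamp <  1273473180
       AND e.cid < 12345              -- placeholder: smallest cid already shown
     ORDER BY e.cid DESC
     LIMIT 20;

For the "last page" the same query can be run with ORDER BY e.cid ASC (and without the cid condition), LIMIT set to the size of the final page, and the rows reversed in the application. With cid indexed -- it appears to be the primary key -- either form should be able to stop after reading a handful of index entries instead of sorting the whole 10.5 million-row result the way the OFFSET plan above does.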
[ { "msg_contents": "Hi\n\nI have a situation at my work which I simply don't understand and hope \nthat here I can find some explanations.\n\nWhat is on the scene:\nA - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian \netch, postgresql 8.1.19\nB - new server HP DL 360, 12GB RAM, Intel Xeon 8 cores CPU, fast SAS \n(mirrored) HDDs, Debian 64 bit, lenny, backported postgresql 8.1.19\nC - our Windows application based on Postgresql 8.1 (not newer)\n\nand second role actors (for pgAdmin)\nD - my old Windows XP computer, Athlon64 X2 3800+, with 100Mbit ethernet\nE - new laptop with Ubuntu, 1000Mbit ethernet\n\nThe goal: migrate postgresql from A to B.\n\nSimple and works fine (using pg_dump, psql -d dbname <bakcup_file).\n\nSo what is the problem? My simple 'benchmarks' I have done with pgAdmin \nin spare time.\n\npgAdmin is the latest 1.8.2 on both D and E.\nUsing pgAdmin on my (D) computer I have run SELECT * from some_table; \nand noted the execution time on both A and B servers:\n- on A (the old one) about 120sec\n- on B (the new monster) about 120sec (???)\n\n(yes, there is almost no difference)\n\nOn the first test runs the postgresql configs on both servers were the \nsame, so I have started to optimize (according to postgresql wiki) the \npostgresql on the new (B) server. The difference with my simple select \n* were close to 0.\n\nSo this is my first question. Why postgresql behaves so strangely?\nWhy there is no difference in database speed between those two machines?\n\nI thought about hardware problem on B, but:\nhdparm shows 140MB/sec on B and 60MB on A (and buffered reads are 8GB on \nB and 800MB on A)\nbonnie++ on B:\n> Version 1.03d ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> malwa 24G 51269 71 49649 10 34974 6 48969 82 147840 13 1150 1\non A:\n> Version 1.03 ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> irys 4G 42961 93 41125 13 14414 3 20262 48 38487 5 167.0 0\n\nHere the difference in writings is not so big (wonder why, the price \nbetween those machines is huge) but in readings are noticeably better on B.\n\nOk, those were the tests done using my old Windows PC (D) computer. So I \nhave decided to do the same using my new laptop with Ubuntu (E).\nThe results were soooo strange that now I am completely confused.\n\nThe same SELECT:\n- on A first (fresh) run 30sec, second (and so on) about 11sec (??!)\n- on B first run 80sec, second (and so on) about 80sec also\n\nWhat is going on here? About 8x faster on slower machine?\n\nOne more thing comes to my mind. The A server has iso-8859-2 locale and \ndatabase is set to latin2, the B server has utf8 locale, but database is \nstill latin2. Does it matter anyway?\n\nSo here I'm stuck and hope for help. Is there any bottleneck? How to \nfind it?\n\nRegards\nPiotr\n", "msg_date": "Fri, 14 May 2010 10:24:20 +0200", "msg_from": "Piotr Legiecki <[email protected]>", "msg_from_op": true, "msg_subject": "old server, new server, same performance" }, { "msg_contents": "2010/5/14 Piotr Legiecki <[email protected]>:\n> So what is the problem? 
My simple 'benchmarks' I have done with pgAdmin in\n> spare time.\n>\n> pgAdmin is the latest 1.8.2 on both D and E.\n> Using pgAdmin on my (D) computer I have run SELECT * from some_table; and\n> noted the execution time on both A and B servers:\n\nSo, any chance you'll run it like I asked:\n\nselect count(*) from some_table;\n\n?\n", "msg_date": "Fri, 14 May 2010 16:52:43 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "\nOn May 14, 2010, at 3:52 PM, Scott Marlowe wrote:\n\n> 2010/5/14 Piotr Legiecki <[email protected]>:\n>> So what is the problem? My simple 'benchmarks' I have done with pgAdmin in\n>> spare time.\n>> \n>> pgAdmin is the latest 1.8.2 on both D and E.\n>> Using pgAdmin on my (D) computer I have run SELECT * from some_table; and\n>> noted the execution time on both A and B servers:\n> \n> So, any chance you'll run it like I asked:\n> \n> select count(*) from some_table;\n> \n> ?\n\nI agree that select * is a very bad test and probably the problem here. Even if you do 'select * from foo' locally to avoid the network and pipe it to /dev/null, it is _significantly_ slower than count(*) because of all the data serialization.\n\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 14 May 2010 17:46:07 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Scott Marlowe pisze:\n> 2010/5/14 Piotr Legiecki <[email protected]>:\n>> So what is the problem? My simple 'benchmarks' I have done with pgAdmin in\n>> spare time.\n>>\n>> pgAdmin is the latest 1.8.2 on both D and E.\n>> Using pgAdmin on my (D) computer I have run SELECT * from some_table; and\n>> noted the execution time on both A and B servers:\n> \n> So, any chance you'll run it like I asked:\n> \n> select count(*) from some_table;\n> \n\nSorry, but it was computer-less weekend ;-)\n\nSo to answer all questions in one mail:\n1. The database is autovacuumed, at first (the default debian setting) \nevery one minute, than I have set it to one hour.\n\n2. select count(*) from some_table; runs in a fraction of a second on \nthe console on both servers (there are only 4000 records, the second \nlonger table has 50000 but it does not matter very much). From pg_admin \nthe results are:\n- slow server (and the longest table in my db) 938ms (first run) and \nabout 40ms next ones\n- fast server 110ms first run, about 30ms next ones.\nWell, finally my new server deservers its name ;-) The later times as I \nunderstand are just cache readings from postgresql itself?\n\n3. The configs. As noted earlier, at first test they were the same, \nlater I started to optimize the faster server from the defaults to some \nhigher values, without any significant gain.\nThe package on slower server is just deb package from etch (4.0) \nrepository, the one on fast server (which is newer lenny - 5.0) is \ncompiled from deb source package on that server.\nfast server: http://pgsql.privatepaste.com/edf2ec36c3\nslow server: http://pgsql.privatepaste.com/bdc141f0be\n\n4. Machine. The new server has 5 SAS disks (+ 1 spare), but I don't \nremember how they are set up now (looks like mirror for system '/' and \nRAID5 for rest - including DB). 
size of the DB is 405MB\n\nSo still I don't get this: select * from table; on old server takes 0,5 \nsec, on new one takes 6sec. Why there is so big difference? And it does \nnot matter how good or bad select is to measure performance, because I \ndon't measure the performance, I measure the relative difference. \nSomwhere there is a bottleneck.\n\nRegards\nP.\n", "msg_date": "Mon, 17 May 2010 10:06:23 +0200", "msg_from": "Piotr Legiecki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "On Mon, May 17, 2010 at 2:06 AM, Piotr Legiecki <[email protected]> wrote:\n> 2. select count(*) from some_table; runs in a fraction of a second on the\n> console on both servers (there are only 4000 records, the second longer\n> table has 50000 but it does not matter very much). From pg_admin the results\n> are:\n> - slow server (and the longest table in my db) 938ms (first run) and about\n> 40ms next ones\n> - fast server 110ms first run, about 30ms next ones.\n> Well, finally my new server deservers its name ;-) The later times as I\n> understand are just cache readings from postgresql itself?\nSNIP\n> So the server itself seems faster.\n> So still I don't get this: select * from table; on old server takes 0,5 sec,\n> on new one takes 6sec. Why there is so big difference? And it does not\n> matter how good or bad select is to measure performance, because I don't\n> measure  the performance, I measure the relative difference. Somwhere there\n> is a bottleneck.\n\nYep, the network I'd say. How fast are things like scp between the\nvarious machines?\n\n> 4. Machine. The new server has 5 SAS disks (+ 1 spare), but I don't remember\n> how they are set up now (looks like mirror for system '/' and RAID5 for rest\n> - including DB). size of the DB is 405MB\n\nGet off of RAID-5 if possible. A 3 Disk RAID-5 is the slowest\npossible combination for RAID-5 and RAID-5 is generally the poorest\nchoice for a db server.\n", "msg_date": "Mon, 17 May 2010 02:10:58 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "On Mon, May 17, 2010 at 2:10 AM, Scott Marlowe <[email protected]> wrote:\n> On Mon, May 17, 2010 at 2:06 AM, Piotr Legiecki <[email protected]> wrote:\n>> 2. select count(*) from some_table; runs in a fraction of a second on the\n>> console on both servers (there are only 4000 records, the second longer\n>> table has 50000 but it does not matter very much). From pg_admin the results\n>> are:\n>> - slow server (and the longest table in my db) 938ms (first run) and about\n>> 40ms next ones\n>> - fast server 110ms first run, about 30ms next ones.\n>> Well, finally my new server deservers its name ;-) The later times as I\n>> understand are just cache readings from postgresql itself?\n> SNIP\n>> So the server itself seems faster.\n>> So still I don't get this: select * from table; on old server takes 0,5 sec,\n>> on new one takes 6sec. Why there is so big difference? And it does not\n>> matter how good or bad select is to measure performance, because I don't\n>> measure  the performance, I measure the relative difference. Somwhere there\n>> is a bottleneck.\n>\n> Yep, the network I'd say.  How fast are things like scp between the\n> various machines?\n>\n>> 4. Machine. 
The new server has 5 SAS disks (+ 1 spare), but I don't remember\n>> how they are set up now (looks like mirror for system '/' and RAID5 for rest\n>> - including DB). size of the DB is 405MB\n>\n> Get off of RAID-5 if possible.  A 3 Disk RAID-5 is the slowest\n> possible combination for RAID-5 and RAID-5 is generally the poorest\n> choice for a db server.\n\nI refer you to this classic post on the subject:\nhttp://www.mail-archive.com/[email protected]/msg93043.html\n", "msg_date": "Mon, 17 May 2010 02:52:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Whoops, wrong thread.\n\nOn Mon, May 17, 2010 at 2:52 AM, Scott Marlowe <[email protected]> wrote:\n> On Mon, May 17, 2010 at 2:10 AM, Scott Marlowe <[email protected]> wrote:\n>> On Mon, May 17, 2010 at 2:06 AM, Piotr Legiecki <[email protected]> wrote:\n>>> 2. select count(*) from some_table; runs in a fraction of a second on the\n>>> console on both servers (there are only 4000 records, the second longer\n>>> table has 50000 but it does not matter very much). From pg_admin the results\n>>> are:\n>>> - slow server (and the longest table in my db) 938ms (first run) and about\n>>> 40ms next ones\n>>> - fast server 110ms first run, about 30ms next ones.\n>>> Well, finally my new server deservers its name ;-) The later times as I\n>>> understand are just cache readings from postgresql itself?\n>> SNIP\n>>> So the server itself seems faster.\n>>> So still I don't get this: select * from table; on old server takes 0,5 sec,\n>>> on new one takes 6sec. Why there is so big difference? And it does not\n>>> matter how good or bad select is to measure performance, because I don't\n>>> measure  the performance, I measure the relative difference. Somwhere there\n>>> is a bottleneck.\n>>\n>> Yep, the network I'd say.  How fast are things like scp between the\n>> various machines?\n>>\n>>> 4. Machine. The new server has 5 SAS disks (+ 1 spare), but I don't remember\n>>> how they are set up now (looks like mirror for system '/' and RAID5 for rest\n>>> - including DB). size of the DB is 405MB\n>>\n>> Get off of RAID-5 if possible.  A 3 Disk RAID-5 is the slowest\n>> possible combination for RAID-5 and RAID-5 is generally the poorest\n>> choice for a db server.\n>\n> I refer you to this classic post on the subject:\n> http://www.mail-archive.com/[email protected]/msg93043.html\n", "msg_date": "Mon, 17 May 2010 02:53:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Scott Marlowe pisze:\n\n>>> So still I don't get this: select * from table; on old server takes 0,5 sec,\n>>> on new one takes 6sec. Why there is so big difference? And it does not\n>>> matter how good or bad select is to measure performance, because I don't\n>>> measure the performance, I measure the relative difference. Somwhere there\n>>> is a bottleneck.\n>> Yep, the network I'd say. How fast are things like scp between the\n>> various machines?\n\nSure it is, but not in a way one could expect:\n- scp from 1000Gbit laptop to old server 27MB/sec\n- scp from the same laptop to new server 70MB/sec\nBoth servers have 1000Gbit connection. So it is still mysterious why old \nserver makes 9x faster select?\nI don't claim that something is slow on new (or even older) server. Not \nat all. the application works fine (still on older machine). 
I only \nwonder about those differences.\n\n>>> 4. Machine. The new server has 5 SAS disks (+ 1 spare), but I don't remember\n>>> how they are set up now (looks like mirror for system '/' and RAID5 for rest\n>>> - including DB). size of the DB is 405MB\n>> Get off of RAID-5 if possible. A 3 Disk RAID-5 is the slowest\n>> possible combination for RAID-5 and RAID-5 is generally the poorest\n>> choice for a db server.\n\nSure I know that RAID-5 is slower than mirror but anyway how much \nslower? And for sure not as much as single ATA disk.\n\n> I refer you to this classic post on the subject:\n> http://www.mail-archive.com/[email protected]/msg93043.html\n\nWell, this thread is about benchmarking databases (or even worse, \ncomparison between two RDBMS). I'm not benchmarking anything, just \ncompare one factor.\n\nP.\n", "msg_date": "Mon, 17 May 2010 11:52:17 +0200", "msg_from": "Piotr Legiecki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "On Mon, May 17, 2010 at 3:52 AM, Piotr Legiecki <[email protected]> wrote:\n> Scott Marlowe pisze:\n>\n>>>> So still I don't get this: select * from table; on old server takes 0,5\n>>>> sec,\n>>>> on new one takes 6sec. Why there is so big difference? And it does not\n>>>> matter how good or bad select is to measure performance, because I don't\n>>>> measure  the performance, I measure the relative difference. Somwhere\n>>>> there\n>>>> is a bottleneck.\n>>>\n>>> Yep, the network I'd say.  How fast are things like scp between the\n>>> various machines?\n>\n> Sure it is, but not in a way one could expect:\n> - scp from 1000Gbit laptop to old server 27MB/sec\n> - scp from the same laptop to new server 70MB/sec\n> Both servers have 1000Gbit connection. So it is still mysterious why old\n> server makes 9x faster select?\n> I don't claim that something is slow on new (or even older) server. Not at\n> all. the application works fine (still on older machine). I only wonder\n> about those differences.\n\nIs one connecting via SSL? Is this a simple flat switched network, or\nare these machines on different segments connected via routers?\n\n>>>> 4. Machine. The new server has 5 SAS disks (+ 1 spare), but I don't\n>>>> remember\n>>>> how they are set up now (looks like mirror for system '/' and RAID5 for\n>>>> rest\n>>>> - including DB). size of the DB is 405MB\n>>>\n>>> Get off of RAID-5 if possible.  A 3 Disk RAID-5 is the slowest\n>>> possible combination for RAID-5 and RAID-5 is generally the poorest\n>>> choice for a db server.\n>\n> Sure I know that RAID-5 is slower than mirror but anyway how much slower?\n> And for sure not as much as single ATA disk.\n\nActually, given the amount of read read / write write RAID5 does, it\ncan be slower than a single drive, by quite a bit. A mirror set only\nreads twice as fast, it writes the same speed as a single disk.\nRAID-5 is antithetical to good db performance (unless you hardly ever\nwrite).\n\n>\n>> I refer you to this classic post on the subject:\n>> http://www.mail-archive.com/[email protected]/msg93043.html\n>\n> Well, this thread is about benchmarking databases (or even worse, comparison\n> between two RDBMS). 
I'm not benchmarking anything, just compare one factor.\n\nThat was a mis-post...\n", "msg_date": "Mon, 17 May 2010 04:25:24 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Scott Marlowe pisze:\n\n> Is one connecting via SSL? Is this a simple flat switched network, or\n> are these machines on different segments connected via routers?\n\nSSL is disabled.\nIt is switched network, all tested computers are in the same segment.\n\n\nFinally I have switched the production database from old server to new \none and strange things happened. The same query on new server I have \nused before with 30sec results now runs about 9 sec (so the same time as \nthe old production server). Hm...\n\nIs it possible that database under some load performs better because it \nmakes some something that database not used is not doing?\n\n\nP.\n", "msg_date": "Fri, 21 May 2010 14:15:21 +0200", "msg_from": "Piotr Legiecki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" } ]
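The repliers above point at result-set transfer and client-side rendering, rather than query execution, as the likely source of the SELECT * gap. A minimal way to separate the two from the same client machine (a generic sketch, not commands taken from the thread; the host, database and table names are placeholders):

  -- inside psql with \timing on: server-side execution only, no rows shipped to the client
  EXPLAIN ANALYZE SELECT * FROM some_table;

  -- server-side scan with a one-row result
  SELECT count(*) FROM some_table;

  # from the client shell: the whole result set crosses the network and is discarded
  time psql -h old_server -d some_db -c "SELECT * FROM some_table" > /dev/null
  time psql -h new_server -d some_db -c "SELECT * FROM some_table" > /dev/null

If the first two timings are close on both servers but the shell-level timings differ, the bottleneck is the network path or the client, not PostgreSQL itself. (The shell timings also include connection setup, so compare them only between hosts, not against the psql-internal numbers.)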
[ { "msg_contents": "Hi\n\nI have a situation at my work which I simply don't understand and hope\nthat here I can find some explanations.\n\nWhat is on the scene:\nA - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian\netch, postgresql 8.1.19\nB - new server HP DL 360, 12GB RAM, Intel Xeon 8 cores CPU, fast SAS\n(mirrored) HDDs, Debian 64 bit, lenny, backported postgresql 8.1.19\nC - our Windows application based on Postgresql 8.1 (not newer)\n\nand second role actors (for pgAdmin)\nD - my old Windows XP computer, Athlon64 X2 3800+, with 100Mbit ethernet\nE - new laptop with Ubuntu, 1000Mbit ethernet\n\nThe goal: migrate postgresql from A to B.\n\nSimple and works fine (using pg_dump, psql -d dbname <bakcup_file).\n\nSo what is the problem? My simple 'benchmarks' I have done with pgAdmin\nin spare time.\n\npgAdmin is the latest 1.8.2 on both D and E.\nUsing pgAdmin on my (D) computer I have run SELECT * from some_table;\nand noted the execution time on both A and B servers:\n- on A (the old one) about 120sec\n- on B (the new monster) about 120sec (???)\n\n(yes, there is almost no difference)\n\nOn the first test runs the postgresql configs on both servers were the\nsame, so I have started to optimize (according to postgresql wiki) the\npostgresql on the new (B) server. The difference with my simple select\n* were close to 0.\n\nSo this is my first question. Why postgresql behaves so strangely?\nWhy there is no difference in database speed between those two machines?\n\nI thought about hardware problem on B, but:\nhdparm shows 140MB/sec on B and 60MB on A (and buffered reads are 8GB on\nB and 800MB on A)\nbonnie++ on B:\n> Version 1.03d ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> malwa 24G 51269 71 49649 10 34974 6 48969 82 147840 13 1150 1\non A:\n> Version 1.03 ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> irys 4G 42961 93 41125 13 14414 3 20262 48 38487 5 167.0 0\n\nHere the difference in writings is not so big (wonder why, the price\nbetween those machines is huge) but in readings are noticeably better on B.\n\nOk, those were the tests done using my old Windows PC (D) computer. So I\nhave decided to do the same using my new laptop with Ubuntu (E).\nThe results were soooo strange that now I am completely confused.\n\nThe same SELECT:\n- on A first (fresh) run 30sec, second (and so on) about 11sec (??!)\n- on B first run 80sec, second (and so on) about 80sec also\n\nWhat is going on here? About 8x faster on slower machine?\n\nOne more thing comes to my mind. The A server has iso-8859-2 locale and\ndatabase is set to latin2, the B server has utf8 locale, but database is\nstill latin2. Does it matter anyway?\n\nSo here I'm stuck and hope for help. Is there any bottleneck? 
How to\nfind it?\n\nRegards\nPiotr\n\n", "msg_date": "Fri, 14 May 2010 15:14:00 +0200", "msg_from": "Piotr Legiecki <[email protected]>", "msg_from_op": true, "msg_subject": "old server, new server, same performance" }, { "msg_contents": "2010/5/14 Piotr Legiecki <[email protected]>\n\n> Hi\n>\n> I have a situation at my work which I simply don't understand and hope\n> that here I can find some explanations.\n>\n> What is on the scene:\n> A - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian\n> etch, postgresql 8.1.19\n> B - new server HP DL 360, 12GB RAM, Intel Xeon 8 cores CPU, fast SAS\n> (mirrored) HDDs, Debian 64 bit, lenny, backported postgresql 8.1.19\n> C - our Windows application based on Postgresql 8.1 (not newer)\n>\n> and second role actors (for pgAdmin)\n> D - my old Windows XP computer, Athlon64 X2 3800+, with 100Mbit ethernet\n> E - new laptop with Ubuntu, 1000Mbit ethernet\n>\n> The goal: migrate postgresql from A to B.\n>\n> Simple and works fine (using pg_dump, psql -d dbname <bakcup_file).\n>\n> So what is the problem? My simple 'benchmarks' I have done with pgAdmin\n> in spare time.\n>\n> pgAdmin is the latest 1.8.2 on both D and E.\n> Using pgAdmin on my (D) computer I have run SELECT * from some_table;\n> and noted the execution time on both A and B servers:\n> - on A (the old one) about 120sec\n> - on B (the new monster) about 120sec (???)\n>\n> (yes, there is almost no difference)\n>\n> On the first test runs the postgresql configs on both servers were the\n> same, so I have started to optimize (according to postgresql wiki) the\n> postgresql on the new (B) server. The difference with my simple select\n> * were close to 0.\n>\n> So this is my first question. Why postgresql behaves so strangely?\n> Why there is no difference in database speed between those two machines?\n>\n> I thought about hardware problem on B, but:\n> hdparm shows 140MB/sec on B and 60MB on A (and buffered reads are 8GB on\n> B and 800MB on A)\n> bonnie++ on B:\n>\n>> Version 1.03d ------Sequential Output------ --Sequential Input-\n>> --Random-\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> /sec %CP\n>> malwa 24G 51269 71 49649 10 34974 6 48969 82 147840 13\n>> 1150 1\n>>\n> on A:\n>\n>> Version 1.03 ------Sequential Output------ --Sequential Input-\n>> --Random-\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> /sec %CP\n>> irys 4G 42961 93 41125 13 14414 3 20262 48 38487 5\n>> 167.0 0\n>>\n>\n> Here the difference in writings is not so big (wonder why, the price\n> between those machines is huge) but in readings are noticeably better on B.\n>\n> Ok, those were the tests done using my old Windows PC (D) computer. So I\n> have decided to do the same using my new laptop with Ubuntu (E).\n> The results were soooo strange that now I am completely confused.\n>\n> The same SELECT:\n> - on A first (fresh) run 30sec, second (and so on) about 11sec (??!)\n> - on B first run 80sec, second (and so on) about 80sec also\n>\n> What is going on here? About 8x faster on slower machine?\n>\n> One more thing comes to my mind. The A server has iso-8859-2 locale and\n> database is set to latin2, the B server has utf8 locale, but database is\n> still latin2. Does it matter anyway?\n>\n> So here I'm stuck and hope for help. Is there any bottleneck? 
How to\n> find it?\n>\n> Regards\n> Piotr\n>\n\nHave you compared the PostgreSQL configurations between servers?\n(postgresql.conf) And how was it installed? Package or compiled from\nscratch?\n\nAnd has the new DB been VACUUM'd?\n\nThom\n\n2010/5/14 Piotr Legiecki <[email protected]>\n\nHi\n\nI have a situation at my work which I simply don't understand and hope\nthat here I can find some explanations.\n\nWhat is on the scene:\nA - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian\netch, postgresql 8.1.19\nB - new server HP DL 360, 12GB RAM, Intel Xeon 8 cores CPU, fast SAS\n(mirrored) HDDs, Debian 64 bit, lenny, backported postgresql 8.1.19\nC - our Windows application based on Postgresql 8.1 (not newer)\n\nand second role actors (for pgAdmin)\nD - my old Windows XP computer, Athlon64 X2 3800+, with 100Mbit ethernet\nE - new laptop with Ubuntu, 1000Mbit ethernet\n\nThe goal: migrate postgresql from A to B.\n\nSimple and works fine (using pg_dump, psql -d dbname <bakcup_file).\n\nSo what is the problem? My simple 'benchmarks' I have done with pgAdmin\nin spare time.\n\npgAdmin is the latest 1.8.2 on both D and E.\nUsing pgAdmin on my (D) computer I have run SELECT * from some_table;\nand noted the execution time on both A and B servers:\n- on A (the old one) about 120sec\n- on B (the new monster) about 120sec (???)\n\n(yes, there is almost no difference)\n\nOn the first test runs the postgresql configs on both servers were the\nsame, so I have started to optimize (according to postgresql wiki) the\npostgresql on the new (B) server. The difference  with my simple select\n* were close to 0.\n\nSo this is my first question. Why postgresql behaves so strangely?\nWhy there is no difference in database speed between those two machines?\n\nI thought about hardware problem on B, but:\nhdparm shows 140MB/sec on B and 60MB on A (and buffered reads are 8GB on\nB and 800MB on A)\nbonnie++ on B:\n\nVersion 1.03d       ------Sequential Output------ --Sequential Input- --Random-\n                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP\nmalwa           24G 51269  71 49649  10 34974   6 48969  82 147840  13  1150   1\n\non A:\n\nVersion  1.03       ------Sequential Output------ --Sequential Input- --Random-\n                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP\nirys             4G 42961  93 41125  13 14414   3 20262  48 38487   5 167.0   0\n\n\nHere the difference in writings is not so big (wonder why, the price\nbetween those machines is huge) but in readings are noticeably better on B.\n\nOk, those were the tests done using my old Windows PC (D) computer. So I\nhave decided to do the same using my new laptop with Ubuntu (E).\nThe results were soooo strange that now I am completely confused.\n\nThe same SELECT:\n- on A first (fresh) run 30sec, second (and so on) about 11sec (??!)\n- on B first run 80sec, second (and so on) about 80sec also\n\nWhat is going on here? About 8x faster on slower machine?\n\nOne more thing comes to my mind. The A server has iso-8859-2 locale and\ndatabase is set to latin2, the B server has utf8 locale, but database is\nstill latin2. Does it matter anyway?\n\nSo here I'm stuck and hope for help. Is there any bottleneck? How to\nfind it?\n\nRegards\nPiotrHave you compared the PostgreSQL configurations between servers? 
(postgresql.conf)  And how was it installed?  Package or compiled from scratch?And has the new DB been VACUUM'd?\nThom", "msg_date": "Fri, 14 May 2010 15:03:26 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Piotr Legiecki <[email protected]> wrote:\n \n> Why there is no difference in database speed between those two\n> machines?\n \nCould you post the contents of the postgresql.conf files for both\n(stripped of comments) and explain what you're using for your\nbenchmarks? In particular, it would be interesting to know how many\nconcurrent connections are active running what mix of queries.\n \n-Kevin\n", "msg_date": "Fri, 14 May 2010 09:04:52 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "2010/5/14 Piotr Legiecki <[email protected]>:\n> Hi\n> The goal: migrate postgresql from A to B.\n>\n> Simple and works fine (using pg_dump, psql -d dbname <bakcup_file).\n>\n> So what is the problem? My simple 'benchmarks' I have done with pgAdmin\n> in spare time.\n>\n> pgAdmin is the latest 1.8.2 on both D and E.\n> Using pgAdmin on my (D) computer I have run SELECT * from some_table;\n> and noted the execution time on both A and B servers:\n> - on A (the old one) about 120sec\n> - on B (the new monster) about 120sec (???)\n\nIt could well be you're measuring the time it takes to trasnfer that\ndata from server to client.\n\nHow fast is select count(*) from table on each machine?\n", "msg_date": "Fri, 14 May 2010 08:07:50 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Kevin Grittner wrote:\n> Piotr Legiecki <[email protected]> wrote:\n> \n> \n>> Why there is no difference in database speed between those two\n>> machines?\n>> \n> \n> Could you post the contents of the postgresql.conf files for both\n> (stripped of comments) and explain what you're using for your\n> benchmarks? 
In particular, it would be interesting to know how many\n> concurrent connections are active running what mix of queries.\n> \nIt would be also interesting to know how many disks are there in the new \nserver, and the size of the database (select \npg_size_pretty(pg_database_size('yourdb'))).\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 14 May 2010 16:13:12 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Agree with Thom,\r\n\r\nI also had the same problem back then when I was migrate from old servers to new server.\r\n\r\nAfter I vacuum the DB at the new servers the result back to normal.\r\n\r\nRgrds\r\n\r\nSent from my BlackBerry®powered by AyahNaima\r\n\r\n-----Original Message-----\r\nFrom: Thom Brown <[email protected]>\r\nDate: Fri, 14 May 2010 15:03:26 \r\nTo: Piotr Legiecki<[email protected]>\r\nCc: <[email protected]>\r\nSubject: Re: [PERFORM] old server, new server, same performance\r\n\r\n2010/5/14 Piotr Legiecki <[email protected]>\r\n\r\n> Hi\r\n>\r\n> I have a situation at my work which I simply don't understand and hope\r\n> that here I can find some explanations.\r\n>\r\n> What is on the scene:\r\n> A - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian\r\n> etch, postgresql 8.1.19\r\n> B - new server HP DL 360, 12GB RAM, Intel Xeon 8 cores CPU, fast SAS\r\n> (mirrored) HDDs, Debian 64 bit, lenny, backported postgresql 8.1.19\r\n> C - our Windows application based on Postgresql 8.1 (not newer)\r\n>\r\n> and second role actors (for pgAdmin)\r\n> D - my old Windows XP computer, Athlon64 X2 3800+, with 100Mbit ethernet\r\n> E - new laptop with Ubuntu, 1000Mbit ethernet\r\n>\r\n> The goal: migrate postgresql from A to B.\r\n>\r\n> Simple and works fine (using pg_dump, psql -d dbname <bakcup_file).\r\n>\r\n> So what is the problem? My simple 'benchmarks' I have done with pgAdmin\r\n> in spare time.\r\n>\r\n> pgAdmin is the latest 1.8.2 on both D and E.\r\n> Using pgAdmin on my (D) computer I have run SELECT * from some_table;\r\n> and noted the execution time on both A and B servers:\r\n> - on A (the old one) about 120sec\r\n> - on B (the new monster) about 120sec (???)\r\n>\r\n> (yes, there is almost no difference)\r\n>\r\n> On the first test runs the postgresql configs on both servers were the\r\n> same, so I have started to optimize (according to postgresql wiki) the\r\n> postgresql on the new (B) server. The difference with my simple select\r\n> * were close to 0.\r\n>\r\n> So this is my first question. 
Why postgresql behaves so strangely?\r\n> Why there is no difference in database speed between those two machines?\r\n>\r\n> I thought about hardware problem on B, but:\r\n> hdparm shows 140MB/sec on B and 60MB on A (and buffered reads are 8GB on\r\n> B and 800MB on A)\r\n> bonnie++ on B:\r\n>\r\n>> Version 1.03d ------Sequential Output------ --Sequential Input-\r\n>> --Random-\r\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\r\n>> --Seeks--\r\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\r\n>> /sec %CP\r\n>> malwa 24G 51269 71 49649 10 34974 6 48969 82 147840 13\r\n>> 1150 1\r\n>>\r\n> on A:\r\n>\r\n>> Version 1.03 ------Sequential Output------ --Sequential Input-\r\n>> --Random-\r\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\r\n>> --Seeks--\r\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\r\n>> /sec %CP\r\n>> irys 4G 42961 93 41125 13 14414 3 20262 48 38487 5\r\n>> 167.0 0\r\n>>\r\n>\r\n> Here the difference in writings is not so big (wonder why, the price\r\n> between those machines is huge) but in readings are noticeably better on B.\r\n>\r\n> Ok, those were the tests done using my old Windows PC (D) computer. So I\r\n> have decided to do the same using my new laptop with Ubuntu (E).\r\n> The results were soooo strange that now I am completely confused.\r\n>\r\n> The same SELECT:\r\n> - on A first (fresh) run 30sec, second (and so on) about 11sec (??!)\r\n> - on B first run 80sec, second (and so on) about 80sec also\r\n>\r\n> What is going on here? About 8x faster on slower machine?\r\n>\r\n> One more thing comes to my mind. The A server has iso-8859-2 locale and\r\n> database is set to latin2, the B server has utf8 locale, but database is\r\n> still latin2. Does it matter anyway?\r\n>\r\n> So here I'm stuck and hope for help. Is there any bottleneck? How to\r\n> find it?\r\n>\r\n> Regards\r\n> Piotr\r\n>\r\n\r\nHave you compared the PostgreSQL configurations between servers?\r\n(postgresql.conf) And how was it installed? Package or compiled from\r\nscratch?\r\n\r\nAnd has the new DB been VACUUM'd?\r\n\r\nThom\r\n\r\n\n Agree with Thom,I also had the same problem back then when I was migrate from old servers to new server.After I vacuum the DB at the new servers the result back to normal.RgrdsSent from my BlackBerry®powered by AyahNaimaFrom: Thom Brown <[email protected]>\r\nDate: Fri, 14 May 2010 15:03:26 +0100To: Piotr Legiecki<[email protected]>Cc: <[email protected]>Subject: Re: [PERFORM] old server, new server, same performance2010/5/14 Piotr Legiecki <[email protected]>\r\n\r\nHi\n\r\nI have a situation at my work which I simply don't understand and hope\r\nthat here I can find some explanations.\n\r\nWhat is on the scene:\r\nA - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian\r\netch, postgresql 8.1.19\r\nB - new server HP DL 360, 12GB RAM, Intel Xeon 8 cores CPU, fast SAS\r\n(mirrored) HDDs, Debian 64 bit, lenny, backported postgresql 8.1.19\r\nC - our Windows application based on Postgresql 8.1 (not newer)\n\r\nand second role actors (for pgAdmin)\r\nD - my old Windows XP computer, Athlon64 X2 3800+, with 100Mbit ethernet\r\nE - new laptop with Ubuntu, 1000Mbit ethernet\n\r\nThe goal: migrate postgresql from A to B.\n\r\nSimple and works fine (using pg_dump, psql -d dbname <bakcup_file).\n\r\nSo what is the problem? 
My simple 'benchmarks' I have done with pgAdmin\r\nin spare time.\n\r\npgAdmin is the latest 1.8.2 on both D and E.\r\nUsing pgAdmin on my (D) computer I have run SELECT * from some_table;\r\nand noted the execution time on both A and B servers:\r\n- on A (the old one) about 120sec\r\n- on B (the new monster) about 120sec (???)\n\r\n(yes, there is almost no difference)\n\r\nOn the first test runs the postgresql configs on both servers were the\r\nsame, so I have started to optimize (according to postgresql wiki) the\r\npostgresql on the new (B) server. The difference  with my simple select\r\n* were close to 0.\n\r\nSo this is my first question. Why postgresql behaves so strangely?\r\nWhy there is no difference in database speed between those two machines?\n\r\nI thought about hardware problem on B, but:\r\nhdparm shows 140MB/sec on B and 60MB on A (and buffered reads are 8GB on\r\nB and 800MB on A)\r\nbonnie++ on B:\n\r\nVersion 1.03d       ------Sequential Output------ --Sequential Input- --Random-\r\n                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\r\nMachine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP\r\nmalwa           24G 51269  71 49649  10 34974   6 48969  82 147840  13  1150   1\n\r\non A:\n\r\nVersion  1.03       ------Sequential Output------ --Sequential Input- --Random-\r\n                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\r\nMachine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP\r\nirys             4G 42961  93 41125  13 14414   3 20262  48 38487   5 167.0   0\n\n\r\nHere the difference in writings is not so big (wonder why, the price\r\nbetween those machines is huge) but in readings are noticeably better on B.\n\r\nOk, those were the tests done using my old Windows PC (D) computer. So I\r\nhave decided to do the same using my new laptop with Ubuntu (E).\r\nThe results were soooo strange that now I am completely confused.\n\r\nThe same SELECT:\r\n- on A first (fresh) run 30sec, second (and so on) about 11sec (??!)\r\n- on B first run 80sec, second (and so on) about 80sec also\n\r\nWhat is going on here? About 8x faster on slower machine?\n\r\nOne more thing comes to my mind. The A server has iso-8859-2 locale and\r\ndatabase is set to latin2, the B server has utf8 locale, but database is\r\nstill latin2. Does it matter anyway?\n\r\nSo here I'm stuck and hope for help. Is there any bottleneck? How to\r\nfind it?\n\r\nRegards\r\nPiotrHave you compared the PostgreSQL configurations between servers? (postgresql.conf)  And how was it installed?  Package or compiled from scratch?And has the new DB been VACUUM'd?\nThom", "msg_date": "Sat, 15 May 2010 02:26:43 +0000", "msg_from": "\"Sarwani Dwinanto\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: old server, new server, same performance" }, { "msg_contents": "Hello all,\nI was testing how much time a pg_dump backup would take to get restored. \nInitially, I tried it with psql (on a backup taken with pg_dumpall). It \ntook me about one hour. I felt that I should target for a recovery time of \n15 minutes to half an hour. So I went through the blogs/documentation etc \nand switched to pg_dump and pg_restore. I tested only the database with \nthe maximum volume of data (about 1.5 GB). With \npg_restore -U postgres -v -d PROFICIENT --clean -Fc proficient.dmp\nit took about 45 minutes. I tried it with \npg_restore -U postgres -j8 -v -d PROFICIENT --clean -Fc proficient.dmp\nNot much improvement there either. 
Have I missed something or 1.5 GB data \non a machine with the following configuration will take about 45 minutes? \nThere is nothing else running on the machine consuming memory or CPU. Out \nof 300 odd tables, about 10 tables have millions of records, rest are all \nhaving a few thousand records at most.\n\nHere are the specs ( a pc class machine)-\n\nPostgreSQL 8.4.3 on i686-pc-linux-gnu\nCentOS release 5.2 \nIntel(R) Pentium(R) D CPU 2.80GHz \n2 GB RAM\nStorage is local disk.\n\nPostgresql parameters (what I felt are relevant) - \nmax_connections = 100\nshared_buffers = 64MB\nwork_mem = 16MB\nmaintenance_work_mem = 16MB\nsynchronous_commit on\n\n\nThank you for any suggestions.\nJayadevan \n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello all,\nI was testing how much time a pg_dump\nbackup would take to get restored. Initially, I tried it with psql (on\na backup taken with pg_dumpall). It took me about one hour. I felt that\nI should target for a recovery time of 15 minutes to half an hour. So I\nwent through the blogs/documentation etc and switched to pg_dump and pg_restore.\nI tested only the database with the maximum volume of data (about 1.5 GB).\nWith \npg_restore -U postgres -v -d PROFICIENT\n--clean -Fc proficient.dmp\nit took about 45 minutes. I tried it\nwith \npg_restore -U postgres -j8 -v\n-d PROFICIENT --clean -Fc proficient.dmp\nNot much improvement there either. Have\nI missed something or 1.5 GB data on a machine with the following configuration\nwill take about 45 minutes? There is nothing else running on the machine\nconsuming memory or CPU. Out of 300 odd tables, about 10 tables have millions\nof records, rest are all having a few thousand records at most.\n\nHere are the specs  ( a pc class\n machine)-\n\nPostgreSQL 8.4.3 on i686-pc-linux-gnu\nCentOS release 5.2 \nIntel(R) Pentium(R) D CPU 2.80GHz \n2 GB RAM\nStorage is local disk.\n\nPostgresql parameters (what I felt are\nrelevant) - \nmax_connections = 100\nshared_buffers = 64MB\nwork_mem = 16MB\nmaintenance_work_mem = 16MB\nsynchronous_commit on\n\n\nThank you for any suggestions.\nJayadevan \n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. 
If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"", "msg_date": "Mon, 17 May 2010 10:34:29 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "pg_dump and pg_restore" }, { "msg_contents": "On Mon, May 17, 2010 at 1:04 AM, Jayadevan M\n<[email protected]> wrote:\n> Hello all,\n> I was testing how much time a pg_dump backup would take to get restored.\n> Initially, I tried it with psql (on a backup taken with pg_dumpall). It took\n> me about one hour. I felt that I should target for a recovery time of 15\n> minutes to half an hour. So I went through the blogs/documentation etc and\n> switched to pg_dump and pg_restore. I tested only the database with the\n> maximum volume of data (about 1.5 GB). With\n> pg_restore -U postgres -v -d PROFICIENT --clean -Fc proficient.dmp\n> it took about 45 minutes. I tried it with\n> pg_restore -U postgres -j8 -v -d PROFICIENT --clean -Fc proficient.dmp\n> Not much improvement there either. Have I missed something or 1.5 GB data on\n> a machine with the following configuration will take about 45 minutes? There\n> is nothing else running on the machine consuming memory or CPU. Out of 300\n> odd tables, about 10 tables have millions of records, rest are all having a\n> few thousand records at most.\n>\n> Here are the specs  ( a pc class  machine)-\n>\n> PostgreSQL 8.4.3 on i686-pc-linux-gnu\n> CentOS release 5.2\n> Intel(R) Pentium(R) D CPU 2.80GHz\n> 2 GB RAM\n> Storage is local disk.\n>\n> Postgresql parameters (what I felt are relevant) -\n> max_connections = 100\n> shared_buffers = 64MB\n> work_mem = 16MB\n> maintenance_work_mem = 16MB\n> synchronous_commit on\n\nI would suggest raising shared_buffers to perhaps 512MB and cranking\nup checkpoint_segments to 10 or more. Also, your email doesn't give\ntoo much information about how many CPUs you have and what kind of\ndisk subsystem you are using (RAID? how many disks?) so it's had to\nsay if -j8 is reasonable. That might be too high.\n\nAnother thing I would recommend is that during the restore you use\ntools like top and iostat to monitor the system. You'll want to check\nthings like whether all the CPUs are in use, and how the disk activity\ncompares to the maximum you can generate using some other method\n(perhaps dd).\n\nOne thing I've noticed (to my chagrin) is that if pg_restore is given\na set of options that are incompatible with parallel restore, it just\ndoes a single-threaded restore. The options you've specified look\nright to me, but, again, examining exactly what is going on during the\nrestore should tell you if there's a problem in this area.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Sat, 22 May 2010 07:29:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and pg_restore" }, { "msg_contents": "On Mon, May 17, 2010 at 12:04 AM, Jayadevan M\n<[email protected]> wrote:\n> Hello all,\n> I was testing how much time a pg_dump backup would take to get restored.\n> Initially, I tried it with psql (on a backup taken with pg_dumpall). It took\n> me about one hour. 
I felt that I should target for a recovery time of 15\n> minutes to half an hour. So I went through the blogs/documentation etc and\n> switched to pg_dump and pg_restore. I tested only the database with the\n> maximum volume of data (about 1.5 GB). With\n> pg_restore -U postgres -v -d PROFICIENT --clean -Fc proficient.dmp\n> it took about 45 minutes. I tried it with\n> pg_restore -U postgres -j8 -v -d PROFICIENT --clean -Fc proficient.dmp\n> Not much improvement there either. Have I missed something or 1.5 GB data on\n> a machine with the following configuration will take about 45 minutes? There\n> is nothing else running on the machine consuming memory or CPU. Out of 300\n> odd tables, about 10 tables have millions of records, rest are all having a\n> few thousand records at most.\n>\n> Here are the specs  ( a pc class  machine)-\n>\n> PostgreSQL 8.4.3 on i686-pc-linux-gnu\n> CentOS release 5.2\n> Intel(R) Pentium(R) D CPU 2.80GHz\n> 2 GB RAM\n> Storage is local disk.\n>\n> Postgresql parameters (what I felt are relevant) -\n> max_connections = 100\n> shared_buffers = 64MB\n> work_mem = 16MB\n> maintenance_work_mem = 16MB\n> synchronous_commit on\n\nDo the big tables have lots of indexes? If so, you should raise\nmaintenance_work_mem.\n\nPeter\n", "msg_date": "Sat, 22 May 2010 14:55:22 -0500", "msg_from": "Peter Koczan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and pg_restore" }, { "msg_contents": "I increased shared_buffers and maintenance_work_memto\n128MB and 64MB and the restore was over in about 20 minutes. Anyway, I am \nlearning about PostgreSQL and it is not a critical situation. Thanks for \nall the replies.\nRegards,\nJayadevan\n\n\n\n\nFrom: Robert Haas <[email protected]>\nTo: Jayadevan M <[email protected]>\nCc: [email protected]\nDate: 22/05/2010 16:59\nSubject: Re: [PERFORM] pg_dump and pg_restore\n\n\n\nOn Mon, May 17, 2010 at 1:04 AM, Jayadevan M\n<[email protected]> wrote:\n> Hello all,\n> I was testing how much time a pg_dump backup would take to get restored.\n> Initially, I tried it with psql (on a backup taken with pg_dumpall). It \ntook\n> me about one hour. I felt that I should target for a recovery time of 15\n> minutes to half an hour. So I went through the blogs/documentation etc \nand\n> switched to pg_dump and pg_restore. I tested only the database with the\n> maximum volume of data (about 1.5 GB). With\n> pg_restore -U postgres -v -d PROFICIENT --clean -Fc proficient.dmp\n> it took about 45 minutes. I tried it with\n> pg_restore -U postgres -j8 -v -d PROFICIENT --clean -Fc proficient.dmp\n> Not much improvement there either. Have I missed something or 1.5 GB \ndata on\n> a machine with the following configuration will take about 45 minutes? \nThere\n> is nothing else running on the machine consuming memory or CPU. Out of \n300\n> odd tables, about 10 tables have millions of records, rest are all \nhaving a\n> few thousand records at most.\n>\n> Here are the specs ( a pc class machine)-\n>\n> PostgreSQL 8.4.3 on i686-pc-linux-gnu\n> CentOS release 5.2\n> Intel(R) Pentium(R) D CPU 2.80GHz\n> 2 GB RAM\n> Storage is local disk.\n>\n> Postgresql parameters (what I felt are relevant) -\n> max_connections = 100\n> shared_buffers = 64MB\n> work_mem = 16MB\n> maintenance_work_mem = 16MB\n> synchronous_commit on\n\nI would suggest raising shared_buffers to perhaps 512MB and cranking\nup checkpoint_segments to 10 or more. 
Also, your email doesn't give\ntoo much information about how many CPUs you have and what kind of\ndisk subsystem you are using (RAID? how many disks?) so it's had to\nsay if -j8 is reasonable. That might be too high.\n\nAnother thing I would recommend is that during the restore you use\ntools like top and iostat to monitor the system. You'll want to check\nthings like whether all the CPUs are in use, and how the disk activity\ncompares to the maximum you can generate using some other method\n(perhaps dd).\n\nOne thing I've noticed (to my chagrin) is that if pg_restore is given\na set of options that are incompatible with parallel restore, it just\ndoes a single-threaded restore. The options you've specified look\nright to me, but, again, examining exactly what is going on during the\nrestore should tell you if there's a problem in this area.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n\n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Mon, 24 May 2010 09:15:22 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump and pg_restore" } ]
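Pulling together the advice from the restore subthread above (Robert Haas: larger shared_buffers and more checkpoint_segments, and watch for option combinations that make pg_restore quietly fall back to a single-threaded restore; Peter Koczan: larger maintenance_work_mem for the index builds), a restore-time setup might look like the sketch below. The values are illustrative rather than taken from the thread, and -j should roughly match the number of CPU cores on the restoring machine:

  # postgresql.conf, temporarily, for the duration of the restore
  shared_buffers = 512MB
  maintenance_work_mem = 256MB
  checkpoint_segments = 16

  # restart PostgreSQL, then run the parallel restore of the custom-format dump
  pg_restore -U postgres -j 2 -v -d PROFICIENT --clean -Fc proficient.dmp

Once the restore finishes, the settings can be put back to values sized for the normal workload.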
[ { "msg_contents": "\nThis is not a rigorous test and should not be used as a direct comparison\nbetween\noperating systems. My objective was to estimate the ZFS toll in Postgres,\nand\nwhat to expect in performance, comparing to the old server this machine will\nbe replacing. Not all possible configurations were tested, and the CentOS \nbenchmark was made out of curiosity.\n\nAll tests were made using the default instalation, without any additional\ntuning,\nexcept those mentioned in custom configuration. The installation media used\nwas the\n8.0-STABLE-201002 DVD (amd64). The server was rebooted between configuration\nchanges, but NOT between successive runs.\nThere are some inconsistencies between results,probably due to caching. \nDisabling atime surprisingly results in slower runs, in both ZFS and UFS.\nThe CentOS test was made with the same default configuration file used in\nFreeBSD.\n\nAny additional hints are welcome, but I probably won't be able to re-run\nthe tests :)\n\nRegards,\n\tJoão Pinheiro\n\t\n\nMachine Specs:\n - Hp Proliant ML330\n - 1x Intel(R) Xeon CPU E5504 2.0Ghz\n - 12GB RAM\n - 4 x 250GB SATA HD 7.2 (RAID 10)\n - P410 Raid controller w/512MB cache/battery option\n\n\nOS and Postgres Custom Configuration (used on tests marked as custom):\n\n postgresql.conf:\n ----------------------\n max_connections = 500\n work_mem = 16MB\n shared_buffers = 3072MB\n\n /etc/sysctl.conf\n ---------------------\n kern.ipc.shmmax=3758096384\n kern.ipc.shmall=917504\n kern.ipc.shm_use_phys=1\n\n /boot/loader.conf\n -----------------------\n kern.ipc.semmni=512\n kern.ipc.semmns=1024\n kern.ipc.semmnu=512\n \nFreeBSD Setup:\n $ uname -a\n FreeBSD beastie 8.0-STABLE-201002 FreeBSD 8.0-STABLE-201002 #0: Tue Feb 16\n21:05:59 UTC 2010\n [email protected]:/usr/obj/usr/src/sys/GENERIC amd64\n \nCentOS setup:\n [root@dhcppc0 test]# uname -a\n Linux dhcppc0 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64\nx86_64 x86_64 GNU/Linux\n iptables: off\n selinux: off\n \n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE, UFS, postgresql84-server from package, default config\n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2363.017640 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2310.928229 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2042.681002 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1891.323153 (including connections\nestablishing)\n\nbonnie++:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nbeastie 24G 700 99 170476 22 59826 11 967 78 156937 15\n475.7 5\nLatency 12177us 500ms 830ms 291ms 145ms \n269ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nbeastie -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 35329 48 +++++ +++ +++++ +++ 19828 86 +++++ +++ +++++\n+++\nLatency 26925us 15us 83us 547ms 80us \n60us\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE, UFS,noatime, postgresql84-server from package, default\nconf\n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2287.987630 (including 
connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2255.514875 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2098.280816 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1871.193058 (including connections\nestablishing)\n\nbonnie++:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nbeastie 24G 679 99 170965 22 58682 11 942 76 157156 15\n498.7 6\nLatency 12881us 569ms 1171ms 681ms 155ms \n242ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nbeastie -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 36118 49 +++++ +++ 27231 99 +++++ +++ +++++ +++ +++++\n+++\nLatency 32705us 81us 537ms 14280us 25us \n82us\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE, UFS, postgresql84-server from package, custom config \n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2320.509732 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2289.853556 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2089.112777 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1921.034254 (including connections\nestablishing)\npgbench -c 50 -t 1000 testdb : 1574.231270 (including connections\nestablishing)\npgbench -c 100 -t 1000 testdb : 1096.761450 (including connections\nestablishing)\npgbench -c 200 -t 1000 testdb : 256.443268 (including connections\nestablishing)\npgbench -c 300 -t 1000 testdb : 69.174219 (including connections\nestablishing)\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE, UFS, postgresql84-server from ports, default config \n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 1676.760766 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2295.344502 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2066.721058 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1887.196064 (including connections\nestablishing)\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE, UFS, postgresql84-server from ports, custom config \n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2642.962465 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2253.349999 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2037.050831 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1918.147928 (including connections\nestablishing)\npgbench -c 50 -t 1000 testdb : 1773.778845 (including connections\nestablishing)\npgbench -c 100 -t 1000 testdb : 1153.303103 (including connections\nestablishing)\npgbench -c 200 -t 1000 testdb : 517.648628 (including connections\nestablishing)\npgbench -c 300 -t 1000 testdb : 97.442573 (including connections\nestablishing)\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE, ZFS, default block size\n-------------------------------------------------------------------------------\n\nbonnie++:\n\nVersion 1.96 
------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nbeastie 24G 120 99 154421 31 81917 16 327 98 196125 16\n175.7 4\nLatency 222ms 13439ms 12749ms 163ms 1034ms \n655ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nbeastie -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 25979 99 32036 55 16342 99 24022 99 +++++ +++ 21384 \n99\nLatency 13639us 107ms 977us 31274us 144us \n257us\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE,ZFS (8k block),postgresql84-server from package,default\nconf. \n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2380.858280 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2135.241727 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2002.773173 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1692.844103 (including connections\nestablishing)\n\nbonnie++:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nbeastie 24G 107 99 119235 22 50495 17 321 98 143257 25\n201.5 4\nLatency 82593us 13575ms 11274ms 74329us 1398ms \n666ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nbeastie -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 24960 99 +++++ +++ 21449 99 22501 99 +++++ +++ 22156 \n99\nLatency 17467us 137us 167us 37528us 110us \n147us\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE,ZFS-atime=off,postgresql84-server from package,default\nconf. \n-------------------------------------------------------------------------------\n\nbonnie++:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nbeastie 24G 114 99 151906 32 81971 17 295 90 192878 16\n173.5 5\nLatency 286ms 15766ms 13315ms 1062ms 433ms \n727ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nbeastie -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 25295 98 15774 78 12823 99 20979 99 +++++ +++ 22466 \n99\nLatency 30857us 98288us 1321us 35519us 113us \n144us\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE,ZFS-8k/atime=off,postgresql84-server from package,default\nconf. 
\n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2235.009456 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 1915.160680 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 1824.546833 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1663.443537 (including connections\nestablishing)\n\nbonnie++:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nbeastie 24G 109 99 123147 24 51160 17 327 97 117123 21\n191.4 4\nLatency 153ms 17900ms 9873ms 164ms 669ms \n600ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nbeastie -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 25324 99 +++++ +++ 18689 99 20837 99 18828 100 5055\n100\nLatency 13866us 48457us 241us 30971us 199us \n596us\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE,ZFS-8k,postgresql84-server from package,custom config\n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2438.179353 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 1949.016648 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 1570.176692 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1683.720510 (including connections\nestablishing)\npgbench -c 50 -t 1000 testdb : 1481.249222 (including connections\nestablishing)\npgbench -c 100 -t 1000 testdb : 1034.946222 (including connections\nestablishing)\npgbench -c 200 -t 1000 testdb : 288.125818 (including connections\nestablishing)\npgbench -c 300 -t 1000 testdb : 57.924377 (including connections\nestablishing)\n\n-------------------------------------------------------------------------------\nFreeBSD 8.0-STABLE,ZFS-8k,postgresql84-server from ports,custom config\n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2252.105155 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2065.147771 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 1762.356143 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 1832.577548 (including connections\nestablishing)\npgbench -c 50 -t 1000 testdb : 1571.061549 (including connections\nestablishing)\npgbench -c 100 -t 1000 testdb : 1091.865609 (including connections\nestablishing)\npgbench -c 200 -t 1000 testdb : 476.009429 (including connections\nestablishing)\npgbench -c 300 -t 1000 testdb : 101.918069 (including connections\nestablishing)\n\n-------------------------------------------------------------------------------\nCentOS 5.4, EXT3,postgresql84-server from package, default config\n-------------------------------------------------------------------------------\n\npgbench -c 10 -t 1000 testdb : 2407.934433 (including connections\nestablishing)\npgbench -c 20 -t 1000 testdb : 2369.103760 (including connections\nestablishing)\npgbench -c 30 -t 1000 testdb : 2165.642174 (including connections\nestablishing)\npgbench -c 40 -t 1000 testdb : 2060.112114 (including connections\nestablishing)\n\nbonnie++:\n\nVersion 1.94 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per 
Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\ndhcppc0 24G 548 99 167274 30 73785 11 1088 73 217100 11\n612.3 17\nLatency 15995us 957ms 575ms 576ms 111ms \n73868us\nVersion 1.94 ------Sequential Create------ --------Random\nCreate--------\ndhcppc0 -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 20 39388 94 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++\n+++\nLatency 114us 725us 750us 182us 78us \n370us\n\n-- \nView this message in context: http://old.nabble.com/Benchmark-with-FreeBSD-8.0-and-pgbench-tp28569544p28569544.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sat, 15 May 2010 09:42:44 -0700 (PDT)", "msg_from": "\"joao.pinheiro\" <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmark with FreeBSD 8.0 and pgbench" }, { "msg_contents": "Joao,\n\nWow, thanks for doing this!\n\nIn general, your tests seem to show that there isn't a substantial \npenalty for using ZFS as of version 8.0.\n\nIf you have time for more tests, I'd like to ask you for a few more tweaks:\n\n(1) change the following settings according to conventional wisdom:\n\twal_buffers = 8MB\n\teffective_cache_size = 9GB\n\tcheckpoint_segments = 32\n\ton ZFS only: full_page_writes=off\n\n(2) What scale were you using for the pgbench database? I didn't see it \nin the e-mail. It would be worth testing:\n\ts = 10 (small database, in memory)\n\ts = 500 (7GB, ram mostly full)\n\ts = 1000 (14GB, slightly larger than ram)\n\ts = 3000 (43GB, much larger than ram)\n\nIf you were only testing a small size in your runs, then the only \nFilesystem behavoir you were testing was the transaction log.\n\n(3) Try a ZFS 128K record size\n\n(4) Centos/Ext3 appears to have had better staying power with high \nnumbers of clients. Can you continue testing with 50, 100 and 200 \nclients on that combination? And with data=writeback,noatime on Ext3?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Sat, 15 May 2010 12:48:05 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark with FreeBSD 8.0 and pgbench" }, { "msg_contents": "\nHi,\nThe tests were made without the -s parameter, (so 1 is assumed). I'm running\nthe numbers\nagain on CentOS, with the optimized config and I'll test also different\nscale values. I also will be able\nto repeat the test again in FreeBSD with ZFS with the new options and\ndifferent scale,\nbut probably I won't repeat the UFS tests. I'll post the data when I finnish\nthe tests. \n\nThanks for your feedback,\n João Pinheiro\n\n\nJosh Berkus wrote:\n> \n> Joao,\n> \n> Wow, thanks for doing this!\n> \n> In general, your tests seem to show that there isn't a substantial \n> penalty for using ZFS as of version 8.0.\n> \n> If you have time for more tests, I'd like to ask you for a few more\n> tweaks:\n> \n> (1) change the following settings according to conventional wisdom:\n> \twal_buffers = 8MB\n> \teffective_cache_size = 9GB\n> \tcheckpoint_segments = 32\n> \ton ZFS only: full_page_writes=off\n> \n> (2) What scale were you using for the pgbench database? I didn't see it \n> in the e-mail. 
It would be worth testing:\n> \ts = 10 (small database, in memory)\n> \ts = 500 (7GB, ram mostly full)\n> \ts = 1000 (14GB, slightly larger than ram)\n> \ts = 3000 (43GB, much larger than ram)\n> \n> If you were only testing a small size in your runs, then the only \n> Filesystem behavoir you were testing was the transaction log.\n> \n> (3) Try a ZFS 128K record size\n> \n> (4) Centos/Ext3 appears to have had better staying power with high \n> numbers of clients. Can you continue testing with 50, 100 and 200 \n> clients on that combination? And with data=writeback,noatime on Ext3?\n> \n> -- \n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n-- \nView this message in context: http://old.nabble.com/Benchmark-with-FreeBSD-8.0-and-pgbench-tp28569544p28570856.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sat, 15 May 2010 13:41:44 -0700 (PDT)", "msg_from": "\"joao.pinheiro\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark with FreeBSD 8.0 and pgbench" }, { "msg_contents": "Hi.\n\nNot strictly connected to your tests, but:\nAs of ZFS, we've had experience that it degrades over time after random\nupdates because of files becoming non-linear and sequential reads becomes\nrandom.\nAlso there are Q about ZFS block size - setting it to 8K makes first problem\nworse, setting it to higher values means that 8K write will need a read to\nrecreate the whole block in new place.\n\nBest regards,\n Vitalii Tymchyshyn\n\nHi.\nNot strictly connected to your tests, but:As of ZFS, we've had experience that it degrades over time after random updates because of files becoming non-linear and sequential reads becomes random.\nAlso there are Q about ZFS block size - setting it to 8K makes first problem worse, setting it to higher values means that 8K write will need a read to recreate the whole block in new place.\nBest regards, Vitalii Tymchyshyn", "msg_date": "Mon, 17 May 2010 10:06:25 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark with FreeBSD 8.0 and pgbench" } ]
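Before re-running the larger-scale tests Josh suggested, it is worth confirming how big the generated pgbench database really is relative to RAM, since a scale-1 run (the default when -s is omitted, as in the runs above) exercises little more than the transaction log. A minimal sketch, run from psql inside the test database, assuming the 8.4-style pgbench table names and that the database is still called testdb:

    -- pgbench creates 100,000 rows in pgbench_accounts per unit of scale,
    -- so this recovers the scale factor that was actually used:
    SELECT count(*) / 100000 AS scale_factor FROM pgbench_accounts;

    -- On-disk size of the whole test database, to compare against the
    -- machine's RAM (Josh's s = 500 / 1000 / 3000 suggestions correspond
    -- to roughly 7 GB, 14 GB and 43 GB):
    SELECT pg_size_pretty(pg_database_size('testdb'));

Older pgbench builds used the unprefixed table name accounts; only the table name needs adjusting in that case.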
[ { "msg_contents": "Sample code:\n\nSELECT *\nFROM MyTable\nWHERE foo = 'bar' AND MySlowFunc('foo') = 'bar'\n\nLet's say this required a SEQSCAN because there were no indexes to support \ncolumn foo. For every row where foo <> 'bar' would the filter on the SEQSCAN \nshort-circuit the AND return false right away, or would it still execute \nMySlowFunc('foo') ?\n\nThanks!\n\nCarlo \n\n", "msg_date": "Tue, 18 May 2010 18:28:25 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Does FILTER in SEQSCAN short-circuit AND?" }, { "msg_contents": "\"Carlo Stonebanks\" <[email protected]> wrote:\n \n> SELECT *\n> FROM MyTable\n> WHERE foo = 'bar' AND MySlowFunc('foo') = 'bar'\n> \n> Let's say this required a SEQSCAN because there were no indexes to\n> support column foo. For every row where foo <> 'bar' would the\n> filter on the SEQSCAN short-circuit the AND return false right\n> away, or would it still execute MySlowFunc('foo') ?\n \nFor that example, I'm pretty sure it will skip the slow function for\nrows which fail the first test. A quick test confirmed that for me.\nIf you create a sufficiently slow function, you shouldn't have much\ntrouble testing that yourself. :-)\n \n-Kevin\n", "msg_date": "Thu, 27 May 2010 15:27:11 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does FILTER in SEQSCAN short-circuit AND?" }, { "msg_contents": "On 5/18/10 3:28 PM, Carlo Stonebanks wrote:\n> Sample code:\n>\n> SELECT *\n> FROM MyTable\n> WHERE foo = 'bar' AND MySlowFunc('foo') = 'bar'\n>\n> Let's say this required a SEQSCAN because there were no indexes to\n> support column foo. For every row where foo <> 'bar' would the filter on\n> the SEQSCAN short-circuit the AND return false right away, or would it\n> still execute MySlowFunc('foo') ?\n\nI asked a similar question a few years back, and the answer is that the planner just makes a guess and applies it to all functions. It has no idea whether your function is super fast or incredibly slow, they're all assigned the same cost.\n\nIn this fairly simple case, the planner might reasonably guess that \"foo = 'bar'\" will always be faster than \"AnyFunc(foo) = 'bar'\". But for real queries, that might not be the case.\n\nIn my case, I have a function that is so slow that it ALWAYS is good to avoid it. Unfortunately, there's no way to explain that to Postgres, so I have to use other tricks to force the planner not to use it.\n\n select * from\n (select * from MyTable where foo = 'bar' offset 0)\n where MySlowFunc(foo) = 'bar';\n\nThe \"offset 0\" prevents the planner from collapsing this query back into your original syntax. It will only apply MySlowFunc() to rows where you already know that foo = 'bar'.\n\nIt would be nice if Postgres had a way to assign a cost to every function. Until then, you have to use convoluted SQL if you have a really slow function.\n\nCraig\n", "msg_date": "Thu, 27 May 2010 14:13:50 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does FILTER in SEQSCAN short-circuit AND?" 
}, { "msg_contents": "Craig James wrote on 27.05.2010 23:13:\n> It would be nice if Postgres had a way to assign a cost to every\n> function.\n\nIsn't that what the COST parameter is intended to be:\n\nhttp://www.postgresql.org/docs/current/static/sql-createfunction.html\n\nThomas\n\n", "msg_date": "Thu, 27 May 2010 23:26:31 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does FILTER in SEQSCAN short-circuit AND?" }, { "msg_contents": "Craig James <[email protected]> wrote:\n \n> It would be nice if Postgres had a way to assign a cost to every\n> function.\n \nThe COST clause of CREATE FUNCTION doesn't do what you want?\n \nhttp://www.postgresql.org/docs/8.4/interactive/sql-createfunction.html\n \n-Kevin\n", "msg_date": "Thu, 27 May 2010 16:28:48 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does FILTER in SEQSCAN short-circuit AND?" }, { "msg_contents": "On 5/27/10 2:28 PM, Kevin Grittner wrote:\n> Craig James<[email protected]> wrote:\n>\n>> It would be nice if Postgres had a way to assign a cost to every\n>> function.\n>\n> The COST clause of CREATE FUNCTION doesn't do what you want?\n>\n> http://www.postgresql.org/docs/8.4/interactive/sql-createfunction.html\n\nCool ... I must have missed it when this feature was added. Nice!\n\nCraig\n", "msg_date": "Thu, 27 May 2010 14:44:19 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does FILTER in SEQSCAN short-circuit AND?" }, { "msg_contents": "We are currently using pltclu as our PL of choice AFTER plpgSql.\n\nI'd like to know if anyone can comment on the performance costs of the\nvarious PL languages BESIDES C. For example, does pltclu instantiate faster\nthan pltcl (presumably because it uses a shared interpreter?) Is Perl more\nlightweight?\n\nI know that everything depends on context - what you are doing with it, e.g.\nchoose Tcl for string handling vs. Perl for number crunching - but for those\nwho know about this, is there a clear performance advantage for any of the\nvarious PL languages - and if so, is it a difference so big to be worth\nswitching?\n\nI ask this because I had expected to see pl/pgsql as a clear winner in terms\nof performance over pltclu, but my initial test showed the opposite. I know\nthis may be an apples vs oranges problem and I will test further, but if\nanyone has any advice or insight, I would appreciate it so I can tailor my\ntests accordingly.\n\n\nThanks,\n\nCarlo\n\n\n\n", "msg_date": "Tue, 27 Dec 2011 16:09:50 -0500", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance costs of various PL languages" }, { "msg_contents": "Hello\n\n2011/12/27 Carlo Stonebanks <[email protected]>:\n> We are currently using pltclu as our PL of choice AFTER plpgSql.\n>\n> I'd like to know if anyone can comment on the performance costs of the\n> various PL languages BESIDES C. For example, does pltclu instantiate faster\n> than pltcl (presumably because it uses a shared interpreter?) Is Perl more\n> lightweight?\n>\n> I know that everything depends on context - what you are doing with it, e.g.\n> choose Tcl for string handling vs. 
Perl for number crunching - but for those\n> who know about this, is there a clear performance advantage for any of the\n> various PL languages - and if so, is it a difference so big to be worth\n> switching?\n>\n> I ask this because I had expected to see pl/pgsql as a clear winner in terms\n> of performance over pltclu, but my initial test showed the opposite. I know\n> this may be an apples vs oranges problem and I will test further, but if\n> anyone has any advice or insight, I would appreciate it so I can tailor my\n> tests accordingly.\n>\n\nA performance strongly depends on use case.\n\nPL/pgSQL has fast start but any expression is evaluated as simple SQL\nexpression - and some repeated operation should be very expensive -\narray update, string update. PL/pgSQL is best as SQL glue. Positive to\nperformance is type compatibility between plpgsql and Postgres.\nInterpret plpgsql is very simply - there are +/- zero optimizations -\nplpgsql code should be minimalistic, but when you don't do some really\nwrong, then a speed is comparable with PHP.\n\nhttp://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#Inappropriate_use_of_the_PL.2FpgSQL_language\n\nPL/Perl has slower start - but string or array operations are very\nfast. Perl has own expression evaluator - faster than expression\nevaluation in plpgsql. On second hand - any input must be transformed\nfrom postgres format to perl format and any result must be transformed\ntoo. Perl and other languages doesn't use data type compatible with\nPostgres.\n\nRegards\n\nPavel Stehule\n\n\n>\n> Thanks,\n>\n> Carlo\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Dec 2011 23:20:11 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance costs of various PL languages" }, { "msg_contents": "On Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule <[email protected]> wrote:\n> Hello\n>\n> 2011/12/27 Carlo Stonebanks <[email protected]>:\n>> We are currently using pltclu as our PL of choice AFTER plpgSql.\n>>\n>> I'd like to know if anyone can comment on the performance costs of the\n>> various PL languages BESIDES C. For example, does pltclu instantiate faster\n>> than pltcl (presumably because it uses a shared interpreter?) Is Perl more\n>> lightweight?\n>>\n>> I know that everything depends on context - what you are doing with it, e.g.\n>> choose Tcl for string handling vs. Perl for number crunching - but for those\n>> who know about this, is there a clear performance advantage for any of the\n>> various PL languages - and if so, is it a difference so big to be worth\n>> switching?\n>>\n>> I ask this because I had expected to see pl/pgsql as a clear winner in terms\n>> of performance over pltclu, but my initial test showed the opposite. I know\n>> this may be an apples vs oranges problem and I will test further, but if\n>> anyone has any advice or insight, I would appreciate it so I can tailor my\n>> tests accordingly.\n>>\n>\n> A performance strongly depends on use case.\n>\n> PL/pgSQL has fast start but any expression is evaluated as simple SQL\n> expression - and some repeated operation should be very expensive -\n> array update, string update. PL/pgSQL is best as SQL glue. 
Positive to\n> performance is type compatibility between plpgsql and Postgres.\n> Interpret plpgsql is very simply - there are +/- zero optimizations -\n> plpgsql code should be minimalistic, but when you don't do some really\n> wrong, then a speed is comparable with PHP.\n>\n> http://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#Inappropriate_use_of_the_PL.2FpgSQL_language\n>\n> PL/Perl has slower start - but string or array operations are very\n> fast. Perl has own expression evaluator - faster than expression\n> evaluation in plpgsql. On second hand - any input must be transformed\n> from postgres format to perl format and any result must be transformed\n> too. Perl and other languages doesn't use data type compatible with\n> Postgres.\n\nOne big advantage pl/pgsql has over scripting languages is that it\nunderstands postgresql types natively. It knows what a postgres array\nis, and can manipulate one directly. pl/perl would typically have to\nhave the database convert it to a string, parse it into a perl\nstructure, do the manipulation, then send it to the database to be\nparsed again. If your procedure code is mainly moving data between\ntables and doing minimal intermediate heavy processing, this adds up\nto a big advantage. Which pl to go with really depends on what you\nneed to do. pl/pgsql is always my first choice though.\n\nperl and tcl are not particularly fast languages in the general case\n-- you are largely at the mercy of how well the language's syntax or\nlibrary features map to the particular problem you're solving. if you\nneed a fast general purpose language in the backend and are (very\nunderstandably) skeptical about C, I'd look at pl/java.\n\nmerlin\n", "msg_date": "Tue, 27 Dec 2011 16:54:17 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance costs of various PL languages" }, { "msg_contents": "\n\nOn 12/27/2011 05:54 PM, Merlin Moncure wrote:\n> On Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule<[email protected]> wrote:\n>> Hello\n>>\n>> 2011/12/27 Carlo Stonebanks<[email protected]>:\n>>> We are currently using pltclu as our PL of choice AFTER plpgSql.\n>>>\n>>> I'd like to know if anyone can comment on the performance costs of the\n>>> various PL languages BESIDES C. For example, does pltclu instantiate faster\n>>> than pltcl (presumably because it uses a shared interpreter?) Is Perl more\n>>> lightweight?\n>>>\n>>> I know that everything depends on context - what you are doing with it, e.g.\n>>> choose Tcl for string handling vs. Perl for number crunching - but for those\n>>> who know about this, is there a clear performance advantage for any of the\n>>> various PL languages - and if so, is it a difference so big to be worth\n>>> switching?\n>>>\n>>> I ask this because I had expected to see pl/pgsql as a clear winner in terms\n>>> of performance over pltclu, but my initial test showed the opposite. I know\n>>> this may be an apples vs oranges problem and I will test further, but if\n>>> anyone has any advice or insight, I would appreciate it so I can tailor my\n>>> tests accordingly.\n>>>\n>> A performance strongly depends on use case.\n>>\n>> PL/pgSQL has fast start but any expression is evaluated as simple SQL\n>> expression - and some repeated operation should be very expensive -\n>> array update, string update. PL/pgSQL is best as SQL glue. 
Positive to\n>> performance is type compatibility between plpgsql and Postgres.\n>> Interpret plpgsql is very simply - there are +/- zero optimizations -\n>> plpgsql code should be minimalistic, but when you don't do some really\n>> wrong, then a speed is comparable with PHP.\n>>\n>> http://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#Inappropriate_use_of_the_PL.2FpgSQL_language\n>>\n>> PL/Perl has slower start - but string or array operations are very\n>> fast. Perl has own expression evaluator - faster than expression\n>> evaluation in plpgsql. On second hand - any input must be transformed\n>> from postgres format to perl format and any result must be transformed\n>> too. Perl and other languages doesn't use data type compatible with\n>> Postgres.\n> One big advantage pl/pgsql has over scripting languages is that it\n> understands postgresql types natively. It knows what a postgres array\n> is, and can manipulate one directly. pl/perl would typically have to\n> have the database convert it to a string, parse it into a perl\n> structure, do the manipulation, then send it to the database to be\n> parsed again. If your procedure code is mainly moving data between\n> tables and doing minimal intermediate heavy processing, this adds up\n> to a big advantage. Which pl to go with really depends on what you\n> need to do. pl/pgsql is always my first choice though.\n>\n> perl and tcl are not particularly fast languages in the general case\n> -- you are largely at the mercy of how well the language's syntax or\n> library features map to the particular problem you're solving. if you\n> need a fast general purpose language in the backend and are (very\n> understandably) skeptical about C, I'd look at pl/java.\n>\n\n\nPLV8, which is not yet ready for prime time, maps many common Postgres \ntypes into native JS types without the use of Input/Output functions, \nwhich means the conversion is very fast. It's work which could very \nwell do with repeating for the other PL's.\n\ncheers\n\nandrew\n", "msg_date": "Tue, 27 Dec 2011 18:11:47 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance costs of various PL languages" }, { "msg_contents": "Thanks guys. \n\nAh, Pl/java - of course. I would miss writing the code right in the SQL\nscript, but that would have been true of C as well.\n\nNone of these procedures really qualify as stored procs that move data;\nrather they are scalar functions used for fuzzy string comparisons based on\nour own domain logic - imagine something like,\n\n SELECT *\n FROM fathers AS f, sons AS s\n WHERE same_name(f.last_name, s.last_name)\n\n... and same_name had business logic that corrected for O'reilly vs oreilly,\nVan De Lay vs Vandelay, etc.\n\nThe point is that as we learn about the domain, we would add the rules into\nthe function same_name() so that all apps would benefit from the new rules.\n\nSome of the functions are data-driven, for example a table of common\nabbreviations with regex or LIKE expressions that would be run against both\nstrings so that each string is reduced to common abbreviations (i.e. 
lowest\ncommon denominator) then compared, e.g.\n\n SELECT *\n FROM companies AS c\n WHERE same_business_name(s, 'ACME Business Supplies, Incorporated')\n\nWould reduce both parameters down to the most common abbreviation and then\ncompare again with fuzzy logic.\n\nOf course, even if this was written in C, the function would be data-bound\nas it read from the abbreviation table - unless you guys tell that there is\na not inconsiderable cost involved in type conversion from PG to internal\nvars.\n\nCarlo\n\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: December 27, 2011 5:54 PM\nTo: Pavel Stehule\nCc: Carlo Stonebanks; [email protected]\nSubject: Re: [PERFORM] Performance costs of various PL languages\n\nOn Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule <[email protected]>\nwrote:\n> Hello\n>\n> 2011/12/27 Carlo Stonebanks <[email protected]>:\n>> We are currently using pltclu as our PL of choice AFTER plpgSql.\n>>\n>> I'd like to know if anyone can comment on the performance costs of the\n>> various PL languages BESIDES C. For example, does pltclu instantiate\nfaster\n>> than pltcl (presumably because it uses a shared interpreter?) Is Perl\nmore\n>> lightweight?\n>>\n>> I know that everything depends on context - what you are doing with it,\ne.g.\n>> choose Tcl for string handling vs. Perl for number crunching - but for\nthose\n>> who know about this, is there a clear performance advantage for any of\nthe\n>> various PL languages - and if so, is it a difference so big to be worth\n>> switching?\n>>\n>> I ask this because I had expected to see pl/pgsql as a clear winner in\nterms\n>> of performance over pltclu, but my initial test showed the opposite. I\nknow\n>> this may be an apples vs oranges problem and I will test further, but if\n>> anyone has any advice or insight, I would appreciate it so I can tailor\nmy\n>> tests accordingly.\n>>\n>\n> A performance strongly depends on use case.\n>\n> PL/pgSQL has fast start but any expression is evaluated as simple SQL\n> expression - and some repeated operation should be very expensive -\n> array update, string update. PL/pgSQL is best as SQL glue. Positive to\n> performance is type compatibility between plpgsql and Postgres.\n> Interpret plpgsql is very simply - there are +/- zero optimizations -\n> plpgsql code should be minimalistic, but when you don't do some really\n> wrong, then a speed is comparable with PHP.\n>\n>\nhttp://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#Inappropriate_use_of_the_PL.\n2FpgSQL_language\n>\n> PL/Perl has slower start - but string or array operations are very\n> fast. Perl has own expression evaluator - faster than expression\n> evaluation in plpgsql. On second hand - any input must be transformed\n> from postgres format to perl format and any result must be transformed\n> too. Perl and other languages doesn't use data type compatible with\n> Postgres.\n\nOne big advantage pl/pgsql has over scripting languages is that it\nunderstands postgresql types natively. It knows what a postgres array\nis, and can manipulate one directly. pl/perl would typically have to\nhave the database convert it to a string, parse it into a perl\nstructure, do the manipulation, then send it to the database to be\nparsed again. If your procedure code is mainly moving data between\ntables and doing minimal intermediate heavy processing, this adds up\nto a big advantage. Which pl to go with really depends on what you\nneed to do. 
pl/pgsql is always my first choice though.\n\nperl and tcl are not particularly fast languages in the general case\n-- you are largely at the mercy of how well the language's syntax or\nlibrary features map to the particular problem you're solving. if you\nneed a fast general purpose language in the backend and are (very\nunderstandably) skeptical about C, I'd look at pl/java.\n\nmerlin\n\n", "msg_date": "Tue, 27 Dec 2011 18:38:26 -0500", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance costs of various PL languages" } ]
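The COST clause pointed out earlier in this thread ties both discussions together: rather than hiding a slow function behind the OFFSET 0 trick, the function itself can advertise that it is expensive, and the planner will then evaluate cheaper quals such as foo = 'bar' first. A rough sketch, using a hypothetical stand-in for something like Carlo's same_name(); the body below is only a placeholder, not the real fuzzy-matching logic:

    -- Placeholder for an expensive domain function; a real version might
    -- consult an abbreviation table, apply many regexes, etc.
    CREATE OR REPLACE FUNCTION same_name(a text, b text) RETURNS boolean AS $$
    BEGIN
        RETURN lower(regexp_replace(a, '[^a-z]', '', 'gi'))
             = lower(regexp_replace(b, '[^a-z]', '', 'gi'));
    END;
    $$ LANGUAGE plpgsql STABLE
       COST 1000;  -- units of cpu_operator_cost; PL functions default to 100

    -- The planner now treats each call as ~10x the default PL-function cost,
    -- so in a mixed WHERE clause (foo = 'bar' AND same_name(...)) the cheap
    -- equality is checked first, as in the short-circuit question above.

Whether 1000 is the right figure depends on how slow the real function is; the number only needs to be large enough relative to the other quals for the planner to reorder them.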
[ { "msg_contents": "Machine: 8 core AMD opteron 2.1GHz, 12 disk RAID-10, 2 disk pg_xlog,\nRHEL 5.4 pg version 8.3.9 (upgrading soon to 8.3.11 or so)\n\nThis query:\nSELECT sum(f.bytes) AS sum FROM files f INNER JOIN events ev ON f.eid\n= ev.eid WHERE ev.orgid = 969677;\n\nis choosing a merge join, which never returns from explain analyze (it\nmight after 10 or so minutes, but I'm not beating up my production\nserver over it)\n\n Aggregate (cost=902.41..902.42 rows=1 width=4)\n -> Merge Join (cost=869.97..902.40 rows=1 width=4)\n Merge Cond: (f.eid = ev.eid)\n -> Index Scan using files_eid_idx on files f\n(cost=0.00..157830.39 rows=3769434 width=8)\n -> Sort (cost=869.52..872.02 rows=1002 width=4)\n Sort Key: ev.eid\n -> Index Scan using events_orgid_idx on events ev\n(cost=0.00..819.57 rows=1002 width=4)\n Index Cond: (orgid = 969677)\n\n\nIf I turn off mergejoin it's fast:\n\nexplain analyze SELECT sum(f.bytes) AS sum FROM files f INNER JOIN\nevents ev ON f.eid = ev.eid WHERE ev.orgid = 969677;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3653.28..3653.29 rows=1 width=4) (actual\ntime=1.541..1.541 rows=1 loops=1)\n -> Nested Loop (cost=0.00..3653.28 rows=1 width=4) (actual\ntime=1.537..1.537 rows=0 loops=1)\n -> Index Scan using events_orgid_idx on events ev\n(cost=0.00..819.57 rows=1002 width=4) (actual time=0.041..0.453\nrows=185 loops=1)\n Index Cond: (orgid = 969677)\n -> Index Scan using files_eid_idx on files f\n(cost=0.00..2.82 rows=1 width=8) (actual time=0.005..0.005 rows=0\nloops=185)\n Index Cond: (f.eid = ev.eid)\n Total runtime: 1.637 ms\n\nI've played around with random_page_cost. All the other things you'd\nexpect, like effective_cache_size are set rather large (it's a server\nwith 32Gig ram and a 12 disk RAID-10) and no setting of\nrandom_page_cost forces it to choose the non-mergejoin plan.\n\nAnybody with any ideas, I'm all ears.\n", "msg_date": "Tue, 18 May 2010 18:17:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "merge join killing performance" }, { "msg_contents": "On Tue, 18 May 2010, Scott Marlowe wrote:\n> Aggregate (cost=902.41..902.42 rows=1 width=4)\n> -> Merge Join (cost=869.97..902.40 rows=1 width=4)\n> Merge Cond: (f.eid = ev.eid)\n> -> Index Scan using files_eid_idx on files f\n> (cost=0.00..157830.39 rows=3769434 width=8)\n\nOkay, that's weird. How is the cost of the merge join only 902, when the \ncost of one of the branches 157830, when there is no LIMIT?\n\nAre the statistics up to date?\n\nMatthew\n\n-- \n As you approach the airport, you see a sign saying \"Beware - low\n flying airplanes\". There's not a lot you can do about that. Take \n your hat off? -- Michael Flanders\n", "msg_date": "Tue, 18 May 2010 23:00:18 -0400 (EDT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Tue, May 18, 2010 at 9:00 PM, Matthew Wakeling <[email protected]> wrote:\n> On Tue, 18 May 2010, Scott Marlowe wrote:\n>>\n>> Aggregate  (cost=902.41..902.42 rows=1 width=4)\n>>  ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)\n>>        Merge Cond: (f.eid = ev.eid)\n>>        ->  Index Scan using files_eid_idx on files f\n>> (cost=0.00..157830.39 rows=3769434 width=8)\n>\n> Okay, that's weird. 
How is the cost of the merge join only 902, when the\n> cost of one of the branches 157830, when there is no LIMIT?\n>\n> Are the statistics up to date?\n\nYep. The explain analyze shows it being close enough it should guess\nright (I think) We have default stats target set to 200 and the table\nis regularly analyzed by autovac, which now has much smaller settings\nfor threshold and % than default to handle these big tables.\n", "msg_date": "Tue, 18 May 2010 21:06:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Tue, 18 May 2010, Scott Marlowe wrote:\n>> Aggregate (cost=902.41..902.42 rows=1 width=4)\n>> -> Merge Join (cost=869.97..902.40 rows=1 width=4)\n>> Merge Cond: (f.eid = ev.eid)\n>> -> Index Scan using files_eid_idx on files f\n>> (cost=0.00..157830.39 rows=3769434 width=8)\n\n> Okay, that's weird. How is the cost of the merge join only 902, when the \n> cost of one of the branches 157830, when there is no LIMIT?\n\nIt's apparently estimating (wrongly) that the merge join won't have to\nscan very much of \"files\" before it can stop because it finds an eid\nvalue larger than any eid in the other table. So the issue here is an\ninexact stats value for the max eid.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 May 2010 12:53:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: merge join killing performance " }, { "msg_contents": "On Wed, May 19, 2010 at 10:53 AM, Tom Lane <[email protected]> wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> On Tue, 18 May 2010, Scott Marlowe wrote:\n>>> Aggregate  (cost=902.41..902.42 rows=1 width=4)\n>>>     ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)\n>>>         Merge Cond: (f.eid = ev.eid)\n>>>         ->  Index Scan using files_eid_idx on files f\n>>>         (cost=0.00..157830.39 rows=3769434 width=8)\n>\n>> Okay, that's weird. How is the cost of the merge join only 902, when the\n>> cost of one of the branches 157830, when there is no LIMIT?\n>\n> It's apparently estimating (wrongly) that the merge join won't have to\n> scan very much of \"files\" before it can stop because it finds an eid\n> value larger than any eid in the other table.  So the issue here is an\n> inexact stats value for the max eid.\n\nThat's a big table. I'll try cranking up the stats target for that\ncolumn and see what happens. Thanks!\n", "msg_date": "Wed, 19 May 2010 11:08:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Wed, May 19, 2010 at 10:53 AM, Tom Lane <[email protected]> wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> On Tue, 18 May 2010, Scott Marlowe wrote:\n>>> Aggregate  (cost=902.41..902.42 rows=1 width=4)\n>>>     ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)\n>>>         Merge Cond: (f.eid = ev.eid)\n>>>         ->  Index Scan using files_eid_idx on files f\n>>>         (cost=0.00..157830.39 rows=3769434 width=8)\n>\n>> Okay, that's weird. How is the cost of the merge join only 902, when the\n>> cost of one of the branches 157830, when there is no LIMIT?\n>\n> It's apparently estimating (wrongly) that the merge join won't have to\n> scan very much of \"files\" before it can stop because it finds an eid\n> value larger than any eid in the other table.  
So the issue here is an\n> inexact stats value for the max eid.\n\nI changed stats target to 1000 for that field and still get the bad plan.\n", "msg_date": "Wed, 19 May 2010 14:27:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Wed, May 19, 2010 at 2:27 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, May 19, 2010 at 10:53 AM, Tom Lane <[email protected]> wrote:\n>> Matthew Wakeling <[email protected]> writes:\n>>> On Tue, 18 May 2010, Scott Marlowe wrote:\n>>>> Aggregate  (cost=902.41..902.42 rows=1 width=4)\n>>>>     ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)\n>>>>         Merge Cond: (f.eid = ev.eid)\n>>>>         ->  Index Scan using files_eid_idx on files f\n>>>>         (cost=0.00..157830.39 rows=3769434 width=8)\n>>\n>>> Okay, that's weird. How is the cost of the merge join only 902, when the\n>>> cost of one of the branches 157830, when there is no LIMIT?\n>>\n>> It's apparently estimating (wrongly) that the merge join won't have to\n>> scan very much of \"files\" before it can stop because it finds an eid\n>> value larger than any eid in the other table.  So the issue here is an\n>> inexact stats value for the max eid.\n>\n> I changed stats target to 1000 for that field and still get the bad plan.\n\nAnd of course ran analyze across the table...\n", "msg_date": "Wed, 19 May 2010 14:47:06 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Wed, 19 May 2010, Scott Marlowe wrote:\n>> It's apparently estimating (wrongly) that the merge join won't have to\n>> scan very much of \"files\" before it can stop because it finds an eid\n>> value larger than any eid in the other table.  So the issue here is an\n>> inexact stats value for the max eid.\n\nI wandered if it could be something like that, but I rejected that idea, \nas it obviously wasn't the real world case, and statistics should at least \nget that right, if they are up to date.\n\n> I changed stats target to 1000 for that field and still get the bad plan.\n\nWhat do the stats say the max values are?\n\nMatthew\n\n-- \n Nog: Look! They've made me into an ensign!\n O'Brien: I didn't know things were going so badly.\n Nog: Frightening, isn't it?", "msg_date": "Wed, 19 May 2010 21:46:29 -0400 (EDT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Wed, May 19, 2010 at 7:46 PM, Matthew Wakeling <[email protected]> wrote:\n> On Wed, 19 May 2010, Scott Marlowe wrote:\n>>>\n>>> It's apparently estimating (wrongly) that the merge join won't have to\n>>> scan very much of \"files\" before it can stop because it finds an eid\n>>> value larger than any eid in the other table.  
So the issue here is an\n>>> inexact stats value for the max eid.\n>\n> I wandered if it could be something like that, but I rejected that idea, as\n> it obviously wasn't the real world case, and statistics should at least get\n> that right, if they are up to date.\n>\n>> I changed stats target to 1000 for that field and still get the bad plan.\n>\n> What do the stats say the max values are?\n\n5277063,5423043,13843899 (I think).\n\n# select count(distinct eid) from files;\n count\n-------\n 365\n(1 row)\n\n# select count(*) from files;\n count\n---------\n 3793748\n", "msg_date": "Wed, 19 May 2010 20:04:18 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Wed, May 19, 2010 at 8:04 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, May 19, 2010 at 7:46 PM, Matthew Wakeling <[email protected]> wrote:\n>> On Wed, 19 May 2010, Scott Marlowe wrote:\n>>>>\n>>>> It's apparently estimating (wrongly) that the merge join won't have to\n>>>> scan very much of \"files\" before it can stop because it finds an eid\n>>>> value larger than any eid in the other table.  So the issue here is an\n>>>> inexact stats value for the max eid.\n>>\n>> I wandered if it could be something like that, but I rejected that idea, as\n>> it obviously wasn't the real world case, and statistics should at least get\n>> that right, if they are up to date.\n>>\n>>> I changed stats target to 1000 for that field and still get the bad plan.\n>>\n>> What do the stats say the max values are?\n>\n> 5277063,5423043,13843899 (I think).\n>\n> # select count(distinct eid) from files;\n>  count\n> -------\n>   365\n> (1 row)\n>\n> # select count(*) from files;\n>  count\n> ---------\n>  3793748\n\nA followup. of those rows,\n\nselect count(*) from files where eid is null;\n count\n---------\n 3793215\n\nare null.\n", "msg_date": "Wed, 19 May 2010 20:06:15 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "On Wed, May 19, 2010 at 8:06 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, May 19, 2010 at 8:04 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, May 19, 2010 at 7:46 PM, Matthew Wakeling <[email protected]> wrote:\n>>> On Wed, 19 May 2010, Scott Marlowe wrote:\n>>>>>\n>>>>> It's apparently estimating (wrongly) that the merge join won't have to\n>>>>> scan very much of \"files\" before it can stop because it finds an eid\n>>>>> value larger than any eid in the other table.  So the issue here is an\n>>>>> inexact stats value for the max eid.\n>>>\n>>> I wandered if it could be something like that, but I rejected that idea, as\n>>> it obviously wasn't the real world case, and statistics should at least get\n>>> that right, if they are up to date.\n>>>\n>>>> I changed stats target to 1000 for that field and still get the bad plan.\n>>>\n>>> What do the stats say the max values are?\n>>\n>> 5277063,5423043,13843899 (I think).\n>>\n>> # select count(distinct eid) from files;\n>>  count\n>> -------\n>>   365\n>> (1 row)\n>>\n>> # select count(*) from files;\n>>  count\n>> ---------\n>>  3793748\n>\n> A followup.  
of those rows,\n>\n> select count(*) from files where eid is null;\n>  count\n> ---------\n>  3793215\n>\n> are null.\n\nSo, Tom, so you think it's possible that the planner isn't noticing\nall those nulls and thinks it'll just take a row or two to get to the\nvalue it needs to join on?\n", "msg_date": "Wed, 19 May 2010 20:07:39 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> So, Tom, so you think it's possible that the planner isn't noticing\n> all those nulls and thinks it'll just take a row or two to get to the\n> value it needs to join on?\n\nCould be. I don't have time right now to chase through the code, but\nthat sounds like a plausible theory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 10:28:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: merge join killing performance " }, { "msg_contents": "On Thu, May 20, 2010 at 8:28 AM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> So, Tom, so you think it's possible that the planner isn't noticing\n>> all those nulls and thinks it'll just take a row or two to get to the\n>> value it needs to join on?\n>\n> Could be.  I don't have time right now to chase through the code, but\n> that sounds like a plausible theory.\n\nK. I think I'll try an index on that field \"where not null\" and see\nif that helps.\n", "msg_date": "Thu, 20 May 2010 08:35:37 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> So, Tom, so you think it's possible that the planner isn't noticing\n> all those nulls and thinks it'll just take a row or two to get to the\n> value it needs to join on?\n\nI dug through this and have concluded that it's really an oversight in\nthe patch I wrote some years ago in response to this:\nhttp://archives.postgresql.org/pgsql-performance/2005-05/msg00219.php\n\nThat patch taught nodeMergejoin that a row containing a NULL key can't\npossibly match anything on the other side. However, its response to\nobserving a NULL is just to advance to the next row of that input.\nWhat we should do, if the NULL is in the first merge column and the sort\norder is nulls-high, is realize that every following row in that input\nmust also contain a NULL and so we can just terminate the mergejoin\nimmediately. The original patch works well for cases where there are\njust a few nulls in one input and the important factor is to not read\nall the rest of the other input --- but it fails to cover the case where\nthere are many nulls and the important factor is to not read all the\nrest of the nulls. The problem can be demonstrated if you modify the\nexample given in the above-referenced message so that table t1 contains\nlots of nulls rather than just a few: explain analyze will show that\nall of t1 gets read by the mergejoin, and that's not necessary.\n\nI'm inclined to think this is a performance bug and should be\nback-patched, assuming the fix is simple (which I think it is, but\nhaven't coded/tested yet). 
It'd probably be reasonable to go back to\n8.3; before that, sorting nulls high versus nulls low was pretty poorly\ndefined and so there'd be risk of breaking cases that gave the right\nanswers before.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 May 2010 16:06:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "mergejoin null handling (was Re: [PERFORM] merge join killing\n\tperformance)" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> So, Tom, so you think it's possible that the planner isn't noticing\n> all those nulls and thinks it'll just take a row or two to get to the\n> value it needs to join on?\n\nI've committed a patch for this, if you're interested in testing that\nit fixes your situation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 May 2010 21:16:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: merge join killing performance " }, { "msg_contents": "On Thu, May 27, 2010 at 7:16 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> So, Tom, so you think it's possible that the planner isn't noticing\n>> all those nulls and thinks it'll just take a row or two to get to the\n>> value it needs to join on?\n>\n> I've committed a patch for this, if you're interested in testing that\n> it fixes your situation.\n\nCool, do we have a snapshot build somewhere or do I need to get all\nthe extra build bits like flex or yacc or bison or whatnot?\n", "msg_date": "Thu, 27 May 2010 20:46:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: merge join killing performance" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Thu, May 27, 2010 at 7:16 PM, Tom Lane <[email protected]> wrote:\n>> I've committed a patch for this, if you're interested in testing that\n>> it fixes your situation.\n\n> Cool, do we have a snapshot build somewhere or do I need to get all\n> the extra build bits like flex or yacc or bison or whatnot?\n\nThere's a nightly snapshot tarball of HEAD on the ftp server.\nI don't believe there's any snapshots for back branches though.\n\nAlternatively, you could grab the latest release tarball for whichever\nbranch you want and just apply that patch --- it should apply cleanly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 May 2010 22:56:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: merge join killing performance " } ]
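Until a build containing the committed fix is deployed, the stopgap Scott mentioned (an index restricted to the few hundred non-NULL eids, so the merge join no longer has to wade through the trailing NULLs in files_eid_idx) can be sketched as follows; names are taken from the thread and the sketch is untested:

    -- Partial index over only the ~500 rows (out of 3.7M) where eid is set.
    CREATE INDEX files_eid_notnull_idx ON files (eid)
        WHERE eid IS NOT NULL;

    -- The larger per-column statistics target already tried, kept so that
    -- ANALYZE tracks the real distribution of eid (including its very high
    -- NULL fraction) closely.
    ALTER TABLE files ALTER COLUMN eid SET STATISTICS 1000;
    ANALYZE files;

Whether the planner actually picks the partial index for this join is exactly what Scott's "see if that helps" test was meant to establish; the committed patch remains the real fix.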
[ { "msg_contents": "Hi,\n\nI recently switched to PostgreSQL from MySQL so that I can use PL/R for data\nanalysis. The query in MySQL form (against a more complex table structure)\ntakes ~5 seconds to run. The query in PostgreSQL I have yet to let finish,\nas it takes over a minute. I think I have the correct table structure in\nplace (it is much simpler than the former structure in MySQL), however the\nquery executes a full table scan against the parent table's 273 million\nrows.\n\n*Questions*\n\nWhat is the proper way to index the dates to avoid full table scans?\n\nOptions I have considered:\n\n - GIN\n - GiST\n - Rewrite the WHERE clause\n - Separate year_taken, month_taken, and day_taken columns to the tables\n\n*Details\n*\nThe HashAggregate from the plan shows a cost of 10006220141.11, which is, I\nsuspect, on the astronomically huge side. There is a full table scan on the\nmeasurement table (itself having neither data nor indexes) being performed.\nThe table aggregates 237 million rows from its child tables. The\nsluggishness comes from this part of the query:\n\n m.taken BETWEEN\n /* Start date. */\n (extract( YEAR FROM m.taken )||'-01-01')::date AND\n /* End date. Calculated by checking to see if the end date wraps\n into the next year. If it does, then add 1 to the current year.\n */\n (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n sign(\n (extract( YEAR FROM m.taken )||'-12-31')::date -\n (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n ) AS text)||'-12-31')::date\n\nThere are 72 child tables, each having a year index and a station index,\nwhich are defined as follows:\n\n CREATE TABLE climate.measurement_12_013 (\n -- Inherited from table climate.measurement_12_013: id bigint NOT NULL\nDEFAULT nextval('climate.measurement_id_seq'::regclass),\n -- Inherited from table climate.measurement_12_013: station_id integer\nNOT NULL,\n -- Inherited from table climate.measurement_12_013: taken date NOT\nNULL,\n -- Inherited from table climate.measurement_12_013: amount numeric(8,2)\nNOT NULL,\n -- Inherited from table climate.measurement_12_013: category_id\nsmallint NOT NULL,\n -- Inherited from table climate.measurement_12_013: flag character\nvarying(1) NOT NULL DEFAULT ' '::character varying,\n CONSTRAINT measurement_12_013_category_id_check CHECK (category_id =\n7),\n CONSTRAINT measurement_12_013_taken_check CHECK\n(date_part('month'::text, taken)::integer = 12)\n )\n INHERITS (climate.measurement)\n\n CREATE INDEX measurement_12_013_s_idx\n ON climate.measurement_12_013\n USING btree\n (station_id);\n CREATE INDEX measurement_12_013_y_idx\n ON climate.measurement_12_013\n USING btree\n (date_part('year'::text, taken));\n\n(Foreign key constraints to be added later.)\n\nThe following query runs abysmally slow due to a full table scan:\n\n SELECT\n count(1) AS measurements,\n avg(m.amount) AS amount\n FROM\n climate.measurement m\n WHERE\n m.station_id IN (\n SELECT\n s.id\n FROM\n climate.station s,\n climate.city c\n WHERE\n /* For one city... */\n c.id = 5182 AND\n\n /* Where stations are within an elevation range... */\n s.elevation BETWEEN 0 AND 3000 AND\n\n /* and within a specific radius... */\n 6371.009 * SQRT(\n POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n POW(RADIANS(c.longitude_decimal - s.longitude_decimal),\n2))\n ) <= 50\n ) AND\n\n /* Data before 1900 is shaky; insufficient after 2009. 
*/\n extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND\n\n /* Whittled down by category... */\n m.category_id = 1 AND\n\n /* Between the selected days and years... */\n m.taken BETWEEN\n /* Start date. */\n (extract( YEAR FROM m.taken )||'-01-01')::date AND\n /* End date. Calculated by checking to see if the end date wraps\n into the next year. If it does, then add 1 to the current year.\n */\n (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n sign(\n (extract( YEAR FROM m.taken )||'-12-31')::date -\n (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n ) AS text)||'-12-31')::date\n GROUP BY\n extract( YEAR FROM m.taken )\n\nWhat are your thoughts?\n\nThank you!\n\nHi,I recently switched to PostgreSQL from MySQL so that I can use PL/R for data analysis. The query in MySQL form (against a more complex table structure) takes ~5 seconds to run. The query in PostgreSQL I have yet to let finish, as it takes over a minute. I think I have the correct table structure in place (it is much simpler than the former structure in MySQL), however the query executes a full table scan against the parent table's 273 million rows.\nQuestionsWhat is the proper way to index the dates to avoid full table scans?Options I have considered:GINGiSTRewrite the WHERE clauseSeparate year_taken, month_taken, and day_taken columns to the tables\nDetailsThe HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There\nis a full table scan on the measurement table (itself having neither\ndata nor indexes) being performed. The table aggregates 237 million\nrows from its child tables. The sluggishness comes from this part of the query:\n      m.taken BETWEEN        /* Start date. */\n      (extract( YEAR FROM m.taken )||'-01-01')::date AND        /* End date. Calculated by checking to see if the end date wraps\n          into the next year. If it does, then add 1 to the current year.        
*/\n        (cast(extract( YEAR FROM m.taken ) + greatest( -1 *          sign(\n            (extract( YEAR FROM m.taken )||'-12-31')::date -            (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n        ) AS text)||'-12-31')::dateThere are 72 child tables, each having a year index and a station index, which are defined as follows:\n    CREATE TABLE climate.measurement_12_013 (    -- Inherited from table climate.measurement_12_013:  id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),\n    -- Inherited from table climate.measurement_12_013:  station_id integer NOT NULL,    -- Inherited from table climate.measurement_12_013:  taken date NOT NULL,\n    -- Inherited from table climate.measurement_12_013:  amount numeric(8,2) NOT NULL,    -- Inherited from table climate.measurement_12_013:  category_id smallint NOT NULL,\n    -- Inherited from table climate.measurement_12_013:  flag character varying(1) NOT NULL DEFAULT ' '::character varying,\n      CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),      CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)\n    )    INHERITS (climate.measurement)\n    CREATE INDEX measurement_12_013_s_idx      ON climate.measurement_12_013\n      USING btree      (station_id);\n    CREATE INDEX measurement_12_013_y_idx      ON climate.measurement_12_013\n      USING btree      (date_part('year'::text, taken));\n(Foreign key constraints to be added later.)The following query runs abysmally slow due to a full table scan:    SELECT\n      count(1) AS measurements,      avg(m.amount) AS amount\n    FROM      climate.measurement m\n    WHERE      m.station_id IN (\n        SELECT          s.id\n        FROM          climate.station s,\n          climate.city c        WHERE\n            /* For one city... */            c.id = 5182 AND\n            /* Where stations are within an elevation range... */            s.elevation BETWEEN 0 AND 3000 AND\n            /* and within a specific radius... */            6371.009 * SQRT( \n              POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n                  POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))            ) <= 50\n        ) AND      /* Data before 1900 is shaky; insufficient after 2009. */\n      extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND      /* Whittled down by category... */\n      m.category_id = 1 AND      /* Between the selected days and years... */\n      m.taken BETWEEN       /* Start date. */\n       (extract( YEAR FROM m.taken )||'-01-01')::date AND        /* End date. Calculated by checking to see if the end date wraps\n           into the next year. If it does, then add 1 to the current year.        */\n        (cast(extract( YEAR FROM m.taken ) + greatest( -1 *          sign(\n            (extract( YEAR FROM m.taken )||'-12-31')::date -            (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n        ) AS text)||'-12-31')::date    GROUP BY\n      extract( YEAR FROM m.taken )What are your thoughts?\nThank you!", "msg_date": "Wed, 19 May 2010 22:06:02 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize date query for large child tables: GiST or GIN?" }, { "msg_contents": "Hello David,\n> The table aggregates 237 million rows from its child tables. 
The \n> sluggishness comes from this part of the query:\n>\n> m.taken BETWEEN\n> /* Start date. */\n> (extract( YEAR FROM m.taken )||'-01-01')::date AND\n> /* End date. Calculated by checking to see if the end date wraps\n> into the next year. If it does, then add 1 to the current year.\n> */\n> (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n> sign(\n> (extract( YEAR FROM m.taken )||'-12-31')::date -\n> (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n> ) AS text)||'-12-31')::date\nEither I had too less coffee and completely misunderstand this \nexpression, or it is always true and can be omitted. Could you explain a \nbit what this part tries to do and maybe also show it's original \ncounterpart in the source database?\n\nregards,\nYeb Havinga\n\n", "msg_date": "Thu, 20 May 2010 10:20:42 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "On 20 May 2010 06:06, David Jarvis <[email protected]> wrote:\n> Hi,\n>\n> I recently switched to PostgreSQL from MySQL so that I can use PL/R for data\n> analysis. The query in MySQL form (against a more complex table structure)\n> takes ~5 seconds to run. The query in PostgreSQL I have yet to let finish,\n> as it takes over a minute. I think I have the correct table structure in\n> place (it is much simpler than the former structure in MySQL), however the\n> query executes a full table scan against the parent table's 273 million\n> rows.\n>\n> Questions\n>\n> What is the proper way to index the dates to avoid full table scans?\n>\n> Options I have considered:\n>\n> GIN\n> GiST\n> Rewrite the WHERE clause\n> Separate year_taken, month_taken, and day_taken columns to the tables\n>\n> Details\n>\n> The HashAggregate from the plan shows a cost of 10006220141.11, which is, I\n> suspect, on the astronomically huge side. There is a full table scan on the\n> measurement table (itself having neither data nor indexes) being performed.\n> The table aggregates 237 million rows from its child tables. The\n> sluggishness comes from this part of the query:\n>\n>       m.taken BETWEEN\n>         /* Start date. */\n>       (extract( YEAR FROM m.taken )||'-01-01')::date AND\n>         /* End date. Calculated by checking to see if the end date wraps\n>           into the next year. 
If it does, then add 1 to the current year.\n>         */\n>         (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n>           sign(\n>             (extract( YEAR FROM m.taken )||'-12-31')::date -\n>             (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n>         ) AS text)||'-12-31')::date\n>\n> There are 72 child tables, each having a year index and a station index,\n> which are defined as follows:\n>\n>     CREATE TABLE climate.measurement_12_013 (\n>     -- Inherited from table climate.measurement_12_013:  id bigint NOT NULL\n> DEFAULT nextval('climate.measurement_id_seq'::regclass),\n>     -- Inherited from table climate.measurement_12_013:  station_id integer\n> NOT NULL,\n>     -- Inherited from table climate.measurement_12_013:  taken date NOT\n> NULL,\n>     -- Inherited from table climate.measurement_12_013:  amount numeric(8,2)\n> NOT NULL,\n>     -- Inherited from table climate.measurement_12_013:  category_id\n> smallint NOT NULL,\n>     -- Inherited from table climate.measurement_12_013:  flag character\n> varying(1) NOT NULL DEFAULT ' '::character varying,\n>       CONSTRAINT measurement_12_013_category_id_check CHECK (category_id =\n> 7),\n>       CONSTRAINT measurement_12_013_taken_check CHECK\n> (date_part('month'::text, taken)::integer = 12)\n>     )\n>     INHERITS (climate.measurement)\n>\n>     CREATE INDEX measurement_12_013_s_idx\n>       ON climate.measurement_12_013\n>       USING btree\n>       (station_id);\n>     CREATE INDEX measurement_12_013_y_idx\n>       ON climate.measurement_12_013\n>       USING btree\n>       (date_part('year'::text, taken));\n>\n> (Foreign key constraints to be added later.)\n>\n> The following query runs abysmally slow due to a full table scan:\n>\n>     SELECT\n>       count(1) AS measurements,\n>       avg(m.amount) AS amount\n>     FROM\n>       climate.measurement m\n>     WHERE\n>       m.station_id IN (\n>         SELECT\n>           s.id\n>         FROM\n>           climate.station s,\n>           climate.city c\n>         WHERE\n>             /* For one city... */\n>             c.id = 5182 AND\n>\n>             /* Where stations are within an elevation range... */\n>             s.elevation BETWEEN 0 AND 3000 AND\n>\n>             /* and within a specific radius... */\n>             6371.009 * SQRT(\n>               POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n>                 (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n>                   POW(RADIANS(c.longitude_decimal - s.longitude_decimal),\n> 2))\n>             ) <= 50\n>         ) AND\n>\n>       /* Data before 1900 is shaky; insufficient after 2009. */\n>       extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND\n>\n>       /* Whittled down by category... */\n>       m.category_id = 1 AND\n>\n>       /* Between the selected days and years... */\n>       m.taken BETWEEN\n>        /* Start date. */\n>        (extract( YEAR FROM m.taken )||'-01-01')::date AND\n>         /* End date. Calculated by checking to see if the end date wraps\n>            into the next year. 
If it does, then add 1 to the current year.\n>         */\n>         (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n>           sign(\n>             (extract( YEAR FROM m.taken )||'-12-31')::date -\n>             (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n>         ) AS text)||'-12-31')::date\n>     GROUP BY\n>       extract( YEAR FROM m.taken )\n>\n> What are your thoughts?\n>\n> Thank you!\n>\n>\n\nCould you provide the EXPLAIN output for that slow query?\n\nThom\n", "msg_date": "Thu, 20 May 2010 09:33:15 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "On Wed, 19 May 2010, David Jarvis wrote:\n> extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND\n\nThat portion of the WHERE clause cannot use an index on m.taken. Postgres \ndoes not look inside functions (like extract) to see if something \nindexable is present. To get an index to work, you could create an index \non (extract(YEAR FROM m.taken)).\n\nMatthew\n\n-- \n Here we go - the Fairy Godmother redundancy proof.\n -- Computer Science Lecturer\n", "msg_date": "Thu, 20 May 2010 14:03:11 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Wed, 19 May 2010, David Jarvis wrote:\n>> extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND\n\n> That portion of the WHERE clause cannot use an index on m.taken. Postgres \n> does not look inside functions (like extract) to see if something \n> indexable is present. To get an index to work, you could create an index \n> on (extract(YEAR FROM m.taken)).\n\nWhat you really need to do is not do date arithmetic using text-string\noperations. The planner has no intelligence about that whatsoever.\nConvert the operations to something natural using real date or timestamp\ntypes, and then look at what indexes you need.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 09:56:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN? " }, { "msg_contents": "Hi,\n\nI have posted an image of the user inputs here:\n\nhttp://i.imgur.com/MUkuZ.png\n\nThe problem is that I am given a range of days (Dec 22 - Mar 22) over a\nrange of years (1900 - 2009) and the range of days can span from one year to\nthe next. This is not the same as saying Dec 22, 1900 to Mar 22, 2009, for\nwhich I do not need date math.\n\nWhat you really need to do is not do date arithmetic using text-string\n> operations. The planner has no intelligence about that whatsoever.\n> Convert the operations to something natural using real date or timestamp\n> types, and then look at what indexes you need.\n>\n\nAny suggestions on how to go about this?\n\nThanks again!\n\nDave\n\nHi,I have posted an image of the user inputs here:http://i.imgur.com/MUkuZ.pngThe problem is that I am given a range of days (Dec 22 - Mar 22) over a range of years (1900 - 2009) and the range of days can span from one year to the next. This is not the same as saying Dec 22, 1900 to Mar 22, 2009, for which I do not need date math.\nWhat you really need to do is not do date arithmetic using text-string\n\noperations.  
The planner has no intelligence about that whatsoever.\nConvert the operations to something natural using real date or timestamp\ntypes, and then look at what indexes you need.Any suggestions on how to go about this?Thanks again!Dave", "msg_date": "Thu, 20 May 2010 08:43:26 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "On 20 May 2010 17:36, David Jarvis <[email protected]> wrote:\n> Hi, Thom.\n>\n> The query is given two items:\n>\n> Range of years\n> Range of days\n>\n> I need to select all data between the range of days (e.g., Dec 22 - Mar 22)\n> over the range of years (e.g., 1950 - 1970), such as shown here:\n>\n> http://i.imgur.com/MUkuZ.png\n>\n> For Jun 1 to Jul 1 it would be no problem because they the same year. But\n> for Dec 22 to Mar 22, it is difficult because Mar 22 is in the next year\n> (relative to Dec 22).\n>\n> How do I do that without strings?\n>\n> Dave\n>\n>\n\nOkay, get your app to convert the month-date to a day of year, so we\nhave year_start, year_end, day_of_year_start, day_of_year_end\n\nand your where clause would something like this:\n\nWHERE extract(YEAR from m.taken) BETWEEN year1 and year2\nAND (\n\textract(DOY from m.taken) BETWEEN day_of_year_start AND day_of_year_end\n\tOR (\n\t\textract(DOY from m.taken) >= day_of_year_start OR extract(DOY from\nm.taken) <= day_of_year_end\n\t)\n)\n\n... substituting the placeholders where they appear.\n\nSo if we had:\n\nyear1=1941\nyear2=1952\nday_of_year_start=244 (based on input date of 1st September)\nday_of_year_end=94 (based on 4th April)\n\nWe'd have:\n\nWHERE extract(YEAR from m.taken) BETWEEN 1941 and 1952\nAND (\n\textract(DOY from m.taken) BETWEEN 244 AND 94\n\tOR (\n\t\textract(DOY from m.taken) >= 244 OR extract(DOY from m.taken) <= 94\n\t)\n)\n\nThen you could add expression indexes for the YEAR and DOY extract parts, like:\n\nCREATE INDEX idx_taken_doy ON climate.measurement (EXTRACT(DOY from taken));\nCREATE INDEX idx_taken_year ON climate.measurement (EXTRACT(YEAR from taken));\n\nAlthough maybe you don't need those, depending on how the date\ndatatype matching works in the planner with the EXTRACT function.\n\nRegards\n\nThom\n", "msg_date": "Thu, 20 May 2010 19:36:36 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "On 20 May 2010 19:36, Thom Brown <[email protected]> wrote:\n> On 20 May 2010 17:36, David Jarvis <[email protected]> wrote:\n>> Hi, Thom.\n>>\n>> The query is given two items:\n>>\n>> Range of years\n>> Range of days\n>>\n>> I need to select all data between the range of days (e.g., Dec 22 - Mar 22)\n>> over the range of years (e.g., 1950 - 1970), such as shown here:\n>>\n>> http://i.imgur.com/MUkuZ.png\n>>\n>> For Jun 1 to Jul 1 it would be no problem because they the same year. 
But\n>> for Dec 22 to Mar 22, it is difficult because Mar 22 is in the next year\n>> (relative to Dec 22).\n>>\n>> How do I do that without strings?\n>>\n>> Dave\n>>\n>>\n>\n> Okay, get your app to convert the month-date to a day of year, so we\n> have year_start, year_end, day_of_year_start, day_of_year_end\n>\n> and your where clause would something like this:\n>\n> WHERE extract(YEAR from m.taken) BETWEEN year1 and year2\n> AND (\n>        extract(DOY from m.taken) BETWEEN day_of_year_start AND day_of_year_end\n>        OR (\n>                extract(DOY from m.taken) >= day_of_year_start OR extract(DOY from\n> m.taken) <= day_of_year_end\n>        )\n> )\n>\n> ... substituting the placeholders where they appear.\n>\n> So if we had:\n>\n> year1=1941\n> year2=1952\n> day_of_year_start=244 (based on input date of 1st September)\n> day_of_year_end=94 (based on 4th April)\n>\n> We'd have:\n>\n> WHERE extract(YEAR from m.taken) BETWEEN 1941 and 1952\n> AND (\n>        extract(DOY from m.taken) BETWEEN 244 AND 94\n>        OR (\n>                extract(DOY from m.taken) >= 244 OR extract(DOY from m.taken) <= 94\n>        )\n> )\n>\n> Then you could add expression indexes for the YEAR and DOY extract parts, like:\n>\n> CREATE INDEX idx_taken_doy ON climate.measurement (EXTRACT(DOY from taken));\n> CREATE INDEX idx_taken_year ON climate.measurement (EXTRACT(YEAR from taken));\n>\n> Although maybe you don't need those, depending on how the date\n> datatype matching works in the planner with the EXTRACT function.\n>\n> Regards\n>\n> Thom\n>\n\nActually, you could change that last bit from:\n\n OR (\n                extract(DOY from m.taken) >= day_of_year_start OR\nextract(DOY from m.taken) <= day_of_year_end\n       )\n\nto\n\nOR extract(DOY from m.taken) NOT BETWEEN day_of_year_end AND day_of_year_start\n\nThat would be tidier and simpler :)\n\nThom\n", "msg_date": "Thu, 20 May 2010 19:58:53 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Thom Brown <[email protected]> writes:\n> On 20 May 2010 17:36, David Jarvis <[email protected]> wrote:\n> Okay, get your app to convert the month-date to a day of year, so we\n> have year_start, year_end, day_of_year_start, day_of_year_end\n\n> and your where clause would something like this:\n\n> WHERE extract(YEAR from m.taken) BETWEEN year1 and year2\n> AND (\n> \textract(DOY from m.taken) BETWEEN day_of_year_start AND day_of_year_end\n> \tOR (\n> \t\textract(DOY from m.taken) >= day_of_year_start OR extract(DOY from\n> m.taken) <= day_of_year_end\n> \t)\n> )\n\nextract(DOY) seems a bit problematic here, because its day numbering is\ngoing to be different between leap years and non-leap years, and David's\nproblem statement doesn't allow for off-by-one errors. You could\ncertainly invent your own function that worked similarly but always\ntranslated a given month/day to the same number.\n\nThe other thing that's messy here is the wraparound requirement.\nRather than trying an OR like the above (which I think doesn't quite\nwork anyway --- won't it select everything?), it would be better if\nyou can have the app distinguish wraparound from non-wraparound cases\nand issue different queries in the two cases. 
In the non-wrap case\n(start_day < end_day) it's pretty easy, just\n\tmy_doy(m.taken) BETWEEN start_val AND end_val\nThe easy way to handle the wrap case is\n\tmy_doy(m.taken) <= start_val OR my_doy(m.taken) >= end_val\nalthough I can't help feeling there should be a smarter way to do\nthis where you can use an AND range check on some modified expression\nderived from the date.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 15:02:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN? " }, { "msg_contents": "On 20 May 2010 20:02, Tom Lane <[email protected]> wrote:\n> Thom Brown <[email protected]> writes:\n>> On 20 May 2010 17:36, David Jarvis <[email protected]> wrote:\n>> Okay, get your app to convert the month-date to a day of year, so we\n>> have year_start, year_end, day_of_year_start, day_of_year_end\n>\n>> and your where clause would something like this:\n>\n>> WHERE extract(YEAR from m.taken) BETWEEN year1 and year2\n>> AND (\n>>       extract(DOY from m.taken) BETWEEN day_of_year_start AND day_of_year_end\n>>       OR (\n>>               extract(DOY from m.taken) >= day_of_year_start OR extract(DOY from\n>> m.taken) <= day_of_year_end\n>>       )\n>> )\n>\n> extract(DOY) seems a bit problematic here, because its day numbering is\n> going to be different between leap years and non-leap years, and David's\n> problem statement doesn't allow for off-by-one errors.  You could\n> certainly invent your own function that worked similarly but always\n> translated a given month/day to the same number.\n>\n> The other thing that's messy here is the wraparound requirement.\n> Rather than trying an OR like the above (which I think doesn't quite\n> work anyway --- won't it select everything?)\n\nNo. It only would if using BETWEEN SYMMETRIC.\n\nLike if m.taken is '2003-02-03', using a start day of year as 11th Nov\nand end as 17th Feb, it would match the 2nd part of the outer OR\nexpression. If you changed the end day of year to 2nd Feb, it would\nyield no result as nothing is between 11th Nov and 17th Feb as it's a\nnegative difference, and 2nd Feb is lower than the taken date so fails\nto match the first half of the inner most OR expression.\n\n> , it would be better if\n> you can have the app distinguish wraparound from non-wraparound cases\n> and issue different queries in the two cases.  In the non-wrap case\n> (start_day < end_day) it's pretty easy, just\n>        my_doy(m.taken) BETWEEN start_val AND end_val\n> The easy way to handle the wrap case is\n>        my_doy(m.taken) <= start_val OR my_doy(m.taken) >= end_val\n> although I can't help feeling there should be a smarter way to do\n> this where you can use an AND range check on some modified expression\n> derived from the date.\n>\n>                        regards, tom lane\n>\n\nYes, I guess I agree that the app can run different queries depending\non which date is higher. I hadn't factored leap years into the\nequation. Can't think of what could be done for those cases off the\ntop of my head. What is really needed is a way to match against day\nand month parts instead of day, month and year.... without resorting\nto casting to text of course.\n\nThom\n", "msg_date": "Thu, 20 May 2010 20:21:30 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" 
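One way to implement the day-and-month matching discussed in the two messages above, sketched here only as an illustration: an IMMUTABLE helper (the name day_key is hypothetical, as are the index name and the example key values) maps month/day to a leap-year-independent number, which can then back an expression index, with the non-wrap and wrap cases handled as two separate predicates.

    -- Hypothetical helper: maps a date to month*100 + day, so Dec 22 -> 1222
    -- and Mar 22 -> 322; the year is ignored, so leap years do not shift values.
    CREATE FUNCTION day_key(d date) RETURNS integer AS $$
      SELECT (EXTRACT(MONTH FROM $1) * 100 + EXTRACT(DAY FROM $1))::integer;
    $$ LANGUAGE sql IMMUTABLE;

    -- Expression index, created per child table, mirroring the existing
    -- measurement_12_013_s_idx / _y_idx naming convention.
    CREATE INDEX measurement_12_013_dk_idx
      ON climate.measurement_12_013 (day_key(taken));

    -- Non-wrap case (e.g. Jun 1 to Jul 1):
    --   WHERE day_key(m.taken) BETWEEN 601 AND 701
    -- Wrap case (e.g. Dec 22 to Mar 22, where the start key exceeds the end key):
    --   WHERE day_key(m.taken) >= 1222 OR day_key(m.taken) <= 322

The year restriction (1900 to 2009 in the example query) would still be applied separately, ideally as a plain range on m.taken; the day_key predicate only replaces the text-concatenation date arithmetic, so no casting to text is needed for the month/day comparison.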
}, { "msg_contents": "When using MySQL, the performance was okay (~5 seconds per query) using:\n\n date( concat_ws( '-', y.year, m.month, d.day ) ) between\n -- Start date.\n date( concat_ws( '-', y.year, $P{Month1}, $P{Day1} ) ) AND\n -- End date. Calculated by checking to see if the end date wraps\n -- into the next year. If it does, then add 1 to the current year.\n --\n date(\n concat_ws( '-',\n y.year + greatest( -1 *\n sign(\n datediff(\n date(\n concat_ws('-', y.year, $P{Month2}, $P{Day2} )\n ),\n date(\n concat_ws('-', y.year, $P{Month1}, $P{Day1} )\n )\n )\n ), 0\n ), $P{Month2}, $P{Day2}\n )\n )\n\nThis calculated the correct start days and end days, including leap years.\n\nWith MySQL, I \"normalized\" the date into three different tables: year\nreferences, month references, and day references. The days contained only\nthe day (of the month) the measurement was made and the measured value. The\nmonth references contained the month number for the measurement. The year\nreferences had the years and station. Each table had its own index on the\nyear, month, or day.\n\nWhen I had proposed that solution to the mailing list, I was introduced to a\nmore PostgreSQL-way, which was to use indexes on the date field.\n\nIn PostgreSQL, I have a single \"measurement\" table for the data (divided\ninto 72 child tables), which includes the date and station. I like this\nbecause it feels clean and it is easier to understand. So far, however, it\nhas not been fast.\n\nI was thinking that I could add three more columns to the measurement table:\n\nyear_taken, month_taken, day_taken\n\nThen index those. That should allow me to avoid extracting years, months,\nand days from the *m.taken* date column.\n\nWhat do you think?\n\nThanks again!\nDave\n\nWhen using MySQL, the performance was okay (~5 seconds per query) using:  date( concat_ws( '-', y.year, m.month, d.day ) ) between\n    -- Start date.    date( concat_ws( '-', y.year, $P{Month1}, $P{Day1} ) ) AND\n    -- End date. Calculated by checking to see if the end date wraps    -- into the next year. If it does, then add 1 to the current year.\n    --    date(\n      concat_ws( '-',        y.year + greatest( -1 *\n          sign(            datediff(\n              date(                concat_ws('-', y.year, $P{Month2}, $P{Day2} )\n              ),              date(\n                concat_ws('-', y.year, $P{Month1}, $P{Day1} )              )\n            )          ), 0\n        ), $P{Month2}, $P{Day2}      )\n    )This calculated the correct start days and end days, including leap years.With MySQL, I \"normalized\" the date into three different tables: year references, month references, and day references. The days contained only the day (of the month) the measurement was made and the measured value. The month references contained the month number for the measurement. The year references had the years and station. Each table had its own index on the year, month, or day.\nWhen I had proposed that solution to the mailing list, I was introduced to a more PostgreSQL-way, which was to use indexes on the date field.In PostgreSQL, I have a single \"measurement\" table for the data (divided into 72 child tables), which includes the date and station. I like this because it feels clean and it is easier to understand. So far, however, it has not been fast.\nI was thinking that I could add three more columns to the measurement table:year_taken, month_taken, day_takenThen index those. 
That should allow me to avoid extracting years, months, and days from the m.taken date column.\nWhat do you think?Thanks again!Dave", "msg_date": "Thu, 20 May 2010 12:45:53 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "David Jarvis <[email protected]> writes:\n> I was thinking that I could add three more columns to the measurement table:\n> year_taken, month_taken, day_taken\n> Then index those. That should allow me to avoid extracting years, months,\n> and days from the *m.taken* date column.\n\nYou could, but I don't think there's any advantage to that versus\nputting indexes on extract(day from taken) etc. The extra fields\neat more space in the table proper, and the functional index isn't\nreally any more expensive than a plain index. Not to mention that\nyou can have bugs with changing the date and forgetting to update\nthe derived columns, etc etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 15:52:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN? " }, { "msg_contents": "What if I were to have the application pass in two sets of date ranges?\n\nFor the condition of Dec 22 to Mar 22:\n\nDec 22 would become:\n\n - Dec 22 - Dec 31\n\nMar 22 would become:\n\n - Jan 1 - Mar 22\n\nThe first range would always be for the current year; the second range would\nalways be for the year following the current year.\n\nWould that allow PostgreSQL to use the index?\n\nDave\n\nWhat if I were to have the application pass in two sets of date ranges?For the condition of Dec 22 to Mar 22:Dec 22 would become:\nDec 22 - Dec 31Mar 22 would become:Jan 1 - Mar 22The first range would always be for the current year; the second range would always be for the year following the current year.\nWould that allow PostgreSQL to use the index?Dave", "msg_date": "Thu, 20 May 2010 12:58:16 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "David Jarvis <[email protected]> writes:\n> What if I were to have the application pass in two sets of date ranges?\n> For the condition of Dec 22 to Mar 22:\n> Dec 22 would become:\n> - Dec 22 - Dec 31\n> Mar 22 would become:\n> - Jan 1 - Mar 22\n\nI think what you're essentially describing here is removing the OR from\nthe query in favor of issuing two queries and then combining the results\nin the app. Yeah, you could do that, but one would hope that it isn't\nfaster ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 16:03:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN? " }, { "msg_contents": "I was hoping to eliminate this part of the query:\n\n (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n sign(\n (extract( YEAR FROM m.taken )||'-12-31')::date -\n (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n ) AS text)||'-12-31')::date\n\nThat uses functions to create the dates, which is definitely the problem.\nI'd still have the query return all the results for both data sets. 
If\nproviding the query with two data sets won't work, what will?\n\nDave\n\nI was hoping to eliminate this part of the query:        (cast(extract( YEAR FROM m.taken ) + greatest( -1 *          sign(            (extract( YEAR FROM m.taken )||'-12-31')::date -            (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n        ) AS text)||'-12-31')::dateThat uses functions to create the dates, which is definitely the problem. I'd still have the query return all the results for both data sets. If providing the query with two data sets won't work, what will?\nDave", "msg_date": "Thu, 20 May 2010 13:09:05 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "David Jarvis <[email protected]> writes:\n> I was hoping to eliminate this part of the query:\n> (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n> sign(\n> (extract( YEAR FROM m.taken )||'-12-31')::date -\n> (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n> ) AS text)||'-12-31')::date\n\n> That uses functions to create the dates, which is definitely the problem.\n\nWell, it's not the functions per se that's the problem, it's the lack of\na useful index on the expression. But as somebody remarked upthread,\nthat expression doesn't look correct at all. Doesn't the whole\ngreatest() subexpression reduce to a constant?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 16:18:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN? " }, { "msg_contents": "Hi,\n\nI was still referring to the measurement table. You have an index on\n> stationid, but still seem to be getting a sequential scan. Maybe the planner\n> does not realise that you are selecting a small number of stations. 
Posting\n> an EXPLAIN ANALYSE would really help here.\n>\n\nHere is the result from an *EXPLAIN ANALYZE*:\n\n\"HashAggregate (cost=5486752.27..5486756.27 rows=200 width=12) (actual\ntime=314328.657..314328.728 rows=110 loops=1)\"\n\" -> Hash Semi Join (cost=1045.52..5451155.11 rows=4746289 width=12)\n(actual time=197.950..313605.795 rows=463926 loops=1)\"\n\" Hash Cond: (m.station_id = s.id)\"\n\" -> Append (cost=0.00..5343318.08 rows=4746289 width=16) (actual\ntime=74.411..306533.820 rows=42737997 loops=1)\"\n\" -> Seq Scan on measurement m (cost=0.00..148.00 rows=1\nwidth=20) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_01_001 m (cost=0.00..438102.26\nrows=389080 width=16) (actual time=74.409..24800.171 rows=3503256 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_02_001 m (cost=0.00..399834.28\nrows=354646 width=16) (actual time=29.217..22209.877 rows=3196631 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_03_001 m (cost=0.00..438380.23\nrows=389148 width=16) (actual time=15.915..24366.766 rows=3503937 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= 
(((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_04_001 m (cost=0.00..432850.57\nrows=384539 width=16) (actual time=15.852..24280.031 rows=3461931 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_05_001 m (cost=0.00..466852.96\nrows=415704 width=16) (actual time=19.495..26158.828 rows=3737276 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_06_001 m (cost=0.00..458098.05\nrows=407244 width=16) (actual time=25.062..26054.019 rows=3668108 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_07_001 m (cost=0.00..472679.60\nrows=420736 width=16) (actual time=17.852..26829.286 rows=3784626 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= 
(((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_08_001 m (cost=0.00..471200.02\nrows=418722 width=16) (actual time=20.781..26875.574 rows=3772848 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_09_001 m (cost=0.00..447468.05\nrows=397415 width=16) (actual time=17.454..25355.688 rows=3580395 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_10_001 m (cost=0.00..449691.17\nrows=399362 width=16) (actual time=17.911..25144.829 rows=3594957 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_11_001 m (cost=0.00..429363.73\nrows=380826 width=16) (actual time=18.944..24106.477 rows=3430085 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= 
(((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Seq Scan on measurement_12_001 m (cost=0.00..438649.19\nrows=388866 width=16) (actual time=22.830..24466.324 rows=3503947 loops=1)\"\n\" Filter: ((category_id = 1) AND (date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2009::double precision) AND (taken >= (((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND\n(taken <= ((((date_part('year'::text, (taken)::timestamp without time zone)\n+ GREATEST(((-1)::double precision * sign((((((date_part('year'::text,\n(taken)::timestamp without time zone))::text || '-12-31'::text))::date -\n(((date_part('year'::text, (taken)::timestamp without time zone))::text ||\n'-01-01'::text))::date))::double precision)), 0::double precision)))::text\n|| '-12-31'::text))::date))\"\n\" -> Hash (cost=994.94..994.94 rows=4046 width=4) (actual\ntime=120.793..120.793 rows=129 loops=1)\"\n\" -> Nested Loop (cost=0.00..994.94 rows=4046 width=4)\n(actual time=71.112..120.728 rows=129 loops=1)\"\n\" Join Filter: ((6371.009::double precision *\nsqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double\nprecision), 2::double precision) + (cos((radians(((c.latitude_decimal +\ns.latitude_decimal))::double precision) / 2::double precision)) *\npow(radians(((c.longitude_decimal - s.longitude_decimal))::double\nprecision), 2::double precision))))) <= 50::double precision)\"\n\" -> Index Scan using city_pkey1 on city c\n(cost=0.00..6.27 rows=1 width=16) (actual time=61.311..61.314 rows=1\nloops=1)\"\n\" Index Cond: (id = 5182)\"\n\" -> Seq Scan on station s (cost=0.00..321.08\nrows=12138 width=20) (actual time=9.745..19.035 rows=12139 loops=1)\"\n\" Filter: ((s.elevation >= 0) AND (s.elevation <=\n3000))\"\n\"Total runtime: 314329.201 ms\"\n\nDave\n\nHi,I was still referring to the measurement table. You have an index on stationid, but still seem to be getting a sequential scan. Maybe the planner does not realise that you are selecting a small number of stations. 
Posting an EXPLAIN ANALYSE would really help here.\nHere is the result from an EXPLAIN ANALYZE:\"HashAggregate  (cost=5486752.27..5486756.27 rows=200 width=12) (actual time=314328.657..314328.728 rows=110 loops=1)\"\"  ->  Hash Semi Join  (cost=1045.52..5451155.11 rows=4746289 width=12) (actual time=197.950..313605.795 rows=463926 loops=1)\"\n\"        Hash Cond: (m.station_id = s.id)\"\"        ->  Append  (cost=0.00..5343318.08 rows=4746289 width=16) (actual time=74.411..306533.820 rows=42737997 loops=1)\"\n\"              ->  Seq Scan on measurement m  (cost=0.00..148.00 rows=1 width=20) (actual time=0.001..0.001 rows=0 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_01_001 m  (cost=0.00..438102.26 rows=389080 width=16) (actual time=74.409..24800.171 rows=3503256 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_02_001 m  (cost=0.00..399834.28 rows=354646 width=16) (actual time=29.217..22209.877 rows=3196631 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_03_001 m  (cost=0.00..438380.23 rows=389148 width=16) (actual time=15.915..24366.766 rows=3503937 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp 
without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_04_001 m  (cost=0.00..432850.57 rows=384539 width=16) (actual time=15.852..24280.031 rows=3461931 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_05_001 m  (cost=0.00..466852.96 rows=415704 width=16) (actual time=19.495..26158.828 rows=3737276 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_06_001 m  (cost=0.00..458098.05 rows=407244 width=16) (actual time=25.062..26054.019 rows=3668108 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_07_001 m  (cost=0.00..472679.60 rows=420736 width=16) (actual time=17.852..26829.286 rows=3784626 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) 
>= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_08_001 m  (cost=0.00..471200.02 rows=418722 width=16) (actual time=20.781..26875.574 rows=3772848 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_09_001 m  (cost=0.00..447468.05 rows=397415 width=16) (actual time=17.454..25355.688 rows=3580395 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_10_001 m  (cost=0.00..449691.17 rows=399362 width=16) (actual time=17.911..25144.829 rows=3594957 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_11_001 m  (cost=0.00..429363.73 rows=380826 width=16) (actual time=18.944..24106.477 rows=3430085 loops=1)\"\"                    Filter: 
((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"              ->  Seq Scan on measurement_12_001 m  (cost=0.00..438649.19 rows=388866 width=16) (actual time=22.830..24466.324 rows=3503947 loops=1)\"\"                    Filter: ((category_id = 1) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2009::double precision) AND (taken >= (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date) AND (taken <= ((((date_part('year'::text, (taken)::timestamp without time zone) + GREATEST(((-1)::double precision * sign((((((date_part('year'::text, (taken)::timestamp without time zone))::text || '-12-31'::text))::date - (((date_part('year'::text, (taken)::timestamp without time zone))::text || '-01-01'::text))::date))::double precision)), 0::double precision)))::text || '-12-31'::text))::date))\"\n\"        ->  Hash  (cost=994.94..994.94 rows=4046 width=4) (actual time=120.793..120.793 rows=129 loops=1)\"\"              ->  Nested Loop  (cost=0.00..994.94 rows=4046 width=4) (actual time=71.112..120.728 rows=129 loops=1)\"\n\"                    Join Filter: ((6371.009::double precision * sqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double precision), 2::double precision) + (cos((radians(((c.latitude_decimal + s.latitude_decimal))::double precision) / 2::double precision)) * pow(radians(((c.longitude_decimal - s.longitude_decimal))::double precision), 2::double precision))))) <= 50::double precision)\"\n\"                    ->  Index Scan using city_pkey1 on city c  (cost=0.00..6.27 rows=1 width=16) (actual time=61.311..61.314 rows=1 loops=1)\"\"                          Index Cond: (id = 5182)\"\n\"                    ->  Seq Scan on station s  (cost=0.00..321.08 rows=12138 width=20) (actual time=9.745..19.035 rows=12139 loops=1)\"\"                          Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"\n\"Total runtime: 314329.201 ms\"Dave", "msg_date": "Thu, 20 May 2010 13:19:25 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "The greatest() expression reduces to either the current year (year + 0) or\nthe next year (year + 1) by taking the sign of the difference in start/end\ndays. This allows me to derive an end date, such as:\n\nDec 22, 1900 to Mar 22, 1901\n\nThen I check if the measured date falls between those two dates.\n\nThe expression might not be correct as I'm still quite new to PostgreSQL's\nsyntax.\n\nDave\n\nThe greatest() expression reduces to either the current year (year + 0) or the next year (year + 1) by taking the sign of the difference in start/end days. 
This allows me to derive an end date, such as:\nDec 22, 1900 to Mar 22, 1901Then I check if the measured date falls between those two dates.The expression might not be correct as I'm still quite new to PostgreSQL's syntax.Dave", "msg_date": "Thu, 20 May 2010 13:30:48 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "* David Jarvis ([email protected]) wrote:\n> I was hoping to eliminate this part of the query:\n> (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n> sign(\n> (extract( YEAR FROM m.taken )||'-12-31')::date -\n> (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n> ) AS text)||'-12-31')::date\n>\n> That uses functions to create the dates, which is definitely the problem. \n[...]\n> The greatest() expression reduces to either the current year (year + 0) or\n> the next year (year + 1) by taking the sign of the difference in start/end\n> days. This allows me to derive an end date, such as:\n> \n> Dec 22, 1900 to Mar 22, 1901\n\nSomething in here really smells fishy to me. Those extract's above are\nworking on values which are from the table.. Why aren't you using these\nfunctions to figure out how to construct the actual dates based on the\nvalues provided by the *user*..?\n\nLooking at your screenshot, I think you need to take those two date\nvalues that the user provides, make them into actual dates (maybe you\nneed a CASE statement or something similar, that shouldn't be that hard,\nand PG should just run that whole bit once, since to PG's point of view,\nit's all constants), and then use those dates to query the tables.\n\nAlso, you're trying to do constraint_exclusion, but have you made sure\nthat it's turned on? And have you made sure that those constraints are\nreally the right ones and that they make sense? You're using a bunch of\nextract()'s there too, why not just specify a CHECK constraint on the\ndate ranges which are allowed in the table..?\n\nMaybe I've misunderstood the whole point here, but I don't think so.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 20 May 2010 17:01:06 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Tom Lane wrote:\n> David Jarvis <[email protected]> writes:\n> \n>> I was hoping to eliminate this part of the query:\n>> (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n>> sign(\n>> (extract( YEAR FROM m.taken )||'-12-31')::date -\n>> (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n>> ) AS text)||'-12-31')::date\n>> \n>> That uses functions to create the dates, which is definitely the problem.\n>> \n>\n> Well, it's not the functions per se that's the problem, it's the lack of\n> a useful index on the expression. But as somebody remarked upthread,\n> that expression doesn't look correct at all. Doesn't the whole\n> greatest() subexpression reduce to a constant?\n> \nThat somebody was probably me. I still think the whole BETWEEN \nexpression is a tautology. A small test did not provide a \ncounterexample. In the select below everything but the select was \ncopy/pasted.\n\ncreate table m (taken timestamptz);\ninsert into m values (now());\ninsert into m values ('1900-12-31');\ninsert into m values ('2000-04-06');\nselect m.taken BETWEEN\n /* Start date. */\n (extract( YEAR FROM m.taken )||'-01-01')::date AND\n /* End date. 
Calculated by checking to see if the end date wraps\n into the next year. If it does, then add 1 to the current year.\n */\n (cast(extract( YEAR FROM m.taken ) + greatest( -1 *\n sign(\n (extract( YEAR FROM m.taken )||'-12-31')::date -\n (extract( YEAR FROM m.taken )||'-01-01')::date ), 0\n ) AS text)||'-12-31')::date from m;\n ?column?\n----------\n t\n t\n t\n(3 rows)\n\nAnother thing is that IF the climate measurements is partitioned on time \n(e.g each year?), then a function based index on the year part of \nm.taken is useless, pardon my french. I'm not sure if it is partitioned \nthat way but it is an interesting thing to inspect, and perhaps rewrite \nthe query to use constraint exclusion.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Thu, 20 May 2010 23:08:21 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "* David Jarvis ([email protected]) wrote:\n> There are 72 child tables, each having a year index and a station index,\n> which are defined as follows:\n\nSoooo, my thoughts:\n\nPartition by something that makes sense... Typically, I'd say that you\nwould do it by the category id and when the measurement was taken. Then\nset up the appropriate check constraints on that so that PG can use\nconstraint_exclusion to identify what table it needs to actually go look\nin. How much data are we talking about, by the way? (# of rows) If\nyou're not in the milions, partitioning at all is probably overkill and\nmight be part of the problem here..\n\ncreate table climate.measurement_12_013 (\n\tid bigint not null DEFAULT nextval('climate.measurement_id_seq'::regclass),\n\tstation_id integer not null,\n\ttaken date not null,\n\tamount numeric(8,2) not null,\n\tcategory_id integer not null,\n\tflag varchar(1) not null default ' ',\n\tcheck (category_id = 7),\n\tcheck (taken >= '1913-12-01' and taken <= '1913-12-31')\n\t)\n inherits (climate.measurement);\n\n CREATE INDEX measurement_12_013_s_idx\n ON climate.measurement_12_013\n USING btree\n (station_id);\n\n CREATE INDEX measurement_12_013_d_idx\n ON climate.measurement_12_013\n USING btree\n (taken);\n\n SELECT\n count(1) AS measurements,\n avg(m.amount) AS amount\n FROM\n climate.measurement m\n WHERE\n m.station_id IN (\n SELECT\n s.id\n FROM\n climate.station s,\n climate.city c\n WHERE\n /* For one city... */\n c.id = 5182 AND\n\n /* Where stations are within an elevation range... */\n s.elevation BETWEEN 0 AND 3000 AND\n\n /* and within a specific radius... */\n\t\t\t-- Seriously, you should be using PostGIS here, that can\n\t\t\t-- then use a GIST index to do this alot faster with a\n\t\t\t-- bounding box...\n 6371.009 * SQRT(\n POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n POW(RADIANS(c.longitude_decimal - s.longitude_decimal),\n2))\n ) <= 50\n ) AND\n\n /* Data before 1900 is shaky; insufficient after 2009. */\n\t -- I have no idea why this is here.. Aren't you forcing\n\t -- this already in your application code that's checking\n\t -- user input values? Also, do you actually *have* any\n\t -- data outside this range? If so, just pull out the\n\t -- tables with that data from the inheiritance\n\t -- m.taken >= '1900-01-01' AND m.taken <= '2009-12-31'\n -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND\n\n /* Whittled down by category... */\n m.category_id = 1 AND\n\n /* Between the selected days and years... 
*/\n\t CASE\n\t WHEN (user_start_year || user_start_day <= user_stop_year || user_stop) THEN\n\t m.taken BETWEEN user_start_year || user_start_day AND user_stop_year || user_stop\n\t WHEN (user_start_year || user_start_day > user_stop_year || user_stop) THEN\n\t m.taken BETWEEN (user_start_year || user_start_day)::date AND\n\t\t ((user_stop_year || user_stop)::date + '1\n\t\t year'::interval)::date\n\t-- I don't think you need/want this..?\n -- GROUP BY\n -- extract( YEAR FROM m.taken )\n\n\t\tEnjoy,\n\n\t\t\tStephen", "msg_date": "Thu, 20 May 2010 17:19:19 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "* David Jarvis ([email protected]) wrote:\n> I was still referring to the measurement table. You have an index on\n> > stationid, but still seem to be getting a sequential scan. Maybe the planner\n> > does not realise that you are selecting a small number of stations. Posting\n> > an EXPLAIN ANALYSE would really help here.\n> >\n> \n> Here is the result from an *EXPLAIN ANALYZE*:\n\nYeah.. this is a horrible, horrible plan. It does look like you've got\nsome serious data tho, at least. Basically, PG is sequentially scanning\nthrough all of the tables in your partitioning setup. What is\nconstraint_exclusion set to? What version of PG is this? Do the\nresults og this query look at all correct to you?\n\nHave you considered an index on elevation, btw? How many records in\nthat city table are there and how many are actually in that range?\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 20 May 2010 17:30:29 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Hi,\n\n~300 million measurements\n~12000 stations (not 70000 as I mentioned before)\n~5500 cities\n\nsome serious data tho, at least. Basically, PG is sequentially scanning\n> through all of the tables in your partitioning setup. What is\n> constraint_exclusion set to? What version of PG is this? Do the\n> results og this query look at all correct to you?\n>\n\nPG 8.4\n\nshow constraint_exclusion;\npartition\n\nWith so much data, it is really hard to tell if the query looks okay without\nhaving it visualized. I can't visualize it until I have the query set up\ncorrectly. At the moment it looks like the query is wrong. :-(\n\nHave you considered an index on elevation, btw? How many records in\n> that city table are there and how many are actually in that range?\n>\n\nI've since added a constraint on elevation; it'll help a bit:\n\nCREATE INDEX station_elevation_idx\n ON climate.station\n USING btree\n (elevation);\n\nDave\n\nHi,~300 million measurements~12000 stations (not 70000 as I mentioned before)~5500 cities\n\nsome serious data tho, at least.  Basically, PG is sequentially scanning\nthrough all of the tables in your partitioning setup.  What is\nconstraint_exclusion set to?  What version of PG is this?  Do the\nresults og this query look at all correct to you?PG 8.4show constraint_exclusion;partition With so much data, it is really hard to tell if the query looks okay without having it visualized. I can't visualize it until I have the query set up correctly. At the moment it looks like the query is wrong. :-(\n\n\nHave you considered an index on elevation, btw?  
How many records in\nthat city table are there and how many are actually in that range?I've since added a constraint on elevation; it'll help a bit:CREATE INDEX station_elevation_idx\n  ON climate.station  USING btree  (elevation);Dave", "msg_date": "Thu, 20 May 2010 17:19:06 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Hi,\n\n check (taken >= '1913-12-01' and taken <= '1913-12-31')\n>\n\nI don't think I want to constrain by year, for a few reasons:\n\n1. There are a lot of years -- over 110.\n2. There will be more years added (both in the future for 2010 and in the\npast as I get data from other sources).\n\nCurrently I have it constrained by month and category. Each table then has\nabout 3 million rows (which is 216 million, but some tables have more, which\nbrings it to 273 million).\n\n\n> /* Data before 1900 is shaky; insufficient after 2009. */\n> -- I have no idea why this is here.. Aren't you forcing\n>\n\nMostly temporary. It is also constrained by the user interface; however that\nwill likely change in the future. It should not be present in the database\nstructure itself.\n\n\n\n> /* Between the selected days and years... */\n>\n> CASE\n> WHEN (user_start_year || user_start_day <= user_stop_year ||\n> user_stop) THEN\n> m.taken BETWEEN user_start_year || user_start_day AND\n> user_stop_year || user_stop\n> WHEN (user_start_year || user_start_day > user_stop_year ||\n> user_stop) THEN\n> m.taken BETWEEN (user_start_year || user_start_day)::date AND\n> ((user_stop_year || user_stop)::date + '1\n> year'::interval)::date\n> -- I don't think you need/want this..?\n>\n\nUser selects this:\n\n1. Years: 1950 to 1974\n2. Days: Dec 22 to Mar 22\n\nThis means that the query must average data between Dec 22 1950 and Mar 22\n1951 for the year of 1950. For 1951, the range is Dec 22 1951 to Mar 22\n1952, and so on. If we switch the calendar (or alter the seasons) so that\nwinter starts Jan 1st (or ends Dec 31), then I could simplify the query. ;-)\n\nDave\n\nHi,\n       check (taken >= '1913-12-01' and taken <= '1913-12-31')I don't think I want to constrain by year, for a few reasons:1. There are a lot of years -- over 110.\n2. There will be more years added (both in the future for 2010 and in the past as I get data from other sources).Currently I have it constrained by month and category. Each table then has about 3 million rows (which is 216 million, but some tables have more, which brings it to 273 million).\n \n      /* Data before 1900 is shaky; insufficient after 2009. */\n          -- I have no idea why this is here..  Aren't you forcingMostly temporary. It is also constrained by the user interface; however that will likely change in the future. It should not be present in the database structure itself.\n \n      /* Between the selected days and years... */\n           CASE\n             WHEN (user_start_year || user_start_day <= user_stop_year || user_stop) THEN\n             m.taken BETWEEN user_start_year || user_start_day  AND user_stop_year || user_stop\n             WHEN (user_start_year || user_start_day > user_stop_year || user_stop) THEN\n             m.taken BETWEEN (user_start_year || user_start_day)::date  AND\n                 ((user_stop_year || user_stop)::date + '1\n                 year'::interval)::date\n        -- I don't think you need/want this..?\nUser selects this:1. Years: 1950 to 19742. 
Days: Dec 22 to Mar 22This means that the query must average data between Dec 22 1950 and Mar 22 1951 for the year of 1950. For 1951, the range is Dec 22 1951 to Mar 22 1952, and so on. If we switch the calendar (or alter the seasons) so that winter starts Jan 1st (or ends Dec 31), then I could simplify the query. ;-)\nDave", "msg_date": "Thu, 20 May 2010 17:28:26 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Hi,\n\nSomething in here really smells fishy to me. Those extract's above are\n> working on values which are from the table.. Why aren't you using these\n> functions to figure out how to construct the actual dates based on the\n> values provided by the *user*..?\n>\n\nBecause I've only been using PostgreSQL for one week. For the last several\nyears I've been developing with Oracle on mid-sized systems (40 million\nbooks, 5 million reservations per year, etc.). And even then, primarily on\nthe user-facing side of the applications.\n\nLooking at your screenshot, I think you need to take those two date\n> values that the user provides, make them into actual dates (maybe you\n> need a CASE statement or something similar, that shouldn't be that hard,\n>\n\nSo the user selects Dec 22 and Mar 22 for 1900 to 2009 and the system feeds\nthe report a WHERE clause that looks like:\n\n m.taken BETWEEN '22-12-1900'::date AND '22-03-1901'::date and\n m.taken BETWEEN '22-12-1901'::date AND '22-03-1902'::date and\n m.taken BETWEEN '22-12-1902'::date AND '22-03-1903'::date and ...\n\nThat tightly couples the report query to the code that sets the report\nengine parameters. One of the parameters would be SQL code in the form of a\ndynamically crafted WHERE clause. I'd rather keep the SQL code that is used\nto create the report entirely with the report engine if at all possible.\n\n\n> Also, you're trying to do constraint_exclusion, but have you made sure\n> that it's turned on? And have you made sure that those constraints are\n> really the right ones and that they make sense? You're using a bunch of\n> extract()'s there too, why not just specify a CHECK constraint on the\n> date ranges which are allowed in the table..?\n>\n\nI don't know what the date ranges are? So I can't partition them by year?\n\nRight now I created 72 child tables by using the category and month. This\nmay have been a bad choice. But at least all the data is in the system now\nso dissecting or integrating it back in different ways shouldn't take days.\n\nThanks everyone for all your help, I really appreciate the time you've taken\nto guide me in the right direction to make the system as fast as it can be.\n\nDave\n\nHi,Something in here really smells fishy to me.  Those extract's above are\n\nworking on values which are from the table..  Why aren't you using these\nfunctions to figure out how to construct the actual dates based on the\nvalues provided by the *user*..?Because I've only been using PostgreSQL for one week. For the last several years I've been developing with Oracle on mid-sized systems (40 million books, 5 million reservations per year, etc.). 
And even then, primarily on the user-facing side of the applications.\n\n\nLooking at your screenshot, I think you need to take those two date\nvalues that the user provides, make them into actual dates (maybe you\nneed a CASE statement or something similar, that shouldn't be that hard,So the user selects Dec 22 and Mar 22 for 1900 to 2009 and the system feeds the report a WHERE clause that looks like:\n  m.taken BETWEEN '22-12-1900'::date AND '22-03-1901'::date and  m.taken BETWEEN '22-12-1901'::date AND '22-03-1902'::date and  m.taken BETWEEN '22-12-1902'::date AND '22-03-1903'::date and ...\nThat tightly couples the report query to the code that sets the report engine parameters. One of the parameters would be SQL code in the form of a dynamically crafted WHERE clause. I'd rather keep the SQL code that is used to create the report entirely with the report engine if at all possible.\n \nAlso, you're trying to do constraint_exclusion, but have you made sure\nthat it's turned on?  And have you made sure that those constraints are\nreally the right ones and that they make sense?  You're using a bunch of\nextract()'s there too, why not just specify a CHECK constraint on the\ndate ranges which are allowed in the table..?I don't know what the date ranges are? So I can't partition them by year?Right now I created 72 child tables by using the category and month. This may have been a bad choice. But at least all the data is in the system now so dissecting or integrating it back in different ways shouldn't take days.\nThanks everyone for all your help, I really appreciate the time you've taken to guide me in the right direction to make the system as fast as it can be.Dave", "msg_date": "Thu, 20 May 2010 17:46:43 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" 
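A minimal sketch of one way to express the wrap-around season in a single WHERE clause instead of pasting one BETWEEN per year: generate the start years in SQL and join each measurement to the season that begins in that year. The parameter values (category 1, years 1950 to 1974, Dec 22 to Mar 22) are only the example from the discussion above, and the station filter is omitted for brevity.

SELECT
  y.yr          AS year_taken,  -- each season is labelled by its start year
  avg(m.amount) AS amount
FROM
  generate_series(1950, 1974) AS y(yr),
  climate.measurement m
WHERE
  m.category_id = 1 AND
  m.taken >= (y.yr || '-12-22')::date AND       -- Dec 22 of the start year
  m.taken <= ((y.yr + 1) || '-03-22')::date     -- Mar 22 of the following year
GROUP BY
  y.yr
ORDER BY
  y.yr;

Consecutive seasons do not overlap, so each row matches at most one start year, and the report engine only has to hand over the two years and the two day boundaries as parameters rather than building SQL text.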
}, { "msg_contents": "I took out the date conditions:\n\nSELECT\n m.*\nFROM\n climate.measurement m\nWHERE\n m.category_id = 1 and\n m.station_id = 2043\n\nThis uses the station indexes:\n\n\"Result (cost=0.00..21781.18 rows=8090 width=28)\"\n\" -> Append (cost=0.00..21781.18 rows=8090 width=28)\"\n\" -> Seq Scan on measurement m (cost=0.00..28.00 rows=1 width=38)\"\n\" Filter: ((category_id = 1) AND (station_id = 2043))\"\n\" -> Bitmap Heap Scan on measurement_01_001 m (cost=11.79..1815.67\nrows=677 width=28)\"\n\" Recheck Cond: (station_id = 2043)\"\n\" Filter: (category_id = 1)\"\n\" -> Bitmap Index Scan on measurement_01_001_s_idx\n(cost=0.00..11.62 rows=677 width=0)\"\n\" Index Cond: (station_id = 2043)\"\n\" -> Bitmap Heap Scan on measurement_02_001 m (cost=14.47..1682.18\nrows=627 width=28)\"\n\" Recheck Cond: (station_id = 2043)\"\n\" Filter: (category_id = 1)\"\n\" -> Bitmap Index Scan on measurement_02_001_s_idx\n(cost=0.00..14.32 rows=627 width=0)\"\n\" Index Cond: (station_id = 2043)\"\n\n2500+ rows in 185 milliseconds.\n\nThat is pretty good (I'll need it to be better but for now it works).\n\nThen combined the selection of the station:\n\nSELECT\n m.*\nFROM\n climate.measurement m,\n (SELECT\n s.id\n FROM\n climate.station s,\n climate.city c\n WHERE\n c.id = 5182 AND\n s.elevation BETWEEN 0 AND 3000 AND\n 6371.009 * SQRT(\n POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))\n ) <= 25\n ) t\nWHERE\n m.category_id = 1 and\n m.station_id = t.id\n\nThe station index is no longer used, resulting in full table scans:\n\n\"Hash Join (cost=1045.52..1341150.09 rows=14556695 width=28)\"\n\" Hash Cond: (m.station_id = s.id)\"\n\" -> Append (cost=0.00..867011.99 rows=43670085 width=28)\"\n\" -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=38)\"\n\" Filter: (category_id = 1)\"\n\" -> Seq Scan on measurement_01_001 m (cost=0.00..71086.96\nrows=3580637 width=28)\"\n\" Filter: (category_id = 1)\"\n\" -> Seq Scan on measurement_02_001 m (cost=0.00..64877.40\nrows=3267872 width=28)\"\n\" Filter: (category_id = 1)\"\n\" -> Seq Scan on measurement_03_001 m (cost=0.00..71131.44\nrows=3582915 width=28)\"\n\" Filter: (category_id = 1)\"\n\nHow do I avoid the FTS?\n\n(I know about PostGIS but I can only learn and do so much at once.) 
;-)\n\nHere's the station query:\n\nSELECT\n s.id\nFROM\n climate.station s,\n climate.city c\nWHERE\n c.id = 5182 AND\n s.elevation BETWEEN 0 AND 3000 AND\n 6371.009 * SQRT(\n POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))\n ) <= 25\n\nAnd its EXPLAIN:\n\n\"Nested Loop (cost=0.00..994.94 rows=4046 width=4)\"\n\" Join Filter: ((6371.009::double precision *\nsqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double\nprecision), 2::double precision) + (cos((radians(((c.latitude_decimal +\ns.latitude_decimal))::double precision) / 2::double precision)) *\npow(radians(((c.longitude_decimal - s.longitude_decimal))::double\nprecision), 2::double precision))))) <= 25::double precision)\"\n\" -> Index Scan using city_pkey1 on city c (cost=0.00..6.27 rows=1\nwidth=16)\"\n\" Index Cond: (id = 5182)\"\n\" -> Seq Scan on station s (cost=0.00..321.08 rows=12138 width=20)\"\n\" Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"\n\nI get a set of 78 rows returned in very little time.\n\nThanks again!\nDave\n\nI took out the date conditions:SELECT  m.*FROM  climate.measurement mWHERE  m.category_id = 1 and  m.station_id = 2043This uses the station indexes:\n\"Result  (cost=0.00..21781.18 rows=8090 width=28)\"\"  ->  Append  (cost=0.00..21781.18 rows=8090 width=28)\"\"        ->  Seq Scan on measurement m  (cost=0.00..28.00 rows=1 width=38)\"\n\"              Filter: ((category_id = 1) AND (station_id = 2043))\"\"        ->  Bitmap Heap Scan on measurement_01_001 m  (cost=11.79..1815.67 rows=677 width=28)\"\"              Recheck Cond: (station_id = 2043)\"\n\"              Filter: (category_id = 1)\"\"              ->  Bitmap Index Scan on measurement_01_001_s_idx  (cost=0.00..11.62 rows=677 width=0)\"\"                    Index Cond: (station_id = 2043)\"\n\"        ->  Bitmap Heap Scan on measurement_02_001 m  (cost=14.47..1682.18 rows=627 width=28)\"\"              Recheck Cond: (station_id = 2043)\"\"              Filter: (category_id = 1)\"\n\"              ->  Bitmap Index Scan on measurement_02_001_s_idx  (cost=0.00..14.32 rows=627 width=0)\"\"                    Index Cond: (station_id = 2043)\"2500+ rows in 185 milliseconds.\nThat is pretty good (I'll need it to be better but for now it works).Then combined the selection of the station:SELECT  m.*FROM  climate.measurement m,\n  (SELECT     s.id   FROM     climate.station s,     climate.city c   WHERE     c.id = 5182 AND     s.elevation BETWEEN 0 AND 3000 AND     6371.009 * SQRT(\n       POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +       (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *        POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))     ) <= 25\n   ) tWHERE  m.category_id = 1 and  m.station_id = t.idThe station index is no longer used, resulting in full table scans:\"Hash Join  (cost=1045.52..1341150.09 rows=14556695 width=28)\"\n\"  Hash Cond: (m.station_id = s.id)\"\"  ->  Append  (cost=0.00..867011.99 rows=43670085 width=28)\"\"        ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=38)\"\n\"              Filter: (category_id = 1)\"\"        ->  Seq Scan on measurement_01_001 m  (cost=0.00..71086.96 rows=3580637 width=28)\"\"              Filter: (category_id = 1)\"\"        ->  Seq Scan on measurement_02_001 m  (cost=0.00..64877.40 rows=3267872 width=28)\"\n\"              Filter: (category_id = 1)\"\"        ->  Seq Scan on 
measurement_03_001 m  (cost=0.00..71131.44 rows=3582915 width=28)\"\"              Filter: (category_id = 1)\"\nHow do I avoid the FTS?(I know about PostGIS but I can only learn and do so much at once.) ;-) Here's the station query:SELECT  s.id\nFROM  climate.station s,  climate.city cWHERE  c.id = 5182 AND  s.elevation BETWEEN 0 AND 3000 AND  6371.009 * SQRT(    POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n    (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *    POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))  ) <= 25And its EXPLAIN:\"Nested Loop  (cost=0.00..994.94 rows=4046 width=4)\"\n\"  Join Filter: ((6371.009::double precision * sqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double precision), 2::double precision) + (cos((radians(((c.latitude_decimal + s.latitude_decimal))::double precision) / 2::double precision)) * pow(radians(((c.longitude_decimal - s.longitude_decimal))::double precision), 2::double precision))))) <= 25::double precision)\"\n\"  ->  Index Scan using city_pkey1 on city c  (cost=0.00..6.27 rows=1 width=16)\"\"        Index Cond: (id = 5182)\"\"  ->  Seq Scan on station s  (cost=0.00..321.08 rows=12138 width=20)\"\n\"        Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"I get a set of 78 rows returned in very little time.Thanks again!Dave", "msg_date": "Thu, 20 May 2010 19:02:34 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "On Thu, 20 May 2010, David Jarvis wrote:\n> I took out the date conditions:\n>\n> SELECT\n> m.*\n> FROM\n> climate.measurement m\n> WHERE\n> m.category_id = 1 and\n> m.station_id = 2043\n>\n> This uses the station indexes:\n\nYes, because there is only one station_id selected. That's exactly what an \nindex is for.\n\n> Then combined the selection of the station:\n> The station index is no longer used, resulting in full table scans:\n\n> \"Nested Loop (cost=0.00..994.94 rows=4046 width=4)\"\n> \" Join Filter: ((6371.009::double precision *\n> sqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double\n> precision), 2::double precision) + (cos((radians(((c.latitude_decimal +\n> s.latitude_decimal))::double precision) / 2::double precision)) *\n> pow(radians(((c.longitude_decimal - s.longitude_decimal))::double\n> precision), 2::double precision))))) <= 25::double precision)\"\n> \" -> Index Scan using city_pkey1 on city c (cost=0.00..6.27 rows=1\n> width=16)\"\n> \" Index Cond: (id = 5182)\"\n> \" -> Seq Scan on station s (cost=0.00..321.08 rows=12138 width=20)\"\n> \" Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"\n>\n> I get a set of 78 rows returned in very little time.\n\n(An EXPLAIN ANALYSE would be better here). Look at the expected number of \nstations returned. It expects 4046 which is a large proportion of the \navailable stations. It therefore expects to have to touch a large \nproportion of the measurement table, therefore it thinks that it will be \nfastest to do a seq scan. In actual fact, for 78 stations, the index would \nbe faster, but for 4046 it wouldn't.\n\nIf you will be querying by season quite regularly, had you considered \npartitioning by season?\n\nMatthew\n\n-- \n Geography is going places.\n", "msg_date": "Thu, 20 May 2010 23:27:54 -0400 (EDT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" 
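One hedged way to avoid the full table scans is to make the station subquery cheaper and, above all, more selective-looking to the planner by adding a sargable bounding-box prefilter on the raw latitude/longitude columns - the plain-SQL cousin of the PostGIS suggestion made earlier. The 25 km radius is approximated as 25/111.0 degrees of latitude (an assumption; one degree is roughly 111 km), with the longitude span widened by the cosine of the latitude; the exact great-circle test stays in place to trim the corners of the box.

SELECT
  s.id
FROM
  climate.station s,
  climate.city c
WHERE
  c.id = 5182 AND
  s.elevation BETWEEN 0 AND 3000 AND
  /* cheap, index-friendly bounding box (approximate) */
  s.latitude_decimal  BETWEEN c.latitude_decimal  - 25 / 111.0
                          AND c.latitude_decimal  + 25 / 111.0 AND
  s.longitude_decimal BETWEEN c.longitude_decimal - 25 / (111.0 * COS(RADIANS(c.latitude_decimal)))
                          AND c.longitude_decimal + 25 / (111.0 * COS(RADIANS(c.latitude_decimal))) AND
  /* exact great-circle distance, unchanged from the original query */
  6371.009 * SQRT(
    POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
    (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
     POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
  ) <= 25;

With a btree index on station(latitude_decimal, longitude_decimal) the box can be answered by an index scan, and the planner's row estimate for the station list should drop sharply, which makes it less tempted to choose sequential scans over the measurement partitions.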
}, { "msg_contents": "Hi,\n\n(An EXPLAIN ANALYSE would be better here). Look at the expected number of\n> stations\n\n\n\"Nested Loop (cost=0.00..994.94 rows=4046 width=4) (actual\ntime=0.053..41.173 rows=78 loops=1)\"\n\" Join Filter: ((6371.009::double precision *\nsqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double\nprecision), 2::double precision) + (cos((radians(((c.latitude_decimal +\ns.latitude_decimal))::double precision) / 2::double precision)) *\npow(radians(((c.longitude_decimal - s.longitude_decimal))::double\nprecision), 2::double precision))))) <= 25::double precision)\"\n\" -> Index Scan using city_pkey1 on city c (cost=0.00..6.27 rows=1\nwidth=16) (actual time=0.014..0.016 rows=1 loops=1)\"\n\" Index Cond: (id = 5182)\"\n\" -> Seq Scan on station s (cost=0.00..321.08 rows=12138 width=20)\n(actual time=0.007..5.256 rows=12139 loops=1)\"\n\" Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"\n\"Total runtime: 41.235 ms\"\n\nexpects to have to touch a large proportion of the measurement table,\n> therefore it thinks that it will be fastest to do a seq scan. In actual\n> fact, for 78 stations, the index would be faster, but for 4046 it wouldn't.\n>\n\nThis is rather unexpected. I'd have figured it would use the actual number.\n\n\n> If you will be querying by season quite regularly, had you considered\n> partitioning by season?\n>\n\nI have no idea what the \"regular\" queries will be. The purpose of the system\nis to open the data up to the public using a simple user interface so that\nthey can generate their own custom reports. That user interface allows\npeople to pick year intervals, day ranges, elevations, categories\n(temperature, precipitation, snow depth, etc.), and lat/long perimeter\ncoordinates (encompassing any number of stations) or a city and radius.\n\nDave\n\nHi,(An EXPLAIN ANALYSE would be better here). Look at the expected number of stations \n\"Nested Loop  (cost=0.00..994.94 rows=4046 width=4) (actual time=0.053..41.173 rows=78 loops=1)\"\"  Join Filter: ((6371.009::double precision * sqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double precision), 2::double precision) + (cos((radians(((c.latitude_decimal + s.latitude_decimal))::double precision) / 2::double precision)) * pow(radians(((c.longitude_decimal - s.longitude_decimal))::double precision), 2::double precision))))) <= 25::double precision)\"\n\"  ->  Index Scan using city_pkey1 on city c  (cost=0.00..6.27 rows=1 width=16) (actual time=0.014..0.016 rows=1 loops=1)\"\"        Index Cond: (id = 5182)\"\"  ->  Seq Scan on station s  (cost=0.00..321.08 rows=12138 width=20) (actual time=0.007..5.256 rows=12139 loops=1)\"\n\"        Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"\"Total runtime: 41.235 ms\"\nexpects to have to touch a large proportion of the measurement table, therefore it thinks that it will be fastest to do a seq scan. In actual fact, for 78 stations, the index would be faster, but for 4046 it wouldn't.\nThis is rather unexpected. I'd have figured it would use the actual number. \n\n\nIf you will be querying by season quite regularly, had you considered partitioning by season?I have no idea what the \"regular\" queries will be. The purpose of the system is to open the data up to the public using a simple user interface so that they can generate their own custom reports. 
That user interface allows people to pick year intervals, day ranges, elevations, categories (temperature, precipitation, snow depth, etc.), and lat/long perimeter coordinates (encompassing any number of stations) or a city and radius.\nDave", "msg_date": "Thu, 20 May 2010 21:11:52 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "David Jarvis wrote: \n>\n> Also, you're trying to do constraint_exclusion, but have you made sure\n> that it's turned on? And have you made sure that those\n> constraints are\n> really the right ones and that they make sense? You're using a\n> bunch of\n> extract()'s there too, why not just specify a CHECK constraint on the\n> date ranges which are allowed in the table..?\n>\n>\n> I don't know what the date ranges are? So I can't partition them by year?\n>\n> Right now I created 72 child tables by using the category and month. \n> This may have been a bad choice. But at least all the data is in the \n> system now so dissecting or integrating it back in different ways \n> shouldn't take days.\n>\n> Thanks everyone for all your help, I really appreciate the time you've \n> taken to guide me in the right direction to make the system as fast as \n> it can be.\n\nMy $0.02 - its hard to comment inline due to the number of responses, \nbut: the partitioning is only useful for speed, if it matches how your \nqueries select data. For time based data I would for sure go for year \nbased indexing. If you want a fixed number of partitions, you could \nperhaps do something like year % 64. I did a test to see of the \nconstraint exclusion could work with extract but that failed:\n\ntest=# create table parent(t timestamptz);\ntest=# create table child1(check ((extract(year from t)::int % 2)=0)) \ninherits( parent);\ntest=# create table child2(check ((extract(year from t)::int % 2)=1)) \ninherits(parent);\ntest=# explain select * from parent where (extract(year from t)::int % \n2) = 0;\n QUERY PLAN \n---------------------------------------------------------------------------\n Result (cost=0.00..158.40 rows=33 width=8)\n -> Append (cost=0.00..158.40 rows=33 width=8)\n -> Seq Scan on parent (cost=0.00..52.80 rows=11 width=8)\n Filter: (((date_part('year'::text, t))::integer % 2) = 0)\n -> Seq Scan on child1 parent (cost=0.00..52.80 rows=11 width=8)\n Filter: (((date_part('year'::text, t))::integer % 2) = 0)\n -> Seq Scan on child2 parent (cost=0.00..52.80 rows=11 width=8)\n Filter: (((date_part('year'::text, t))::integer % 2) = 0)\n\nIt hits all partitions even when I requested for a single year.\n\nSo an extra column would be needed, attempt 2 with added year smallint.\n\ntest=# create table parent(t timestamptz, y smallint);\ntest=# create table child1(check ((y % 2)=0)) inherits( parent);\ntest=# create table child2(check ((y % 2)=1)) inherits( parent);\ntest=# explain select * from parent where (y % 2) between 0 and 0;\n QUERY \nPLAN \n---------------------------------------------------------------------------------\n Result (cost=0.00..122.00 rows=20 width=10)\n -> Append (cost=0.00..122.00 rows=20 width=10)\n -> Seq Scan on parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y)::integer % 2) >= 0) AND (((y)::integer % \n2) <= 0))\n -> Seq Scan on child1 parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y)::integer % 2) >= 0) AND (((y)::integer % \n2) <= 0))\n\nThis works: only one child table hit.\n\nThat made me think: if you'd scan 
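Given that the planner expects 4046 stations where only 78 actually qualify, another option worth trying is to materialise the station list first and ANALYZE it, so the join against measurement is planned against a relation whose size is actually known. A rough sketch, reusing the station query above (the table name nearby_station is made up for the example):

BEGIN;

CREATE TEMPORARY TABLE nearby_station ON COMMIT DROP AS
SELECT
  s.id
FROM
  climate.station s,
  climate.city c
WHERE
  c.id = 5182 AND
  s.elevation BETWEEN 0 AND 3000 AND
  6371.009 * SQRT(
    POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
    (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
     POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
  ) <= 25;

ANALYZE nearby_station;   -- the planner now knows it holds ~78 rows

SELECT
  m.*
FROM
  climate.measurement m
  JOIN nearby_station t ON m.station_id = t.id
WHERE
  m.category_id = 1;

COMMIT;

With a small, analysed table on one side of the join, a nested loop over the station indexes of the measurement partitions should look far cheaper to the planner than the hash join over full scans shown above.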
two consecutive years, you'd always \nhit two different partitions. For your use case it'd be nice if some \nyear wraparounds would fall in the same partition. The following query \nshows partition numbers for 1900 - 2010 with 4 consecutive years in the \nsame partition. It also shows that in this case 32 partitions is enough:\n\ntest=# select x, (x / 4) % 32 from generate_series(1900,2010) as x(x);\n x | ?column?\n------+----------\n 1900 | 27\n 1901 | 27\n 1902 | 27\n 1903 | 27\n 1904 | 28\n 1905 | 28\netc\n 1918 | 31\n 1919 | 31\n 1920 | 0\n 1921 | 0\netc\n 2005 | 21\n 2006 | 21\n 2007 | 21\n 2008 | 22\n 2009 | 22\n 2010 | 22\n(111 rows)\n\nThis would mean that a extra smallint column is needed which would \ninflate the 300M relation with.. almost a GB, but I still think it'd be \na good idea.\n\ncreate or replace function yearmod(int) RETURNS int\nas 'select (($1 >> 2) %32);'\nlanguage sql\nimmutable\nstrict;\n\ncreate table parent(t timestamptz, y smallint);\n\nselect 'create table child'||x||'(check (yearmod(y)='||x-1||')) \ninherits(parent);' from generate_series(1,32) as x(x);\n ?column? \n---------------------------------------------------------------\n create table child1(check (yearmod(y)=0)) inherits(parent);\n create table child2(check (yearmod(y)=1)) inherits(parent);\n create table child3(check (yearmod(y)=2)) inherits(parent);\netc\n create table child30(check (yearmod(y)=29)) inherits(parent);\n create table child31(check (yearmod(y)=30)) inherits(parent);\n create table child32(check (yearmod(y)=31)) inherits(parent);\n(32 rows)\n\nCopy and paste output of this query in psql to create child tables.\n\nExample with period 1970 to 1980:\n\ntest=# explain select * from parent where yearmod(y) between \nyearmod(1970) and yearmod(1980);\n QUERY \nPLAN \n-----------------------------------------------------------------------------\n Result (cost=0.00..305.00 rows=50 width=10)\n -> Append (cost=0.00..305.00 rows=50 width=10)\n -> Seq Scan on parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y / 4) % 32) >= 12) AND (((y / 4) % 32) <= 15))\n -> Seq Scan on child13 parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y / 4) % 32) >= 12) AND (((y / 4) % 32) <= 15))\n -> Seq Scan on child14 parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y / 4) % 32) >= 12) AND (((y / 4) % 32) <= 15))\n -> Seq Scan on child15 parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y / 4) % 32) >= 12) AND (((y / 4) % 32) <= 15))\n -> Seq Scan on child16 parent (cost=0.00..61.00 rows=10 width=10)\n Filter: ((((y / 4) % 32) >= 12) AND (((y / 4) % 32) <= 15))\n(12 rows)\n\nThis works: query for 11 consecutive years hits only 4 from 31.\n\nBut the between fails for yearmods that wrap the 31 boundary, what \nhappens here between 1910 and 1920\n\ntest=# explain select * from parent where yearmod(y) between \nyearmod(1910) and yearmod(1920);\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0)\n One-Time Filter: false\n(2 rows)\n\nSo for the wraparound case we need a CASE:\n\ntest=# explain select * from parent where case when yearmod(1910) <= \nyearmod(1920)\nthen yearmod(y) between yearmod(1910) and yearmod(1920)\nelse (yearmod(y) >= yearmod(1910) or yearmod(y) <= yearmod(1920)) end;\n QUERY \nPLAN \n-------------------------------------------------------------------------------\n Result (cost=0.00..305.00 rows=5665 width=10)\n -> Append (cost=0.00..305.00 rows=5665 width=10)\n -> Seq Scan on parent (cost=0.00..61.00 rows=1133 width=10)\n 
Filter: ((((y / 4) % 32) >= 29) OR (((y / 4) % 32) <= 0))\n -> Seq Scan on child1 parent (cost=0.00..61.00 rows=1133 \nwidth=10)\n Filter: ((((y / 4) % 32) >= 29) OR (((y / 4) % 32) <= 0))\n -> Seq Scan on child30 parent (cost=0.00..61.00 rows=1133 \nwidth=10)\n Filter: ((((y / 4) % 32) >= 29) OR (((y / 4) % 32) <= 0))\n -> Seq Scan on child31 parent (cost=0.00..61.00 rows=1133 \nwidth=10)\n Filter: ((((y / 4) % 32) >= 29) OR (((y / 4) % 32) <= 0))\n -> Seq Scan on child32 parent (cost=0.00..61.00 rows=1133 \nwidth=10)\n Filter: ((((y / 4) % 32) >= 29) OR (((y / 4) % 32) <= 0))\n(12 rows)\n\nThis should work for all year ranges and I think is a good solution for \npartitioning on year with a fixed amount of partitions.\n\n From the optimizer perspective I wonder what the best access path for \nthis kind of query would be (if there would be no partitions). Building \non ideas from one of Thom Brown's first replies with indexes on year and \ndoy, and Tom Lane's remark about the leap year problem. Suppose the leap \nyears did not exist, having a index on year, and having a different \nindex on doy, sounds like a bitmap and of a scan of both the year and \ndoy indexes could provide a optimal path. Maybe this would still be \npossible, if the leap year problem could be 'fixed' by a additional \ncondition in the where clause that filters the surplus records.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 21 May 2010 10:38:43 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "There is a thing that might lead to confusion in the previous post:\n> create or replace function yearmod(int) RETURNS int\n> as 'select (($1 >> 2) %32);'\n> language sql\n> immutable\n> strict;\nis equivalent with\n\ncreate or replace function yearmod(int) RETURNS int\nas 'select (($1 / 4) %32);'\nlanguage sql\nimmutable\nstrict;\n\nand that is the function that was used with all the other output (it can \nbe seen inlined in the explain output). I did not catch this until after \nthe post.\n\nregards,\nYeb Havinga\n\n\n", "msg_date": "Fri, 21 May 2010 10:45:36 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "On Fri, 21 May 2010, Yeb Havinga wrote:\n> For time based data I would for sure go for year based indexing.\n\nOn the contrary, most of the queries seem to be over many years, but \nrather restricting on the time of year. Therefore, partitioning by month \nor some other per-year method would seem sensible.\n\nRegarding the leap year problem, you might consider creating a modified \nday of year field, which always assumes that the year contains a leap day. \nThen a given number always resolves to a given date, regardless of year. \nIf you then partition (or index) on that field, then you may get a \nbenefit.\n\nIn this case, partitioning is only really useful when you are going to be \nforced to do seq scans. If you can get a suitably selective index, in the \ncase where you are selecting a small proportion of the data, then I would \nconcentrate on getting the index right, rather than the partition, and \nmaybe even not do partitioning.\n\nMatthew\n\n-- \n Trying to write a program that can't be written is... well, it can be an\n enormous amount of fun! 
-- Computer Science Lecturer\n", "msg_date": "Fri, 21 May 2010 11:08:38 -0400 (EDT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "Hi, Yeb.\n\nThis is starting to go back to the design I used with MySQL:\n\n - YEAR_REF - Has year and station\n - MONTH_REF - Has month, category, and yea referencer\n - MEASUREMENT - Has month reference, amount, and day\n\nNormalizing by date parts was fast. Partitioning the tables by year won't do\nmuch good -- users will probably choose 1900 to 2009, predominately.\n\nI thought about splitting the data by station by category, but that's ~73000\ntables. My understanding is that PostgreSQL uses files per index, which\nwould be messy at the OS level (Linux 2.6.31). Even by station alone is\n12139 tables, which might be tolerable for now, but with an order of\nmagnitude more stations on the distant horizon, it will not scale.\n\nI also thought about splitting the data by station district by category --\nthere are 79 districts, yielding 474 child tables, which is ~575000 rows per\nchild table. Most of the time I'd imagine only one or two districts would be\nselected. (Again, hard to know exactly.)\n\nDave\n\nHi, Yeb.This is starting to go back to the design I used with MySQL:YEAR_REF - Has year and stationMONTH_REF - Has month, category, and yea referencerMEASUREMENT - Has month reference, amount, and day\nNormalizing by date parts was fast. Partitioning the tables by year won't do much good -- users will probably choose 1900 to 2009, predominately.I thought about splitting the data by station by category, but that's ~73000 tables. My understanding is that PostgreSQL uses files per index, which would be messy at the OS level (Linux 2.6.31). Even by station alone is 12139 tables, which might be tolerable for now, but with an order of magnitude more stations on the distant horizon, it will not scale.\nI also thought about splitting the data by station district by category -- there are 79 districts, yielding 474 child tables, which is ~575000 rows per child table. Most of the time I'd imagine only one or two districts would be selected. (Again, hard to know exactly.)\nDave", "msg_date": "Fri, 21 May 2010 08:17:57 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Fri, 21 May 2010, Yeb Havinga wrote:\n>> For time based data I would for sure go for year based indexing.\n>\n> On the contrary, most of the queries seem to be over many years, but \n> rather restricting on the time of year. Therefore, partitioning by \n> month or some other per-year method would seem sensible.\nThe fact is that at the time I wrote my mail, I had not read a specifion \nof distribution of parameters (or I missed it). That's why the sentence \nof my mail before the one you quoted said: \"the partitioning is only \nuseful for speed, if it matches how your queries select data.\". In most \nof the databases I've worked with, the recent data was queried most \n(accounting, medical) but I can see that for climate analysis this might \nbe different.\n> Regarding the leap year problem, you might consider creating a \n> modified day of year field, which always assumes that the year \n> contains a leap day. Then a given number always resolves to a given \n> date, regardless of year. 
If you then partition (or index) on that \n> field, then you may get a benefit.\nShouldn't it be just the other way around - assume all years are non \nleap years for the doy part field to be indexed.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 21 May 2010 20:21:21 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "David Jarvis wrote:\n> Hi, Yeb.\n>\n> This is starting to go back to the design I used with MySQL:\n>\n> * YEAR_REF - Has year and station\n> * MONTH_REF - Has month, category, and yea referencer\n> * MEASUREMENT - Has month reference, amount, and day\n>\n> Normalizing by date parts was fast. Partitioning the tables by year \n> won't do much good -- users will probably choose 1900 to 2009, \n> predominately.\nOk, in that case it is a bad idea.\n> I thought about splitting the data by station by category, but that's \n> ~73000 tables. My understanding is that PostgreSQL uses files per \n> index, which would be messy at the OS level (Linux 2.6.31). Even by \n> station alone is 12139 tables, which might be tolerable for now, but \n> with an order of magnitude more stations on the distant horizon, it \n> will not scale.\nYes, I've read a few times now that PG's partitioning doesn't scale \nbeyond a few 100 partitions.\n> I also thought about splitting the data by station district by \n> category -- there are 79 districts, yielding 474 child tables, which \n> is ~575000 rows per child table. Most of the time I'd imagine only one \n> or two districts would be selected. (Again, hard to know exactly.)\nI agee with Matthew Wakeling in a different post: its probably wise to \nfirst see how fast things can get by using indexes. Only if that fails \nto be fast, partitioning might be an option. 
(Though sequentially \nscanning 0.5M rows is not cheap).\n\nI experimented a bit with a doy and year function.\n\n-- note: leap year fix must still be added\ncreate or replace function doy(timestamptz) RETURNS float8\nas 'select extract(doy from $1);'\nlanguage sql\nimmutable\nstrict;\ncreate or replace function year(timestamptz) RETURNS float8\nas 'select extract(year from $1);'\nlanguage sql\nimmutable\nstrict;\n\n\\d parent\n Table \"public.parent\"\n Column | Type | Modifiers\n--------+--------------------------+-----------\n t | timestamp with time zone |\n y | smallint |\nIndexes:\n \"doy_i\" btree (doy(t))\n \"year_i\" btree (year(t))\n\nA plan like the following is probably what you want\n\ntest=# explain select * from parent where doy(t) between 10 and 20 and \nyear(t) between 1900 and 2009;\n \nQUERY \nPLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on parent (cost=9.95..14.97 rows=1 width=10)\n Recheck Cond: ((year(t) >= 1900::double precision) AND (year(t) <= \n2009::double precision) AND (doy(t) >= 10::double precision) AND (doy(t) \n<= 20::double precision))\n -> BitmapAnd (cost=9.95..9.95 rows=1 width=0)\n -> Bitmap Index Scan on year_i (cost=0.00..4.85 rows=10 width=0)\n Index Cond: ((year(t) >= 1900::double precision) AND \n(year(t) <= 2009::double precision))\n -> Bitmap Index Scan on doy_i (cost=0.00..4.85 rows=10 width=0)\n Index Cond: ((doy(t) >= 10::double precision) AND (doy(t) \n<= 20::double precision))\n(7 rows)\n\nregards,\nYeb Havinga\n\n\n\n", "msg_date": "Fri, 21 May 2010 20:49:36 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": ">> Regarding the leap year problem, you might consider creating a modified day \n>> of year field, which always assumes that the year contains a leap day. Then \n>> a given number always resolves to a given date, regardless of year. If you \n>> then partition (or index) on that field, then you may get a benefit.\nOn Fri, 21 May 2010, Yeb Havinga wrote:\n> Shouldn't it be just the other way around - assume all years are non leap \n> years for the doy part field to be indexed.\n\nThe mapping doesn't matter massively, as long as all days of the year can \nbe mapped uniquely onto a number, and the numbers are sequential. Your \nsuggestion does not satisfy the first of those two requirements.\n\nIf you assume that all yeasr are leap years, then you merely skip a number \nin the middle of the year, which isn't a problem when you want to check \nfor days between two bounds. However, if you assume non leap year, then \nthere is no representation for the 29th of February, so not all data \npoints will have a representative number to insert into the database.\n\nMatthew\n\n-- \n No, C++ isn't equal to D. 'C' is undeclared, so we assume it's an int,\n with a default value of zero. Hence, C++ should really be called 1.\n -- met24, commenting on the quote \"C++ -- shouldn't it be called D?\"\n", "msg_date": "Fri, 21 May 2010 15:12:12 -0400 (EDT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "* Yeb Havinga ([email protected]) wrote:\n>> Normalizing by date parts was fast. 
Partitioning the tables by year \n>> won't do much good -- users will probably choose 1900 to 2009, \n>> predominately.\n> Ok, in that case it is a bad idea.\n\nYeah, now that I understand what the user actually wants, I can\ncertainly understand that you wouldn't want to partition by year. It\ndoes strike me that perhaps you could partition by day ranges, but you'd\nhave to store them as something other than the 'date' type, which is\ncertainly frustrating, but you're not really operating on these in a\n'normal' fashion as you would with a date.\n\nThe next question I would have, however, is if you could pre-aggregate\nsome of this data.. If users are going to typically use 1900-2009 for\nyears, then could the information about all of those years be aggregated\napriori to make those queries faster?\n\n>> I thought about splitting the data by station by category, but that's \n>> ~73000 tables.\n\nDo not get hung up on having to have a separate table for every unique\nvalue in the column- you don't need that. constraint_exclusion will\nwork just fine with ranges too- the problem is that you need to have\nranges that make sense with the data type you're using and with the\nqueries you're running. That doesn't really work here with the\nmeasurement_date, but it might work just fine with your station_id\nfield.\n\n>> I also thought about splitting the data by station district by \n>> category -- there are 79 districts, yielding 474 child tables, which \n>> is ~575000 rows per child table. Most of the time I'd imagine only one \n>> or two districts would be selected. (Again, hard to know exactly.)\n\nAlso realize that PG will use multiple files for a single table once the\nsize of that table goes beyond 1G.\n\n> I agee with Matthew Wakeling in a different post: its probably wise to \n> first see how fast things can get by using indexes. Only if that fails \n> to be fast, partitioning might be an option. (Though sequentially \n> scanning 0.5M rows is not cheap).\n\nI would agree with this too- get it working first, then look at\npartitioning. Even more so- work on a smaller data set to begin with\nwhile you're figuring out how to get the right answer in a generally\nefficient way (not doing seq. scans through everything because you're\noperating on every row for something). It needs to be a couple\nhundred-thousand rows, but it doesn't need to be the full data set, imv.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 21 May 2010 15:17:50 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Hi,\n\nCREATE INDEX measurement_01_001_y_idx\n>> ON climate.measurement_01_001\n>> USING btree\n>> (date_part('year'::text, taken));\n>>\n>> Is that equivalent to what you suggest?\n>>\n>\n> No. It is not the same function, so Postgres has no way to know it produces\n> the same results (if it does).\n>\n\nThis is what I ran:\n\nCREATE INDEX\n measurement_013_taken_year_idx\nON\n climate.measurement_013\n (EXTRACT( YEAR FROM taken ));\n\nThis is what pgadmin3 shows me:\n\nCREATE INDEX measurement_013_taken_year_idx\n ON climate.measurement_013\n USING btree\n (date_part('year'::text, taken));\n\nAs far as I can tell, it appears they are equivalent?\n\nEither way, the cost for performing a GROUP BY is high (I ran once with\nextract and once with date_part). 
The date_part EXPLAIN ANALYSE resulted in:\n\n\"Limit (cost=1748024.65..1748028.65 rows=200 width=12) (actual\ntime=65471.448..65471.542 rows=101 loops=1)\"\n\nThe EXTRACT EXPLAIN ANALYSE came to:\n\n\"Limit (cost=1748024.65..1748028.65 rows=200 width=12) (actual\ntime=44913.263..44913.330 rows=101 loops=1)\"\n\nIf PG treats them differently, I'd like to know how so that I can do the\nright thing. As it is, I cannot see the difference in performance between\ndate_part and EXTRACT.\n\nDave\n\nHi,\n\nCREATE INDEX measurement_01_001_y_idx\n ON climate.measurement_01_001\n USING btree\n (date_part('year'::text, taken));\n\nIs that equivalent to what you suggest?\n\n\nNo. It is not the same function, so Postgres has no way to know it produces the same results (if it does).This is what I ran:CREATE INDEX  measurement_013_taken_year_idx\nON  climate.measurement_013  (EXTRACT( YEAR FROM taken ));This is what pgadmin3 shows me:CREATE INDEX measurement_013_taken_year_idx  ON climate.measurement_013\n  USING btree  (date_part('year'::text, taken));As far as I can tell, it appears they are equivalent?Either way, the cost for performing a GROUP BY is high (I ran once with extract and once with date_part). The date_part EXPLAIN ANALYSE resulted in:\n\"Limit  (cost=1748024.65..1748028.65 rows=200 width=12) (actual time=65471.448..65471.542 rows=101 loops=1)\"The EXTRACT EXPLAIN ANALYSE came to:\"Limit  (cost=1748024.65..1748028.65 rows=200 width=12) (actual time=44913.263..44913.330 rows=101 loops=1)\"\nIf PG treats them differently, I'd like to know how so that I can do the right thing. As it is, I cannot see the difference in performance between date_part and EXTRACT.Dave", "msg_date": "Sat, 22 May 2010 01:11:09 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Hi,\n\ncertainly understand that you wouldn't want to partition by year. It\n>\n\nDefinitely not.\n\n\n> does strike me that perhaps you could partition by day ranges, but you'd\n>\n\nI don't think that will work; users can choose any day range, with the most\ncommon as Jan 1 - Dec 31, followed by seasonal ranges, followed by arbitrary\nranges.\n\n\n> some of this data.. If users are going to typically use 1900-2009 for\n> years, then could the information about all of those years be aggregated\n> apriori to make those queries faster?\n>\n\nI'm not sure what you mean. I could create a separate table that lumps the\naggregated averages per year per station per category, but that will only\nhelp in the one case. There are five different reporting pages (Basic\nthrough Guru). On three of those pages the user must select arbitrary day\nranges. On one of those pages, the user can select a season, which then maps\nto, for all intents and purposes, an arbitrary day range.\n\nOnly the most basic page do not offer the user a day range selection.\n\n\n> Do not get hung up on having to have a separate table for every unique\n> value in the column- you don't need that. constraint_exclusion will\n>\n\nThat's good advice. I have repartitioned the data into seven tables: one per\ncategory.\n\n\n> I agee with Matthew Wakeling in a different post: its probably wise to\n> I would agree with this too- get it working first, then look at\n> partitioning. Even more so- work on a smaller data set to begin with\n>\n\nThe query speed has now much improved thanks to everybody's advice.\n\n From a cost of 10006220141 down to 704924. 
Here is the query:\n\nSELECT\n avg(m.amount),\n extract(YEAR FROM m.taken) AS year_taken\nFROM\n climate.city c,\n climate.station s,\n climate.measurement m\nWHERE\n c.id = 5182 AND\n 6371.009 * SQRT(\n POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +\n (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *\n POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))\n ) <= 25 AND\n s.elevation BETWEEN 0 AND 3000 AND\n m.category_id = 7 AND\n m.station_id = s.id AND\n extract(YEAR FROM m.taken) BETWEEN 1900 AND 2000\nGROUP BY\n extract(YEAR FROM m.taken)\nORDER BY\n extract(YEAR FROM m.taken)\n\n(Note that *extract(YEAR FROM m.taken)* is much faster than\n*date_part('year'::text,\nm.taken)*.)\n\nThe query plan for the above SQL reveals:\n\n\"Sort (cost=704924.25..704924.75 rows=200 width=9) (actual\ntime=9476.518..9476.521 rows=46 loops=1)\"\n\" Sort Key: (date_part('year'::text, (m.taken)::timestamp without time\nzone))\"\n\" Sort Method: quicksort Memory: 28kB\"\n\" -> HashAggregate (cost=704913.10..704916.60 rows=200 width=9) (actual\ntime=9476.465..9476.489 rows=46 loops=1)\"\n\" -> Hash Join (cost=1043.52..679956.79 rows=4991262 width=9)\n(actual time=46.399..9344.537 rows=120678 loops=1)\"\n\" Hash Cond: (m.station_id = s.id)\"\n\" -> Append (cost=0.00..529175.42 rows=14973786 width=13)\n(actual time=0.076..7739.647 rows=14874909 loops=1)\"\n\" -> Seq Scan on measurement m (cost=0.00..43.00 rows=1\nwidth=20) (actual time=0.000..0.000 rows=0 loops=1)\"\n\" Filter: ((category_id = 7) AND\n(date_part('year'::text, (taken)::timestamp without time zone) >=\n1900::double precision) AND (date_part('year'::text, (taken)::timestamp\nwithout time zone) <= 2000::double precision))\"\n\" -> Index Scan using measurement_013_taken_year_idx on\nmeasurement_013 m (cost=0.01..529132.42 rows=14973785 width=13) (actual\ntime=0.075..6266.385 rows=14874909 loops=1)\"\n\" Index Cond: ((date_part('year'::text,\n(taken)::timestamp without time zone) >= 1900::double precision) AND\n(date_part('year'::text, (taken)::timestamp without time zone) <=\n2000::double precision))\"\n\" Filter: (category_id = 7)\"\n\" -> Hash (cost=992.94..992.94 rows=4046 width=4) (actual\ntime=43.420..43.420 rows=78 loops=1)\"\n\" -> Nested Loop (cost=0.00..992.94 rows=4046 width=4)\n(actual time=0.053..43.390 rows=78 loops=1)\"\n\" Join Filter: ((6371.009::double precision *\nsqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double\nprecision), 2::double precision) + (cos((radians(((c.latitude_decimal +\ns.latitude_decimal))::double precision) / 2::double precision)) *\npow(radians(((c.longitude_decimal - s.longitude_decimal))::double\nprecision), 2::double precision))))) <= 25::double precision)\"\n\" -> Index Scan using city_pkey1 on city c\n(cost=0.00..4.27 rows=1 width=16) (actual time=0.021..0.022 rows=1 loops=1)\"\n\" Index Cond: (id = 5182)\"\n\" -> Seq Scan on station s (cost=0.00..321.08\nrows=12138 width=20) (actual time=0.008..5.457 rows=12139 loops=1)\"\n\" Filter: ((s.elevation >= 0) AND\n(s.elevation <= 3000))\"\n\"Total runtime: 9476.626 ms\"\n\nThat's about 10 seconds using the category with the smallest table. The\nlargest table takes 17 seconds (fantastic!) after a few runs and 85 seconds\ncold. 
About 1 second is my goal, before the pending hardware upgrades.\n\nWhen I recreated the tables, I sorted them by date then station id so that\nthere is now a 1:1 correlation between the sequence number and the\nmeasurement date.\n\nWould clustering on the date and station make a difference?\n\nIs there any other way to index the data that I have missed?\n\nThank you very much.\n\nDave\n\nHi,\ncertainly understand that you wouldn't want to partition by year.  ItDefinitely not. \n\ndoes strike me that perhaps you could partition by day ranges, but you'dI don't think that will work; users can choose any day range, with the most common as Jan 1 - Dec 31, followed by seasonal ranges, followed by arbitrary ranges.\n \nsome of this data..  If users are going to typically use 1900-2009 for\nyears, then could the information about all of those years be aggregated\napriori to make those queries faster?I'm not sure what you mean. I could create a separate table that lumps the aggregated averages per year per station per category, but that will only help in the one case. There are five different reporting pages (Basic through Guru). On three of those pages the user must select arbitrary day ranges. On one of those pages, the user can select a season, which then maps to, for all intents and purposes, an arbitrary day range.\nOnly the most basic page do not offer the user a day range selection. Do not get hung up on having to have a separate table for every unique\n\nvalue in the column- you don't need that.  constraint_exclusion willThat's good advice. I have repartitioned the data into seven tables: one per category. \n I agee with Matthew Wakeling in a different post: its probably wise toI would agree with this too- get it working first, then look at\npartitioning.  Even more so- work on a smaller data set to begin withThe query speed has now much improved thanks to everybody's advice.From a cost of 10006220141 down to 704924. 
Here is the query:\nSELECT  avg(m.amount),  extract(YEAR FROM m.taken) AS year_takenFROM  climate.city c,  climate.station s,  climate.measurement mWHERE  c.id = 5182 AND\n  6371.009 * SQRT(    POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +    (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *     POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))\n  ) <= 25 AND  s.elevation BETWEEN 0 AND 3000 AND  m.category_id = 7 AND  m.station_id = s.id AND  extract(YEAR FROM m.taken) BETWEEN 1900 AND 2000GROUP BY  extract(YEAR FROM m.taken)\nORDER BY  extract(YEAR FROM m.taken)(Note that extract(YEAR FROM m.taken) is much faster than date_part('year'::text, m.taken).)The query plan for the above SQL reveals:\n\"Sort  (cost=704924.25..704924.75 rows=200 width=9) (actual time=9476.518..9476.521 rows=46 loops=1)\"\"  Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))\"\"  Sort Method:  quicksort  Memory: 28kB\"\n\"  ->  HashAggregate  (cost=704913.10..704916.60 rows=200 width=9) (actual time=9476.465..9476.489 rows=46 loops=1)\"\"        ->  Hash Join  (cost=1043.52..679956.79 rows=4991262 width=9) (actual time=46.399..9344.537 rows=120678 loops=1)\"\n\"              Hash Cond: (m.station_id = s.id)\"\"              ->  Append  (cost=0.00..529175.42 rows=14973786 width=13) (actual time=0.076..7739.647 rows=14874909 loops=1)\"\n\"                    ->  Seq Scan on measurement m  (cost=0.00..43.00 rows=1 width=20) (actual time=0.000..0.000 rows=0 loops=1)\"\"                          Filter: ((category_id = 7) AND (date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2000::double precision))\"\n\"                    ->  Index Scan using measurement_013_taken_year_idx on measurement_013 m  (cost=0.01..529132.42 rows=14973785 width=13) (actual time=0.075..6266.385 rows=14874909 loops=1)\"\"                          Index Cond: ((date_part('year'::text, (taken)::timestamp without time zone) >= 1900::double precision) AND (date_part('year'::text, (taken)::timestamp without time zone) <= 2000::double precision))\"\n\"                          Filter: (category_id = 7)\"\"              ->  Hash  (cost=992.94..992.94 rows=4046 width=4) (actual time=43.420..43.420 rows=78 loops=1)\"\"                    ->  Nested Loop  (cost=0.00..992.94 rows=4046 width=4) (actual time=0.053..43.390 rows=78 loops=1)\"\n\"                          Join Filter: ((6371.009::double precision * sqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::double precision), 2::double precision) + (cos((radians(((c.latitude_decimal + s.latitude_decimal))::double precision) / 2::double precision)) * pow(radians(((c.longitude_decimal - s.longitude_decimal))::double precision), 2::double precision))))) <= 25::double precision)\"\n\"                          ->  Index Scan using city_pkey1 on city c  (cost=0.00..4.27 rows=1 width=16) (actual time=0.021..0.022 rows=1 loops=1)\"\"                                Index Cond: (id = 5182)\"\n\"                          ->  Seq Scan on station s  (cost=0.00..321.08 rows=12138 width=20) (actual time=0.008..5.457 rows=12139 loops=1)\"\"                                Filter: ((s.elevation >= 0) AND (s.elevation <= 3000))\"\n\"Total runtime: 9476.626 ms\"That's about 10 seconds using the category with the smallest table. The largest table takes 17 seconds (fantastic!) after a few runs and 85 seconds cold. 
About 1 second is my goal, before the pending hardware upgrades.\nWhen I recreated the tables, I sorted them by date then station id so that there is now a 1:1 correlation between the sequence number and the measurement date.Would clustering on the date and station make a difference?\nIs there any other way to index the data that I have missed?Thank you very much.Dave", "msg_date": "Sat, 22 May 2010 10:54:14 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "Hi,\n\nThe problem is now solved (in theory).\n\nWell, it's not the functions per se that's the problem, it's the lack of\n> a useful index on the expression.\n>\n\nThe measurement table indexes (on date and weather station) were not being\nused because the only given date ranges (e.g., 1900 - 2009) were causing the\nplanner to do a full table scan, which is correct. What I had to do was find\na way to reduce the dates so that the planner would actually use the index,\nrather than doing a full table scan on 43 million records. By passing in\n1955 - 1960 the full table scan went away in favour of an index scan, as\nexpected.\n\nEach weather station has a known lifespan (per climate category). That is,\nnot all weather stations between 1880 and 2009 collected data. For example,\none weather station monitored the maximum daily temperature between\n2006-11-29 and 2009-12-31. Some stations span more than 30 years, but I\nbelieve those are in the minority (e.g., 1896-12-01 to 1959-01-31). (I'll be\nable to verify once the analysis is finished.)\n\nI will add another table that maps the stations to category and min/max\ndates. I can then use this reference table which should (theory part here)\ntell the planner to use the index.\n\nWhat is *really impressive*, though... If my understanding is correct...\n\nPostgreSQL scanned 43 million rows 78 times, returning results in ~90 sec.\n\nThanks again for all your help, everybody. I sincerely appreciate your\npatience, comments, and ideas.\n\nDave\n\nHi,The problem is now solved (in theory).Well, it's not the functions per se that's the problem, it's the lack of\n\na useful index on the expression.The measurement table indexes (on date and weather station) were not being used because the only given date ranges (e.g., 1900 - 2009) were causing the planner to do a full table scan, which is correct. What I had to do was find a way to reduce the dates so that the planner would actually use the index, rather than doing a full table scan on 43 million records. By passing in 1955 - 1960 the full table scan went away in favour of an index scan, as expected.\nEach weather station has a known lifespan (per climate category). That is, not all weather stations between 1880 and 2009 collected data.  For example, one weather station monitored the maximum daily temperature between 2006-11-29 and 2009-12-31. Some stations span more than 30 years, but I believe those are in the minority (e.g., 1896-12-01 to 1959-01-31). (I'll be able to verify once the analysis is finished.)\nI will add another table that maps the stations to category and min/max dates. I can then use this reference table which should (theory part here) tell the planner to use the index.What is really impressive, though... If my understanding is correct...\nPostgreSQL scanned 43 million rows 78 times, returning results in ~90 sec.Thanks again for all your help, everybody. 
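A hedged sketch of the reference table described above (the names are made up): one row per station and category holding the first and last dates for which measurements exist, populated once from the measurement data.

CREATE TABLE climate.station_category (
  station_id  integer NOT NULL,
  category_id integer NOT NULL,
  taken_start date    NOT NULL,
  taken_end   date    NOT NULL,
  PRIMARY KEY (station_id, category_id)
);

INSERT INTO climate.station_category (station_id, category_id, taken_start, taken_end)
SELECT
  station_id,
  category_id,
  min(taken),
  max(taken)
FROM
  climate.measurement
GROUP BY
  station_id,
  category_id;

Joining through this table lets a wide user-supplied range such as 1900 to 2009 be clamped to each station's actual lifespan before the measurement partitions are touched, which is what should let the planner fall back onto the station and date indexes instead of a full table scan.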
I sincerely appreciate your patience, comments, and ideas.\nDave", "msg_date": "Sun, 23 May 2010 15:55:21 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" }, { "msg_contents": "On Sun, 23 May 2010, David Jarvis wrote:\n> The measurement table indexes (on date and weather station) were not being\n> used because the only given date ranges (e.g., 1900 - 2009) were causing the\n> planner to do a full table scan, which is correct.\n\nI wonder if you might see some benefit from CLUSTERing the tables on the \nindex.\n\nMatthew\n\n-- \n And the lexer will say \"Oh look, there's a null string. Oooh, there's \n another. And another.\", and will fall over spectacularly when it realises\n there are actually rather a lot.\n - Computer Science Lecturer (edited)\n", "msg_date": "Tue, 1 Jun 2010 10:55:35 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n GIN?" }, { "msg_contents": "Excerpts from Matthew Wakeling's message of mar jun 01 05:55:35 -0400 2010:\n> On Sun, 23 May 2010, David Jarvis wrote:\n> > The measurement table indexes (on date and weather station) were not being\n> > used because the only given date ranges (e.g., 1900 - 2009) were causing the\n> > planner to do a full table scan, which is correct.\n> \n> I wonder if you might see some benefit from CLUSTERing the tables on the \n> index.\n\nEh, isn't this a GIN or GiST index? I don't think you can cluster on\nthose, can you?\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 01 Jun 2010 13:14:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN?" }, { "msg_contents": "Sorry, Alvaro.\n\nI was contemplating using a GIN or GiST index as a way of optimizing the\nquery.\n\nInstead, I found that re-inserting the data in order of station ID (the\nprimary look-up column) and then CLUSTER'ing on the station ID, taken date,\nand category index increased the speed by an order of magnitude.\n\nI might be able to drop the station/taken/category index in favour of the\nsimple station index and CLUSTER on that, instead (saving plenty of disk\nspace). Either way, it's fast right now so I'm not keen to try and make it\nmuch faster.\n\nDave\n\nSorry, Alvaro.I was contemplating using a GIN or GiST index as a way of optimizing the query.Instead, I found that re-inserting the data in order of station ID (the primary look-up column) and then CLUSTER'ing on the station ID, taken date, and category index increased the speed by an order of magnitude.\nI might be able to drop the station/taken/category index in favour of the simple station index and CLUSTER on that, instead (saving plenty of disk space). Either way, it's fast right now so I'm not keen to try and make it much faster.\nDave", "msg_date": "Tue, 1 Jun 2010 11:01:22 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" 
}, { "msg_contents": "Excerpts from David Jarvis's message of mar jun 01 14:01:22 -0400 2010:\n> Sorry, Alvaro.\n> \n> I was contemplating using a GIN or GiST index as a way of optimizing the\n> query.\n\nMy fault -- I didn't read the whole thread.\n\n> Instead, I found that re-inserting the data in order of station ID (the\n> primary look-up column) and then CLUSTER'ing on the station ID, taken date,\n> and category index increased the speed by an order of magnitude.\n\nHmm, that's nice, though I cannot but wonder whether the exclusive lock\nrequired by CLUSTER is going to be a problem in the long run.\n\n> I might be able to drop the station/taken/category index in favour of the\n> simple station index and CLUSTER on that, instead (saving plenty of disk\n> space). Either way, it's fast right now so I'm not keen to try and make it\n> much faster.\n\nHm, keep in mind that if the station clause alone is not selective\nenough, scanning it may be too expensive. The current three column\nindex is probably a lot faster to search (though of course it's causing\nmore work to be kept up to date on insertions).\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 01 Jun 2010 15:00:26 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize date query for large child tables: GiST or GIN?" }, { "msg_contents": "Hi,\n\nHmm, that's nice, though I cannot but wonder whether the exclusive lock\n> required by CLUSTER is going to be a problem in the long run.\n>\n\nNot an issue; the inserts are one-time (or very rare; at most: once a year).\n\n\n> Hm, keep in mind that if the station clause alone is not selective\n> enough, scanning it may be too expensive. The current three column\n>\n\nThe seven child tables (split on category ID) have the following indexes:\n\n 1. Primary key (unique ID, sequence)\n 2. Station ID (table data is physically inserted by station ID order)\n 3. Station ID, Date, and Category ID (this index is CLUSTER'ed)\n\nI agree that the last index is probably all that is necessary. 99% of the\nsearches use the station ID, date, and category. I don't think PostgreSQL\nnecessarily uses that last index, though.\n\nDave\n\nHi,\nHmm, that's nice, though I cannot but wonder whether the exclusive lock\nrequired by CLUSTER is going to be a problem in the long run.Not an issue; the inserts are one-time (or very rare; at most: once a year). \n\nHm, keep in mind that if the station clause alone is not selective\nenough, scanning it may be too expensive.  The current three column\nThe seven child tables (split on category ID) have the following indexes:Primary key (unique ID, sequence)Station ID (table data is physically inserted by station ID order)\nStation ID, Date, and Category ID (this index is CLUSTER'ed)I agree that the last index is probably all that is necessary. 99% of the searches use the station ID, date, and category. I don't think PostgreSQL necessarily uses that last index, though.\nDave", "msg_date": "Tue, 1 Jun 2010 21:55:38 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize date query for large child tables: GiST or\n\tGIN?" } ]
[ { "msg_contents": "Hello,\n\nSorry for the re-post  - not sure list is the relevant one, I included\nslightly changed query in the previous message, sent to bugs list.\n\nI have an ORM-generated queries where parent table keys are used to\nfetch the records from the child table (with relevant FK indicies),\nwhere child table is partitioned. My understanding is that Postgres is\nunable to properly use constraint exclusion to query only a relevant\ntable? Half of the join condition is propagated down, while the other\nis not.\n\ntable sources has pk (sureyid,srcid), ts has fk(survey_pk,source_pk)\non source (sureyid,srcid) and another index with\nsurvey_pk,source_pk,tstype (not used in the query).\n\nThis is very unfortunate as the queries are auto-generated and I\ncannot move predicate to apply it directly to partitioned table.\n\nThe plan includes all the partitions, next snippet shows exclusion\nworks for the table when condition is used directly on the partitioned\ntable.\n\nsurveys-> SELECT  t1.SURVEY_PK, t1.SOURCE_PK, t1.TSTYPE,  t1.VALS\nsurveys->   FROM sources t0 ,TS t1 where\nsurveys->   (t0.SURVEYID = 16 AND t0.SRCID >= 203510110032281 AND\nt0.SRCID <= 203520107001677 and t0.SURVEYID = t1.SURVEY_PK AND t0.SRCID =\nt1.SOURCE_PK ) ORDER BY t0.SURVEYID ASC, t0.SRCID ASC\nsurveys->\nsurveys-> ;\n                                                             QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------\nMerge Join  (cost=11575858.83..11730569.40 rows=3448336 width=60)\n Merge Cond: (t0.srcid = t1.source_pk)\n ->  Index Scan using sources_pkey on sources t0\n(cost=0.00..68407.63 rows=37817 width=12)\n       Index Cond: ((surveyid = 16) AND (srcid >=\n203510110032281::bigint) AND (srcid <= 203520107001677::bigint))\n ->  Materialize  (cost=11575858.83..11618963.03 rows=3448336 width=48)\n       ->  Sort  (cost=11575858.83..11584479.67 rows=3448336 width=48)\n             Sort Key: t1.source_pk\n             ->  Append  (cost=0.00..11049873.18 rows=3448336 width=48)\n                   ->  Index Scan using ts_pkey on ts t1\n(cost=0.00..8.27 rows=1 width=853)\n                         Index Cond: (survey_pk = 16)\n                   ->  Index Scan using ts_part_bs3000l00000_ts_pkey\non ts_part_bs3000l00000 t1  (cost=0.00..8.27 rows=1 width=48)\n                         Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_bs3000l00001_cg0346l00000 t1  (cost=5760.36..1481735.21\nrows=462422 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_bs3000l00001_cg0346l00000_ts_pkey  (cost=0.00..5644.75\nrows=462422 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_cg0346l00001_cg0816k00000 t1  (cost=5951.07..1565423.79\nrows=488582 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_cg0346l00001_cg0816k00000_ts_pkey  (cost=0.00..5828.93\nrows=488582 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_cg0816k00001_cg1180k00000 t1  (cost=5513.54..1432657.90\nrows=447123 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_cg0816k00001_cg1180k00000_ts_pkey  (cost=0.00..5401.75\nrows=447123 width=0)\n          
                     Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_cg1180k00001_cg6204k00000 t1  (cost=5212.63..1329884.46\nrows=415019 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_cg1180k00001_cg6204k00000_ts_pkey  (cost=0.00..5108.87\nrows=415019 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_cg6204k00001_lm0022n00000 t1  (cost=5450.37..1371917.76\nrows=428113 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_cg6204k00001_lm0022n00000_ts_pkey  (cost=0.00..5343.35\nrows=428113 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_lm0022n00001_lm0276m00000 t1  (cost=5136.71..1298542.32\nrows=405223 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_lm0022n00001_lm0276m00000_ts_pkey  (cost=0.00..5035.40\nrows=405223 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_lm0276m00001_lm0584k00000 t1  (cost=5770.98..1525737.42\nrows=476204 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_lm0276m00001_lm0584k00000_ts_pkey  (cost=0.00..5651.93\nrows=476204 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Bitmap Heap Scan on\nts_part_lm0584k00001_sm0073k00000 t1  (cost=4536.03..1043949.51\nrows=325647 width=48)\n                         Recheck Cond: (survey_pk = 16)\n                         ->  Bitmap Index Scan on\nts_part_lm0584k00001_sm0073k00000_ts_pkey  (cost=0.00..4454.62\nrows=325647 width=0)\n                               Index Cond: (survey_pk = 16)\n                   ->  Index Scan using ts_part_sm0073k00001_ts_pkey\non ts_part_sm0073k00001 t1  (cost=0.00..8.27 rows=1 width=48)\n                         Index Cond: (survey_pk = 16)\n(46 rows)\n\n\nCheck to see if the exclusion works and yes, it does.\n\n\nsurveys=> explain SELECT  t1.SURVEY_PK, t1.SOURCE_PK, t1.TSTYPE,  t1.VALS\nsurveys->   FROM  TS t1 WHERE t1.SURVEY_PK =16  AND t1.SOURCE_PK>=\n202970108014045 AND t1.SOURCE_PK <= 202970108014909\nsurveys->  ORDER BY t1.SURVEY_PK ASC, t1.SOURCE_PK ASC\nsurveys->   ;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort  (cost=9454.13..9459.91 rows=2313 width=48)\n Sort Key: t1.source_pk\n ->  Result  (cost=0.00..9324.88 rows=2313 width=48)\n       ->  Append  (cost=0.00..9324.88 rows=2313 width=48)\n             ->  Index Scan using ts_pkey on ts t1  (cost=0.00..8.27\nrows=1 width=853)\n                   Index Cond: ((survey_pk = 16) AND (source_pk >=\n202970108014045::bigint) AND (source_pk <= 202970108014909::bigint))\n             ->  Index Scan using\nts_part_bs3000l00001_cg0346l00000_ts_pkey on\nts_part_bs3000l00001_cg0346l00000 t1  (cost=0.00..9316.61 rows=2312\nwidth=48)\n                   Index Cond: ((survey_pk = 16) AND (source_pk >=\n202970108014045::bigint) AND (source_pk <= 202970108014909::bigint))\n(8 rows)\n\n\n\nIs there any workaround for that, except changing the query? 
Any plans\nto implement it?\n\nThe second issue is connected to this one it looks like a bug to me:\n\nI have around 30 clients running the same query with different\nparameters, but the query always returns 1000 rows (boundary values\nare pre-calculated,so it's like traversal of the equiwidth histogram\nif it comes to srcid/source_pk) and the rows from parallel queries\ncannot be overlapping. Usually query returns within around a second.\nI noticed however there are some queries that hang for many hours and\nwhat is most curious some of them created several GB of temp files.\nThe partition size is around 10M entries, there are around 10\npartitions as I mentioned there is no way this query should fetch more\nthen 1000 entries. Some queries may span multiple, adjacent partitions\n(but not the one above, as we can see). I tried to run this exploding\nquery without ordering, but it did not change anything, behaviour is\nrepeatable from the command line, if the query is divided by hand with\nsame parameters values that are returned are ok, within a second.\nThere are many AccessShareLock locks, all of them granted.\nKilling the clients that issues the queries does not change much -\nthese are still running, DB immediate restart helps.\n\n\nEnvironment: Rocks 5.2, kernel 2.6.18,  PostgreSQL 8.4.2 on\nx86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20071124 (Red\nHat 4.1.2-42), 64-bit, client is JDBC.\n\nThe queries start to fail eventually due to the lack of space (over\n500GB used by temp files), i.e. one of queries hangs for hours with\ntemp allocation like:\n\n< ls -hs1 data/base/pgsql_tmp/\n1.1G pgsql_tmp27571.0\n1.1G pgsql_tmp27571.1\n1.1G pgsql_tmp27571.10\n1.1G pgsql_tmp27571.11\n1.1G pgsql_tmp27571.12\n1.1G pgsql_tmp27571.13\n1.1G pgsql_tmp27571.14\n1.1G pgsql_tmp27571.15\n1.1G pgsql_tmp27571.16\n1.1G pgsql_tmp27571.17\n1.1G pgsql_tmp27571.18\n1.1G pgsql_tmp27571.19\n1.1G pgsql_tmp27571.2\n1.0G pgsql_tmp27571.20\n1.0G pgsql_tmp27571.21\n1.1G pgsql_tmp27571.22\n1.1G pgsql_tmp27571.23\n1.1G pgsql_tmp27571.24\n1.1G pgsql_tmp27571.25\n1.1G pgsql_tmp27571.26\n801M pgsql_tmp27571.27\n1.1G pgsql_tmp27571.3\n1.1G pgsql_tmp27571.4\n1.1G pgsql_tmp27571.5\n1.1G pgsql_tmp27571.6\n1.1G pgsql_tmp27571.7\n1.1G pgsql_tmp27571.8\n1.1G pgsql_tmp27571.9\n>\n\n\nThe transaction does not generate any updates/deletes.\n\nWorking queries (subsecond) have plan like this, condition pushed down\nwill use the index this time.\n\nsurveys=> explain SELECT t0.SURVEYID, t0.SRCID, t1.SURVEY_PK,\nt1.SOURCE_PK, t1.TSTYPE, t1.VALS,\nt1.CCDIDS, t1.FLAGS, t1.OBSTIME, t1.LEN, t1.VALUEERRORS\nFROM sources t0 ,TS t1 where\n(t0.SURVEYID = 16 AND t0.SRCID >= 202970108014045 AND t0.SRCID <=\n202970108014909 and t0.SURVEYID = t1.SURVEY_PK AND t0.SRCID = t1.SOURCE_PK )\nORDER BY t0.SURVEYID ASC, t0.SRCID ASC\n;\n\n  QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\nNested Loop  (cost=0.00..68958.49 rows=17242 width=192)\n  Join Filter: (t0.srcid = t1.source_pk)\n  ->  Index Scan using sources_pkey on sources t0  (cost=0.00..35.48\nrows=1 width=12)\n        Index Cond: ((surveyid = 16) AND (srcid >=\n202970108014045::bigint) AND (srcid <= 202970108014909::bigint))\n  ->  Append  (cost=0.00..68707.45 rows=17245 width=180)\n        ->  Index Scan using ts_pkey on ts t1  (cost=0.00..8.27 rows=1\nwidth=1518)\n              Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n        ->  Index Scan using 
ts_part_bs3000l00000_ts_pkey on\nts_part_bs3000l00000 t1  (cost=0.00..8.27 rows=1 width=180)\n              Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_bs3000l00001_cg0346l00000 t1\n(cost=40.29..9208.75 rows=2312 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_bs3000l00001_cg0346l00000_ts_pkey  (cost=0.00..39.71 rows=2312\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_cg0346l00001_cg0816k00000 t1\n(cost=41.60..9729.55 rows=2443 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_cg0346l00001_cg0816k00000_ts_pkey  (cost=0.00..40.99 rows=2443\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_cg0816k00001_cg1180k00000 t1\n(cost=39.25..8906.31 rows=2236 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_cg0816k00001_cg1180k00000_ts_pkey  (cost=0.00..38.69 rows=2236\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_cg1180k00001_cg6204k00000 t1\n(cost=37.50..8266.11 rows=2075 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_cg1180k00001_cg6204k00000_ts_pkey  (cost=0.00..36.98 rows=2075\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_cg6204k00001_lm0022n00000 t1\n(cost=38.44..8528.77 rows=2141 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_cg6204k00001_lm0022n00000_ts_pkey  (cost=0.00..37.91 rows=2141\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_lm0022n00001_lm0276m00000 t1\n(cost=36.99..8071.29 rows=2026 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_lm0022n00001_lm0276m00000_ts_pkey  (cost=0.00..36.49 rows=2026\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_lm0276m00001_lm0584k00000 t1\n(cost=40.80..9482.89 rows=2381 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_lm0276m00001_lm0584k00000_ts_pkey  (cost=0.00..40.21 rows=2381\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Bitmap Heap Scan on ts_part_lm0584k00001_sm0073k00000 t1\n(cost=32.95..6488.95 rows=1628 width=180)\n              Recheck Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n              ->  Bitmap Index Scan on\nts_part_lm0584k00001_sm0073k00000_ts_pkey  (cost=0.00..32.54 rows=1628\nwidth=0)\n                    Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk\n= t0.srcid))\n        ->  Index Scan using ts_part_sm0073k00001_ts_pkey on\nts_part_sm0073k00001 t1  (cost=0.00..8.27 rows=1 width=180)\n              Index Cond: ((t1.survey_pk = 
16) AND (t1.source_pk = t0.srcid))\n(43 rows)\n\n\n\n\nAnd to check if partition exclusion is used for the working query:\nsurveys=> explain select t0.SURVEY_PK, t0.SOURCE_PK, t0.TSTYPE,\nt0.VALS from ts t0 where t0.SURVEY_pk = 16 AND t0.SOURCE_PK >=\n202970108014045 AND t0.Source_pk <= 202970108014909 ;\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nResult  (cost=0.00..9324.88 rows=2313 width=48)\n ->  Append  (cost=0.00..9324.88 rows=2313 width=48)\n       ->  Index Scan using ts_pkey on ts t0  (cost=0.00..8.27\nrows=1 width=853)\n             Index Cond: ((survey_pk = 16) AND (source_pk >=\n202970108014045::bigint) AND (source_pk <= 202970108014909::bigint))\n       ->  Index Scan using\nts_part_bs3000l00001_cg0346l00000_ts_pkey on\nts_part_bs3000l00001_cg0346l00000 t0  (cost=0.00..9316.61 rows=2312\nwidth=48)\n             Index Cond: ((survey_pk = 16) AND (source_pk >=\n202970108014045::bigint) AND (source_pk <= 202970108014909::bigint))\n(6 rows)\n\n\n\n\nCould you please advise how to cope with this? Should I file the bug?\nDoes any workaround exist?\n\nBest Regards,\nKrzysztof\n", "msg_date": "Thu, 20 May 2010 16:00:03 +0200", "msg_from": "Krzysztof Nienartowicz <[email protected]>", "msg_from_op": true, "msg_subject": "Query causing explosion of temp space with join involving\n\tpartitioning" }, { "msg_contents": "Krzysztof Nienartowicz <[email protected]> writes:\n> surveys-> SELECT t1.SURVEY_PK, t1.SOURCE_PK, t1.TSTYPE, t1.VALS\n> surveys-> FROM sources t0 ,TS t1 where\n> surveys-> (t0.SURVEYID = 16 AND t0.SRCID >= 203510110032281 AND\n> t0.SRCID <= 203520107001677 and t0.SURVEYID = t1.SURVEY_PK AND t0.SRCID =\n> t1.SOURCE_PK ) ORDER BY t0.SURVEYID ASC, t0.SRCID ASC\n\nWe don't make any attempt to infer derived inequality conditions,\nso no, those constraints on t0.srcid won't be propagated over to\nt1.source_pk. Sorry. It's been suggested before, but it would be\na lot of new mechanism and expense in the planner, and for most\nqueries it'd just slow things down to try to do that.\n\n> I have around 30 clients running the same query with different\n> parameters, but the query always returns 1000 rows (boundary values\n> are pre-calculated,so it's like traversal of the equiwidth histogram\n> if it comes to srcid/source_pk) and the rows from parallel queries\n> cannot be overlapping. Usually query returns within around a second.\n> I noticed however there are some queries that hang for many hours and\n> what is most curious some of them created several GB of temp files.\n\nCan you show us the query plan for the slow cases?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 May 2010 15:21:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query causing explosion of temp space with join involving\n\tpartitioning" } ]
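Since, as Tom notes, the planner will not derive the source_pk bounds from the join condition, one hedged workaround (where the SQL can be edited, or wrapped in a view or set-returning function, which the ORM in this thread makes awkward) is to repeat the bounds against the partitioned table's own columns, so that constraint exclusion and the per-partition indexes can apply, as the poster's direct query against ts already demonstrates. A sketch using the values from the thread:

    SELECT t1.survey_pk, t1.source_pk, t1.tstype, t1.vals
      FROM sources t0
      JOIN ts t1
        ON t1.survey_pk = t0.surveyid
       AND t1.source_pk = t0.srcid
     WHERE t0.surveyid = 16
       AND t0.srcid BETWEEN 203510110032281 AND 203520107001677
       -- redundant copies of the bounds, stated against the partitioned
       -- table itself, so irrelevant ts partitions can be excluded:
       AND t1.survey_pk = 16
       AND t1.source_pk BETWEEN 203510110032281 AND 203520107001677
     ORDER BY t0.surveyid, t0.srcid;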
[ { "msg_contents": "Hi All,\n In my application we are using postgres which runs on an embedded\nbox. I have configured autovacuum to run once for every one hour. It has 5\ndifferent databases in it. When I saw the log messages, I found that it is\nrunning autovacuum on one database every hour. As a result, on my database\nautovacuum is run once in 5 hours. Is there any way to make it run it every\nhour.\n\n\nThank you,\nVenu\n\nHi All,       In my application we are using postgres which runs on an embedded box. I have configured autovacuum to run once for every one hour. It has 5 different databases in it. When I saw the log messages, I found that it is running autovacuum on one database every hour. As a result, on my database autovacuum is run once in 5 hours. Is there any way to make it run it every hour.\nThank you,Venu", "msg_date": "Fri, 21 May 2010 15:08:43 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Autovacuum in postgres." }, { "msg_contents": "One more question \" Is is expected ?\"\n\nOn Fri, May 21, 2010 at 3:08 PM, venu madhav <[email protected]>wrote:\n\n> Hi All,\n> In my application we are using postgres which runs on an embedded\n> box. I have configured autovacuum to run once for every one hour. It has 5\n> different databases in it. When I saw the log messages, I found that it is\n> running autovacuum on one database every hour. As a result, on my database\n> autovacuum is run once in 5 hours. Is there any way to make it run it every\n> hour.\n>\n>\n> Thank you,\n> Venu\n>\n\nOne more question \" Is is expected ?\"On Fri, May 21, 2010 at 3:08 PM, venu madhav <[email protected]> wrote:\nHi All,       In my application we are using postgres which runs on an embedded box. I have configured autovacuum to run once for every one hour. It has 5 different databases in it. When I saw the log messages, I found that it is running autovacuum on one database every hour. As a result, on my database autovacuum is run once in 5 hours. Is there any way to make it run it every hour.\nThank you,Venu", "msg_date": "Fri, 21 May 2010 15:09:46 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum in postgres." }, { "msg_contents": "venu madhav wrote:\n> Hi All,\n> In my application we are using postgres which runs on an embedded\n> box. I have configured autovacuum to run once for every one hour. It has 5\n> different databases in it. When I saw the log messages, I found that it is\n> running autovacuum on one database every hour. As a result, on my database\n> autovacuum is run once in 5 hours. Is there any way to make it run it every\n> hour.\n\nWhat settings did you change to make it run every hour? Also, it will\nonly vacuum tables that need vacuuming. What version of Postgres are\nyou using?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n", "msg_date": "Thu, 27 May 2010 10:33:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum in postgres." }, { "msg_contents": "Thanks for the reply..\n I am using postgres 8.01 and since it runs on a client box, I\ncan't upgrade it. I've set the auto vacuum nap time to 3600 seconds.\n\nOn Thu, May 27, 2010 at 8:03 PM, Bruce Momjian <[email protected]> wrote:\n\n> venu madhav wrote:\n> > Hi All,\n> > In my application we are using postgres which runs on an embedded\n> > box. I have configured autovacuum to run once for every one hour. 
It has\n> 5\n> > different databases in it. When I saw the log messages, I found that it\n> is\n> > running autovacuum on one database every hour. As a result, on my\n> database\n> > autovacuum is run once in 5 hours. Is there any way to make it run it\n> every\n> > hour.\n>\n> What settings did you change to make it run every hour? Also, it will\n> only vacuum tables that need vacuuming. What version of Postgres are\n> you using?\n>\n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n\nThanks for the reply..           I am using postgres 8.01 and since it runs on a client box, I can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.On Thu, May 27, 2010 at 8:03 PM, Bruce Momjian <[email protected]> wrote:\nvenu madhav wrote:\n> Hi All,\n>        In my application we are using postgres which runs on an embedded\n> box. I have configured autovacuum to run once for every one hour. It has 5\n> different databases in it. When I saw the log messages, I found that it is\n> running autovacuum on one database every hour. As a result, on my database\n> autovacuum is run once in 5 hours. Is there any way to make it run it every\n> hour.\n\nWhat settings did you change to make it run every hour?  Also, it will\nonly vacuum tables that need vacuuming.  What version of Postgres are\nyou using?\n\n--\n  Bruce Momjian  <[email protected]>        http://momjian.us\n  EnterpriseDB                             http://enterprisedb.com", "msg_date": "Thu, 27 May 2010 20:31:08 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum in postgres." }, { "msg_contents": "venu madhav wrote:\n> Thanks for the reply..\n> I am using postgres 8.01 and since it runs on a client box, I\n> can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.\n\nThat is an older version of autovacuum that wasn't very capable.\n\n---------------------------------------------------------------------------\n\n> On Thu, May 27, 2010 at 8:03 PM, Bruce Momjian <[email protected]> wrote:\n> \n> > venu madhav wrote:\n> > > Hi All,\n> > > In my application we are using postgres which runs on an embedded\n> > > box. I have configured autovacuum to run once for every one hour. It has\n> > 5\n> > > different databases in it. When I saw the log messages, I found that it\n> > is\n> > > running autovacuum on one database every hour. As a result, on my\n> > database\n> > > autovacuum is run once in 5 hours. Is there any way to make it run it\n> > every\n> > > hour.\n> >\n> > What settings did you change to make it run every hour? Also, it will\n> > only vacuum tables that need vacuuming. What version of Postgres are\n> > you using?\n> >\n> > --\n> > Bruce Momjian <[email protected]> http://momjian.us\n> > EnterpriseDB http://enterprisedb.com\n> >\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n", "msg_date": "Thu, 27 May 2010 11:07:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum in postgres." }, { "msg_contents": "Excerpts from venu madhav's message of vie may 21 05:38:43 -0400 2010:\n> Hi All,\n> In my application we are using postgres which runs on an embedded\n> box. I have configured autovacuum to run once for every one hour. It has 5\n> different databases in it. When I saw the log messages, I found that it is\n> running autovacuum on one database every hour. 
As a result, on my database\n> autovacuum is run once in 5 hours. Is there any way to make it run it every\n> hour.\n\nIf you set naptime to 12 mins, it will run on one database every 12\nminutes, so once per hour for your database. This is not really the\nintended usage though. You will have to adjust the time if another\ndatabase is created.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 27 May 2010 12:10:20 -0400", "msg_from": "alvherre <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum in postgres." }, { "msg_contents": "On Thu, May 27, 2010 at 9:01 AM, venu madhav <[email protected]> wrote:\n> Thanks for the reply..\n>            I am using postgres 8.01 and since it runs on a client box, I\n> can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.\n\nYou've pretty much made autovac run every 5 hours with that setting.\nWhat was wrong with the original settings? Just wondering what\nproblem you were / are trying to solve here.\n", "msg_date": "Wed, 2 Jun 2010 14:12:15 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum in postgres." } ]
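Alvaro's 12-minute suggestion comes down to a single postgresql.conf setting. The sketch below assumes the 8.1-style integrated autovacuum, where one database is visited per naptime cycle, so the value is roughly the desired per-database interval divided by the number of databases; on 8.0 the equivalent knob is the sleep setting of the separate contrib pg_autovacuum daemon, so treat the exact parameter names as assumptions about the release actually in use.

    # postgresql.conf sketch (8.1-era names, value in seconds)
    stats_start_collector = on     # statistics collection is required
    stats_row_level       = on     # for the old integrated autovacuum
    autovacuum            = on
    autovacuum_naptime    = 720    # 3600 / 5 databases: each database roughly hourly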
[ { "msg_contents": "We're using a function that when run as a select statement outside of the \nfunction takes roughly 1.5s to complete whereas running an identical\nquery within a function is taking around 55s to complete.\n\nWe are lost as to why placing this query within a function as opposed to\nsubstituting the variables in a select statement is so drastically different.\n\nThe timings posted here are from a 512MB memory virtual machine and are not of\nmajor concern on their own but we are finding the same issue in our production\nenvironment with far superior hardware.\n\nThe function can be found here:\nhttp://campbell-lange.net/media/files/fn_medirota_get_staff_leave_summary.sql\n\n---\n\nTimings for the individual components on their own is as follows:\n\nselect * from fn_medirota_validate_rota_master(6);\nTime: 0.670 ms\n\nselect to_date(EXTRACT (YEAR FROM current_date)::text, 'YYYY');\nTime: 0.749 ms\n\nselect * from fn_medirota_people_template_generator(2, 6, date'2009-01-01',\ndate'2009-12-31', TRUE) AS templates;\nTime: 68.004 ms\n\nselect * from fn_medirota_people_template_generator(2, 6, date'2010-01-01',\ndate'2010-12-31', TRUE) AS templates;\nTime: 1797.323\n\n\nCopying the exact same for loop select statement from the query above into\nthe psql query buffer and running them with variable substitution yields the\nfollowing:\n\nRunning FOR loop SElECT with variable substitution:\nTime: 3150.585 ms\n\n\nWhereas invoking the function yields:\n\nselect * from fn_medirota_get_staff_leave_summary(6);\nTime: 57375.477 ms\n\n\nWe have tried using explain analyse to update the query optimiser, dropped and\nrecreated the function and have restarted both the machine and the postgres\nserver multiple times.\n\nAny help or advice would be greatly appreciated.\n\n\nKindest regards,\nTyler Hildebrandt\n\n---\n\nEXPLAIN ANALYSE VERBOSE SELECT * FROM fn_medirota_get_staff_leave_summary(6);\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n {FUNCTIONSCAN\n :startup_cost 0.00\n :total_cost 260.00\n :plan_rows 1000\n :plan_width 85\n :targetlist (\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n :resno 1\n :resname id\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 1043\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :resno 2\n :resname t_full_name\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 16\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n :resno 3\n :resname b_enabled\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 1043\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n :resno 4\n :resname t_anniversary\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n :resno 5\n :resname n_last_year_annual\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n 
:resno 6\n :resname n_last_year_other\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n :resno 7\n :resname n_this_year_annual\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 8\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 8\n }\n :resno 8\n :resname n_this_year_other\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n )\n :qual <>\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :scanrelid 1\n :funcexpr\n {FUNCEXPR\n :funcid 150447\n :funcresulttype 149366\n :funcretset true\n :funcformat 0\n :args (\n {CONST\n :consttype 23\n :consttypmod -1\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 6 0 0 0 0 0 0 0 ]\n }\n )\n }\n :funccolnames (\"id\" \"t_full_name\" \"b_enabled\" \"t_anniversary\" \"n_last_year_\n annual\" \"n_last_year_other\" \"n_this_year_annual\" \"n_this_year_other\")\n :funccoltypes <>\n :funccoltypmods <>\n }\n\nFunction Scan on fn_medirota_get_staff_leave_summary (cost=0.00..260.00\nrows=1000 width=85) (actual time=51877.812..51877.893 rows=94 loops=1)\nTotal runtime: 51878.008 ms\n(183 rows)\n\n\n\n\n-- \nTyler Hildebrandt\nSoftware Developer\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n020 7631 1555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n", "msg_date": "Fri, 21 May 2010 14:54:15 +0100", "msg_from": "Tyler Hildebrandt <[email protected]>", "msg_from_op": true, "msg_subject": "Query timing increased from 3s to 55s when used as function\n\tinstead of select" }, { "msg_contents": "On 21/05/2010 9:54 PM, Tyler Hildebrandt wrote:\n> We're using a function that when run as a select statement outside of the\n> function takes roughly 1.5s to complete whereas running an identical\n> query within a function is taking around 55s to complete.\n>\n> We are lost as to why placing this query within a function as opposed to\n> substituting the variables in a select statement is so drastically different.\n\nThis is a frequently asked question. It's the same issue as with \nprepared queries, where the planner has to pick a more general plan when \nit doesn't know the value of a parameter. The short answer is \"work \naround it by using EXECUTE ... USING to invoke your query dynamically\".\n\n( Oddly, this FAQ doesn't seem to be on the FAQ list at \nhttp://wiki.postgresql.org/wiki/FAQ )\n\n--\nCraig Ringer\n\n", "msg_date": "Thu, 27 May 2010 23:33:06 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query timing increased from 3s to 55s when used as\n\tfunction\tinstead of select" }, { "msg_contents": "On 27/05/2010 11:33 PM, Craig Ringer wrote:\n> On 21/05/2010 9:54 PM, Tyler Hildebrandt wrote:\n>> We're using a function that when run as a select statement outside of the\n>> function takes roughly 1.5s to complete whereas running an identical\n>> query within a function is taking around 55s to complete.\n>>\n>> We are lost as to why placing this query within a function as opposed to\n>> substituting the variables in a select statement is so drastically\n>> different.\n>\n> This is a frequently asked question. 
It's the same issue as with\n> prepared queries, where the planner has to pick a more general plan when\n> it doesn't know the value of a parameter. The short answer is \"work\n> around it by using EXECUTE ... USING to invoke your query dynamically\".\n>\n> ( Oddly, this FAQ doesn't seem to be on the FAQ list at\n> http://wiki.postgresql.org/wiki/FAQ )\n\nAdded as:\n\nhttp://wiki.postgresql.org/wiki/FAQ#Why_is_my_query_much_slower_when_run_as_a_prepared_query.3F\n\nand the subsequent entry too.\n\nComments, edits, clarification appreciated. I know it's not as well \nwritten as it could be, could use archive links, etc; it's just pass 1.\n\n\n--\nCraig Ringer\n", "msg_date": "Thu, 27 May 2010 23:55:13 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query timing increased from 3s to 55s when used as\n\tfunction\tinstead of select" } ]
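To make the EXECUTE ... USING workaround concrete, here is a generic sketch rather than the poster's actual fn_medirota function; the table and column names are invented for illustration. Because the query text is run dynamically, it is planned with the actual parameter value on each call instead of reusing one generic plan. RETURN QUERY EXECUTE ... USING needs PostgreSQL 8.4 or later; on older releases the value would have to be spliced into the string with quote_literal() instead.

    CREATE OR REPLACE FUNCTION staff_leave_summary(p_rota integer)
    RETURNS TABLE (id integer, full_name text, days_taken integer) AS $$
    BEGIN
        -- the dynamic statement is re-planned with the real value each call
        RETURN QUERY EXECUTE
            'SELECT s.id, s.full_name, s.days_taken
               FROM leave_summary s
              WHERE s.rota_id = $1'
            USING p_rota;
    END;
    $$ LANGUAGE plpgsql;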
[ { "msg_contents": "Hi everyone,\n\nI use DBD::Pg to interface with our 8.4.2 database, but for a particular query, performance is horrible. I'm assuming that the behavior of $dbh->prepare is as if I did PREPARE foo AS (query), so I did an explain analyze in the commandline:\n> db_alpha=# prepare foo6 as (SELECT me.id, me.assignment, me.title, me.x_firstname, me.x_lastname, me.owner, me.node, me.grade, me.folder, me.word_count, me.char_length, me.char_count, me.page_count FROM submissions me WHERE ( ( owner = $1 AND me.assignment = $2 ) ));\n> PREPARE\n> db_alpha=# explain analyze execute foo6('-1', '8996557');\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on submissions me (cost=38.84..42.85 rows=1 width=70) (actual time=346567.665..346567.665 rows=0 loops=1)\n> Recheck Cond: ((assignment = $2) AND (owner = $1))\n> -> BitmapAnd (cost=38.84..38.84 rows=1 width=0) (actual time=346567.642..346567.642 rows=0 loops=1)\n> -> Bitmap Index Scan on submissions_assignment_idx (cost=0.00..19.27 rows=177 width=0) (actual time=0.038..0.038 rows=2 loops=1)\n> Index Cond: (assignment = $2)\n> -> Bitmap Index Scan on submissions_owner_idx (cost=0.00..19.32 rows=184 width=0) (actual time=346566.501..346566.501 rows=28977245 loops=1)\n> Index Cond: (owner = $1)\n> Total runtime: 346567.757 ms\n> (8 rows)\n\n\nNow, if I run it without preparing it--just run it directly in the commandline--I get this plan:\n> db_alpha=# explain analyze SELECT me.id, me.assignment, me.title, me.x_firstname, me.x_lastname, me.owner, me.node, me.grade, me.folder, me.word_count, me.char_length, me.char_count, me.page_count FROM submissions me WHERE ( ( owner = -1 AND me.assignment = 8996557 ) )\n> db_alpha-# ;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using submissions_assignment_idx on submissions me (cost=0.00..549.15 rows=36 width=70) (actual time=0.021..0.021 rows=0 loops=1)\n> Index Cond: (assignment = 8996557)\n> Filter: (owner = (-1))\n> Total runtime: 0.042 ms\n> (4 rows)\n\nsubmissions has ~124 million rows, and owner -1 is a placeholder in my database, to fulfill a foreign key requirement. I tried REINDEXing submissions_owner_idx and performing a VACUUM ANALYZE on the submissions table, but nothing seems to make a difference for this query. One other thing to note is that if I use any other value for the owner column, it comes back really fast (< 0.04 ms).\n\nAny ideas why the query planner chooses a different query plan when using prepared statements?\n\n--Richard", "msg_date": "Fri, 21 May 2010 15:53:41 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "prepared query performs much worse than regular query" }, { "msg_contents": "On Fri, May 21, 2010 at 4:53 PM, Richard Yen <[email protected]> wrote:\n> Any ideas why the query planner chooses a different query plan when using prepared statements?\n\nA prepared plan is the best one the planner can come up with *in\ngeneral* for the query in question. 
If the distribution of the values\nyou're querying against -- in your case, \"owner\" and \"assignment\" --\naren't relatively uniform, that plan is going to be suboptimal, if not\ndownright pathological, for the more outlying-ly distributed values.\n\nLooking at your prepared plan, it seems that, on average, there are\n177 rows for every \"assignment\", and 184 per \"owner\". As it turns\nout, though, nearly a quarter of your table has an \"owner\" of -1.\nIt's not terribly surprising, with a table that big and a distribution\nskew of that magnitude, that this query plan, with these arguments,\nends up pretty firmly in the \"pathological\" category.\n\nrls\n\n-- \n:wq\n", "msg_date": "Fri, 21 May 2010 18:30:23 -0600", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prepared query performs much worse than regular query" }, { "msg_contents": "On Fri, 21 May 2010, Richard Yen wrote:\n> Any ideas why the query planner chooses a different query plan when using prepared statements?\n\nThis is a FAQ. Preparing a statement makes Postgres create a plan, without \nknowing the values that you will plug in, so it will not be as optimal as \nif the values were available. The whole idea is to avoid the planning cost \neach time the query is executed, but if your data is unusual it can \nresult in worse plans.\n\nMatthew\n\n-- \n Existence is a convenient concept to designate all of the files that an\n executable program can potentially process. -- Fortran77 standard\n", "msg_date": "Fri, 21 May 2010 23:26:50 -0400 (EDT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prepared query performs much worse than regular\n query" }, { "msg_contents": "\nOn May 21, 2010, at 8:26 PM, Matthew Wakeling wrote:\n\n> On Fri, 21 May 2010, Richard Yen wrote:\n>> Any ideas why the query planner chooses a different query plan when using prepared statements?\n> \n> This is a FAQ. Preparing a statement makes Postgres create a plan, without \n> knowing the values that you will plug in, so it will not be as optimal as \n> if the values were available. The whole idea is to avoid the planning cost \n> each time the query is executed, but if your data is unusual it can \n> result in worse plans.\n> \n\nTwo things I disagree with. \n1. The \"whole idea\" is not just to avoid planning cost. It is also to easily avoid SQL injection, reduce query parse time, and to make client code cleaner and more re-usable.\n2. The data does not need to be \"unusual\". It just needs to have a skewed distribution. Skewed is not unusual (well, it would be for a primary key :P ).\n\nMaybe the planner could note a prepared query parameter is on a high skew column and build a handful of plans to choose from, or just partially re-plan on the skewed column with each execution. \nOr make it easier for a user to have a prepared statement that re-plans the query each time. Even just a per connection parameter \"SET prepared.query.cacheplan = FALSE\"\n\n> Matthew\n> \n> -- \n> Existence is a convenient concept to designate all of the files that an\n> executable program can potentially process. 
-- Fortran77 standard\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 25 May 2010 11:27:08 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prepared query performs much worse than regular query" }, { "msg_contents": "On Tue, May 25, 2010 at 11:27:08AM -0700, Scott Carey wrote:\n> On May 21, 2010, at 8:26 PM, Matthew Wakeling wrote:\n> > On Fri, 21 May 2010, Richard Yen wrote:\n> >> Any ideas why the query planner chooses a different query plan when using prepared statements?\n> > \n> > This is a FAQ. Preparing a statement makes Postgres create a plan, without \n> > knowing the values that you will plug in, so it will not be as optimal as \n> > if the values were available. The whole idea is to avoid the planning cost \n> > each time the query is executed, but if your data is unusual it can \n> > result in worse plans.\n> > \n> Maybe the planner could note a prepared query parameter is on a high skew\n> column and build a handful of plans to choose from, or just partially\n> re-plan on the skewed column with each execution. Or make it easier for a\n> user to have a prepared statement that re-plans the query each time. Even\n> just a per connection parameter \"SET prepared.query.cacheplan = FALSE\"\n\nThere was talk in this year's developers' meeting of doing this replanning\nyou've suggested. (\"Re(?)plan parameterized plans with actual parameter\nvalues\" on http://wiki.postgresql.org/wiki/PgCon_2010_Developer_Meeting,\nspecifically). This wouldn't show up until at least 9.1, but it's something\npeople are thinking about.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Tue, 25 May 2010 13:01:49 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prepared query performs much worse than regular query" } ]
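Rosser's diagnosis above, that owner = -1 accounts for a large slice of the table, can be sanity-checked directly from the planner's statistics, which is often the quickest way to spot this kind of skew before deciding how to handle server-side prepared plans. A small sketch; the table and column names follow the thread, and pg_stats is a standard system view:

    SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'submissions'
       AND attname = 'owner';

    -- if -1 appears in most_common_vals with a large matching frequency,
    -- a single generic plan for "owner = $1" is likely to be poor for it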
[ { "msg_contents": "Hi group,\n\nI could really use your help with this one. I don't have all the\ndetails right now (I can provide more descriptions tomorrow and logs\nif needed), but maybe this will be enough:\n\nI have written a PG (8.3.8) module, which uses Flex Lexical Analyser.\nIt takes text from database field and finds matches for defined rules.\nIt returns a set of two text fields (value found and value type).\n\nWhen I run query like this:\nSELECT * FROM flex_me(SELECT some_text FROM some_table WHERE id = 1);\nIt works perfectly fine. Memory never reaches more than 1% (usually\nits below 0.5% of system mem).\n\nBut when I run query like this:\nSELECT flex_me(some_text_field) FROM some_table WHERE id = 1;\nMemory usage goes through the roof, and if the result is over about\n10k matches (rows) it eats up all memory and I get \"out of memory\"\nerror.\n\nI try to free all memory allocated, and even did a version with double\nlinked list of results but the same behaviour persists. I tried to\ntrack it down on my own and from my own trials it seems that the\nproblem lies directly in the set returning function in File 2\n\"flex_me()\" as even with 40k of results in a 2 column array it\nshouldn't take more than 1MB of memory. Also when I run it just to the\npoint of SRF_IS_FIRSTCALL() (whole bit) the memory usage doesn't go\nup, but when subsequent SRF_PERCALL calls are made it's where the\nmemory usage goes through the roof.\n\nBtw, if the following code contains some nasty errors and I'm pretty\nsure it does, please know that I'm just learning PG and C programming.\nAny help or tips would be greatly appreciated.\n\nSimplified (but still relevant) code below:\n\nFile 1 (Flex parser template which is compiled with flex):\n\n%{\n#include <stdio.h>\n\nextern void *addToken(int type);\nextern char ***flexme(char *ptr);\n\n#define T_NUM 1\n#define S_NUM \"number\"\n#define T_FLO 2\n#define S_FLO \"float\"\n#define T_DAT 3\n#define S_DAT \"date\n#define T_WRD 7\n#define S_WRD \"word\"\n\nchar ***vals;\n\nint cnt = 0, mem_cnt = 64;\n\n%}\n\nDGT [0-9]\nNUMBER (-)?{DGT}+\nFLOAT ((-)?{DGT}+[\\.,]{DGT}+)|{NUMBER}\n\nDATE_S1 \"-\"\nDATE_S2 \",\"\nDATE_S3 \".\"\nDATE_S4 \"/\"\nDATE_S5 \"\"\nDATE_YY ([0-9]|([0-9][0-9])|([0-1][0-9][0-9][0-9])|(2[0-4][0-9][0-9]))\nDATE_DD ([1-9]|(([0-2][0-9])|(3[0-1])))\nDATE_MM ([1-9]|((0[1-9])|(1[0-2])))\n\nDATE_YMD_S1 ({DATE_YY}{DATE_S1}{DATE_MM}{DATE_S1}{DATE_DD})\nDATE_YMD_S2 ({DATE_YY}{DATE_S2}{DATE_MM}{DATE_S2}{DATE_DD})\nDATE_YMD_S3 ({DATE_YY}{DATE_S3}{DATE_MM}{DATE_S3}{DATE_DD})\nDATE_YMD_S4 ({DATE_YY}{DATE_S4}{DATE_MM}{DATE_S4}{DATE_DD})\nDATE_YMD_S5 ({DATE_YY}{DATE_S5}{DATE_MM}{DATE_S5}{DATE_DD})\nDATE_YMD ({DATE_YMD_S1}|{DATE_YMD_S2}|{DATE_YMD_S3}|{DATE_YMD_S4}|{DATE_YMD_S5})\n\nWORD ([a-zA-Z0-9]+)\n\n%%\n\n{FLOAT} addToken(T_FLO);\n\n{DATE_YMD} addToken(T_DAT);\n\n{WORD} addToken(T_WRD);\n\n.|\\n /* eat up any unmatched character */\n\n%%\n\nvoid *\naddToken(int type)\n{\n int x = 0;\n\n// elog(NOTICE,\"W[%d] %s\", type, yytext);\n\n //check if we need to add more mem\n if (mem_cnt-1 <= cnt) {\n mem_cnt *= 2;\n vals = repalloc(vals, mem_cnt * sizeof(char *));\n// elog(NOTICE, \"mem increased to: %d\", mem_cnt*sizeof(char *));\n }\n vals[cnt] = palloc(2 * sizeof(char *));\n\n //types\n switch (type) {\n case T_FLO: //float\n x = strlen(S_FLO);\n vals[cnt][1] = palloc((x+1) * sizeof(char));\n strncpy(vals[cnt][1], S_FLO, x);\n vals[cnt][1][x] = '\\0';\n break;\n case T_DAT: //date\n x = strlen(S_DAT);\n vals[cnt][1] = palloc((x+1) * sizeof(char));\n 
strncpy(vals[cnt][1], S_DAT, x);\n vals[cnt][1][x] = '\\0';\n break;\n case T_WRD: //word\n x = strlen(S_WRD);\n vals[cnt][1] = palloc((x+1) * sizeof(char));\n strncpy(vals[cnt][1], S_WRD, x);\n vals[cnt][1][x] = '\\0';\n break;\n default:\n elog(ERROR,\"Unknown flexme type: %d\", type);\n break;\n }\n //value\n vals[cnt][0] = palloc((yyleng+1) * sizeof(char));\n strncpy(vals[cnt][0], yytext, yyleng);\n vals[cnt][0][yyleng] = '\\0';\n\n cnt++;\n// elog(NOTICE,\"i: %d\", cnt);\n\n return 0;\n}\n\nchar ***flexme(char *ptr)\n{\n\n YY_BUFFER_STATE bp;\n int yyerr = 0;\n cnt = 0;\n\n //initial table size\n vals = palloc(mem_cnt * sizeof(char *));\n\n bp = yy_scan_string(ptr);\n yy_switch_to_buffer(bp);\n yyerr = yylex();\n yy_delete_buffer(bp);\n\n if (yyerr != 0) {\n elog(ERROR, \"Flex parser error code: %d\", yyerr);\n }\n\n return vals;\n}\n\n\n\nFile 2 (PG function, which includes flex output analyser of compiled\nFile 1 - lex.yy.c):\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"funcapi.h\"\n\n#include \"lex.yy.c\"\n\nchar *text_to_cstring(const text *t); //this is copied directly from\nPG sources\nchar *\ntext_to_cstring(const text *t)\n{\n /* must cast away the const, unfortunately */\n text *tunpacked = pg_detoast_datum_packed((struct\nvarlena *) t);\n int len = VARSIZE_ANY_EXHDR(tunpacked);\n char *result;\n\n result = (char *) palloc(len + 1);\n memcpy(result, VARDATA_ANY(tunpacked), len);\n result[len] = '\\0';\n\n if (tunpacked != t)\n pfree(tunpacked);\n\n return result;\n}\n\n\nPG_FUNCTION_INFO_V1(flex_me);\nDatum flex_me(PG_FUNCTION_ARGS);\n\nDatum\nflex_me(PG_FUNCTION_ARGS) {\n text *in = PG_GETARG_TEXT_P(0);\n\n FuncCallContext *funcctx;\n TupleDesc tupdesc;\n AttInMetadata *attinmeta;\n int call_cntr, max_calls;\n char ***values;\n char *ptr;\n\n // stuff done only on the first call of the function\n if (SRF_IS_FIRSTCALL()) {\n MemoryContext oldcontext;\n\n // create a function context for cross-call persistence\n funcctx = SRF_FIRSTCALL_INIT();\n\n // switch to memory context appropriate for multiple function calls\n oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n\n ptr = text_to_cstring_imm(in);\n values = flexme(ptr);\n\n //free char pointer\n pfree(ptr);\n\n // Build a tuple descriptor for our result type\n if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg\n (\"function returning record called in context \"\n \"that cannot accept type record\")));\n\n // generate attribute metadata needed later to produce\n // tuples from raw C strings\n attinmeta = TupleDescGetAttInMetadata(tupdesc);\n funcctx->attinmeta = attinmeta;\n\n //pass first list element\n funcctx->user_fctx = values;\n\n // total number of tuples to be returned\n funcctx->max_calls = cnt;\n\n //go back to normal memory context\n MemoryContextSwitchTo(oldcontext);\n }\n\n // stuff done on every call of the function.\n funcctx = SRF_PERCALL_SETUP();\n call_cntr = funcctx->call_cntr;\n max_calls = funcctx->max_calls;\n attinmeta = funcctx->attinmeta;\n values = (char ***) funcctx->user_fctx;\n\n //set return routine\n if (call_cntr < max_calls) {\n char **rvals;\n HeapTuple tuple;\n Datum result;\n int i;\n\n // Prepare a values array for building the returned\n //tuple. 
This should be an array of C strings which\n //will be processed later by the type input functions\n rvals = palloc(2*sizeof(char *));\n\n //value (text)\n i = strlen(values[call_cntr][0]);\n rvals[0] = palloc((i+1)*sizeof(char));\n strncpy(rvals[0], values[call_cntr][0], i);\n rvals[0][i] = '\\0';\n\n //type (text)\n i = strlen(values[call_cntr][1]);\n rvals[1] = palloc((i+1)*sizeof(char));\n strncpy(rvals[1], values[call_cntr][1], i);\n rvals[1][i] = '\\0';\n\n // build a tuple and make into datum.\n tuple = BuildTupleFromCStrings(attinmeta, rvals);\n\n result = HeapTupleGetDatum(tuple);\n\n\n //free memory\n pfree(rvals[0]);\n pfree(rvals[1]);\n pfree(rvals);\n pfree(values[call_cntr][0]);\n pfree(values[call_cntr][1]);\n pfree(values[call_cntr]);\n\n //return datum\n SRF_RETURN_NEXT(funcctx, result);\n }\n else {\n SRF_RETURN_DONE(funcctx);\n }\n\n return true;\n}\n", "msg_date": "Mon, 24 May 2010 18:50:32 +0200", "msg_from": "=?ISO-8859-2?Q?=A3ukasz_Dejneka?= <[email protected]>", "msg_from_op": true, "msg_subject": "Certain query eating up all free memory (out of memory error)" }, { "msg_contents": "Hi group,\n\nI could really use your help with this one. I don't have all the\ndetails right now (I can provide more descriptions tomorrow and logs\nif needed), but maybe this will be enough:\n\nI have written a PG (8.3.8) module, which uses Flex Lexical Analyser.\nIt takes text from database field and finds matches for defined rules.\nIt returns a set of two text fields (value found and value type).\n\nWhen I run query like this:\nSELECT * FROM flex_me(SELECT some_text FROM some_table WHERE id = 1);\nIt works perfectly fine. Memory never reaches more than 1% (usually\nits below 0.5% of system mem).\n\nBut when I run query like this:\nSELECT flex_me(some_text_field) FROM some_table WHERE id = 1;\nMemory usage goes through the roof, and if the result is over about\n10k matches (rows) it eats up all memory and I get \"out of memory\"\nerror.\n\nI try to free all memory allocated, and even did a version with double\nlinked list of results but the same behaviour persists. I tried to\ntrack it down on my own and from my own trials it seems that the\nproblem lies directly in the set returning function in File 2\n\"flex_me()\" as even with 40k of results in a 2 column array it\nshouldn't take more than 1MB of memory. 
Also when I run it just to the\npoint of SRF_IS_FIRSTCALL() (whole bit) the memory usage doesn't go\nup, but when subsequent SRF_PERCALL calls are made it's where the\nmemory usage goes through the roof.\n\nBtw, if the following code contains some nasty errors and I'm pretty\nsure it does, please know that I'm just learning PG and C programming.\nAny help or tips would be greatly appreciated.\n\nSimplified (but still relevant) code below:\n\nFile 1 (Flex parser template which is compiled with flex):\n\n%{\n#include <stdio.h>\n\nextern void *addToken(int type);\nextern char ***flexme(char *ptr);\n\n#define T_NUM  1\n#define S_NUM  \"number\"\n#define T_FLO  2\n#define S_FLO  \"float\"\n#define T_DAT  3\n#define S_DAT  \"date\n#define T_WRD  7\n#define S_WRD  \"word\"\n\nchar ***vals;\n\nint cnt = 0, mem_cnt = 64;\n\n%}\n\nDGT          [0-9]\nNUMBER       (-)?{DGT}+\nFLOAT        ((-)?{DGT}+[\\.,]{DGT}+)|{NUMBER}\n\nDATE_S1      \"-\"\nDATE_S2      \",\"\nDATE_S3      \".\"\nDATE_S4      \"/\"\nDATE_S5      \"\"\nDATE_YY      ([0-9]|([0-9][0-9])|([0-1][0-9][0-9][0-9])|(2[0-4][0-9][0-9]))\nDATE_DD      ([1-9]|(([0-2][0-9])|(3[0-1])))\nDATE_MM      ([1-9]|((0[1-9])|(1[0-2])))\n\nDATE_YMD_S1  ({DATE_YY}{DATE_S1}{DATE_MM}{DATE_S1}{DATE_DD})\nDATE_YMD_S2  ({DATE_YY}{DATE_S2}{DATE_MM}{DATE_S2}{DATE_DD})\nDATE_YMD_S3  ({DATE_YY}{DATE_S3}{DATE_MM}{DATE_S3}{DATE_DD})\nDATE_YMD_S4  ({DATE_YY}{DATE_S4}{DATE_MM}{DATE_S4}{DATE_DD})\nDATE_YMD_S5  ({DATE_YY}{DATE_S5}{DATE_MM}{DATE_S5}{DATE_DD})\nDATE_YMD     ({DATE_YMD_S1}|{DATE_YMD_S2}|{DATE_YMD_S3}|{DATE_YMD_S4}|{DATE_YMD_S5})\n\nWORD         ([a-zA-Z0-9]+)\n\n%%\n\n{FLOAT}      addToken(T_FLO);\n\n{DATE_YMD}   addToken(T_DAT);\n\n{WORD}       addToken(T_WRD);\n\n.|\\n     /* eat up any unmatched character */\n\n%%\n\nvoid *\naddToken(int type)\n{\n int   x = 0;\n\n//    elog(NOTICE,\"W[%d] %s\", type, yytext);\n\n   //check if we need to add more mem\n   if (mem_cnt-1 <= cnt) {\n       mem_cnt *= 2;\n       vals = repalloc(vals, mem_cnt * sizeof(char *));\n//        elog(NOTICE, \"mem increased to: %d\", mem_cnt*sizeof(char *));\n   }\n   vals[cnt] = palloc(2 * sizeof(char *));\n\n   //types\n   switch (type) {\n       case T_FLO:    //float\n           x = strlen(S_FLO);\n           vals[cnt][1] = palloc((x+1) * sizeof(char));\n           strncpy(vals[cnt][1], S_FLO, x);\n           vals[cnt][1][x] = '\\0';\n           break;\n       case T_DAT:     //date\n           x = strlen(S_DAT);\n           vals[cnt][1] = palloc((x+1) * sizeof(char));\n           strncpy(vals[cnt][1], S_DAT, x);\n           vals[cnt][1][x] = '\\0';\n           break;\n       case T_WRD:     //word\n           x = strlen(S_WRD);\n           vals[cnt][1] = palloc((x+1) * sizeof(char));\n           strncpy(vals[cnt][1], S_WRD, x);\n           vals[cnt][1][x] = '\\0';\n           break;\n       default:\n           elog(ERROR,\"Unknown flexme type: %d\", type);\n           break;\n   }\n   //value\n   vals[cnt][0] = palloc((yyleng+1) * sizeof(char));\n   strncpy(vals[cnt][0], yytext, yyleng);\n   vals[cnt][0][yyleng] = '\\0';\n\n   cnt++;\n//    elog(NOTICE,\"i: %d\", cnt);\n\n   return 0;\n}\n\nchar ***flexme(char *ptr)\n{\n\n   YY_BUFFER_STATE bp;\n   int   yyerr = 0;\n   cnt = 0;\n\n   //initial table size\n   vals = palloc(mem_cnt * sizeof(char *));\n\n   bp = yy_scan_string(ptr);\n   yy_switch_to_buffer(bp);\n   yyerr = yylex();\n   yy_delete_buffer(bp);\n\n   if (yyerr != 0) {\n       elog(ERROR, \"Flex parser error code: %d\", yyerr);\n   }\n\n   return vals;\n}\n\n\n\nFile 
2 (PG function, which includes flex output analyser of compiled\nFile 1 - lex.yy.c):\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"funcapi.h\"\n\n#include \"lex.yy.c\"\n\nchar *text_to_cstring(const text *t);   //this is copied directly from\nPG sources\nchar *\ntext_to_cstring(const text *t)\n{\n       /* must cast away the const, unfortunately */\n       text           *tunpacked = pg_detoast_datum_packed((struct\nvarlena *) t);\n       int                        len = VARSIZE_ANY_EXHDR(tunpacked);\n       char           *result;\n\n       result = (char *) palloc(len + 1);\n       memcpy(result, VARDATA_ANY(tunpacked), len);\n       result[len] = '\\0';\n\n       if (tunpacked != t)\n               pfree(tunpacked);\n\n       return result;\n}\n\n\nPG_FUNCTION_INFO_V1(flex_me);\nDatum    flex_me(PG_FUNCTION_ARGS);\n\nDatum\nflex_me(PG_FUNCTION_ARGS) {\n   text             *in = PG_GETARG_TEXT_P(0);\n\n   FuncCallContext  *funcctx;\n   TupleDesc        tupdesc;\n   AttInMetadata    *attinmeta;\n   int              call_cntr, max_calls;\n   char             ***values;\n   char             *ptr;\n\n   // stuff done only on the first call of the function\n   if (SRF_IS_FIRSTCALL()) {\n       MemoryContext oldcontext;\n\n       // create a function context for cross-call persistence\n       funcctx = SRF_FIRSTCALL_INIT();\n\n       // switch to memory context appropriate for multiple  function calls\n       oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n\n       ptr = text_to_cstring_imm(in);\n       values = flexme(ptr);\n\n       //free char pointer\n       pfree(ptr);\n\n       // Build a tuple descriptor for our result type\n       if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n           ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg\n              (\"function returning record called in context \"\n               \"that cannot accept type record\")));\n\n       // generate attribute metadata needed later to produce\n       //   tuples from raw C strings\n       attinmeta = TupleDescGetAttInMetadata(tupdesc);\n       funcctx->attinmeta = attinmeta;\n\n       //pass first list element\n       funcctx->user_fctx = values;\n\n       // total number of tuples to be returned\n       funcctx->max_calls = cnt;\n\n       //go back to normal memory context\n       MemoryContextSwitchTo(oldcontext);\n   }\n\n   // stuff done on every call of the function.\n   funcctx = SRF_PERCALL_SETUP();\n   call_cntr = funcctx->call_cntr;\n   max_calls = funcctx->max_calls;\n   attinmeta = funcctx->attinmeta;\n   values = (char ***) funcctx->user_fctx;\n\n   //set return routine\n   if (call_cntr < max_calls) {\n       char      **rvals;\n       HeapTuple tuple;\n       Datum     result;\n       int       i;\n\n       // Prepare a values array for building the returned\n       //tuple. 
This should be an array of C strings which\n       //will be processed later by the type input functions\n       rvals = palloc(2*sizeof(char *));\n\n       //value (text)\n       i = strlen(values[call_cntr][0]);\n       rvals[0] = palloc((i+1)*sizeof(char));\n       strncpy(rvals[0], values[call_cntr][0], i);\n       rvals[0][i] = '\\0';\n\n       //type (text)\n       i = strlen(values[call_cntr][1]);\n       rvals[1] = palloc((i+1)*sizeof(char));\n       strncpy(rvals[1], values[call_cntr][1], i);\n       rvals[1][i] = '\\0';\n\n       // build a tuple and make into datum.\n       tuple = BuildTupleFromCStrings(attinmeta, rvals);\n\n       result = HeapTupleGetDatum(tuple);\n\n\n       //free memory\n       pfree(rvals[0]);\n       pfree(rvals[1]);\n       pfree(rvals);\n       pfree(values[call_cntr][0]);\n       pfree(values[call_cntr][1]);\n       pfree(values[call_cntr]);\n\n       //return datum\n       SRF_RETURN_NEXT(funcctx, result);\n   }\n   else {\n       SRF_RETURN_DONE(funcctx);\n   }\n\n   return true;\n}\n", "msg_date": "Tue, 25 May 2010 08:16:15 +0200", "msg_from": "=?ISO-8859-2?Q?=A3ukasz_Dejneka?= <[email protected]>", "msg_from_op": true, "msg_subject": "Certain query eating up all free memory (out of memory error)" }, { "msg_contents": "EXPLAIN ANALYSE on smaller query:\n\"Seq Scan on teksty (cost=0.00..1353.50 rows=1 width=695) (actual\ntime=0.220..12.354 rows=368 loops=1)\"\n\" Filter: (id = 1)\"\n\"Total runtime: 12.488 ms\"\n\n\nMemory config:\n\n# - Memory -\n\nshared_buffers = 24MB\ntemp_buffers = 8MB\nmax_prepared_transactions = 5\nwork_mem = 16MB # min 64kB\nmaintenance_work_mem = 16MB # min 1MB\nmax_stack_depth = 2MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 153600\n#max_fsm_relations = 1000\n\nMemory info from logs:\n\nTopMemoryContext: 49416 total in 6 blocks; 7680 free (8 chunks); 41736 used\n TopTransactionContext: 8192 total in 1 blocks; 7856 free (1 chunks); 336 used\n Type information cache: 8192 total in 1 blocks; 1800 free (0\nchunks); 6392 used\n CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used\n MbProcContext: 1024 total in 1 blocks; 928 free (6 chunks); 96 used\n Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used\n Operator lookup cache: 24576 total in 2 blocks; 14072 free (6\nchunks); 10504 used\n MessageContext: 8192 total in 1 blocks; 752 free (0 chunks); 7440 used\n smgr relation table: 8192 total in 1 blocks; 2808 free (0 chunks); 5384 used\n TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0\nchunks); 16 used\n Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\n PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\n PortalHeapMemory: 1024 total in 1 blocks; 880 free (0 chunks); 144 used\n ExecutorState: 516096 total in 6 blocks; 15368 free (7 chunks);\n500728 used\n SRF multi-call context: 2499608 total in 276 blocks; 714136\nfree (38704 chunks); 1785472 used\n ExprContext: 3157941940 total in 12908 blocks; 505592 free (11\nchunks); 3157436348 used\n Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used\n CacheMemoryContext: 667472 total in 20 blocks; 239368 free (1\nchunks); 428104 used\n pg_toast_150116_index: 1024 total in 1 blocks; 240 free (0 chunks); 784 used\n pg_database_datname_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_index_indrelid_index: 1024 total in 1 blocks; 304 free (0\nchunks); 720 used\n pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used\n 
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_language_name_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_statistic_relid_att_index: 1024 total in 1 blocks; 240 free (0\nchunks); 784 used\n pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_namespace_nspname_index: 1024 total in 1 blocks; 304 free (0\nchunks); 720 used\n pg_opfamily_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used\n pg_opclass_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_amop_fam_strat_index: 1024 total in 1 blocks; 88 free (0\nchunks); 936 used\n pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\n pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 240 free\n(0 chunks); 784 used\n pg_cast_source_target_index: 1024 total in 1 blocks; 240 free (0\nchunks); 784 used\n pg_auth_members_role_member_index: 1024 total in 1 blocks; 280\nfree (0 chunks); 744 used\n pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 240 free\n(0 chunks); 784 used\n pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_authid_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_conversion_default_index: 1024 total in 1 blocks; 88 free (0\nchunks); 936 used\n pg_language_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used\n pg_enum_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used\n pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 152 free\n(0 chunks); 872 used\n pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_database_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_conversion_name_nsp_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_class_relname_nsp_index: 1024 total in 1 blocks; 240 free (0\nchunks); 784 used\n pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 280 free\n(0 chunks); 744 used\n pg_class_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_amproc_fam_proc_index: 1024 total in 1 blocks; 88 free (0\nchunks); 936 used\n pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 88 free\n(0 chunks); 936 used\n pg_index_indexrelid_index: 1024 total in 1 blocks; 304 free (0\nchunks); 720 used\n pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_authid_rolname_index: 1024 total in 1 blocks; 304 free (0\nchunks); 720 used\n pg_auth_members_member_role_index: 1024 total in 1 blocks; 280\nfree (0 chunks); 744 used\n pg_enum_typid_label_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_constraint_oid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_conversion_oid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_ts_template_tmplname_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_ts_config_map_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\n pg_namespace_oid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n pg_type_typname_nsp_index: 1024 total in 1 blocks; 280 free (0\nchunks); 744 used\n pg_operator_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_amop_opr_fam_index: 1024 total in 1 
blocks; 240 free (0 chunks); 784 used\n pg_proc_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\n pg_opfamily_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\n pg_ts_template_oid_index: 1024 total in 1 blocks; 344 free (0\nchunks); 680 used\n MdSmgr: 8192 total in 1 blocks; 7984 free (0 chunks); 208 used\n LOCALLOCK hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\n Timezones: 48616 total in 2 blocks; 5968 free (0 chunks); 42648 used\n ErrorContext: 8192 total in 1 blocks; 8176 free (4 chunks); 16 used\n", "msg_date": "Tue, 25 May 2010 08:37:11 +0200", "msg_from": "=?ISO-8859-2?Q?=A3ukasz_Dejneka?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Certain query eating up all free memory (out of memory error)" }, { "msg_contents": "On Mon, May 24, 2010 at 12:50 PM, Łukasz Dejneka <[email protected]> wrote:\n> Hi group,\n>\n> I could really use your help with this one. I don't have all the\n> details right now (I can provide more descriptions tomorrow and logs\n> if needed), but maybe this will be enough:\n>\n> I have written a PG (8.3.8) module, which uses Flex Lexical Analyser.\n> It takes text from database field and finds matches for defined rules.\n> It returns a set of two text fields (value found and value type).\n>\n> When I run query like this:\n> SELECT * FROM flex_me(SELECT some_text FROM some_table WHERE id = 1);\n> It works perfectly fine. Memory never reaches more than 1% (usually\n> its below 0.5% of system mem).\n>\n> But when I run query like this:\n> SELECT flex_me(some_text_field) FROM some_table WHERE id = 1;\n> Memory usage goes through the roof, and if the result is over about\n> 10k matches (rows) it eats up all memory and I get \"out of memory\"\n> error.\n\nI'm not sure exactly what's happening in your particular case, but\nthere is some known suckage in this area.\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-05/msg00230.php\nhttp://archives.postgresql.org/pgsql-hackers/2010-05/msg00395.php\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 2 Jun 2010 17:26:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Certain query eating up all free memory (out of memory\n\terror)" } ]
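A side note on the code in this thread: as posted it does not compile, because File 2 calls text_to_cstring_imm() while only text_to_cstring() is defined, and File 1's #define S_DAT is missing its closing quote. Independently of the targetlist-SRF behaviour the linked -hackers threads describe, one plausible contributor to the 3 GB ExprContext in the memory dump is that PG_GETARG_TEXT_P(0) runs on every per-call invocation, so a text value large enough to be toasted or compressed gets detoasted into the per-tuple context again and again, and that context is not reset until the whole result set has been produced. The sketch below only shows how the per-call side could be slimmed down; it assumes the poster's lex.yy.c (flexme() and the global cnt) and the text_to_cstring copy from File 2 are linked in, and it is not claimed to be the definitive fix for the 8.3 behaviour Robert Haas points to.

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"

/* Provided by the poster's File 1 (lex.yy.c) and File 2 respectively. */
extern char ***flexme(char *ptr);
extern int cnt;
extern char *text_to_cstring(const text *t);

PG_FUNCTION_INFO_V1(flex_me);
Datum flex_me(PG_FUNCTION_ARGS);

Datum
flex_me(PG_FUNCTION_ARGS)
{
    FuncCallContext *funcctx;

    if (SRF_IS_FIRSTCALL())
    {
        MemoryContext oldcontext;
        TupleDesc   tupdesc;
        text       *in;
        char       *ptr;

        funcctx = SRF_FIRSTCALL_INIT();
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

        /* Detoast and convert the argument once, in the long-lived context. */
        in = PG_GETARG_TEXT_P(0);
        ptr = text_to_cstring(in);

        /* Parse once; the result array lives until the SRF is done. */
        funcctx->user_fctx = flexme(ptr);
        funcctx->max_calls = cnt;

        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("function returning record called in context "
                            "that cannot accept type record")));
        funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);

        MemoryContextSwitchTo(oldcontext);
    }

    funcctx = SRF_PERCALL_SETUP();

    if (funcctx->call_cntr < funcctx->max_calls)
    {
        char     ***values = (char ***) funcctx->user_fctx;
        HeapTuple   tuple;

        /*
         * values[n] already holds two NUL-terminated C strings, so it can be
         * handed to BuildTupleFromCStrings directly; no per-call copies.
         */
        tuple = BuildTupleFromCStrings(funcctx->attinmeta,
                                       values[funcctx->call_cntr]);
        SRF_RETURN_NEXT(funcctx, HeapTupleGetDatum(tuple));
    }

    SRF_RETURN_DONE(funcctx);
}

Whether this actually curbs the growth would still need to be verified against the SELECT flex_me(col) FROM ... form on 8.3, since the referenced -hackers threads describe a leak in that code path itself.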
[ { "msg_contents": "Hello,\n\nI work for a web app to send email newsletters, and I have one question\nabout postgres' performance in two different setups. Actually we have one\n4GB Ram VPS running our app server (it's a rails app under nginx and thin)\nand a 4GB Ram VPS running the database (18GB). We want to migrate to bare\nmetal servers, but I have made two different setups and don't know what is\nthe best option:\n\nOption 1:\nApp Server: Dual Xeon 5130 dual core with 4GB ram and SATA disk\nPostgres: Xeon 3360 quad core with 4GB ram and 2 x 146GB 15k RPM SAS (RAID1)\ndisks\n\nOption 2:\nApp Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and 2x 146GB\n15k RPM SAS (RAID1) disks\n\nI know the first option would be better in terms of I/O for postgres, but\nour app server doesnt use much I/O and with the second option we would have\nmuch more ram.\n\n\nThank you\n\nPedro Axelrud\nhttp://mailee.me\nhttp://softa.com.br\nhttp://flavors.me/pedroaxl\n\nHello, I work for a web app to send email newsletters, and I have one question about postgres' performance in two different setups. Actually we have one 4GB Ram VPS running our app server (it's a rails app under nginx and thin) and a 4GB Ram VPS running the database (18GB). We want to migrate to bare metal servers, but I have made two different setups and don't know what is the best option:\nOption 1:App Server: Dual Xeon 5130 dual core with 4GB ram and SATA disk Postgres: Xeon 3360 quad core with 4GB ram and 2 x 146GB 15k RPM SAS (RAID1) disks \nOption 2:App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and 2x 146GB 15k RPM SAS (RAID1) disksI know the first option would be better in terms of I/O for postgres, but our app server doesnt use much I/O and with the second option we would have much more ram.\nThank youPedro Axelrudhttp://mailee.mehttp://softa.com.brhttp://flavors.me/pedroaxl", "msg_date": "Mon, 24 May 2010 18:03:43 -0300", "msg_from": "Pedro Axelrud <[email protected]>", "msg_from_op": true, "msg_subject": "which hardware setup" }, { "msg_contents": "> Option 2:\n> App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and \n> 2x 146GB 15k RPM SAS (RAID1) disks\n>\n\n you didnt mention your dataset size, but i the second option would \nbe preferrable in most situations since it gives more of the os memory \nfor disc caching. 12 gb vs 4 gb for the host running pg\n", "msg_date": "Tue, 25 May 2010 08:21:47 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which hardware setup" }, { "msg_contents": "Sorry Jesper, I thought I had mentioned.. our dataset have 18GB.\n\n\nPedro Axelrud\nhttp://mailee.me\nhttp://softa.com.br\nhttp://flavors.me/pedroaxl\n\n\nOn Tue, May 25, 2010 at 03:21, Jesper Krogh <[email protected]> wrote:\n\n> Option 2:\n>> App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and 2x\n>> 146GB 15k RPM SAS (RAID1) disks\n>>\n>>\n> you didnt mention your dataset size, but i the second option would be\n> preferrable in most situations since it gives more of the os memory for disc\n> caching. 12 gb vs 4 gb for the host running pg\n>\n\nSorry Jesper, I thought I had mentioned.. 
our dataset is 18GB.", "msg_date": "Tue, 25 May 2010 14:07:46 -0300", "msg_from": "Pedro Axelrud <[email protected]>", "msg_from_op": true, "msg_subject": "Re: which hardware setup" } ]
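For the combined 12GB option discussed above, the memory-related settings end up doing most of the work of the decision. The fragment below is only an illustrative starting point for an 8.x-era server sharing the box with the Rails stack; the exact values are assumptions, not figures from this thread, and would need to be checked against the real connection count and workload.

# postgresql.conf -- illustrative starting point, not from the thread
shared_buffers = 2GB              # modest share of the 12GB, leaving room for the app server
effective_cache_size = 8GB        # rough estimate of OS cache available to the 18GB dataset
work_mem = 16MB                   # per sort/hash node; keep small if many connections
maintenance_work_mem = 256MB      # vacuum and index builds
checkpoint_segments = 16          # spread checkpoint I/O on the RAID1 pair
wal_buffers = 8MB

Even so, an 18GB dataset will not fit entirely in 12GB of RAM, so the 15k SAS RAID1 still matters for random reads; benchmarking with the application's own query mix is the only reliable way to compare the two setups.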
[ { "msg_contents": "Hi,\n\nI wrote a query (see below) that extracts climate data from weather stations\nwithin a given radius of a city using the dates for which those weather\nstations actually have data. The query uses the measurement table's only\nindex:\n\nCREATE UNIQUE INDEX measurement_001_stc_idx\n ON climate.measurement_001\n USING btree\n (*station_id, taken, category_id*);\n\nThe value for *random_page_cost* was at 2.0; reducing it to 1.1 had a\nmassive performance improvement (nearly an order of magnitude). While the\nresults now return in 5 seconds (down from ~85 seconds), problematic lines\nremain. Bumping the query's end date by a single year causes a full table\nscan:\n\n sc.taken_start >= '1900-01-01'::date AND\n sc.taken_end <= '1997-12-31'::date AND *\n*\nHow do I persuade PostgreSQL to use the indexes, regardless of number of\nyears between the two dates? (A full table scan against 43 million rows is\nprobably not the best plan.) Find the EXPLAIN ANALYSE results below the\nquery.\n\nThanks again!\n\nDave\n\nQuery\n SELECT\n extract(YEAR FROM m.taken) AS year,\n avg(m.amount) as amount\n FROM\n climate.city c,\n climate.station s,\n climate.station_category sc,\n climate.measurement m\n WHERE\n c.id = 5182 AND\n earth_distance(\n ll_to_earth(c.latitude_decimal,c.longitude_decimal),\n ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND\n s.elevation BETWEEN 0 AND 3000 AND\n s.applicable = TRUE AND\n sc.station_id = s.id AND\n sc.category_id = 1 AND\n* sc.taken_start >= '1900-01-01'::date AND\n sc.taken_end <= '1996-12-31'::date AND\n* m.station_id = s.id AND\n m.taken BETWEEN sc.taken_start AND sc.taken_end AND\n m.category_id = sc.category_id\n GROUP BY\n extract(YEAR FROM m.taken)\n ORDER BY\n extract(YEAR FROM m.taken)\n\n1900 to 1996: Index*\n*\"Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual\ntime=2268.929..2268.935 rows=92 loops=1)\"\n\" Sort Key: (date_part('year'::text, (m.taken)::timestamp without time\nzone))\"\n\" Sort Method: quicksort Memory: 32kB\"\n\" -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12)\n(actual time=2268.829..2268.886 rows=92 loops=1)\"\n\" -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12)\n(actual time=0.807..2084.206 rows=134893 loops=1)\"\n\" Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <=\nsc.taken_end) AND (sc.station_id = m.station_id))\"\n\" -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18)\n(actual time=0.502..521.937 rows=23 loops=1)\"\n\" Join Filter:\n((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double\nprecision, (c.longitude_decimal)::double precision))::cube,\n(ll_to_earth((s.latitude_decimal)::double precision,\n(s.longitude_decimal)::double precision))::cube)) / 1000::double precision)\n<= 30::double precision)\"\n\" -> Index Scan using city_pkey1 on city c\n(cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)\"\n\" Index Cond: (id = 5182)\"\n\" -> Nested Loop (cost=0.00..9907.73 rows=3659\nwidth=34) (actual time=0.014..28.937 rows=3458 loops=1)\"\n\" -> Seq Scan on station_category sc\n(cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458\nloops=1)\"\n\" Filter: ((taken_start >=\n'1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id =\n1))\"\n\" -> Index Scan using station_pkey1 on station s\n(cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1\nloops=3458)\"\n\" Index Cond: (s.id = sc.station_id)\"\n\" Filter: (s.applicable AND (s.elevation >=\n0) AND 
(s.elevation <= 3000))\"\n\" -> Append (cost=0.00..1072.27 rows=947 width=18) (actual\ntime=6.996..63.199 rows=5865 loops=23)\"\n\" -> Seq Scan on measurement m (cost=0.00..25.00 rows=6\nwidth=22) (actual time=0.000..0.000 rows=0 loops=23)\"\n\" Filter: (m.category_id = 1)\"\n\" -> Bitmap Heap Scan on measurement_001 m\n(cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865\nloops=23)\"\n\" Recheck Cond: ((m.station_id = sc.station_id) AND\n(m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id\n= 1))\"\n\" -> Bitmap Index Scan on measurement_001_stc_idx\n(cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865\nloops=23)\"\n\" Index Cond: ((m.station_id = sc.station_id)\nAND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND\n(m.category_id = 1))\"\n\"Total runtime: 2269.264 ms\"\n\n1900 to 1997: Full Table Scan*\n*\"Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual\ntime=86165.797..86165.809 rows=94 loops=1)\"\n\" Sort Key: (date_part('year'::text, (m.taken)::timestamp without time\nzone))\"\n\" Sort Method: quicksort Memory: 32kB\"\n\" -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12)\n(actual time=86165.654..86165.736 rows=94 loops=1)\"\n\" -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12)\n(actual time=534.786..85920.007 rows=139721 loops=1)\"\n\" Hash Cond: (m.station_id = sc.station_id)\"\n\" Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <=\nsc.taken_end))\"\n\" -> Append (cost=0.00..867005.80 rows=43670150 width=18)\n(actual time=0.009..79202.329 rows=43670079 loops=1)\"\n\" -> Seq Scan on measurement m (cost=0.00..25.00 rows=6\nwidth=22) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" Filter: (category_id = 1)\"\n\" -> Seq Scan on measurement_001 m\n(cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008\nrows=43670079 loops=1)\"\n\" Filter: (category_id = 1)\"\n\" -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual\ntime=534.704..534.704 rows=25 loops=1)\"\n\" -> Nested Loop (cost=847.87..4277.93 rows=1253\nwidth=18) (actual time=415.837..534.682 rows=25 loops=1)\"\n\" Join Filter:\n((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double\nprecision, (c.longitude_decimal)::double precision))::cube,\n(ll_to_earth((s.latitude_decimal)::double precision,\n(s.longitude_decimal)::double precision))::cube)) / 1000::double precision)\n<= 30::double precision)\"\n\" -> Index Scan using city_pkey1 on city c\n(cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)\"\n\" Index Cond: (id = 5182)\"\n\" -> Hash Join (cost=847.87..1352.07 rows=3760\nwidth=34) (actual time=6.427..35.107 rows=3552 loops=1)\"\n\" Hash Cond: (s.id = sc.station_id)\"\n\" -> Seq Scan on station s\n(cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949\nloops=1)\"\n\" Filter: (applicable AND (elevation >=\n0) AND (elevation <= 3000))\"\n\" -> Hash (cost=800.87..800.87 rows=3760\nwidth=14) (actual time=6.416..6.416 rows=3552 loops=1)\"\n\" -> Bitmap Heap Scan on\nstation_category sc (cost=430.29..800.87 rows=3760 width=14) (actual\ntime=2.316..5.353 rows=3552 loops=1)\"\n\" Recheck Cond: (category_id =\n1)\"\n\" Filter: ((taken_start >=\n'1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))\"\n\" -> Bitmap Index Scan on\nstation_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0)\n(actual time=2.268..2.268 rows=6339 loops=1)\"\n\" Index Cond: (category_id\n= 1)\"\n\"Total runtime: 86165.936 
ms\"\n*\n*\n\nHi,I wrote a query (see below) that extracts climate data from weather stations within a given radius of a city using the dates for which those weather stations actually have data. The query uses the measurement table's only index:\nCREATE UNIQUE INDEX measurement_001_stc_idx  ON climate.measurement_001\n  USING btree  (station_id, taken, category_id);\nThe value for random_page_cost was at 2.0; reducing it to 1.1 had a massive performance improvement (nearly an order of magnitude). While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:\n    sc.taken_start >= '1900-01-01'::date AND\n    sc.taken_end <= '1997-12-31'::date AND \nHow do I persuade PostgreSQL to use the indexes, regardless of number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query.\nThanks again!DaveQuery  SELECT     extract(YEAR FROM m.taken) AS year,\n    avg(m.amount) as amount  FROM \n    climate.city c,     climate.station s, \n    climate.station_category sc,     climate.measurement m\n  WHERE     c.id = 5182 AND \n    earth_distance(      ll_to_earth(c.latitude_decimal,c.longitude_decimal),\n      ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND     s.elevation BETWEEN 0 AND 3000 AND \n    s.applicable = TRUE AND    sc.station_id = s.id AND \n    sc.category_id = 1 AND     sc.taken_start >= '1900-01-01'::date AND\n    sc.taken_end <= '1996-12-31'::date AND     m.station_id = s.id AND\n    m.taken BETWEEN sc.taken_start AND sc.taken_end AND    m.category_id = sc.category_id\n  GROUP BY    extract(YEAR FROM m.taken)\n  ORDER BY    extract(YEAR FROM m.taken)\n1900 to 1996: Index\"Sort  (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)\"\"  Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))\"\n\"  Sort Method:  quicksort  Memory: 32kB\"\"  ->  HashAggregate  (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)\"\"        ->  Nested Loop  (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)\"\n\"              Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))\"\"              ->  Nested Loop  (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)\"\n\"                    Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)\"\n\"                    ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)\"\"                          Index Cond: (id = 5182)\"\n\"                    ->  Nested Loop  (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)\"\"                          ->  Seq Scan on station_category sc  (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)\"\n\"                                Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))\"\"                          ->  Index Scan using station_pkey1 on 
station s  (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)\"\n\"                                Index Cond: (s.id = sc.station_id)\"\"                                Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))\"\n\"              ->  Append  (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)\"\"                    ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)\"\n\"                          Filter: (m.category_id = 1)\"\"                    ->  Bitmap Heap Scan on measurement_001 m  (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)\"\n\"                          Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))\"\"                          ->  Bitmap Index Scan on measurement_001_stc_idx  (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)\"\n\"                                Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))\"\"Total runtime: 2269.264 ms\"\n1900 to 1997: Full Table Scan\"Sort  (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)\"\n\"  Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))\"\"  Sort Method:  quicksort  Memory: 32kB\"\"  ->  HashAggregate  (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)\"\n\"        ->  Hash Join  (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)\"\"              Hash Cond: (m.station_id = sc.station_id)\"\"              Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))\"\n\"              ->  Append  (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)\"\"                    ->  Seq Scan on measurement m  (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)\"\n\"                          Filter: (category_id = 1)\"\"                    ->  Seq Scan on measurement_001 m  (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)\"\n\"                          Filter: (category_id = 1)\"\"              ->  Hash  (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)\"\"                    ->  Nested Loop  (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)\"\n\"                          Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)\"\n\"                          ->  Index Scan using city_pkey1 on city c  (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)\"\"                                Index Cond: (id = 5182)\"\n\"                          ->  Hash Join  (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)\"\"                                Hash Cond: (s.id = sc.station_id)\"\n\"                                ->  Seq Scan on station s  (cost=0.00..367.25 rows=7948 width=20) (actual 
time=0.004..23.529 rows=7949 loops=1)\"\"                                      Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))\"\n\"                                ->  Hash  (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)\"\"                                      ->  Bitmap Heap Scan on station_category sc  (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)\"\n\"                                            Recheck Cond: (category_id = 1)\"\"                                            Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))\"\n\"                                            ->  Bitmap Index Scan on station_category_station_category_idx  (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)\"\"                                                  Index Cond: (category_id = 1)\"\n\"Total runtime: 86165.936 ms\"", "msg_date": "Mon, 24 May 2010 22:54:11 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Random Page Cost and Planner" }, { "msg_contents": "Hi,\n\nI changed the date comparison to be based on year alone:\n\n extract(YEAR FROM sc.taken_start) >= 1900 AND\n extract(YEAR FROM sc.taken_end) <= 2009 AND\n\nThe indexes are now always used; if someone wants to explain why using the\nnumbers works (a constant) but using a date (another constant?) does not\nwork, I'd really appreciate it.\n\nThanks again, everybody, for your time and help.\n\nDave\n\nHi,I changed the date comparison to be based on year alone:    extract(YEAR FROM sc.taken_start) >= 1900 AND    extract(YEAR FROM sc.taken_end) <= 2009 AND The indexes are now always used; if someone wants to explain why using the numbers works (a constant) but using a date (another constant?) does not work, I'd really appreciate it.\nThanks again, everybody, for your time and help.Dave", "msg_date": "Mon, 24 May 2010 23:41:29 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "David Jarvis <[email protected]> wrote:\n \n> The value for *random_page_cost* was at 2.0; reducing it to 1.1\n> had a massive performance improvement (nearly an order of\n> magnitude). While the results now return in 5 seconds (down from\n> ~85 seconds)\n \nIt sounds as though the active portion of your database is pretty\nmuch cached in RAM. True?\n \n> problematic lines remain. Bumping the query's end date by a single\n> year causes a full table scan\n \n> How do I persuade PostgreSQL to use the indexes, regardless of\n> number of years between the two dates?\n \nI don't know about \"regardless of the number of years\" -- but you\ncan make such plans look more attractive by cutting both\nrandom_page_cost and seq_page_cost. Some highly cached loads\nperform well with these set to equal values on the order of 0.1 to\n0.001.\n \n> (A full table scan against 43 million rows is probably not the\n> best plan.)\n \nIt would tend to be better than random access to 43 million rows, at\nleast if you need to go to disk for many of them.\n \n-Kevin\n", "msg_date": "Tue, 25 May 2010 13:28:41 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Kevin.\n\nThanks for the response.\n\nIt sounds as though the active portion of your database is pretty\n> much cached in RAM. 
True?\n>\n\nI would not have thought so; there are seven tables, each with 39 to 43\nmillion rows as:\n\nCREATE TABLE climate.measurement (\n id bigserial NOT NULL,\n taken date NOT NULL,\n station_id integer NOT NULL,\n amount numeric(8,2) NOT NULL,\n flag character varying(1) NOT NULL DEFAULT ' '::character varying,\n category_id smallint NOT NULL,\n}\n\nThe machine has 4GB of RAM, donated to PG as follows:\n\n*shared_buffers = 1GB\ntemp_buffers = 32MB\nwork_mem = 32MB\nmaintenance_work_mem = 64MB\neffective_cache_size = 256MB\n*\n\nEverything else is at its default value. The kernel:\n\n$ cat /proc/sys/kernel/shmmax\n2147483648\n\nTwo postgres processes are enjoying the (virtual) space:\n\n2619 postgres 20 0 *1126m* 524m 520m S 0 13.2 0:09.41 postgres\n2668 postgres 20 0 *1124m* 302m 298m S 0 7.6 0:04.35 postgres\n\ncan make such plans look more attractive by cutting both\n> random_page_cost and seq_page_cost. Some highly cached loads\n> perform well with these set to equal values on the order of 0.1 to\n> 0.001.\n>\n\nI tried this: no improvement.\n\nIt would tend to be better than random access to 43 million rows, at\n> least if you need to go to disk for many of them.\n>\n\nI thought that the index would take care of this? The index has been set to\nthe unique key of:\n\nstation_id, taken, and category_id (the filter for child tables).\n\nEach time I scan for data, I always provide the station identifier and its\ndate range. The date range is obtained from another table (given the same\nstation_id).\n\nI will be trying various other indexes. I've noticed now that sometimes the\nresults are very quick and sometimes very slow. For the query I posted, it\nwould be great to know what would be the best indexes to use. I have a\nsuspicion that that's going to require trial and many errors.\n\nDave\n\nHi, Kevin.Thanks for the response.\nIt sounds as though the active portion of your database is pretty\n\n\n\nmuch cached in RAM.  True?I would not have thought so; there are seven tables, each with 39 to 43 million rows as:CREATE TABLE climate.measurement (\n\n\n  id bigserial NOT NULL,  taken date NOT NULL,  station_id integer NOT NULL,  amount numeric(8,2) NOT NULL,  flag character varying(1) NOT NULL DEFAULT ' '::character varying,  category_id smallint NOT NULL,\n\n\n}The machine has 4GB of RAM, donated to PG as follows:shared_buffers = 1GBtemp_buffers = 32MBwork_mem = 32MBmaintenance_work_mem = 64MB\n\neffective_cache_size = 256MBEverything else is at its default value. The kernel:$ cat /proc/sys/kernel/shmmax2147483648Two postgres processes are enjoying the (virtual) space:\n2619 postgres  20   0 1126m 524m 520m S    0 13.2   0:09.41 postgres2668 postgres  20   0 1124m 302m 298m S    0  7.6   0:04.35 postgres\n\ncan make such plans look more attractive by cutting both\nrandom_page_cost and seq_page_cost.  Some highly cached loads\nperform well with these set to equal values on the order of 0.1 to\n0.001.I tried this: no improvement.\nIt would tend to be better than random access to 43 million rows, at\nleast if you need to go to disk for many of them.\nI thought that the index would take care of this? The index has been set to the unique key of:station_id, taken, and category_id (the filter for child tables).\nEach time I scan for data, I always provide the station identifier and its date range. The date range is obtained from another table (given the same station_id).I will be trying various other indexes. 
I've noticed now that sometimes the results are very quick and sometimes very slow. For the query I posted, it would be great to know what would be the best indexes to use. I have a suspicion that that's going to require trial and many errors.\nDave", "msg_date": "Tue, 25 May 2010 16:26:52 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "David Jarvis <[email protected]> writes:\n>> It sounds as though the active portion of your database is pretty\n>> much cached in RAM. True?\n\n> I would not have thought so; there are seven tables, each with 39 to 43\n> million rows as: [ perhaps 64 bytes per row ]\n> The machine has 4GB of RAM, donated to PG as follows:\n\nWell, the thing you need to be *really* wary of is setting the cost\nparameters to make isolated tests look good. When you repeat a\nparticular test case multiple times, all times after the first probably\nare fully cached ... but if your DB doesn't actually fit in RAM, that\nmight not be too representative of what will happen under load.\nSo if you want to cut the xxx_page_cost settings some more, pay close\nattention to what happens to average response time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 May 2010 20:24:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner " }, { "msg_contents": "On Tue, May 25, 2010 at 4:26 PM, David Jarvis <[email protected]> wrote:\n> shared_buffers = 1GB\n> temp_buffers = 32MB\n> work_mem = 32MB\n> maintenance_work_mem = 64MB\n> effective_cache_size = 256MB\n\nShouldn't effective_cache_size be significantly larger?\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Tue, 25 May 2010 17:56:58 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Tom.\n\nYes, that is what happened, making the tests rather meaningless, and giving\nme the false impression that the indexes were being used. They were but only\nbecause of cached results. When multiple users making different queries, the\nperformance will return to ~80s per query.\n\nI also tried Kevin's suggestion, which had no noticeable effect:\neffective_cache_size = 512MB\n\nThat said, when using the following condition, the query is fast (1 second):\n\n extract(YEAR FROM sc.taken_start) >= 1963 AND\n extract(YEAR FROM sc.taken_end) <= 2009 AND\n\n\" -> Index Scan using measurement_013_stc_idx on\nmeasurement_013 m (cost=0.00..511.00 rows=511 width=15) (actual\ntime=0.018..3.601 rows=3356 loops=104)\"\n\" Index Cond: ((m.station_id = sc.station_id) AND\n(m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id\n= 7))\"\n\nThis condition makes it slow (13 seconds on first run, 8 seconds\nthereafter):\n\n* extract(YEAR FROM sc.taken_start) >= 1900 AND\n* extract(YEAR FROM sc.taken_end) <= 2009 AND\n\n\" Filter: (category_id = 7)\"\n\" -> Seq Scan on measurement_013 m\n(cost=0.00..359704.80 rows=18118464 width=15) (actual time=0.008..4025.692\nrows=18118395 loops=1)\"\n\nAt this point, I'm tempted to write a stored procedure that iterates over\neach station category for all the years of each station. My guess is that\nthe planner's estimate for the number of rows that will be returned by\n*extract(YEAR\nFROM sc.taken_start) >= 1900* is incorrect and so it chooses a full table\nscan for all rows. 
Even though the lower bound appears to be a constant\nvalue of the 1900, the average year a station started collecting data was 44\nyears ago (1965), and did so for an average of 21.4 years.\n\nThe part I am having trouble with is convincing PG to use the index for the\nstation ID and the date range for when the station was active. Each station\nhas a unique ID; the data in the measurement table is ordered by measurement\ndate then by station.\n\nShould I add a clustered index by station then by date?\n\nAny other suggestions are very much appreciated.\n\nDave\n\nHi, Tom.Yes, that is what happened, making the tests rather meaningless, and giving me the false impression that the indexes were being used. They were but only because of cached results. When multiple users making different queries, the performance will return to ~80s per query.\nI also tried Kevin's suggestion, which had no noticeable effect:effective_cache_size = 512MBThat said, when using the following condition, the query is fast (1 second):\n    extract(YEAR FROM sc.taken_start) >= 1963 AND    extract(YEAR FROM sc.taken_end) <= 2009 AND \"                    ->  Index Scan using measurement_013_stc_idx on measurement_013 m  (cost=0.00..511.00 rows=511 width=15) (actual time=0.018..3.601 rows=3356 loops=104)\"\n\"                          Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 7))\"This condition makes it slow (13 seconds on first run, 8 seconds thereafter):\n    extract(YEAR FROM sc.taken_start) >= 1900 AND    extract(YEAR FROM sc.taken_end) <= 2009 AND \"                          Filter: (category_id = 7)\"\"                    ->  Seq Scan on measurement_013 m  (cost=0.00..359704.80 rows=18118464 width=15) (actual time=0.008..4025.692 rows=18118395 loops=1)\"\nAt this point, I'm tempted to write a stored procedure that iterates over each station category for all the years of each station. My guess is that the planner's estimate for the number of rows that will be returned by extract(YEAR FROM sc.taken_start) >= 1900 is incorrect and so it chooses a full table scan for all rows. Even though the lower bound appears to be a constant value of the 1900, the average year a station started collecting data was 44 years ago (1965), and did so for an average of 21.4 years.\nThe part I am having trouble with is convincing PG to use the index for the station ID and the date range for when the station was active. Each station has a unique ID; the data in the measurement table is ordered by measurement date then by station.\nShould I add a clustered index by station then by date?Any other suggestions are very much appreciated.Dave", "msg_date": "Tue, 25 May 2010 20:50:09 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Rob.\n\nI tried bumping the effective_cache_size. It made no difference.\n\nMy latest attempt at forcing PostgreSQL to use the indexes involved two\nloops: one to loop over the stations, the other to extract the station data\nfrom the measurement table. The outer loop executes in 1.5 seconds. 
The\ninner loop does a full table scan for each record in the outer loop:\n\n FOR station IN\n SELECT\n sc.station_id,\n sc.taken_start,\n sc.taken_end\n FROM\n climate.city c,\n climate.station s,\n climate.station_category sc\n WHERE\n c.id = city_id AND\n earth_distance(\n ll_to_earth(c.latitude_decimal,c.longitude_decimal),\n ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <=\nradius AND\n s.elevation BETWEEN elevation1 AND elevation2 AND\n s.applicable AND\n sc.station_id = s.id AND\n sc.category_id = category_id AND\n extract(YEAR FROM sc.taken_start) >= year1 AND\n extract(YEAR FROM sc.taken_end) <= year2\n ORDER BY\n sc.station_id\n LOOP\n RAISE NOTICE 'B.1. % % %', station.station_id, station.taken_start,\nstation.taken_end;\n\n FOR measure IN\n SELECT\n extract(YEAR FROM m.taken) AS year,\n avg(m.amount) AS amount\n FROM\n climate.measurement m\n WHERE\n* m.station_id = station.station_id AND\n m.taken BETWEEN station.taken_start AND station.taken_end AND\n m.category_id = category_id\n* GROUP BY\n extract(YEAR FROM m.taken)\n LOOP\n RAISE NOTICE ' B.2. % %', measure.year, measure.amount;\n END LOOP;\n END LOOP;\n\nI thought that the bold lines would have evoked index use. The values used\nfor the inner query:\n\nNOTICE: B.1. 754 1980-08-01 2001-11-30\n\nWhen I run the query manually, using constants, it executes in ~25\nmilliseconds:\n\nSELECT\n extract(YEAR FROM m.taken) AS year,\n avg(m.amount) AS amount\nFROM\n climate.measurement m\nWHERE\n m.station_id = 754 AND\n m.taken BETWEEN '1980-08-01'::date AND '2001-11-30'::date AND\n m.category_id = 7\nGROUP BY\n extract(YEAR FROM m.taken)\n\nWith 106 rows it should execute in ~2.65 seconds, which is better than the 5\nseconds I get when everything is cached and a tremendous improvement over\nthe ~85 seconds from cold.\n\nI do not understand why the below query uses a full table scan (executes in\n~13 seconds):\n\nSELECT\n extract(YEAR FROM m.taken) AS year,\n avg(m.amount) AS amount\nFROM\n climate.measurement m\nWHERE\n* m.station_id = station.station_id AND*\n* m.taken BETWEEN station.taken_start AND station.taken_end AND*\n* m.category_id = category_id*\nGROUP BY\n extract(YEAR FROM m.taken)\n\nMoreover, what can I do to solve the problem?\n\nThanks again!\n\nDave\n\nHi, Rob.I tried bumping the effective_cache_size. It made no difference.My\nlatest attempt at forcing PostgreSQL to use the indexes involved two loops: one to loop over the stations, the\nother to extract the station data from the measurement table. The outer loop\nexecutes in 1.5 seconds. The inner loop does a full table scan for each\nrecord in the outer loop:\n  FOR station IN    SELECT \n      sc.station_id,      sc.taken_start,\n      sc.taken_end    FROM       climate.city c, \n      climate.station s,       climate.station_category sc\n    WHERE       c.id = city_id AND \n      earth_distance(        ll_to_earth(c.latitude_decimal,c.longitude_decimal),\n        ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= radius AND       s.elevation BETWEEN elevation1 AND elevation2 AND\n      s.applicable AND      sc.station_id = s.id AND \n      sc.category_id = category_id AND       extract(YEAR FROM sc.taken_start) >= year1 AND\n      extract(YEAR FROM sc.taken_end) <= year2    ORDER BY\n      sc.station_id  LOOP    RAISE NOTICE 'B.1. 
% % %', station.station_id, station.taken_start, station.taken_end;\n        FOR measure IN      SELECT\n        extract(YEAR FROM m.taken) AS year,        avg(m.amount) AS amount\n      FROM        climate.measurement m\n      WHERE        m.station_id = station.station_id AND\n        m.taken BETWEEN station.taken_start AND station.taken_end AND        m.category_id = category_id\n      GROUP BY        extract(YEAR FROM m.taken)\n    LOOP      RAISE NOTICE '  B.2. % %', measure.year, measure.amount;\n    END LOOP;  END LOOP;I thought that the bold lines would have evoked index use. The values used for the inner query:\nNOTICE:  B.1. 754 1980-08-01 2001-11-30When I run the query manually, using constants, it executes in ~25 milliseconds:\nSELECT  extract(YEAR FROM m.taken) AS year,  avg(m.amount) AS amountFROM  climate.measurement mWHERE  m.station_id = 754 AND  m.taken BETWEEN '1980-08-01'::date AND '2001-11-30'::date AND\n\n  m.category_id = 7GROUP BY  extract(YEAR FROM m.taken)With 106 rows it should execute in ~2.65 seconds, which is better than the 5 seconds I get when everything is cached and a tremendous improvement over the ~85 seconds from cold.\nI do not understand why the below query uses a full table scan (executes in ~13 seconds):SELECT\n  extract(YEAR FROM m.taken) AS year,  avg(m.amount) AS amount\nFROM  climate.measurement mWHERE\n  m.station_id = station.station_id AND\n  m.taken BETWEEN station.taken_start AND station.taken_end AND  m.category_id = category_id\nGROUP BY  extract(YEAR FROM m.taken)\nMoreover, what can I do to solve the problem?Thanks again!Dave", "msg_date": "Tue, 25 May 2010 23:13:45 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "\nOn May 26, 2010, at 6:50 AM, David Jarvis wrote:\n> \n> That said, when using the following condition, the query is fast (1 second):\n> \n> extract(YEAR FROM sc.taken_start) >= 1963 AND\n> extract(YEAR FROM sc.taken_end) <= 2009 AND \n> \n> \" -> Index Scan using measurement_013_stc_idx on measurement_013 m (cost=0.00..511.00 rows=511 width=15) (actual time=0.018..3.601 rows=3356 loops=104)\"\n> \" Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 7))\"\n> \n> This condition makes it slow (13 seconds on first run, 8 seconds thereafter):\n> \n> extract(YEAR FROM sc.taken_start) >= 1900 AND\n> extract(YEAR FROM sc.taken_end) <= 2009 AND \n> \n> \" Filter: (category_id = 7)\"\n> \" -> Seq Scan on measurement_013 m (cost=0.00..359704.80 rows=18118464 width=15) (actual time=0.008..4025.692 rows=18118395 loops=1)\"\n> \n> At this point, I'm tempted to write a stored procedure that iterates over each station category for all the years of each station. My guess is that the planner's estimate for the number of rows that will be returned by extract(YEAR FROM sc.taken_start) >= 1900 is incorrect and so it chooses a full table scan for all rows. \n\nNope, it appears that the planner estimate is correct (it estimates 18118464 vs 18118464 real rows). I think what's happening there is that 18M rows is large enough part of the total table rows that it makes sense to scan it sequentially (eliminating random access costs). 
Try SET enable_seqsan = false and repeat the query - there is a chance that the index scan would be even slower.\n\n> The part I am having trouble with is convincing PG to use the index for the station ID and the date range for when the station was active. Each station has a unique ID; the data in the measurement table is ordered by measurement date then by station.\n> \n> Should I add a clustered index by station then by date?\n> \n> Any other suggestions are very much appreciated.\n\nIs it necessary to get the data as far as 1900 all the time ? Maybe there is a possibility to aggregate results\nfrom the past years if they are constant. \n\nRegards,\n--\nAlexey Klyukin <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\n \n\n", "msg_date": "Wed, 26 May 2010 12:00:18 +0300", "msg_from": "Alexey Klyukin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": " Current Folder: Sent \tSign Out\nCompose Addresses Folders Options Autoreply Search Help \nCalendar \tG-Hosting.cz\n\nMessage List | Delete | Edit Message as New\tPrevious | Next \tForward |\nForward as Attachment | Reply | Reply All\nSubject: \tRe: [PERFORM] Random Page Cost and Planner\nFrom: \[email protected]\nDate: \tWed, May 26, 2010 12:01 pm\nTo: \t\"David Jarvis\" <[email protected]>\nPriority: \tNormal\nOptions: \tView Full Header | View Printable Version | Download this as\na file | View Message details\n\n> Hi, Tom.\n>\n> Yes, that is what happened, making the tests rather meaningless, and\n> giving\n> me the false impression that the indexes were being used. They were but\n> only\n> because of cached results. When multiple users making different queries,\n> the\n> performance will return to ~80s per query.\n>\n> I also tried Kevin's suggestion, which had no noticeable effect:\n> effective_cache_size = 512MB\n>\n> That said, when using the following condition, the query is fast (1\n> second):\n>\n> extract(YEAR FROM sc.taken_start) >= 1963 AND\n> extract(YEAR FROM sc.taken_end) <= 2009 AND\n>\n> \" -> Index Scan using measurement_013_stc_idx on\n> measurement_013 m (cost=0.00..511.00 rows=511 width=15) (actual\n> time=0.018..3.601 rows=3356 loops=104)\"\n> \" Index Cond: ((m.station_id = sc.station_id) AND\n> (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND\n> (m.category_id\n> = 7))\"\n>\n> This condition makes it slow (13 seconds on first run, 8 seconds\n> thereafter):\n>\n> * extract(YEAR FROM sc.taken_start) >= 1900 AND\n> * extract(YEAR FROM sc.taken_end) <= 2009 AND\n>\n> \" Filter: (category_id = 7)\"\n> \" -> Seq Scan on measurement_013 m\n> (cost=0.00..359704.80 rows=18118464 width=15) (actual time=0.008..4025.692\n> rows=18118395 loops=1)\"\n>\n> At this point, I'm tempted to write a stored procedure that iterates over\n> each station category for all the years of each station. My guess is that\n> the planner's estimate for the number of rows that will be returned by\n> *extract(YEAR\n> FROM sc.taken_start) >= 1900* is incorrect and so it chooses a full table\n> scan for all rows. Even though the lower bound appears to be a constant\n> value of the 1900, the average year a station started collecting data was\n> 44\n> years ago (1965), and did so for an average of 21.4 years.\n>\n> The part I am having trouble with is convincing PG to use the index for\n> the\n> station ID and the date range for when the station was active. 
Each\n> station\n> has a unique ID; the data in the measurement table is ordered by\n> measurement\n> date then by station.\n\nWell, don't forget indexes may not be the best way to evaluate the query -\nif the selectivity is low (the query returns a large portion of the table)\nthe sequential scan is actually faster. The problem is that using an index means\nyou have to read the index blocks too, and then the table blocks, and this\nis actually random access. So your belief that thanks to using indexes the\nquery will run faster could be false.\n\nAnd this is what happens in the queries above - the first query covers\nyears 1963-2009, while the second one covers 1900-2009. Given the fact\nthis table contains ~40m rows, the first query returns about 0.01% (3k\nrows) while the second one returns almost 50% of the data (18m rows). So I\ndoubt this might be improved using an index ...\n\nBut you can try that by setting enable_seqscan=off or proper setting of\nthe random_page_cost / seq_page_cost variables (so that the plan with\nindexes is cheaper than the sequential scan). You can do that in the\nsession (e.g. use SET enable_seqscan=off) so that you won't harm other\nsessions.\n\n> Should I add a clustered index by station then by date?\n>\n> Any other suggestions are very much appreciated.\n\nWell, the only thing that crossed my mind is partitioning with properly\ndefined constraints and constraint_exclusion=on. I'd recommend partitioning\nby time (each year a separate partition) but you'll have to investigate\nthat on your own (depends on your use-cases).\n\nBTW the effective_cache_size mentioned in the previous posts is just an\n'information parameter' - it does not increase the amount of memory\nallocated by PostgreSQL. It merely informs PostgreSQL of the expected disk\ncache size maintained by the OS (Linux), so that PostgreSQL may estimate\nthe chance that the requested data are actually cached (and won't be read\nfrom the disk).\n\nregards\nTomas\n\n\n\n", "msg_date": "Wed, 26 May 2010 12:19:28 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "David Jarvis <[email protected]> wrote:\n \n>> It sounds as though the active portion of your database is pretty\n>> much cached in RAM. True?\n \n> I would not have thought so; there are seven tables, each with 39\n> to 43 million rows\n \n> The machine has 4GB of RAM\n \nIn that case, modifying seq_page_cost or setting random_page_cost\nbelow something in the range of 1.5 to 2 is probably not going to be\na good choice for the mix as a whole.\n \n> effective_cache_size = 256MB\n \nThis should probably be set to something on the order of 3GB. This\nwill help the optimizer make more intelligent choices about when use\nof the index will be a win.\n \n>> It would tend to be better than random access to 43 million rows,\n>> at least if you need to go to disk for many of them.\n> \n> I thought that the index would take care of this?\n \nWhen the index can limit the number of rows to a fraction of the 43\nmillion rows, using it is a win. The trick is to accurately model\nthe relative costs of different aspects of running the query, so\nthat when the various plans are compared, the one which looks the\ncheapest actually *is*. Attempting to force any particular plan\nthrough other means is risky.\n \n> I will be trying various other indexes. I've noticed now that\n> sometimes the results are very quick and sometimes very slow. 
For\n> the query I posted, it would be great to know what would be the\n> best indexes to use. I have a suspicion that that's going to\n> require trial and many errors.\n \nYeah, there's no substitute for testing your actual software against\nthe actual data. Be careful, though -- as previously mentioned\ncaching can easily distort results, particularly when you run the\nsame query, all by itself (with no competing queries) multiple\ntimes. You'll get your best information if you can simulate a\nmore-or-less realistic load, and try that with various settings and\nindexes. The cache turnover and resource contention involved in\nproduction can influence performance, and are hard to estimate any\nother way.\n \n-Kevin\n", "msg_date": "Wed, 26 May 2010 10:55:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Alexey.\n\nIs it necessary to get the data as far as 1900 all the time ? Maybe there is\n> a possibility to aggregate results from the past years if they are constant.\n>\n\nThis I have done. I created another table (station_category) that associates\nstations with when they started to take measurements and when they stopped\n(based on the data in the measurement table). For example:\n\nstation_id; category_id; taken_start; taken_end\n1;4;\"1984-07-01\";\"1996-11-30\"\n1;5;\"1984-07-01\";\"1996-11-30\"\n1;6;\"1984-07-01\";\"1996-11-10\"\n1;7;\"1984-07-01\";\"1996-10-31\"\n\nThis means that station 1 has data for categories 4 through 7. The\nmeasurement table returns 3865 rows for station 1 and category 7 (this uses\nan index and took 7 seconds cold):\n\nstation_id; taken; amount\n1;\"1984-07-01\";0.00\n1;\"1984-07-02\";0.00\n1;\"1984-07-03\";0.00\n1;\"1984-07-04\";0.00\n\nThe station_category table is basically another index.\n\nWould explicitly sorting the measurement table (273M rows) by station then\nby date help?\n\nDave\n\nHi, Alexey.Is it necessary to get the data as far as 1900 all the time ? Maybe there is a possibility to aggregate results\nfrom the past years if they are constant.This I have done. I created another table (station_category) that associates stations with when they started to take measurements and when they stopped (based on the data in the measurement table). For example:\nstation_id; category_id; taken_start; taken_end1;4;\"1984-07-01\";\"1996-11-30\"1;5;\"1984-07-01\";\"1996-11-30\"1;6;\"1984-07-01\";\"1996-11-10\"\n1;7;\"1984-07-01\";\"1996-10-31\"This means that station 1 has data for categories 4 through 7. The measurement table returns 3865 rows for station 1 and category 7 (this uses an index and took 7 seconds cold):\nstation_id; taken; amount1;\"1984-07-01\";0.001;\"1984-07-02\";0.001;\"1984-07-03\";0.001;\"1984-07-04\";0.00The station_category table is basically another index.\nWould explicitly sorting the measurement table (273M rows) by station then by date help?Dave", "msg_date": "Wed, 26 May 2010 09:30:19 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "> Hi,\n>\n> And this is what happens in the queries above - the first query covers\n>> years 1963-2009, while the second one covers 1900-2009. Given the fact\n>> this table contains ~40m rows, the first query returns about 0.01% (3k\n>> rows) while the second one returns almost 50% of the data (18m rows). 
So\n>> I\n>> doubt this might be improved using an index ...\n>>\n>\n> I don't think that's what I'm doing.\n>\n> There are two tables involved: station_category (sc) and measurement (m).\n>\n> The first part of the query:\n>\n> extract(YEAR FROM sc.taken_start) >= 1900 AND\n> extract(YEAR FROM sc.taken_end) <= 2009 AND\n>\n> That is producing a limit on the station_category table. There are, as far\n> as I can tell, no stations that have been taking weather readings for 110\n> years. Most of them have a lifespan of 24 years. The above condition just\n> makes sure that I don't get data before 1900 or after 2009.\n>\n\n\nOK, I admit I'm a little bit condfused by the query, especially by these\nrows:\n\nsc.taken_start >= '1900-01-01'::date AND\nsc.taken_end <= '1996-12-31'::date AND\nm.taken BETWEEN sc.taken_start AND sc.taken_end AND\n\nWhich seems to me a little bit \"convoluted\". Well, I think I understand\nwhat that means - give me all stations for a given city, collecting the\ncategory of data at a certain time. But I'm afraid this makes the planning\nmuch more difficult, as the select from measurements depend on the data\nreturned by other parts of the query (rows from category).\n\nSee this http://explain.depesz.com/s/H1 and this\nhttp://explain.depesz.com/s/GGx\n\nI guess the planner is confused in the second case - believes it has to\nread a lot more data from the measurement table, and so chooses the\nsequential scan. The question is if this is the right decision (I believe\nit is not).\n\nHow many rows does the query return without the group by clause? About\n140000 in both cases, right?\n\n>> by time (each year a separate partition) but you'll have to investigate\n>> that on your own (depends on your use-cases).\n>>\n>\n> I cannot partition by time. First, there are 7 categories, which would\n> mean\n> 770 partitions if I did it by year -- 345000 rows per partition. This will\n> grow in the future. I have heard there are troubles with having lots of\n> child tables (too many files for the operating system). Second, the user\n> has\n> the ability to pick arbitrary day ranges for arbitrary year spans.\n>\n> There's a \"year wrapping\" issue that I won't explain because I never get\n> it\n> right the first time. ;-)\n\nOK, I haven't noticed the table is already partitioned by category_id and\nI didn't mean to partition by (taken, category_id) - that would produce a\nlot of partitions. Yes, that might cause problems related to number of\nfiles, but that's rather a filesystem related issue.\n\nI'd expect rather issues related to RULEs or triggers (not sure which of\nthem you use to redirect the data into partitions). But when partitioning\nby time (and not by category_id) the number of partitions will be much\nlower and you don't have to keep all of the rules active - all you need is\na rule for the current year (and maybe the next one).\n\nI'm not sure what you mean by 'year wrapping issue' but I think it might\nwork quite well - right not the problem is PostgreSQL decides to scan the\nwhole partition (all data for a given category_id).\n\nregards\nTomas\n\n", "msg_date": "Wed, 26 May 2010 20:16:28 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Kevin.\n\nbelow something in the range of 1.5 to 2 is probably not going to be\n> a good choice for the mix as a whole.\n>\n\nGood to know; thanks.\n\n\n> This should probably be set to something on the order of 3GB. 
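The per-session experiments mentioned above (Tomas's SET enable_seqscan / cost settings and Kevin's 3GB effective_cache_size figure) can be tried from psql without touching postgresql.conf or other sessions. A rough sketch only, with the station, date and category values borrowed from the earlier test query; the 3GB figure is the advice from this thread, not a measured value:

    SET effective_cache_size = '3GB';
    SET random_page_cost = 2.0;
    SET enable_seqscan = off;   -- for diagnosis only, not for production

    EXPLAIN ANALYZE
    SELECT extract(YEAR FROM m.taken) AS year, avg(m.amount) AS amount
      FROM climate.measurement m
     WHERE m.station_id = 754
       AND m.taken BETWEEN '1980-08-01'::date AND '2001-11-30'::date
       AND m.category_id = 7
     GROUP BY extract(YEAR FROM m.taken);

    RESET ALL;   -- return this session to the server defaults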
This\n> will help the optimizer make more intelligent choices about when use\n> of the index will be a win.\n>\n\nI'll try this.\n\n\n> times. You'll get your best information if you can simulate a\n> more-or-less realistic load, and try that with various settings and\n>\n\nI have no idea what a realistic load will be. The system is still in\ndevelopment and not open to the general public. I also don't know how much\npublicity the system will receive when finished. Could be a few hundred\nhits, could be over ten thousand.\n\nI want the system to be ready for the latter case, which means it needs to\nreturn data for many different query parameters (date span, elevation, year,\nradius, etc.) in under two seconds.\n\n\n> indexes. The cache turnover and resource contention involved in\n> production can influence performance, and are hard to estimate any\n> other way.\n>\n\nAnother person suggested to take a look at the data.\n\nI ran a query to see if it makes sense to split the data by year. The\ntrouble is that there are 110 years and 7 categories. The data is already\nfiltered into child tables by category (that is logical because reporting on\ntwo different categories is nonsensical -- it is meaningless to report on\nsnow depth *and* temperature: we already know it needs to be cold for snow).\n\ncount;decade start; decade end; min date; max date\n3088;1990;2000;\"1990-01-01\";\"2009-12-31\"\n2925;1980;2000;\"1980-01-01\";\"2009-12-31\"\n2752;2000;2000;\"2000-01-01\";\"2009-12-31\"\n2487;1970;1970;\"1970-01-01\";\"1979-12-31\"\n2391;1980;1990;\"1980-02-01\";\"1999-12-31\"\n2221;1980;1980;\"1980-01-01\";\"1989-12-31\"\n1934;1960;2000;\"1960-01-01\";\"2009-12-31\"\n1822;1960;1960;\"1960-01-01\";\"1969-12-31\"\n1659;1970;1980;\"1970-01-01\";\"1989-12-31\"\n1587;1960;1970;\"1960-01-01\";\"1979-12-31\"\n1524;1970;2000;\"1970-01-01\";\"2009-12-31\"\n\nThe majority of data collected by weather stations is between 1960 and 2009,\nwhich makes sense because transistor technology would have made for\n(relatively) inexpensive automated monitoring stations. Or maybe there were\nmore people and more taxes collected thus a bigger budget for weather study.\nEither way. ;-)\n\nThe point is the top three decades (1990, 1980, 2000) have the most data,\ngiving me a few options:\n\n - Split the seven tables twice more: before 1960 and after 1960.\n - Split the seven tables by decade.\n\nThe first case gives 14 tables. The second case gives 102 tables (at 2.5M\nrows per table) as there are about 17 decades in total. This seems like a\nmanageable number of tables as the data might eventually span 22 decades,\nwhich would be 132 tables.\n\nEven though the users will be selecting 1900 to 2009, most of the stations\nthemselves will be within the 1960 - 2009 range, with the majority of those\nactive between 1980 and 2009.\n\nWould splitting by decade improve the speed?\n\nThank you very much.\n\nDave\n\nHi, Kevin.\nbelow something in the range of 1.5 to 2 is probably not going to be\na good choice for the mix as a whole.Good to know; thanks. \n\nThis should probably be set to something on the order of 3GB.  This\nwill help the optimizer make more intelligent choices about when use\nof the index will be a win.I'll try this. times.  You'll get your best information if you can simulate a\n\nmore-or-less realistic load, and try that with various settings andI have no idea what a realistic load will be. The system is still in development and not open to the general public. 
I also don't know how much publicity the system will receive when finished. Could be a few hundred hits, could be over ten thousand.\nI want the system to be ready for the latter case, which means it needs to return data for many different query parameters (date span, elevation, year, radius, etc.) in under two seconds. \n\nindexes.  The cache turnover and resource contention involved in\nproduction can influence performance, and are hard to estimate any\nother way.Another person suggested to take a look at the data.I ran a query to see if it makes sense to split the data by year. The trouble is that there are 110 years and 7 categories. The data is already filtered into child tables by category (that is logical because reporting on two different categories is nonsensical -- it is meaningless to report on snow depth and temperature: we already know it needs to be cold for snow).\ncount;decade start; decade end; min date; max date3088;1990;2000;\"1990-01-01\";\"2009-12-31\"2925;1980;2000;\"1980-01-01\";\"2009-12-31\"2752;2000;2000;\"2000-01-01\";\"2009-12-31\"\n2487;1970;1970;\"1970-01-01\";\"1979-12-31\"2391;1980;1990;\"1980-02-01\";\"1999-12-31\"2221;1980;1980;\"1980-01-01\";\"1989-12-31\"1934;1960;2000;\"1960-01-01\";\"2009-12-31\"\n1822;1960;1960;\"1960-01-01\";\"1969-12-31\"1659;1970;1980;\"1970-01-01\";\"1989-12-31\"1587;1960;1970;\"1960-01-01\";\"1979-12-31\"1524;1970;2000;\"1970-01-01\";\"2009-12-31\"\nThe majority of data collected by weather stations is between 1960 and 2009, which makes sense because transistor technology would have made for (relatively) inexpensive automated monitoring stations. Or maybe there were more people and more taxes collected thus a bigger budget for weather study. Either way. ;-)\nThe point is the top three decades (1990, 1980, 2000) have the most data, giving me a few options:Split the seven tables twice more: before 1960 and after 1960.Split the seven tables by decade.\nThe first case gives 14 tables. The second case gives 102 tables (at 2.5M rows per table) as there are about 17 decades in total. This seems like a manageable number of tables as the data might eventually span 22 decades, which would be 132 tables.\nEven though the users will be selecting 1900 to 2009, most of the stations themselves will be within the 1960 - 2009 range, with the majority of those active between 1980 and 2009.Would splitting by decade improve the speed?\nThank you very much.Dave", "msg_date": "Wed, 26 May 2010 11:26:53 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi,\n\nsc.taken_end <= '1996-12-31'::date AND\n> m.taken BETWEEN sc.taken_start AND sc.taken_end AND\n>\n> category of data at a certain time. But I'm afraid this makes the planning\n> much more difficult, as the select from measurements depend on the data\n> returned by other parts of the query (rows from category).\n>\n\nRight. Users can select 1900 - 2009. Station data hardly ever spans that\nrange.\n\nThe *station_category* is used to create a unique key into the measurement\ndata for every station: station_id, category_id, and taken_start. 
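For readers following along: the index that the earlier plans rely on (measurement_013_stc_idx) appears to be a composite over roughly these columns. Its exact definition was not posted, so the DDL below is a guess; with station_category supplying station_id, category_id and the taken range, the probe into a measurement partition becomes a bounded index range scan. The station 1 / category 7 dates are taken from the sample rows quoted earlier in the thread:

    -- Assumed shape of the station/taken/category index, one per child table:
    CREATE INDEX measurement_013_stc_idx
        ON climate.measurement_013 (station_id, taken, category_id);

    -- Example probe over one station's active range:
    EXPLAIN
    SELECT m.taken, m.amount
      FROM climate.measurement_013 m
     WHERE m.station_id = 1
       AND m.category_id = 7
       AND m.taken BETWEEN '1984-07-01' AND '1996-10-31';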
The\nmeasurement data should be contiguous until taken_end.\n\nI thought that that combination would be a pointer to the exact spot in the\nmeasurement table where the data starts, which should be ridiculously fast\nto find.\n\nSee this http://explain.depesz.com/s/H1 and this\n> http://explain.depesz.com/s/GGx\n>\n\nI was getting some red lines when I looked at a different plan. It's a great\nsite.\n\nHow many rows does the query return without the group by clause? About\n> 140000 in both cases, right?\n>\n\nSELECT\n *\nFROM\n climate.measurement m\nWHERE\n m.station_id = 5148 AND\n m.taken BETWEEN '1900-08-01'::date AND '2009-12-31'::date AND\n m.category_id = 1\n\n5397 rows (10 seconds cold; 0.5 seconds hot); estimated too high by 2275\nrows?\n\nhttp://explain.depesz.com/s/uq\n\n OK, I haven't noticed the table is already partitioned by category_id and\n> I didn't mean to partition by (taken, category_id) - that would produce a\n> lot of partitions. Yes, that might cause problems related to number of\n> files, but that's rather a filesystem related issue.\n>\n\nConstrained as:\n\n CONSTRAINT measurement_013_category_id_ck CHECK (category_id = 7)\n\n\n> I'd expect rather issues related to RULEs or triggers (not sure which of\n> them you use to redirect the data into partitions). But when partitioning\n>\n\nI created seven child tables of measurement. Each of these has a constraint\nby category_id. This makes it extremely fast to select the correct\npartition.\n\n\n> I'm not sure what you mean by 'year wrapping issue' but I think it might\n> work quite well - right not the problem is PostgreSQL decides to scan the\n> whole partition (all data for a given category_id).\n>\n\nI'll give it another try. :-)\n\n*Use Case #1*\nUser selects: Mar 22 to Dec 22\nUser selects: 1900 to 2009\n\nResult: Query should average *9 months* of climate data per year between Mar\n22 and Dec 22 of Year.\n\n*Use Case #2*\nUser selects: Dec 22 to Mar 22\nUser selects: 1900 to 2009\n\nResult: Query should average *3 months* of climate data per year between Dec\n22 of Year and Mar 22 of Year+1.\n\nSo if a user selects 1950 to *1960*:\n\n - first case should average between 1950 and *1960*; and\n - second case should average between 1950 and *1961*.\n\nDave\n\nHi,\nsc.taken_end <= '1996-12-31'::date AND\nm.taken BETWEEN sc.taken_start AND sc.taken_end AND\n\ncategory of data at a certain time. But I'm afraid this makes the planning\nmuch more difficult, as the select from measurements depend on the data\nreturned by other parts of the query (rows from category).Right. Users can select 1900 - 2009. Station data hardly ever spans that range.The station_category is used to create a unique key into the measurement data for every station: station_id, category_id, and taken_start. The measurement data should be contiguous until taken_end.\nI thought that that combination would be a pointer to the exact spot in the measurement table  where the data starts, which should be ridiculously fast to find.\n\n\nSee this http://explain.depesz.com/s/H1 and this\nhttp://explain.depesz.com/s/GGx\nI was getting some red lines when I looked at a different plan. It's a great site.\n\nHow many rows does the query return without the group by clause? 
About\n140000 in both cases, right?SELECT  *FROM  climate.measurement mWHERE  m.station_id = 5148 AND  m.taken BETWEEN '1900-08-01'::date AND '2009-12-31'::date AND\n  m.category_id = 15397 rows (10 seconds cold; 0.5 seconds hot); estimated too high by 2275 rows?http://explain.depesz.com/s/uq\n\n\nOK, I haven't noticed the table is already partitioned by category_id and\nI didn't mean to partition by (taken, category_id) - that would produce a\nlot of partitions. Yes, that might cause problems related to number of\nfiles, but that's rather a filesystem related issue.Constrained as:  CONSTRAINT measurement_013_category_id_ck CHECK (category_id = 7) \n\n\nI'd expect rather issues related to RULEs or triggers (not sure which of\nthem you use to redirect the data into partitions). But when partitioningI created seven child tables of measurement. Each of these has a constraint by category_id. This makes it extremely fast to select the correct partition.\n \n\nI'm not sure what you mean by 'year wrapping issue' but I think it might\nwork quite well - right not the problem is PostgreSQL decides to scan the\nwhole partition (all data for a given category_id).I'll give it another try. :-)Use Case #1User selects: Mar 22 to Dec 22User selects: 1900 to 2009Result: Query should average 9 months of climate data per year between Mar 22 and Dec 22 of Year.\nUse Case #2\nUser selects: Dec 22 to Mar 22User selects: 1900 to 2009Result: Query should average 3 months of climate data per year between Dec 22 of Year and Mar 22 of Year+1.So if a user selects 1950 to 1960:\nfirst case should average between 1950 and 1960; andsecond case should average between 1950 and 1961.Dave", "msg_date": "Wed, 26 May 2010 11:55:37 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "I was told to try OVERLAPS instead of checking years. The query is now:\n\n SELECT\n extract(YEAR FROM m.taken) AS year,\n avg(m.amount) as amount\n FROM\n climate.city c,\n climate.station s,\n climate.station_category sc,\n climate.measurement m\n WHERE\n c.id = 5148 AND\n earth_distance(\n ll_to_earth(c.latitude_decimal,c.longitude_decimal),\n ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND\n s.elevation BETWEEN 0 AND 3000 AND\n s.applicable = TRUE AND\n sc.station_id = s.id AND\n sc.category_id = 7 AND\n* (sc.taken_start, sc.taken_end) OVERLAPS ('1900-01-01'::date,\n'2009-12-31'::date) AND*\n m.station_id = s.id AND\n m.taken BETWEEN sc.taken_start AND sc.taken_end AND\n m.category_id = sc.category_id\n GROUP BY\n extract(YEAR FROM m.taken)\n ORDER BY\n extract(YEAR FROM m.taken)\n\n25 seconds from cold, no full table scan:\n\nhttp://explain.depesz.com/s/VV5\n\nMuch better than 85 seconds, but still an order of magnitude too slow.\n\nI was thinking of changing the *station_category* table to use the\nmeasurement table's primary key, instead of keying off date, as converting\nthe dates for comparison strikes me as a bit of overhead. Also, I can get\nremove the \"/ 1000\" by changing the Earth's radius to kilometres (from\nmetres), but a constant division shouldn't be significant.\n\nI really appreciate all your patience and help over the last sixteen days\ntrying to optimize this database and these queries.\n\nDave\n\nI was told to try OVERLAPS instead of checking years. 
The query is now:  SELECT     extract(YEAR FROM m.taken) AS year,    avg(m.amount) as amount\n  FROM     climate.city c,     climate.station s,     climate.station_category sc,     climate.measurement m  WHERE     c.id = 5148 AND     earth_distance(      ll_to_earth(c.latitude_decimal,c.longitude_decimal),\n      ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND     s.elevation BETWEEN 0 AND 3000 AND     s.applicable = TRUE AND    sc.station_id = s.id AND     sc.category_id = 7 AND \n    (sc.taken_start, sc.taken_end) OVERLAPS ('1900-01-01'::date, '2009-12-31'::date) AND    m.station_id = s.id AND    m.taken BETWEEN sc.taken_start AND sc.taken_end AND\n    m.category_id = sc.category_id  GROUP BY    extract(YEAR FROM m.taken)  ORDER BY    extract(YEAR FROM m.taken)25 seconds from cold, no full table scan:\nhttp://explain.depesz.com/s/VV5Much better than 85 seconds, but still an order of magnitude too slow.I was thinking of changing the station_category table to use the measurement table's primary key, instead of keying off date, as converting the dates for comparison strikes me as a bit of overhead. Also, I can get remove the \"/ 1000\" by changing the Earth's radius to kilometres (from metres), but a constant division shouldn't be significant.\nI really appreciate all your patience and help over the last sixteen days trying to optimize this database and these queries.Dave", "msg_date": "Wed, 26 May 2010 13:21:19 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Bryan.\n\nI was just about to reply to the thread, thanks for asking. Clustering was\nkey. After rebooting the machine (just to make sure absolutely nothing was\ncached), I immediately ran a report on Toronto: 5.25 seconds!\n\nHere's what I did:\n\n 1. Created a new set of tables that matched the old set, with statistics\n of 1000 on the station and taken (date) columns.\n 2. Inserted the data from the old hierarchy into the new set, ordered by\n station id then by date (same seven child tables as before: one per\n category).\n - I wanted to ensure a strong correlation between primary key and\n station id.\n 3. Added three indexes per table: (a) station id; (b) date taken; and\n (c) station-taken-category.\n 4. Set the station-taken-category index as CLUSTER.\n 5. Vacuumed the new tables.\n 6. Dropped the old tables.\n 7. Set the following configuration values:\n - shared_buffers = 1GB\n - temp_buffers = 32MB\n - work_mem = 32MB\n - maintenance_work_mem = 64MB\n - seq_page_cost = 1.0\n - random_page_cost = 2.0\n - cpu_index_tuple_cost = 0.001\n - effective_cache_size = 512MB\n\nI ran a few more reports (no reboots, but reading vastly different data\nsets):\n\n - Vancouver: 4.2s\n - Yellowknife: 1.7s\n - Montreal: 6.5s\n - Trois-Riviers: 2.8s\n\nNo full table scans. I imagine some indexes are not strictly necessary and\nwill test to see which can be removed (my guess: the station and taken\nindexes). The problem was that the station ids were scattered and so\nPostgreSQL presumed a full table scan would be faster.\n\nPhysically ordering the data by station ids triggers index use every time.\n\nNext week's hardware upgrade should halve those times -- unless anyone has\nfurther suggestions to squeeze more performance out of PG. ;-)\n\nDave\n\nHi, Bryan.I was just about to reply to the thread, thanks for asking. Clustering was key. 
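A rough DDL transcription of the seven steps just listed, for reference. The actual statements were not posted, so the table, column and index names are assumptions based on the rest of the thread. Note that the author loaded the data pre-sorted and only marked the index with CLUSTER ON; running CLUSTER is the equivalent way to reorder a table that is already loaded:

    ALTER TABLE climate.measurement_013
        ALTER COLUMN station_id SET STATISTICS 1000,
        ALTER COLUMN taken      SET STATISTICS 1000;

    CREATE INDEX measurement_013_s_idx   ON climate.measurement_013 (station_id);
    CREATE INDEX measurement_013_t_idx   ON climate.measurement_013 (taken);
    CREATE INDEX measurement_013_stc_idx ON climate.measurement_013 (station_id, taken, category_id);

    ALTER TABLE climate.measurement_013 CLUSTER ON measurement_013_stc_idx;
    CLUSTER climate.measurement_013;         -- rewrites the table in index order
    VACUUM ANALYZE climate.measurement_013;  -- refresh statistics afterwards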
After rebooting the machine (just to make sure absolutely\nnothing was cached), I immediately ran a report on Toronto: 5.25 seconds!Here's what I did:Created a new set of tables that matched the old set, with statistics of 1000 on the station and taken (date) columns.\nInserted\nthe data from the old hierarchy into the new set, ordered by station id\nthen by date (same seven child tables as before: one per category).I wanted to ensure a strong correlation between primary key and station id.\nAdded three indexes per table: (a) station id; (b) date taken; and (c) station-taken-category.Set the station-taken-category index as CLUSTER.Vacuumed the new tables.Dropped the old tables.\nSet the following configuration values:shared_buffers = 1GBtemp_buffers = 32MBwork_mem = 32MBmaintenance_work_mem = 64MBseq_page_cost = 1.0random_page_cost = 2.0\ncpu_index_tuple_cost = 0.001effective_cache_size = 512MB\nI ran a few more reports (no reboots, but reading vastly different data sets):\nVancouver: 4.2sYellowknife: 1.7sMontreal: 6.5sTrois-Riviers: 2.8s\nNo full table scans. I imagine some indexes are not strictly\nnecessary and will test to see which can be removed (my guess: the station and taken indexes). The problem was that the station ids\nwere scattered and so PostgreSQL presumed a full table scan would\nbe faster. Physically ordering the data by station ids triggers index use every time.Next week's hardware upgrade should halve those times -- unless anyone has further suggestions to squeeze more performance out of PG. ;-)\nDave", "msg_date": "Thu, 27 May 2010 00:43:10 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "2010/5/27 David Jarvis <[email protected]>:\n> Hi, Bryan.\n>\n> I was just about to reply to the thread, thanks for asking. Clustering was\n> key. After rebooting the machine (just to make sure absolutely nothing was\n> cached), I immediately ran a report on Toronto: 5.25 seconds!\n>\n> Here's what I did:\n>\n> Created a new set of tables that matched the old set, with statistics of\n> 1000 on the station and taken (date) columns.\n> Inserted the data from the old hierarchy into the new set, ordered by\n> station id then by date (same seven child tables as before: one per\n> category).\n>\n> I wanted to ensure a strong correlation between primary key and station id.\n>\n> Added three indexes per table: (a) station id; (b) date taken; and (c)\n> station-taken-category.\n> Set the station-taken-category index as CLUSTER.\n> Vacuumed the new tables.\n> Dropped the old tables.\n> Set the following configuration values:\n>\n> shared_buffers = 1GB\n> temp_buffers = 32MB\n> work_mem = 32MB\n> maintenance_work_mem = 64MB\n> seq_page_cost = 1.0\n> random_page_cost = 2.0\n> cpu_index_tuple_cost = 0.001\n> effective_cache_size = 512MB\n>\n> I ran a few more reports (no reboots, but reading vastly different data\n> sets):\n>\n> Vancouver: 4.2s\n> Yellowknife: 1.7s\n> Montreal: 6.5s\n> Trois-Riviers: 2.8s\n>\n> No full table scans. I imagine some indexes are not strictly necessary and\n> will test to see which can be removed (my guess: the station and taken\n> indexes). 
The problem was that the station ids were scattered and so\n> PostgreSQL presumed a full table scan would be faster.\n>\n> Physically ordering the data by station ids triggers index use every time.\n>\n> Next week's hardware upgrade should halve those times -- unless anyone has\n> further suggestions to squeeze more performance out of PG. ;-)\n\nI wonder what the plan will be if you replace sc.taken_* in :\nm.taken BETWEEN sc.taken_start AND sc.taken_end\nby values. It might help the planner...\n\nAlso, I'll consider explicit ordered join but I admit I haven't read\nthe whole thread (in particular the table size).\nHo, and I set statistics to a highter value for column category_id,\ntable station_category (seeing the same resquest and explain analyze\nwithout date in the query will help)\n\n\n>\n> Dave\n>\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 27 May 2010 10:03:09 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Salut, Cédric.\n\nI wonder what the plan will be if you replace sc.taken_* in :\n> m.taken BETWEEN sc.taken_start AND sc.taken_end\n> by values. It might help the planner...\n>\n\nThat is a fairly important restriction. I will try making it *\n(year1||'-01-01')::date*, but I have no constant value for it -- it is a\nuser-supplied parameter. And then there's the year wrapping problem, too,\nwhere the ending year will differ from the starting year in certain cases.\n(Like querying rows between Dec 22, 1900 to Mar 22 *1901* rather than Mar 22\n1900 to Dec 22 1900. The first query is the winter season and the second\nquery is all seasons except winter.)\n\n\n> Also, I'll consider explicit ordered join but I admit I haven't read\n> the whole thread (in particular the table size).\n>\n\nC'est une grosse table. Pres que 40 million lines; il y a sept tableau comme\nca.\n\nI tried an explicit join in the past: it did not help much. But that was\nbefore everything was running this fast, so now that the system performs\ndifferently, maybe it will help?\n\nDave\n\nSalut, Cédric.I wonder what the plan will be if you replace sc.taken_* in :\nm.taken BETWEEN sc.taken_start AND sc.taken_end\nby values. It might help the planner...That is a fairly important restriction. I will try making it (year1||'-01-01')::date, but I have no constant value for it -- it is a user-supplied parameter. And then there's the year wrapping problem, too, where the ending year will differ from the starting year in certain cases. (Like querying rows between Dec 22, 1900 to Mar 22 1901 rather than Mar 22 1900 to Dec 22 1900. The first query is the winter season and the second query is all seasons except winter.)\n \n\nAlso, I'll consider explicit ordered join but I admit I haven't read\nthe whole thread (in particular the table size).C'est une grosse table. Pres que 40 million lines; il y a sept tableau comme ca.I tried an explicit join in the past: it did not help much. 
But that was before everything was running this fast, so now that the system performs differently, maybe it will help?\nDave", "msg_date": "Thu, 27 May 2010 08:55:55 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "2010/5/27 David Jarvis <[email protected]>:\n> Salut, Cédric.\n>\n>> I wonder what the plan will be if you replace sc.taken_* in :\n>> m.taken BETWEEN sc.taken_start AND sc.taken_end\n>> by values. It might help the planner...\n>\n> That is a fairly important restriction. I will try making it\n> (year1||'-01-01')::date, but I have no constant value for it -- it is a\n> user-supplied parameter. And then there's the year wrapping problem, too,\n> where the ending year will differ from the starting year in certain cases.\n> (Like querying rows between Dec 22, 1900 to Mar 22 1901 rather than Mar 22\n> 1900 to Dec 22 1900. The first query is the winter season and the second\n> query is all seasons except winter.)\n\nAh, I though that you had a start and an end provided (so able to put\nthem in the query)\n\n>\n>>\n>> Also, I'll consider explicit ordered join but I admit I haven't read\n>> the whole thread (in particular the table size).\n>\n> C'est une grosse table. Pres que 40 million lines; il y a sept tableau comme\n> ca.\n>\n> I tried an explicit join in the past: it did not help much. But that was\n> before everything was running this fast, so now that the system performs\n> differently, maybe it will help?\n\nyes. the documentation is fine for this topic :\nhttp://www.postgresql.org/docs/8.4/interactive/explicit-joins.html\nConsider the parameter to explicit join order (you can set it per sql session).\n\nYou know your data and know what are the tables with less results to\njoin first. ;)\n\n>\n> Dave\n>\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 27 May 2010 20:28:36 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Agree with Tom on his point about avoidance of cost param adjustments to fit\nspecific test cases.\nA few suggestions...as I assume you own this database...\n- check out pg_statio_user_tables - optimize your cache hit ratio on blocks\nread...different time durations... pg_stat_bgwriter (read from a script or\nsomething and snapshot)\n- pg_buffercache in contrib/\n- /proc/meminfo on linux\n- find out exactly what is going on with your kernel buffer cache (size, how\nit is buffering) and if your controller or drive is using a read ahead\ncache.\n- might want to play around with partial indexes vs. and/or range\npartitioning with exclusion constraints, etc.\n- define I/O characteristics of the dataset - taking into account index\nclustering and index order on in-memory pages (i.e. re-cluster?), why need\nfor multiple index if clustering indexes on heap?\n- solidify the referential integrity constraints between those tables, on\npaper....define the use cases before modifying the database tables...i\nassume this is a dev database\n- linux fs mount options to explore - i.e. noatime, writeback, etc.\n-maybe look at prepared statements if you are running alot of similar\nqueries from a single session? 
assuming web front end for your db - with say\nfrequently queried region/category/dates for large read-only dataset with\nmultiple join conditions?\n\nThere are some good presentations on pgcon.org from PGCon 2010 that was held\nlast week...\n http://www.pgcon.org/2010/schedule/events/218.en.html\n\nIf you take everything into account and model it correctly (not too loose,\nnot too tight), your solution will be reusable and will save time and\nhardware expenses.\n\nRegards -\n\nBryan\n\n\n\nOn Thu, May 27, 2010 at 2:43 AM, David Jarvis <[email protected]> wrote:\n\n> Hi, Bryan.\n>\n> I was just about to reply to the thread, thanks for asking. Clustering was\n> key. After rebooting the machine (just to make sure absolutely nothing was\n> cached), I immediately ran a report on Toronto: 5.25 seconds!\n>\n> Here's what I did:\n>\n> 1. Created a new set of tables that matched the old set, with\n> statistics of 1000 on the station and taken (date) columns.\n> 2. Inserted the data from the old hierarchy into the new set, ordered\n> by station id then by date (same seven child tables as before: one per\n> category).\n> - I wanted to ensure a strong correlation between primary key and\n> station id.\n> 3. Added three indexes per table: (a) station id; (b) date taken;\n> and (c) station-taken-category.\n> 4. Set the station-taken-category index as CLUSTER.\n> 5. Vacuumed the new tables.\n> 6. Dropped the old tables.\n> 7. Set the following configuration values:\n> - shared_buffers = 1GB\n> - temp_buffers = 32MB\n> - work_mem = 32MB\n> - maintenance_work_mem = 64MB\n> - seq_page_cost = 1.0\n> - random_page_cost = 2.0\n> - cpu_index_tuple_cost = 0.001\n> - effective_cache_size = 512MB\n>\n> I ran a few more reports (no reboots, but reading vastly different data\n> sets):\n>\n> - Vancouver: 4.2s\n> - Yellowknife: 1.7s\n> - Montreal: 6.5s\n> - Trois-Riviers: 2.8s\n>\n> No full table scans. I imagine some indexes are not strictly necessary and\n> will test to see which can be removed (my guess: the station and taken\n> indexes). The problem was that the station ids were scattered and so\n> PostgreSQL presumed a full table scan would be faster.\n>\n> Physically ordering the data by station ids triggers index use every time.\n>\n> Next week's hardware upgrade should halve those times -- unless anyone has\n> further suggestions to squeeze more performance out of PG. ;-)\n>\n> Dave\n>\n>\n\nAgree with Tom on his point about avoidance of cost param adjustments to fit specific test cases.A few suggestions...as I assume you own this database...- check out pg_statio_user_tables - optimize your cache hit ratio on blocks read...different time durations... pg_stat_bgwriter (read from a script or something and snapshot)\n- pg_buffercache in contrib/  - /proc/meminfo on linux - find out exactly what is going on with your kernel buffer cache (size, how it is buffering) and if your controller or drive is using a read ahead cache.  \n- might want to play around with partial indexes vs. and/or range partitioning with exclusion constraints, etc.- define I/O characteristics of the dataset - taking into account index clustering and index order on in-memory pages (i.e. re-cluster?), why need for multiple index if clustering indexes on heap?\n- solidify the referential integrity constraints between those tables, on paper....define the use cases before modifying the database tables...i assume this is a dev database- linux fs mount options to explore - i.e. 
noatime, writeback, etc.\n-maybe look at prepared statements if you are running alot of similar queries from a single session? assuming web front end for your db - with say frequently queried region/category/dates for large read-only dataset with multiple join conditions?\nThere are some good presentations on pgcon.org from PGCon 2010 that was held last week... http://www.pgcon.org/2010/schedule/events/218.en.html\nIf you take everything into account and model it correctly (not too loose, not too tight), your solution will be reusable and will save time and hardware expenses.Regards - \nBryan On Thu, May 27, 2010 at 2:43 AM, David Jarvis <[email protected]> wrote:\n\nHi, Bryan.I was just about to reply to the thread, thanks for asking. Clustering was key. After rebooting the machine (just to make sure absolutely\nnothing was cached), I immediately ran a report on Toronto: 5.25 seconds!Here's what I did:Created a new set of tables that matched the old set, with statistics of 1000 on the station and taken (date) columns.\nInserted\nthe data from the old hierarchy into the new set, ordered by station id\nthen by date (same seven child tables as before: one per category).I wanted to ensure a strong correlation between primary key and station id.\nAdded three indexes per table: (a) station id; (b) date taken; and (c) station-taken-category.Set the station-taken-category index as CLUSTER.Vacuumed the new tables.Dropped the old tables.\nSet the following configuration values:shared_buffers = 1GBtemp_buffers = 32MBwork_mem = 32MBmaintenance_work_mem = 64MBseq_page_cost = 1.0\nrandom_page_cost = 2.0\ncpu_index_tuple_cost = 0.001effective_cache_size = 512MB\nI ran a few more reports (no reboots, but reading vastly different data sets):\nVancouver: 4.2sYellowknife: 1.7sMontreal: 6.5sTrois-Riviers: 2.8s\nNo full table scans. I imagine some indexes are not strictly\nnecessary and will test to see which can be removed (my guess: the station and taken indexes). The problem was that the station ids\nwere scattered and so PostgreSQL presumed a full table scan would\nbe faster. Physically ordering the data by station ids triggers index use every time.Next week's hardware upgrade should halve those times -- unless anyone has further suggestions to squeeze more performance out of PG. ;-)\nDave", "msg_date": "Thu, 27 May 2010 16:40:12 -0500", "msg_from": "Bryan Hinton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "Hi, Bryan.\n\nThanks for the notes. I thought about using a prepared statement, but I\ncannot find any examples of using a PREPARE statement from within a\nfunction, and don't really feel like tinkering around to figure it out.\n\nPerformance is at the point where the Java/PHP bridge and JasperReports are\nbottlenecks. The run_time variable seldom goes beyond 2.6s now. The reports\ntake about 5 - 6 seconds to appear. At this point I'm into diminishing\nreturns.\n\nI can perform a 60-minute hardware upgrade or spend 12 hours profiling to\nget less than the same net effect (and there is no guarantee I can improve\nthe performance in fewer than 12 hours -- it took me 17 days and countless\ne-mails to this mailing group just to get this far -- *thank you again for\nall the help*, by the way). 
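On the PREPARE point above: PREPARE and EXECUTE are session-level SQL commands rather than something used inside a PL/pgSQL function (static SQL in PL/pgSQL is already prepared and cached implicitly). A sketch of what session-level preparation could look like, reusing names and values from earlier in the thread; note that on this era of PostgreSQL a parameterised prepared statement is planned without seeing the actual values, which is the same generic-plan behaviour that hurt the loop query earlier in the thread:

    PREPARE yearly_avg (int, int, date, date) AS
        SELECT extract(YEAR FROM m.taken) AS year, avg(m.amount) AS amount
          FROM climate.measurement m
         WHERE m.station_id  = $1
           AND m.category_id = $2
           AND m.taken BETWEEN $3 AND $4
         GROUP BY extract(YEAR FROM m.taken);

    EXECUTE yearly_avg(5148, 1, '1900-08-01', '2009-12-31');
    DEALLOCATE yearly_avg;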
(If I was a PostgreSQL guru like most people on\nthis list, it might take me 2 hours of profiling to optimize away the\nremaining bottlenecks, but even then the gain would only be a second or two\nin the database arena; the other system components will also gain by a\nhardware upgrade.)\n\nDave\n\nHi, Bryan.Thanks for the notes. I thought about using a prepared statement, but I cannot find any examples of using a PREPARE statement from within a function, and don't really feel like tinkering around to figure it out.\nPerformance is at the point where the Java/PHP bridge and JasperReports are bottlenecks. The run_time variable seldom goes beyond 2.6s now. The reports take about 5 - 6 seconds to appear. At this point I'm into diminishing returns.\nI can perform a 60-minute hardware upgrade or spend 12 hours profiling to get less than the same net effect (and there is no guarantee I can improve the performance in fewer than 12 hours -- it took me 17 days and countless e-mails to this mailing group just to get this far -- thank you again for all the help, by the way). (If I was a PostgreSQL guru like most people on this list, it might take me 2 hours of profiling to optimize away the remaining bottlenecks, but even then the gain would only be a second or two in the database arena; the other system components will also gain by a hardware upgrade.)\nDave", "msg_date": "Thu, 27 May 2010 20:29:13 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Random Page Cost and Planner" }, { "msg_contents": "I'm testing/tuning a new midsize server and ran into an inexplicable problem. With an RAID10 drive, when I move the WAL to a separate RAID1 drive, TPS drops from over 1200 to less than 90! I've checked everything and can't find a reason.\n\nHere are the details.\n\n8 cores (2x4 Intel Nehalem 2 GHz)\n12 GB memory\n12 x 7200 SATA 500 GB disks\n3WARE 9650SE-12ML RAID controller with bbu\n 2 disks: RAID1 500GB ext4 blocksize=4096\n 8 disks: RAID10 2TB, stripe size 64K, blocksize=4096 (ext4 or xfs - see below)\n 2 disks: hot swap\nUbuntu 10.04 LTS (Lucid)\n\nWith xfs or ext4 on the RAID10 I got decent bonnie++ and pgbench results (this one is for xfs):\n\nVersion 1.03e ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nargon 24064M 70491 99 288158 25 129918 16 65296 97 428210 23 558.9 1\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 23283 81 +++++ +++ 13775 56 20143 74 +++++ +++ 15152 54\nargon,24064M,70491,99,288158,25,129918,16,65296,97,428210,23,558.9,1,16,23283,81,+++++,+++,13775,56,20143\\\n,74,+++++,+++,15152,54\n\npgbench -i -s 100 -U test\npgbench -c 10 -t 10000 -U test\n scaling factor: 100\n query mode: simple\n number of clients: 10\n number of transactions per client: 10000\n number of transactions actually processed: 100000/100000\n tps = 1046.104635 (including connections establishing)\n tps = 1046.337276 (excluding connections establishing)\n\nNow the mystery: I moved the pg_xlog directory to a RAID1 array (same 3WARE controller, two more SATA 7200 disks). 
Run the same tests and ...\n\n tps = 82.325446 (including connections establishing)\n tps = 82.326874 (excluding connections establishing)\n\nI thought I'd made a mistake, like maybe I moved the whole database to the RAID1 array, but I checked and double checked. I even watched the lights blink - the WAL was definitely on the RAID1 and the rest of Postgres on the RAID10.\n\nSo I moved the WAL back to the RAID10 array, and performance jumped right back up to the >1200 TPS range.\n\nNext I check the RAID1 itself:\n\n dd if=/dev/zero of=./bigfile bs=8192 count=2000000\n\nwhich yielded 98.8 MB/sec - not bad. bonnie++ on the RAID1 pair showed good performance too:\n\nVersion 1.03e ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nargon 24064M 68601 99 110057 18 46534 6 59883 90 123053 7 471.3 1\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\nargon,24064M,68601,99,110057,18,46534,6,59883,90,123053,7,471.3,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,\\\n+++,+++++,+++,+++++,+++\n\nSo ... anyone have any idea at all how TPS drops to below 90 when I move the WAL to a separate RAID1 disk? Does this make any sense at all? It's repeatable. It happens for both ext4 and xfs. It's weird.\n\nYou can even watch the disk lights and see it: the RAID10 disks are on almost constantly when the WAL is on the RAID10, but when you move the WAL over to the RAID1, its lights are dim and flicker a lot, like it's barely getting any data, and the RAID10 disk's lights barely go on at all.\n\nThanks,\nCraig\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 02 Jun 2010 16:30:28 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Weird XFS WAL problem" }, { "msg_contents": "On 03/06/10 11:30, Craig James wrote:\n> I'm testing/tuning a new midsize server and ran into an inexplicable \n> problem. With an RAID10 drive, when I move the WAL to a separate \n> RAID1 drive, TPS drops from over 1200 to less than 90! I've checked \n> everything and can't find a reason.\n>\n>\n\nAre the 2 new RAID1 disks the same make and model as the 12 RAID10 ones?\n\nAlso, are barriers *on* on the RAID1 mount and off on the RAID10 one?\n\nCheers\n\nMark\n\n", "msg_date": "Thu, 03 Jun 2010 11:40:49 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "On Wed, Jun 2, 2010 at 7:30 PM, Craig James <[email protected]> wrote:\n> I'm testing/tuning a new midsize server and ran into an inexplicable\n> problem.  With an RAID10 drive, when I move the WAL to a separate RAID1\n> drive, TPS drops from over 1200 to less than 90!   
I've checked everything\n> and can't find a reason.\n>\n> Here are the details.\n>\n> 8 cores (2x4 Intel Nehalem 2 GHz)\n> 12 GB memory\n> 12 x 7200 SATA 500 GB disks\n> 3WARE 9650SE-12ML RAID controller with bbu\n>  2 disks: RAID1  500GB ext4  blocksize=4096\n>  8 disks: RAID10 2TB, stripe size 64K, blocksize=4096 (ext4 or xfs - see\n> below)\n>  2 disks: hot swap\n> Ubuntu 10.04 LTS (Lucid)\n>\n> With xfs or ext4 on the RAID10 I got decent bonnie++ and pgbench results\n> (this one is for xfs):\n>\n> Version 1.03e       ------Sequential Output------ --Sequential Input-\n> --Random-\n>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec\n> %CP\n> argon        24064M 70491  99 288158  25 129918  16 65296  97 428210  23\n> 558.9   1\n>                    ------Sequential Create------ --------Random\n> Create--------\n>                    -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n>              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec\n> %CP\n>                 16 23283  81 +++++ +++ 13775  56 20143  74 +++++ +++ 15152\n>  54\n> argon,24064M,70491,99,288158,25,129918,16,65296,97,428210,23,558.9,1,16,23283,81,+++++,+++,13775,56,20143\\\n> ,74,+++++,+++,15152,54\n>\n> pgbench -i -s 100 -U test\n> pgbench -c 10 -t 10000 -U test\n>    scaling factor: 100\n>    query mode: simple\n>    number of clients: 10\n>    number of transactions per client: 10000\n>    number of transactions actually processed: 100000/100000\n>    tps = 1046.104635 (including connections establishing)\n>    tps = 1046.337276 (excluding connections establishing)\n>\n> Now the mystery: I moved the pg_xlog directory to a RAID1 array (same 3WARE\n> controller, two more SATA 7200 disks).  Run the same tests and ...\n>\n>    tps = 82.325446 (including connections establishing)\n>    tps = 82.326874 (excluding connections establishing)\n>\n> I thought I'd made a mistake, like maybe I moved the whole database to the\n> RAID1 array, but I checked and double checked.  I even watched the lights\n> blink - the WAL was definitely on the RAID1 and the rest of Postgres on the\n> RAID10.\n>\n> So I moved the WAL back to the RAID10 array, and performance jumped right\n> back up to the >1200 TPS range.\n>\n> Next I check the RAID1 itself:\n>\n>  dd if=/dev/zero of=./bigfile bs=8192 count=2000000\n>\n> which yielded 98.8 MB/sec - not bad.  bonnie++ on the RAID1 pair showed good\n> performance too:\n>\n> Version 1.03e       ------Sequential Output------ --Sequential Input-\n> --Random-\n>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec\n> %CP\n> argon        24064M 68601  99 110057  18 46534   6 59883  90 123053   7\n> 471.3   1\n>                    ------Sequential Create------ --------Random\n> Create--------\n>                    -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n>              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec\n> %CP\n>                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++\n> +++\n> argon,24064M,68601,99,110057,18,46534,6,59883,90,123053,7,471.3,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,\\\n> +++,+++++,+++,+++++,+++\n>\n> So ... anyone have any idea at all how TPS drops to below 90 when I move the\n> WAL to a separate RAID1 disk?  Does this make any sense at all?  It's\n> repeatable. 
It happens for both ext4 and xfs. It's weird.\n>\n> You can even watch the disk lights and see it: the RAID10 disks are on\n> almost constantly when the WAL is on the RAID10, but when you move the WAL\n> over to the RAID1, its lights are dim and flicker a lot, like it's barely\n> getting any data, and the RAID10 disk's lights barely go on at all.\n\n*) Is your raid 1 configured writeback cache on the controller?\n*) have you tried changing wal_sync_method to fdatasync?\n\nmerlin\n", "msg_date": "Thu, 3 Jun 2010 09:01:01 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Craig James wrote:\n> I'm testing/tuning a new midsize server and ran into an inexplicable \n> problem. With an RAID10 drive, when I move the WAL to a separate \n> RAID1 drive, TPS drops from over 1200 to less than 90!\n\nNormally <100 TPS means that the write cache on the WAL drive volume is \ndisabled (or set to write-through instead of write-back). When things \nin this area get fishy, I will usually download sysbench and have it \nspecifically test how many fsync calls can happen per second. \nhttp://projects.2ndquadrant.com/talks , \"Database Hardware \nBenchmarking\", page 28 has an example of the right incantation for that.\n\nAlso, make sure you run 3ware's utilities and confirm all the disks have \nfinished their initialization and verification stages. If you just \nadjusted disk layout that and immediate launched into benchmarks, those \nare useless until the background cleanup is done.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 03 Jun 2010 12:52:44 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "On 6/2/10 4:40 PM, Mark Kirkwood wrote:\n> On 03/06/10 11:30, Craig James wrote:\n>> I'm testing/tuning a new midsize server and ran into an inexplicable\n>> problem. With an RAID10 drive, when I move the WAL to a separate RAID1\n>> drive, TPS drops from over 1200 to less than 90! I've checked\n>> everything and can't find a reason.\n>\n> Are the 2 new RAID1 disks the same make and model as the 12 RAID10 ones?\n\nYes.\n\n> Also, are barriers *on* on the RAID1 mount and off on the RAID10 one?\n\nIt was the barriers. \"barrier=1\" isn't just a bad idea on ext4, it's a disaster.\n\npgbench -i -s 100 -U test\npgbench -c 10 -t 10000 -U test\n\nChange WAL to barrier=0\n\n tps = 1463.264981 (including connections establishing)\n tps = 1463.725687 (excluding connections establishing)\n\nChange WAL to noatime, nodiratime, barrier=0\n\n tps = 1479.331476 (including connections establishing)\n tps = 1479.810545 (excluding connections establishing)\n\nChange WAL to barrier=1\n\n tps = 82.325446 (including connections establishing)\n tps = 82.326874 (excluding connections establishing)\n\nThis is really hard to believe, because the bonnie++ numbers and dd(1) numbers look good (see my original post). But it's totally repeatable. It must be some really unfortunate \"just missed the next sector going by the write head\" problem.\n\nSo with ext4, bonnie++ and dd aren't the whole story.\n\nBTW, I also learned that if you edit /etc/fstab and use \"mount -oremount\" it WON'T change \"barrier=0/1\" unless it is explicit in the fstab file. That is, if you put \"barrier=0\" into /etc/fstab and use the remount, it will change it to no barriers. 
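Related to Merlin's wal_sync_method question above: it can be worth confirming from inside the server which sync-related settings were actually in effect during these runs, since they interact with the barrier and write-cache behaviour being discussed. A small check, assuming psql access to the test instance:

    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('wal_sync_method', 'fsync', 'synchronous_commit',
                    'wal_buffers', 'checkpoint_segments');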
But if you then remove it from /etc/fstab, it won't change it back to the default. You have to actually put \"barrier=1\" if you want to get it back to the default. This seems like a bug to me, and it made it really hard to track this down. \"mount -oremount\" is not the same as umount/mount!\n\nCraig\n", "msg_date": "Thu, 03 Jun 2010 10:06:11 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "On Thu, 3 Jun 2010, Craig James wrote:\n>> Also, are barriers *on* on the RAID1 mount and off on the RAID10 one?\n>\n> It was the barriers. \"barrier=1\" isn't just a bad idea on ext4, it's a \n> disaster.\n\nThis worries me a little. Does your array have a battery-backed cache? If \nso, then it should be fast regardless of barriers (although barriers may \nmake a small difference). If it does not, then it is likely that the fast \nspeed you are seeing with barriers off is unsafe.\n\nThere should be no \"just missed the sector going past for write\" problem \never with a battery-backed cache.\n\nMatthew\n\n-- \n There once was a limerick .sig\n that really was not very big\n It was going quite fine\n Till it reached the fourth line\n", "msg_date": "Thu, 3 Jun 2010 18:14:07 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote:\n> On Thu, 3 Jun 2010, Craig James wrote:\n>>> Also, are barriers *on* on the RAID1 mount and off on the RAID10\none?\n>>\n>> It was the barriers. \"barrier=1\" isn't just a bad idea on ext4,\n>> it's a disaster.\n> \n> This worries me a little. Does your array have a battery-backed\n> cache? If so, then it should be fast regardless of barriers\n> (although barriers may make a small difference). If it does not,\n> then it is likely that the fast speed you are seeing with barriers\n> off is unsafe.\n \nI've seen this, too (with xfs). Our RAID controller, in spite of\nhaving BBU cache configured for writeback, waits for actual\npersistence on disk for write barriers (unlike for fsync). This\ndoes strike me as surprising to the point of bordering on qualifying\nas a bug. It means that you can't take advantage of the BBU cache\nand get the benefit of write barriers in OS cache behavior. :-(\n \n-Kevin\n", "msg_date": "Thu, 03 Jun 2010 12:30:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Kevin Grittner wrote:\n> I've seen this, too (with xfs). Our RAID controller, in spite of\n> having BBU cache configured for writeback, waits for actual\n> persistence on disk for write barriers (unlike for fsync). This\n> does strike me as surprising to the point of bordering on qualifying\n> as a bug.\nCompletely intentional, and documented at \nhttp://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F\n\nThe issue is that XFS will actually send the full \"flush your cache\" \ncall to the controller, rather than just the usual fsync call, and that \neliminates the benefit of having a write cache there in the first \nplace. Good controllers respect that and flush their whole write cache \nout. And ext4 has adopted the same mechanism. This is very much a good \nthing from the perspective of database reliability for people with \nregular hard drives who don't have a useful write cache on their cheap \nhard drives. 
It allows them to keep the disk's write cache on for other \nthings, while still getting the proper cache flushes when the database \ncommits demand them. It does mean that everyone with a non-volatile \nbattery backed cache, via RAID card typically, needs to turn barriers \noff manually.\n\nI've already warned on this list that PostgreSQL commit performance on \next4 is going to appear really terrible to many people. If you \nbenchmark and don't recognize ext3 wasn't operating in a reliable mode \nbefore, the performance drop now that ext4 is doing the right thing with \nbarriers looks impossibly bad.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 03 Jun 2010 14:18:34 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Craig James wrote:\n> This is really hard to believe, because the bonnie++ numbers and dd(1) \n> numbers look good (see my original post). But it's totally \n> repeatable. It must be some really unfortunate \"just missed the next \n> sector going by the write head\" problem.\n\nCommit performance is a separate number to measure that is not reflected \nin any benchmark that tests sequential performance. I consider it the \nfourth axis of disk system performance (seq read, seq write, random \nIOPS, commit rate), and directly measure it with the sysbench fsync test \nI recommended already. (You can do it with the right custom pgbench \nscript too).\n\nYou only get one commit per rotation on a drive, which is exactly what \nyou're seeing: a bit under the 120 spins/second @ 7200 RPM. Attempts \nto time things just right to catch more than one sector per spin are \nextremely difficult to accomplish, I spent a week on that once without \nmaking any good progress. You can easily get 100MB/s on reads and \nwrites but only manage 100 commits/second.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 03 Jun 2010 14:27:38 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n> Kevin Grittner wrote:\n>> I've seen this, too (with xfs). Our RAID controller, in spite of\n>> having BBU cache configured for writeback, waits for actual\n>> persistence on disk for write barriers (unlike for fsync). This\n>> does strike me as surprising to the point of bordering on\n>> qualifying as a bug.\n> Completely intentional, and documented at \n>\nhttp://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F\n \nYeah, I read that long ago and I've disabled write barriers because\nof it; however, it still seems wrong that the RAID controller\ninsists on flushing to the drives in write-back mode. Here are my\nreasons for wishing it was otherwise:\n \n(1) We've had batteries on our RAID controllers fail occasionally. \nThe controller automatically degrades to write-through, and we get\nan email from the server and schedule a tech to travel to the site\nand replace the battery; but until we take action we are now exposed\nto possible database corruption. 
Barriers don't automatically come\non when the controller flips to write-through mode.\n \n(2) It precludes any possibility of moving from fsync techniques to\nwrite barrier techniques for ensuring database integrity. If the OS\nrespected write barriers and the controller considered the write\nsatisfied when it hit BBU cache, write barrier techniques would\nwork, and checkpoints could be made smoother. Think how nicely that\nwould inter-operate with point (1).\n \nSo, while I understand it's Working As Designed, I think the design\nis surprising and sub-optimal.\n \n-Kevin\n", "msg_date": "Thu, 03 Jun 2010 13:40:35 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "On Thu, Jun 3, 2010 at 12:40 PM, Kevin Grittner\n<[email protected]> wrote:\n>\n> Yeah, I read that long ago and I've disabled write barriers because\n> of it; however, it still seems wrong that the RAID controller\n> insists on flushing to the drives in write-back mode.  Here are my\n> reasons for wishing it was otherwise:\n\nI think it's a case of the quickest, simplest answer to semi-new tech.\n Not sure what to do with barriers? Just flush the whole cache.\n\nI'm guessing that this will get optimized in the future.\n\nBTW, I'll have LSI Megaraid latest and greatest to test on in a month,\nand older Areca 1680s as well. I'll be updating the firmware on the\narecas, and I'll run some tests on the whole barrier behaviour to see\nif it's gotten any better lately.\n", "msg_date": "Thu, 3 Jun 2010 13:10:12 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Scott Marlowe <[email protected]> wrote:\n \n> I think it's a case of the quickest, simplest answer to semi-new\n> tech. Not sure what to do with barriers? Just flush the whole\n> cache.\n> \n> I'm guessing that this will get optimized in the future.\n \nLet's hope so.\n \nThat reminds me, the write barrier concept is at least on the\nhorizon as a viable technology; does anyone know if the asynchronous\ngraphs concept in this (one page) paper ever came to anything? (I\nhaven't hear anything about it lately.)\n \nhttp://www.usenix.org/events/fast05/wips/burnett.pdf\n \n-Kevin\n", "msg_date": "Thu, 03 Jun 2010 14:17:28 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Scott Marlowe wrote:\n> I think it's a case of the quickest, simplest answer to semi-new tech.\n> Not sure what to do with barriers? Just flush the whole cache.\n> \n\nWell, that really is the only useful thing you can do with regular SATA \ndrives; the ATA command set isn't any finer grained than that in a way \nthat's useful for this context. And it's also quite reasonable for a \nRAID controller to respond to that \"flush the whole cache\" call by \nflushing its cache. So it's not just the simplest first answer, I \nbelieve it's the only answer until a better ATA command set becomes \navailable.\n\nI think this can only be resolved usefully for all of us at the RAID \nfirmware level. If the controller had some logic that said \"it's OK to \nnot flush the cache when that call comes in if my battery is working \nfine\", that would make this whole problem go away. 
I don't expect it's \npossible to work around the exact set of concerns Kevin listed any other \nway, because as he pointed out the right thing to do is very dependent \non the battery health, which the OS also doesn't know (again, would \nrequire some new command set verbage).\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 03 Jun 2010 15:31:22 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "On Thu, Jun 3, 2010 at 1:31 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> I think it's a case of the quickest, simplest answer to semi-new tech.\n>>  Not sure what to do with barriers?  Just flush the whole cache.\n>>\n>\n> Well, that really is the only useful thing you can do with regular SATA\n> drives; the ATA command set isn't any finer grained than that in a way\n> that's useful for this context.  And it's also quite reasonable for a RAID\n> controller to respond to that \"flush the whole cache\" call by flushing its\n> cache.  So it's not just the simplest first answer, I believe it's the only\n> answer until a better ATA command set becomes available.\n>\n> I think this can only be resolved usefully for all of us at the RAID\n> firmware level.  If the controller had some logic that said \"it's OK to not\n> flush the cache when that call comes in if my battery is working fine\",\n\nThat's what already happens for fsync on a BBU controller, so I don't\nthink the code to do so would be something fancy and new, just a\nsimple change of logic on which code path to take.\n", "msg_date": "Thu, 3 Jun 2010 13:44:24 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> I think this can only be resolved usefully for all of us at the\n> RAID firmware level. If the controller had some logic that said\n> \"it's OK to not flush the cache when that call comes in if my\n> battery is working fine\", that would make this whole problem go\n> away.\n \nThat is exactly what I've been trying to suggest. Sorry for not\nbeing more clear about it.\n \n-Kevin\n", "msg_date": "Thu, 03 Jun 2010 15:01:03 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "On Thu, 3 Jun 2010, Greg Smith wrote:\n> And it's also quite reasonable for a RAID controller to respond to that \n> \"flush the whole cache\" call by flushing its cache.\n\nRemember that the RAID controller is presenting itself to the OS as a \nlarge disc, and hiding the individual discs from the OS. Why should the OS \ncare what has actually happened to the individual discs' caches, as long \nas that \"flush the whole cache\" command guarantees that the data is \npersistent. Taking the RAID array as a whole, that happens when the data \nhits the write-back cache.\n\nThe only circumstance where you actually need to flush the data to the \nindividual discs is when you need to take that disc away somewhere else \nand read it on another system. 
That's quite a rare use case for a RAID \narray (http://thedailywtf.com/Articles/RAIDing_Disks.aspx \nnotwithstanding).\n\n> If the controller had some logic that said \"it's OK to not flush the \n> cache when that call comes in if my battery is working fine\", that would \n> make this whole problem go away.\n\nThe only place this can be properly sorted is the RAID controller. \nAnywhere else would be crazy.\n\nMatthew\n\n-- \n\"To err is human; to really louse things up requires root\n privileges.\" -- Alexander Pope, slightly paraphrased\n", "msg_date": "Fri, 4 Jun 2010 10:27:04 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Greg Smith wrote:\n> Kevin Grittner wrote:\n> > I've seen this, too (with xfs). Our RAID controller, in spite of\n> > having BBU cache configured for writeback, waits for actual\n> > persistence on disk for write barriers (unlike for fsync). This\n> > does strike me as surprising to the point of bordering on qualifying\n> > as a bug.\n> Completely intentional, and documented at \n> http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F\n> \n> The issue is that XFS will actually send the full \"flush your cache\" \n> call to the controller, rather than just the usual fsync call, and that \n> eliminates the benefit of having a write cache there in the first \n> place. Good controllers respect that and flush their whole write cache \n> out. And ext4 has adopted the same mechanism. This is very much a good \n> thing from the perspective of database reliability for people with \n> regular hard drives who don't have a useful write cache on their cheap \n> hard drives. It allows them to keep the disk's write cache on for other \n> things, while still getting the proper cache flushes when the database \n> commits demand them. It does mean that everyone with a non-volatile \n> battery backed cache, via RAID card typically, needs to turn barriers \n> off manually.\n> \n> I've already warned on this list that PostgreSQL commit performance on \n> ext4 is going to appear really terrible to many people. If you \n> benchmark and don't recognize ext3 wasn't operating in a reliable mode \n> before, the performance drop now that ext4 is doing the right thing with \n> barriers looks impossibly bad.\n\nWell, this is depressing. Now that we finally have common\nbattery-backed cache RAID controller cards, the file system developers\nhave throw down another roadblock in ext4 and xfs. Do we need to\ndocument this?\n\nOn another topic, I am a little unclear on how things behave when the\ndrive is write-back. If the RAID controller card writes to the drive,\nbut the data isn't on the platers, how does it know when it can discard\nthat information from the BBU RAID cache?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Fri, 4 Jun 2010 11:06:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Bruce Momjian <[email protected]> wrote:\n \n> On another topic, I am a little unclear on how things behave when\n> the drive is write-back. 
If the RAID controller card writes to the\n> drive, but the data isn't on the platers, how does it know when it\n> can discard that information from the BBU RAID cache?\n \nThe controller waits for the drive to tell it that it has made it to\nthe platter before it discards it. What made you think otherwise?\n \n-Kevin\n", "msg_date": "Fri, 04 Jun 2010 10:16:41 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> \n> > On another topic, I am a little unclear on how things behave when\n> > the drive is write-back. If the RAID controller card writes to the\n> > drive, but the data isn't on the platers, how does it know when it\n> > can discard that information from the BBU RAID cache?\n> \n> The controller waits for the drive to tell it that it has made it to\n> the platter before it discards it. What made you think otherwise?\n\nBecause a write-back drive cache says it is on the drive before it hits\nthe platters, which I think is the default for SATA drive. Is that\ninaccurate?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Fri, 4 Jun 2010 11:18:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Bruce Momjian <[email protected]> wrote:\n> Kevin Grittner wrote:\n \n>> The controller waits for the drive to tell it that it has made it\n>> to the platter before it discards it. What made you think\n>> otherwise?\n> \n> Because a write-back drive cache says it is on the drive before it\n> hits the platters, which I think is the default for SATA drive.\n> Is that inaccurate?\n \nAny decent RAID controller will ensure that the drives themselves\naren't using write-back caching. When we've mentioned write-back\nversus write-through on this thread we've been talking about the\nbehavior of the *controller*. We have our controllers configured to\nuse write-back through the BBU cache as long as the battery is good,\nbut to automatically switch to write-through if the battery goes\nbad.\n \n-Kevin\n", "msg_date": "Fri, 04 Jun 2010 10:23:52 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> > Kevin Grittner wrote:\n> \n> >> The controller waits for the drive to tell it that it has made it\n> >> to the platter before it discards it. What made you think\n> >> otherwise?\n> > \n> > Because a write-back drive cache says it is on the drive before it\n> > hits the platters, which I think is the default for SATA drive.\n> > Is that inaccurate?\n> \n> Any decent RAID controller will ensure that the drives themselves\n> aren't using write-back caching. When we've mentioned write-back\n> versus write-through on this thread we've been talking about the\n> behavior of the *controller*. We have our controllers configured to\n> use write-back through the BBU cache as long as the battery is good,\n> but to automatically switch to write-through if the battery goes\n> bad.\n\nOK, good, but when why would a BBU RAID controller flush stuff to disk\nwith a flush-all command? I thought the whole goal of BBU was to avoid\nsuch flushes. 
What is unique about the command ext4/xfs is sending?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Fri, 4 Jun 2010 11:30:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Bruce Momjian <[email protected]> wrote:\n> Kevin Grittner wrote:\n \n>> Any decent RAID controller will ensure that the drives themselves\n>> aren't using write-back caching. When we've mentioned write-back\n>> versus write-through on this thread we've been talking about the\n>> behavior of the *controller*. We have our controllers configured\n>> to use write-back through the BBU cache as long as the battery is\n>> good, but to automatically switch to write-through if the battery\n>> goes bad.\n> \n> OK, good, but when why would a BBU RAID controller flush stuff to\n> disk with a flush-all command? I thought the whole goal of BBU\n> was to avoid such flushes.\n \nThat has been *precisely* my point.\n \nI don't know at the protocol level; I just know that write barriers\ndo *something* which causes our controllers to wait for actual disk\nplatter persistence, while fsync does not.\n \nThe write barrier concept seems good to me, and I wish it could be\nused at the OS level without killing performance. I blame the\ncontroller, for not treating it the same as fsync (i.e., as long as\nit's in write-back mode it should treat data as persisted as soon as\nit's in BBU cache).\n \n-Kevin\n", "msg_date": "Fri, 04 Jun 2010 10:35:51 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> > Kevin Grittner wrote:\n> \n> >> Any decent RAID controller will ensure that the drives themselves\n> >> aren't using write-back caching. When we've mentioned write-back\n> >> versus write-through on this thread we've been talking about the\n> >> behavior of the *controller*. We have our controllers configured\n> >> to use write-back through the BBU cache as long as the battery is\n> >> good, but to automatically switch to write-through if the battery\n> >> goes bad.\n> > \n> > OK, good, but when why would a BBU RAID controller flush stuff to\n> > disk with a flush-all command? I thought the whole goal of BBU\n> > was to avoid such flushes.\n> \n> That has been *precisely* my point.\n> \n> I don't know at the protocol level; I just know that write barriers\n> do *something* which causes our controllers to wait for actual disk\n> platter persistence, while fsync does not.\n> \n> The write barrier concept seems good to me, and I wish it could be\n> used at the OS level without killing performance. I blame the\n> controller, for not treating it the same as fsync (i.e., as long as\n> it's in write-back mode it should treat data as persisted as soon as\n> it's in BBU cache).\n\nYeah. I wonder if it honors the cache flush because it might think it\nis replacing disks or something odd. I think we are going to have to\ndocument this in 9.0 because obviously you have seen it already.\n\nIs this an issue with SAS cards/drives as well?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. 
+\n", "msg_date": "Fri, 4 Jun 2010 11:41:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Kevin Grittner wrote:\n> I don't know at the protocol level; I just know that write barriers\n> do *something* which causes our controllers to wait for actual disk\n> platter persistence, while fsync does not\n\nIt's in the docs now: \nhttp://www.postgresql.org/docs/9.0/static/wal-reliability.html\n\nFLUSH CACHE EXT is the ATAPI-6 call that filesystems use to enforce \nbarriers on that type of drive. Here's what the relevant portion of the \nATAPI spec says:\n\n\"This command is used by the host to request the device to flush the \nwrite cache. If there is data in the write\ncache, that data shall be written to the media.The BSY bit shall remain \nset to one until all data has been\nsuccessfully written or an error occurs.\"\n\nSAS systems have a similar call named SYNCHRONIZE CACHE.\n\nThe improvement I actually expect to arrive here first is a reliable \nimplementation of O_SYNC/O_DSYNC writes. Both SAS and SATA drives that \ncapable of doing Native Command Queueing support a write type called \n\"Force Unit Access\", which is essentially just like a direct write that \ncannot be cached. When we get more kernels with reliable sync writing \nthat maps under the hood to FUA, and can change wal_sync_method to use \nthem, the need to constantly call fsync for every write to the WAL will \ngo away. Then the \"blow out the RAID cache when barriers are on\" \nbehavior will only show up during checkpoint fsyncs, which will make \nthings a lot better (albeit still not ideal).\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 05 Jun 2010 18:50:27 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" }, { "msg_contents": "Greg Smith wrote:\n> Kevin Grittner wrote:\n> > I don't know at the protocol level; I just know that write barriers\n> > do *something* which causes our controllers to wait for actual disk\n> > platter persistence, while fsync does not\n> \n> It's in the docs now: \n> http://www.postgresql.org/docs/9.0/static/wal-reliability.html\n> \n> FLUSH CACHE EXT is the ATAPI-6 call that filesystems use to enforce \n> barriers on that type of drive. Here's what the relevant portion of the \n> ATAPI spec says:\n> \n> \"This command is used by the host to request the device to flush the \n> write cache. If there is data in the write\n> cache, that data shall be written to the media.The BSY bit shall remain \n> set to one until all data has been\n> successfully written or an error occurs.\"\n> \n> SAS systems have a similar call named SYNCHRONIZE CACHE.\n> \n> The improvement I actually expect to arrive here first is a reliable \n> implementation of O_SYNC/O_DSYNC writes. Both SAS and SATA drives that \n> capable of doing Native Command Queueing support a write type called \n> \"Force Unit Access\", which is essentially just like a direct write that \n> cannot be cached. When we get more kernels with reliable sync writing \n> that maps under the hood to FUA, and can change wal_sync_method to use \n> them, the need to constantly call fsync for every write to the WAL will \n> go away. 
Then the \"blow out the RAID cache when barriers are on\" \n> behavior will only show up during checkpoint fsyncs, which will make \n> things a lot better (albeit still not ideal).\n\nGreat information! I have added the attached documentation patch to\nexplain the write-barrier/BBU interaction. This will appear in the 9.0\ndocumentation.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +", "msg_date": "Wed, 7 Jul 2010 10:42:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird XFS WAL problem" } ]
[ { "msg_contents": "Greetings,\n\nin \nhttp://archives.postgresql.org/message-id/1056648218.7041.11.camel@jester, \nit is stated that the performance of temporary tables is \"the same as a \nregular table but without\nWAL on the table contents.\".\n\nI have a datamining-type application which makes heavy use of temporary \ntables to stage (potentially large amounts of) data between different \noperations. WAL is write-ahead\n\nTo effectively multi-thread this application, I (think I) need to switch \nfrom temporary to regular tables, because\n- the concurrent threads need to use different connections, not cursors, \nto effectively operate concurrently\n- temporary tables are not visible across connections (as they are \nacross cursors of the same connection)\n\nThus, I wonder how much this will affect performance. Access on the \ntemporary table is inserting (millions of) rows once in a single \ntransaction, potentially update them all once within a single \ntransaction, then select on them once or more.\n\nOf course, eventually loosing the data in these tables is not a problem \nat all. The threads are synchronized above the SQL level.\n\nThanks for any input on how to maximize performance for this applicaiton.\n\n Joachim\n\n", "msg_date": "Tue, 25 May 2010 09:59:56 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "performance of temporary vs. regular tables" }, { "msg_contents": "temporary tables are handled pretty much like the regular table. The\nmagic happens on schema level, new schema is setup for connection, so\nthat it can access its own temporary tables.\nTemporary tables also are not autovacuumed.\nAnd that's pretty much the most of the differences.\n", "msg_date": "Tue, 25 May 2010 09:49:13 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "Am 25.05.2010 10:49, schrieb Grzegorz Jaśkiewicz:\n> temporary tables are handled pretty much like the regular table. The\n> magic happens on schema level, new schema is setup for connection, so\n> that it can access its own temporary tables.\n> Temporary tables also are not autovacuumed.\n> And that's pretty much the most of the differences.\n\nThanks. So, the Write-Ahead-Logging (being used or not) does not matter?\n\nAnd, is there anything like RAM-only tables? I really don't care whether \nthe staging data is lost on the rare event of a machine crash, or \nwhether the query crashes due to lack of memory (I make sure there's \nenough w/o paging) - I only care about performance here.\n\n Joachim\n\n", "msg_date": "Tue, 25 May 2010 11:00:24 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "2010/5/25 Joachim Worringen <[email protected]>:\n> Am 25.05.2010 10:49, schrieb Grzegorz Jaśkiewicz:\n>>\n>> temporary tables are handled pretty much like the regular table. The\n>> magic happens on schema level, new schema is setup for connection, so\n>> that it can access its own temporary tables.\n>> Temporary tables also are not autovacuumed.\n>> And that's pretty much the most of the differences.\n>\n> Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?\n>\n> And, is there anything like RAM-only tables? 
I really don't care whether the\n> staging data is lost on the rare event of a machine crash, or whether the\n> query crashes due to lack of memory (I make sure there's enough w/o paging)\n> - I only care about performance here.\n>\n>  Joachim\n>\n\nI think can create a tablespace on a ram disk, and create a table there.\n\nThom\n", "msg_date": "Tue, 25 May 2010 10:15:54 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "Am 25.05.2010 11:15, schrieb Thom Brown:\n> 2010/5/25 Joachim Worringen<[email protected]>:\n>> And, is there anything like RAM-only tables? I really don't care whether the\n>> staging data is lost on the rare event of a machine crash, or whether the\n>> query crashes due to lack of memory (I make sure there's enough w/o paging)\n>> - I only care about performance here.\n>>\n>> Joachim\n>>\n>\n> I think can create a tablespace on a ram disk, and create a table there.\n\nTrue, but I think this makes the database server configuration more \ncomplex (which is acceptable), and may add dependencies between the \nserver configuration and the SQL statements for the selection of \ntablespace name (which would be a problem)?\n\nBut I am a tablespace-novice and will look into this \"workaround\".\n\n thanks, Joachim\n\n", "msg_date": "Tue, 25 May 2010 11:32:14 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "WAL does the same thing to DB journaling does to the FS.\nPlus allows you to roll back (PITR).\n\nAs for the RAM, it will be in ram as long as OS decides to keep it in\nRAM cache, and/or its in the shared buffers memory.\nUnless you have a lot of doubt about the two, I don't think it makes\ntoo much sens to setup ramdisk table space yourself. But try it, and\nsee yourself.\nMake sure that you have logic in place, that would set it up, before\npostgresql starts up, in case you'll reboot, or something.\n", "msg_date": "Tue, 25 May 2010 10:38:02 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "Am 25.05.2010 11:38, schrieb Grzegorz Jaśkiewicz:\n> WAL does the same thing to DB journaling does to the FS.\n> Plus allows you to roll back (PITR).\n>\n> As for the RAM, it will be in ram as long as OS decides to keep it in\n> RAM cache, and/or its in the shared buffers memory.\n\nOr until I commit the transaction? I have not completely disabled \nsync-to-disk in my setup, as there are of course situations where new \ndata comes into the database that needs to be stored in a safe manner.\n\n> Unless you have a lot of doubt about the two, I don't think it makes\n> too much sens to setup ramdisk table space yourself. But try it, and\n> see yourself.\n> Make sure that you have logic in place, that would set it up, before\n> postgresql starts up, in case you'll reboot, or something.\n\nThat's what I thought about when mentioning \"increased setup \ncomplexity\". Simply adding a keyword like \"NONPERSISTENT\" to the table \ncreation statement would be preferred...\n\n Joachim\n\n", "msg_date": "Tue, 25 May 2010 11:52:20 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of temporary vs. 
regular tables" }, { "msg_contents": "On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:\n> Am 25.05.2010 10:49, schrieb Grzegorz Jaśkiewicz:\n> > temporary tables are handled pretty much like the regular table. The\n> > magic happens on schema level, new schema is setup for connection, so\n> > that it can access its own temporary tables.\n> > Temporary tables also are not autovacuumed.\n> > And that's pretty much the most of the differences.\n> \n> Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?\nIt does matter quite significantly in my experience. Both from an io and a cpu \noverhead perspective.\n\nAndres\n", "msg_date": "Tue, 25 May 2010 12:41:58 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "Am 25.05.2010 12:41, schrieb Andres Freund:\n> On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:\n>> Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?\n> It does matter quite significantly in my experience. Both from an io and a cpu\n> overhead perspective.\n\nO.k., looks as if I have to make my own experience... I'll let you know \nif possible.\n\n Joachim\n\n\n", "msg_date": "Wed, 26 May 2010 18:03:15 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "WAL matters in performance. Hence why it is advisable to have it on a\nseparate drive :)\n", "msg_date": "Wed, 26 May 2010 17:47:55 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "On 05/26/2010 06:03 PM, Joachim Worringen wrote:\n> Am 25.05.2010 12:41, schrieb Andres Freund:\n>> On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:\n>>> Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?\n>> It does matter quite significantly in my experience. Both from an io\n>> and a cpu\n>> overhead perspective.\n>\n> O.k., looks as if I have to make my own experience... I'll let you know\n> if possible.\n\nAs promised, I did a tiny benchmark - basically, 8 empty tables are \nfilled with 100k rows each within 8 transactions (somewhat typically for \nmy application). The test machine has 4 cores, 64G RAM and RAID1 10k \ndrives for data.\n\n# INSERTs into a TEMPORARY table:\n[joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml\n\nreal 3m18.242s\nuser 1m59.074s\nsys 1m51.001s\n\n# INSERTs into a standard table:\n[joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml\n\nreal 3m35.090s\nuser 2m5.295s\nsys 2m2.307s\n\nThus, there is a slight hit of about 10% (which may even be within \nmeausrement variations) - your milage will vary.\n\n Joachim\n\n", "msg_date": "Fri, 28 May 2010 13:04:13 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "On Fri, May 28, 2010 at 4:04 AM, Joachim Worringen\n<[email protected]> wrote:\n> On 05/26/2010 06:03 PM, Joachim Worringen wrote:\n>>\n>> Am 25.05.2010 12:41, schrieb Andres Freund:\n>>>\n>>> On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:\n>>>>\n>>>> Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?\n>>>\n>>> It does matter quite significantly in my experience. 
Both from an io\n>>> and a cpu\n>>> overhead perspective.\n>>\n>> O.k., looks as if I have to make my own experience... I'll let you know\n>> if possible.\n>\n> As promised, I did a tiny benchmark - basically, 8 empty tables are filled\n> with 100k rows each within 8 transactions (somewhat typically for my\n> application). The test machine has 4 cores, 64G RAM and RAID1 10k drives for\n> data.\n>\n> # INSERTs into a TEMPORARY table:\n> [joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml\n>\n> real    3m18.242s\n> user    1m59.074s\n> sys     1m51.001s\n>\n> # INSERTs into a standard table:\n> [joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml\n>\n> real    3m35.090s\n> user    2m5.295s\n> sys     2m2.307s\n>\n> Thus, there is a slight hit of about 10% (which may even be within\n> meausrement variations) - your milage will vary.\n>\n>  Joachim\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\nI think it would be interesting to create a ram disk and insert into\nit. In the MySQL community even thought MyISAM has fallen out of use\nthe Memory table (based on MyISAM) is still somewhat used.\n\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Fri, 28 May 2010 10:07:42 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "\n> As promised, I did a tiny benchmark - basically, 8 empty tables are \n> filled with 100k rows each within 8 transactions (somewhat typically for \n> my application). The test machine has 4 cores, 64G RAM and RAID1 10k \n> drives for data.\n>\n> # INSERTs into a TEMPORARY table:\n> [joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml\n>\n> real 3m18.242s\n> user 1m59.074s\n> sys 1m51.001s\n>\n> # INSERTs into a standard table:\n> [joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml\n>\n> real 3m35.090s\n> user 2m5.295s\n> sys 2m2.307s\n>\n> Thus, there is a slight hit of about 10% (which may even be within \n> meausrement variations) - your milage will vary.\n\nUsually WAL causes a much larger performance hit than this.\n\nSince the following command :\n\nCREATE TABLE tmp AS SELECT n FROM generate_series(1,1000000) AS n\n\nwhich inserts 1M rows takes 1.6 seconds on my desktop, your 800k rows \nINSERT taking more than 3 minutes is a bit suspicious unless :\n\n- you got huge fields that need TOASTing ; in this case TOAST compression \nwill eat a lot of CPU and you're benchmarking TOAST, not the rest of the \nsystem\n- you got some non-indexed foreign key\n- some other reason ?\n", "msg_date": "Wed, 02 Jun 2010 12:03:42 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. 
regular tables" }, { "msg_contents": "Am 02.06.2010 12:03, schrieb Pierre C:\n> Usually WAL causes a much larger performance hit than this.\n>\n> Since the following command :\n>\n> CREATE TABLE tmp AS SELECT n FROM generate_series(1,1000000) AS n\n>\n> which inserts 1M rows takes 1.6 seconds on my desktop, your 800k rows\n> INSERT taking more than 3 minutes is a bit suspicious unless :\n>\n> - you got huge fields that need TOASTing ; in this case TOAST\n> compression will eat a lot of CPU and you're benchmarking TOAST, not the\n> rest of the system\n> - you got some non-indexed foreign key\n> - some other reason ?\n\nYes, the \"other\" reason is that I am not issueing a single SQL command, \nbut import data from plain ASCII files through the Pyhton-based \nframework into the database.\n\nThe difference between your measurement and my measurent is the upper \npotential of improvement for my system (which has, on the other hand, \nthe advantage of being a bit more powerful and flexible than a single \nSQL statement....;-) )\n\n Joachim\n\n", "msg_date": "Wed, 02 Jun 2010 14:54:22 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of temporary vs. regular tables" }, { "msg_contents": "\n> Yes, the \"other\" reason is that I am not issueing a single SQL command, \n> but import data from plain ASCII files through the Pyhton-based \n> framework into the database.\n>\n> The difference between your measurement and my measurent is the upper \n> potential of improvement for my system (which has, on the other hand, \n> the advantage of being a bit more powerful and flexible than a single \n> SQL statement....;-) )\n\nAh, in that case ... ;)\n\nYou could give pypy a try, sometimes it's a lot slower, sometimes it's a \nlot faster.\n", "msg_date": "Mon, 07 Jun 2010 00:37:23 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of temporary vs. regular tables" } ]
[ { "msg_contents": "We're using a function that when run as a select statement outside of the \nfunction takes roughly 1.5s to complete whereas running an identical\nquery within a function is taking around 55s to complete.\n\nWe are lost as to why placing this query within a function as opposed to\nsubstituting the variables in a select statement is so drastically different.\n\nThe timings posted here are from a 512MB memory virtual machine and are not of\nmajor concern on their own but we are finding the same issue in our production\nenvironment with far superior hardware.\n\nThe function can be found here:\nhttp://campbell-lange.net/media/files/fn_medirota_get_staff_leave_summary.sql\n\n---\n\nTimings for the individual components on their own is as follows:\n\nselect * from fn_medirota_validate_rota_master(6);\nTime: 0.670 ms\n\nselect to_date(EXTRACT (YEAR FROM current_date)::text, 'YYYY');\nTime: 0.749 ms\n\nselect * from fn_medirota_people_template_generator(2, 6, date'2009-01-01',\ndate'2009-12-31', TRUE) AS templates;\nTime: 68.004 ms\n\nselect * from fn_medirota_people_template_generator(2, 6, date'2010-01-01',\ndate'2010-12-31', TRUE) AS templates;\nTime: 1797.323\n\n\nCopying the exact same for loop select statement from the query above into\nthe psql query buffer and running them with variable substitution yields the\nfollowing:\n\nRunning FOR loop SElECT with variable substitution:\nTime: 3150.585 ms\n\n\nWhereas invoking the function yields:\n\nselect * from fn_medirota_get_staff_leave_summary(6);\nTime: 57375.477 ms\n\n\nWe have tried using explain analyse to update the query optimiser, dropped and\nrecreated the function and have restarted both the machine and the postgres\nserver multiple times.\n\nAny help or advice would be greatly appreciated.\n\n\nKindest regards,\nTyler Hildebrandt\n\n---\n\nEXPLAIN ANALYSE VERBOSE SELECT * FROM fn_medirota_get_staff_leave_summary(6);\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n {FUNCTIONSCAN\n :startup_cost 0.00\n :total_cost 260.00\n :plan_rows 1000\n :plan_width 85\n :targetlist (\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n :resno 1\n :resname id\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 1043\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :resno 2\n :resname t_full_name\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 16\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n :resno 3\n :resname b_enabled\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 1043\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n :resno 4\n :resname t_anniversary\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n :resno 5\n :resname n_last_year_annual\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n 
:resno 6\n :resname n_last_year_other\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n :resno 7\n :resname n_this_year_annual\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 8\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 8\n }\n :resno 8\n :resname n_this_year_other\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n )\n :qual <>\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :scanrelid 1\n :funcexpr\n {FUNCEXPR\n :funcid 150447\n :funcresulttype 149366\n :funcretset true\n :funcformat 0\n :args (\n {CONST\n :consttype 23\n :consttypmod -1\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 6 0 0 0 0 0 0 0 ]\n }\n )\n }\n :funccolnames (\"id\" \"t_full_name\" \"b_enabled\" \"t_anniversary\" \"n_last_year_\n annual\" \"n_last_year_other\" \"n_this_year_annual\" \"n_this_year_other\")\n :funccoltypes <>\n :funccoltypmods <>\n }\n\nFunction Scan on fn_medirota_get_staff_leave_summary (cost=0.00..260.00\nrows=1000 width=85) (actual time=51877.812..51877.893 rows=94 loops=1)\nTotal runtime: 51878.008 ms\n(183 rows)\n\n-- \nTyler Hildebrandt\nSoftware Developer\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n020 7631 1555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n", "msg_date": "Tue, 25 May 2010 10:59:43 +0100", "msg_from": "Tyler Hildebrandt <[email protected]>", "msg_from_op": true, "msg_subject": "Query timing increased from 3s to 55s when used as a function\n\tinstead of select" }, { "msg_contents": "In response to Tyler Hildebrandt :\n> We're using a function that when run as a select statement outside of the \n> function takes roughly 1.5s to complete whereas running an identical\n> query within a function is taking around 55s to complete.\n> \n> select * from fn_medirota_get_staff_leave_summary(6);\n> Time: 57375.477 ms\n\nI think, your problem is here:\n\nSELECT INTO current_user * FROM\nfn_medirota_validate_rota_master(in_currentuser);\n\n\nThe planner has no knowledge about how many rows this functions returns\nif he don't know the actual parameter. Because of this, this query\nenforce a seq-scan. Try to rewrite that to something like:\n\nexecute 'select * from fn_medirota_validate_rota_master(' ||\nin_currentuser' || ')' into current_user\n\n\n*untested*\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Tue, 25 May 2010 12:30:26 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query timing increased from 3s to 55s when used as a function\n\tinstead of select" }, { "msg_contents": "> I think, your problem is here:\n> \n> SELECT INTO current_user * FROM\n> fn_medirota_validate_rota_master(in_currentuser);\n> \n> \n> The planner has no knowledge about how many rows this functions returns\n> if he don't know the actual parameter. Because of this, this query\n> enforce a seq-scan. Try to rewrite that to something like:\n> \n> execute 'select * from fn_medirota_validate_rota_master(' ||\n> in_currentuser' || ')' into current_user\n> \n\nThanks for your response. 
This doesn't seem to solve our issue, unfortunately.\n\nAs a side to that, we have the fn_medirota_validate_rota_master calls in a\nlarge amount of our other functions that are running very well.\n\n-- \nTyler Hildebrandt\nSoftware Developer\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n020 7631 1555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n", "msg_date": "Tue, 25 May 2010 14:41:00 +0100", "msg_from": "Tyler Hildebrandt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query timing increased from 3s to 55s when used as a\n\tfunction instead of select" }, { "msg_contents": "On Tue, May 25, 2010 at 9:41 AM, Tyler Hildebrandt\n<[email protected]> wrote:\n>> I think, your problem is here:\n>>\n>> SELECT INTO current_user * FROM\n>> fn_medirota_validate_rota_master(in_currentuser);\n>>\n>>\n>> The planner has no knowledge about how many rows this functions returns\n>> if he don't know the actual parameter. Because of this, this query\n>> enforce a seq-scan. Try to rewrite that to something like:\n>>\n>> execute 'select * from fn_medirota_validate_rota_master(' ||\n>> in_currentuser' || ')' into current_user\n>>\n>\n> Thanks for your response.  This doesn't seem to solve our issue, unfortunately.\n>\n> As a side to that, we have the fn_medirota_validate_rota_master calls in a\n> large amount of our other functions that are running very well.\n\nany chance of seeing the function source?\n\nmerlin\n", "msg_date": "Tue, 25 May 2010 10:55:39 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query timing increased from 3s to 55s when used as a\n\tfunction instead of select" }, { "msg_contents": "On Tue, May 25, 2010 at 10:55 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, May 25, 2010 at 9:41 AM, Tyler Hildebrandt\n> <[email protected]> wrote:\n>>> I think, your problem is here:\n>>>\n>>> SELECT INTO current_user * FROM\n>>> fn_medirota_validate_rota_master(in_currentuser);\n>>>\n>>>\n>>> The planner has no knowledge about how many rows this functions returns\n>>> if he don't know the actual parameter. Because of this, this query\n>>> enforce a seq-scan. Try to rewrite that to something like:\n>>>\n>>> execute 'select * from fn_medirota_validate_rota_master(' ||\n>>> in_currentuser' || ')' into current_user\n>>>\n>>\n>> Thanks for your response.  This doesn't seem to solve our issue, unfortunately.\n>>\n>> As a side to that, we have the fn_medirota_validate_rota_master calls in a\n>> large amount of our other functions that are running very well.\n>\n> any chance of seeing the function source?\n\noops! I missed it :-). looking at your function, what version of\npostgres? 
have you experimented w/return query?\n\nmerlin\n", "msg_date": "Tue, 25 May 2010 10:57:42 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query timing increased from 3s to 55s when used as a\n\tfunction instead of select" }, { "msg_contents": "Have you read this?\n \nhttp://blog.endpoint.com/2008/12/why-is-my-function-slow.html \n \n99% of the 'function is slow' problems are caused by this.\n\nHave you checked the difference between explain and prepare + explain execute?\n\n>>> Tyler Hildebrandt <[email protected]> 05/25/10 4:59 AM >>>\nWe're using a function that when run as a select statement outside of the \nfunction takes roughly 1.5s to complete whereas running an identical\nquery within a function is taking around 55s to complete.\n\nWe are lost as to why placing this query within a function as opposed to\nsubstituting the variables in a select statement is so drastically different.\n\nThe timings posted here are from a 512MB memory virtual machine and are not of\nmajor concern on their own but we are finding the same issue in our production\nenvironment with far superior hardware.\n\nThe function can be found here:\nhttp://campbell-lange.net/media/files/fn_medirota_get_staff_leave_summary.sql \n\n---\n\nTimings for the individual components on their own is as follows:\n\nselect * from fn_medirota_validate_rota_master(6);\nTime: 0.670 ms\n\nselect to_date(EXTRACT (YEAR FROM current_date)::text, 'YYYY');\nTime: 0.749 ms\n\nselect * from fn_medirota_people_template_generator(2, 6, date'2009-01-01',\ndate'2009-12-31', TRUE) AS templates;\nTime: 68.004 ms\n\nselect * from fn_medirota_people_template_generator(2, 6, date'2010-01-01',\ndate'2010-12-31', TRUE) AS templates;\nTime: 1797.323\n\n\nCopying the exact same for loop select statement from the query above into\nthe psql query buffer and running them with variable substitution yields the\nfollowing:\n\nRunning FOR loop SElECT with variable substitution:\nTime: 3150.585 ms\n\n\nWhereas invoking the function yields:\n\nselect * from fn_medirota_get_staff_leave_summary(6);\nTime: 57375.477 ms\n\n\nWe have tried using explain analyse to update the query optimiser, dropped and\nrecreated the function and have restarted both the machine and the postgres\nserver multiple times.\n\nAny help or advice would be greatly appreciated.\n\n\nKindest regards,\nTyler Hildebrandt\n\n---\n\nEXPLAIN ANALYSE VERBOSE SELECT * FROM fn_medirota_get_staff_leave_summary(6);\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n {FUNCTIONSCAN\n :startup_cost 0.00\n :total_cost 260.00\n :plan_rows 1000\n :plan_width 85\n :targetlist (\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n :resno 1\n :resname id\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 1043\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :resno 2\n :resname t_full_name\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 16\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n :resno 3\n :resname b_enabled\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 4\n 
:vartype 1043\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n :resno 4\n :resname t_anniversary\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n :resno 5\n :resname n_last_year_annual\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n :resno 6\n :resname n_last_year_other\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n :resno 7\n :resname n_this_year_annual\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 8\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 8\n }\n :resno 8\n :resname n_this_year_other\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n )\n :qual <>\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :scanrelid 1\n :funcexpr\n {FUNCEXPR\n :funcid 150447\n :funcresulttype 149366\n :funcretset true\n :funcformat 0\n :args (\n {CONST\n :consttype 23\n :consttypmod -1\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 6 0 0 0 0 0 0 0 ]\n }\n )\n }\n :funccolnames (\"id\" \"t_full_name\" \"b_enabled\" \"t_anniversary\" \"n_last_year_\n annual\" \"n_last_year_other\" \"n_this_year_annual\" \"n_this_year_other\")\n :funccoltypes <>\n :funccoltypmods <>\n }\n\nFunction Scan on fn_medirota_get_staff_leave_summary (cost=0.00..260.00\nrows=1000 width=85) (actual time=51877.812..51877.893 rows=94 loops=1)\nTotal runtime: 51878.008 ms\n(183 rows)\n\n-- \nTyler Hildebrandt\nSoftware Developer\[email protected] \n\nCampbell-Lange Workshop\nwww.campbell-lange.net \n020 7631 1555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 May 2010 11:18:11 -0500", "msg_from": "\"Jorge Montero\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query timing increased from 3s to 55s when used\n\tas a function instead of select" } ]
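To check whether a cached, parameter-blind plan is really what the function is hitting (the point of the blog post Jorge linked), the same comparison can be made outside plpgsql with a prepared statement. This is a sketch only: the statement name is invented and the '...' placeholders stand for the real query body from the function, which is not reproduced here:

    -- planned the way a plpgsql query is planned, i.e. without knowing $1
    PREPARE leave_summary(int) AS
        SELECT ... ;        -- the function's slow SELECT, with $1 substituted
                            -- for the in_currentuser value
    EXPLAIN EXECUTE leave_summary(6);

    -- planned with the literal value, as in the fast interactive test
    EXPLAIN SELECT ... ;    -- the same query with 6 written in directly

    DEALLOCATE leave_summary;

If the two plans differ, the usual fix is the one Andreas pointed at: run the offending statement through EXECUTE (FOR ... IN EXECUTE in older plpgsql, or RETURN QUERY EXECUTE on 8.4 and later) so it is planned with the actual parameter values.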
[ { "msg_contents": "Good day list \r\n\r\nI would appreciate some comments to the following: \r\n\r\nI have a Dell PowerEdge SC1420 server with 2 GB of RAM 1 DD 73 Gb SCSI Ulltra320 2 Xeon (4 \r\ncache) with PGSQL 7.3.7 \r\n\r\nrunning GNU / Linux Red Hat Enterprise 4, 0 for 32-bit (kernel 2.6.9-5Elsmp) Nahant (ES) \r\n\r\nand another server start or operate the same system and database engine, HP Proliant ML 150 G6 \r\ntwo Smart Array P410 Xeon 2 GB RAM, 2 DD Sata 15,000 RPM (250 GB) in RAID 1 \r\n\r\n \r\n\r\nI'm validating the operation and Execution / time difference of a process between the two is \r\nnot really maquians muicha few seconds almost no time to think it is just quicker the Dell \r\nmachine, it must obviously affect the technology of hard drives. \r\n\r\nshmmax is set to 500.00.000, annex Execution / parameters of both machines pg_settings \r\nconsultation. \r\n\r\nPlease let me give recommendation to the confituracion, if this correct or would fail or left \r\nover tune. an average of 30 users use the system, and is heavy disk usage, uan table has 8 \r\nmillion + another + 13 milloines, the 8 is used daily, desarfortunadame Progress can not yet \r\nmigrate to 8.x, that tiempoi tiomaria a development, adjustment and testing, but it will fit \r\nwith the current configuration that I mentioned. \r\n\r\nThank you.\r\n\r\n\r\nJuan Pablo Sandoval Rivera\r\nTecnologo Prof. en Ing. de Sistemas\r\n\r\nLinux User : 322765 \r\nmsn : [email protected]\r\nyahoo : [email protected] (juan_pablos.rm)\r\nUIN : 276125187 (ICQ)\r\nJabber : [email protected]\r\nSkype : juan.pablo.sandoval.rivera\r\n\r\nAPOYA A ECOSEARCH.COM - Ayuda a salvar al Planeta.", "msg_date": "Tue, 25 May 2010 14:04:07 +0000", "msg_from": "Juan Pablo Sandoval Rivera <[email protected]>", "msg_from_op": true, "msg_subject": "tunning pgsql 7.3.7 over RHEL 4.0 32 x86 (2.6.9-5ELsmp)" }, { "msg_contents": "On Tue, May 25, 2010 at 02:04:07PM +0000, Juan Pablo Sandoval Rivera wrote:\n> Please let me give recommendation to the confituracion...\n\nThe subject line of this message said you're trying to run PostgreSQL 7.3.7. I\nhope that's a typo, and you really mean 8.3.7, in which case this suggestion\nboils down to \"upgrade to 8.3.11\". But if you're really trying to run a\nversion that's several years old, the best configuration advice you can\nreceive is to upgrade to something not totally prehistoric. There have been\nmajor performance enhancements in each release since 7.3.7, and no amount of\nhardware tuning will make such an old version perform comparatively well. Not\nto mention the much greater risk you have that an unsupported version will eat\nyour data.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Tue, 25 May 2010 08:27:01 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tunning pgsql 7.3.7 over RHEL 4.0 32 x86\n\t(2.6.9-5ELsmp)" } ]
[ { "msg_contents": "Hello,\nThank you for the clarifications. The plan as run from the psql looks ok,\nalso did not notice any specific locks for this particular query.\n\nLogs of the system running queries are not utterly clear, so chasing the\nparameters for the explosive query is not that simple (shared logs between\nmultiple threads), but from what I see there is no difference between them\nand the plan looks like (without removal of irrelevant parameters this time,\nmost of them are float8, but also bytea)\n\nexplain SELECT t0.surveyid, t0.srcid, t1.survey_pk, t1.source_pk, t1.tstype,\nt1.homoscedasticitytest, t1.ljungboxrandomnesstest, t1.maxvalue,\nt1.meanobstime,\nt1.meanvalue, t1.median, t1.minvalue, t1.range, t1.robustweightedstddev,\nt1.symmetrytest, t1.trimmedweightedmean, t1.trimmedweightedrange,\nt1.variabilityflag, t1.weightedkurtosis, t1.weightedmean,\nt1.weightedmeanconfidenceinterval, t1.weightedmeanobstime,\nt1.weightednormalizedp2pscatter,\nt1.weightedskewness, t1.weightedstddevdf,\nt1.weightedstddevwdf, t1.vals, t1.ccdids, t1.flags, t1.obstime, t1.len,\nt1.valueerrors FROM sources t0 INNER JOIN ts t1 ON\nt0.surveyid = t1.survey_pk AND t0.srcid = t1.source_pk WHERE (t0.surveyid =\n16 AND t0.srcid >= 200210107009116 AND t0.srcid <= 200210107009991)\n ORDER BY t0.surveyid ASC, t0.srcid ASC ;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2363.21 rows=835683 width=1527)\n Join Filter: (t0.srcid = t1.source_pk)\n -> Index Scan using sources_pkey on sources t0 (cost=0.00..17.88 rows=1\nwidth=12)\n Index Cond: ((surveyid = 16) AND (srcid >= 200210107009116::bigint)\nAND (srcid <= 200210107009991::bigint))\n -> Append (cost=0.00..2325.93 rows=1552 width=1053)\n -> Index Scan using ts_pkey on ts t1 (cost=0.00..4.27 rows=1\nwidth=1665)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_bs3000l00000_ts_pkey on\nts_part_bs3000l00000 t1 (cost=0.00..6.30 rows=2 width=327)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_sm0073k00001_ts_pkey on\nts_part_sm0073k00001 t1 (cost=0.00..1232.63 rows=608 width=327)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_bs3000l00001_cg0346l00000_ts_pkey on\nts_part_bs3000l00001_cg0346l00000 t1 (cost=0.00..145.41 rows=127\nwidth=1556)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg0346l00001_cg0816k00000_ts_pkey on\nts_part_cg0346l00001_cg0816k00000 t1 (cost=0.00..147.64 rows=127\nwidth=1669)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg0816k00001_cg1180k00000_ts_pkey on\nts_part_cg0816k00001_cg1180k00000 t1 (cost=0.00..138.09 rows=119\nwidth=1615)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg1180k00001_cg6204k00000_ts_pkey on\nts_part_cg1180k00001_cg6204k00000 t1 (cost=0.00..125.69 rows=109\nwidth=1552)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg6204k00001_lm0022n00000_ts_pkey on\nts_part_cg6204k00001_lm0022n00000 t1 (cost=0.00..133.23 rows=116\nwidth=1509)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_lm0022n00001_lm0276m00000_ts_pkey on\nts_part_lm0022n00001_lm0276m00000 t1 (cost=0.00..131.08 
rows=115\nwidth=1500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_lm0276m00001_lm0584k00000_ts_pkey on\nts_part_lm0276m00001_lm0584k00000 t1 (cost=0.00..158.11 rows=135\nwidth=1471)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_lm0584k00001_sm0073k00000_ts_pkey on\nts_part_lm0584k00001_sm0073k00000 t1 (cost=0.00..103.47 rows=93 width=1242)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n\n\nI could increase debug level on the server, but not sure if the plan printed\nthere is of any help. Could this be caused by some race where there is too\nmuch activity? - DB box is at around 10% CPU load, small io wait, before the\nquery starts to overload the machine.\n\nFor sake of clarity this is the plan for the non-joined parameters to show\nwhich partition would be used (i.e. a single one)\n\nexplain select * from ts t0 where t0.survey_pk = 16 AND t0.source_pk >=\n200210107009116 AND t0.source_pk <= 200210107009991;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..10.72 rows=2 width=1614)\n -> Append (cost=0.00..10.72 rows=2 width=1614)\n -> Index Scan using ts_pkey on ts t0 (cost=0.00..4.27 rows=1\nwidth=1669)\n Index Cond: ((survey_pk = 16) AND (source_pk >=\n200210107009116::bigint) AND (source_pk <= 200210107009991::bigint))\n -> Index Scan using ts_part_bs3000l00001_cg0346l00000_ts_pkey on\nts_part_bs3000l00001_cg0346l00000 t0 (cost=0.00..6.45 rows=1 width=1560)\n Index Cond: ((survey_pk = 16) AND (source_pk >=\n200210107009116::bigint) AND (source_pk <= 200210107009991::bigint))\n(6 rows)\n\nTime: 1.559 ms\n\nand to check the bin size:\n> select count(*) from ts t0 where t0.survey_pk = 16 AND t0.source_pk >=\n200210107009116 AND t0.source_pk <= 200210107009991;\n count\n-------\n 1000\n(1 row)\n\n\nand analyzed plan:\n> explain analyze SELECT t0.surveyid, t0.srcid, t1.survey_pk, t1.source_pk,\nt1.tstype, t1.homoscedasticitytest, t1.ljungboxrandomnesstest, t1.maxvalue,\nt1.meanobstime,\nt1.meanvalue, t1.median, t1.minvalue, t1.range, t1.robustweightedstddev,\nt1.symmetrytest, t1.trimmedweightedmean, t1.trimmedweightedrange,\nt1.variabilityflag, t1.weightedkurtosis, t1.weightedmean,\nt1.weightedmeanconfidenceinterval, t1.weightedmeanobstime,\nt1.weightednormalizedp2pscatter,\nt1.weightedskewness, t1.weightedstddevdf,\nt1.weightedstddevwdf, t1.vals, t1.ccdids, t1.flags, t1.obstime, t1.len,\nt1.valueerrors FROM oglehip.sources t0 INNER JOIN oglehip.ts t1 ON\nt0.surveyid = t1.survey_pk AND t0.srcid = t1.source_pk WHERE (t0.surveyid =\n16 AND t0.srcid >= 200210107009116 AND t0.srcid <= 200210107009991)\n ORDER BY t0.surveyid ASC, t0.srcid ASC ;\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2363.21 rows=835692 width=1527) (actual\ntime=73.629..585.003 rows=1000 loops=1)\n Join Filter: (t0.srcid = t1.source_pk)\n -> Index Scan using sources_pkey on sources t0 (cost=0.00..17.88 rows=1\nwidth=12) (actual time=73.507..560.589 rows=500 loops=1)\n Index Cond: ((surveyid = 16) AND (srcid >= 200210107009116::bigint)\nAND (srcid <= 200210107009991::bigint))\n -> Append (cost=0.00..2325.93 rows=1552 width=1053) (actual\ntime=0.014..0.045 rows=2 loops=500)\n -> 
Index Scan using ts_pkey on ts t1 (cost=0.00..4.27 rows=1\nwidth=1665) (actual time=0.001..0.001 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_bs3000l00000_ts_pkey on\nts_part_bs3000l00000 t1 (cost=0.00..6.30 rows=2 width=327) (actual\ntime=0.002..0.002 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_sm0073k00001_ts_pkey on\nts_part_sm0073k00001 t1 (cost=0.00..1232.63 rows=608 width=327) (actual\ntime=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_bs3000l00001_cg0346l00000_ts_pkey on\nts_part_bs3000l00001_cg0346l00000 t1 (cost=0.00..145.41 rows=127\nwidth=1556) (actual time=0.006..0.007 rows=2 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg0346l00001_cg0816k00000_ts_pkey on\nts_part_cg0346l00001_cg0816k00000 t1 (cost=0.00..147.64 rows=127\nwidth=1669) (actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg0816k00001_cg1180k00000_ts_pkey on\nts_part_cg0816k00001_cg1180k00000 t1 (cost=0.00..138.09 rows=119\nwidth=1615) (actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg1180k00001_cg6204k00000_ts_pkey on\nts_part_cg1180k00001_cg6204k00000 t1 (cost=0.00..125.69 rows=109\nwidth=1552) (actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_cg6204k00001_lm0022n00000_ts_pkey on\nts_part_cg6204k00001_lm0022n00000 t1 (cost=0.00..133.23 rows=116\nwidth=1509) (actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_lm0022n00001_lm0276m00000_ts_pkey on\nts_part_lm0022n00001_lm0276m00000 t1 (cost=0.00..131.08 rows=115\nwidth=1500) (actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_lm0276m00001_lm0584k00000_ts_pkey on\nts_part_lm0276m00001_lm0584k00000 t1 (cost=0.00..158.11 rows=135\nwidth=1471) (actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n -> Index Scan using ts_part_lm0584k00001_sm0073k00000_ts_pkey on\nts_part_lm0584k00001_sm0073k00000 t1 (cost=0.00..103.47 rows=93 width=1242)\n(actual time=0.004..0.004 rows=0 loops=500)\n Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk =\nt0.srcid))\n Total runtime: 585.566 ms\n(28 rows)\n\nTime: 588.102 ms\n\n\nWould be grateful for any pointers as the server restart is the only option\nnow once such a query starts trashing the disk.\n\nBest Regards,\nKrzysztof\n\n\nKrzysztof Nienartowicz <[email protected]> writes:\n> surveys-> SELECT t1.SURVEY_PK, t1.SOURCE_PK, t1.TSTYPE, t1.VALS\n> surveys-> FROM sources t0 ,TS t1 where\n> surveys-> (t0.SURVEYID = 16 AND t0.SRCID >= 203510110032281 AND\n> t0.SRCID <= 203520107001677 and t0.SURVEYID = t1.SURVEY_PK AND t0.SRCID =\n> t1.SOURCE_PK ) ORDER BY t0.SURVEYID ASC, t0.SRCID ASC\n\nWe don't make any attempt to infer derived inequality conditions,\nso no, those constraints on t0.srcid won't be propagated over to\nt1.source_pk. Sorry. 
It's been suggested before, but it would be\na lot of new mechanism and expense in the planner, and for most\nqueries it'd just slow things down to try to do that.\n\n> I have around 30 clients running the same query with different\n> parameters, but the query always returns 1000 rows (boundary values\n> are pre-calculated,so it's like traversal of the equiwidth histogram\n> if it comes to srcid/source_pk) and the rows from parallel queries\n> cannot be overlapping. Usually query returns within around a second.\n> I noticed however there are some queries that hang for many hours and\n> what is most curious some of them created several GB of temp files.\n\nCan you show us the query plan for the slow cases?\n\nregards, tom lane\n\nHello,Thank you for the clarifications. The plan as run from the psql looks ok, also did not notice any specific locks for this particular query.Logs of the system running queries are not utterly clear, so chasing the parameters for the explosive query is not that simple (shared logs between multiple threads), but from what I see there is no difference between them and the plan looks like (without removal of irrelevant parameters this time, most of them are float8, but also bytea)\nexplain SELECT t0.surveyid, t0.srcid, t1.survey_pk, t1.source_pk, t1.tstype, t1.homoscedasticitytest, t1.ljungboxrandomnesstest, t1.maxvalue, t1.meanobstime, t1.meanvalue, t1.median, t1.minvalue, t1.range, t1.robustweightedstddev, t1.symmetrytest, t1.trimmedweightedmean, t1.trimmedweightedrange, \nt1.variabilityflag, t1.weightedkurtosis, t1.weightedmean, t1.weightedmeanconfidenceinterval, t1.weightedmeanobstime, t1.weightednormalizedp2pscatter, t1.weightedskewness, t1.weightedstddevdf, t1.weightedstddevwdf, t1.vals, t1.ccdids, t1.flags, t1.obstime, t1.len, t1.valueerrors FROM  sources t0 INNER JOIN ts t1 ON \nt0.surveyid = t1.survey_pk AND t0.srcid = t1.source_pk WHERE (t0.surveyid = 16 AND t0.srcid >= 200210107009116  AND t0.srcid <= 200210107009991) ORDER BY t0.surveyid ASC, t0.srcid ASC ;                                                                       QUERY PLAN                                                                        \n--------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..2363.21 rows=835683 width=1527)\n   Join Filter: (t0.srcid = t1.source_pk)   ->  Index Scan using sources_pkey on sources t0  (cost=0.00..17.88 rows=1 width=12)         Index Cond: ((surveyid = 16) AND (srcid >= 200210107009116::bigint) AND (srcid <= 200210107009991::bigint))\n   ->  Append  (cost=0.00..2325.93 rows=1552 width=1053)         ->  Index Scan using ts_pkey on ts t1  (cost=0.00..4.27 rows=1 width=1665)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_bs3000l00000_ts_pkey on ts_part_bs3000l00000 t1  (cost=0.00..6.30 rows=2 width=327)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_sm0073k00001_ts_pkey on ts_part_sm0073k00001 t1  (cost=0.00..1232.63 rows=608 width=327)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_bs3000l00001_cg0346l00000_ts_pkey on ts_part_bs3000l00001_cg0346l00000 t1  (cost=0.00..145.41 rows=127 width=1556)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using 
ts_part_cg0346l00001_cg0816k00000_ts_pkey on ts_part_cg0346l00001_cg0816k00000 t1  (cost=0.00..147.64 rows=127 width=1669)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_cg0816k00001_cg1180k00000_ts_pkey on ts_part_cg0816k00001_cg1180k00000 t1  (cost=0.00..138.09 rows=119 width=1615)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_cg1180k00001_cg6204k00000_ts_pkey on ts_part_cg1180k00001_cg6204k00000 t1  (cost=0.00..125.69 rows=109 width=1552)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_cg6204k00001_lm0022n00000_ts_pkey on ts_part_cg6204k00001_lm0022n00000 t1  (cost=0.00..133.23 rows=116 width=1509)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_lm0022n00001_lm0276m00000_ts_pkey on ts_part_lm0022n00001_lm0276m00000 t1  (cost=0.00..131.08 rows=115 width=1500)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_lm0276m00001_lm0584k00000_ts_pkey on ts_part_lm0276m00001_lm0584k00000 t1  (cost=0.00..158.11 rows=135 width=1471)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\n         ->  Index Scan using ts_part_lm0584k00001_sm0073k00000_ts_pkey on ts_part_lm0584k00001_sm0073k00000 t1  (cost=0.00..103.47 rows=93 width=1242)               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))\nI could increase debug level on the server, but not sure if the plan printed there is of any help. Could this be caused by some race where there is too much activity? - DB box is at around 10% CPU load, small io wait, before the query starts to overload the machine.\nFor sake of clarity this is the plan for the non-joined parameters to show which partition would be used (i.e. 
a single one)explain select * from ts t0 where t0.survey_pk = 16 AND t0.source_pk >= 200210107009116  AND t0.source_pk <= 200210107009991;\n                                                                     QUERY PLAN                                                                      -----------------------------------------------------------------------------------------------------------------------------------------------------\n Result  (cost=0.00..10.72 rows=2 width=1614)   ->  Append  (cost=0.00..10.72 rows=2 width=1614)         ->  Index Scan using ts_pkey on ts t0  (cost=0.00..4.27 rows=1 width=1669)\n               Index Cond: ((survey_pk = 16) AND (source_pk >= 200210107009116::bigint) AND (source_pk <= 200210107009991::bigint))         ->  Index Scan using ts_part_bs3000l00001_cg0346l00000_ts_pkey on ts_part_bs3000l00001_cg0346l00000 t0  (cost=0.00..6.45 rows=1 width=1560)\n               Index Cond: ((survey_pk = 16) AND (source_pk >= 200210107009116::bigint) AND (source_pk <= 200210107009991::bigint))(6 rows)Time: 1.559 ms\nand to check the bin size:>  select count(*) from ts t0 where t0.survey_pk = 16 AND t0.source_pk >= 200210107009116  AND t0.source_pk <= 200210107009991; count -------\n  1000(1 row)and analyzed plan:> explain analyze SELECT t0.surveyid, t0.srcid, t1.survey_pk, t1.source_pk, t1.tstype, t1.homoscedasticitytest, t1.ljungboxrandomnesstest, t1.maxvalue, t1.meanobstime, \nt1.meanvalue, t1.median, t1.minvalue, t1.range, t1.robustweightedstddev, t1.symmetrytest, t1.trimmedweightedmean, t1.trimmedweightedrange, t1.variabilityflag, t1.weightedkurtosis, t1.weightedmean, t1.weightedmeanconfidenceinterval, t1.weightedmeanobstime, t1.weightednormalizedp2pscatter, \nt1.weightedskewness, t1.weightedstddevdf, t1.weightedstddevwdf, t1.vals, t1.ccdids, t1.flags, t1.obstime, t1.len, t1.valueerrors FROM oglehip.sources t0 INNER JOIN oglehip.ts t1 ON t0.surveyid = t1.survey_pk AND t0.srcid = t1.source_pk WHERE (t0.surveyid = 16 AND t0.srcid >= 200210107009116  AND t0.srcid <= 200210107009991)\n ORDER BY t0.surveyid ASC, t0.srcid ASC ;                                                                                             QUERY PLAN                                                                                              \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..2363.21 rows=835692 width=1527) (actual time=73.629..585.003 rows=1000 loops=1)\n   Join Filter: (t0.srcid = t1.source_pk)   ->  Index Scan using sources_pkey on sources t0  (cost=0.00..17.88 rows=1 width=12) (actual time=73.507..560.589 rows=500 loops=1)         Index Cond: ((surveyid = 16) AND (srcid >= 200210107009116::bigint) AND (srcid <= 200210107009991::bigint))\n   ->  Append  (cost=0.00..2325.93 rows=1552 width=1053) (actual time=0.014..0.045 rows=2 loops=500)         ->  Index Scan using ts_pkey on ts t1  (cost=0.00..4.27 rows=1 width=1665) (actual time=0.001..0.001 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_bs3000l00000_ts_pkey on ts_part_bs3000l00000 t1  (cost=0.00..6.30 rows=2 width=327) (actual time=0.002..0.002 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_sm0073k00001_ts_pkey on ts_part_sm0073k00001 t1  (cost=0.00..1232.63 rows=608 
width=327) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_bs3000l00001_cg0346l00000_ts_pkey on ts_part_bs3000l00001_cg0346l00000 t1  (cost=0.00..145.41 rows=127 width=1556) (actual time=0.006..0.007 rows=2 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_cg0346l00001_cg0816k00000_ts_pkey on ts_part_cg0346l00001_cg0816k00000 t1  (cost=0.00..147.64 rows=127 width=1669) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_cg0816k00001_cg1180k00000_ts_pkey on ts_part_cg0816k00001_cg1180k00000 t1  (cost=0.00..138.09 rows=119 width=1615) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_cg1180k00001_cg6204k00000_ts_pkey on ts_part_cg1180k00001_cg6204k00000 t1  (cost=0.00..125.69 rows=109 width=1552) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_cg6204k00001_lm0022n00000_ts_pkey on ts_part_cg6204k00001_lm0022n00000 t1  (cost=0.00..133.23 rows=116 width=1509) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_lm0022n00001_lm0276m00000_ts_pkey on ts_part_lm0022n00001_lm0276m00000 t1  (cost=0.00..131.08 rows=115 width=1500) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_lm0276m00001_lm0584k00000_ts_pkey on ts_part_lm0276m00001_lm0584k00000 t1  (cost=0.00..158.11 rows=135 width=1471) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid))         ->  Index Scan using ts_part_lm0584k00001_sm0073k00000_ts_pkey on ts_part_lm0584k00001_sm0073k00000 t1  (cost=0.00..103.47 rows=93 width=1242) (actual time=0.004..0.004 rows=0 loops=500)\n               Index Cond: ((t1.survey_pk = 16) AND (t1.source_pk = t0.srcid)) Total runtime: 585.566 ms(28 rows)Time: 588.102 ms\nWould be grateful for any pointers as the server restart is the only option now once such a query starts trashing the disk.Best Regards,Krzysztof\nKrzysztof Nienartowicz <[email protected]> writes:> surveys-> SELECT  t1.SURVEY_PK, t1.SOURCE_PK, t1.TSTYPE,  t1.VALS> surveys->   FROM sources t0 ,TS t1 where\n> surveys->   (t0.SURVEYID = 16 AND t0.SRCID >= 203510110032281 AND> t0.SRCID <= 203520107001677 and t0.SURVEYID = t1.SURVEY_PK AND t0.SRCID => t1.SOURCE_PK ) ORDER BY t0.SURVEYID ASC, t0.SRCID ASC\nWe don't make any attempt to infer derived inequality conditions,so no, those constraints on t0.srcid won't be propagated over tot1.source_pk.  Sorry.  It's been suggested before, but it would be\na lot of new mechanism and expense in the planner, and for mostqueries it'd just slow things down to try to do that.> I have around 30 clients running the same query with different> parameters, but the query always returns 1000 rows (boundary values\n> are pre-calculated,so it's like traversal of the equiwidth histogram> if it comes to srcid/source_pk) and the rows from parallel queries> cannot be overlapping. 
Usually query returns within around a second.\n> I noticed however there are some queries that hang for many hours and> what is most curious some of them created several GB of temp files.Can you show us the query plan for the slow cases?\t\t\tregards, tom lane", "msg_date": "Wed, 26 May 2010 17:27:50 +0200", "msg_from": "Krzysztof Nienartowicz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query causing explosion of temp space with join involving\n\tpartitioning" }, { "msg_contents": "Krzysztof Nienartowicz <[email protected]> writes:\n> Logs of the system running queries are not utterly clear, so chasing the\n> parameters for the explosive query is not that simple (shared logs between\n> multiple threads), but from what I see there is no difference between them\n> and the plan looks like (without removal of irrelevant parameters this time,\n> most of them are float8, but also bytea)\n> [ nestloop with inner index scans over the inherited table ]\n\nWell, that type of plan isn't going to consume much memory or disk\nspace. What I suspect is happening is that sometimes, depending on the\nspecific parameter values called out in the query, the planner is\nswitching to another plan type that does consume lots of space (probably\nvia sort or hash temp files). The most obvious guess is that that will\nhappen when the range limits on srcid get far enough apart to make a\nnestloop not look cheap. You could try experimenting with EXPLAIN and\ndifferent constant values to see what you get.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 May 2010 12:41:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query causing explosion of temp space with join involving\n\tpartitioning" }, { "msg_contents": "I made a brute force check and indeed, for one of the parameters the query was switching to sequential scans (or bitmaps scans with condition on survey_pk=16 only if sequential scans were off). After closer look at the plan cardinalities I thought it would be worthy to increase histogram size and I set statistics on sources(srcid) to 1000 from default 10. It fixed the plan! Sources table was around 100M so skewness in this range must have been looking odd for the planner..\nThank you for the hints!\nBest Regards,\nKrzysztof\nOn May 27, 2010, at 6:41 PM, Tom Lane wrote:\n\n> Krzysztof Nienartowicz <[email protected]> writes:\n>> Logs of the system running queries are not utterly clear, so chasing the\n>> parameters for the explosive query is not that simple (shared logs between\n>> multiple threads), but from what I see there is no difference between them\n>> and the plan looks like (without removal of irrelevant parameters this time,\n>> most of them are float8, but also bytea)\n>> [ nestloop with inner index scans over the inherited table ]\n> \n> Well, that type of plan isn't going to consume much memory or disk\n> space. What I suspect is happening is that sometimes, depending on the\n> specific parameter values called out in the query, the planner is\n> switching to another plan type that does consume lots of space (probably\n> via sort or hash temp files). The most obvious guess is that that will\n> happen when the range limits on srcid get far enough apart to make a\n> nestloop not look cheap. 
You could try experimenting with EXPLAIN and\n> different constant values to see what you get.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Fri, 28 May 2010 01:04:17 +0200", "msg_from": "Krzysztof Nienartowicz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query causing explosion of temp space with join involving\n\tpartitioning" } ]
[ { "msg_contents": "I have been Googling for answers on this for a while, and have not been able\nto find anything satisfactory.\n\nImagine that you have a stored procedure which is currently written using\nPL/PGSQL. This stored procedure performs lots of long, complex SQL queries\n(95% SELECT statements, 5% INSERT or UPDATE) and these queries are\ninterspersed with some minor math and some control logic, along with some\nlogging through the use of RAISE. Each logging statement is inside an\nIF/THEN which just checks a boolean flag to determine if logging is turned\non. The function returns a set of cursors to several different result sets.\nThe function is 50%-60% SQL queries and the rest is logging, control logic,\nand little bit of math.\n\nWould a query such as this obtain any performance improvement by being\nre-written using C?\n\nAre there specific cases where writing a function in C would be highly\ndesirable verses using PL/PGSQL (aside from simply gaining access to\nfunctionality not present in PL/PGSQL)?\n\nAre there specific cases where writing a function in C would be slower than\nwriting the equivalent in PL/PGSQL?\n\nBasically, I am looking for some guidelines based primarily on performance\nof when I should use C to write a function verses using PL/PGSQL.\n\nCan anybody quantify any of the performance differences between doing a\nparticular task in C verses doing the same thing in PL/PGSQL? For example,\nperforming a SELECT query or executing a certain number of lines of control\nlogic (primarily IF/THEN, but an occasional loop included)? How about\nassignments or basic math like\naddition/subtraction/multiplication/division?\n\nWhen executing SQL queries inside a C-based function, is there any way to\nhave all of the SQL queries pre-planned through the compilation process,\ndefinition of the function, and loading of the .so file similar to PL/PGSQL?\nWould I get better performance writing each SQL query as a stored procedure\nand then call these stored procedures from within a C-based function which\ndoes the logging, math, control logic, and builds the result sets and\ncursors?\n\nThanks in advance for any answers anyone can provide to these questions.\n\nI have been Googling for answers on this for a while, and have not been able to find anything satisfactory.\nImagine that you have a stored procedure which is currently written using PL/PGSQL. This stored procedure performs lots of long, complex SQL queries (95% SELECT statements, 5% INSERT or UPDATE) and these queries are interspersed with some minor math and some control logic, along with some logging through the use of RAISE. Each logging statement is inside an IF/THEN which just checks a boolean flag to determine if logging is turned on. The function returns a set of cursors to several different result sets. The function is 50%-60% SQL queries and the rest is logging, control logic, and little bit of math. \nWould a query such as this obtain any performance improvement by being re-written using C?Are there specific cases where writing a function in C would be highly desirable verses using PL/PGSQL (aside from simply gaining access to functionality not present in PL/PGSQL)?\nAre there specific cases where writing a function in C would be slower than writing the equivalent in PL/PGSQL?Basically, I am looking for some guidelines based primarily on performance of when I should use C to write a function verses using PL/PGSQL. 
\nCan anybody quantify any of the performance differences between doing a particular task in C verses doing the same thing in PL/PGSQL? For example, performing a SELECT query or executing a certain number of lines of control logic (primarily IF/THEN, but an occasional loop included)? How about assignments or basic math like addition/subtraction/multiplication/division? \nWhen executing SQL queries inside a C-based function, is there any way to have all of the SQL queries pre-planned through the compilation process, definition of the function, and loading of the .so file similar to PL/PGSQL? Would I get better performance writing each SQL query as a stored procedure and then call these stored procedures from within a C-based function which does the logging, math, control logic, and builds the result sets and cursors?\nThanks in advance for any answers anyone can provide to these questions.", "msg_date": "Wed, 26 May 2010 12:06:26 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Function Language Performance: C vs PL/PGSQL" }, { "msg_contents": "* Eliot Gable ([email protected]) wrote:\n> Would a query such as this obtain any performance improvement by being\n> re-written using C?\n\nI wouldn't expect the queries called by the pl/pgsql function to be much\nfaster if called through SPI from C instead. I think the question you\nneed to answer is- how long does the pl/pgsql code take vs. the overall\ntime the function takes as a whole? You could then consider that your\n'max benefit' (or pretty close to it) which could be gained by rewriting\nit in C.\n\n> Are there specific cases where writing a function in C would be highly\n> desirable verses using PL/PGSQL (aside from simply gaining access to\n> functionality not present in PL/PGSQL)?\n\nCases where a function is called over and over again, or there are loops\nwhich go through tons of data, or there's alot of data processing to be\ndone.\n\n> Are there specific cases where writing a function in C would be slower than\n> writing the equivalent in PL/PGSQL?\n\nProbably not- provided the C code is written correctly. You can\ncertainly screw that up (eg: not preparing a query in C and having PG\nreplan it every time would probably chew up any advantage C has over\npl/pgsql, in a simple function).\n\n> Basically, I am looking for some guidelines based primarily on performance\n> of when I should use C to write a function verses using PL/PGSQL.\n\nRealize that C functions have alot of other issues associated with them-\ntypically they're much larger foot-guns, for one, for another, C is an\nuntrusted language because it can do all kinds of bad things. So you\nhave to be a superuser to create them.\n\n> Can anybody quantify any of the performance differences between doing a\n> particular task in C verses doing the same thing in PL/PGSQL? For example,\n> performing a SELECT query or executing a certain number of lines of control\n> logic (primarily IF/THEN, but an occasional loop included)? How about\n> assignments or basic math like\n> addition/subtraction/multiplication/division?\n\nActually performing a SELECT through SPI vs. calling it from pl/pgsql\nprobably won't result in that much difference, presuming most of the\ntime there is in the actual query itself. Assignments, basic math,\ncontrol logic, etc, will all be faster in C. 
You need to figure out if\nthat work is taking enough time to justify the switch though.\n\n> When executing SQL queries inside a C-based function, is there any way to\n> have all of the SQL queries pre-planned through the compilation process,\n> definition of the function, and loading of the .so file similar to PL/PGSQL?\n\nYou might be able to do that when the module is loaded, but I'm not 100%\nsure.. Depends on if you can start using SPI in _PG_init.. I think\nthere was some discussion about that recently but I'm not sure what the\nanswer was.\n\n> Would I get better performance writing each SQL query as a stored procedure\n> and then call these stored procedures from within a C-based function which\n> does the logging, math, control logic, and builds the result sets and\n> cursors?\n\nUhh, I'd guess 'no' to that one.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 26 May 2010 12:18:39 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\n\tPL/PGSQL" }, { "msg_contents": "Thanks for the quick follow-up. So, you are saying that if I can do SPI in\n_PG_init, then I could prepare all my queries there and they would be\nprepared once for the entire function when it is loaded? That would\ncertainly achieve what I want. Does anybody know whether I can do SPI in\n_PG_init?\n\nThe function gets called a lot, but not in the same transaction. It is only\ncalled once per transaction.\n\nOn Wed, May 26, 2010 at 12:18 PM, Stephen Frost <[email protected]> wrote:\n\n> * Eliot Gable ([email protected]<egable%[email protected]>)\n> wrote:\n> > Would a query such as this obtain any performance improvement by being\n> > re-written using C?\n>\n> I wouldn't expect the queries called by the pl/pgsql function to be much\n> faster if called through SPI from C instead. I think the question you\n> need to answer is- how long does the pl/pgsql code take vs. the overall\n> time the function takes as a whole? You could then consider that your\n> 'max benefit' (or pretty close to it) which could be gained by rewriting\n> it in C.\n>\n> > Are there specific cases where writing a function in C would be highly\n> > desirable verses using PL/PGSQL (aside from simply gaining access to\n> > functionality not present in PL/PGSQL)?\n>\n> Cases where a function is called over and over again, or there are loops\n> which go through tons of data, or there's alot of data processing to be\n> done.\n>\n> > Are there specific cases where writing a function in C would be slower\n> than\n> > writing the equivalent in PL/PGSQL?\n>\n> Probably not- provided the C code is written correctly. You can\n> certainly screw that up (eg: not preparing a query in C and having PG\n> replan it every time would probably chew up any advantage C has over\n> pl/pgsql, in a simple function).\n>\n> > Basically, I am looking for some guidelines based primarily on\n> performance\n> > of when I should use C to write a function verses using PL/PGSQL.\n>\n> Realize that C functions have alot of other issues associated with them-\n> typically they're much larger foot-guns, for one, for another, C is an\n> untrusted language because it can do all kinds of bad things. So you\n> have to be a superuser to create them.\n>\n> > Can anybody quantify any of the performance differences between doing a\n> > particular task in C verses doing the same thing in PL/PGSQL? 
For\n> example,\n> > performing a SELECT query or executing a certain number of lines of\n> control\n> > logic (primarily IF/THEN, but an occasional loop included)? How about\n> > assignments or basic math like\n> > addition/subtraction/multiplication/division?\n>\n> Actually performing a SELECT through SPI vs. calling it from pl/pgsql\n> probably won't result in that much difference, presuming most of the\n> time there is in the actual query itself. Assignments, basic math,\n> control logic, etc, will all be faster in C. You need to figure out if\n> that work is taking enough time to justify the switch though.\n>\n> > When executing SQL queries inside a C-based function, is there any way to\n> > have all of the SQL queries pre-planned through the compilation process,\n> > definition of the function, and loading of the .so file similar to\n> PL/PGSQL?\n>\n> You might be able to do that when the module is loaded, but I'm not 100%\n> sure.. Depends on if you can start using SPI in _PG_init.. I think\n> there was some discussion about that recently but I'm not sure what the\n> answer was.\n>\n> > Would I get better performance writing each SQL query as a stored\n> procedure\n> > and then call these stored procedures from within a C-based function\n> which\n> > does the logging, math, control logic, and builds the result sets and\n> > cursors?\n>\n> Uhh, I'd guess 'no' to that one.\n>\n> Thanks,\n>\n> Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.9 (GNU/Linux)\n>\n> iEYEARECAAYFAkv9Sd8ACgkQrzgMPqB3kihj/gCdEIA8DhnvZX4Hz3tof6yzLscS\n> Lf8An2Xp8R/KXnkmp8uWg+84Cz7Pp7R3\n> =AX4g\n> -----END PGP SIGNATURE-----\n>\n>\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nThanks for the quick follow-up. So, you are saying that if I can do SPI in _PG_init, then I could prepare all my queries there and they would be prepared once for the entire function when it is loaded? That would certainly achieve what I want. Does anybody know whether I can do SPI in _PG_init?\nThe function gets called a lot, but not in the same transaction. It is only called once per transaction. On Wed, May 26, 2010 at 12:18 PM, Stephen Frost <[email protected]> wrote:\n* Eliot Gable ([email protected]) wrote:\n\n> Would a query such as this obtain any performance improvement by being\n> re-written using C?\n\nI wouldn't expect the queries called by the pl/pgsql function to be much\nfaster if called through SPI from C instead.  I think the question you\nneed to answer is- how long does the pl/pgsql code take vs. the overall\ntime the function takes as a whole?  
You could then consider that your\n'max benefit' (or pretty close to it) which could be gained by rewriting\nit in C.\n\n> Are there specific cases where writing a function in C would be highly\n> desirable verses using PL/PGSQL (aside from simply gaining access to\n> functionality not present in PL/PGSQL)?\n\nCases where a function is called over and over again, or there are loops\nwhich go through tons of data, or there's alot of data processing to be\ndone.\n\n> Are there specific cases where writing a function in C would be slower than\n> writing the equivalent in PL/PGSQL?\n\nProbably not- provided the C code is written correctly.  You can\ncertainly screw that up (eg: not preparing a query in C and having PG\nreplan it every time would probably chew up any advantage C has over\npl/pgsql, in a simple function).\n\n> Basically, I am looking for some guidelines based primarily on performance\n> of when I should use C to write a function verses using PL/PGSQL.\n\nRealize that C functions have alot of other issues associated with them-\ntypically they're much larger foot-guns, for one, for another, C is an\nuntrusted language because it can do all kinds of bad things.  So you\nhave to be a superuser to create them.\n\n> Can anybody quantify any of the performance differences between doing a\n> particular task in C verses doing the same thing in PL/PGSQL? For example,\n> performing a SELECT query or executing a certain number of lines of control\n> logic (primarily IF/THEN, but an occasional loop included)? How about\n> assignments or basic math like\n> addition/subtraction/multiplication/division?\n\nActually performing a SELECT through SPI vs. calling it from pl/pgsql\nprobably won't result in that much difference, presuming most of the\ntime there is in the actual query itself.  Assignments, basic math,\ncontrol logic, etc, will all be faster in C.  You need to figure out if\nthat work is taking enough time to justify the switch though.\n\n> When executing SQL queries inside a C-based function, is there any way to\n> have all of the SQL queries pre-planned through the compilation process,\n> definition of the function, and loading of the .so file similar to PL/PGSQL?\n\nYou might be able to do that when the module is loaded, but I'm not 100%\nsure..  Depends on if you can start using SPI in _PG_init..  I think\nthere was some discussion about that recently but I'm not sure what the\nanswer was.\n\n> Would I get better performance writing each SQL query as a stored procedure\n> and then call these stored procedures from within a C-based function which\n> does the logging, math, control logic, and builds the result sets and\n> cursors?\n\nUhh, I'd guess 'no' to that one.\n\n        Thanks,\n\n                Stephen\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\niEYEARECAAYFAkv9Sd8ACgkQrzgMPqB3kihj/gCdEIA8DhnvZX4Hz3tof6yzLscS\nLf8An2Xp8R/KXnkmp8uWg+84Cz7Pp7R3\n=AX4g\n-----END PGP SIGNATURE-----\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) 
~Marcus Tullius Cicero", "msg_date": "Wed, 26 May 2010 12:29:04 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs PL/PGSQL" }, { "msg_contents": "* Eliot Gable ([email protected]) wrote:\n> Thanks for the quick follow-up. So, you are saying that if I can do SPI in\n> _PG_init, then I could prepare all my queries there and they would be\n> prepared once for the entire function when it is loaded? That would\n> certainly achieve what I want. Does anybody know whether I can do SPI in\n> _PG_init?\n\nUnless you're using EXECUTE in your pl/pgsql, the queries in your\npl/pgsql function are already getting prepared on the first call of the\nfunction for a given backend connection.. If you're using EXECUTE in\npl/gpsql then your problem might be planning time. Moving that to C\nisn't going to change things as much as you might hope if you still have\nto plan the query every time you call it..\n\n> The function gets called a lot, but not in the same transaction. It is only\n> called once per transaction.\n\nThat's not really relevant.. Is it called alot from the same\nbackend/database connection? If so, and if you're using regular SELECT\nstatements and the like (not EXECUTE), then they're getting prepared the\nfirst time they're used and that is kept across transactions.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 26 May 2010 12:32:51 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\n\tPL/PGSQL" }, { "msg_contents": "Ah, that clears things up. Yes, the connections are more or less persistent.\nI have a connection manager which doles connections out to the worker\nthreads and reclaims them when the workers are done with them. It\ndynamically adds new connections based on load. Each worker obtains a\nconnection from the connection manager, performs a transaction which\ninvolves executing the function and pulling back the results from the\ncursors, then releases the connection back to the connection manager for\nother workers to use. So, this means that even when written in C, the SQL\nqueries will be planned and cached on each connection after the first\nexecution. So, I guess the question just becomes whether using SPI in C has\nany extra overhead verses using PL/PGSQL which might make it slower for\nperforming queries. Since PostgreSQL is written in C, I assume there is no\nsuch additional overhead. I assume that the PL/PGSQL implementation at its\nheart also uses SPI to perform those executions. Is that a fair statement?\n\nOn Wed, May 26, 2010 at 12:32 PM, Stephen Frost <[email protected]> wrote:\n\n> * Eliot Gable ([email protected]<egable%[email protected]>)\n> wrote:\n> > Thanks for the quick follow-up. So, you are saying that if I can do SPI\n> in\n> > _PG_init, then I could prepare all my queries there and they would be\n> > prepared once for the entire function when it is loaded? That would\n> > certainly achieve what I want. Does anybody know whether I can do SPI in\n> > _PG_init?\n>\n> Unless you're using EXECUTE in your pl/pgsql, the queries in your\n> pl/pgsql function are already getting prepared on the first call of the\n> function for a given backend connection.. If you're using EXECUTE in\n> pl/gpsql then your problem might be planning time. 
Moving that to C\n> isn't going to change things as much as you might hope if you still have\n> to plan the query every time you call it..\n>\n> > The function gets called a lot, but not in the same transaction. It is\n> only\n> > called once per transaction.\n>\n> That's not really relevant.. Is it called alot from the same\n> backend/database connection? If so, and if you're using regular SELECT\n> statements and the like (not EXECUTE), then they're getting prepared the\n> first time they're used and that is kept across transactions.\n>\n> Thanks,\n>\n> Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.9 (GNU/Linux)\n>\n> iEYEARECAAYFAkv9TTMACgkQrzgMPqB3kijiNQCfY/wTud+VZ4Z53Lw8cNY/N9ZD\n> 0R4AnA4diz1aptFGYXh3j8N9/k96C7/S\n> =6oz+\n> -----END PGP SIGNATURE-----\n>\n>\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nAh, that clears things up. Yes, the connections are more or less persistent. I have a connection manager which doles connections out to the worker threads and reclaims them when the workers are done with them. It dynamically adds new connections based on load. Each worker obtains a connection from the connection manager, performs a transaction which involves executing the function and pulling back the results from the cursors, then releases the connection back to the connection manager for other workers to use. So, this means that even when written in C, the SQL queries will be planned and cached on each connection after the first execution. So, I guess the question just becomes whether using SPI in C has any extra overhead verses using PL/PGSQL which might make it slower for performing queries. Since PostgreSQL is written in C, I assume there is no such additional overhead. I assume that the PL/PGSQL implementation at its heart also uses SPI to perform those executions. Is that a fair statement?\nOn Wed, May 26, 2010 at 12:32 PM, Stephen Frost <[email protected]> wrote:\n* Eliot Gable ([email protected]) wrote:\n> Thanks for the quick follow-up. So, you are saying that if I can do SPI in\n> _PG_init, then I could prepare all my queries there and they would be\n> prepared once for the entire function when it is loaded? That would\n> certainly achieve what I want. Does anybody know whether I can do SPI in\n> _PG_init?\n\nUnless you're using EXECUTE in your pl/pgsql, the queries in your\npl/pgsql function are already getting prepared on the first call of the\nfunction for a given backend connection..  If you're using EXECUTE in\npl/gpsql then your problem might be planning time.  Moving that to C\nisn't going to change things as much as you might hope if you still have\nto plan the query every time you call it..\n\n> The function gets called a lot, but not in the same transaction. It is only\n> called once per transaction.\n\nThat's not really relevant..  Is it called alot from the same\nbackend/database connection?  
If so, and if you're using regular SELECT\nstatements and the like (not EXECUTE), then they're getting prepared the\nfirst time they're used and that is kept across transactions.\n\n        Thanks,\n\n                Stephen\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\niEYEARECAAYFAkv9TTMACgkQrzgMPqB3kijiNQCfY/wTud+VZ4Z53Lw8cNY/N9ZD\n0R4AnA4diz1aptFGYXh3j8N9/k96C7/S\n=6oz+\n-----END PGP SIGNATURE-----\n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero", "msg_date": "Wed, 26 May 2010 12:41:23 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs PL/PGSQL" }, { "msg_contents": "* Eliot Gable ([email protected]) wrote:\n> Since PostgreSQL is written in C, I assume there is no\n> such additional overhead. I assume that the PL/PGSQL implementation at its\n> heart also uses SPI to perform those executions. Is that a fair statement?\n\nRight, but I also wouldn't expect a huge improvment either, unless\nyou're calling these queries a ton, or the queries that you're calling\nfrom the pl/pgsql are pretty short-lived.\n\nDon't get me wrong, C is going to be faster, but it depends on exactly\nwhat's going on as to if it's going to be an overall improvment of, say,\n10%, or a 10-fold improvment. :)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 26 May 2010 12:47:16 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\n\tPL/PGSQL" }, { "msg_contents": "On 5/26/10 9:47 AM, Stephen Frost wrote:\n> * Eliot Gable ([email protected]) wrote:\n>> Since PostgreSQL is written in C, I assume there is no\n>> such additional overhead. I assume that the PL/PGSQL implementation at its\n>> heart also uses SPI to perform those executions. Is that a fair statement?\n>\n> Right, but I also wouldn't expect a huge improvment either, unless\n> you're calling these queries a ton, or the queries that you're calling\n> from the pl/pgsql are pretty short-lived.\n>\n> Don't get me wrong, C is going to be faster, but it depends on exactly\n> what's going on as to if it's going to be an overall improvment of, say,\n> 10%, or a 10-fold improvment. :)\n\nOr a 0.1% improvement, which is more likely. Or that the PL/PGSQL version is even faster than the C version, because if you do any string regexp in your function, Perl has extremely efficient algorithms, probably better than you have time to write in C.\n\nWe use Perl extensively and have never had any complaints. The database activity completely dominates all queries, and the performance of Perl has never even been noticable.\n\nWe use a C functions for a few things, and it is a big nuisance. Every time you upgrade Postgres or your OS, there's a chance the recompile will fail because of changed header files. Any bugs in your code crash Postgres itself. 
We avoid C as much as possible (and I love C, been doing it since 1984).\n\nCraig\n", "msg_date": "Wed, 26 May 2010 10:09:41 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\tPL/PGSQL" }, { "msg_contents": "On Wed, May 26, 2010 at 12:41 PM, Eliot Gable\n<[email protected]> wrote:\n> Ah, that clears things up. Yes, the connections are more or less persistent.\n> I have a connection manager which doles connections out to the worker\n> threads and reclaims them when the workers are done with them. It\n> dynamically adds new connections based on load. Each worker obtains a\n> connection from the connection manager, performs a transaction which\n> involves executing the function and pulling back the results from the\n> cursors, then releases the connection back to the connection manager for\n> other workers to use. So, this means that even when written in C, the SQL\n> queries will be planned and cached on each connection after the first\n> execution. So, I guess the question just becomes whether using SPI in C has\n> any extra overhead verses using PL/PGSQL which might make it slower for\n> performing queries. Since PostgreSQL is written in C, I assume there is no\n> such additional overhead. I assume that the PL/PGSQL implementation at its\n> heart also uses SPI to perform those executions. Is that a fair statement?\n\nAt best, if you are a ninja with the marginally documented backend\napi, you will create code that goes about as fast as your pl/pgsql\nfunction for 10 times the amount of input work, unless there are heavy\namounts of 'other than sql' code in your function. The reason to\nwrite C in the backend is:\n\n*) Interface w/3rd party libraries w/C linkage\n*) Do things that are illegal in regular SQL (write files, etc)\n*) Make custom types\n\nThings like that. If your pl/pgsql function is running slow, it's\nprobably better to look at what's going on there.\n\nmerlin\n", "msg_date": "Fri, 28 May 2010 08:22:22 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs PL/PGSQL" }, { "msg_contents": "On Fri, 28 May 2010, Merlin Moncure wrote:\n> At best, if you are a ninja with the marginally documented backend\n> api, you will create code that goes about as fast as your pl/pgsql\n> function for 10 times the amount of input work, unless there are heavy\n> amounts of 'other than sql' code in your function. The reason to\n> write C in the backend is:\n>\n> *) Interface w/3rd party libraries w/C linkage\n> *) Do things that are illegal in regular SQL (write files, etc)\n> *) Make custom types\n\nThe major case I found when writing pl/pgsql was when trying to build \narrays row by row. AFAIK when I tried it, adding a row to an array caused \nthe whole array to be copied, which put a bit of a damper on performance.\n\nMatthew\n\n-- \n \"The problem with defending the purity of the English language is that\n English is about as pure as a cribhouse whore. 
We don't just borrow words;\n on occasion, English has pursued other languages down alleyways to beat\n them unconscious and rifle their pockets for new vocabulary.\" - James Nicoll\n", "msg_date": "Tue, 1 Jun 2010 13:47:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\n PL/PGSQL" }, { "msg_contents": "* Matthew Wakeling ([email protected]) wrote:\n> The major case I found when writing pl/pgsql was when trying to build \n> arrays row by row. AFAIK when I tried it, adding a row to an array caused \n> the whole array to be copied, which put a bit of a damper on performance.\n\nUsing the built-ins now available in 8.4 (array_agg), that copying\ndoesn't happen any more.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 1 Jun 2010 08:54:53 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\n\tPL/PGSQL" }, { "msg_contents": "On Tue, 1 Jun 2010, Stephen Frost wrote:\n> * Matthew Wakeling ([email protected]) wrote:\n>> The major case I found when writing pl/pgsql was when trying to build\n>> arrays row by row. AFAIK when I tried it, adding a row to an array caused\n>> the whole array to be copied, which put a bit of a damper on performance.\n>\n> Using the built-ins now available in 8.4 (array_agg), that copying\n> doesn't happen any more.\n\nThanks. I had wondered if that had been improved.\n\nMatthew\n\n-- \n Our riverbanks and seashores have a beauty all can share, provided\n there's at least one boot, three treadless tyres, a half-eaten pork\n pie, some oil drums, an old felt hat, a lorry-load of tar blocks,\n and a broken bedstead there. -- Flanders and Swann\n", "msg_date": "Tue, 1 Jun 2010 13:59:35 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs\n PL/PGSQL" }, { "msg_contents": "On Tue, Jun 1, 2010 at 8:59 AM, Matthew Wakeling <[email protected]> wrote:\n> On Tue, 1 Jun 2010, Stephen Frost wrote:\n>>\n>> * Matthew Wakeling ([email protected]) wrote:\n>>>\n>>> The major case I found when writing pl/pgsql was when trying to build\n>>> arrays row by row. AFAIK when I tried it, adding a row to an array caused\n>>> the whole array to be copied, which put a bit of a damper on performance.\n>>\n>> Using the built-ins now available in 8.4 (array_agg), that copying\n>> doesn't happen any more.\n>\n> Thanks. I had wondered if that had been improved.\n\neven better is array(query) -- which has been around for a while. not\ntoo many people know about it because it's syntactically weird but\nit's the preferred way to build arrays when you don't need true\naggregation (group by and such).\n\ngenerally speaking, concatenation of any kind in loops should be\navoided in pl/pgsql. in fact, whenever writing pl/pgsql, it's all to\neasy to over-use the loop construct...every time you're looping it's\nalways good to ask yourself: 'can this be done in a query?'.\n\nmerlin\n", "msg_date": "Tue, 1 Jun 2010 12:28:25 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Function Language Performance: C vs PL/PGSQL" } ]
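The array-building advice in this thread can be made concrete with a short PL/pgSQL sketch. The table items(id int) is hypothetical and used only for illustration; the first function shows the append-per-row pattern warned about above, the second pushes the work into a single query (ARRAY(subquery) has been available for a long time, array_agg() since 8.4).

    -- Hypothetical table, for illustration only: CREATE TABLE items (id int);

    -- Row-by-row append: on older releases each concatenation can rebuild the whole array.
    CREATE OR REPLACE FUNCTION collect_ids_loop() RETURNS int[] AS $$
    DECLARE
        acc int[] := '{}';
        r   record;
    BEGIN
        FOR r IN SELECT id FROM items LOOP
            acc := acc || r.id;
        END LOOP;
        RETURN acc;
    END;
    $$ LANGUAGE plpgsql;

    -- Let one query build the array instead.
    CREATE OR REPLACE FUNCTION collect_ids_query() RETURNS int[] AS $$
    BEGIN
        RETURN ARRAY(SELECT id FROM items ORDER BY id);
        -- or, on 8.4+ with true aggregation: RETURN (SELECT array_agg(id) FROM items);
    END;
    $$ LANGUAGE plpgsql;

The same point generalizes: when a PL/pgSQL loop does nothing but accumulate rows, a single SQL statement usually does the job with far less copying.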
[ { "msg_contents": "Dear all and Tom,\n\n Recently my company’s postgres DB server sluggish suddenly with a\nhight Context-switching value as below:\n\n \n\n2010-04-07 04:03:15 procs memory swap io\nsystem cpu\n\n2010-04-07 04:03:15 r b swpd free buff cache si so bi bo\nin cs us sy id wa\n\n2010-04-07 14:04:27 3 0 0 2361272 272684 3096148 0 0 3\n1445 973 14230 7 8 84 0\n\n2010-04-07 14:05:27 2 0 0 2361092 272684 3096220 0 0 3\n1804 1029 31852 8 10 81 1\n\n2010-04-07 14:06:27 1 0 0 2362236 272684 3096564 0 0 3\n1865 1135 19689 9 9 81 0\n\n2010-04-07 14:07:27 1 0 0 2348400 272720 3101836 0 0 3\n1582 1182 149461 15 17 67 0\n\n2010-04-07 14:08:27 3 0 0 2392028 272840 3107600 0 0 3\n3093 1275 203196 24 23 53 1\n\n2010-04-07 14:09:27 3 1 0 2386224 272916 3107960 0 0 3\n2486 1331 193299 26 22 52 0\n\n2010-04-07 14:10:27 34 0 0 2332320 272980 3107944 0 0 3\n1692 1082 214309 24 22 54 0\n\n2010-04-07 14:11:27 1 0 0 2407432 273028 3108092 0 0 6\n2770 1540 76643 29 13 57 1\n\n2010-04-07 14:12:27 9 0 0 2358968 273104 3108388 0 0 7\n2639 1466 10603 22 6 72 1\n\n \n\n \n\n I have read this problem about ““Tom Lane” Workload” . And I found\nmy company’s DB is a Xeon MP server.\n\nI am going to have a test to confirm it.\n\n \n\nIf anybody have the test case “Tom Lane's Xeon CS test case” ? \n\nThank you!\n\n \n\n \n\nMy postgres version: 8.1.3; \n\nMy OS version: Linux version 2.4.21-47.Elsmp((Red Hat Linux 3.2.3-54)\n\nMy CPU:\n\nprocessor : 7\n\nvendor_id : GenuineIntel\n\ncpu family : 15\n\nmodel : 6\n\nmodel name : Intel(R) Xeon(TM) CPU 3.40GHz\n\nstepping : 8\n\ncpu MHz : 3400.262\n\ncache size : 1024 KB\n\nphysical id : 1\n\n \n\n \n\n \n\nBest regards,\n\nRay Huang\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nDear all and Tom,\n    Recently my company’s  postgres DB server sluggish suddenly with a hight\nContext-switching value as below:\n \n2010-04-07 04:03:15\nprocs                     \nmemory     \nswap          io     system        \ncpu\n2010-04-07 04:03:15 \nr  b   swpd   free   buff \ncache   si   so    bi   \nbo   in    cs us sy id wa\n2010-04-07 14:04:27 \n3  0      0 2361272 272684\n3096148    0    0     3 \n1445  973 14230  7  8 84  0\n2010-04-07 14:05:27 \n2  0      0 2361092 272684\n3096220    0    0     3 \n1804 1029 31852  8 10 81  1\n2010-04-07 14:06:27 \n1  0      0 2362236 272684\n3096564    0    0     3 \n1865 1135 19689  9  9 81  0\n2010-04-07 14:07:27 \n1  0      0 2348400 272720\n3101836    0    0     3 \n1582 1182 149461 15 17 67  0\n2010-04-07 14:08:27 \n3  0      0 2392028 272840\n3107600    0    0     3 \n3093 1275 203196 24 23 53  1\n2010-04-07 14:09:27 \n3  1      0 2386224 272916\n3107960    0    0     3 \n2486 1331 193299 26 22 52  0\n2010-04-07 14:10:27 34 \n0      0 2332320 272980 3107944   \n0    0     3  1692 1082 214309 24 22\n54  0\n2010-04-07 14:11:27 \n1  0      0 2407432 273028\n3108092    0    0     6 \n2770 1540 76643 29 13 57  1\n2010-04-07 14:12:27 \n9  0      0 2358968 273104\n3108388    0    0     7 \n2639 1466 10603 22  6 72  1\n \n \n    I have\nread this problem  about ““Tom Lane” Workload” . And I found\nmy company’s DB  is a Xeon MP server.\nI am going\nto have a test to confirm it.\n \nIf anybody\nhave the test case “Tom Lane's Xeon\nCS test case” ?  
\nThank you!\n    \n \nMy postgres\nversion: 8.1.3; \nMy OS version:\nLinux version 2.4.21-47.Elsmp((Red Hat Linux\n3.2.3-54)\nMy CPU:\nprocessor      \n: 7\nvendor_id      \n: GenuineIntel\ncpu\nfamily      : 15\nmodel          \n: 6\nmodel\nname      : Intel(R) Xeon(TM) CPU 3.40GHz\nstepping       \n: 8\ncpu\nMHz         : 3400.262\ncache\nsize      : 1024 KB\nphysical\nid     : 1\n \n \n \nBest regards,\nRay Huang", "msg_date": "Thu, 27 May 2010 15:27:41 +0800", "msg_from": "=?gb2312?B?u8bTwM7A?= <[email protected]>", "msg_from_op": true, "msg_subject": "About Tom Lane's Xeon CS test case" }, { "msg_contents": "=?gb2312?B?u8bTwM7A?= <[email protected]> writes:\n> My postgres version: 8.1.3; \n\nYou do realize that version was obsoleted four years ago last week?\n\nIf you're encountering multiprocessor performance problems you\nreally need to get onto 8.3.x or later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 May 2010 10:09:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About Tom Lane's Xeon CS test case " }, { "msg_contents": "Tom ,\n\tThank you for your reply!\n\tI am encountering a context-switch storm problem .\n\tWe got the pg_locks data when context-switch value over 200K/sec\n\tWe fount that the value of CS relate to the count of\nExclutivelocks .\n\tAnd I donnt know how to make the problem appear again by testing to\ncollect evidence to update postgreSQL .\n\tSo I want to redo your testing for that.\n\nThank you!\nBest regards,\nRay Huang\n\t\n\n-----邮件原件-----\n发件人: Tom Lane [mailto:[email protected]] \n发送时间: 2010年5月27日 22:10\n收件人: 黄永卫\n抄送: [email protected]\n主题: Re: [PERFORM] About Tom Lane's Xeon CS test case \n\n=?gb2312?B?u8bTwM7A?= <[email protected]> writes:\n> My postgres version: 8.1.3; \n\nYou do realize that version was obsoleted four years ago last week?\n\nIf you're encountering multiprocessor performance problems you\nreally need to get onto 8.3.x or later.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 28 May 2010 12:11:31 +0800", "msg_from": "=?gb2312?B?u8bTwM7A?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?gb2312?B?tPC4tDogW1BFUkZPUk1dIEFib3V0IFRvbSBMYW5lJ3MgWGVvbiBDUw==?=\n\t=?gb2312?B?IHRlc3QgY2FzZSA=?=" } ]
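For anyone chasing a similar context-switch storm, a cheap thing to sample alongside vmstat is a per-mode summary of pg_locks; the columns used below exist on 8.1 as well as on current releases. This is only a monitoring sketch, not a fix -- the fix suggested above is the version upgrade.

    -- Run every few seconds (cron or a shell loop) and record a timestamp with each
    -- sample so the counts can be lined up against the vmstat "cs" column.
    SELECT mode, granted, count(*) AS locks
    FROM   pg_locks
    GROUP  BY mode, granted
    ORDER  BY count(*) DESC;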
[ { "msg_contents": "Hi,\nI have two similar queries that calculate \"group by\" summaries over a huge table (74.6mil rows).\nThe only difference between two queries is the number of columns that group by is performed on.\nThis difference is causing two different plans which are vary so very much in performance.\nPostgres is 8.4.4. on Linux 64bit. Work_mem is 4GB for both queries and effective_cache_size = 30GB (server has 72GB RAM).\nBoth queries are 100% time on CPU (data is all in buffer cache or OS cache).\nMy questions are:\n\n1) Is there a way to force plan that uses hashaggregate for the second query?\n\n2) I am not trying to achieve any particular execution time for the query, but I noticed that when \"disk sort\" kicks in (and that happens eventually once the dataset is large enough) the query drastically slows down, even if there is no physical IO going on. I wonder if it's possible to have predictable performance rather than sudden drop.\n\n3) Why hashAggregate plan uses so much less memory (work_mem) than the plan with groupAggregate/sort? HashAggregate plan for Query1 works even with work_mem='2GB'; The second plan decides to use disk sort even with work_mem='4GB'. Why sort is so memory greedy? Are there any plans to address the sorting memory efficiency issues?\n\nThank you!\n\nQuery1:\nexplain analyze\nsmslocate_edw-# SELECT\nsmslocate_edw-# month_code,\nsmslocate_edw-# short_code,\nsmslocate_edw-# gateway_carrier_id,\nsmslocate_edw-# mp_code,\nsmslocate_edw-# partner_id,\nsmslocate_edw-# master_company_id,\nsmslocate_edw-# ad_id,\nsmslocate_edw-# sc_name_id,\nsmslocate_edw-# sc_sports_league_id,\nsmslocate_edw-# sc_sports_alert_type,\nsmslocate_edw-# al_widget_id,\nsmslocate_edw-# keyword_id,\nsmslocate_edw-# cp_id,\nsmslocate_edw-# sum(coalesce(message_count,0)), -- message_cnt\nsmslocate_edw-# sum(coalesce(message_sellable_count,0)), -- message_sellable_cnt\nsmslocate_edw-# sum(coalesce(ad_cost_sum,0)), -- ad_cost_sum\nsmslocate_edw-# NULL::int4, --count(distinct device_number), -- unique_user_cnt\nsmslocate_edw-# NULL::int4, --count(distinct case when message_sellable_count <> 0 then device_number end), -- unique_user_sellable_cnt\nsmslocate_edw-# NULL, -- unique_user_first_time_cnt\nsmslocate_edw-# 1, -- ALL\nsmslocate_edw-# CURRENT_TIMESTAMP\nsmslocate_edw-# from staging.agg_phones_monthly_snapshot\nsmslocate_edw-# group by\nsmslocate_edw-# month_code,\nsmslocate_edw-# short_code,\nsmslocate_edw-# gateway_carrier_id,\nsmslocate_edw-# mp_code,\nsmslocate_edw-# partner_id,\nsmslocate_edw-# master_company_id,\nsmslocate_edw-# ad_id,\nsmslocate_edw-# sc_name_id,\nsmslocate_edw-# sc_sports_league_id,\nsmslocate_edw-# sc_sports_alert_type,\nsmslocate_edw-# al_widget_id,\nsmslocate_edw-# keyword_id,\nsmslocate_edw-# cp_id\nsmslocate_edw-# ;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------\n HashAggregate (cost=5065227.32..5214455.48 rows=7461408 width=64) (actual time=183289.883..185213.565 rows=2240716 loops=1)\n -> Append (cost=0.00..2080664.40 rows=74614073 width=64) (actual time=0.030..58952.749 rows=74614237 loops=1)\n -> Seq Scan on agg_phones_monthly (cost=0.00..11.50 rows=1 width=102) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (month_code = '2010M04'::bpchar)\n -> Seq Scan on agg_phones_monthly_2010m04 agg_phones_monthly (cost=0.00..2080652.90 rows=74614072 width=64) (actual time=0.027..42713.387 rows=74614237 
loops=1)\n Filter: (month_code = '2010M04'::bpchar)\n Total runtime: 185519.997 ms\n(7 rows)\n\nTime: 185684.396 ms\n\nQuery2:\nexplain analyze\nsmslocate_edw-# SELECT\nsmslocate_edw-# month_code,\nsmslocate_edw-# gateway_carrier_id,\nsmslocate_edw-# sum(coalesce(message_count,0)), -- message_cnt\nsmslocate_edw-# sum(coalesce(message_sellable_count,0)), -- message_sellable_cnt\nsmslocate_edw-# sum(coalesce(ad_cost_sum,0)), -- ad_cost_sum\nsmslocate_edw-# count(distinct device_number), -- unique_user_cnt\nsmslocate_edw-# count(distinct case when message_sellable_count <> 0 then device_number end), -- unique_user_sellable_cnt\nsmslocate_edw-# NULL, -- unique_user_first_time_cnt\nsmslocate_edw-# 15, -- CARRIER\nsmslocate_edw-# CURRENT_TIMESTAMP\nsmslocate_edw-# from staging.agg_phones_monthly_snapshot\nsmslocate_edw-# group by\nsmslocate_edw-# month_code,\nsmslocate_edw-# gateway_carrier_id\nsmslocate_edw-# ;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------\n GroupAggregate (cost=13877783.42..15371164.88 rows=40000 width=37) (actual time=1689525.151..2401444.441 rows=116 loops=1)\n -> Sort (cost=13877783.42..14064318.61 rows=74614073 width=37) (actual time=1664233.243..1716472.931 rows=74614237 loops=1)\n Sort Key: dw.agg_phones_monthly.month_code, dw.agg_phones_monthly.gateway_carrier_id\n Sort Method: external merge Disk: 3485424kB\n -> Result (cost=0.00..2080664.40 rows=74614073 width=37) (actual time=0.008..84421.927 rows=74614237 loops=1)\n -> Append (cost=0.00..2080664.40 rows=74614073 width=37) (actual time=0.007..64724.486 rows=74614237 loops=1)\n -> Seq Scan on agg_phones_monthly (cost=0.00..11.50 rows=1 width=574) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (month_code = '2010M04'::bpchar)\n -> Seq Scan on agg_phones_monthly_2010m04 agg_phones_monthly (cost=0.00..2080652.90 rows=74614072 width=37) (actual time=0.005..48199.938 rows=74614237 loops=1)\n Filter: (month_code = '2010M04'::bpchar)\n Total runtime: 2402137.632 ms\n(11 rows)\n\nTime: 2402139.642 ms\n\n\n\n\n\n\n\n\n\n\n\nHi,\nI have two similar queries that calculate \"group by\"\nsummaries over a huge table (74.6mil rows).\nThe only difference between two queries is the number of\ncolumns that group by is performed on.\nThis difference is causing two different plans which are\nvary so very much in performance.\nPostgres is 8.4.4. on Linux 64bit. Work_mem is 4GB for both\nqueries and effective_cache_size = 30GB (server has 72GB RAM).\nBoth queries are 100% time on CPU (data is all in buffer\ncache or OS cache).\nMy questions are:\n1)     \nIs there a way to force plan that uses hashaggregate\nfor the second query?\n2)     \nI am not trying to achieve any particular execution\ntime for the query, but I noticed that when \"disk sort\" kicks\nin  (and that happens eventually once the dataset is large enough) the\nquery drastically slows down, even if there is no physical IO going on. I\nwonder if it's possible to have predictable performance rather than sudden\ndrop.\n3)     \nWhy hashAggregate plan uses so much less memory (work_mem)\nthan the plan with groupAggregate/sort? HashAggregate plan for Query1 works\neven with work_mem='2GB'; The second plan decides to use disk sort even with work_mem='4GB'.\nWhy sort is so memory greedy? 
Are there any plans to address the sorting memory\nefficiency issues?\n \nThank you!\n \nQuery1:\nexplain analyze\nsmslocate_edw-#   SELECT\nsmslocate_edw-#     month_code, \nsmslocate_edw-#     short_code,\nsmslocate_edw-#     gateway_carrier_id,\nsmslocate_edw-#     mp_code,\nsmslocate_edw-#     partner_id,\nsmslocate_edw-#     master_company_id,\nsmslocate_edw-#     ad_id,\nsmslocate_edw-#     sc_name_id,\nsmslocate_edw-#     sc_sports_league_id,\nsmslocate_edw-#    \nsc_sports_alert_type,\nsmslocate_edw-#     al_widget_id,\nsmslocate_edw-#     keyword_id,  \nsmslocate_edw-#     cp_id,\nsmslocate_edw-#    \nsum(coalesce(message_count,0)),         \n-- message_cnt\nsmslocate_edw-#    \nsum(coalesce(message_sellable_count,0)), -- message_sellable_cnt\nsmslocate_edw-#    \nsum(coalesce(ad_cost_sum,0)),           \n-- ad_cost_sum\nsmslocate_edw-#     NULL::int4,\n--count(distinct\ndevice_number),           --\nunique_user_cnt\nsmslocate_edw-#     NULL::int4,\n--count(distinct case when message_sellable_count <> 0 then device_number\nend), -- unique_user_sellable_cnt\nsmslocate_edw-#    \nNULL,                                   \n-- unique_user_first_time_cnt\nsmslocate_edw-#     1,  -- ALL\nsmslocate_edw-#     CURRENT_TIMESTAMP\nsmslocate_edw-#   from\nstaging.agg_phones_monthly_snapshot\nsmslocate_edw-#   group by\nsmslocate_edw-#     month_code, \nsmslocate_edw-#     short_code,\nsmslocate_edw-#     gateway_carrier_id,\nsmslocate_edw-#     mp_code,\nsmslocate_edw-#     partner_id,\nsmslocate_edw-#     master_company_id,\nsmslocate_edw-#     ad_id,\nsmslocate_edw-#     sc_name_id,\nsmslocate_edw-#     sc_sports_league_id,\nsmslocate_edw-#    \nsc_sports_alert_type,\nsmslocate_edw-#     al_widget_id,\nsmslocate_edw-#     keyword_id,  \nsmslocate_edw-#     cp_id\nsmslocate_edw-# ;\n                                                                                \nQUERY PLAN                                                                \n\n               \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------\n HashAggregate  (cost=5065227.32..5214455.48\nrows=7461408 width=64) (actual time=183289.883..185213.565 rows=2240716\nloops=1)\n   ->  Append  (cost=0.00..2080664.40\nrows=74614073 width=64) (actual time=0.030..58952.749 rows=74614237 loops=1)\n         -> \nSeq Scan on agg_phones_monthly  (cost=0.00..11.50 rows=1 width=102)\n(actual time=0.002..0.002 rows=0 loops=1)\n              \nFilter: (month_code = '2010M04'::bpchar)\n         -> \nSeq Scan on agg_phones_monthly_2010m04 agg_phones_monthly \n(cost=0.00..2080652.90 rows=74614072 width=64) (actual time=0.027..42713.387\nrows=74614237 loops=1)\n               Filter:\n(month_code = '2010M04'::bpchar)\n Total runtime: 185519.997 ms\n(7 rows)\n \nTime: 185684.396 ms\n \nQuery2:\nexplain analyze\nsmslocate_edw-#     SELECT\nsmslocate_edw-#     month_code, \nsmslocate_edw-#     gateway_carrier_id,\nsmslocate_edw-#     sum(coalesce(message_count,0)),         \n-- message_cnt\nsmslocate_edw-#    \nsum(coalesce(message_sellable_count,0)), -- message_sellable_cnt\nsmslocate_edw-#    \nsum(coalesce(ad_cost_sum,0)),           \n-- ad_cost_sum\nsmslocate_edw-#     count(distinct\ndevice_number),           --\nunique_user_cnt\nsmslocate_edw-#     count(distinct case\nwhen message_sellable_count <> 0 then device_number end), --\nunique_user_sellable_cnt\nsmslocate_edw-#    \nNULL,                                   \n-- 
unique_user_first_time_cnt\nsmslocate_edw-#     15, -- CARRIER\nsmslocate_edw-#     CURRENT_TIMESTAMP\nsmslocate_edw-#   from\nstaging.agg_phones_monthly_snapshot\nsmslocate_edw-#   group by\nsmslocate_edw-#     month_code, \nsmslocate_edw-#     gateway_carrier_id\nsmslocate_edw-# ;\n                                                                                      \nQUERY\nPLAN                                                          \n\n                           \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------\n GroupAggregate  (cost=13877783.42..15371164.88\nrows=40000 width=37) (actual time=1689525.151..2401444.441 rows=116 loops=1)\n   ->  Sort  (cost=13877783.42..14064318.61\nrows=74614073 width=37) (actual time=1664233.243..1716472.931 rows=74614237\nloops=1)\n         Sort Key:\ndw.agg_phones_monthly.month_code, dw.agg_phones_monthly.gateway_carrier_id\n         Sort\nMethod:  external merge  Disk: 3485424kB\n         -> \nResult  (cost=0.00..2080664.40 rows=74614073 width=37) (actual\ntime=0.008..84421.927 rows=74614237 loops=1)\n              \n->  Append  (cost=0.00..2080664.40 rows=74614073 width=37) (actual\ntime=0.007..64724.486 rows=74614237 loops=1)\n                    \n->  Seq Scan on agg_phones_monthly  (cost=0.00..11.50 rows=1\nwidth=574) (actual time=0.000..0.000 rows=0 loops=1)\n                          \nFilter: (month_code = '2010M04'::bpchar)\n                    \n->  Seq Scan on agg_phones_monthly_2010m04 agg_phones_monthly \n(cost=0.00..2080652.90 rows=74614072 width=37) (actual time=0.005..48199.938\nrows=74614237 loops=1)\n                          \nFilter: (month_code = '2010M04'::bpchar)\n Total runtime: 2402137.632 ms\n(11 rows)\n \nTime: 2402139.642 ms", "msg_date": "Thu, 27 May 2010 12:34:15 -0700", "msg_from": "Slava Moudry <[email protected]>", "msg_from_op": true, "msg_subject": "how to force hashaggregate plan?" }, { "msg_contents": "On Thu, May 27, 2010 at 3:34 PM, Slava Moudry <[email protected]> wrote:\n> 1)      Is there a way to force plan that uses hashaggregate for the second\n> query?\n\nNo, although if you crank work_mem up high enough you should get it, I think.\n\n> 2)      I am not trying to achieve any particular execution time for the\n> query, but I noticed that when \"disk sort\" kicks in  (and that happens\n> eventually once the dataset is large enough) the query drastically slows\n> down, even if there is no physical IO going on. I wonder if it's possible to\n> have predictable performance rather than sudden drop.\n\nNo. The planner has to choose one algorithm or the other - there's\nnot really a way it can do a mix.\n\n> 3)      Why hashAggregate plan uses so much less memory (work_mem) than the\n> plan with groupAggregate/sort? HashAggregate plan for Query1 works even with\n> work_mem='2GB'; The second plan decides to use disk sort even with\n> work_mem='4GB'. Why sort is so memory greedy? Are there any plans to address\n> the sorting memory efficiency issues?\n\nWell, if you select more columns, then the tuples that are buffered in\nmemory take up more space, right? 
Twice the columns = twice the\nmemory.\n\nWhat I'd be curious to know is how accurate the memory estimates are -\nfigure out what the lowest value of work_mem needed to get a\nparticular plan is and then compare that to the amount of memory used\nwhen you execute the query...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 4 Jun 2010 21:40:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to force hashaggregate plan?" } ]
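A hedged sketch of the kind of session-local experiment question 1 calls for, without touching the server configuration. The SET LOCAL values are illustrative only, and the count(DISTINCT ...) columns from Query2 are deliberately omitted: DISTINCT aggregates are what keep the planner on a sort-based GroupAggregate, since hashed aggregation cannot evaluate them (they need their input sorted within each group).

    BEGIN;
    SET LOCAL work_mem    = '2GB';   -- per-operation memory budget, this transaction only
    SET LOCAL enable_sort = off;     -- planner experiment only, never a production setting
    EXPLAIN ANALYZE
    SELECT month_code,
           gateway_carrier_id,
           sum(coalesce(message_count,0)),
           sum(coalesce(message_sellable_count,0)),
           sum(coalesce(ad_cost_sum,0))
    FROM   staging.agg_phones_monthly_snapshot
    GROUP  BY month_code, gateway_carrier_id;
    ROLLBACK;

If the hashed plan shows up here and is fast, one practical workaround for the full report is to compute the DISTINCT device counts in a separate pass and join the two results.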
[ { "msg_contents": "Hi all,\n\nAre there any HP Smart Array disk controller users running linux that\nhave experimented with the new scsi based hpsa driver over the block\nbased cciss driver? I have a p800 controller that I'll try out soon.\n(I hope.)\n\nRegards,\nMark\n", "msg_date": "Thu, 27 May 2010 18:24:17 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": true, "msg_subject": "hp hpsa vs cciss driver" } ]
[ { "msg_contents": "I'm using PostgreSQL 9.0 beta 1. I've got the following table definition:\n\n# \\d parts_2576\n Table \"public.parts_2576\"\n Column | Type |\nModifiers\n------------+------------------------+-----------------------------------------------------------\n ID | bigint | not null default\nnextval('\"parts_2576_ID_seq\"'::regclass)\n binaryID | character varying(32) | not null default ''::character varying\n messageID | character varying(255) | not null default ''::character varying\n subject | character varying(512) | not null default ''::character varying\n fromname | character varying(512) | not null default ''::character varying\n date | bigint | default 0::bigint\n partnumber | bigint | not null default 0::bigint\n size | bigint | not null default 0::bigint\nIndexes:\n \"parts_2576_pkey\" PRIMARY KEY, btree (\"ID\")\n \"binaryID_2576_idx\" btree (\"binaryID\")\n \"date_2576_idx\" btree (date)\n \"parts_2576_binaryID_idx\" btree (\"binaryID\")\n\nIf I run this:\n\nEXPLAIN ANALYZE SELECT SUM(\"size\") AS totalsize, \"binaryID\", COUNT(*)\nAS parttotal, MAX(\"subject\") AS subject, MAX(\"fromname\") AS fromname,\nMIN(\"date\") AS mindate FROM parts_2576 WHERE \"binaryID\" >\n'1082fa89fe499741b8271f9c92136f44' GROUP BY \"binaryID\" ORDER BY\n\"binaryID\" LIMIT 400;\n\nI get this:\n\nLimit (cost=0.00..316895.11 rows=400 width=211) (actual\ntime=3.880..1368.936 rows=400 loops=1)\n -> GroupAggregate (cost=0.00..41843621.95 rows=52817 width=211)\n(actual time=3.872..1367.048 rows=400 loops=1)\n -> Index Scan using \"binaryID_2576_idx\" on parts_2576\n(cost=0.00..41683754.21 rows=10578624 width=211) (actual\ntime=0.284..130.756 rows=19954 loops=1)\n Index Cond: ((\"binaryID\")::text >\n'1082fa89fe499741b8271f9c92136f44'::text)\n Total runtime: 1370.140 ms\n\nThe first thing which strikes me is how the GroupAggregate step shows\nit got the 400 rows which matches the limit, but it estimated 52,817\nrows. Shouldn't it have already known it would be 400?\n\nI've got an index on \"binaryID\" (actually, I appear to have 2), but I\nsuspect it's not really working as intended as it's doing an\nevaluation on its value and those greater than it. Is there a way to\noptimise this like using a functional index or something?\n\nObviously this isn't my design (duplicate indexes and mixed-case\ncolumn names?), but I'd like to see if I can get things running\nfaster.\n\nThanks\n\nThom\n", "msg_date": "Fri, 28 May 2010 19:27:08 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Wildly inaccurate query plan" }, { "msg_contents": "Thom Brown <[email protected]> writes:\n> I get this:\n\n> Limit (cost=0.00..316895.11 rows=400 width=211) (actual\n> time=3.880..1368.936 rows=400 loops=1)\n> -> GroupAggregate (cost=0.00..41843621.95 rows=52817 width=211)\n> (actual time=3.872..1367.048 rows=400 loops=1)\n> -> Index Scan using \"binaryID_2576_idx\" on parts_2576\n> (cost=0.00..41683754.21 rows=10578624 width=211) (actual\n> time=0.284..130.756 rows=19954 loops=1)\n> Index Cond: ((\"binaryID\")::text >\n> '1082fa89fe499741b8271f9c92136f44'::text)\n> Total runtime: 1370.140 ms\n\n> The first thing which strikes me is how the GroupAggregate step shows\n> it got the 400 rows which matches the limit, but it estimated 52,817\n> rows. Shouldn't it have already known it would be 400?\n\nNo. Rowcount estimates are always in terms of what the node would emit\nif allowed to run to completion. Likewise cost. 
In this case both the\nindexscan and the groupagg are terminated early once they satisfy the\nlimit. The planner is expecting this which is why the estimated cost\nfor the limit node is way less than those for its inputs.\n\nThat looks like a perfectly reasonable plan from here, though it would\nprobably not get chosen with a larger limit or no limit at all, since\nthe ultimate costs are pretty large. Essentially this is a fast-start\nplan rather than a lowest-total-cost plan, and that looks like the\nbest bet for a small limit value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 May 2010 14:54:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wildly inaccurate query plan " }, { "msg_contents": "On 28 May 2010 19:54, Tom Lane <[email protected]> wrote:\n> Thom Brown <[email protected]> writes:\n>> I get this:\n>\n>> Limit  (cost=0.00..316895.11 rows=400 width=211) (actual\n>> time=3.880..1368.936 rows=400 loops=1)\n>>    ->  GroupAggregate  (cost=0.00..41843621.95 rows=52817 width=211)\n>> (actual time=3.872..1367.048 rows=400 loops=1)\n>>          ->  Index Scan using \"binaryID_2576_idx\" on parts_2576\n>> (cost=0.00..41683754.21 rows=10578624 width=211) (actual\n>> time=0.284..130.756 rows=19954 loops=1)\n>>                Index Cond: ((\"binaryID\")::text >\n>> '1082fa89fe499741b8271f9c92136f44'::text)\n>>  Total runtime: 1370.140 ms\n>\n>> The first thing which strikes me is how the GroupAggregate step shows\n>> it got the 400 rows which matches the limit, but it estimated 52,817\n>> rows.  Shouldn't it have already known it would be 400?\n>\n> No.  Rowcount estimates are always in terms of what the node would emit\n> if allowed to run to completion.  Likewise cost.  In this case both the\n> indexscan and the groupagg are terminated early once they satisfy the\n> limit.  The planner is expecting this which is why the estimated cost\n> for the limit node is way less than those for its inputs.\n>\n> That looks like a perfectly reasonable plan from here, though it would\n> probably not get chosen with a larger limit or no limit at all, since\n> the ultimate costs are pretty large.  Essentially this is a fast-start\n> plan rather than a lowest-total-cost plan, and that looks like the\n> best bet for a small limit value.\n>\n>                        regards, tom lane\n\nYou're absolutely right, it's not chosen when without limit. I see\nwhat you mean though about terminating once it has enough rows. It's\na shame I can't optimise it though as the real case that runs is with\na limit of 4000 which takes a long time to complete.\n\nThanks\n\nThom\n", "msg_date": "Fri, 28 May 2010 20:05:40 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wildly inaccurate query plan" } ]
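One way to see the effect described here is to run the same query with and without the LIMIT: the row counts and costs on the inner nodes stay the same, but the Limit node's cost is interpolated from the fraction of its input it expects to consume, which is what makes the fast-start index-scan plan win for 400 rows. A sketch against the table from this thread:

    -- small LIMIT: fast-start plan (index scan in "binaryID" order, stopped early)
    EXPLAIN ANALYZE
    SELECT "binaryID", count(*) AS parttotal, sum("size") AS totalsize
    FROM   parts_2576
    WHERE  "binaryID" > '1082fa89fe499741b8271f9c92136f44'
    GROUP  BY "binaryID"
    ORDER  BY "binaryID"
    LIMIT  400;

    -- no LIMIT (or a large one such as the real 4000): the lowest-total-cost plan,
    -- typically a seq scan plus sort or hash, is usually chosen instead
    EXPLAIN ANALYZE
    SELECT "binaryID", count(*) AS parttotal, sum("size") AS totalsize
    FROM   parts_2576
    WHERE  "binaryID" > '1082fa89fe499741b8271f9c92136f44'
    GROUP  BY "binaryID"
    ORDER  BY "binaryID";

Dropping one of the two duplicate "binaryID" indexes should not change either plan, but it does save index-maintenance overhead on writes.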
[ { "msg_contents": "Anybody on the list have any experience with these drives? They get\ngood numbers but I can't find diddly on them on the internet for the\nlast year or so.\n\nhttp://www.stec-inc.com/product/zeusiops.php\n", "msg_date": "Fri, 28 May 2010 13:48:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Zeus IOPS" }, { "msg_contents": "On Fri, 2010-05-28 at 13:48 -0600, Scott Marlowe wrote:\n> Anybody on the list have any experience with these drives? They get\n> good numbers but I can't find diddly on them on the internet for the\n> last year or so.\n> \n> http://www.stec-inc.com/product/zeusiops.php\n\nI'd heard that they were a popular choice in the Enterprise market, but\nthat was around 6 months ago or so, and purely anecdotal.\n\nSeems strange though for them to disappear off the radar when SSD\nrelated information is becoming so prevalent. That could just be an\neffect of all the noise around the non-Enterprise grade SSD's floating\naround though.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Mon, 31 May 2010 09:06:19 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Zeus IOPS" }, { "msg_contents": "On Mon, May 31, 2010 at 7:06 AM, Brad Nicholson\n<[email protected]> wrote:\n> On Fri, 2010-05-28 at 13:48 -0600, Scott Marlowe wrote:\n>> Anybody on the list have any experience with these drives?  They get\n>> good numbers but I can't find diddly on them on the internet for the\n>> last year or so.\n>>\n>> http://www.stec-inc.com/product/zeusiops.php\n>\n> I'd heard that they were a popular choice in the Enterprise market, but\n> that was around 6 months ago or so, and purely anecdotal.\n>\n> Seems strange though for them to disappear off the radar when SSD\n> related information is becoming so prevalent.  That could just be an\n> effect of all the noise around the non-Enterprise grade SSD's floating\n> around though.\n\nYeah. According to a buddy of mine they're mostly only resold by Sun\nand EMC in their own products. Of course what I'm really looking for\nare postive stories about them surviving power plug pulls without\ncorrupting the database.\n", "msg_date": "Mon, 31 May 2010 07:20:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Zeus IOPS" }, { "msg_contents": " Hello,\n\n>>> Anybody on the list have any experience with these drives?  They get\n>>> good numbers but I can't find diddly on them on the internet for the\n>>> last year or so.\n>>>\n>>> http://www.stec-inc.com/product/zeusiops.php\n\n Most of the storage vendors (I have confirmation from EMC and HP)\nuse those in their SAN boxes. I believe that is because they are the\nonly SLC SSD makers that have supercapacitors on the SSD drive which\nallows them to run with write cache enabled. As a side effect - they\nare insanely expensive. :)\n\n Mindaugas\n", "msg_date": "Tue, 1 Jun 2010 10:27:18 +0300", "msg_from": "Mindaugas Riauba <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Zeus IOPS" }, { "msg_contents": "On Tue, 2010-06-01 at 10:27 +0300, Mindaugas Riauba wrote:\n> Hello,\n> \n> >>> Anybody on the list have any experience with these drives? 
They get\n> >>> good numbers but I can't find diddly on them on the internet for the\n> >>> last year or so.\n> >>>\n> >>> http://www.stec-inc.com/product/zeusiops.php\n> \n> Most of the storage vendors (I have confirmation from EMC and HP)\n> use those in their SAN boxes. I believe that is because they are the\n> only SLC SSD makers that have supercapacitors on the SSD drive which\n> allows them to run with write cache enabled. As a side effect - they\n> are insanely expensive. :)\n\nTexas Memory Systems also have these.\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Tue, 01 Jun 2010 09:17:31 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Zeus IOPS" } ]
[ { "msg_contents": "Thom Brown wrote:\n \n> It's a shame I can't optimise it though as the real case that runs\n> is with a limit of 4000 which takes a long time to complete.\n \nPerhaps you should post the real case.\n \n-Kevin\n", "msg_date": "Sat, 29 May 2010 12:01:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wildly inaccurate query plan" } ]
[ { "msg_contents": "Hi.\n\nI'm trying to get the planner to do sort of the correct thing\nwhen choosing between index-scans on btree indices and\nbitmap heap scans.\n\nThere has been several things going on in parallel. One\nis that the statistics data is off:\nhttp://thread.gmane.org/gmane.comp.db.postgresql.devel.general/141420/focus=141735\n\nThe other one that the costestimates (number of pages\nto read) is inaccurate on gin indices:\nhttp://archives.postgresql.org/pgsql-performance/2009-10/msg00393.php\nwhich there also is coming a solution to that I'm testing out.\n\nI was trying to nullify the problem with the wrongly estimated number\nof pages to read and see if \"the rest\" seems to work as expected.\n\nThe theory was that if I set \"seq_page_cost\" and \"random_page_cost\" to\nsomething \"really low\" (0 is not permitted) and ran tests\non a fully cached query (where both costs indeed is \"really low\").\nthen the \"cheapest\" query should indeed also be the fastest one.\nLet me know if the logic is flawed.\n\nThe test dataset is 1365462 documents, running pg9.0b1, both queries run \ntwice to\nsee that the data actually is fully cached as expected.\n\ntestdb=# set seq_page_cost = 0.00001;\nSET\ntestdb=# set random_page_cost = 0.00001;\nSET\ntestdb=# set enable_indexscan = on;\nSET\ntestdb=# explain analyze select id from testdb.reference where \ndocument_tsvector @@ to_tsquery('literature') order by accession_number \nlimit 200;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..432.82 rows=200 width=11) (actual \ntime=831.456..2167.302 rows=200 loops=1)\n -> Index Scan using ref_acc_idx on reference (cost=0.00..61408.12 \nrows=28376 width=11) (actual time=831.451..2166.434 rows=200 loops=1)\n Filter: (document_tsvector @@ to_tsquery('literature'::text))\n Total runtime: 2167.982 ms\n(4 rows)\n\ntestdb=# explain analyze select id from testdb.reference where \ndocument_tsvector @@ to_tsquery('literature') order by accession_number \nlimit 200;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..432.82 rows=200 width=11) (actual \ntime=842.990..2187.393 rows=200 loops=1)\n -> Index Scan using ref_acc_idx on reference (cost=0.00..61408.12 \nrows=28376 width=11) (actual time=842.984..2186.540 rows=200 loops=1)\n Filter: (document_tsvector @@ to_tsquery('literature'::text))\n Total runtime: 2188.083 ms\n(4 rows)\n\ntestdb=# set enable_indexscan = off;\nSET\ntestdb=# explain analyze select id from testdb.reference where \ndocument_tsvector @@ to_tsquery('literature') order by accession_number \nlimit 200;\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2510.68..2511.18 rows=200 width=11) (actual \ntime=270.016..270.918 rows=200 loops=1)\n -> Sort (cost=2510.68..2581.62 rows=28376 width=11) (actual \ntime=270.011..270.321 rows=200 loops=1)\n Sort Key: accession_number\n Sort Method: top-N heapsort Memory: 34kB\n -> Bitmap Heap Scan on reference (cost=219.94..1284.29 \nrows=28376 width=11) (actual time=13.897..216.700 rows=21613 loops=1)\n Recheck Cond: (document_tsvector @@ \nto_tsquery('literature'::text))\n -> Bitmap Index Scan on reference_fts_idx \n(cost=0.00..212.85 rows=28376 width=0) (actual 
time=10.053..10.053 \nrows=21613 loops=1)\n Index Cond: (document_tsvector @@ \nto_tsquery('literature'::text))\n Total runtime: 271.323 ms\n(9 rows)\n\ntestdb=# explain analyze select id from testdb.reference where \ndocument_tsvector @@ to_tsquery('literature') order by accession_number \nlimit 200;\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2510.68..2511.18 rows=200 width=11) (actual \ntime=269.881..270.782 rows=200 loops=1)\n -> Sort (cost=2510.68..2581.62 rows=28376 width=11) (actual \ntime=269.876..270.182 rows=200 loops=1)\n Sort Key: accession_number\n Sort Method: top-N heapsort Memory: 34kB\n -> Bitmap Heap Scan on reference (cost=219.94..1284.29 \nrows=28376 width=11) (actual time=14.113..216.173 rows=21613 loops=1)\n Recheck Cond: (document_tsvector @@ \nto_tsquery('literature'::text))\n -> Bitmap Index Scan on reference_fts_idx \n(cost=0.00..212.85 rows=28376 width=0) (actual time=10.360..10.360 \nrows=21613 loops=1)\n Index Cond: (document_tsvector @@ \nto_tsquery('literature'::text))\n Total runtime: 271.533 ms\n(9 rows)\n\n\nSo in the situation where i have tried to \"nullify\" the actual \ndisc-cost, hopefully leaving only the\ncpu and other cost back and running the query in fully cached mode (two \nruns). the bitmap-heap-scan\nis still hugely favorable in actual runtime. (which isn't that much a \nsuprise) but it seems strange that the\nindex-scan is still favored in the cost calculations?\n\nI have tried to alter the cost of ts_match_vq but even setting it to \n1000 does not change the overall picture.\n\nIs the approach simply too naive?\n\n-- \nJesper\n\n", "msg_date": "Sun, 30 May 2010 19:41:22 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "planner costs in \"warm cache\" tests" }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> testdb=# set seq_page_cost = 0.00001;\n> SET\n> testdb=# set random_page_cost = 0.00001;\n> SET\n\nWell, hmm, I really doubt that that represents reality either. A page\naccess is by no means \"free\" even when the page is already in cache.\nI don't recall anyone suggesting that you set these numbers to less\nthan perhaps 0.01.\n\nIn the case at hand, the problem is that the planner is preferring using\nan indexscan to an after-the-fact sort to obtain the specified result\nordering. Making page fetches look too cheap definitely plays into\nthat. There may also be a statistical problem, if the location of the\ndesired records isn't independent of the accession_number ordering, but\nyou're not doing yourself any favors by pushing the planner cost\nparameters several orders of magnitude outside the design envelope.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 May 2010 14:34:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner costs in \"warm cache\" tests " }, { "msg_contents": "On 2010-05-30 20:34, Tom Lane wrote:\n> Jesper Krogh<[email protected]> writes:\n> \n>> testdb=# set seq_page_cost = 0.00001;\n>> SET\n>> testdb=# set random_page_cost = 0.00001;\n>> SET\n>> \n> Well, hmm, I really doubt that that represents reality either. A page\n> access is by no means \"free\" even when the page is already in cache.\n> I don't recall anyone suggesting that you set these numbers to less\n> than perhaps 0.01.\n>\n> \nThank you for the prompt response. 
Is it a \"false assumption\" that the\ncost should in some metric between different plans be a measurement\nof actual run-time in a dead-disk run?\n\nIt should most likely be matching a typical workload situation, but that\nit really hard to tell anything about, so my \"feeling\" would be that the\ndead disk case is the one closest?\n\n-- \nJesper\n", "msg_date": "Mon, 31 May 2010 20:48:40 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: planner costs in \"warm cache\" tests" }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> On 2010-05-30 20:34, Tom Lane wrote:\n>> Well, hmm, I really doubt that that represents reality either. A page\n>> access is by no means \"free\" even when the page is already in cache.\n>> I don't recall anyone suggesting that you set these numbers to less\n>> than perhaps 0.01.\n>> \n> Thank you for the prompt response. Is it a \"false assumption\" that the\n> cost should in some metric between different plans be a measurement\n> of actual run-time in a dead-disk run?\n\nWell, the default cost parameters (seq_page_cost=1, random_page_cost=4)\nare intended to model the non-cached state where most page fetches\nactually do require a disk access. They are definitely too large\nrelative to the cpu_xxx_cost parameters when you have a fully-cached\ndatabase, but what I've seen people recommending for that condition\nis to set them both to the same value in the vicinity of 0.1 or 0.01\nor so. If it's only mostly cached you might try intermediate settings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 May 2010 15:55:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner costs in \"warm cache\" tests " }, { "msg_contents": "It is still best to have random_page_cost to be slightly larger (~50%) than sequential_page_cost, because even when entirely in RAM, sequential reads are faster than random reads. Today's CPU's do memory prefetching on sequential access. Perhaps try something like 0.3 and 0.2, or half that. You still don't want it to gratuitously scan a lot of RAM -- reading a page is not free and can kick out other pages from shared_buffers.\n\n\nOn May 31, 2010, at 12:55 PM, Tom Lane wrote:\n\n> Jesper Krogh <[email protected]> writes:\n>> On 2010-05-30 20:34, Tom Lane wrote:\n>>> Well, hmm, I really doubt that that represents reality either. A page\n>>> access is by no means \"free\" even when the page is already in cache.\n>>> I don't recall anyone suggesting that you set these numbers to less\n>>> than perhaps 0.01.\n>>> \n>> Thank you for the prompt response. Is it a \"false assumption\" that the\n>> cost should in some metric between different plans be a measurement\n>> of actual run-time in a dead-disk run?\n> \n> Well, the default cost parameters (seq_page_cost=1, random_page_cost=4)\n> are intended to model the non-cached state where most page fetches\n> actually do require a disk access. They are definitely too large\n> relative to the cpu_xxx_cost parameters when you have a fully-cached\n> database, but what I've seen people recommending for that condition\n> is to set them both to the same value in the vicinity of 0.1 or 0.01\n> or so. 
If it's only mostly cached you might try intermediate settings.\n> \n> \t\t\tregards, tom lane\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 1 Jun 2010 00:13:22 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner costs in \"warm cache\" tests " }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> It is still best to have random_page_cost to be slightly larger (~50%)\n> than sequential_page_cost, because even when entirely in RAM,\n> sequential reads are faster than random reads. Today's CPU's do\n> memory prefetching on sequential access.\n\nDo you have any actual evidence of that? Because I don't believe it.\nNeither PG nor any kernel that I've ever heard of makes any effort to\nensure that logically sequential blocks occupy physically sequential\nbuffers, so even if the CPU tries to do some prefetching, it's not\ngoing to help at all.\n\nNow, if the database isn't entirely cached, then indeed it's a good\nidea to keep random_page_cost higher than seq_page_cost. But that's\nbecause of the actual disk fetches, not anything that happens in RAM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Jun 2010 10:03:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner costs in \"warm cache\" tests " }, { "msg_contents": "On Mon, May 31, 2010 at 3:55 PM, Tom Lane <[email protected]> wrote:\n> Jesper Krogh <[email protected]> writes:\n>> On 2010-05-30 20:34, Tom Lane wrote:\n>>> Well, hmm, I really doubt that that represents reality either.  A page\n>>> access is by no means \"free\" even when the page is already in cache.\n>>> I don't recall anyone suggesting that you set these numbers to less\n>>> than perhaps 0.01.\n>>>\n>> Thank you for the prompt response. Is it a \"false assumption\" that the\n>> cost should in some metric between different plans be a measurement\n>> of actual run-time in a dead-disk run?\n>\n> Well, the default cost parameters (seq_page_cost=1, random_page_cost=4)\n> are intended to model the non-cached state where most page fetches\n> actually do require a disk access.  They are definitely too large\n> relative to the cpu_xxx_cost parameters when you have a fully-cached\n> database, but what I've seen people recommending for that condition\n> is to set them both to the same value in the vicinity of 0.1 or 0.01\n> or so.  If it's only mostly cached you might try intermediate settings.\n\nI have had to set it as low as .005 to get the right things to happen.\n Could have been a fluke, I suppose.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 4 Jun 2010 21:29:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner costs in \"warm cache\" tests" } ]
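Putting the recommendations from this thread in one place: a hedged sketch of per-session settings for a mostly-cached database, re-checked against the query from the original post. The exact numbers are illustrative; the point is to keep both page costs small and roughly equal (random_page_cost no lower than seq_page_cost) rather than pushing them to extremes like 0.00001.

    SET seq_page_cost    = 0.1;
    SET random_page_cost = 0.1;    -- or a bit higher if the data is only mostly cached
    EXPLAIN ANALYZE
    SELECT id
    FROM   testdb.reference
    WHERE  document_tsvector @@ to_tsquery('literature')
    ORDER  BY accession_number
    LIMIT  200;

On 9.0 the same two parameters can also be attached to a tablespace (ALTER TABLESPACE ... SET (seq_page_cost = ..., random_page_cost = ...)), which is handy when only some of the data fits in cache.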
[ { "msg_contents": "Hello,\n\ni have a set of unique data which about 150.000.000 rows. Regullary i \nget a list of data, which contains multiple times of rows than the \nalready stored one. Often around 2.000.000.000 rows. Within this rows \nare many duplicates and often the set of already stored data.\nI want to store just every entry, which is not within the already stored \none. Also i do not want to store duplicates. Example:\n\nAlready stored set:\na,b,c\n\nGiven set:\na,b,a,c,d,a,c,d,b\n\nExpected set after import:\na,b,c,d\n\nI now looking for a faster way for the import. At the moment i import \nthe new data with copy into an table 'import'. then i remove the \nduplicates and insert every row which is not already known. after that \nimport is truncated.\n\nIs there a faster way? Should i just insert every row and ignore it, if \nthe unique constrain fails?\n\nHere the simplified table-schema. in real life it's with partitions:\ntest=# \\d urls\n Tabelle ᅵpublic.urlsᅵ\n Spalte | Typ | Attribute\n--------+---------+-------------------------------------------------------\n url_id | integer | not null default nextval('urls_url_id_seq'::regclass)\n url | text | not null\nIndexe:\n ᅵurls_urlᅵ UNIQUE, btree (url)\n ᅵurls_url_idᅵ btree (url_id)\n\nThanks for every hint or advice! :)\n\nGreetings from Germany,\nTorsten\n-- \nhttp://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8 \nverschiedenen Datenbanksystemen abstrahiert,\nQueries von Applikationen trennt und automatisch die Query-Ergebnisse \nauswerten kann.\n", "msg_date": "Tue, 01 Jun 2010 17:03:48 +0200", "msg_from": "=?ISO-8859-15?Q?Torsten_Z=FChlsdorff?= <[email protected]>", "msg_from_op": true, "msg_subject": "How to insert a bulk of data with unique-violations very\n fast" }, { "msg_contents": "On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff\n<[email protected]> wrote:\n> Hello,\n>\n> i have a set of unique data which about 150.000.000 rows. Regullary i get a\n> list of data, which contains multiple times of rows than the already stored\n> one. Often around 2.000.000.000 rows. Within this rows are many duplicates\n> and often the set of already stored data.\n> I want to store just every entry, which is not within the already stored\n> one. Also i do not want to store duplicates. Example:\n\nThe standard method in pgsql is to load the data into a temp table\nthen insert where not exists in old table.\n", "msg_date": "Wed, 2 Jun 2010 13:59:23 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n\tvery fast" }, { "msg_contents": "On Thu, Jun 3, 2010 at 11:19 AM, Torsten Zühlsdorff\n<[email protected]> wrote:\n> Scott Marlowe schrieb:\n>>\n>> On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff\n>> <[email protected]> wrote:\n>>>\n>>> Hello,\n>>>\n>>> i have a set of unique data which about 150.000.000 rows. Regullary i get\n>>> a\n>>> list of data, which contains multiple times of rows than the already\n>>> stored\n>>> one. Often around 2.000.000.000 rows. Within this rows are many\n>>> duplicates\n>>> and often the set of already stored data.\n>>> I want to store just every entry, which is not within the already stored\n>>> one. Also i do not want to store duplicates. Example:\n>>\n>> The standard method in pgsql is to load the data into a temp table\n>> then insert where not exists in old table.\n>\n> Sorry, i didn't get it. I've googled some examples, but no one match at my\n> case. 
Every example i found was a single insert which should be done or\n> ignored, if the row is already stored.\n>\n> But in my case i have a bulk of rows with duplicates. Either your tipp\n> doesn't match my case or i didn't unterstand it correctly. Can you provide a\n> simple example?\n\ncreate table main (id int primary key, info text);\ncreate table loader (id int, info text);\ninsert into main values (1,'abc'),(2,'def'),(3,'ghi');\ninsert into loader values (1,'abc'),(4,'xyz');\nselect * from main;\n id | info\n----+------\n 1 | abc\n 2 | def\n 3 | ghi\n(3 rows)\n\nselect * from loader;\n id | info\n----+------\n 1 | abc\n 4 | xyz\n(2 rows)\n\ninsert into main select * from loader except select * from main;\nselect * from main;\n id | info\n----+------\n 1 | abc\n 2 | def\n 3 | ghi\n 4 | xyz\n(4 rows)\n\nNote that for the where not exists to work the fields would need to be\nall the same, or you'd need a more complex query. If the info field\nhere was different you'd get an error an no insert / update. For that\ncase you might want to use \"where not in\":\n\ninsert into main select * from loader where id not in (select id from main);\n\nIf you wanted the new rows to update pre-existing rows, then you could\nrun an update first where the ids matched, then the insert where no id\nmatches.\n", "msg_date": "Thu, 3 Jun 2010 14:09:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n\tvery fast" }, { "msg_contents": "2010/6/1 Torsten Zühlsdorff <[email protected]>:\n> Hello,\n>\n> i have a set of unique data which about 150.000.000 rows. Regullary i get a\n> list of data, which contains multiple times of rows than the already stored\n> one. Often around 2.000.000.000 rows. Within this rows are many duplicates\n> and often the set of already stored data.\n> I want to store just every entry, which is not within the already stored\n> one. Also i do not want to store duplicates. Example:\n>\n> Already stored set:\n> a,b,c\n>\n> Given set:\n> a,b,a,c,d,a,c,d,b\n>\n> Expected set after import:\n> a,b,c,d\n>\n> I now looking for a faster way for the import. At the moment i import the\n> new data with copy into an table 'import'. then i remove the duplicates and\n> insert every row which is not already known. after that import is truncated.\n>\n> Is there a faster way? Should i just insert every row and ignore it, if the\n> unique constrain fails?\n>\n> Here the simplified table-schema. in real life it's with partitions:\n> test=# \\d urls\n>                         Tabelle »public.urls«\n>  Spalte |   Typ   |                       Attribute\n> --------+---------+-------------------------------------------------------\n>  url_id | integer | not null default nextval('urls_url_id_seq'::regclass)\n>  url    | text    | not null\n> Indexe:\n>    »urls_url« UNIQUE, btree (url)\n>    »urls_url_id« btree (url_id)\n>\n> Thanks for every hint or advice! :)\n\nI think you need to have a look at pgloader. It does COPY with error\nhandling. 
very effective.\n\nhttp://pgloader.projects.postgresql.org/\n\n>\n> Greetings from Germany,\n> Torsten\n> --\n> http://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8\n> verschiedenen Datenbanksystemen abstrahiert,\n> Queries von Applikationen trennt und automatisch die Query-Ergebnisse\n> auswerten kann.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 4 Jun 2010 01:03:37 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n\tvery fast" }, { "msg_contents": "Scott Marlowe schrieb:\n\n>>>> i have a set of unique data which about 150.000.000 rows. Regullary i get\n>>>> a\n>>>> list of data, which contains multiple times of rows than the already\n>>>> stored\n>>>> one. Often around 2.000.000.000 rows. Within this rows are many\n>>>> duplicates\n>>>> and often the set of already stored data.\n>>>> I want to store just every entry, which is not within the already stored\n>>>> one. Also i do not want to store duplicates. Example:\n>>> The standard method in pgsql is to load the data into a temp table\n>>> then insert where not exists in old table.\n>> Sorry, i didn't get it. I've googled some examples, but no one match at my\n>> case. Every example i found was a single insert which should be done or\n>> ignored, if the row is already stored.\n>>\n>> But in my case i have a bulk of rows with duplicates. Either your tipp\n>> doesn't match my case or i didn't unterstand it correctly. Can you provide a\n>> simple example?\n> \n> create table main (id int primary key, info text);\n> create table loader (id int, info text);\n> insert into main values (1,'abc'),(2,'def'),(3,'ghi');\n> insert into loader values (1,'abc'),(4,'xyz');\n> select * from main;\n> id | info\n> ----+------\n> 1 | abc\n> 2 | def\n> 3 | ghi\n> (3 rows)\n> \n> select * from loader;\n> id | info\n> ----+------\n> 1 | abc\n> 4 | xyz\n> (2 rows)\n> \n> insert into main select * from loader except select * from main;\n> select * from main;\n> id | info\n> ----+------\n> 1 | abc\n> 2 | def\n> 3 | ghi\n> 4 | xyz\n> (4 rows)\n> \n> Note that for the where not exists to work the fields would need to be\n> all the same, or you'd need a more complex query. If the info field\n> here was different you'd get an error an no insert / update. For that\n> case you might want to use \"where not in\":\n> \n> insert into main select * from loader where id not in (select id from main);\n\nThank you very much for your example. Now i've got it :)\n\nI've test your example on a small set of my rows. While testing i've \nstumpled over a difference in sql-formulation. Using except seems to be \na little slower than the more complex where not in (subquery) group by. 
\nHere is my example:\n\nCREATE TABLE tseq (value text);\nINSERT INTO tseq VALUES ('a') , ('b'), ('c');\nCREATE UNIQUE INDEX tseq_unique on tseq (value);\nCREATE TEMP TABLE tmpseq(value text);\nINSERT INTO tmpseq VALUES ('a') , ('b'), ('c');\nINSERT INTO tmpseq VALUES ('a') , ('b'), ('c');\nINSERT INTO tmpseq VALUES ('a') , ('b'), ('d');\nINSERT INTO tmpseq VALUES ('d') , ('b'), ('d');\nSELECT* from tseq;\n value\n-------\n a\n b\n c\n(3 rows)\n\nSELECT* from tmpseq;\n value\n-------\n a\n b\n c\n a\n b\n c\n a\n b\n d\n d\n b\n d\n(12 rows)\n\nVACUUM VERBOSE ANALYSE;\n\nexplain analyze SELECT value FROM tmpseq except SELECT value FROM tseq;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n HashSetOp Except (cost=0.00..2.34 rows=4 width=2) (actual \ntime=0.157..0.158 rows=1 loops=1)\n -> Append (cost=0.00..2.30 rows=15 width=2) (actual \ntime=0.012..0.126 rows=15 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.24 rows=12 \nwidth=2) (actual time=0.009..0.060 rows=12 loops=1)\n -> Seq Scan on tmpseq (cost=0.00..1.12 rows=12 \nwidth=2) (actual time=0.004..0.022 rows=12 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.06 rows=3 \nwidth=2) (actual time=0.006..0.018 rows=3 loops=1)\n -> Seq Scan on tseq (cost=0.00..1.03 rows=3 width=2) \n(actual time=0.003..0.009 rows=3 loops=1)\n Total runtime: 0.216 ms\n(7 rows)\n\nexplain analyze SELECT value FROM tmpseq WHERE value NOT IN (SELECT \nvalue FROM tseq) GROUP BY value;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2.20..2.22 rows=2 width=2) (actual \ntime=0.053..0.055 rows=1 loops=1)\n -> Seq Scan on tmpseq (cost=1.04..2.19 rows=6 width=2) (actual \ntime=0.038..0.043 rows=3 loops=1)\n Filter: (NOT (hashed SubPlan 1))\n SubPlan 1\n -> Seq Scan on tseq (cost=0.00..1.03 rows=3 width=2) \n(actual time=0.004..0.009 rows=3 loops=1)\n Total runtime: 0.105 ms\n(6 rows)\n\nMy question: is this an generall behavior or just an effect of the small \ncase?\n\nGreetings form Germany,\nTorsten\n", "msg_date": "Sun, 06 Jun 2010 14:02:20 +0200", "msg_from": "=?ISO-8859-1?Q?Torsten_Z=FChlsdorff?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": "C�dric Villemain schrieb:\n\n> I think you need to have a look at pgloader. It does COPY with error\n> handling. very effective.\n\nThanks for this advice. I will have a look at it.\n\nGreetings from Germany,\nTorsten\n", "msg_date": "Sun, 06 Jun 2010 14:05:44 +0200", "msg_from": "=?ISO-8859-1?Q?Torsten_Z=FChlsdorff?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": "On 06/01/2010 10:03 AM, Torsten Zᅵhlsdorff wrote:\n> Hello,\n>\n> i have a set of unique data which about 150.000.000 rows. Regullary i\n> get a list of data, which contains multiple times of rows than the\n> already stored one. Often around 2.000.000.000 rows. Within this rows\n> are many duplicates and often the set of already stored data.\n> I want to store just every entry, which is not within the already stored\n> one. Also i do not want to store duplicates. Example:\n>\n> Already stored set:\n> a,b,c\n>\n> Given set:\n> a,b,a,c,d,a,c,d,b\n>\n> Expected set after import:\n> a,b,c,d\n>\n> I now looking for a faster way for the import. 
At the moment i import\n> the new data with copy into an table 'import'. then i remove the\n> duplicates and insert every row which is not already known. after that\n> import is truncated.\n>\n> Is there a faster way? Should i just insert every row and ignore it, if\n> the unique constrain fails?\n>\n> Here the simplified table-schema. in real life it's with partitions:\n> test=# \\d urls\n> Tabelle ᅵpublic.urlsᅵ\n> Spalte | Typ | Attribute\n> --------+---------+-------------------------------------------------------\n> url_id | integer | not null default nextval('urls_url_id_seq'::regclass)\n> url | text | not null\n> Indexe:\n> ᅵurls_urlᅵ UNIQUE, btree (url)\n> ᅵurls_url_idᅵ btree (url_id)\n>\n> Thanks for every hint or advice! :)\n>\n> Greetings from Germany,\n> Torsten\n\nI do this with a stored procedure. I do not care about speed because my db is really small and I only insert a few records a month. So I dont know how fast this is, but here is my func:\n\nCREATE FUNCTION addentry(idate timestamp without time zone, ilevel integer) RETURNS character varying\nAS $$\ndeclare\n tmp integer;\nbegin\n insert into blood(adate, alevel) values(idate, ilevel);\n return 'ok';\nexception\n when unique_violation then\n select into tmp alevel from blood where adate = idate;\n if tmp <> ilevel then\n return idate || ' levels differ!';\n else\n return 'ok, already in table';\n end if;\nend; $$\nLANGUAGE plpgsql;\n\n\nUse it like, select * from addentry('2010-006-06 8:00:00', 130);\n\nI do an extra check that if the date's match that the level's match too, but you wouldnt have to. There is a unique index on adate.\n\n-Andy\n\n", "msg_date": "Sun, 06 Jun 2010 07:45:34 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": "On Sun, Jun 6, 2010 at 6:02 AM, Torsten Zühlsdorff\n<[email protected]> wrote:\n> Scott Marlowe schrieb:\n> Thank you very much for your example. Now i've got it :)\n>\n> I've test your example on a small set of my rows. While testing i've\n> stumpled over a difference in sql-formulation. Using except seems to be a\n> little slower than the more complex where not in (subquery) group by. Here\n> is my example:\n\nYeah, to get a good idea you need a more realistic example. Build\nsome tables with millions of rows using generate_series() and then\ntest against those.\n", "msg_date": "Sun, 6 Jun 2010 11:46:52 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n\tvery fast" }, { "msg_contents": "Since you have lots of data you can use parallel loading.\n\nSplit your data in several files and then do :\n\nCREATE TEMPORARY TABLE loader1 ( ... )\nCOPY loader1 FROM ...\n\nUse a TEMPORARY TABLE for this : you don't need crash-recovery since if \nsomething blows up, you can COPY it again... 
and it will be much faster \nbecause no WAL will be written.\n\nIf your disk is fast, COPY is cpu-bound, so if you can do 1 COPY process \nper core, and avoid writing WAL, it will scale.\n\nThis doesn't solve the other half of your problem (removing the \nduplicates) which isn't easy to parallelize, but it will make the COPY \npart a lot faster.\n\nNote that you can have 1 core process the INSERT / removing duplicates \nwhile the others are handling COPY and filling temp tables, so if you \npipeline it, you could save some time.\n\nDoes your data contain a lot of duplicates, or are they rare ? What \npercentage ?\n", "msg_date": "Mon, 07 Jun 2010 00:35:18 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": "Pierre C schrieb:\n> Since you have lots of data you can use parallel loading.\n> \n> Split your data in several files and then do :\n> \n> CREATE TEMPORARY TABLE loader1 ( ... )\n> COPY loader1 FROM ...\n> \n> Use a TEMPORARY TABLE for this : you don't need crash-recovery since if \n> something blows up, you can COPY it again... and it will be much faster \n> because no WAL will be written.\n\nThat's a good advice, thank yo :)\n\n> If your disk is fast, COPY is cpu-bound, so if you can do 1 COPY process \n> per core, and avoid writing WAL, it will scale.\n> \n> This doesn't solve the other half of your problem (removing the \n> duplicates) which isn't easy to parallelize, but it will make the COPY \n> part a lot faster.\n> \n> Note that you can have 1 core process the INSERT / removing duplicates \n> while the others are handling COPY and filling temp tables, so if you \n> pipeline it, you could save some time.\n> \n> Does your data contain a lot of duplicates, or are they rare ? What \n> percentage ?\n\nWithin the data to import most rows have 20 till 50 duplicates. Sometime \nmuch more, sometimes less.\n\nBut over 99,1% of the rows to import are already know. This percentage \nis growing, because there is a finite number of rows i want to know.\n\nIn my special case i'm collection domain-names. Till now it's completly \nfor private interests and with normal pc-hardware. I'm collecting them \nby crawling known sites and checking them for new hosts. Maybe i will \nbuild later an expired domain service or an reverse ip database or \nsomething like that. But now i'm just interested in the connection of \nthe sites and the structure people choose domain-names.\n\n(Before someone ask: Till now i have more rows than domains (nearly) \nexists, because i collect subdomain of all levels too and do not delete \nentries)\n\nThanks everyone for your advices. This will help me a lot!\n\nGreetings from Germany,\nTorsten\n-- \nhttp://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8 \nverschiedenen Datenbanksystemen abstrahiert,\nQueries von Applikationen trennt und automatisch die Query-Ergebnisse \nauswerten kann.\n", "msg_date": "Mon, 07 Jun 2010 15:21:13 +0200", "msg_from": "=?UTF-8?B?VG9yc3RlbiBaw7xobHNkb3JmZg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": "\n> Within the data to import most rows have 20 till 50 duplicates. 
Sometime \n> much more, sometimes less.\n\nIn that case (source data has lots of redundancy), after importing the \ndata chunks in parallel, you can run a first pass of de-duplication on the \nchunks, also in parallel, something like :\n\nCREATE TEMP TABLE foo_1_dedup AS SELECT DISTINCT * FROM foo_1;\n\nor you could compute some aggregates, counts, etc. Same as before, no WAL \nneeded, and you can use all your cores in parallel.\n\n From what you say this should reduce the size of your imported data by a \nlot (and hence the time spent in the non-parallel operation).\n\nWith a different distribution, ie duplicates only between existing and \nimported data, and not within the imported data, this strategy would be \nuseless.\n\n", "msg_date": "Mon, 07 Jun 2010 18:52:07 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": "Pierre C schrieb:\n> \n>> Within the data to import most rows have 20 till 50 duplicates. \n>> Sometime much more, sometimes less.\n> \n> In that case (source data has lots of redundancy), after importing the \n> data chunks in parallel, you can run a first pass of de-duplication on \n> the chunks, also in parallel, something like :\n> \n> CREATE TEMP TABLE foo_1_dedup AS SELECT DISTINCT * FROM foo_1;\n> \n> or you could compute some aggregates, counts, etc. Same as before, no \n> WAL needed, and you can use all your cores in parallel.\n> \n> From what you say this should reduce the size of your imported data by \n> a lot (and hence the time spent in the non-parallel operation).\n\nThank you very much for this advice. I've tried it inanother project \nwith similar import-problems. This really speed the import up.\n\nThank everyone for your time and help!\n\nGreetings,\nTorsten\n-- \nhttp://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8 \nverschiedenen Datenbanksystemen abstrahiert,\nQueries von Applikationen trennt und automatisch die Query-Ergebnisse \nauswerten kann.\n", "msg_date": "Wed, 09 Jun 2010 09:45:46 +0200", "msg_from": "=?UTF-8?B?VG9yc3RlbiBaw7xobHNkb3JmZg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" }, { "msg_contents": ">>> Within the data to import most rows have 20 till 50 duplicates. \n>>> Sometime much more, sometimes less.\n>> In that case (source data has lots of redundancy), after importing the \n>> data chunks in parallel, you can run a first pass of de-duplication on \n>> the chunks, also in parallel, something like :\n>> CREATE TEMP TABLE foo_1_dedup AS SELECT DISTINCT * FROM foo_1;\n>> or you could compute some aggregates, counts, etc. Same as before, no \n>> WAL needed, and you can use all your cores in parallel.\n>> From what you say this should reduce the size of your imported data \n>> by a lot (and hence the time spent in the non-parallel operation).\n>\n> Thank you very much for this advice. I've tried it inanother project \n> with similar import-problems. This really speed the import up.\n\nGlad it was useful ;)\n", "msg_date": "Wed, 09 Jun 2010 12:51:08 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to insert a bulk of data with unique-violations\n very fast" } ]
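Putting the advice from this thread in one place, a minimal sketch of the staged load that was discussed -- COPY into a temporary table, collapse duplicates inside the new data, then insert only the values that are not already stored -- might look like the following. The file path is a placeholder, the simplified urls(url) schema from earlier in the thread is assumed, and partitioning plus running one such session per core are left out for brevity:

BEGIN;

CREATE TEMPORARY TABLE import_raw (url text) ON COMMIT DROP;

-- load one chunk of the input; temporary tables write no WAL
COPY import_raw FROM '/path/to/chunk_1.txt';

-- first pass: remove duplicates within the new data itself
CREATE TEMPORARY TABLE import_dedup ON COMMIT DROP AS
  SELECT DISTINCT url FROM import_raw;

-- second pass: keep only values not already known
INSERT INTO urls (url)
SELECT d.url
  FROM import_dedup d
 WHERE NOT EXISTS (SELECT 1 FROM urls u WHERE u.url = d.url);

COMMIT;

The NOT EXISTS anti-join here is interchangeable with the EXCEPT and NOT IN formulations benchmarked earlier in the thread.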
[ { "msg_contents": "I'm needing some tutorial to use and understand the graphical feature\n\"Explain\" of PgAdmin III?\n\nDo you have it?\n\nThanks,\n\nJeres.\n\nI'm needing some tutorial to use and understand the graphical feature \"Explain\" of PgAdmin III?Do you have it?Thanks,Jeres.", "msg_date": "Tue, 1 Jun 2010 14:47:15 -0300", "msg_from": "Jeres Caldeira Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "PgAdmin iii - Explain." }, { "msg_contents": "On Tue, Jun 1, 2010 at 1:47 PM, Jeres Caldeira Gomes\n<[email protected]> wrote:\n> I'm needing some tutorial to use and understand the graphical feature\n> \"Explain\" of PgAdmin III?\n>\n> Do you have it?\n\nHmm... you might want to ask about this on the pgadmin-support list.\n\nhttp://archives.postgresql.org/pgadmin-support/\n\nIf you're looking for documentation of the explain format in general,\nyou might read the PostgreSQL documentation for explain.\n\nhttp://www.postgresql.org/docs/current/static/sql-explain.html\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 4 Jun 2010 21:43:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgAdmin iii - Explain." } ]
[ { "msg_contents": "I'm helping set up a Red Hat 5.5 system for Postgres. I was going to \nrecommend xfs for the filesystem - however it seems that xfs is \nsupported as a technology preview \"layered product\" for 5.5. This \napparently means that the xfs tools are only available via special \nchannels.\n\nWhat are Red Hat using people choosing for a good performing filesystem?\n\nregards\n\nMark\n", "msg_date": "Wed, 02 Jun 2010 15:06:36 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "File system choice for Red Hat systems" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> I'm helping set up a Red Hat 5.5 system for Postgres. I was going to \n> recommend xfs for the filesystem - however it seems that xfs is \n> supported as a technology preview \"layered product\" for 5.5. This \n> apparently means that the xfs tools are only available via special \n> channels.\n\nIt also means that it's probably not production grade, anyway.\n\n> What are Red Hat using people choosing for a good performing filesystem?\n\nWhat's your time horizon? RHEL6 will have full support for xfs.\nOn RHEL5 I really wouldn't consider anything except ext3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Jun 2010 23:26:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: File system choice for Red Hat systems " }, { "msg_contents": "On 02/06/10 15:26, Tom Lane wrote:\n>\n> What's your time horizon? RHEL6 will have full support for xfs.\n> On RHEL5 I really wouldn't consider anything except ext3.\n>\n> \nYeah, RHEL6 seems like the version we would prefer - unfortunately time \nframe is the next few days. Awesome - thanks for the quick reply!\n\nregards\n\nMark\n\n", "msg_date": "Wed, 02 Jun 2010 15:41:28 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: File system choice for Red Hat systems" }, { "msg_contents": "On Wed, 2010-06-02 at 15:06 +1200, Mark Kirkwood wrote:\n> What are Red Hat using people choosing for a good performing\n> filesystem?\n\next2 (xlogs) and ext3 (data). \n\nFor xfs, you may want to read this:\n\nhttp://blog.2ndquadrant.com/en/2010/04/the-return-of-xfs-on-linux.html\n\nRegards,\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Wed, 02 Jun 2010 08:17:36 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: File system choice for Red Hat systems" }, { "msg_contents": "On 02/06/10 17:17, Devrim GÜNDÜZ wrote:\n>\n> For xfs, you may want to read this:\n>\n> http://blog.2ndquadrant.com/en/2010/04/the-return-of-xfs-on-linux.html\n>\n>\n> \n\nThanks - yes RHEL6 is the version we would have liked to use I suspect!\n\nRegards\n\nMark\n\n", "msg_date": "Wed, 02 Jun 2010 17:31:08 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: File system choice for Red Hat systems" }, { "msg_contents": "Mark Kirkwood wrote:\n> Yeah, RHEL6 seems like the version we would prefer - unfortunately \n> time frame is the next few days. Awesome - thanks for the quick reply!\n\nThe RHEL6 beta is out, I'm running it, and I expect a straightforward \nupgrade path to the final release--I think I can just keep grabbing \nupdated packages. 
Depending on how long your transition from test into \nproduction is, you might want to consider a similar move, putting RHEL6 \nonto something right now in nearly complete form and just slip in \nupdates as it moves toward the official release. It's already better \nthan RHEL5 at many things, even as a beta. The 2.6.18 kernel in \nparticular is looking painfully old nowadays.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 02 Jun 2010 02:16:58 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: File system choice for Red Hat systems" }, { "msg_contents": "On Tuesday 01 June 2010, Mark Kirkwood <[email protected]> \nwrote:\n> I'm helping set up a Red Hat 5.5 system for Postgres. I was going to\n> recommend xfs for the filesystem - however it seems that xfs is\n> supported as a technology preview \"layered product\" for 5.5. This\n> apparently means that the xfs tools are only available via special\n> channels.\n> \n> What are Red Hat using people choosing for a good performing filesystem?\n> \n\nI've run PostgreSQL on XFS on CentOS for years. It works well. Make sure you \nhave a good battery-backed RAID controller under it (true for all \nfilesystems).\n\n-- \n\"No animals were harmed in the recording of this episode. We tried but that \ndamn monkey was just too fast.\"\n", "msg_date": "Wed, 2 Jun 2010 07:53:28 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: File system choice for Red Hat systems" }, { "msg_contents": "\nyou can try Scientific Linux 5.x,it plus XFS and some other soft for HPC based on CentOS.\nIt had XFS for years\n\n\n--- On Wed, 6/2/10, Alan Hodgson <[email protected]> wrote:\n\n> From: Alan Hodgson <[email protected]>\n> Subject: Re: [PERFORM] File system choice for Red Hat systems\n> To: [email protected]\n> Date: Wednesday, June 2, 2010, 10:53 PM\n> On Tuesday 01 June 2010, Mark\n> Kirkwood <[email protected]>\n> \n> wrote:\n> > I'm helping set up a Red Hat 5.5 system for Postgres.\n> I was going to\n> > recommend xfs for the filesystem - however it seems\n> that xfs is\n> > supported as a technology preview \"layered product\"\n> for 5.5. This\n> > apparently means that the xfs tools are only available\n> via special\n> > channels.\n> > \n> > What are Red Hat using people choosing for a good\n> performing filesystem?\n> > \n> \n> I've run PostgreSQL on XFS on CentOS for years. It works\n> well. Make sure you \n> have a good battery-backed RAID controller under it (true\n> for all \n> filesystems).\n> \n> -- \n> \"No animals were harmed in the recording of this episode.\n> We tried but that \n> damn monkey was just too fast.\"\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n", "msg_date": "Wed, 2 Jun 2010 08:29:30 -0700 (PDT)", "msg_from": "Wales Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: File system choice for Red Hat systems" }, { "msg_contents": "On 03/06/10 02:53, Alan Hodgson wrote:\n> On Tuesday 01 June 2010, Mark Kirkwood<[email protected]>\n> wrote:\n> \n>> I'm helping set up a Red Hat 5.5 system for Postgres. I was going to\n>> recommend xfs for the filesystem - however it seems that xfs is\n>> supported as a technology preview \"layered product\" for 5.5. 
This\n>> apparently means that the xfs tools are only available via special\n>> channels.\n>>\n>> What are Red Hat using people choosing for a good performing filesystem?\n>>\n>> \n> I've run PostgreSQL on XFS on CentOS for years. It works well. Make sure you\n> have a good battery-backed RAID controller under it (true for all\n> filesystems).\n>\n> \n\nThanks - yes, left to myself I would consider using CentOS instead. \nHowever, the OS choice is prescribed in this case, I believe.\n\nCheers\n\nMark\n", "msg_date": "Thu, 03 Jun 2010 10:13:29 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: File system choice for Red Hat systems" } ]
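To make the ext2-for-xlogs / ext3-for-data advice above concrete, an illustrative layout is sketched below. The device names, mount points and options are hypothetical, noatime is a common but optional choice, and a battery-backed RAID controller is assumed, as stressed earlier in the thread:

# /etc/fstab (illustrative only)
/dev/sdb1   /var/lib/pgsql/data      ext3   noatime   0 2
/dev/sdc1   /var/lib/pgsql/pg_xlog   ext2   noatime   0 2

with the cluster's pg_xlog directory then moved onto the second volume (typically via a symlink) while the server is stopped.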
[ { "msg_contents": "Hi,\n\nSorry to revive an old thread but I have had this error whilst trying to \nconfigure my 32-bit build of postgres to run on a 64-bit Windows Server \n2008 machine with 96GB of RAM (that I would very much like to use with \npostgres).\n\nI am getting:\n\n2010-06-02 11:34:09 BSTFATAL: requested shared memory size overflows size_t\n2010-06-02 11:41:01 BSTFATAL: could not create shared memory segment: 8\n2010-06-02 11:41:01 BSTDETAIL: Failed system call was MapViewOfFileEx.\n\nwhich makes a lot of sense since I was setting shared_buffers (and \neffective_cache_size) to values like 60GB..\n\nIs it possible to get postgres to make use of the available 96GB RAM on \na Windows 32-bit build? Otherwise, how can I get it to work?\n\nIm guessing my options are:\n\n- Use the 64-bit Linux build (Not a viable option for me - unless from a \nVM - in which case recommendations?)\nor\n- Configure Windows and postgres properly (Preferred option - but I \ndon't know what needs to be done here or if Im testing properly using \nResource Monitor)\n\nThanks,\nTom\n\n", "msg_date": "Wed, 02 Jun 2010 11:58:47 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Tom Wilcox <[email protected]> wrote:\n \n> Is it possible to get postgres to make use of the available 96GB\n> RAM on a Windows 32-bit build?\n \nI would try setting shared_memory to somewhere between 200MB and 1GB\nand set effective_cache_size = 90GB or so. The default behavior of\nWindows was to use otherwise idle RAM for disk caching, last I\nchecked, anyway.\n \n-Kevin\n", "msg_date": "Wed, 02 Jun 2010 15:52:49 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "* Kevin Grittner ([email protected]) wrote:\n> Tom Wilcox <[email protected]> wrote:\n> > Is it possible to get postgres to make use of the available 96GB\n> > RAM on a Windows 32-bit build?\n> \n> I would try setting shared_memory to somewhere between 200MB and 1GB\n> and set effective_cache_size = 90GB or so. The default behavior of\n> Windows was to use otherwise idle RAM for disk caching, last I\n> checked, anyway.\n\nSure, but as explained on -general already, all that RAM will only ever\nget used for disk cacheing. It won't be able to be used for sorts or\nhash aggs or any other PG operations (PG would use at most\n4GB-shared_buffers, or so).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 2 Jun 2010 16:59:57 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Tom,\n\nA 32 bit build could only reference at most 4 Gb - certainly not 60 Gb. Also, Windows doesn't do well with large shared buffer sizes anyway. 
Try setting shared_buffers to 2 Gb and let the OS file system cache handle the rest.\n\nYour other option, of course, is a nice 64-bit linux variant, which won't have this problem at all.\n\nGood luck!\n\nBob Lunney\n\n--- On Wed, 6/2/10, Tom Wilcox <[email protected]> wrote:\n\n> From: Tom Wilcox <[email protected]>\n> Subject: Re: [PERFORM] requested shared memory size overflows size_t\n> To: [email protected]\n> Date: Wednesday, June 2, 2010, 6:58 AM\n> Hi,\n> \n> Sorry to revive an old thread but I have had this error\n> whilst trying to configure my 32-bit build of postgres to\n> run on a 64-bit Windows Server 2008 machine with 96GB of RAM\n> (that I would very much like to use with postgres).\n> \n> I am getting:\n> \n> 2010-06-02 11:34:09 BSTFATAL:  requested shared memory\n> size overflows size_t\n> 2010-06-02 11:41:01 BSTFATAL:  could not create shared\n> memory segment: 8\n> 2010-06-02 11:41:01 BSTDETAIL:  Failed system call was\n> MapViewOfFileEx.\n> \n> which makes a lot of sense since I was setting\n> shared_buffers (and effective_cache_size) to values like\n> 60GB..\n> \n> Is it possible to get postgres to make use of the available\n> 96GB RAM on a Windows 32-bit build? Otherwise, how can I get\n> it to work?\n> \n> Im guessing my options are:\n> \n> - Use the 64-bit Linux build (Not a viable option for me -\n> unless from a VM - in which case recommendations?)\n> or\n> - Configure Windows and postgres properly (Preferred option\n> - but I don't know what needs to be done here or if Im\n> testing properly using Resource Monitor)\n> \n> Thanks,\n> Tom\n> \n> \n> -- Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Wed, 2 Jun 2010 18:26:56 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "On Wed, Jun 02, 2010 at 11:58:47AM +0100, Tom Wilcox wrote:\n> Hi,\n>\n> Sorry to revive an old thread but I have had this error whilst trying to \n> configure my 32-bit build of postgres to run on a 64-bit Windows Server \n> 2008 machine with 96GB of RAM (that I would very much like to use with \n> postgres).\n>\n> I am getting:\n>\n> 2010-06-02 11:34:09 BSTFATAL: requested shared memory size overflows size_t\n> 2010-06-02 11:41:01 BSTFATAL: could not create shared memory segment: 8\n> 2010-06-02 11:41:01 BSTDETAIL: Failed system call was MapViewOfFileEx.\n>\n> which makes a lot of sense since I was setting shared_buffers (and \n> effective_cache_size) to values like 60GB..\n\nI realize other answers have already been given on this thread; I figured I'd\njust refer to the manual, which says, \"The useful range for shared_buffers on\nWindows systems is generally from 64MB to 512MB.\" [1]\n\n[1] http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Wed, 2 Jun 2010 20:27:46 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "On Wed, Jun 2, 2010 at 9:26 PM, Bob Lunney <[email protected]> wrote:\n> Your other option, of course, is a nice 64-bit linux variant, which won't have this problem at all.\n\nAlthough, even there, I think I've heard that after 10GB you don't get\nmuch benefit from raising it further. 
Not sure if that's accurate or\nnot...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 9 Jun 2010 21:49:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "True, plus there are the other issues of increased checkpoint times and I/O, bgwriter tuning, etc. It may be better to let the OS cache the files and size shared_buffers to a smaller value. \n\nBob Lunney\n\n--- On Wed, 6/9/10, Robert Haas <[email protected]> wrote:\n\n> From: Robert Haas <[email protected]>\n> Subject: Re: [PERFORM] requested shared memory size overflows size_t\n> To: \"Bob Lunney\" <[email protected]>\n> Cc: [email protected], \"Tom Wilcox\" <[email protected]>\n> Date: Wednesday, June 9, 2010, 9:49 PM\n> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n> Lunney <[email protected]>\n> wrote:\n> > Your other option, of course, is a nice 64-bit linux\n> variant, which won't have this problem at all.\n> \n> Although, even there, I think I've heard that after 10GB\n> you don't get\n> much benefit from raising it further.  Not sure if\n> that's accurate or\n> not...\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n> \n\n\n \n", "msg_date": "Thu, 10 Jun 2010 07:41:15 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" } ]
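Condensing the advice in this thread, a conservative configuration for a 32-bit Windows build on a large-memory machine might look roughly like the lines below; the numbers are illustrative rather than prescriptive and should be tested against the actual workload:

# postgresql.conf (illustrative values)
shared_buffers = 512MB         # top of the 64MB-512MB range quoted from the manual above
effective_cache_size = 64GB    # planner hint only, no allocation; the OS file cache uses the remaining RAM
work_mem = 16MB                # per sort/hash operation, so keep it modest on a 32-bit binary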
[ { "msg_contents": "Hallo all\n\nI have a strange problem here.\nI have a pgsql database running on Intel hardware here, it has 8 cores\nhyperthreaded so you see 16 cpu's.\n\nThis box is basically adle @ the moment as it is still in testing yet\ntop shows high usage on just 1 of the cores.\nmpstat gives the below.\nAs you can see only cpu 1 is verey bussy, the rest are idle.\n\nThanx\n\nMozzi\n\n13:02:19 CPU %usr %nice %sys %iowait %irq %soft %steal\n%guest %idle\n13:02:21 all 4.70 0.00 0.41 1.57 0.00 0.00 0.00\n0.00 93.32\n13:02:21 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 1 72.68 0.00 5.37 21.46 0.00 0.49 0.00\n0.00 0.00\n13:02:21 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 3 0.00 0.00 0.51 0.00 0.00 0.00 0.00\n0.00 99.49\n13:02:21 4 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 7 0.00 0.00 0.36 0.00 0.00 0.00 0.00\n0.00 99.64\n13:02:21 8 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 9 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 10 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 11 0.00 0.00 0.00 1.00 0.00 0.00 0.00\n0.00 99.00\n13:02:21 12 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n13:02:21 13 0.00 0.00 0.00 2.00 0.00 0.00 0.00\n0.00 98.00\n13:02:21 14 0.00 0.00 0.51 0.00 0.00 0.00 0.00\n0.00 99.49\n13:02:21 15 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n\nAverage: CPU %usr %nice %sys %iowait %irq %soft %steal\n%guest %idle\nAverage: all 4.66 0.00 0.43 1.46 0.00 0.04 0.00\n0.00 93.41\nAverage: 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 1 72.27 0.00 5.47 21.58 0.00 0.59 0.00\n0.00 0.10\nAverage: 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 3 0.10 0.00 0.50 0.00 0.00 0.00 0.00\n0.00 99.40\nAverage: 4 0.10 0.00 0.10 0.00 0.00 0.00 0.00\n0.00 99.80\nAverage: 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 7 0.00 0.00 0.10 0.60 0.00 0.00 0.00\n0.00 99.30\nAverage: 8 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 9 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 10 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 11 0.00 0.00 0.00 0.20 0.00 0.00 0.00\n0.00 99.80\nAverage: 12 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\nAverage: 13 0.00 0.00 0.10 0.40 0.00 0.00 0.00\n0.00 99.50\nAverage: 14 0.00 0.00 0.50 0.00 0.00 0.00 0.00\n0.00 99.50\nAverage: 15 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 100.00\n\n\n", "msg_date": "Wed, 02 Jun 2010 13:12:40 +0200", "msg_from": "Mozzi <[email protected]>", "msg_from_op": true, "msg_subject": "Overusing 1 CPU" }, { "msg_contents": "On Wed, 2 Jun 2010, Mozzi wrote:\n> This box is basically adle @ the moment as it is still in testing yet\n> top shows high usage on just 1 of the cores.\n\nFirst port of call: What process is using the CPU? Run top on a fairly \nwide terminal and use the \"c\" button to show the full command line.\n\nMatthew\n\n-- \n Debugging is twice as hard as writing the code in the first place.\n Therefore, if you write the code as cleverly as possible, you are, by\n definition, not smart enough to debug it. 
- Kernighan\n", "msg_date": "Wed, 2 Jun 2010 12:24:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overusing 1 CPU" }, { "msg_contents": "Hi\n\nThanx mate Create Index seems to be the culprit.\nIs it normal to just use 1 cpu tho?\n\nMozzi\n\nOn Wed, 2010-06-02 at 12:24 +0100, Matthew Wakeling wrote:\n> On Wed, 2 Jun 2010, Mozzi wrote:\n> > This box is basically adle @ the moment as it is still in testing yet\n> > top shows high usage on just 1 of the cores.\n> \n> First port of call: What process is using the CPU? Run top on a fairly \n> wide terminal and use the \"c\" button to show the full command line.\n> \n> Matthew\n> \n\n\n", "msg_date": "Wed, 02 Jun 2010 13:37:37 +0200", "msg_from": "Mozzi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Overusing 1 CPU" }, { "msg_contents": "On Wednesday 02 June 2010 13:37:37 Mozzi wrote:\n> Hi\n> \n> Thanx mate Create Index seems to be the culprit.\n> Is it normal to just use 1 cpu tho?\n\nIf it is a single-threaded process, then yes.\nAnd a \"Create index\" on a single table will probably be single-threaded.\n\nIf you now start a \"create index\" on a different table, a different CPU should \nbe used for that.\n\n> \n> Mozzi\n> \n> On Wed, 2010-06-02 at 12:24 +0100, Matthew Wakeling wrote:\n> > On Wed, 2 Jun 2010, Mozzi wrote:\n> > > This box is basically adle @ the moment as it is still in testing yet\n> > > top shows high usage on just 1 of the cores.\n> >\n> > First port of call: What process is using the CPU? Run top on a fairly\n> > wide terminal and use the \"c\" button to show the full command line.\n> >\n> > Matthew\n> \n", "msg_date": "Wed, 2 Jun 2010 13:48:21 +0200", "msg_from": "\"J. Roeleveld\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overusing 1 CPU" }, { "msg_contents": "In response to Mozzi :\n> Hi\n> \n> Thanx mate Create Index seems to be the culprit.\n> Is it normal to just use 1 cpu tho?\n\nIf you have only one client, yes. If you have more then one active\nconnections, every connection will use one CPU. In your case: create\nindex can use only one CPU.\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Wed, 2 Jun 2010 13:58:59 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overusing 1 CPU" }, { "msg_contents": "Mozzi,\n\n* Mozzi ([email protected]) wrote:\n> Thanx mate Create Index seems to be the culprit.\n> Is it normal to just use 1 cpu tho?\n\nYes, PG can only use 1 CPU for a given query or connection. You'll\nstart to see the other CPUs going when you have more than one connection\nto the database. If you're building alot of indexes then you probably\nwant to split up the statements into multiple connections and run them\nin parallel.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 2 Jun 2010 07:59:16 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overusing 1 CPU" } ]
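Since the thread ends with the suggestion to spread index builds across several connections, a minimal sketch of doing that from a shell follows; the database, table and index names are invented, and each psql call gets its own backend and therefore its own CPU core:

psql -d mydb -c 'CREATE INDEX idx_events_created ON events (created_at);' &
psql -d mydb -c 'CREATE INDEX idx_events_user ON events (user_id);' &
psql -d mydb -c 'CREATE INDEX idx_logs_created ON logs (created_at);' &
wait

Indexes on the same table can also be built this way, though they will then compete for I/O on that table's heap.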
[ { "msg_contents": "hi,\n\nI have a problem space where the main goal is to search backward in time for\nevents. Time can go back very far into the past, and so the\ntable can get quite large. However, the vast majority of queries are all\nsatisfied by relatively recent data. I have an index on the row creation\ndate and I would like almost all of my queries to have a query plan looking\nsomething like:\n\n Limit ...\n -> Index Scan Backward using server_timestamp_idx on events\n (cost=0.00..623055.91 rows=8695 width=177)\n ...\n\nHowever, PostgreSQL frequently tries to do a full table scan. Often what\ncontrols whether a scan is performed or not is dependent on the size of the\nLIMIT and how detailed the WHERE clause is. In practice, the scan is always\nthe wrong answer for my use cases (where \"always\" is defined to be >99.9%).\n\nSome examples:\n\n(1) A sample query that devolves to a full table scan\n\n EXPLAIN\n SELECT events.id, events.client_duration, events.message,\nevents.created_by, events.source, events.type, events.event,\nevents.environment,\n events.server_timestamp, events.session_id, events.reference,\nevents.client_uuid\n FROM events\n WHERE client_uuid ~* E'^foo bar so what'\n ORDER BY server_timestamp DESC\n LIMIT 20;\n QUERY PLAN (BAD!)\n--------------------------------------------------------------------------\n Limit (cost=363278.56..363278.61 rows=20 width=177)\n -> Sort (cost=363278.56..363278.62 rows=24 width=177)\n Sort Key: server_timestamp\n -> Seq Scan on events (cost=0.00..363278.01 rows=24 width=177)\n Filter: (client_uuid ~* '^foo bar so what'::text)\n\n\n(2) Making the query faster by making the string match LESS specific (odd,\nseems like it should be MORE)\n\n EXPLAIN\n SELECT events.id, events.client_duration, events.message,\nevents.created_by, events.source, events.type, events.event,\nevents.environment,\n events.server_timestamp, events.session_id, events.reference,\nevents.client_uuid\n FROM events\n WHERE client_uuid ~* E'^foo'\n ORDER BY server_timestamp DESC\n LIMIT 20;\n QUERY PLAN (GOOD!)\n\n------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1433.14 rows=20 width=177)\n -> Index Scan Backward using server_timestamp_idx on events\n (cost=0.00..623055.91 rows=8695 width=177)\n Filter: (client_uuid ~* '^foo'::text)\n\n\n(3) Alternatively making the query faster by using a smaller limit\n\n EXPLAIN\n SELECT events.id, events.client_duration, events.message,\nevents.created_by, events.source, events.type, events.event,\nevents.environment,\n events.server_timestamp, events.session_id, events.reference,\nevents.client_uuid\n FROM events\n WHERE client_uuid ~* E'^foo bar so what'\n ORDER BY server_timestamp DESC\n LIMIT 10;\n QUERY PLAN (GOOD!)\n\n----------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..259606.63 rows=10 width=177)\n -> Index Scan Backward using server_timestamp_idx on events\n (cost=0.00..623055.91 rows=24 width=177)\n Filter: (client_uuid ~* '^foo bar so what'::text)\n\n\nI find myself wishing I could just put a SQL HINT on the query to force the\nindex to be used but I understand that HINTs are considered harmful and are\ntherefore not provided for PostgreSQL, so what is the recommended way to\nsolve this?\n\nthank you very much\n\nhi,\nI have a problem space where the main goal is to search backward in time for events.  
Time can go back very far into the past, and so the\ntable can get quite large.  However, the vast majority of queries are all satisfied by relatively recent data.  I have an index on the row creation date and I would like almost all of my queries to have a query plan looking something like:\n Limit ...   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=8695 width=177)\n         ...However, PostgreSQL frequently tries to do a full table scan.  Often what controls whether a scan is performed or not is dependent on the size of the LIMIT and how detailed the WHERE clause is.  In practice, the scan is always the wrong answer for my use cases (where \"always\" is defined to be >99.9%).\nSome examples:\n(1) A sample query that devolves to a full table scan\n  EXPLAIN   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo bar so what' ORDER BY server_timestamp DESC\n    LIMIT 20;                                QUERY PLAN (BAD!)\n-------------------------------------------------------------------------- Limit  (cost=363278.56..363278.61 rows=20 width=177)\n   ->  Sort  (cost=363278.56..363278.62 rows=24 width=177)         Sort Key: server_timestamp\n         ->  Seq Scan on events  (cost=0.00..363278.01 rows=24 width=177)\n               Filter: (client_uuid ~* '^foo bar so what'::text)\n(2) Making the query faster by making the string match LESS specific (odd, seems like it should be MORE)\n  EXPLAIN   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo' ORDER BY server_timestamp DESC\n    LIMIT 20;                                QUERY PLAN (GOOD!)                                       \n------------------------------------------------------------------------------------------------------------ Limit  (cost=0.00..1433.14 rows=20 width=177)   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=8695 width=177)\n         Filter: (client_uuid ~* '^foo'::text)(3) Alternatively making the query faster by using a smaller limit\n  EXPLAIN\n   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo bar so what' ORDER BY server_timestamp DESC\n    LIMIT 10;\n                                QUERY PLAN (GOOD!)                                       
----------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..259606.63 rows=10 width=177)   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=24 width=177)\n         Filter: (client_uuid ~* '^foo bar so what'::text)I find myself wishing I could just put a SQL HINT on the query to force the index to be used but I understand that HINTs are considered harmful and are therefore not provided for PostgreSQL, so what is the recommended way to solve this?\nthank you very much", "msg_date": "Wed, 2 Jun 2010 15:28:54 -0500", "msg_from": "Jori Jovanovich <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT ignoring index even though ORDER BY and LIMIT present" }, { "msg_contents": "Jori Jovanovich <[email protected]> wrote:\n \n> what is the recommended way to solve this?\n \nThe recommended way is to adjust your costing configuration to\nbetter reflect your environment. What version of PostgreSQL is\nthis? What do you have set in your postgresql.conf file? What does\nthe hardware look like? How big is the active (frequently\nreferenced) portion of your database?\n \n-Kevin\n", "msg_date": "Wed, 02 Jun 2010 17:25:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT ignoring index even though ORDER BY and\n\t LIMIT present" }, { "msg_contents": "2010/6/2 Jori Jovanovich <[email protected]>\n\n> hi,\n>\n> I have a problem space where the main goal is to search backward in time\n> for events. Time can go back very far into the past, and so the\n> table can get quite large. However, the vast majority of queries are all\n> satisfied by relatively recent data. I have an index on the row creation\n> date and I would like almost all of my queries to have a query plan looking\n> something like:\n>\n>\n>\n[CUT]\n\nDo you have autovacuum running? Have you tried updating statistics?\n\nregards\nSzymon Guz\n\n2010/6/2 Jori Jovanovich <[email protected]>\nhi,\nI have a problem space where the main goal is to search backward in time for events.  Time can go back very far into the past, and so the\ntable can get quite large.  However, the vast majority of queries are all satisfied by relatively recent data.  I have an index on the row creation date and I would like almost all of my queries to have a query plan looking something like:\n[CUT]Do you have autovacuum running? Have you tried updating statistics?\nregardsSzymon Guz", "msg_date": "Thu, 3 Jun 2010 00:27:47 +0200", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT ignoring index even though ORDER BY and LIMIT\n\tpresent" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Jori Jovanovich <[email protected]> wrote:\n>> what is the recommended way to solve this?\n \n> The recommended way is to adjust your costing configuration to\n> better reflect your environment.\n\nActually, it's probably not the costs so much as the row estimates.\nFor instance, that first query was estimated to select 20 out of a\npossible 24 rows. If 24 is indeed the right number of matches, then\nthe planner is right and the OP is wrong: the indexscan is going to\nhave to traverse almost all of the table and therefore it will be a\nlot slower than seqscan + sort. 
Now, if the real number of matches\nis a lot more than that, then the indexscan would make sense because it\ncould be expected to get stopped by the LIMIT before it has to traverse\ntoo much of the table. So the true problem is to get the rowcount\nestimate to line up with reality.\n\nUnfortunately the estimates for ~* are typically not very good.\nIf you could convert that to plain ~ (case sensitive) it'd probably\nwork better. Also, if this isn't a particularly modern version of\nPostgres, a newer version might do a bit better with the estimate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Jun 2010 18:41:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT ignoring index even though ORDER BY and LIMIT present " }, { "msg_contents": "Jori,\n\nWhat is the PostgreSQL version/shared_buffers/work_mem/effective_cache_size/default_statistics_target?  Are the statistics for the table up to date?  (Run analyze verbose <tablename> to update them.)  Table and index structure would be nice to know, too.\n\nIf all else fails you can set enable_seqscan = off for the session, but that is a Big Hammer for what is probably a smaller problem.\n\nBob Lunney\n\n--- On Wed, 6/2/10, Jori Jovanovich <[email protected]> wrote:\n\nFrom: Jori Jovanovich <[email protected]>\nSubject: [PERFORM] SELECT ignoring index even though ORDER BY and LIMIT present\nTo: [email protected]\nDate: Wednesday, June 2, 2010, 4:28 PM\n\nhi,\n\nI have a problem space where the main goal is to search backward in time for events.  Time can go back very far into the past, and so the\ntable can get quite large.  However, the vast majority of queries are all satisfied by relatively recent data.  I have an index on the row creation date and I would like almost all of my queries to have a query plan looking something like:\n\n Limit ...   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=8695 width=177)\n         ...\nHowever, PostgreSQL frequently tries to do a full table scan.  Often what controls whether a scan is performed or not is dependent on the size of the LIMIT and how detailed the WHERE clause is.  
In practice, the scan is always the wrong answer for my use cases (where \"always\" is defined to be >99.9%).\n\nSome examples:\n\n(1) A sample query that devolves to a full table scan\n\n  EXPLAIN\n   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo bar so what' ORDER BY server_timestamp DESC\n    LIMIT 20;                                QUERY PLAN (BAD!)\n-------------------------------------------------------------------------- Limit  (cost=363278.56..363278.61 rows=20 width=177)\n   ->  Sort  (cost=363278.56..363278.62 rows=24 width=177)         Sort Key: server_timestamp\n         ->  Seq Scan on events  (cost=0.00..363278.01 rows=24 width=177)\n               Filter: (client_uuid ~* '^foo bar so what'::text)\n\n\n(2) Making the query faster by making the string match LESS specific (odd, seems like it should be MORE)\n\n  EXPLAIN\n   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo' ORDER BY server_timestamp DESC\n    LIMIT 20;                                QUERY PLAN (GOOD!)                                       \n------------------------------------------------------------------------------------------------------------ Limit  (cost=0.00..1433.14 rows=20 width=177)   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=8695 width=177)\n         Filter: (client_uuid ~* '^foo'::text)\n\n(3) Alternatively making the query faster by using a smaller limit\n\n  EXPLAIN\n\n   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo bar so what' ORDER BY server_timestamp DESC\n    LIMIT 10;\n                                QUERY PLAN (GOOD!)                                       ----------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..259606.63 rows=10 width=177)   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=24 width=177)\n         Filter: (client_uuid ~* '^foo bar so what'::text)\n\nI find myself wishing I could just put a SQL HINT on the query to force the index to be used but I understand that HINTs are considered harmful and are therefore not provided for PostgreSQL, so what is the recommended way to solve this?\n\nthank you very much\n\n\n\n\n \nJori,What is the PostgreSQL version/shared_buffers/work_mem/effective_cache_size/default_statistics_target?  Are the statistics for the table up to date?  (Run analyze verbose <tablename> to update them.)  
Table and index structure would be nice to know, too.If all else fails you can set enable_seqscan = off for the session, but that is a Big Hammer for what is probably a smaller problem.Bob Lunney--- On Wed, 6/2/10, Jori Jovanovich <[email protected]> wrote:From: Jori Jovanovich <[email protected]>Subject: [PERFORM] SELECT ignoring index even though ORDER BY and LIMIT presentTo: [email protected]: Wednesday, June 2, 2010, 4:28\n PMhi,\nI have a problem space where the main goal is to search backward in time for events.  Time can go back very far into the past, and so the\ntable can get quite large.  However, the vast majority of queries are all satisfied by relatively recent data.  I have an index on the row creation date and I would like almost all of my queries to have a query plan looking something like:\n Limit ...   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=8695 width=177)\n         ...However, PostgreSQL frequently tries to do a full table scan.  Often what controls whether a scan is performed or not is dependent on the size of the LIMIT and how detailed the WHERE clause is.  In practice, the scan is always the wrong answer for my use cases (where \"always\" is defined to be >99.9%).\nSome examples:\n(1) A sample query that devolves to a full table scan\n  EXPLAIN   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo bar so what' ORDER BY server_timestamp DESC\n    LIMIT 20;                                QUERY PLAN (BAD!)\n-------------------------------------------------------------------------- Limit  (cost=363278.56..363278.61 rows=20 width=177)\n   ->  Sort  (cost=363278.56..363278.62 rows=24 width=177)         Sort Key: server_timestamp\n         ->  Seq Scan on events  (cost=0.00..363278.01 rows=24 width=177)\n               Filter: (client_uuid ~* '^foo bar so what'::text)\n(2) Making the query faster by making the string match LESS specific (odd, seems like it should be MORE)\n  EXPLAIN   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo' ORDER BY server_timestamp DESC\n    LIMIT 20;                                QUERY PLAN (GOOD!)                                       \n------------------------------------------------------------------------------------------------------------ Limit  (cost=0.00..1433.14 rows=20 width=177)   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=8695 width=177)\n         Filter: (client_uuid ~* '^foo'::text)(3) Alternatively making the query faster by using a smaller limit\n  EXPLAIN\n   SELECT events.id, events.client_duration, events.message, events.created_by, events.source, events.type, events.event, events.environment,\n          events.server_timestamp, events.session_id, events.reference, events.client_uuid     FROM events\n    WHERE client_uuid ~* E'^foo bar so what' ORDER BY server_timestamp DESC\n    LIMIT 10;\n                                QUERY PLAN (GOOD!)                                       
----------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..259606.63 rows=10 width=177)   ->  Index Scan Backward using server_timestamp_idx on events  (cost=0.00..623055.91 rows=24 width=177)\n         Filter: (client_uuid ~* '^foo bar so what'::text)I find myself wishing I could just put a SQL HINT on the query to force the index to be used but I understand that HINTs are considered harmful and are therefore not provided for PostgreSQL, so what is the recommended way to solve this?\nthank you very much", "msg_date": "Wed, 2 Jun 2010 18:49:08 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT ignoring index even though ORDER BY and LIMIT present" }, { "msg_contents": "On Wed, 2 Jun 2010, Jori Jovanovich wrote:\n> (2) Making the query faster by making the string match LESS specific (odd,\n> seems like it should be MORE)\n\nNo, that's the way round it should be. The LIMIT changes it all. Consider \nif you have a huge table, and half of the entries match your WHERE clause. \nTo fetch the ORDER BY ... LIMIT 20 using an index scan would involve \naccessing only on average 40 entries from the table referenced by the \nindex. Therefore, the index is quick. However, consider a huge table that \nonly has twenty matching entries. The index scan would need to touch every \nsingle row in the table to return the matching rows, so a sequential scan, \nfilter, and sort would be much faster. Of course, if you had an index \ncapable of answering the WHERE clause, that would be even better for that \ncase.\n\nMatthew\n\n-- \n Don't criticise a man until you have walked a mile in his shoes; and if\n you do at least he will be a mile behind you and bare footed.\n", "msg_date": "Thu, 3 Jun 2010 11:15:45 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT ignoring index even though ORDER BY and LIMIT\n present" }, { "msg_contents": "hi,\n\nI'm sorry for not posting this first.\n\nThe server is the following and is being used exclusively for this\nPostgreSQL instance:\n\nPostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real (GCC)\n4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64-bit\nAmazon EC2 Large Instance, 7.5GB memory, 64-bit\n\nThis is what is set in my postgresql.conf file:\n\nmax_connections = 100\nssl = true\nshared_buffers = 24MB\n\nANALYZE VERBOSE EVENTS;\nINFO: analyzing \"public.events\"\nINFO: \"events\": scanned 30000 of 211312 pages, containing 1725088 live rows\nand 0 dead rows; 30000 rows in sample, 12151060 estimated total rows\n\nUpdating statistics did not effect the results -- it's still doing full\ntable scans (I had run statistics as well before posting here as well so\nthis was expected).\n\nthank you\n\nOn Wed, Jun 2, 2010 at 8:49 PM, Bob Lunney <[email protected]> wrote:\n\n> Jori,\n>\n> What is the PostgreSQL\n> version/shared_buffers/work_mem/effective_cache_size/default_statistics_target?\n> Are the statistics for the table up to date? (Run analyze verbose\n> <tablename> to update them.) 
Table and index structure would be nice to\n> know, too.\n>\n> If all else fails you can set enable_seqscan = off for the session, but\n> that is a Big Hammer for what is probably a smaller problem.\n>\n> Bob Lunney\n>\n> --- On *Wed, 6/2/10, Jori Jovanovich <[email protected]>* wrote:\n>\n>\n> From: Jori Jovanovich <[email protected]>\n> Subject: [PERFORM] SELECT ignoring index even though ORDER BY and LIMIT\n> present\n> To: [email protected]\n> Date: Wednesday, June 2, 2010, 4:28 PM\n>\n>\n> hi,\n>\n> I have a problem space where the main goal is to search backward in time\n> for events. Time can go back very far into the past, and so the\n> table can get quite large. However, the vast majority of queries are all\n> satisfied by relatively recent data. I have an index on the row creation\n> date and I would like almost all of my queries to have a query plan looking\n> something like:\n>\n> Limit ...\n> -> Index Scan Backward using server_timestamp_idx on events\n> (cost=0.00..623055.91 rows=8695 width=177)\n> ...\n>\n> However, PostgreSQL frequently tries to do a full table scan. Often what\n> controls whether a scan is performed or not is dependent on the size of the\n> LIMIT and how detailed the WHERE clause is. In practice, the scan is always\n> the wrong answer for my use cases (where \"always\" is defined to be >99.9%).\n>\n> Some examples:\n>\n> (1) A sample query that devolves to a full table scan\n>\n> EXPLAIN\n> SELECT events.id, events.client_duration, events.message,\n> events.created_by, events.source, events.type, events.event,\n> events.environment,\n> events.server_timestamp, events.session_id, events.reference,\n> events.client_uuid\n> FROM events\n> WHERE client_uuid ~* E'^foo bar so what'\n> ORDER BY server_timestamp DESC\n> LIMIT 20;\n> QUERY PLAN (BAD!)\n> --------------------------------------------------------------------------\n> Limit (cost=363278.56..363278.61 rows=20 width=177)\n> -> Sort (cost=363278.56..363278.62 rows=24 width=177)\n> Sort Key: server_timestamp\n> -> Seq Scan on events (cost=0.00..363278.01 rows=24 width=177)\n> Filter: (client_uuid ~* '^foo bar so what'::text)\n>\n>\n> (2) Making the query faster by making the string match LESS specific (odd,\n> seems like it should be MORE)\n>\n> EXPLAIN\n> SELECT events.id, events.client_duration, events.message,\n> events.created_by, events.source, events.type, events.event,\n> events.environment,\n> events.server_timestamp, events.session_id, events.reference,\n> events.client_uuid\n> FROM events\n> WHERE client_uuid ~* E'^foo'\n> ORDER BY server_timestamp DESC\n> LIMIT 20;\n> QUERY PLAN (GOOD!)\n>\n>\n> ------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1433.14 rows=20 width=177)\n> -> Index Scan Backward using server_timestamp_idx on events\n> (cost=0.00..623055.91 rows=8695 width=177)\n> Filter: (client_uuid ~* '^foo'::text)\n>\n>\n> (3) Alternatively making the query faster by using a smaller limit\n>\n> EXPLAIN\n> SELECT events.id, events.client_duration, events.message,\n> events.created_by, events.source, events.type, events.event,\n> events.environment,\n> events.server_timestamp, events.session_id, events.reference,\n> events.client_uuid\n> FROM events\n> WHERE client_uuid ~* E'^foo bar so what'\n> ORDER BY server_timestamp DESC\n> LIMIT 10;\n> QUERY PLAN (GOOD!)\n>\n>\n> ----------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..259606.63 rows=10 
 width=177)\n>    ->  Index Scan Backward using server_timestamp_idx on events\n> (cost=0.00..623055.91 rows=24 width=177)\n>          Filter: (client_uuid ~* '^foo bar so what'::text)\n>\n>\n> I find myself wishing I could just put a SQL HINT on the query to force the\n> index to be used but I understand that HINTs are considered harmful and are\n> therefore not provided for PostgreSQL, so what is the recommended way to\n> solve this?\n>\n> thank you very much\n>\n>\nOn Thu, Jun 3, 2010 at 5:15 AM, Matthew Wakeling <[email protected]>\n wrote:\n\n> On Wed, 2 Jun 2010, Jori Jovanovich wrote:\n>\n>> (2) Making the query faster by making the string match LESS specific (odd,\n>> seems like it should be MORE)\n>>\n>\n> No, that's the way round it should be. The LIMIT changes it all. Consider\n> if you have a huge table, and half of the entries match your WHERE clause.\n> To fetch the ORDER BY ... LIMIT 20 using an index scan would involve\n> accessing only on average 40 entries from the table referenced by the index.\n> Therefore, the index is quick. However, consider a huge table that only has\n> twenty matching entries. The index scan would need to touch every single row\n> in the table to return the matching rows, so a sequential scan, filter, and\n> sort would be much faster. Of course, if you had an index capable of\n> answering the WHERE clause, that would be even better for that case.\n>\n\nOkay, this makes sense, thank you -- I was thinking about it backwards.", "msg_date": "Thu, 3 Jun 2010 10:32:00 -0500", "msg_from": "Jori Jovanovich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT ignoring index even though ORDER BY and LIMIT\n\tpresent" } ]
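A note on the closing point in the thread above (an index capable of answering the WHERE clause): if the client_uuid pattern is always an anchored, case-insensitive prefix like the examples quoted, one option is an expression index plus a LIKE rewrite of the predicate. This is only a sketch against the table and column names shown above; it assumes the application can lowercase its search term and switch from ~* to LIKE, and the planner will still weigh this index against the server_timestamp index depending on how selective the prefix is.

    CREATE INDEX events_client_uuid_lower_idx
        ON events (lower(client_uuid) text_pattern_ops);

    SELECT id, server_timestamp, client_uuid
      FROM events
     WHERE lower(client_uuid) LIKE 'foo bar so what%'  -- anchored prefix, search term pre-lowercased
     ORDER BY server_timestamp DESC
     LIMIT 20;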
[ { "msg_contents": "I cant seem to pinpoint why this query is slow . No full table scans\nare being done. The hash join is taking maximum time. The table\ndev4_act_action has only 3 rows.\n\nbox is a 2 cpu quad core intel 5430 with 32G RAM... Postgres 8.4.0\n1G work_mem\n20G effective_cache\nrandom_page_cost=1\ndefault_statistics_target=1000\n\nThe larget table in the inner query is dev4_act_dy_fact which is\npartitioned into 3 partitions per month. Each partition has about 25\nmillion rows.\nThe rest of the tables are very small (100- 1000 rows)\n\nexplain analyze\nselect ipconvert(srctest_num),CASE targetpt::character varying\n WHEN NULL::text THEN serv.targetsrv\n ELSE targetpt::character varying\n END AS targetsrv, sesstype,hits as cons,bytes, srcz.srcarea as\nsrcz, dstz.dstarea as dstz from\n(\nselect srctest_num, targetpt,targetsrv_id, sesstype_id, sum(total) as\nhits, sum(bin) + sum(bout) as bts, sourcearea_id, destinationarea_id\n from dev4_act_dy_fact a, dev4_act_action act where thedate between\n'2010-05-22' and '2010-05-22'\n and a.action_id = act.action_id and action in ('rejected','sess_rejected')\n and guardid_id in (select guardid_id from dev4_act_guardid where\nguardid like 'cust00%')\n and node_id=(select node_id from dev4_act_node where node='10.90.100.2')\n group by srctest_num,targetpt,targetsrv_id,sesstype_id,\nsourcearea_id, destinationarea_id\n order by (sum(bin) + sum(bout)) desc\n limit 1000\n ) a left outer join dev4_act_dstarea dstz on a.destinationarea_id =\ndstz.dstarea_id\n left outer join dev4_act_srcarea srcz on a.sourcearea_id = srcz.srcarea_id\n left outer join dev4_act_targetsrv serv on a.targetsrv_id = serv.targetsrv_id\n left outer join dev4_sesstype proto on a.sesstype_id = proto.sesstype_id\n order by bytes desc\n\n\n\n\"Nested Loop Left Join (cost=95392.32..95496.13 rows=20 width=510)\n(actual time=164533.831..164533.831 rows=0 loops=1)\"\n\" -> Nested Loop Left Join (cost=95392.32..95473.43 rows=20\nwidth=396) (actual time=164533.830..164533.830 rows=0 loops=1)\"\n\" -> Nested Loop Left Join (cost=95392.32..95455.83 rows=20\nwidth=182) (actual time=164533.829..164533.829 rows=0 loops=1)\"\n\" -> Nested Loop Left Join (cost=95392.32..95410.17\nrows=20 width=186) (actual time=164533.829..164533.829 rows=0\nloops=1)\"\n\" -> Limit (cost=95392.32..95392.37 rows=20\nwidth=52) (actual time=164533.828..164533.828 rows=0 loops=1)\"\n\" InitPlan 1 (returns $0)\"\n\" -> Index Scan using dev4_act_node_uindx\non dev4_act_node (cost=0.00..2.27 rows=1 width=4) (actual\ntime=0.052..0.052 rows=0 loops=1)\"\n\" Index Cond: ((node)::text =\n'10.90.100.2'::text)\"\n\" -> Sort (cost=95390.05..95390.10 rows=20\nwidth=52) (actual time=164533.826..164533.826 rows=0 loops=1)\"\n\" Sort Key: ((sum(a.bin) + sum(a.bout)))\"\n\" Sort Method: quicksort Memory: 17kB\"\n\" -> HashAggregate\n(cost=95389.22..95389.62 rows=20 width=52) (actual\ntime=164533.796..164533.796 rows=0 loops=1)\"\n\" -> Nested Loop Semi Join\n(cost=7.37..95388.77 rows=20 width=52) (actual\ntime=164533.793..164533.793 rows=0 loops=1)\"\n\" -> Hash Join\n(cost=7.37..94836.75 rows=2043 width=56) (actual\ntime=164533.792..164533.792 rows=0 loops=1)\"\n\" Hash Cond:\n(a.action_id = act.action_id)\"\n\" -> Append\n(cost=2.80..94045.71 rows=204277 width=60) (actual\ntime=164533.790..164533.790 rows=0 loops=1)\"\n\" -> Bitmap\nHeap Scan on dev4_act_dy_fact a (cost=2.80..3.82 rows=1 width=60)\n(actual time=0.064..0.064 rows=0 loops=1)\"\n\" Recheck\nCond: ((node_id = $0) AND (thedate >= '2010-05-22 
00:00:00'::timestamp\nwithout time area) AND (thedate <= '2010-05-22 00:00:00'::timestamp\nwithout time area))\"\n\" ->\nBitmapAnd (cost=2.80..2.80 rows=1 width=0) (actual time=0.062..0.062\nrows=0 loops=1)\"\n\"\n-> Bitmap Index Scan on dev4_act_dy_dm_nd_indx (cost=0.00..1.27\nrows=3 width=0) (actual time=0.062..0.062 rows=0 loops=1)\"\n\"\n Index Cond: (node_id = $0)\"\n\"\n-> Bitmap Index Scan on dev4_act_dy_dm_cd_indx (cost=0.00..1.28\nrows=3 width=0) (never executed)\"\n\"\n Index Cond: ((thedate >= '2010-05-22 00:00:00'::timestamp without\ntime area) AND (thedate <= '2010-05-22 00:00:00'::timestamp without\ntime area))\"\n\" -> Index\nScan using dev4_act_dy_fact_2010_05_t3_thedate on\ndev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276\nwidth=60) (actual time=164533.725..164533.725 rows=0 loops=1)\"\n\" Index\nCond: ((thedate >= '2010-05-22 00:00:00'::timestamp without time area)\nAND (thedate <= '2010-05-22 00:00:00'::timestamp without time area))\"\n\" Filter:\n(node_id = $0)\"\n\" -> Hash\n(cost=4.54..4.54 rows=2 width=4) (never executed)\"\n\" -> Bitmap\nHeap Scan on dev4_act_action act (cost=2.52..4.54 rows=2 width=4)\n(never executed)\"\n\" Recheck\nCond: ((action)::text = ANY ('{rejected,sess_rejected}'::text[]))\"\n\" ->\nBitmap Index Scan on dev4_act_action_uindx (cost=0.00..2.52 rows=2\nwidth=0) (never executed)\"\n\"\nIndex Cond: ((action)::text = ANY\n('{rejected,sess_rejected}'::text[]))\"\n\" -> Index Scan using\ndev4_act_guardid_pk on dev4_act_guardid (cost=0.00..0.27 rows=1\nwidth=4) (never executed)\"\n\" Index Cond:\n(dev4_act_guardid.guardid_id = a.guardid_id)\"\n\" Filter:\n((dev4_act_guardid.guardid)::text ~~ 'cust00%'::text)\"\n\" -> Index Scan using sys_c006248 on dev4_sesstype\nproto (cost=0.00..0.87 rows=1 width=102) (never executed)\"\n\" Index Cond: (a.sesstype_id = proto.sesstype_id)\"\n\" -> Index Scan using dev4_act_targetsrv_pk on\ndev4_act_targetsrv serv (cost=0.00..2.27 rows=1 width=4) (never\nexecuted)\"\n\" Index Cond: (a.targetsrv_id = serv.targetsrv_id)\"\n\" -> Index Scan using dev4_act_srcarea_pk on dev4_act_srcarea\nsrcz (cost=0.00..0.87 rows=1 width=222) (never executed)\"\n\" Index Cond: (a.sourcearea_id = srcz.srcarea_id)\"\n\" -> Index Scan using dev4_act_dstarea_pk on dev4_act_dstarea dstz\n(cost=0.00..0.87 rows=1 width=122) (never executed)\"\n\" Index Cond: (a.destinationarea_id = dstz.dstarea_id)\"\n\"Total runtime: 164534.172 ms\"\n", "msg_date": "Thu, 3 Jun 2010 10:47:55 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "slow query performance" }, { "msg_contents": "On 6/3/2010 12:47 PM, Anj Adu wrote:\n> I cant seem to pinpoint why this query is slow . No full table scans\n> are being done. The hash join is taking maximum time. The table\n> dev4_act_action has only 3 rows.\n>\n> box is a 2 cpu quad core intel 5430 with 32G RAM... Postgres 8.4.0\n> 1G work_mem\n> 20G effective_cache\n> random_page_cost=1\n> default_statistics_target=1000\n>\n> The larget table in the inner query is dev4_act_dy_fact which is\n> partitioned into 3 partitions per month. 
Each partition has about 25\n> million rows.\n> The rest of the tables are very small (100- 1000 rows)\n>\n> explain analyze\n> select ipconvert(srctest_num),CASE targetpt::character varying\n> WHEN NULL::text THEN serv.targetsrv\n> ELSE targetpt::character varying\n> END AS targetsrv, sesstype,hits as cons,bytes, srcz.srcarea as\n> srcz, dstz.dstarea as dstz from\n> (\n> select srctest_num, targetpt,targetsrv_id, sesstype_id, sum(total) as\n> hits, sum(bin) + sum(bout) as bts, sourcearea_id, destinationarea_id\n> from dev4_act_dy_fact a, dev4_act_action act where thedate between\n> '2010-05-22' and '2010-05-22'\n> and a.action_id = act.action_id and action in ('rejected','sess_rejected')\n> and guardid_id in (select guardid_id from dev4_act_guardid where\n> guardid like 'cust00%')\n> and node_id=(select node_id from dev4_act_node where node='10.90.100.2')\n> group by srctest_num,targetpt,targetsrv_id,sesstype_id,\n> sourcearea_id, destinationarea_id\n> order by (sum(bin) + sum(bout)) desc\n> limit 1000\n> ) a left outer join dev4_act_dstarea dstz on a.destinationarea_id =\n> dstz.dstarea_id\n> left outer join dev4_act_srcarea srcz on a.sourcearea_id = srcz.srcarea_id\n> left outer join dev4_act_targetsrv serv on a.targetsrv_id = serv.targetsrv_id\n> left outer join dev4_sesstype proto on a.sesstype_id = proto.sesstype_id\n> order by bytes desc\n>\n>\n\n\nWow, the word wrap on that makes it hard to read... can you paste it \nhere and send us a link?\n\nhttp://explain.depesz.com\n\n", "msg_date": "Thu, 03 Jun 2010 13:43:21 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "Link to plan\n\nhttp://explain.depesz.com/s/kHa\n\nOn Thu, Jun 3, 2010 at 11:43 AM, Andy Colson <[email protected]> wrote:\n> On 6/3/2010 12:47 PM, Anj Adu wrote:\n>>\n>> I cant seem to pinpoint why this query is slow . No full table scans\n>> are being done. The hash join is taking maximum time. The table\n>> dev4_act_action has only 3 rows.\n>>\n>> box is a 2 cpu quad core intel 5430 with 32G RAM... Postgres 8.4.0\n>> 1G work_mem\n>> 20G effective_cache\n>> random_page_cost=1\n>> default_statistics_target=1000\n>>\n>> The larget table  in the inner query is dev4_act_dy_fact which is\n>> partitioned into 3 partitions per month. 
Each partition has about 25\n>> million rows.\n>> The rest of the tables are very small (100- 1000 rows)\n>>\n>> explain analyze\n>> select ipconvert(srctest_num),CASE targetpt::character varying\n>>             WHEN NULL::text THEN serv.targetsrv\n>>             ELSE targetpt::character varying\n>>         END AS targetsrv, sesstype,hits as cons,bytes, srcz.srcarea as\n>> srcz, dstz.dstarea as dstz from\n>> (\n>> select srctest_num, targetpt,targetsrv_id, sesstype_id, sum(total) as\n>> hits, sum(bin) + sum(bout) as bts, sourcearea_id, destinationarea_id\n>>  from dev4_act_dy_fact a, dev4_act_action act where thedate between\n>> '2010-05-22' and '2010-05-22'\n>>  and a.action_id = act.action_id and action in\n>> ('rejected','sess_rejected')\n>>  and guardid_id in (select guardid_id from dev4_act_guardid where\n>> guardid like 'cust00%')\n>>  and node_id=(select node_id from dev4_act_node where node='10.90.100.2')\n>>  group by srctest_num,targetpt,targetsrv_id,sesstype_id,\n>> sourcearea_id, destinationarea_id\n>>   order by (sum(bin) + sum(bout)) desc\n>>  limit 1000\n>>  ) a left outer join dev4_act_dstarea dstz on a.destinationarea_id =\n>> dstz.dstarea_id\n>>  left outer join dev4_act_srcarea srcz on a.sourcearea_id =\n>> srcz.srcarea_id\n>>  left outer join  dev4_act_targetsrv serv on a.targetsrv_id =\n>> serv.targetsrv_id\n>>  left outer join dev4_sesstype proto on a.sesstype_id = proto.sesstype_id\n>>  order by bytes desc\n>>\n>>\n>\n>\n> Wow, the word wrap on that makes it hard to read... can you paste it here\n> and send us a link?\n>\n> http://explain.depesz.com\n>\n>\n", "msg_date": "Thu, 3 Jun 2010 13:37:23 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query performance" }, { "msg_contents": "On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu <[email protected]> wrote:\n> Link to plan\n>\n> http://explain.depesz.com/s/kHa\n\nYour problem is likely related to the line that's showing up in red:\n\nIndex Scan using dev4_act_dy_fact_2010_05_t3_thedate on\ndev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276\nwidth=60) (actual time=164533.725..164533.725 rows=0 loops=1)\n * Index Cond: ((thedate >= '2010-05-22 00:00:00'::timestamp\nwithout time area) AND (thedate <= '2010-05-22 00:00:00'::timestamp\nwithout time area))\n * Filter: (node_id = $0)\n\nThis index scan is estimated to return 204,276 rows and actually\nreturned zero... it might work better to rewrite this part of the\nquery as a join, if you can:\n\nnode_id=(select node_id from dev4_act_node where node='10.90.100.2')\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 9 Jun 2010 22:12:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu <[email protected]> wrote:\n>> Link to plan\n>> \n>> http://explain.depesz.com/s/kHa\n\n> Your problem is likely related to the line that's showing up in red:\n\n> Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on\n> dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276\n> width=60) (actual time=164533.725..164533.725 rows=0 loops=1)\n> * Index Cond: ((thedate >= '2010-05-22 00:00:00'::timestamp\n> without time area) AND (thedate <= '2010-05-22 00:00:00'::timestamp\n> without time area))\n> * Filter: (node_id = $0)\n\n\"timestamp without time area\"? 
Somehow I think this isn't the true\nunaltered output of EXPLAIN.\n\nI'm just guessing, since we haven't been shown any table schemas,\nbut what it looks like to me is that the planner is using an entirely\ninappropriate index in which the \"thedate\" column is a low-order column.\nSo what looks like a nice tight indexscan range is actually a full-table\nindexscan. The planner knows that this is ridiculously expensive, as\nindicated by the high cost estimate. It would be cheaper to do a\nseqscan, which leads me to think the real problem here is the OP has\ndisabled seqscans.\n\nIt might be worth providing an index in which \"thedate\" is the only, or\nat least the first, column. For this particular query, an index on\nnode_id and thedate would actually be ideal, but that might be too\nspecialized.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jun 2010 22:55:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance " }, { "msg_contents": "The plan is unaltered . There is a separate index on theDate as well\nas one on node_id\n\nI have not specifically disabled sequential scans.\n\nThis query performs much better on 8.1.9 on a similar sized\ntable.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n\nOn Wed, Jun 9, 2010 at 7:55 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu <[email protected]> wrote:\n>>> Link to plan\n>>>\n>>> http://explain.depesz.com/s/kHa\n>\n>> Your problem is likely related to the line that's showing up in red:\n>\n>> Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on\n>> dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276\n>> width=60) (actual time=164533.725..164533.725 rows=0 loops=1)\n>>     * Index Cond: ((thedate >= '2010-05-22 00:00:00'::timestamp\n>> without time area) AND (thedate <= '2010-05-22 00:00:00'::timestamp\n>> without time area))\n>>     * Filter: (node_id = $0)\n>\n> \"timestamp without time area\"?  Somehow I think this isn't the true\n> unaltered output of EXPLAIN.\n>\n> I'm just guessing, since we haven't been shown any table schemas,\n> but what it looks like to me is that the planner is using an entirely\n> inappropriate index in which the \"thedate\" column is a low-order column.\n> So what looks like a nice tight indexscan range is actually a full-table\n> indexscan.  The planner knows that this is ridiculously expensive, as\n> indicated by the high cost estimate.  It would be cheaper to do a\n> seqscan, which leads me to think the real problem here is the OP has\n> disabled seqscans.\n>\n> It might be worth providing an index in which \"thedate\" is the only, or\n> at least the first, column.  For this particular query, an index on\n> node_id and thedate would actually be ideal, but that might be too\n> specialized.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Wed, 9 Jun 2010 20:17:03 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query performance" }, { "msg_contents": "On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n> The plan is unaltered . 
There is a separate index on theDate as well\n> as one on node_id\n>\n> I have not specifically disabled sequential scans.\n\nPlease do \"SHOW ALL\" and attach the results as a text file.\n\n> This query performs much better on 8.1.9 on a similar sized\n> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n\nWell that could certainly matter...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 10 Jun 2010 09:28:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "Attached\n\nThank you\n\n\nOn Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]> wrote:\n> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n>> The plan is unaltered . There is a separate index on theDate as well\n>> as one on node_id\n>>\n>> I have not specifically disabled sequential scans.\n>\n> Please do \"SHOW ALL\" and attach the results as a text file.\n>\n>> This query performs much better on 8.1.9 on a similar sized\n>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n>\n> Well that could certainly matter...\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>", "msg_date": "Thu, 10 Jun 2010 08:32:43 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query performance" }, { "msg_contents": "On Thu, Jun 10, 2010 at 11:32 AM, Anj Adu <[email protected]> wrote:\n> Attached\n\nHmm. Well, I'm not quite sure what's going on here, but I think you\nmust be using a modified verison of PostgreSQL, because, as Tom\npointed out upthread, we don't have a data type called \"timestamp with\ntime area\". It would be called \"timestamp with time zone\".\n\nCan we see the index and table definitions of the relevant tables\n(attached as a text file) and the size of each one (use select\npg_relation_size('name'))?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 10 Jun 2010 12:42:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "you are right..the word \"zone\" was replaced by \"area\" (my bad )\n\neverything else is as is.\n\nApologies for the confusion.\n\nOn Thu, Jun 10, 2010 at 9:42 AM, Robert Haas <[email protected]> wrote:\n> On Thu, Jun 10, 2010 at 11:32 AM, Anj Adu <[email protected]> wrote:\n>> Attached\n>\n> Hmm.  Well, I'm not quite sure what's going on here, but I think you\n> must be using a modified verison of PostgreSQL, because, as Tom\n> pointed out upthread, we don't have a data type called \"timestamp with\n> time area\".  
It would be called \"timestamp with time zone\".\n>\n> Can we see the index and table definitions of the relevant tables\n> (attached as a text file) and the size of each one (use select\n> pg_relation_size('name'))?\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n", "msg_date": "Thu, 10 Jun 2010 09:58:59 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query performance" }, { "msg_contents": "On Thu, Jun 10, 2010 at 12:58 PM, Anj Adu <[email protected]> wrote:\n> you are right..the word \"zone\" was replaced by \"area\" (my bad )\n>\n> everything else is as is.\n>\n> Apologies for the confusion.\n\nWell, two different people have asked you for the table and index\ndefinitions now, and you haven't provided them... I think it's going\nto be hard to troubleshoot this without seeing those definitions (and\nalso the sizes, which I asked for in my previous email).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 10 Jun 2010 20:49:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "I changed random_page_cost=4 (earlier 2) and the performance issue is gone\n\nI am not clear why a page_cost of 2 on really fast disks would perform badly.\n\nThank you for all your help and time.\n\nOn Thu, Jun 10, 2010 at 8:32 AM, Anj Adu <[email protected]> wrote:\n> Attached\n>\n> Thank you\n>\n>\n> On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]> wrote:\n>> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n>>> The plan is unaltered . There is a separate index on theDate as well\n>>> as one on node_id\n>>>\n>>> I have not specifically disabled sequential scans.\n>>\n>> Please do \"SHOW ALL\" and attach the results as a text file.\n>>\n>>> This query performs much better on 8.1.9 on a similar sized\n>>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n>>\n>> Well that could certainly matter...\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise Postgres Company\n>>\n>\n", "msg_date": "Thu, 10 Jun 2010 19:54:01 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query performance" }, { "msg_contents": "Hi Anj,\n\nThat is an indication that your system was less correctly\nmodeled with a random_page_cost=2 which means that the system\nwill assume that random I/O is cheaper than it is and will\nchoose plans based on that model. If this is not the case,\nthe plan chosen will almost certainly be slower for any\nnon-trivial query. You can put a 200mph speedometer in a\nVW bug but it will never go 200mph.\n\nRegards,\nKen\n\nOn Thu, Jun 10, 2010 at 07:54:01PM -0700, Anj Adu wrote:\n> I changed random_page_cost=4 (earlier 2) and the performance issue is gone\n> \n> I am not clear why a page_cost of 2 on really fast disks would perform badly.\n> \n> Thank you for all your help and time.\n> \n> On Thu, Jun 10, 2010 at 8:32 AM, Anj Adu <[email protected]> wrote:\n> > Attached\n> >\n> > Thank you\n> >\n> >\n> > On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]> wrote:\n> >> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n> >>> The plan is unaltered . 
There is a separate index on theDate as well\n> >>> as one on node_id\n> >>>\n> >>> I have not specifically disabled sequential scans.\n> >>\n> >> Please do \"SHOW ALL\" and attach the results as a text file.\n> >>\n> >>> This query performs much better on 8.1.9 on a similar sized\n> >>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n> >>\n> >> Well that could certainly matter...\n> >>\n> >> --\n> >> Robert Haas\n> >> EnterpriseDB: http://www.enterprisedb.com\n> >> The Enterprise Postgres Company\n> >>\n> >\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n", "msg_date": "Fri, 11 Jun 2010 07:44:24 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "Is there a way to determine a reasonable value for random_page_cost\nvia some testing with OS commands. We have several postgres databases\nand determining this value on a case by case basis may not be viable\n(we may have to go with the defaults)\n\nOn Fri, Jun 11, 2010 at 5:44 AM, Kenneth Marshall <[email protected]> wrote:\n> Hi Anj,\n>\n> That is an indication that your system was less correctly\n> modeled with a random_page_cost=2 which means that the system\n> will assume that random I/O is cheaper than it is and will\n> choose plans based on that model. If this is not the case,\n> the plan chosen will almost certainly be slower for any\n> non-trivial query. You can put a 200mph speedometer in a\n> VW bug but it will never go 200mph.\n>\n> Regards,\n> Ken\n>\n> On Thu, Jun 10, 2010 at 07:54:01PM -0700, Anj Adu wrote:\n>> I changed random_page_cost=4 (earlier 2) and the performance issue is gone\n>>\n>> I am not clear why a page_cost of 2 on really fast disks would perform badly.\n>>\n>> Thank you for all your help and time.\n>>\n>> On Thu, Jun 10, 2010 at 8:32 AM, Anj Adu <[email protected]> wrote:\n>> > Attached\n>> >\n>> > Thank you\n>> >\n>> >\n>> > On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]> wrote:\n>> >> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n>> >>> The plan is unaltered . There is a separate index on theDate as well\n>> >>> as one on node_id\n>> >>>\n>> >>> I have not specifically disabled sequential scans.\n>> >>\n>> >> Please do \"SHOW ALL\" and attach the results as a text file.\n>> >>\n>> >>> This query performs much better on 8.1.9 on a similar sized\n>> >>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n>> >>\n>> >> Well that could certainly matter...\n>> >>\n>> >> --\n>> >> Robert Haas\n>> >> EnterpriseDB: http://www.enterprisedb.com\n>> >> The Enterprise Postgres Company\n>> >>\n>> >\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n", "msg_date": "Fri, 11 Jun 2010 06:23:31 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query performance" }, { "msg_contents": "If you check the archives, you will see that this is not easy\nto do because of the effects of caching. The default values\nwere actually chosen to be a good compromise between fully\ncached in RAM and totally un-cached. The actual best value\ndepends on the size of your database, the size of its working\nset, your I/O system and your memory. 
The best recommendation\nis usually to use the default values unless you know something\nabout your system that moves it out of that arena.\n\nRegards,\nKen\n\nOn Fri, Jun 11, 2010 at 06:23:31AM -0700, Anj Adu wrote:\n> Is there a way to determine a reasonable value for random_page_cost\n> via some testing with OS commands. We have several postgres databases\n> and determining this value on a case by case basis may not be viable\n> (we may have to go with the defaults)\n> \n> On Fri, Jun 11, 2010 at 5:44 AM, Kenneth Marshall <[email protected]> wrote:\n> > Hi Anj,\n> >\n> > That is an indication that your system was less correctly\n> > modeled with a random_page_cost=2 which means that the system\n> > will assume that random I/O is cheaper than it is and will\n> > choose plans based on that model. If this is not the case,\n> > the plan chosen will almost certainly be slower for any\n> > non-trivial query. You can put a 200mph speedometer in a\n> > VW bug but it will never go 200mph.\n> >\n> > Regards,\n> > Ken\n> >\n> > On Thu, Jun 10, 2010 at 07:54:01PM -0700, Anj Adu wrote:\n> >> I changed random_page_cost=4 (earlier 2) and the performance issue is gone\n> >>\n> >> I am not clear why a page_cost of 2 on really fast disks would perform badly.\n> >>\n> >> Thank you for all your help and time.\n> >>\n> >> On Thu, Jun 10, 2010 at 8:32 AM, Anj Adu <[email protected]> wrote:\n> >> > Attached\n> >> >\n> >> > Thank you\n> >> >\n> >> >\n> >> > On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]> wrote:\n> >> >> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n> >> >>> The plan is unaltered . There is a separate index on theDate as well\n> >> >>> as one on node_id\n> >> >>>\n> >> >>> I have not specifically disabled sequential scans.\n> >> >>\n> >> >> Please do \"SHOW ALL\" and attach the results as a text file.\n> >> >>\n> >> >>> This query performs much better on 8.1.9 on a similar sized\n> >> >>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n> >> >>\n> >> >> Well that could certainly matter...\n> >> >>\n> >> >> --\n> >> >> Robert Haas\n> >> >> EnterpriseDB: http://www.enterprisedb.com\n> >> >> The Enterprise Postgres Company\n> >> >>\n> >> >\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list ([email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >\n> \n", "msg_date": "Fri, 11 Jun 2010 08:28:17 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" }, { "msg_contents": "On Fri, 11 Jun 2010, Kenneth Marshall wrote:\n> If you check the archives, you will see that this is not easy\n> to do because of the effects of caching.\n\nIndeed. If you were to take the value at completely face value, a modern \nhard drive is capable of transferring sequential pages somewhere between \n40 and 100 times faster than random pages, depending on the drive.\n\nHowever, caches tend to favour index scans much more than sequential \nscans, so using a value between 40 and 100 would discourage Postgres from \nusing indexes when they are really the most appropriate option.\n\nMatthew\n\n-- \n A. Top Posters\n > Q. What's the most annoying thing in the world?\n", "msg_date": "Fri, 11 Jun 2010 14:49:15 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query performance" } ]
[ { "msg_contents": "I am reposting as my original query was mangled\n\nThe link to the explain plan is here as it does not paste well into\nthe email body.\n\nhttp://explain.depesz.com/s/kHa\n\n\nThe machine is a 2 cpu quad core 5430 with 32G RAM and 6x450G 15K\nsingle raid-10 array\n\n1G work_mem\ndefault_statistics_target=1000\nrandom_page_cost=1\n\nI am curious why the hash join takes so long. The main table\ndev4_act_dy_fact_2010_05_t has 25 million rows. The table is\npartitioned into 3 parts per month. Remaining tables are very small (\n< 1000 rows)\n", "msg_date": "Thu, 3 Jun 2010 18:45:30 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "slow query" }, { "msg_contents": "> I am reposting as my original query was mangled\n>\n> The link to the explain plan is here as it does not paste well into\n> the email body.\n>\n> http://explain.depesz.com/s/kHa\n>\n>\n> The machine is a 2 cpu quad core 5430 with 32G RAM and 6x450G 15K\n> single raid-10 array\n>\n> 1G work_mem\n> default_statistics_target=1000\n> random_page_cost=1\n\nAre you sure it's wise to set the work_mem to 1G? Do you really need it?\nDon't forget this is not a 'total' or 'per query' - each query may\nallocate multiple work areas (and occupy multiple GB). But I guess this\ndoes not cause the original problem.\n\nThe last row 'random_page_cost=1' - this basically says that reading data\nby random is just as cheap as reading data sequentially. Which may result\nin poor performance due to bad plans. Why have you set this value?\n\nSure, there are rare cases where 'random_page_cost=1' is OK.\n\n>\n> I am curious why the hash join takes so long. The main table\n> dev4_act_dy_fact_2010_05_t has 25 million rows. The table is\n> partitioned into 3 parts per month. Remaining tables are very small (\n> < 1000 rows)\n\nWell, the real cause that makes your query slow is the 'index scan' part.\n\nIndex Scan using dev4_act_dy_fact_2010_05_t3_thedate on\ndev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276 width=60)\n(actual time=164533.725..164533.725 rows=0 loops=1)\n\nThe first thing to note here is the difference in expected and actual\nnumber of rows - the planner expects 204276 but gets 0 rows. How large is\nthis partition?\n\nTry to analyze it, set the random_page_cost to something reasonable (e.g.\n4) and try to run the query again.\n\nTomas\n\n", "msg_date": "Fri, 4 Jun 2010 10:13:23 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Thu, 3 Jun 2010, Anj Adu wrote:\n> http://explain.depesz.com/s/kHa\n\nI'm interested in why the two partitions dev4_act_dy_fact and \ndev4_act_dy_fact_2010_05_t3 are treated so differently. I'm guessing that \nthe former is the parent and the latter the child table?\n\nWhen accessing the parent table, Postgres is able to use a bitmap AND \nindex scan, because it has the two indexes dev4_act_dy_dm_nd_indx and \ndev4_act_dy_dm_cd_indx. Do the child tables have a similar index setup?\n\nIncidentally, you could get even better than a bitmap AND index scan by \ncreating an index on (node_id, thedate) on each table.\n\n> random_page_cost=1\n\nI agree with Tomas that this is rarely a useful setting.\n\nMatthew\n\n-- \n You can configure Windows, but don't ask me how. 
-- Bill Gates\n", "msg_date": "Fri, 4 Jun 2010 10:00:21 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": " I'm interested in why the two partitions dev4_act_dy_fact and\n> dev4_act_dy_fact_2010_05_t3 are treated so differently. I'm guessing that\n> the former is the parent and the latter the child table?\n\nYes..you are correct.\n>\n> When accessing the parent table, Postgres is able to use a bitmap AND index\n> scan, because it has the two indexes dev4_act_dy_dm_nd_indx and\n> dev4_act_dy_dm_cd_indx. Do the child tables have a similar index setup?\n\nYes..the child table have indexes on those fields as well\n\n>\n> Incidentally, you could get even better than a bitmap AND index scan by\n> creating an index on (node_id, thedate) on each table.\n\nWill this perform better than separate indexes ?\n\n>\n>> random_page_cost=1\n>\n> I agree with Tomas that this is rarely a useful setting.\n>\n> Matthew\n>\n> --\n> You can configure Windows, but don't ask me how.       -- Bill Gates\n>\n", "msg_date": "Fri, 4 Jun 2010 10:25:30 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "2010/6/4 <[email protected]>:\n>> I am reposting as my original query was mangled\n>>\n>> The link to the explain plan is here as it does not paste well into\n>> the email body.\n>>\n>> http://explain.depesz.com/s/kHa\n>>\n>>\n>> The machine is a 2 cpu quad core 5430 with 32G RAM and 6x450G 15K\n>> single raid-10 array\n>>\n>> 1G work_mem\n>> default_statistics_target=1000\n>> random_page_cost=1\n>\n> Are you sure it's wise to set the work_mem to 1G? Do you really need it?\n> Don't forget this is not a 'total' or 'per query' - each query may\n> allocate multiple work areas (and occupy multiple GB). But I guess this\n> does not cause the original problem.\n>\n> The last row 'random_page_cost=1' - this basically says that reading data\n> by random is just as cheap as reading data sequentially. Which may result\n> in poor performance due to bad plans. Why have you set this value?\n>\n> Sure, there are rare cases where 'random_page_cost=1' is OK.\n\nThe default for 8.4 is 2\nI tried with 2 and 1..but the results are not very different. I\nunderstand that for fast disks (which we have with a decent Raid 10\nsetup)..the random_page_cost can be lowered as needed..but I guess it\ndid not make a difference here.\n\n\n>\n>>\n>> I am curious why the hash join takes so long. The main table\n>> dev4_act_dy_fact_2010_05_t has 25 million rows. The table is\n>> partitioned into 3 parts per month. Remaining tables are very small (\n>> < 1000 rows)\n>\n> Well, the real cause that makes your query slow is the 'index scan' part.\n>\n> Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on\n> dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276 width=60)\n> (actual time=164533.725..164533.725 rows=0 loops=1)\n>\n> The first thing to note here is the difference in expected and actual\n> number of rows - the planner expects 204276 but gets 0 rows. How large is\n> this partition?\n\nThe partition has 25 million rows with indexes on theDate, node_id..\nI altered the random_page_cost to 4 (1 more than the default)..still\nslow. 
These tables are analyzed every day\nI have an index on each field used in the where criteria,\n>\n> Try to analyze it, set the random_page_cost to something reasonable (e.g.\n> 4) and try to run the query again.\n>\n> Tomas\n>\n>\n", "msg_date": "Fri, 4 Jun 2010 10:41:00 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "Does the difference in expected and actual rows as seen by the planner\na big factor? Even after an analyze...the results are similar. (there\nis a big diff between expected and actual)\nPartition has 25 million rows\n\nOn Fri, Jun 4, 2010 at 10:41 AM, Anj Adu <[email protected]> wrote:\n> 2010/6/4  <[email protected]>:\n>>> I am reposting as my original query was mangled\n>>>\n>>> The link to the explain plan is here as it does not paste well into\n>>> the email body.\n>>>\n>>> http://explain.depesz.com/s/kHa\n>>>\n>>>\n>>> The machine is a 2 cpu quad core 5430 with 32G RAM and 6x450G 15K\n>>> single raid-10 array\n>>>\n>>> 1G work_mem\n>>> default_statistics_target=1000\n>>> random_page_cost=1\n>>\n>> Are you sure it's wise to set the work_mem to 1G? Do you really need it?\n>> Don't forget this is not a 'total' or 'per query' - each query may\n>> allocate multiple work areas (and occupy multiple GB). But I guess this\n>> does not cause the original problem.\n>>\n>> The last row 'random_page_cost=1' - this basically says that reading data\n>> by random is just as cheap as reading data sequentially. Which may result\n>> in poor performance due to bad plans. Why have you set this value?\n>>\n>> Sure, there are rare cases where 'random_page_cost=1' is OK.\n>\n> The default for 8.4 is 2\n> I tried with 2 and 1..but the results are not very different. I\n> understand that for fast disks (which we have with a decent Raid 10\n> setup)..the random_page_cost can be lowered as needed..but I guess it\n> did not make a difference here.\n>\n>\n>>\n>>>\n>>> I am curious why the hash join takes so long. The main table\n>>> dev4_act_dy_fact_2010_05_t has 25 million rows. The table is\n>>> partitioned into 3 parts per month. Remaining tables are very small (\n>>> < 1000 rows)\n>>\n>> Well, the real cause that makes your query slow is the 'index scan' part.\n>>\n>> Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on\n>> dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276 width=60)\n>> (actual time=164533.725..164533.725 rows=0 loops=1)\n>>\n>> The first thing to note here is the difference in expected and actual\n>> number of rows - the planner expects 204276 but gets 0 rows. How large is\n>> this partition?\n>\n> The partition has 25 million rows with indexes on theDate, node_id..\n> I altered the random_page_cost to 4 (1 more than the default)..still\n> slow. 
These tables are analyzed every day\n> I have an index on each field used in the where criteria,\n>>\n>> Try to analyze it, set the random_page_cost to something reasonable (e.g.\n>> 4) and try to run the query again.\n>>\n>> Tomas\n>>\n>>\n>\n", "msg_date": "Fri, 4 Jun 2010 11:01:23 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "The behaviour is different in postgres 8.1.9 (much faster) (the table\nhas 9 million rows instead of 25 million..but the query comes back\nvery fast (8 seconds)..\n\nWonder if this is very specific to 8.4.0\n\nOn Fri, Jun 4, 2010 at 11:01 AM, Anj Adu <[email protected]> wrote:\n> Does the difference in expected and actual rows as seen by the planner\n> a big factor? Even after an analyze...the results are similar. (there\n> is a big diff between expected and actual)\n> Partition has 25 million rows\n>\n> On Fri, Jun 4, 2010 at 10:41 AM, Anj Adu <[email protected]> wrote:\n>> 2010/6/4  <[email protected]>:\n>>>> I am reposting as my original query was mangled\n>>>>\n>>>> The link to the explain plan is here as it does not paste well into\n>>>> the email body.\n>>>>\n>>>> http://explain.depesz.com/s/kHa\n>>>>\n>>>>\n>>>> The machine is a 2 cpu quad core 5430 with 32G RAM and 6x450G 15K\n>>>> single raid-10 array\n>>>>\n>>>> 1G work_mem\n>>>> default_statistics_target=1000\n>>>> random_page_cost=1\n>>>\n>>> Are you sure it's wise to set the work_mem to 1G? Do you really need it?\n>>> Don't forget this is not a 'total' or 'per query' - each query may\n>>> allocate multiple work areas (and occupy multiple GB). But I guess this\n>>> does not cause the original problem.\n>>>\n>>> The last row 'random_page_cost=1' - this basically says that reading data\n>>> by random is just as cheap as reading data sequentially. Which may result\n>>> in poor performance due to bad plans. Why have you set this value?\n>>>\n>>> Sure, there are rare cases where 'random_page_cost=1' is OK.\n>>\n>> The default for 8.4 is 2\n>> I tried with 2 and 1..but the results are not very different. I\n>> understand that for fast disks (which we have with a decent Raid 10\n>> setup)..the random_page_cost can be lowered as needed..but I guess it\n>> did not make a difference here.\n>>\n>>\n>>>\n>>>>\n>>>> I am curious why the hash join takes so long. The main table\n>>>> dev4_act_dy_fact_2010_05_t has 25 million rows. The table is\n>>>> partitioned into 3 parts per month. Remaining tables are very small (\n>>>> < 1000 rows)\n>>>\n>>> Well, the real cause that makes your query slow is the 'index scan' part.\n>>>\n>>> Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on\n>>> dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276 width=60)\n>>> (actual time=164533.725..164533.725 rows=0 loops=1)\n>>>\n>>> The first thing to note here is the difference in expected and actual\n>>> number of rows - the planner expects 204276 but gets 0 rows. How large is\n>>> this partition?\n>>\n>> The partition has 25 million rows with indexes on theDate, node_id..\n>> I altered the random_page_cost to 4 (1 more than the default)..still\n>> slow. 
These tables are analyzed every day\n>> I have an index on each field used in the where criteria,\n>>>\n>>> Try to analyze it, set the random_page_cost to something reasonable (e.g.\n>>> 4) and try to run the query again.\n>>>\n>>> Tomas\n>>>\n>>>\n>>\n>\n", "msg_date": "Fri, 4 Jun 2010 11:21:28 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "On Thu, Jun 03, 2010 at 06:45:30PM -0700, Anj Adu wrote:\n> http://explain.depesz.com/s/kHa\n\ncan you please show us \\d dev4_act_dy_fact_2010_05_t3 ?\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Sat, 5 Jun 2010 11:02:31 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Fri, Jun 4, 2010 at 12:21 PM, Anj Adu <[email protected]> wrote:\n> The behaviour is different in postgres 8.1.9 (much faster)  (the table\n> has 9 million rows instead of 25 million..but the query comes back\n> very fast (8 seconds)..\n>\n> Wonder if this is very specific to 8.4.0\n\nYou should really be running 8.4.4.\n", "msg_date": "Sat, 5 Jun 2010 03:38:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Thanks..I'll try this. Should I also rebuild the contrib modules..or\njust the core postgres database?\n\nOn Sat, Jun 5, 2010 at 2:38 AM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Jun 4, 2010 at 12:21 PM, Anj Adu <[email protected]> wrote:\n>> The behaviour is different in postgres 8.1.9 (much faster)  (the table\n>> has 9 million rows instead of 25 million..but the query comes back\n>> very fast (8 seconds)..\n>>\n>> Wonder if this is very specific to 8.4.0\n>\n> You should really be running 8.4.4.\n>\n", "msg_date": "Sat, 5 Jun 2010 07:02:37 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "On Sat, Jun 5, 2010 at 8:02 AM, Anj Adu <[email protected]> wrote:\n> Thanks..I'll try this. Should I also rebuild the contrib modules..or\n> just the core postgres database?\n\nThat's really up to you. If you use a contrib module in particular,\nI'd definitely rebuild that one. It's pretty easy anyway.\n", "msg_date": "Sat, 5 Jun 2010 15:25:23 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" } ]
[ { "msg_contents": "Hi\n\nI am performing a DB insertion and update for 3000+ records and while doing so i get CPU utilization\nto 100% with 67% of CPU used by postgres....\n\nI have also done optimization on queries too...\n\nIs there any way to optimized the CPU utilization for postgres....\n\nI am currently using postgres 8.3 version...\n\nHelp will be appreciated....\n\nRegards\n\nYogesh Naik\n\nDISCLAIMER\n==========\nThis e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.\n\n\n\n\n\n\n\n\n\n\nHi \n\nI am performing a DB insertion and update for 3000+\nrecords and while doing so i get CPU utilization \nto 100% with 67% of CPU used by postgres.... \n\nI have also done optimization on queries too... \n\nIs there any way to optimized the CPU utilization for\npostgres.... \n \nI am currently using postgres 8.3\nversion…\n\nHelp will be appreciated....\n \nRegards\n \nYogesh Naik \n\nDISCLAIMER\n==========\nThis e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.", "msg_date": "Fri, 4 Jun 2010 10:10:58 +0530", "msg_from": "Yogesh Naik <[email protected]>", "msg_from_op": true, "msg_subject": "Performance tuning for postgres" }, { "msg_contents": "Yogesh Naik <[email protected]> wrote:\n \n> I am performing a DB insertion and update for 3000+ records and\n> while doing so i get CPU utilization to 100% with 67% of CPU used\n> by postgres....\n> \n> I have also done optimization on queries too...\n> \n> Is there any way to optimized the CPU utilization for postgres....\n \nWe'd need a lot more information before we could make useful\nsuggestions. Knowing something about your hardware, OS, exact\nPostgreSQL version, postgresql.conf contents, the table definition,\nany foreign keys or other constraints, and exactly how you're doing\nthe inserts would all be useful. Please read this and repost:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Fri, 04 Jun 2010 09:00:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for postgres" }, { "msg_contents": "In my opinion it depends on the application, the priority of the application\nand whether or not it is a commercially sold product, but depending on your\nneeds you might want to consider having a 3rd party vendor who has expertise\nin this process review and help tune the application. One vendor that I\nknow does this is EnterpriseDB. I've worked with other SQL engines and have\na lot of experience tuning queries in a couple of the environments but\nPostGresql isn't one of them. 
Having an experienced DBA review your system\ncan make the difference between night and day.\n\nBest Regards\n\nMichael Gould\n\n\"Kevin Grittner\" <[email protected]> wrote:\n> Yogesh Naik <[email protected]> wrote:\n> \n>> I am performing a DB insertion and update for 3000+ records and\n>> while doing so i get CPU utilization to 100% with 67% of CPU used\n>> by postgres....\n>> \n>> I have also done optimization on queries too...\n>> \n>> Is there any way to optimized the CPU utilization for postgres....\n> \n> We'd need a lot more information before we could make useful\n> suggestions. Knowing something about your hardware, OS, exact\n> PostgreSQL version, postgresql.conf contents, the table definition,\n> any foreign keys or other constraints, and exactly how you're doing\n> the inserts would all be useful. Please read this and repost:\n> \n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n> \n> -Kevin\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n--\nMichael Gould, Managing Partner\nIntermodal Software Solutions, LLC\n904.226.0978\n904.592.5250 fax\n\n\n", "msg_date": "Fri, 4 Jun 2010 09:21:53 -0500", "msg_from": "Michael Gould <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for postgres" }, { "msg_contents": "Is this a bulk insert? Are you wrapping your statements within a\ntransaction(s)?\nHow many columns in the table? What do the table statistics look like?\n\n\n\nOn Fri, Jun 4, 2010 at 9:21 AM, Michael Gould <\[email protected]> wrote:\n\n> In my opinion it depends on the application, the priority of the\n> application\n> and whether or not it is a commercially sold product, but depending on your\n> needs you might want to consider having a 3rd party vendor who has\n> expertise\n> in this process review and help tune the application. One vendor that I\n> know does this is EnterpriseDB. I've worked with other SQL engines and\n> have\n> a lot of experience tuning queries in a couple of the environments but\n> PostGresql isn't one of them. Having an experienced DBA review your system\n> can make the difference between night and day.\n>\n> Best Regards\n>\n> Michael Gould\n>\n> \"Kevin Grittner\" <[email protected]> wrote:\n> > Yogesh Naik <[email protected]> wrote:\n> >\n> >> I am performing a DB insertion and update for 3000+ records and\n> >> while doing so i get CPU utilization to 100% with 67% of CPU used\n> >> by postgres....\n> >>\n> >> I have also done optimization on queries too...\n> >>\n> >> Is there any way to optimized the CPU utilization for postgres....\n> >\n> > We'd need a lot more information before we could make useful\n> > suggestions. Knowing something about your hardware, OS, exact\n> > PostgreSQL version, postgresql.conf contents, the table definition,\n> > any foreign keys or other constraints, and exactly how you're doing\n> > the inserts would all be useful. 
Please read this and repost:\n> >\n> > http://wiki.postgresql.org/wiki/SlowQueryQuestions\n> >\n> > -Kevin\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n>\n> --\n> Michael Gould, Managing Partner\n> Intermodal Software Solutions, LLC\n> 904.226.0978\n> 904.592.5250 fax\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 4 Jun 2010 13:12:07 -0500", "msg_from": "Bryan Hinton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for postgres" }, { "msg_contents": "On Fri, Jun 4, 2010 at 12:40 AM, Yogesh Naik\n<[email protected]> wrote:\n> I am performing a DB insertion and update for 3000+ records and while doing\n> so i get CPU utilization\n> to 100% with 67% of CPU used by postgres....\n\nThat sounds normal to me. What would you expect to happen?\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 10 Jun 2010 12:44:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning for postgres" } ]
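As a rough illustration of the transaction question raised in this thread: for a few thousand rows, batching the statements inside one transaction (or using COPY) removes most of the per-statement commit overhead. This is only a sketch; the table, columns and file path below are invented for the example and are not from the thread:

-- example table only, not from the original poster's schema
CREATE TABLE items (id integer, name text);

BEGIN;
-- multi-row VALUES lists cut down the number of individual statements
INSERT INTO items (id, name) VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- ... repeat for the remaining batches ...
COMMIT;

-- for straight bulk loading, COPY is usually cheaper still (syntax valid on 8.3 and later)
COPY items (id, name) FROM '/tmp/items.csv' WITH CSV;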
[ { "msg_contents": "Some interesting data about different filesystems I tried with\nPostgreSQL and how it came out.\n\nI have an application that is backed in postgres using Java JDBC to\naccess it. The tests were all done on an opensuse 11.2 64-bit machine,\non the same hard drive (just ran mkfs between each test) on the same\ninput with the same code base. All filesystems were created with the\ndefault options.\n\nXFS (logbufs=8): ~4 hours to finish\next4: ~1 hour 50 minutes to finish\next3: 15 minutes to finish\next3 on LVM: 15 minutes to finish\n\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. See http://www.gnupg.org for more information.\n\n", "msg_date": "Fri, 04 Jun 2010 07:17:35 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "How filesystems matter with PostgreSQL" }, { "msg_contents": "On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> Some interesting data about different filesystems I tried with\n> PostgreSQL and how it came out.\n> \n> I have an application that is backed in postgres using Java JDBC to\n> access it. The tests were all done on an opensuse 11.2 64-bit machine,\n> on the same hard drive (just ran mkfs between each test) on the same\n> input with the same code base. All filesystems were created with the\n> default options.\n> \n> XFS (logbufs=8): ~4 hours to finish\n> ext4: ~1 hour 50 minutes to finish\n> ext3: 15 minutes to finish\n> ext3 on LVM: 15 minutes to finish\n> \n\nHi Jon,\n\nAny chance you can do the same test with reiserfs?\n\nThanks,\n\nJoost\n", "msg_date": "Fri, 4 Jun 2010 14:39:15 +0200", "msg_from": "\"J. Roeleveld\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> Some interesting data about different filesystems I tried with\n> PostgreSQL and how it came out.\n> \n> I have an application that is backed in postgres using Java JDBC to\n> access it. The tests were all done on an opensuse 11.2 64-bit machine,\n> on the same hard drive (just ran mkfs between each test) on the same\n> input with the same code base. All filesystems were created with the\n> default options.\n> \n> XFS (logbufs=8): ~4 hours to finish\n> ext4: ~1 hour 50 minutes to finish\n> ext3: 15 minutes to finish\n> ext3 on LVM: 15 minutes to finish\nMy guess is that some of the difference comes from barrier differences. ext4 \nuses barriers by default, ext3 does not.\n\nAndres\n", "msg_date": "Fri, 4 Jun 2010 15:04:02 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n>> XFS (logbufs=8): ~4 hours to finish\n>> ext4: ~1 hour 50 minutes to finish\n>> ext3: 15 minutes to finish\n>> ext3 on LVM: 15 minutes to finish\n\n> My guess is that some of the difference comes from barrier differences. 
ext4 \n> uses barriers by default, ext3 does not.\n\nOr, to put it more clearly: the reason ext3 is fast is that it's unsafe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jun 2010 10:25:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL " }, { "msg_contents": "On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> >> XFS (logbufs=8): ~4 hours to finish\n> >> ext4: ~1 hour 50 minutes to finish\n> >> ext3: 15 minutes to finish\n> >> ext3 on LVM: 15 minutes to finish\n> > \n> > My guess is that some of the difference comes from barrier differences.\n> > ext4 uses barriers by default, ext3 does not.\n> Or, to put it more clearly: the reason ext3 is fast is that it's unsafe.\nJon: To verify you can enable it via the barrier=1 option during mounting..\n\nAndres\n", "msg_date": "Fri, 4 Jun 2010 16:33:22 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "UFS2 w/ soft updates on FreeBSD might be an interesting addition to the list\nof test cases\n\nOn Fri, Jun 4, 2010 at 9:33 AM, Andres Freund <[email protected]> wrote:\n\n> On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> > >> XFS (logbufs=8): ~4 hours to finish\n> > >> ext4: ~1 hour 50 minutes to finish\n> > >> ext3: 15 minutes to finish\n> > >> ext3 on LVM: 15 minutes to finish\n> > >\n> > > My guess is that some of the difference comes from barrier differences.\n> > > ext4 uses barriers by default, ext3 does not.\n> > Or, to put it more clearly: the reason ext3 is fast is that it's unsafe.\n> Jon: To verify you can enable it via the barrier=1 option during mounting..\n>\n> Andres\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nUFS2 w/ soft updates on FreeBSD might be an interesting addition to the list of test casesOn Fri, Jun 4, 2010 at 9:33 AM, Andres Freund <[email protected]> wrote:\nOn Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> >> XFS (logbufs=8): ~4 hours to finish\n> >> ext4: ~1 hour 50 minutes to finish\n> >> ext3: 15 minutes to finish\n> >> ext3 on LVM: 15 minutes to finish\n> >\n> > My guess is that some of the difference comes from barrier differences.\n> > ext4 uses barriers by default, ext3 does not.\n> Or, to put it more clearly: the reason ext3 is fast is that it's unsafe.\nJon: To verify you can enable it via the barrier=1 option during mounting..\n\nAndres\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 4 Jun 2010 13:20:09 -0500", "msg_from": "Bryan Hinton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "I'm running on Linux, so that's not really an option here.\n\nOn 6/4/10 1:20 PM, Bryan Hinton wrote:\n> UFS2 w/ soft updates on FreeBSD might be an interesting addition to\n> the list of test cases\n>\n> On Fri, Jun 4, 2010 at 9:33 AM, Andres Freund <[email protected]\n> <mailto:[email protected]>> 
wrote:\n>\n> On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> > Andres Freund <[email protected] <mailto:[email protected]>>\n> writes:\n> > > On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> > >> XFS (logbufs=8): ~4 hours to finish\n> > >> ext4: ~1 hour 50 minutes to finish\n> > >> ext3: 15 minutes to finish\n> > >> ext3 on LVM: 15 minutes to finish\n> > >\n> > > My guess is that some of the difference comes from barrier\n> differences.\n> > > ext4 uses barriers by default, ext3 does not.\n> > Or, to put it more clearly: the reason ext3 is fast is that it's\n> unsafe.\n> Jon: To verify you can enable it via the barrier=1 option during\n> mounting..\n>\n> Andres\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. See http://www.gnupg.org for more information.\n\n\n\n\n\n\n\nI'm running on Linux, so that's not really an option here.\n\nOn 6/4/10 1:20 PM, Bryan Hinton wrote:\nUFS2 w/ soft updates on FreeBSD might be an interesting\naddition to the list of test cases\n\nOn Fri, Jun 4, 2010 at 9:33 AM, Andres\nFreund <[email protected]>\nwrote:\n\nOn Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> >> XFS (logbufs=8): ~4 hours to finish\n> >> ext4: ~1 hour 50 minutes to finish\n> >> ext3: 15 minutes to finish\n> >> ext3 on LVM: 15 minutes to finish\n> >\n> > My guess is that some of the difference comes from barrier\ndifferences.\n> > ext4 uses barriers by default, ext3 does not.\n> Or, to put it more clearly: the reason ext3 is fast is that it's\nunsafe.\n\nJon: To verify you can enable it via the barrier=1 option during\nmounting..\n\nAndres\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. See http://www.gnupg.org for more information.", "msg_date": "Fri, 04 Jun 2010 13:23:06 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On 6/4/10 9:33 AM, Andres Freund wrote:\n> On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> \n>> Andres Freund <[email protected]> writes:\n>> \n>>> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n>>> \n>>>> XFS (logbufs=8): ~4 hours to finish\n>>>> ext4: ~1 hour 50 minutes to finish\n>>>> ext3: 15 minutes to finish\n>>>> ext3 on LVM: 15 minutes to finish\n>>>> \n>>> My guess is that some of the difference comes from barrier differences.\n>>> ext4 uses barriers by default, ext3 does not.\n>>> \n>> Or, to put it more clearly: the reason ext3 is fast is that it's unsafe.\n>> \n> Jon: To verify you can enable it via the barrier=1 option during mounting..\n>\n>\n> \nFirst some details:\nLinux kernel 2.6.31\npostgres version: 8.4.2\n\nMore test results:\nreiserfs: ~1 hour 50 minutes\next3 barrier=1: ~15 minutes\next4 nobarrier: ~15 minutes\njfs: ~15 minutes\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. 
See http://www.gnupg.org for more information.\n\n", "msg_date": "Fri, 04 Jun 2010 13:26:27 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "What types of journaling on each fs?\n\n\nOn Fri, Jun 4, 2010 at 1:26 PM, Jon Schewe <[email protected]> wrote:\n\n> On 6/4/10 9:33 AM, Andres Freund wrote:\n> > On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> >\n> >> Andres Freund <[email protected]> writes:\n> >>\n> >>> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> >>>\n> >>>> XFS (logbufs=8): ~4 hours to finish\n> >>>> ext4: ~1 hour 50 minutes to finish\n> >>>> ext3: 15 minutes to finish\n> >>>> ext3 on LVM: 15 minutes to finish\n> >>>>\n> >>> My guess is that some of the difference comes from barrier differences.\n> >>> ext4 uses barriers by default, ext3 does not.\n> >>>\n> >> Or, to put it more clearly: the reason ext3 is fast is that it's unsafe.\n> >>\n> > Jon: To verify you can enable it via the barrier=1 option during\n> mounting..\n> >\n> >\n> >\n> First some details:\n> Linux kernel 2.6.31\n> postgres version: 8.4.2\n>\n> More test results:\n> reiserfs: ~1 hour 50 minutes\n> ext3 barrier=1: ~15 minutes\n> ext4 nobarrier: ~15 minutes\n> jfs: ~15 minutes\n>\n> --\n> Jon Schewe | http://mtu.net/~jpschewe\n> If you see an attachment named signature.asc, this is my digital\n> signature. See http://www.gnupg.org for more information.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWhat types of journaling on each fs?On Fri, Jun 4, 2010 at 1:26 PM, Jon Schewe <[email protected]> wrote:\nOn 6/4/10 9:33 AM, Andres Freund wrote:\n> On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n>\n>> Andres Freund <[email protected]> writes:\n>>\n>>> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n>>>\n>>>> XFS (logbufs=8): ~4 hours to finish\n>>>> ext4: ~1 hour 50 minutes to finish\n>>>> ext3: 15 minutes to finish\n>>>> ext3 on LVM: 15 minutes to finish\n>>>>\n>>> My guess is that some of the difference comes from barrier differences.\n>>> ext4 uses barriers by default, ext3 does not.\n>>>\n>> Or, to put it more clearly: the reason ext3 is fast is that it's unsafe.\n>>\n> Jon: To verify you can enable it via the barrier=1 option during mounting..\n>\n>\n>\nFirst some details:\nLinux kernel 2.6.31\npostgres version: 8.4.2\n\nMore test results:\nreiserfs: ~1 hour 50 minutes\next3 barrier=1: ~15 minutes\next4 nobarrier: ~15 minutes\njfs: ~15 minutes\n\n--\nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. 
See http://www.gnupg.org for more information.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 4 Jun 2010 13:37:15 -0500", "msg_from": "Bryan Hinton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "I just used standard mkfs for each filesystem and mounted them without\noptions, unless otherwise specified.\n\nOn 6/4/10 1:37 PM, Bryan Hinton wrote:\n> What types of journaling on each fs?\n>\n>\n> On Fri, Jun 4, 2010 at 1:26 PM, Jon Schewe <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> On 6/4/10 9:33 AM, Andres Freund wrote:\n> > On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n> >\n> >> Andres Freund <[email protected] <mailto:[email protected]>>\n> writes:\n> >>\n> >>> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n> >>>\n> >>>> XFS (logbufs=8): ~4 hours to finish\n> >>>> ext4: ~1 hour 50 minutes to finish\n> >>>> ext3: 15 minutes to finish\n> >>>> ext3 on LVM: 15 minutes to finish\n> >>>>\n> >>> My guess is that some of the difference comes from barrier\n> differences.\n> >>> ext4 uses barriers by default, ext3 does not.\n> >>>\n> >> Or, to put it more clearly: the reason ext3 is fast is that\n> it's unsafe.\n> >>\n> > Jon: To verify you can enable it via the barrier=1 option during\n> mounting..\n> >\n> >\n> >\n> First some details:\n> Linux kernel 2.6.31\n> postgres version: 8.4.2\n>\n> More test results:\n> reiserfs: ~1 hour 50 minutes\n> ext3 barrier=1: ~15 minutes\n> ext4 nobarrier: ~15 minutes\n> jfs: ~15 minutes\n>\n> --\n> Jon Schewe | http://mtu.net/~jpschewe <http://mtu.net/%7Ejpschewe>\n> If you see an attachment named signature.asc, this is my digital\n> signature. See http://www.gnupg.org for more information.\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. See http://www.gnupg.org for more information.\n\n\n\n\n\n\n\nI just used standard mkfs for each filesystem and mounted them without\noptions, unless otherwise specified.\n\nOn 6/4/10 1:37 PM, Bryan Hinton wrote:\nWhat types of journaling on each fs?\n \n\n\nOn Fri, Jun 4, 2010 at 1:26 PM, Jon Schewe <[email protected]>\nwrote:\n\nOn 6/4/10 9:33 AM, Andres Freund wrote:\n> On Friday 04 June 2010 16:25:30 Tom Lane wrote:\n>\n>> Andres Freund <[email protected]> writes:\n>>\n>>> On Friday 04 June 2010 14:17:35 Jon Schewe wrote:\n>>>\n>>>> XFS (logbufs=8): ~4 hours to finish\n>>>> ext4: ~1 hour 50 minutes to finish\n>>>> ext3: 15 minutes to finish\n>>>> ext3 on LVM: 15 minutes to finish\n>>>>\n>>> My guess is that some of the difference comes from barrier\ndifferences.\n>>> ext4 uses barriers by default, ext3 does not.\n>>>\n>> Or, to put it more clearly: the reason ext3 is fast is that\nit's unsafe.\n>>\n> Jon: To verify you can enable it via the barrier=1 option during\nmounting..\n>\n>\n>\n\nFirst some details:\nLinux kernel 2.6.31\npostgres version: 8.4.2\n\nMore test results:\nreiserfs: ~1 hour 50 minutes\next3 barrier=1: ~15 minutes\next4 nobarrier: ~15 minutes\njfs: ~15 minutes\n\n--\nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. 
See http://www.gnupg.org for more information.\n\n\n--\n\n\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. See http://www.gnupg.org for more information.", "msg_date": "Fri, 04 Jun 2010 13:44:18 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On Friday 04 June 2010 20:26:27 Jon Schewe wrote:\n> ext3 barrier=1: ~15 minutes\n> ext4 nobarrier: ~15 minutes\nAny message in the kernel log about barriers or similar?\n\nAndres\n", "msg_date": "Fri, 4 Jun 2010 20:46:59 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On 6/4/10 1:46 PM, Andres Freund wrote:\n> On Friday 04 June 2010 20:26:27 Jon Schewe wrote:\n> \n>> ext3 barrier=1: ~15 minutes\n>> ext4 nobarrier: ~15 minutes\n>> \n> Any message in the kernel log about barriers or similar?\n>\n> \nNo.\n\n\n-- \nJon Schewe | http://mtu.net/~jpschewe\nIf you see an attachment named signature.asc, this is my digital\nsignature. See http://www.gnupg.org for more information.\n\n", "msg_date": "Fri, 04 Jun 2010 13:49:05 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "Jon Schewe wrote:\n> The tests were all done on an opensuse 11.2 64-bit machine,\n> on the same hard drive (just ran mkfs between each test) on the same\n> input with the same code base.\n\nSo no controller card, just the motherboard and a single hard drive? If \nthat's the case, what you've measured is which filesystems are safe \nbecause they default to flushing drive cache (the ones that take around \n15 minutes) and which do not (the ones that take >=around 2 hours). You \ncan't make ext3 flush the cache correctly no matter what you do with \nbarriers, they just don't work on ext3 the way PostgreSQL needs them to.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 05 Jun 2010 18:36:48 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On 06/05/2010 05:36 PM, Greg Smith wrote:\n> Jon Schewe wrote:\n>> The tests were all done on an opensuse 11.2 64-bit machine,\n>> on the same hard drive (just ran mkfs between each test) on the same\n>> input with the same code base.\n>\n> So no controller card, just the motherboard and a single hard drive?\nCorrect.\n> If that's the case, what you've measured is which filesystems are\n> safe because they default to flushing drive cache (the ones that take\n> around 15 minutes) and which do not (the ones that take >=around 2\n> hours). You can't make ext3 flush the cache correctly no matter what\n> you do with barriers, they just don't work on ext3 the way PostgreSQL\n> needs them to.\n>\nSo the 15 minute runs are doing it correctly and safely, but the slow\nones are doing the wrong thing? That would imply that ext3 is the safe\none. 
But your last statement suggests that ext3 is doing the wrong thing.\n", "msg_date": "Sat, 05 Jun 2010 17:41:04 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "Jon Schewe wrote:\n>> If that's the case, what you've measured is which filesystems are\n>> safe because they default to flushing drive cache (the ones that take\n>> around 15 minutes) and which do not (the ones that take >=around 2\n>> hours). You can't make ext3 flush the cache correctly no matter what\n>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>> needs them to.\n>>\n>> \n> So the 15 minute runs are doing it correctly and safely, but the slow\n> ones are doing the wrong thing? That would imply that ext3 is the safe\n> one. But your last statement suggests that ext3 is doing the wrong thing.\n> \n\nI goofed and reversed the two times when writing that. As is always the \ncase with this sort of thing, the unsafe runs are the fast ones. ext3 \ndoes not ever do the right thing no matter how you configure it, you \nhave to compensate for its limitations with correct hardware setup to \nmake database writes reliable.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 05 Jun 2010 18:52:55 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "\n\nOn 06/05/2010 05:52 PM, Greg Smith wrote:\n> Jon Schewe wrote:\n>>> If that's the case, what you've measured is which filesystems are\n>>> safe because they default to flushing drive cache (the ones that take\n>>> around 15 minutes) and which do not (the ones that take >=around 2\n>>> hours). You can't make ext3 flush the cache correctly no matter what\n>>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>>> needs them to.\n>>>\n>>> \n>> So the 15 minute runs are doing it correctly and safely, but the slow\n>> ones are doing the wrong thing? That would imply that ext3 is the safe\n>> one. But your last statement suggests that ext3 is doing the wrong\n>> thing.\n>> \n>\n> I goofed and reversed the two times when writing that. As is always\n> the case with this sort of thing, the unsafe runs are the fast ones. \n> ext3 does not ever do the right thing no matter how you configure it,\n> you have to compensate for its limitations with correct hardware setup\n> to make database writes reliable.\n>\nOK, so if I want the 15 minute speed, I need to give up safety (OK in\nthis case as this is just research testing), or see if I can tune\npostgres better.\n\n\n", "msg_date": "Sat, 05 Jun 2010 18:03:59 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On Sat, Jun 5, 2010 at 5:03 PM, Jon Schewe <[email protected]> wrote:\n>\n>\n> On 06/05/2010 05:52 PM, Greg Smith wrote:\n>> Jon Schewe wrote:\n>>>>   If that's the case, what you've measured is which filesystems are\n>>>> safe because they default to flushing drive cache (the ones that take\n>>>> around 15 minutes) and which do not (the ones that take >=around 2\n>>>> hours).  
You can't make ext3 flush the cache correctly no matter what\n>>>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>>>> needs them to.\n>>>>\n>>>>\n>>> So the 15 minute runs are doing it correctly and safely, but the slow\n>>> ones are doing the wrong thing? That would imply that ext3 is the safe\n>>> one. But your last statement suggests that ext3 is doing the wrong\n>>> thing.\n>>>\n>>\n>> I goofed and reversed the two times when writing that.  As is always\n>> the case with this sort of thing, the unsafe runs are the fast ones.\n>> ext3 does not ever do the right thing no matter how you configure it,\n>> you have to compensate for its limitations with correct hardware setup\n>> to make database writes reliable.\n>>\n> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n> this case as this is just research testing), or see if I can tune\n> postgres better.\n\nOr use a trustworthy hardware caching battery backed RAID controller,\neither in RAID mode or JBOD mode.\n", "msg_date": "Sat, 5 Jun 2010 17:54:37 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On 06/05/2010 06:54 PM, Scott Marlowe wrote:\n> On Sat, Jun 5, 2010 at 5:03 PM, Jon Schewe <[email protected]> wrote:\n> \n>>\n>> On 06/05/2010 05:52 PM, Greg Smith wrote:\n>> \n>>> Jon Schewe wrote:\n>>> \n>>>>> If that's the case, what you've measured is which filesystems are\n>>>>> safe because they default to flushing drive cache (the ones that take\n>>>>> around 15 minutes) and which do not (the ones that take >=around 2\n>>>>> hours). You can't make ext3 flush the cache correctly no matter what\n>>>>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>>>>> needs them to.\n>>>>>\n>>>>>\n>>>>> \n>>>> So the 15 minute runs are doing it correctly and safely, but the slow\n>>>> ones are doing the wrong thing? That would imply that ext3 is the safe\n>>>> one. But your last statement suggests that ext3 is doing the wrong\n>>>> thing.\n>>>>\n>>>> \n>>> I goofed and reversed the two times when writing that. 
As is always\n>>> the case with this sort of thing, the unsafe runs are the fast ones.\n>>> ext3 does not ever do the right thing no matter how you configure it,\n>>> you have to compensate for its limitations with correct hardware setup\n>>> to make database writes reliable.\n>>>\n>>> \n>> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n>> this case as this is just research testing), or see if I can tune\n>> postgres better.\n>> \n> Or use a trustworthy hardware caching battery backed RAID controller,\n> either in RAID mode or JBOD mode.\n> \nRight, because the real danger here is if the power goes out you can end\nup with a scrambled database, correct?\n\n", "msg_date": "Sat, 05 Jun 2010 18:58:58 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On Sat, Jun 5, 2010 at 5:58 PM, Jon Schewe <[email protected]> wrote:\n> On 06/05/2010 06:54 PM, Scott Marlowe wrote:\n>> On Sat, Jun 5, 2010 at 5:03 PM, Jon Schewe <[email protected]> wrote:\n>>\n>>>\n>>> On 06/05/2010 05:52 PM, Greg Smith wrote:\n>>>\n>>>> Jon Schewe wrote:\n>>>>\n>>>>>>   If that's the case, what you've measured is which filesystems are\n>>>>>> safe because they default to flushing drive cache (the ones that take\n>>>>>> around 15 minutes) and which do not (the ones that take >=around 2\n>>>>>> hours).  You can't make ext3 flush the cache correctly no matter what\n>>>>>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>>>>>> needs them to.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>> So the 15 minute runs are doing it correctly and safely, but the slow\n>>>>> ones are doing the wrong thing? That would imply that ext3 is the safe\n>>>>> one. But your last statement suggests that ext3 is doing the wrong\n>>>>> thing.\n>>>>>\n>>>>>\n>>>> I goofed and reversed the two times when writing that.  As is always\n>>>> the case with this sort of thing, the unsafe runs are the fast ones.\n>>>> ext3 does not ever do the right thing no matter how you configure it,\n>>>> you have to compensate for its limitations with correct hardware setup\n>>>> to make database writes reliable.\n>>>>\n>>>>\n>>> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n>>> this case as this is just research testing), or see if I can tune\n>>> postgres better.\n>>>\n>> Or use a trustworthy hardware caching battery backed RAID controller,\n>> either in RAID mode or JBOD mode.\n>>\n> Right, because the real danger here is if the power goes out you can end\n> up with a scrambled database, correct?\n\nCorrect. 
Assuming you can get power applied again before the battery\nin the RAID controller dies, it will then flush out its cache and your\ndata will still be coherent.\n", "msg_date": "Sat, 5 Jun 2010 18:02:18 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On 06/05/2010 07:02 PM, Scott Marlowe wrote:\n> On Sat, Jun 5, 2010 at 5:58 PM, Jon Schewe <[email protected]> wrote:\n> \n>> On 06/05/2010 06:54 PM, Scott Marlowe wrote:\n>> \n>>> On Sat, Jun 5, 2010 at 5:03 PM, Jon Schewe <[email protected]> wrote:\n>>>\n>>> \n>>>> On 06/05/2010 05:52 PM, Greg Smith wrote:\n>>>>\n>>>> \n>>>>> Jon Schewe wrote:\n>>>>>\n>>>>> \n>>>>>>> If that's the case, what you've measured is which filesystems are\n>>>>>>> safe because they default to flushing drive cache (the ones that take\n>>>>>>> around 15 minutes) and which do not (the ones that take >=around 2\n>>>>>>> hours). You can't make ext3 flush the cache correctly no matter what\n>>>>>>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>>>>>>> needs them to.\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> \n>>>>>> So the 15 minute runs are doing it correctly and safely, but the slow\n>>>>>> ones are doing the wrong thing? That would imply that ext3 is the safe\n>>>>>> one. But your last statement suggests that ext3 is doing the wrong\n>>>>>> thing.\n>>>>>>\n>>>>>>\n>>>>>> \n>>>>> I goofed and reversed the two times when writing that. As is always\n>>>>> the case with this sort of thing, the unsafe runs are the fast ones.\n>>>>> ext3 does not ever do the right thing no matter how you configure it,\n>>>>> you have to compensate for its limitations with correct hardware setup\n>>>>> to make database writes reliable.\n>>>>>\n>>>>>\n>>>>> \n>>>> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n>>>> this case as this is just research testing), or see if I can tune\n>>>> postgres better.\n>>>>\n>>>> \n>>> Or use a trustworthy hardware caching battery backed RAID controller,\n>>> either in RAID mode or JBOD mode.\n>>>\n>>> \n>> Right, because the real danger here is if the power goes out you can end\n>> up with a scrambled database, correct?\n>> \n> Correct. Assuming you can get power applied again before the battery\n> in the RAID controller dies, it will then flush out its cache and your\n> data will still be coherent.\n> \nOr if you really don't care if your database is scrambled after a power\noutage you can go without the battery backed RAID controller.\n\n", "msg_date": "Sat, 05 Jun 2010 19:07:08 -0500", "msg_from": "Jon Schewe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On Sat, Jun 5, 2010 at 6:07 PM, Jon Schewe <[email protected]> wrote:\n> On 06/05/2010 07:02 PM, Scott Marlowe wrote:\n>> On Sat, Jun 5, 2010 at 5:58 PM, Jon Schewe <[email protected]> wrote:\n>>\n>>> On 06/05/2010 06:54 PM, Scott Marlowe wrote:\n>>>\n>>>> On Sat, Jun 5, 2010 at 5:03 PM, Jon Schewe <[email protected]> wrote:\n>>>>\n>>>>\n>>>>> On 06/05/2010 05:52 PM, Greg Smith wrote:\n>>>>>\n>>>>>\n>>>>>> Jon Schewe wrote:\n>>>>>>\n>>>>>>\n>>>>>>>>   If that's the case, what you've measured is which filesystems are\n>>>>>>>> safe because they default to flushing drive cache (the ones that take\n>>>>>>>> around 15 minutes) and which do not (the ones that take >=around 2\n>>>>>>>> hours).  
You can't make ext3 flush the cache correctly no matter what\n>>>>>>>> you do with barriers, they just don't work on ext3 the way PostgreSQL\n>>>>>>>> needs them to.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>> So the 15 minute runs are doing it correctly and safely, but the slow\n>>>>>>> ones are doing the wrong thing? That would imply that ext3 is the safe\n>>>>>>> one. But your last statement suggests that ext3 is doing the wrong\n>>>>>>> thing.\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>> I goofed and reversed the two times when writing that.  As is always\n>>>>>> the case with this sort of thing, the unsafe runs are the fast ones.\n>>>>>> ext3 does not ever do the right thing no matter how you configure it,\n>>>>>> you have to compensate for its limitations with correct hardware setup\n>>>>>> to make database writes reliable.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n>>>>> this case as this is just research testing), or see if I can tune\n>>>>> postgres better.\n>>>>>\n>>>>>\n>>>> Or use a trustworthy hardware caching battery backed RAID controller,\n>>>> either in RAID mode or JBOD mode.\n>>>>\n>>>>\n>>> Right, because the real danger here is if the power goes out you can end\n>>> up with a scrambled database, correct?\n>>>\n>> Correct.  Assuming you can get power applied again before the battery\n>> in the RAID controller dies, it will then flush out its cache and your\n>> data will still be coherent.\n>>\n> Or if you really don't care if your database is scrambled after a power\n> outage you can go without the battery backed RAID controller.\n\nI do that all the time. On slony replication slaves. You can use a\nconsiderably less powerful machine, IO wise, with fsync disabled and a\nhandful of cheap SATA drives.\n", "msg_date": "Sat, 5 Jun 2010 21:34:08 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "Jon Schewe wrote:\n\n> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n> this case as this is just research testing), or see if I can tune\n> postgres better.\n\nDepending on your app, one more possibility would be to see if you\ncan re-factor the application so it can do multiple writes in parallel\nrather than waiting for each one to complete. If I understand right,\nthen many transactions could potentially be handled by a single fsync.\n", "msg_date": "Sat, 05 Jun 2010 23:51:08 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" }, { "msg_contents": "On 06/06/10 14:51, Ron Mayer wrote:\n> Jon Schewe wrote:\n>\n>> OK, so if I want the 15 minute speed, I need to give up safety (OK in\n>> this case as this is just research testing), or see if I can tune\n>> postgres better.\n>\n> Depending on your app, one more possibility would be to see if you\n> can re-factor the application so it can do multiple writes in parallel\n> rather than waiting for each one to complete. If I understand right,\n> then many transactions could potentially be handled by a single fsync.\n\nBy using a commit delay, yes. (see postgresql.conf). You do open up the \nrisk of losing transactions committed within the commit delay period, \nbut you don't risk corruption like you do with fsync.\n\nSometimes you can also batch work into bigger transactions. 
The classic \nexample here is the usual long stream of individual auto-committed \nINSERTs, which when wrapped in an explicit transaction can be vastly \nquicker.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 06 Jun 2010 15:48:19 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How filesystems matter with PostgreSQL" } ]
[ { "msg_contents": "Hi.\n\nI hope I'm not going to expose an already known problem, but I couldn't find \nit mailing list archives (I only found http://archives.postgresql.org/pgsql-\nhackers/2009-12/msg01543.php).\n\nOn one of my (non production) machines, I've just seen a very big performance \nregression (I was doing a very simple insert test). I had an 'old' 8.4 \npostgresql compiled a few month ago, performing very well, and my 'bleeding \nedge' 9.0, doing the same insert very slowly.\n\nI managed to find the cause of the regression : with Linux 2.6.33, O_DSYNC is \nnow available. With glibc 2.12, O_DSYNC is available in userspace. Having both \n(they are both very new, 2.12 isn't even official on glibc website), my new \nbuild defaulted to open_datasync. The problem is that it is much slower. I \ntested it on 2 small machines (no big raid, just basic machines, with SATA or \nsoftware RAID).\n\nHere is the trivial test :\nThe configuration is the default configuration, just after initdb\n\nCREATE TABLE test (a int);\nCREATE INDEX idxtest on test (a);\n\n\n\nwith wal_sync_method = open_datasync (new default)\n\nmarc=# INSERT INTO test SELECT generate_series(1,100000);\nINSERT 0 100000\nTime: 16083,912 ms\n\nwith wal_sync_method = fdatasync (old default)\n\nmarc=# INSERT INTO test SELECT generate_series(1,100000);\nINSERT 0 100000\nTime: 954,000 ms\n\nDoing synthetic benchmarks with test_fsync:\n\nopen_datasync performance, glibc 2.12, 2.6.34, 1 SATA drive\n\nSimple 8k write timing:\nwrite 0.037511\n\nCompare file sync methods using one 8k write:\nopen_datasync write 56.998797\nopen_sync write 168.653995\nwrite, fdatasync 55.359279\nwrite, fsync 166.854911\n\nCompare file sync methods using two 8k writes:\nopen_datasync write, write 113.342738\nopen_sync write, write 339.066883\nwrite, write, fdatasync 57.336820\nwrite, write, fsync 166.847923\n\nCompare open_sync sizes: \n16k open_sync write 169.423723 \n2 8k open_sync writes 336.457119 \n\nCompare fsync times on write() and new file descriptors (if the times\nare similar, fsync() can sync data written on a different descriptor):\nwrite, fsync, close 166.264048\nwrite, close, fsync 168.702035\n\nThis is it, I just wanted to raise an alert on this: the degradation was 16-\nfold with this test. 
We wont see linux 2.6.33 + glibc 2.12 in production \nbefore months (I hope), but shouldn't PostgreSQL use fdatasync by default with \nLinux, seeing the result ?\n\nBy the way, I re-did my tests with both 2.6.33, 2.6.34 and 2.6.35-rc1 and got \nthe exact same result (O_DSYNC there, obviously, but also the performance \ndegradation).\n\nCheers\n\nMarc\n", "msg_date": "Fri, 4 Jun 2010 15:39:03 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "performance regression with Linux 2.6.33 and glibc 2.12" }, { "msg_contents": "Marc Cousin <[email protected]> writes:\n> I hope I'm not going to expose an already known problem, but I couldn't find \n> it mailing list archives (I only found http://archives.postgresql.org/pgsql-\n> hackers/2009-12/msg01543.php).\n\nYou sure this isn't the well-known \"ext4 actually implements fsync\nwhere ext3 didn't\" issue?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jun 2010 09:59:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance regression with Linux 2.6.33 and glibc 2.12 " }, { "msg_contents": "On Friday 04 June 2010 15:59:05 Tom Lane wrote:\n> Marc Cousin <[email protected]> writes:\n> > I hope I'm not going to expose an already known problem, but I couldn't\n> > find it mailing list archives (I only found\n> > http://archives.postgresql.org/pgsql- hackers/2009-12/msg01543.php).\n> \n> You sure this isn't the well-known \"ext4 actually implements fsync\n> where ext3 didn't\" issue?\nI doubt it. It reads to me like he is testing the two methods on the same \ninstallation with the same kernel \n\n> > with wal_sync_method = open_datasync (new default)\n> > marc=# INSERT INTO test SELECT generate_series(1,100000);\n> > INSERT 0 100000\n> > Time: 16083,912 ms\n> > \n> > with wal_sync_method = fdatasync (old default)\n> > \n> > marc=# INSERT INTO test SELECT generate_series(1,100000);\n> > INSERT 0 100000\n> > Time: 954,000 ms\nIts not actually surprising that in such a open_datasync is hugely slower than \nfdatasync. With open_datasync every single write will be synchronous, very \nlikely not reordered/batched/whatever. In contrast to that with fdatasync it \nwill only synced in way much bigger batches.\n\nOr am I missing something?\n\nI always thought the synchronous write methods to be a fallback kludge and \ndidnt realize its actually the preferred method...\n\nAndres\n", "msg_date": "Fri, 4 Jun 2010 17:25:05 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance regression with Linux 2.6.33 and glibc 2.12" }, { "msg_contents": "The Friday 04 June 2010 15:59:05, Tom Lane wrote :\n> Marc Cousin <[email protected]> writes:\n> > I hope I'm not going to expose an already known problem, but I couldn't\n> > find it mailing list archives (I only found\n> > http://archives.postgresql.org/pgsql- hackers/2009-12/msg01543.php).\n> \n> You sure this isn't the well-known \"ext4 actually implements fsync\n> where ext3 didn't\" issue?\n> \n> \t\t\tregards, tom lane\n\nEverything is ext4. So I should have fsync working with write barriers on all \nthe tests.\n\nI don't think this problem is of the same kind: I think it is really because \nof O_DSYNC appearing on 2.6.33, and PostgreSQL using it by default now. 
If my \nfilesystem was lying to me about barriers, I should take no more performance \nhit with open_datasync than with fdatasync, should I ?\n", "msg_date": "Fri, 4 Jun 2010 17:29:04 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance regression with Linux 2.6.33 and glibc 2.12" } ]
[ { "msg_contents": "Hi,\nI was doing some benchmarking while changing configuration options to try to get more performance out of our postgresql servers and noticed that when running pgbench against 8.4.3 vs 8.4.4 on identical hardware and configuration there is a large difference in performance. I know tuning is a very deep topic and benchmarking is hardly an accurate indication of real world performance but I was still surprised by these results and wanted to know what I am doing wrong.\n\nOS is CentOS 5.5 and the postgresql packages are from the pgdg repo.\n\nHardware specs are:\n2x Quad core Xeons 2.4Ghz\n16GB RAM\n2x RAID1 7.2k RPM disks (slow I know, but we are upgrading them soon..)\n\nRelevant Postgresql Configuration:\nmax_connections = 1000\nshared_buffers = 4096MB\ntemp_buffers = 8MB\nmax_prepared_transactions = 1000\nwork_mem = 8MB\nmaintenance_work_mem = 512MB\nwal_buffers = 8MB\ncheckpoint_segments = 192\ncheckpoint_timeout = 30min\neffective_cache_size = 12288MB\n\nResults for the 8.4.3 (8.4.3-2PGDG.el5) host:\n[root@some-host ~]# pgbench -h dbs3 -U postgres -i -s 100 pgbench1 > /dev/null 2>&1 && pgbench -h dbs3 -U postgres -c 100 -t 100000 pgbench1\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 100\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 10000000/10000000\ntps = 5139.554921 (including connections establishing)\ntps = 5140.325850 (excluding connections establishing)\nopreport:\nCPU: Intel Core/i7, speed 2394.07 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000\nCPU_CLK_UNHALT...|\n samples| %|\n------------------\n 37705832 61.3683 postgres\n 18472598 30.0652 no-vmlinux\n 4982274 8.1089 libc-2.5.so\n 138517 0.2254 oprofiled\n 134628 0.2191 libm-2.5.so\n 1465 0.0024 libc-2.5.so\n 1454 0.0024 libperl.so\n 793 0.0013 libdcsupt.so.5.9.2\n 444 7.2e-04 dsm_sa_datamgrd\n CPU_CLK_UNHALT...|\n samples| %|\n ------------------\n 401 90.3153 dsm_sa_datamgrd\n 43 9.6847 anon (tgid:8013 range:0xffffe000-0xfffff000)\n 410 6.7e-04 libxml2.so.2.6.26\n 356 5.8e-04 ld-2.5.so\n 332 5.4e-04 libnetsnmp.so.10.0.3\n 327 5.3e-04 dsm_sa_snmpd\n CPU_CLK_UNHALT...|\n samples| %|\n ------------------\n 255 77.9817 dsm_sa_snmpd\n 72 22.0183 anon (tgid:8146 range:0xffffe000-0xfffff000)\n 304 4.9e-04 libcrypto.so.0.9.8e\n 290 4.7e-04 libpthread-2.5.so\n 199 3.2e-04 libdcsmil.so.5.9.2\n 139 2.3e-04 modclusterd\n<snip>\n\nResults for the 8.4.4 (8.4.4-1PGDG.el5) host:\n[root@ some-host ~]# pgbench -h dbs4 -U postgres -i -s 100 pgbench1 > /dev/null 2>&1 && pgbench -h dbs4 -U postgres -c 100 -t 100000 pgbench1\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 100\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 10000000/10000000\ntps = 2765.643549 (including connections establishing)\ntps = 2765.931203 (excluding connections establishing)\nopreport:\nCPU: Intel Core/i7, speed 2394.07 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000\nCPU_CLK_UNHALT...|\n samples| %|\n------------------\n312481395 84.5038 postgres\n 41861164 11.3204 no-vmlinux\n 14290652 3.8646 libc-2.5.so\n 812148 0.2196 oprofiled\n 305909 0.0827 libm-2.5.so\n 7647 0.0021 libc-2.5.so\n 3809 0.0010 libdcsupt.so.5.9.2\n 3077 8.3e-04 libperl.so\n 2302 6.2e-04 
dsm_sa_datamgrd\n CPU_CLK_UNHALT...|\n samples| %|\n ------------------\n 2113 91.7897 dsm_sa_datamgrd\n 189 8.2103 anon (tgid:8075 range:0xffffe000-0xfffff000)\n 2175 5.9e-04 libxml2.so.2.6.26\n 1455 3.9e-04 dsm_sa_snmpd\n CPU_CLK_UNHALT...|\n samples| %|\n ------------------\n 1226 84.2612 dsm_sa_snmpd\n 229 15.7388 anon (tgid:8208 range:0xffffe000-0xfffff000)\n 1227 3.3e-04 libdchipm.so.5.9.2\n 1192 3.2e-04 libpthread-2.5.so\n 804 2.2e-04 libnetsnmp.so.10.0.3\n 745 2.0e-04 modclusterd\n<snip>\n\nAny input? I can reproduce these numbers consistently. If you need more information then just let me know. By the way, I am a new postgresql user so my experience is limited.\nCheers,\nMax", "msg_date": "Wed, 9 Jun 2010 11:56:16 +0100", "msg_from": "Max Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Large (almost 50%!) performance drop after upgrading to 8.4.4?" }, { "msg_contents": "\nCan you give the config params for those :\n\nfsync =\nsynchronous_commit =\nwal_sync_method =\n\nAlso, some \"vmstat 1\" output during the runs would be interesting.\n", "msg_date": "Wed, 09 Jun 2010 13:35:42 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading to\n 8.4.4?" },
{ "msg_contents": "On Wed, Jun 9, 2010 at 6:56 AM, Max Williams <[email protected]> wrote:\n> Any input? I can reproduce these numbers consistently. If you need more\n> information then just let me know. By the way, I am a new postgresql user so\n> my experience is limited.\n\nMaybe different compile options? If we'd really slowed things down by\n50% between 8.4.3 and 8.4.4, there'd be an awful lot of people\nscreaming about it...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 9 Jun 2010 21:51:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading to\n\t8.4.4?" }, { "msg_contents": "Well the packages are from the pgdg repo which I would have thought are pretty common?\nhttps://public.commandprompt.com/projects/pgcore/wiki\n\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: 10 June 2010 02:52\nTo: Max Williams\nCc: [email protected]\nSubject: Re: [PERFORM] Large (almost 50%!) 
performance drop after upgrading to 8.4.4?\n\nOn Wed, Jun 9, 2010 at 6:56 AM, Max Williams <[email protected]> wrote:\n> Any input? I can reproduce these numbers consistently. If you need more\n> information then just let me know. By the way, I am a new postgresql user so\n> my experience is limited.\n\nMaybe different compile options? If we'd really slowed things down by\n50% between 8.4.3 and 8.4.4, there'd be an awful lot of people\nscreaming about it...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 10 Jun 2010 09:18:12 +0100", "msg_from": "Max Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading\n to \t8.4.4?" }, { "msg_contents": "On Wed, 2010-06-09 at 21:51 -0400, Robert Haas wrote:\n> On Wed, Jun 9, 2010 at 6:56 AM, Max Williams <[email protected]>\n> wrote:\n> > Any input? I can reproduce these numbers consistently. If you need\n> more\n> > information then just let me know. By the way, I am a new postgresql\n> user so\n> > my experience is limited.\n> \n> Maybe different compile options? If we'd really slowed things down by\n> 50% between 8.4.3 and 8.4.4, there'd be an awful lot of people\n> screaming about it... \n\nGiven that there are 2 recent reports on the same issue, I wonder if the\nnew packages were built with debugging options or not.\n\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Thu, 10 Jun 2010 11:29:58 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading\n to 8.4.4?" }, { "msg_contents": "How do I tell if it was built with debugging options?\r\n\r\n\r\n-----Original Message-----\r\nFrom: Devrim GÜNDÜZ [mailto:[email protected]] \r\nSent: 10 June 2010 09:30\r\nTo: Robert Haas\r\nCc: Max Williams; [email protected]\r\nSubject: Re: [PERFORM] Large (almost 50%!) performance drop after upgrading to 8.4.4?\r\n\r\nOn Wed, 2010-06-09 at 21:51 -0400, Robert Haas wrote:\r\n> On Wed, Jun 9, 2010 at 6:56 AM, Max Williams <[email protected]>\r\n> wrote:\r\n> > Any input? I can reproduce these numbers consistently. If you need\r\n> more\r\n> > information then just let me know. By the way, I am a new postgresql\r\n> user so\r\n> > my experience is limited.\r\n> \r\n> Maybe different compile options? If we'd really slowed things down by \r\n> 50% between 8.4.3 and 8.4.4, there'd be an awful lot of people \r\n> screaming about it...\r\n\r\nGiven that there are 2 recent reports on the same issue, I wonder if the new packages were built with debugging options or not.\r\n\r\n--\r\nDevrim GÜNDÜZ\r\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer PostgreSQL RPM Repository: http://yum.pgrpms.org\r\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr http://www.gunduz.org Twitter: http://twitter.com/devrimgunduz\r\n", "msg_date": "Thu, 10 Jun 2010 11:19:41 +0100", "msg_from": "Max Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading\n to 8.4.4?" 
}, { "msg_contents": "Max Williams <[email protected]> writes:\n> How do I tell if it was built with debugging options?\n\nRun pg_config --configure and see if --enable-cassert is mentioned.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jun 2010 10:10:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading to 8.4.4? " }, { "msg_contents": "I'm afraid pg_config is not part of the pgdg packages.\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: 10 June 2010 15:11\nTo: Max Williams\nCc: [email protected]\nSubject: Re: [PERFORM] Large (almost 50%!) performance drop after upgrading to 8.4.4? \n\nMax Williams <[email protected]> writes:\n> How do I tell if it was built with debugging options?\n\nRun pg_config --configure and see if --enable-cassert is mentioned.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jun 2010 16:09:02 +0100", "msg_from": "Max Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading\n to 8.4.4?" }, { "msg_contents": "Max Williams <[email protected]> wrote:\n \n> I'm afraid pg_config is not part of the pgdg packages.\n \nConnect (using psql or your favorite client) and run:\n \nshow debug_assertions;\n \n-Kevin\n", "msg_date": "Thu, 10 Jun 2010 10:15:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after\n\t upgrading to 8.4.4?" }, { "msg_contents": "Max Williams <[email protected]> writes:\n> I'm afraid pg_config is not part of the pgdg packages.\n\nSure it is. They might've put it in the -devel subpackage, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jun 2010 11:21:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading to 8.4.4? " }, { "msg_contents": "Ah, yes its OFF for 8.4.3 and ON for 8.4.4!\n\nCan I just turn this off on 8.4.4 or is it a compile time option?\nAlso is this a mistake or intended? Perhaps I should tell the person who builds the pgdg packages??\n\nCheers,\nMax\n\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: 10 June 2010 16:16\nTo: Max Williams; [email protected]\nSubject: Re: [PERFORM] Large (almost 50%!) performance drop after upgrading to 8.4.4?\n\nMax Williams <[email protected]> wrote:\n \n> I'm afraid pg_config is not part of the pgdg packages.\n \nConnect (using psql or your favorite client) and run:\n \nshow debug_assertions;\n \n-Kevin\n", "msg_date": "Thu, 10 Jun 2010 16:40:06 +0100", "msg_from": "Max Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large (almost 50%!) performance drop after\t upgrading\n to 8.4.4?" }, { "msg_contents": "Max Williams <[email protected]> writes:\n> Ah, yes its OFF for 8.4.3 and ON for 8.4.4!\n\nHah.\n\n> Can I just turn this off on 8.4.4 or is it a compile time option?\n\nWell, you can turn it off, but that will only buy back part of the\ncost (and not even the bigger part, I believe).\n\n> Also is this a mistake or intended? Perhaps I should tell the person who builds the pgdg packages??\n\nYes, the folks at commandprompt need to be told about this. 
Loudly.\nIt's a serious packaging error.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jun 2010 11:46:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading to 8.4.4? " }, { "msg_contents": "Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n\n> Yes, the folks at commandprompt need to be told about this. Loudly.\n> It's a serious packaging error.\n\nJust notified Lacey, the packager (not so loudly, though); she's working\non new packages, and apologizes for the inconvenience.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 10 Jun 2010 12:34:47 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading to 8.4.4?" }, { "msg_contents": "Alvaro Herrera wrote:\n> Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n>\n>> Yes, the folks at commandprompt need to be told about this. Loudly.\n>> It's a serious packaging error.\n>\n> Just notified Lacey, the packager (not so loudly, though); she's working\n> on new packages, and apologizes for the inconvenience.\n>\n\nHello Everyone,\n\nNew packages for 8.4.4 on CentOS 5.5 and RHEL 5.5 (all arches), have \nbeen built, and are available in the PGDG repo.\n\nhttp://yum.pgsqlrpms.org/8.4/redhat/rhel-5-i386/\nhttp://yum.pgsqlrpms.org/8.4/redhat/rhel-5-x86_64/\n\nOutput from pg_config --configure --version is below.\n\nx86_64:\n\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' \n'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' \n'--with-openssl' '--with-pam' '--with-krb5' '--with-gssapi' \n'--with-includes=/usr/include' '--with-libraries=/usr/lib64' \n'--enable-nls' '--enable-thread-safety' '--with-libxml' '--with-libxslt' \n'--with-ldap' '--with-system-tzdata=/usr/share/zoneinfo' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=/usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' \n'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall \n-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et'\nPostgreSQL 8.4.4\n\ni386:\n\n'--build=i686-redhat-linux-gnu' '--host=i686-redhat-linux-gnu' \n'--target=i386-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' \n'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib' \n'--with-openssl' '--with-pam' '--with-krb5' '--with-gssapi' \n'--with-includes=/usr/include' '--with-libraries=/usr/lib' \n'--enable-nls' 
'--enable-thread-safety' '--with-libxml' '--with-libxslt' \n'--with-ldap' '--with-system-tzdata=/usr/share/zoneinfo' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=/usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'build_alias=i686-redhat-linux-gnu' \n'host_alias=i686-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu' \n'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions \n-fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 \n-mtune=generic -fasynchronous-unwind-tables -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et'\nPostgreSQL 8.4.4\n\nAgain, I extend deep apologies for the inconvenience.\n\nIf there is anything further we can help with, please let us know.\n\nRegards,\n\nLacey\n\n-- \nLacey Powers\n\nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 ext 104\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Thu, 10 Jun 2010 11:01:30 -0700", "msg_from": "Lacey Powers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after upgrading\n to 8.4.4?" }, { "msg_contents": "Max Williams wrote:\n> Can I just turn this off on 8.4.4 or is it a compile time option\n\nYou can update your postgresql.conf to include:\n\ndebug_assertions = false\n\nAnd restart the server. This will buy you back *some* of the \nperformance loss but not all of it. Will have to wait for corrected \npackaged to make the issue completely go away.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 10 Jun 2010 16:15:44 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop after\t upgrading\n to 8.4.4?" }, { "msg_contents": "Greg Smith <greg <at> 2ndquadrant.com> writes:\n\n> \n> Max Williams wrote:\n> > Can I just turn this off on 8.4.4 or is it a compile time option\n> \n> You can update your postgresql.conf to include:\n> \n> debug_assertions = false\n> \n> And restart the server. This will buy you back *some* of the \n> performance loss but not all of it. Will have to wait for corrected \n> packaged to make the issue completely go away.\n> \n\n\nAh! I am so thankful I found this thread. We've been having the same issues \ndescribed here. And when I do a SHOW debug_assertions I get:\n\n\npostgres=# show debug_assertions;\n debug_assertions\n------------------\n on\n(1 row)\n\n\nCan you let us know when the corrected packages have become available?\n\nRegards,\nJohn\n\n\n", "msg_date": "Fri, 11 Jun 2010 16:34:30 +0000 (UTC)", "msg_from": "John Reeve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large (almost 50%!) performance drop =?utf-8?b?YWZ0ZXIJ?=\n\tupgrading to 8.4.4?" }, { "msg_contents": "Alvaro Herrera wrote:\n> Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n> \n> > Yes, the folks at commandprompt need to be told about this. Loudly.\n> > It's a serious packaging error.\n> \n> Just notified Lacey, the packager (not so loudly, though); she's working\n> on new packages, and apologizes for the inconvenience.\n\n[ Thread moved to hackers. 8.4.4 RPMs were built with debug flags. ]\n\nUh, where are we on this? Has it been completed? How are people\ninformed about this? Do we need to post to the announce email list? \nDoes Yum just update them? How did this mistake happen? 
How many days\ndid it take to detect the problem?\n\nWhy has no news been posted here?\n\n\thttps://public.commandprompt.com/projects/pgcore/news\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Sat, 12 Jun 2010 21:08:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Large (almost 50%!) performance drop after\n\tupgrading to 8.4.4?" }, { "msg_contents": "Bruce Momjian wrote:\n> Alvaro Herrera wrote:\n> > Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n> > \n> > > Yes, the folks at commandprompt need to be told about this. Loudly.\n> > > It's a serious packaging error.\n> > \n> > Just notified Lacey, the packager (not so loudly, though); she's working\n> > on new packages, and apologizes for the inconvenience.\n> \n> [ Thread moved to hackers. 8.4.4 RPMs were built with debug flags. ]\n> \n> Uh, where are we on this? Has it been completed? How are people\n> informed about this? Do we need to post to the announce email list? \n> Does Yum just update them? How did this mistake happen? How many days\n> did it take to detect the problem?\n> \n> Why has no news been posted here?\n> \n> \thttps://public.commandprompt.com/projects/pgcore/news\n\nWhy have I received no reply to this email? Do people think this is not\na serious issue? I know it is a weekend but the problem was identified\non Thursday, meaning there was a full workday for someone from\nCommandPrompt to reply to the issue and report a status:\n\n\thttp://archives.postgresql.org/pgsql-performance/2010-06/msg00165.php\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Sun, 13 Jun 2010 10:00:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PERFORM] Large (almost 50%!) performance\n\tdrop after upgrading to 8.4.4?" }, { "msg_contents": "Bruce Momjian wrote:\n> Bruce Momjian wrote:\n> > Alvaro Herrera wrote:\n> > > Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n> > > \n> > > > Yes, the folks at commandprompt need to be told about this. Loudly.\n> > > > It's a serious packaging error.\n> > > \n> > > Just notified Lacey, the packager (not so loudly, though); she's working\n> > > on new packages, and apologizes for the inconvenience.\n> > \n> > [ Thread moved to hackers. 8.4.4 RPMs were built with debug flags. ]\n> > \n> > Uh, where are we on this? Has it been completed? How are people\n> > informed about this? Do we need to post to the announce email list? \n> > Does Yum just update them? How did this mistake happen? How many days\n> > did it take to detect the problem?\n> > \n> > Why has no news been posted here?\n> > \n> > \thttps://public.commandprompt.com/projects/pgcore/news\n> \n> Why have I received no reply to this email? Do people think this is not\n> a serious issue? I know it is a weekend but the problem was identified\n> on Thursday, meaning there was a full workday for someone from\n> CommandPrompt to reply to the issue and report a status:\n> \n> \thttp://archives.postgresql.org/pgsql-performance/2010-06/msg00165.php\n\n[ Updated subject line.]\n\nI am on IM with Joshua Drake right now and am working to get answers to\nthe questions above. 
He or I will report in the next few hours.\n\nFYI, only Command Prompt-produced RPMs are affected. Devrim's RPMs are\nnot:\n\n\thttp://yum.postgresqlrpms.org/\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Sun, 13 Jun 2010 19:57:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "Bruce Momjian wrote:\n> Bruce Momjian wrote:\n> > Bruce Momjian wrote:\n> > > Alvaro Herrera wrote:\n> > > > Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n> > > > \n> > > > > Yes, the folks at commandprompt need to be told about this. Loudly.\n> > > > > It's a serious packaging error.\n> > > > \n> > > > Just notified Lacey, the packager (not so loudly, though); she's working\n> > > > on new packages, and apologizes for the inconvenience.\n> > > \n> > > [ Thread moved to hackers. 8.4.4 RPMs were built with debug flags. ]\n> > > \n> > > Uh, where are we on this? Has it been completed? How are people\n> > > informed about this? Do we need to post to the announce email list? \n> > > Does Yum just update them? How did this mistake happen? How many days\n> > > did it take to detect the problem?\n> > > \n> > > Why has no news been posted here?\n> > > \n> > > \thttps://public.commandprompt.com/projects/pgcore/news\n> > \n> > Why have I received no reply to this email? Do people think this is not\n> > a serious issue? I know it is a weekend but the problem was identified\n> > on Thursday, meaning there was a full workday for someone from\n> > CommandPrompt to reply to the issue and report a status:\n> > \n> > \thttp://archives.postgresql.org/pgsql-performance/2010-06/msg00165.php\n> \n> [ Updated subject line.]\n> \n> I am on IM with Joshua Drake right now and am working to get answers to\n> the questions above. He or I will report in the next few hours.\n> \n> FYI, only Command Prompt-produced RPMs are affected. Devrim's RPMs are\n> not:\n> \n> \thttp://yum.postgresqlrpms.org/\n\nI have still seen no public report about this, 12 hours after talking to\nJosh Drake on IM about it. :-(\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Mon, 14 Jun 2010 07:14:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Command Prompt 8.4.4 PRMs compiled with debug/assert\n enabled" }, { "msg_contents": "Excerpts from Bruce Momjian's message of dom jun 13 10:00:16 -0400 2010:\n\n> Why have I received no reply to this email? Do people think this is not\n> a serious issue? I know it is a weekend but the problem was identified\n> on Thursday, meaning there was a full workday for someone from\n> CommandPrompt to reply to the issue and report a status:\n> \n> http://archives.postgresql.org/pgsql-performance/2010-06/msg00165.php\n\nThe packager did reply to the original inquiry *on the same day*, but\nthe moderator has not approved that email yet, it seems. 
I do have the\nreply on my mbox, with CC: pgsql-performance.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 14 Jun 2010 11:19:16 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PERFORM] Large (almost 50%!) performance drop after\n\tupgrading to 8.4.4?" }, { "msg_contents": "Bruce Momjian wrote:\n> Bruce Momjian wrote:\n>> Bruce Momjian wrote:\n>>> Bruce Momjian wrote:\n>>>> Alvaro Herrera wrote:\n>>>>> Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:\n>>>>>\n>>>>>> Yes, the folks at commandprompt need to be told about this. Loudly.\n>>>>> Just notified Lacey, the packager (not so loudly, though); she's working\n>>>>> on new packages, and apologizes for the inconvenience.\n>>>> [ Thread moved to hackers. 8.4.4 RPMs were built with debug flags. ]\n>>>>\n>>>> Uh, where are we on this? Has it been completed? How are people\n>>>> informed about this? Do we need to post to the announce email list? \n>>>> Does Yum just update them? How did this mistake happen? How many days\n>>>> did it take to detect the problem?\n>>>>\n>>>> Why has no news been posted here?\n>>>>\n>>>> \thttps://public.commandprompt.com/projects/pgcore/news\n>>> Why have I received no reply to this email? Do people think this is not\n>>> a serious issue? I know it is a weekend but the problem was identified\n>>> on Thursday, meaning there was a full workday for someone from\n>>> CommandPrompt to reply to the issue and report a status:\n>>>\n>>> \thttp://archives.postgresql.org/pgsql-performance/2010-06/msg00165.php\n>> [ Updated subject line.]\n>>\n>> I am on IM with Joshua Drake right now and am working to get answers to\n>> the questions above. He or I will report in the next few hours.\n>>\n>> FYI, only Command Prompt-produced RPMs are affected. Devrim's RPMs are\n>> not:\n>>\n>> \thttp://yum.postgresqlrpms.org/\n>\n> I have still seen no public report about this, 12 hours after talking to\n> Josh Drake on IM about it. :-(\n>\n\n\nHello Everyone,\n\nI tried to send something out Thursday about this to pgsql-performance, \nand I tried to send something out last night about this to \npgsql-announce. Neither seem to have gotten through, or approved. 
=( =( =(\n\nThursday to the Performance List:\n\nHello Everyone,\n\nNew packages for 8.4.4 on CentOS 5.5 and RHEL 5.5 (all arches), have \nbeen built, and are available in the PGDG repo.\n\nhttp://yum.pgsqlrpms.org/8.4/redhat/rhel-5-i386/\nhttp://yum.pgsqlrpms.org/8.4/redhat/rhel-5-x86_64/\n\nOutput from pg_config --configure --version is below.\n\nx86_64:\n\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' \n'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' \n'--with-openssl' '--with-pam' '--with-krb5' '--with-gssapi' \n'--with-includes=/usr/include' '--with-libraries=/usr/lib64' \n'--enable-nls' '--enable-thread-safety' '--with-libxml' '--with-libxslt' \n'--with-ldap' '--with-system-tzdata=/usr/share/zoneinfo' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=/usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' \n'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall \n-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et'\nPostgreSQL 8.4.4\n\ni386:\n\n'--build=i686-redhat-linux-gnu' '--host=i686-redhat-linux-gnu' \n'--target=i386-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' \n'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib' \n'--with-openssl' '--with-pam' '--with-krb5' '--with-gssapi' \n'--with-includes=/usr/include' '--with-libraries=/usr/lib' \n'--enable-nls' '--enable-thread-safety' '--with-libxml' '--with-libxslt' \n'--with-ldap' '--with-system-tzdata=/usr/share/zoneinfo' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=/usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'build_alias=i686-redhat-linux-gnu' \n'host_alias=i686-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu' \n'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions \n-fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 \n-mtune=generic -fasynchronous-unwind-tables -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et'\nPostgreSQL 8.4.4\n\nAgain, I extend deep apologies for the inconvenience.\n\nIf there is anything further we can help with, please let us know.\n\nRegards,\n\nLacey\n\n\n\nAnd last night, for a public announcement:\n\n\nDear PostgreSQL RPMS users,\n\nThere was a mistake with the 8.4.4 packages resulting in --enable-debug \nand --enable-cassert being enabled in the packages for CentOS 5.5 x86_64 \nand i386.\n\nThis has been corrected in the 8.4.4-2PGDG packages, which are in the \nPostgreSQL RPMS repository.\n\nPlease update to these corrected packages as soon as possible.\n\nWe apologize for any inconvenience.\n\nRegards,\n\nLacey\n\n\nI had this fixed and out in the repo about an hour after I was made \naware of it (Alvaro let me know at 
~9:30AM PDT ( Thank you *so* much, \nAlvaro! =) ), and I had things out at ~10:45AM PDT, and tried to reply \nshortly thereafter. =( ), and tried to let people know as best I could.\n\nI know there are a great deal of concerns regarding this, and I am \ngreatly sorry for any trouble that was caused, and will add tests to the \nbuild process to ensure that this does not happen again. =(\n\nGiven the concern, I thought I'd try posting a reply here, to this \nemail, to soothe fears, and to plead for some moderator help, since both \nof my emails are most likely stuck in moderation. =( =(\n\nAgain, I'm sorry for the issues I caused, and I will endeavor to make \nthe turnaround and notification time quicker in the future. =(\n\nIf there are further questions, or needs, please let me know, and I will \ntry to get them addressed as soon as I can.\n\nApologies,\n\nLacey\n\n-- \nLacey Powers\n\nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 ext 104\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Mon, 14 Jun 2010 09:06:55 -0700", "msg_from": "Lacey Powers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "Lacey Powers wrote:\n> I tried to send something out Thursday about this to pgsql-performance, \n> and I tried to send something out last night about this to \n> pgsql-announce. Neither seem to have gotten through, or approved. =( =( =(\n\nYes, I suspected that might have happened.\n\n> Thursday to the Performance List:\n> \n> Hello Everyone,\n> \n> New packages for 8.4.4 on CentOS 5.5 and RHEL 5.5 (all arches), have \n> been built, and are available in the PGDG repo.\n> \n> http://yum.pgsqlrpms.org/8.4/redhat/rhel-5-i386/\n> http://yum.pgsqlrpms.org/8.4/redhat/rhel-5-x86_64/\n...\n> Again, I extend deep apologies for the inconvenience.\n> \n> If there is anything further we can help with, please let us know.\n\nDo any of the other minor releases made at the same time have this\nproblem, or just 8.4.4?\n\n> And last night, for a public announcement:\n> \n> \n> Dear PostgreSQL RPMS users,\n> \n> There was a mistake with the 8.4.4 packages resulting in --enable-debug \n> and --enable-cassert being enabled in the packages for CentOS 5.5 x86_64 \n> and i386.\n> \n> This has been corrected in the 8.4.4-2PGDG packages, which are in the \n> PostgreSQL RPMS repository.\n> \n> Please update to these corrected packages as soon as possible.\n> \n> We apologize for any inconvenience.\n\nOK, do the Yum folks get these updates automatically?\n\n> I had this fixed and out in the repo about an hour after I was made \n> aware of it (Alvaro let me know at ~9:30AM PDT ( Thank you *so* much, \n> Alvaro! =) ), and I had things out at ~10:45AM PDT, and tried to reply \n> shortly thereafter. =( ), and tried to let people know as best I could.\n> \n> I know there are a great deal of concerns regarding this, and I am \n> greatly sorry for any trouble that was caused, and will add tests to the \n> build process to ensure that this does not happen again. =(\n> \n> Given the concern, I thought I'd try posting a reply here, to this \n> email, to soothe fears, and to plead for some moderator help, since both \n> of my emails are most likely stuck in moderation. =( =(\n> \n> Again, I'm sorry for the issues I caused, and I will endeavor to make \n> the turnaround and notification time quicker in the future. 
=(\n> \n> If there are further questions, or needs, please let me know, and I will \n> try to get them addressed as soon as I can.\n\nOK, how do we properly get rid of all those buggy 8.4.4 installs? Seems\na posting to announce is not enough, and we need to show users how to\ntell if they are running a de-buggy version. Does the fixed 8.4.4\ninstall have a different visible version number, or do they have to use\nSHOW debug_assertions?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Mon, 14 Jun 2010 16:46:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, how do we properly get rid of all those buggy 8.4.4 installs? Seems\n> a posting to announce is not enough, and we need to show users how to\n> tell if they are running a de-buggy version.\n\nThe original thread already covered that in sufficient detail: check\ndebug_assertions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jun 2010 16:52:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > OK, how do we properly get rid of all those buggy 8.4.4 installs? Seems\n> > a posting to announce is not enough, and we need to show users how to\n> > tell if they are running a de-buggy version.\n> \n> The original thread already covered that in sufficient detail: check\n> debug_assertions.\n\nBut this was not communicated in the announce email, which was part of\nmy point. We need to tell people how to get the fix, and how to audit\ntheir systems to know they have all been upgrade.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Mon, 14 Jun 2010 16:56:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "Bruce Momjian wrote:\n> Lacey Powers wrote:\n>> I tried to send something out Thursday about this to pgsql-performance, \n>> and I tried to send something out last night about this to \n>> pgsql-announce. Neither seem to have gotten through, or approved. 
=( =( =(\n>\n> Yes, I suspected that might have happened.\n>\n>> Thursday to the Performance List:\n>>\n>> Hello Everyone,\n>>\n>> New packages for 8.4.4 on CentOS 5.5 and RHEL 5.5 (all arches), have \n>> been built, and are available in the PGDG repo.\n>>\n>> http://yum.pgsqlrpms.org/8.4/redhat/rhel-5-i386/\n>> http://yum.pgsqlrpms.org/8.4/redhat/rhel-5-x86_64/\n> ...\n>> Again, I extend deep apologies for the inconvenience.\n>>\n>> If there is anything further we can help with, please let us know.\n>\n> Do any of the other minor releases made at the same time have this\n> problem, or just 8.4.4?\n\nThe only ones affected were 8.4.4 for CentOS 5 x86_64 and i386.\n\n8.3\n\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' \n'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' \n'--with-openssl' '--with-pam' '--with-krb5' '--with-gssapi' \n'--with-includes=/usr/include' '--with-libraries=/usr/lib64' \n'--enable-nls' '--enable-thread-safety' '--with-ossp-uuid' \n'--with-libxml' '--with-libxslt' '--with-ldap' \n'--with-system-tzdata=/usr/share/zoneinfo' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=/usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'CFLAGS=-O2 -g -pipe -Wall \n-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'\nPostgreSQL 8.3.11\n\n\n8.2\n\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' \n'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' \n'--with-openssl' '--with-pam' '--with-krb5' \n'--with-includes=/usr/include' '--with-libraries=/usr/lib64' \n'--enable-nls' '--enable-thread-safety' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=/usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'CFLAGS=-O2 -g -pipe -Wall \n-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'\nPostgreSQL 8.2.17\n\n8.1\n\n\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' '--with-tcl' \n'--with-tclconfig=/usr/lib64' 
'--with-python' '--with-openssl' \n'--with-pam' '--with-krb5' '--with-includes=/usr/include' \n'--with-libraries=/usr/lib64' '--enable-nls' '--enable-thread-safety' \n'--sysconfdir=/etc/sysconfig/pgsql' '--datadir=//usr/share/pgsql' \n'--with-docdir=/usr/share/doc' 'CFLAGS=-O2 -g -pipe -Wall \n-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'\nPostgreSQL 8.1.21\n\n8.0\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' '--with-tcl' \n'--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' \n'--with-krb5' '--with-includes=/usr/include' '--with-libraries=/usr/lib' \n'--enable-nls' '--sysconfdir=/etc/sysconfig/pgsql' \n'--datadir=/usr/share/pgsql' '--with-docdir=/usr/share/doc' 'CFLAGS=-O2 \n-g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'\nPostgreSQL 8.0.25\n\n7.4\n\n'--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' \n'--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' \n'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' \n'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' \n'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' \n'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' \n'--infodir=/usr/share/info' '--disable-rpath' '--with-perl' '--with-tcl' \n'--with-tclconfig=/usr/lib64' '--without-tk' '--with-python' \n'--with-openssl' '--with-pam' '--with-krb5=/usr' \n'--with-includes=/usr/include/et' '--enable-nls' \n'--enable-thread-safety' '--sysconfdir=/etc/sysconfig/pgsql' \n'--datadir=/usr/share/pgsql' '--with-docdir=/usr/share/doc' 'CFLAGS=-O2 \n-g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector \n--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' \n'CPPFLAGS= -I/usr/include/et' 'build_alias=x86_64-redhat-linux-gnu' \n'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu'\nPostgreSQL 7.4.29\n\n\nAnd verified to be shut off in all of the spec files in the current svn \nrevision, and changed only in the 8.4.4 spec file according to the svn \nhistory:\n\nfind . 
-type f -name 'postgresql-[0-9].[0-9].spec' -exec grep -i \n'%define beta' '{}' \\; -print\n%define beta 0\n./7.4/postgresql/EL-5/postgresql-7.4.spec\n%define beta 0\n./7.4/postgresql/EL-4/postgresql-7.4.spec\n%define beta 0\n./8.0/postgresql/F-12/postgresql-8.0.spec\n%define beta 0\n./8.0/postgresql/F-11/postgresql-8.0.spec\n%define beta 0\n./8.0/postgresql/EL-5/postgresql-8.0.spec\n%define beta 0\n./8.0/postgresql/EL-4/postgresql-8.0.spec\n%define beta 0\n./8.1/postgresql/F-12/postgresql-8.1.spec\n%define beta 0\n./8.1/postgresql/F-11/postgresql-8.1.spec\n%define beta 0\n./8.1/postgresql/EL-5/postgresql-8.1.spec\n%define beta 0\n./8.1/postgresql/EL-4/postgresql-8.1.spec\n%define beta 0\n./8.3/postgresql-intdatetime/F-12/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql-intdatetime/F-11/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql-intdatetime/EL-5/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql-intdatetime/EL-4/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql/F-12/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql/F-11/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql/EL-5/postgresql-8.3.spec\n%define beta 0\n./8.3/postgresql/EL-4/postgresql-8.3.spec\n%define beta 0\n./8.4/postgresql/F-12/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql/F-11/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql/EL-5/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql/EL-4/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql-mv/F-12/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql-mv/F-11/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql-mv/EL-5/postgresql-8.4.spec\n%define beta 0\n./8.4/postgresql-mv/EL-4/postgresql-8.4.spec\n%define beta 0\n./9.0/postgresql/F-12/postgresql-9.0.spec\n%define beta 0\n./9.0/postgresql/F-11/postgresql-9.0.spec\n%define beta 0\n./9.0/postgresql/EL-5/postgresql-9.0.spec\n%define beta 0\n./9.0/postgresql/EL-4/postgresql-9.0.spec\n%define beta 0\n./8.2/postgresql/F-12/postgresql-8.2.spec\n%define beta 0\n./8.2/postgresql/F-11/postgresql-8.2.spec\n%define beta 0\n./8.2/postgresql/EL-5/postgresql-8.2.spec\n%define beta 0\n./8.2/postgresql/EL-4/postgresql-8.2.spec\n\n\n\n>\n>> And last night, for a public announcement:\n>>\n>>\n>> Dear PostgreSQL RPMS users,\n>>\n>> There was a mistake with the 8.4.4 packages resulting in --enable-debug \n>> and --enable-cassert being enabled in the packages for CentOS 5.5 x86_64 \n>> and i386.\n>>\n>> This has been corrected in the 8.4.4-2PGDG packages, which are in the \n>> PostgreSQL RPMS repository.\n>>\n>> Please update to these corrected packages as soon as possible.\n>>\n>> We apologize for any inconvenience.\n>\n> OK, do the Yum folks get these updates automatically?\n\nYup. 
I updated the package version to 8.4.4-2PGDG, so this should update \nin yum automatically, if you do a yum update.\n\n>\n>> I had this fixed and out in the repo about an hour after I was made \n>> aware of it (Alvaro let me know at ~9:30AM PDT ( Thank you *so* much, \n>> Alvaro! =) ), and I had things out at ~10:45AM PDT, and tried to reply \n>> shortly thereafter. =( ), and tried to let people know as best I could.\n>>\n>> I know there are a great deal of concerns regarding this, and I am \n>> greatly sorry for any trouble that was caused, and will add tests to the \n>> build process to ensure that this does not happen again. =(\n>>\n>> Given the concern, I thought I'd try posting a reply here, to this \n>> email, to soothe fears, and to plead for some moderator help, since both \n>> of my emails are most likely stuck in moderation. =( =(\n>>\n>> Again, I'm sorry for the issues I caused, and I will endeavor to make \n>> the turnaround and notification time quicker in the future. 
1.503.667.4564 ext 104\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Mon, 14 Jun 2010 15:39:34 -0700", "msg_from": "Lacey Powers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "On 6/14/10 3:39 PM, Lacey Powers wrote:\n> Bruce Momjian wrote:\n>> Lacey Powers wrote:\n>>> I tried to send something out Thursday about this to\n>>> pgsql-performance, and I tried to send something out last night about\n>>> this to pgsql-announce. Neither seem to have gotten through, or\n>>> approved. =( =( =(\n\nHmmm. I'm the approver for pgsql-performance, but somehow I didn't get\nthe moderator e-mail for you. Approved now. Sorry.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 14 Jun 2010 15:51:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "\n\nLacey Powers wrote:\n>>\n>> Do any of the other minor releases made at the same time have this\n>> problem, or just 8.4.4?\n>\n> The only ones affected were 8.4.4 for CentOS 5 x86_64 and i386.\n>\n>\n\nThat also covers RHEL5 x86_64/i386, no? I assume you use the same RPMs \nfor both.\n\ncheers\n\nandrew\n", "msg_date": "Mon, 14 Jun 2010 18:53:29 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "Andrew Dunstan wrote:\n>\n>\n> Lacey Powers wrote:\n>>>\n>>> Do any of the other minor releases made at the same time have this\n>>> problem, or just 8.4.4?\n>>\n>> The only ones affected were 8.4.4 for CentOS 5 x86_64 and i386.\n>>\n>>\n>\n> That also covers RHEL5 x86_64/i386, no? I assume you use the same RPMs \n> for both.\n>\n> cheers\n>\n> andrew\n>\nYes, it covers RHEL 5 too. =) We do use the same RPM for both.\n\nLacey\n\n\n-- \nLacey Powers\n\nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 ext 104\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Mon, 14 Jun 2010 16:14:47 -0700", "msg_from": "Lacey Powers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" }, { "msg_contents": "Josh Berkus wrote:\n> On 6/14/10 3:39 PM, Lacey Powers wrote:\n>> Bruce Momjian wrote:\n>>> Lacey Powers wrote:\n>>>> I tried to send something out Thursday about this to\n>>>> pgsql-performance, and I tried to send something out last night about\n>>>> this to pgsql-announce. Neither seem to have gotten through, or\n>>>> approved. =( =( =(\n>\n> Hmmm. I'm the approver for pgsql-performance, but somehow I didn't get\n> the moderator e-mail for you. Approved now. Sorry.\n>\n\nNo big. =) Thank you very much for approving me! =)\n\nLacey\n\n-- \nLacey Powers\n\nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 ext 104\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Mon, 14 Jun 2010 16:15:37 -0700", "msg_from": "Lacey Powers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Command Prompt 8.4.4 PRMs compiled with debug/assert enabled" } ]
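For readers checking their own systems against the problem discussed in the thread above, the verification steps the posters describe can be run from a single psql session; nothing here goes beyond what Tom Lane, Kevin Grittner and Lacey Powers already suggested, and the package names are the CentOS/RHEL ones quoted in the thread (other platforms will differ).

    -- "off" is the normal production setting; "on" means the server binary
    -- was built with --enable-cassert, as in the affected 8.4.4-1PGDG packages.
    SHOW debug_assertions;

    -- Confirm the minor version the running server reports.
    SELECT version();

On RPM systems the corrected build is identifiable without connecting at all: the fixed packages carry the -2PGDG release tag (for example postgresql-server-8.4.4-2PGDG.el5), which "rpm -qa | grep postgresql" will show.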
[ { "msg_contents": "Dear Experts,\n\n \n\nI have data about half milllion to 1 million which is populated into the Postgres db using a batch job (A sql script consists of pl/pgsql functions and views) .\n\n \n\nI am using PostgreSQL 8.3.5 on windows 2003 64-Bit machine.\n\n \n\nIt would be helpful if you can suggest me the appropriate Autovacuum settings for handling this large data as my autovacuum setting is hanging the entire process.\n\n \n\nAs of now I have the below Autovacuum settings in postgresql.conf file.\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\n\nautovacuum = on \n\n \n\nlog_autovacuum_min_duration = 0 \n\n \n\nautovacuum_max_workers = 5 \n\n\nautovacuum_naptime = 10min \n\n \n\nautovacuum_vacuum_threshold = 1000\n\n \n\nautovacuum_analyze_threshold = 500 \n\n \n\nautovacuum_vacuum_scale_factor = 0.2 \n\n \n\nautovacuum_analyze_scale_factor = 0.1 \n\n\nautovacuum_freeze_max_age = 200000000 \n\n \n#autovacuum_vacuum_cost_delay = 200 \n\n \n\n#autovacuum_vacuum_cost_limit = -1 \n\n--------------------------------------------------------------------------------------\n\nPlease provide you suggestion regarding the same.\n\n \n\nMany thanks\n\n \n \t\t \t \t\t \n_________________________________________________________________\nThe latest in fashion and style in MSN Lifestyle\nhttp://lifestyle.in.msn.com/\n\n\n\n\n\nDear Experts,\n \nI have data about half milllion to 1 million which is populated into the Postgres db using a batch job (A sql script consists of pl/pgsql functions and views) .\n \nI am using PostgreSQL 8.3.5 on windows 2003 64-Bit machine.\n \nIt would be helpful if you can suggest me the appropriate Autovacuum settings for handling this large data as my autovacuum setting is hanging the entire process.\n \nAs of now I have the below Autovacuum settings in postgresql.conf file.\n#------------------------------------------------------------------------------# AUTOVACUUM PARAMETERS#------------------------------------------------------------------------------\nautovacuum = on    \n \nlog_autovacuum_min_duration = 0 \n     \nautovacuum_max_workers = 5 \nautovacuum_naptime = 10min \n \nautovacuum_vacuum_threshold = 1000\n      \nautovacuum_analyze_threshold = 500 \n \nautovacuum_vacuum_scale_factor = 0.2 \n \nautovacuum_analyze_scale_factor = 0.1 \nautovacuum_freeze_max_age = 200000000 \n #autovacuum_vacuum_cost_delay = 200  \n    \n#autovacuum_vacuum_cost_limit = -1 \n--------------------------------------------------------------------------------------\nPlease provide you suggestion regarding the same.\n \nMany thanks\n  Build a bright career through MSN Education Sign up now.", "msg_date": "Thu, 10 Jun 2010 14:17:48 +0530", "msg_from": "Ambarish Bhattacharya <[email protected]>", "msg_from_op": true, "msg_subject": "Autovaccum settings while Bulk Loading data" }, { "msg_contents": "On 10/06/10 11:47, Ambarish Bhattacharya wrote:\n> It would be helpful if you can suggest me the appropriate Autovacuum settings for handling this large data as my autovacuum setting is hanging the entire process.\n\nWhat do you mean by \"hanging the entire process\"?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 10 Jun 2010 12:01:25 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum settings while Bulk Loading data" }, { 
"msg_contents": "Please keep the mailing list CC'd, so that others can help.\n\nOn 10/06/10 15:30, Ambarish Bhattacharya wrote:\n>> On 10/06/10 11:47, Ambarish Bhattacharya wrote:\n>>> It would be helpful if you can suggest me the appropriate Autovacuum settings for handling this large data as my autovacuum setting is hanging the entire process.\n>>\n>> What do you mean by \"hanging the entire process\"?\n>\n> Hanging the entire process means...the autovacuum and auto analyzes starts and after that there is no acitivity i could see in the postgres log related to the bulk loading and when checked the postgres processes from the task manager i could see few of the postgres porcess are still running and had to be killed from there..normal shut down in not happening in this case...\n\nYou'll have to provide a lot more details if you want people to help \nyou. How do you bulk load the data? What kind of log messages do you \nnormally get in the PostgreSQL log related to bulk loading?\n\nAutovacuum or autoanalyze should not interfere with loading data, even \nif it runs simultaneously.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 10 Jun 2010 18:27:16 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovaccum settings while Bulk Loading data" } ]
[ { "msg_contents": "Can anyone please tell me why the following query hangs?\nThis is a part of a large query.\n\nexplain\nselect *\nfrom vtiger_emaildetails\ninner join vtiger_vantage_email_track on vtiger_emaildetails.emailid =\nvtiger_vantage_email_track.mailid\nleft join vtiger_seactivityrel on vtiger_seactivityrel.activityid =\nvtiger_emaildetails.emailid\n\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n -> Merge Join (cost=9500.30..11658.97 rows=88852 width=498)\n Merge Cond: (\"outer\".emailid = \"inner\".mailid)\n -> Index Scan using vtiger_emaildetails_pkey on\nvtiger_emaildetails (cost=0.00..714.40 rows=44595 width=486)\n -> Sort (cost=9500.30..9722.43 rows=88852 width=12)\n Sort Key: vtiger_vantage_email_track.mailid\n -> Seq Scan on vtiger_vantage_email_track\n(cost=0.00..1369.52 rows=88852 width=12)\n -> Index Scan using seactivityrel_activityid_idx on\nvtiger_seactivityrel (cost=0.00..28569.29 rows=1319776 width=8)\n(9 rows)\n\nselect relname, reltuples, relpages\nfrom pg_class\nwhere relname in\n('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n\n\n relname | reltuples | relpages\n----------------------------+-------------+----------\n vtiger_emaildetails | 44595 | 1360\n vtiger_seactivityrel | 1.31978e+06 | 6470\n vtiger_vantage_email_track | 88852 | 481\n(3 rows)\n\nCan anyone please tell me why the following query hangs?This is a part of a large query.explainselect *from vtiger_emaildetailsinner join vtiger_vantage_email_track on vtiger_emaildetails.emailid = vtiger_vantage_email_track.mailid\nleft join vtiger_seactivityrel on vtiger_seactivityrel.activityid = vtiger_emaildetails.emailid                                                       QUERY PLAN                                                        \n------------------------------------------------------------------------------------------------------------------------- Merge Left Join  (cost=9500.30..101672.51 rows=2629549 width=506)   Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n   ->  Merge Join  (cost=9500.30..11658.97 rows=88852 width=498)         Merge Cond: (\"outer\".emailid = \"inner\".mailid)         ->  Index Scan using vtiger_emaildetails_pkey on vtiger_emaildetails  (cost=0.00..714.40 rows=44595 width=486)\n         ->  Sort  (cost=9500.30..9722.43 rows=88852 width=12)               Sort Key: vtiger_vantage_email_track.mailid               ->  Seq Scan on vtiger_vantage_email_track  (cost=0.00..1369.52 rows=88852 width=12)\n   ->  Index Scan using seactivityrel_activityid_idx on vtiger_seactivityrel  (cost=0.00..28569.29 rows=1319776 width=8)(9 rows)select relname, reltuples, relpagesfrom pg_classwhere relname in ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n          relname           |  reltuples  | relpages ----------------------------+-------------+---------- vtiger_emaildetails        |       44595 |     1360 vtiger_seactivityrel       | 1.31978e+06 |     6470\n vtiger_vantage_email_track |       88852 |      481(3 rows)", "msg_date": "Thu, 10 Jun 2010 16:16:05 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "query hangs" }, { "msg_contents": "2010/6/10 AI Rumman <[email protected]>\n\n> Can anyone please tell me why the following query hangs?\n> This is a part of a large 
query.\n>\n> explain\n> select *\n> from vtiger_emaildetails\n> inner join vtiger_vantage_email_track on vtiger_emaildetails.emailid =\n> vtiger_vantage_email_track.mailid\n> left join vtiger_seactivityrel on vtiger_seactivityrel.activityid =\n> vtiger_emaildetails.emailid\n>\n> QUERY\n> PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------\n> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n> Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n> -> Merge Join (cost=9500.30..11658.97 rows=88852 width=498)\n> Merge Cond: (\"outer\".emailid = \"inner\".mailid)\n> -> Index Scan using vtiger_emaildetails_pkey on\n> vtiger_emaildetails (cost=0.00..714.40 rows=44595 width=486)\n> -> Sort (cost=9500.30..9722.43 rows=88852 width=12)\n> Sort Key: vtiger_vantage_email_track.mailid\n> -> Seq Scan on vtiger_vantage_email_track\n> (cost=0.00..1369.52 rows=88852 width=12)\n> -> Index Scan using seactivityrel_activityid_idx on\n> vtiger_seactivityrel (cost=0.00..28569.29 rows=1319776 width=8)\n> (9 rows)\n>\n> select relname, reltuples, relpages\n> from pg_class\n> where relname in\n> ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n>\n>\n> relname | reltuples | relpages\n> ----------------------------+-------------+----------\n> vtiger_emaildetails | 44595 | 1360\n> vtiger_seactivityrel | 1.31978e+06 | 6470\n> vtiger_vantage_email_track | 88852 | 481\n> (3 rows)\n>\n>\n>\n>\nCould you define what you mean by 'hangs'? Does it work or not?\nCheck table pg_locks for locking issues, maybe the query is just slow but\nnot hangs.\nNotice that the query just returns 2M rows, that can be quite huge number\ndue to your database structure, data amount and current server\nconfiguration.\n\nregards\nSzymon Guz\n\n2010/6/10 AI Rumman <[email protected]>\nCan anyone please tell me why the following query hangs?This is a part of a large query.explainselect *from vtiger_emaildetailsinner join vtiger_vantage_email_track on vtiger_emaildetails.emailid = vtiger_vantage_email_track.mailid\n\nleft join vtiger_seactivityrel on vtiger_seactivityrel.activityid = vtiger_emaildetails.emailid                                                       QUERY PLAN                                                        \n\n------------------------------------------------------------------------------------------------------------------------- Merge Left Join  (cost=9500.30..101672.51 rows=2629549 width=506)   Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n\n   ->  Merge Join  (cost=9500.30..11658.97 rows=88852 width=498)         Merge Cond: (\"outer\".emailid = \"inner\".mailid)         ->  Index Scan using vtiger_emaildetails_pkey on vtiger_emaildetails  (cost=0.00..714.40 rows=44595 width=486)\n\n         ->  Sort  (cost=9500.30..9722.43 rows=88852 width=12)               Sort Key: vtiger_vantage_email_track.mailid               ->  Seq Scan on vtiger_vantage_email_track  (cost=0.00..1369.52 rows=88852 width=12)\n\n   ->  Index Scan using seactivityrel_activityid_idx on vtiger_seactivityrel  (cost=0.00..28569.29 rows=1319776 width=8)(9 rows)select relname, reltuples, relpagesfrom pg_classwhere relname in ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n          relname           |  reltuples  | relpages ----------------------------+-------------+---------- vtiger_emaildetails        |       44595 |     1360 vtiger_seactivityrel       | 1.31978e+06 |     6470\n\n 
vtiger_vantage_email_track |       88852 |      481(3 rows)Could you define what you mean by 'hangs'? Does it work or not? Check table pg_locks for locking issues, maybe the query is just slow but not hangs.\nNotice that the query just returns 2M rows, that can be quite huge number due to your database structure, data amount and current server configuration.regardsSzymon Guz", "msg_date": "Thu, 10 Jun 2010 13:26:02 +0200", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "I found only AccessShareLock in pg_locks during the query.\nAnd the query does not return data though I have been waiting for 10 mins.\n\nDo you have any idea ?\n\nOn Thu, Jun 10, 2010 at 5:26 PM, Szymon Guz <[email protected]> wrote:\n\n>\n>\n> 2010/6/10 AI Rumman <[email protected]>\n>\n> Can anyone please tell me why the following query hangs?\n>> This is a part of a large query.\n>>\n>> explain\n>> select *\n>> from vtiger_emaildetails\n>> inner join vtiger_vantage_email_track on vtiger_emaildetails.emailid =\n>> vtiger_vantage_email_track.mailid\n>> left join vtiger_seactivityrel on vtiger_seactivityrel.activityid =\n>> vtiger_emaildetails.emailid\n>>\n>> QUERY\n>> PLAN\n>>\n>> -------------------------------------------------------------------------------------------------------------------------\n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>> Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n>> -> Merge Join (cost=9500.30..11658.97 rows=88852 width=498)\n>> Merge Cond: (\"outer\".emailid = \"inner\".mailid)\n>> -> Index Scan using vtiger_emaildetails_pkey on\n>> vtiger_emaildetails (cost=0.00..714.40 rows=44595 width=486)\n>> -> Sort (cost=9500.30..9722.43 rows=88852 width=12)\n>> Sort Key: vtiger_vantage_email_track.mailid\n>> -> Seq Scan on vtiger_vantage_email_track\n>> (cost=0.00..1369.52 rows=88852 width=12)\n>> -> Index Scan using seactivityrel_activityid_idx on\n>> vtiger_seactivityrel (cost=0.00..28569.29 rows=1319776 width=8)\n>> (9 rows)\n>>\n>> select relname, reltuples, relpages\n>> from pg_class\n>> where relname in\n>> ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n>>\n>>\n>> relname | reltuples | relpages\n>> ----------------------------+-------------+----------\n>> vtiger_emaildetails | 44595 | 1360\n>> vtiger_seactivityrel | 1.31978e+06 | 6470\n>> vtiger_vantage_email_track | 88852 | 481\n>> (3 rows)\n>>\n>>\n>>\n>>\n> Could you define what you mean by 'hangs'? 
Does it work or not?\n> Check table pg_locks for locking issues, maybe the query is just slow but\n> not hangs.\n> Notice that the query just returns 2M rows, that can be quite huge number\n> due to your database structure, data amount and current server\n> configuration.\n>\n> regards\n> Szymon Guz\n>\n>\n\nI found only AccessShareLock in pg_locks during the query.And the query does not return data though I have been waiting for 10 mins.Do you have any idea ?On Thu, Jun 10, 2010 at 5:26 PM, Szymon Guz <[email protected]> wrote:\n2010/6/10 AI Rumman <[email protected]>\n\nCan anyone please tell me why the following query hangs?This is a part of a large query.explainselect *from vtiger_emaildetailsinner join vtiger_vantage_email_track on vtiger_emaildetails.emailid = vtiger_vantage_email_track.mailid\n\n\nleft join vtiger_seactivityrel on vtiger_seactivityrel.activityid = vtiger_emaildetails.emailid                                                       QUERY PLAN                                                        \n\n\n------------------------------------------------------------------------------------------------------------------------- Merge Left Join  (cost=9500.30..101672.51 rows=2629549 width=506)   Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n\n\n   ->  Merge Join  (cost=9500.30..11658.97 rows=88852 width=498)         Merge Cond: (\"outer\".emailid = \"inner\".mailid)         ->  Index Scan using vtiger_emaildetails_pkey on vtiger_emaildetails  (cost=0.00..714.40 rows=44595 width=486)\n\n\n         ->  Sort  (cost=9500.30..9722.43 rows=88852 width=12)               Sort Key: vtiger_vantage_email_track.mailid               ->  Seq Scan on vtiger_vantage_email_track  (cost=0.00..1369.52 rows=88852 width=12)\n\n\n   ->  Index Scan using seactivityrel_activityid_idx on vtiger_seactivityrel  (cost=0.00..28569.29 rows=1319776 width=8)(9 rows)select relname, reltuples, relpagesfrom pg_classwhere relname in ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n          relname           |  reltuples  | relpages ----------------------------+-------------+---------- vtiger_emaildetails        |       44595 |     1360 vtiger_seactivityrel       | 1.31978e+06 |     6470\n\n\n vtiger_vantage_email_track |       88852 |      481(3 rows)Could you define what you mean by 'hangs'? Does it work or not? 
Check table pg_locks for locking issues, maybe the query is just slow but not hangs.\nNotice that the query just returns 2M rows, that can be quite huge number due to your database structure, data amount and current server configuration.regardsSzymon Guz", "msg_date": "Thu, 10 Jun 2010 17:36:34 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query hangs" }, { "msg_contents": "2010/6/10 AI Rumman <[email protected]>\n\n> I found only AccessShareLock in pg_locks during the query.\n> And the query does not return data though I have been waiting for 10 mins.\n>\n> Do you have any idea ?\n>\n>\n> On Thu, Jun 10, 2010 at 5:26 PM, Szymon Guz <[email protected]> wrote:\n>\n>>\n>>\n>> 2010/6/10 AI Rumman <[email protected]>\n>>\n>> Can anyone please tell me why the following query hangs?\n>>> This is a part of a large query.\n>>>\n>>> explain\n>>> select *\n>>> from vtiger_emaildetails\n>>> inner join vtiger_vantage_email_track on vtiger_emaildetails.emailid =\n>>> vtiger_vantage_email_track.mailid\n>>> left join vtiger_seactivityrel on vtiger_seactivityrel.activityid =\n>>> vtiger_emaildetails.emailid\n>>>\n>>> QUERY\n>>> PLAN\n>>>\n>>> -------------------------------------------------------------------------------------------------------------------------\n>>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>>> Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n>>> -> Merge Join (cost=9500.30..11658.97 rows=88852 width=498)\n>>> Merge Cond: (\"outer\".emailid = \"inner\".mailid)\n>>> -> Index Scan using vtiger_emaildetails_pkey on\n>>> vtiger_emaildetails (cost=0.00..714.40 rows=44595 width=486)\n>>> -> Sort (cost=9500.30..9722.43 rows=88852 width=12)\n>>> Sort Key: vtiger_vantage_email_track.mailid\n>>> -> Seq Scan on vtiger_vantage_email_track\n>>> (cost=0.00..1369.52 rows=88852 width=12)\n>>> -> Index Scan using seactivityrel_activityid_idx on\n>>> vtiger_seactivityrel (cost=0.00..28569.29 rows=1319776 width=8)\n>>> (9 rows)\n>>>\n>>> select relname, reltuples, relpages\n>>> from pg_class\n>>> where relname in\n>>> ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n>>>\n>>>\n>>> relname | reltuples | relpages\n>>> ----------------------------+-------------+----------\n>>> vtiger_emaildetails | 44595 | 1360\n>>> vtiger_seactivityrel | 1.31978e+06 | 6470\n>>> vtiger_vantage_email_track | 88852 | 481\n>>> (3 rows)\n>>>\n>>>\n>>>\n>>>\n>> Could you define what you mean by 'hangs'? Does it work or not?\n>> Check table pg_locks for locking issues, maybe the query is just slow but\n>> not hangs.\n>> Notice that the query just returns 2M rows, that can be quite huge number\n>> due to your database structure, data amount and current server\n>> configuration.\n>>\n>> regards\n>> Szymon Guz\n>>\n>>\n>\n1. Make vacuum analyze on used tables.\n2. Check how long it would take if you limit the number of returned rows\njust to 100\n3. 
Do you have indexes on used columns?\n\nregards\nSzymon Guz\n\n2010/6/10 AI Rumman <[email protected]>\nI found only AccessShareLock in pg_locks during the query.And the query does not return data though I have been waiting for 10 mins.Do you have any idea ?\nOn Thu, Jun 10, 2010 at 5:26 PM, Szymon Guz <[email protected]> wrote:\n2010/6/10 AI Rumman <[email protected]>\n\nCan anyone please tell me why the following query hangs?This is a part of a large query.explainselect *from vtiger_emaildetailsinner join vtiger_vantage_email_track on vtiger_emaildetails.emailid = vtiger_vantage_email_track.mailid\n\n\n\nleft join vtiger_seactivityrel on vtiger_seactivityrel.activityid = vtiger_emaildetails.emailid                                                       QUERY PLAN                                                        \n\n\n\n------------------------------------------------------------------------------------------------------------------------- Merge Left Join  (cost=9500.30..101672.51 rows=2629549 width=506)   Merge Cond: (\"outer\".emailid = \"inner\".activityid)\n\n\n\n   ->  Merge Join  (cost=9500.30..11658.97 rows=88852 width=498)         Merge Cond: (\"outer\".emailid = \"inner\".mailid)         ->  Index Scan using vtiger_emaildetails_pkey on vtiger_emaildetails  (cost=0.00..714.40 rows=44595 width=486)\n\n\n\n         ->  Sort  (cost=9500.30..9722.43 rows=88852 width=12)               Sort Key: vtiger_vantage_email_track.mailid               ->  Seq Scan on vtiger_vantage_email_track  (cost=0.00..1369.52 rows=88852 width=12)\n\n\n\n   ->  Index Scan using seactivityrel_activityid_idx on vtiger_seactivityrel  (cost=0.00..28569.29 rows=1319776 width=8)(9 rows)select relname, reltuples, relpagesfrom pg_classwhere relname in ('vtiger_emaildetails','vtiger_vantage_email_track','vtiger_seactivityrel');\n          relname           |  reltuples  | relpages ----------------------------+-------------+---------- vtiger_emaildetails        |       44595 |     1360 vtiger_seactivityrel       | 1.31978e+06 |     6470\n\n\n\n vtiger_vantage_email_track |       88852 |      481(3 rows)Could you define what you mean by 'hangs'? Does it work or not? Check table pg_locks for locking issues, maybe the query is just slow but not hangs.\nNotice that the query just returns 2M rows, that can be quite huge number due to your database structure, data amount and current server configuration.regardsSzymon Guz \n\n\n1. Make vacuum analyze on used tables.2. Check how long it would take if you limit the number of returned rows just to 1003. Do you have indexes on used columns?\nregardsSzymon Guz", "msg_date": "Thu, 10 Jun 2010 13:44:46 +0200", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "On Thu, Jun 10, 2010 at 5:36 AM, AI Rumman <[email protected]> wrote:\n> I found only AccessShareLock in pg_locks during the query.\n> And the query does not return data though I have been waiting for 10 mins.\n>\n> Do you have any idea ?\n\nI have queries that run for hours. As long as it's using CPU / IO\n(use top in unix, whatever in windows to see) it's not hung, it's just\ntaking longer than you expected. Those are not the same thing at all.\n\nSeeing as how you're joining three tables with millions of rows with\nno where clause, it's gonna take some to complete. 
Go grab a\nsandwich, etc., and come back when it's done.\n", "msg_date": "Mon, 14 Jun 2010 14:16:39 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" } ]
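
A practical way to act on the advice in the thread above — rule out blocking first, then confirm the join is actually producing rows — is sketched below. Only the table names are taken from the thread; the rest is a generic template rather than output from the poster's server, and it deliberately sticks to pg_locks because pg_stat_activity column names changed between 8.1 and later releases.

-- Any ungranted lock means the statement is blocked rather than merely slow
SELECT locktype, relation::regclass AS relation, mode, pid
FROM pg_locks
WHERE NOT granted;

-- If nothing is blocked, cap the result to check that rows start flowing quickly
SELECT *
FROM vtiger_emaildetails e
JOIN vtiger_vantage_email_track t ON e.emailid = t.mailid
LEFT JOIN vtiger_seactivityrel r ON r.activityid = e.emailid
LIMIT 100;

If the LIMIT probe returns promptly, the full statement is simply spending its time shipping the estimated 2.6 million joined rows to the client, which matches Scott's point above.
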
[ { "msg_contents": "AI Rumman wrote:\n \n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n \n> And the query does not return data though I have been waiting for\n> 10 mins.\n>\n> Do you have any idea ?\n \nUnless you use a cursor, PostgreSQL interfaces typically don't show\nany response on the client side until all rows have been received and\ncached on the client side. That's estimated to be over 2.6 million\nrows in this case. That can take a while.\n \nYou might want to use a cursor....\n \n-Kevin\n", "msg_date": "Thu, 10 Jun 2010 07:28:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query hangs" }, { "msg_contents": "Could you please give me the link for cursor- How to use it?\n\nOn Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> AI Rumman wrote:\n>\n> >> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>\n> > And the query does not return data though I have been waiting for\n> > 10 mins.\n> >\n> > Do you have any idea ?\n>\n> Unless you use a cursor, PostgreSQL interfaces typically don't show\n> any response on the client side until all rows have been received and\n> cached on the client side. That's estimated to be over 2.6 million\n> rows in this case. That can take a while.\n>\n> You might want to use a cursor....\n>\n> -Kevin\n>\n\nCould you please give me the link for cursor- How to use it?On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <[email protected]> wrote:\nAI Rumman  wrote:\n\n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n\n> And the query does not return data though I have been waiting for\n> 10 mins.\n>\n> Do you have any idea ?\n\nUnless you use a cursor, PostgreSQL interfaces typically don't show\nany response on the client side until all rows have been received and\ncached on the client side.  That's estimated to be over 2.6 million\nrows in this case.  That can take a while.\n\nYou might want to use a cursor....\n\n-Kevin", "msg_date": "Thu, 10 Jun 2010 18:35:03 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "On 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n\n> Could you please give me the link for cursor- How to use it?\n>\n>\n> On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> AI Rumman wrote:\n>>\n>> >> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>>\n>> > And the query does not return data though I have been waiting for\n>> > 10 mins.\n>> >\n>> > Do you have any idea ?\n>>\n>> Unless you use a cursor, PostgreSQL interfaces typically don't show\n>> any response on the client side until all rows have been received and\n>> cached on the client side. That's estimated to be over 2.6 million\n>> rows in this case. That can take a while.\n>>\n>> You might want to use a cursor....\n>>\n>>\n\nIf you are using psql client, using FETCH_COUNT to a small value will allow\nyou to achieve cursor behaviour. psql starts returning batches of\nFETCH_COUNT number of rows .\n\nE.g. 
\\set FETCH_COUNT 1\nwill start fetching and displaying each row one by one.\n\n\n\n\n -Kevin\n>>\n>\n>\n\nOn 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n\nCould you please give me the link for cursor- How to use it?On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <[email protected]> wrote:\nAI Rumman  wrote:\n\n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n\n> And the query does not return data though I have been waiting for\n> 10 mins.\n>\n> Do you have any idea ?\n\nUnless you use a cursor, PostgreSQL interfaces typically don't show\nany response on the client side until all rows have been received and\ncached on the client side.  That's estimated to be over 2.6 million\nrows in this case.  That can take a while.\n\nYou might want to use a cursor....\nIf you are using psql client, using FETCH_COUNT to a small value will allow you to achieve cursor behaviour. psql starts returning batches of FETCH_COUNT number of rows .\nE.g. \\set FETCH_COUNT 1will start fetching and displaying each row one by one.\n\n-Kevin", "msg_date": "Thu, 10 Jun 2010 18:25:54 +0530", "msg_from": "Amit Khandekar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "I am using Postgresql 8.1 and did not find FETCH_COUNT\n\nOn Thu, Jun 10, 2010 at 6:55 PM, Amit Khandekar <\[email protected]> wrote:\n\n>\n>\n> On 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n>\n>> Could you please give me the link for cursor- How to use it?\n>>\n>>\n>> On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <\n>> [email protected]> wrote:\n>>\n>>> AI Rumman wrote:\n>>>\n>>> >> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>>>\n>>> > And the query does not return data though I have been waiting for\n>>> > 10 mins.\n>>> >\n>>> > Do you have any idea ?\n>>>\n>>> Unless you use a cursor, PostgreSQL interfaces typically don't show\n>>> any response on the client side until all rows have been received and\n>>> cached on the client side. That's estimated to be over 2.6 million\n>>> rows in this case. That can take a while.\n>>>\n>>> You might want to use a cursor....\n>>>\n>>>\n>\n> If you are using psql client, using FETCH_COUNT to a small value will allow\n> you to achieve cursor behaviour. psql starts returning batches of\n> FETCH_COUNT number of rows .\n>\n> E.g. \\set FETCH_COUNT 1\n> will start fetching and displaying each row one by one.\n>\n>\n>\n>\n> -Kevin\n>>>\n>>\n>>\n>\n\nI am using Postgresql 8.1 and did not find FETCH_COUNTOn Thu, Jun 10, 2010 at 6:55 PM, Amit Khandekar <[email protected]> wrote:\nOn 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n\n\nCould you please give me the link for cursor- How to use it?On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <[email protected]> wrote:\nAI Rumman  wrote:\n\n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n\n> And the query does not return data though I have been waiting for\n> 10 mins.\n>\n> Do you have any idea ?\n\nUnless you use a cursor, PostgreSQL interfaces typically don't show\nany response on the client side until all rows have been received and\ncached on the client side.  That's estimated to be over 2.6 million\nrows in this case.  That can take a while.\n\nYou might want to use a cursor....\nIf you are using psql client, using FETCH_COUNT to a small value will allow you to achieve cursor behaviour. psql starts returning batches of FETCH_COUNT number of rows .\nE.g. 
\\set FETCH_COUNT 1will start fetching and displaying each row one by one.\n\n-Kevin", "msg_date": "Thu, 10 Jun 2010 19:17:25 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "On 10 June 2010 18:47, AI Rumman <[email protected]> wrote:\n\n> I am using Postgresql 8.1 and did not find FETCH_COUNT\n>\n>\nOh ok. Looks like FETCH_COUNT was introduced in 8.2\n\n\n> On Thu, Jun 10, 2010 at 6:55 PM, Amit Khandekar <\n> [email protected]> wrote:\n>\n>>\n>>\n>> On 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n>>\n>>> Could you please give me the link for cursor- How to use it?\n>>>\n>>>\n>>> On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <\n>>> [email protected]> wrote:\n>>>\n>>>> AI Rumman wrote:\n>>>>\n>>>> >> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>>>>\n>>>> > And the query does not return data though I have been waiting for\n>>>> > 10 mins.\n>>>> >\n>>>> > Do you have any idea ?\n>>>>\n>>>> Unless you use a cursor, PostgreSQL interfaces typically don't show\n>>>> any response on the client side until all rows have been received and\n>>>> cached on the client side. That's estimated to be over 2.6 million\n>>>> rows in this case. That can take a while.\n>>>>\n>>>> You might want to use a cursor....\n>>>>\n>>>>\n>>\n>> If you are using psql client, using FETCH_COUNT to a small value will\n>> allow you to achieve cursor behaviour. psql starts returning batches of\n>> FETCH_COUNT number of rows .\n>>\n>> E.g. \\set FETCH_COUNT 1\n>> will start fetching and displaying each row one by one.\n>>\n>>\n>>\n>>\n>> -Kevin\n>>>>\n>>>\n>>>\n>>\n>\n\nOn 10 June 2010 18:47, AI Rumman <[email protected]> wrote:\n\nI am using Postgresql 8.1 and did not find FETCH_COUNTOh ok. Looks like FETCH_COUNT was introduced in 8.2\nOn Thu, Jun 10, 2010 at 6:55 PM, Amit Khandekar <[email protected]> wrote:\nOn 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n\n\nCould you please give me the link for cursor- How to use it?On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <[email protected]> wrote:\nAI Rumman  wrote:\n\n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n\n> And the query does not return data though I have been waiting for\n> 10 mins.\n>\n> Do you have any idea ?\n\nUnless you use a cursor, PostgreSQL interfaces typically don't show\nany response on the client side until all rows have been received and\ncached on the client side.  That's estimated to be over 2.6 million\nrows in this case.  That can take a while.\n\nYou might want to use a cursor....\nIf you are using psql client, using FETCH_COUNT to a small value will allow you to achieve cursor behaviour. psql starts returning batches of FETCH_COUNT number of rows .\nE.g. \\set FETCH_COUNT 1will start fetching and displaying each row one by one.\n\n-Kevin", "msg_date": "Fri, 11 Jun 2010 10:39:34 +0530", "msg_from": "Amit Khandekar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "Any more idea, please.\nIs table partition a good solution for query optimization?\n\nOn Fri, Jun 11, 2010 at 11:09 AM, Amit Khandekar <\[email protected]> wrote:\n\n>\n>\n> On 10 June 2010 18:47, AI Rumman <[email protected]> wrote:\n>\n>> I am using Postgresql 8.1 and did not find FETCH_COUNT\n>>\n>>\n> Oh ok. 
Looks like FETCH_COUNT was introduced in 8.2\n>\n>\n>> On Thu, Jun 10, 2010 at 6:55 PM, Amit Khandekar <\n>> [email protected]> wrote:\n>>\n>>>\n>>>\n>>> On 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n>>>\n>>>> Could you please give me the link for cursor- How to use it?\n>>>>\n>>>>\n>>>> On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <\n>>>> [email protected]> wrote:\n>>>>\n>>>>> AI Rumman wrote:\n>>>>>\n>>>>> >> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n>>>>>\n>>>>> > And the query does not return data though I have been waiting for\n>>>>> > 10 mins.\n>>>>> >\n>>>>> > Do you have any idea ?\n>>>>>\n>>>>> Unless you use a cursor, PostgreSQL interfaces typically don't show\n>>>>> any response on the client side until all rows have been received and\n>>>>> cached on the client side. That's estimated to be over 2.6 million\n>>>>> rows in this case. That can take a while.\n>>>>>\n>>>>> You might want to use a cursor....\n>>>>>\n>>>>>\n>>>\n>>> If you are using psql client, using FETCH_COUNT to a small value will\n>>> allow you to achieve cursor behaviour. psql starts returning batches of\n>>> FETCH_COUNT number of rows .\n>>>\n>>> E.g. \\set FETCH_COUNT 1\n>>> will start fetching and displaying each row one by one.\n>>>\n>>>\n>>>\n>>>\n>>> -Kevin\n>>>>>\n>>>>\n>>>>\n>>>\n>>\n>\n\nAny more idea, please.Is table partition a good solution for query optimization?On Fri, Jun 11, 2010 at 11:09 AM, Amit Khandekar <[email protected]> wrote:\nOn 10 June 2010 18:47, AI Rumman <[email protected]> wrote:\n\n\nI am using Postgresql 8.1 and did not find FETCH_COUNTOh ok. Looks like FETCH_COUNT was introduced in 8.2\nOn Thu, Jun 10, 2010 at 6:55 PM, Amit Khandekar <[email protected]> wrote:\nOn 10 June 2010 18:05, AI Rumman <[email protected]> wrote:\n\n\nCould you please give me the link for cursor- How to use it?On Thu, Jun 10, 2010 at 6:28 PM, Kevin Grittner <[email protected]> wrote:\nAI Rumman  wrote:\n\n>> Merge Left Join (cost=9500.30..101672.51 rows=2629549 width=506)\n\n> And the query does not return data though I have been waiting for\n> 10 mins.\n>\n> Do you have any idea ?\n\nUnless you use a cursor, PostgreSQL interfaces typically don't show\nany response on the client side until all rows have been received and\ncached on the client side.  That's estimated to be over 2.6 million\nrows in this case.  That can take a while.\n\nYou might want to use a cursor....\nIf you are using psql client, using FETCH_COUNT to a small value will allow you to achieve cursor behaviour. psql starts returning batches of FETCH_COUNT number of rows .\nE.g. \\set FETCH_COUNT 1will start fetching and displaying each row one by one.\n\n-Kevin", "msg_date": "Sun, 13 Jun 2010 16:45:31 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query hangs" }, { "msg_contents": "AI Rumman <[email protected]> wrote:\n \n> [It takes a long time to return 2.6 million rows.]\n \n> Any more idea, please.\n \nI don't recall you telling us exactly what the environment and\nconnection type is in which you're trying to return this large\nresult set. Any specific suggestions would depend on that\ninformation.\n \nI do wonder why you are returning 2.6 million rows. A result set\nthat large is rarely useful directly (except during data conversion\nor loading of some sort). Is there any filtering or aggregation\nhappening on the client side with the received rows? 
If so, my\nfirst suggestion would be to make that part of the query, rather\nthan part of the client code.\n \n> Is table partition a good solution for query optimization?\n \nTable partitioning is useful in some cases, but you haven't told us\nanything yet to indicate that it would help here.\n \n-Kevin\n", "msg_date": "Mon, 14 Jun 2010 10:00:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query hangs" } ]
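
Since the poster is on 8.1 (where psql has no FETCH_COUNT) and asked how a cursor is actually used, here is a minimal sketch of the server-side cursor Kevin suggested. The cursor name and batch size are arbitrary choices, and the SELECT is the join from the earlier thread; treat it as a template under those assumptions rather than tested output.

BEGIN;

DECLARE email_cur CURSOR FOR
    SELECT *
    FROM vtiger_emaildetails e
    JOIN vtiger_vantage_email_track t ON e.emailid = t.mailid
    LEFT JOIN vtiger_seactivityrel r ON r.activityid = e.emailid;

FETCH 1000 FROM email_cur;  -- repeat until no more rows come back
CLOSE email_cur;

COMMIT;

That said, Kevin's broader point stands: if the client only needs an aggregate or a filtered subset, pushing that filtering or aggregation into the query itself is usually far cheaper than fetching millions of rows at all.
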
[ { "msg_contents": "Hi,\nI have the following query that needs tuning:\n\npsrdb=# explain analyze (SELECT\npsrdb(# MAX(item_rank.rank) AS maxRank\npsrdb(# FROM\npsrdb(# item_rank item_rank\npsrdb(# WHERE\npsrdb(# item_rank.project_id='proj2783'\npsrdb(# AND item_rank.pf_id IS NULL\npsrdb(#\npsrdb(# )\npsrdb-# ORDER BY\npsrdb-# maxRank DESC;\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nSort (cost=0.19..0.19 rows=1 width=0) (actual time=12.154..12.155 \nrows=1 loops=1)\n Sort Key: ($0)\n Sort Method: quicksort Memory: 17kB\n InitPlan\n -> Limit (cost=0.00..0.17 rows=1 width=8) (actual \ntime=12.129..12.130 rows=1 loops=1)\n -> Index Scan Backward using item_rank_rank on item_rank \n(cost=0.00..2933.84 rows=17558 width=8) (actual time=12.126..12.126 \nrows=1 loops=1)\n Filter: ((rank IS NOT NULL) AND (pf_id IS NULL) AND \n((project_id)::text = 'proj2783'::text))\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=12.140..12.142 rows=1 loops=1)\nTotal runtime: 12.206 ms\n(9 rows)\n\nI have been playing with indexes but it seems that it doesn't make any \ndifference. (I have created an index: item_rank_index\" btree \n(project_id) WHERE (pf_id IS NULL))\n\n\nAny advice on how to make it run faster?\n\nThanks a lot,\nAnne\n", "msg_date": "Thu, 10 Jun 2010 10:50:40 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Need to increase performance of a query" }, { "msg_contents": "On 2010-06-10 19:50, Anne Rosset wrote:\n> Any advice on how to make it run faster?\n\nWhat timing do you get if you run it with \\t (timing on) and without \nexplain analyze ?\n\nI would be surprised if you can get it much faster than what is is.. I \nmay be that a\nsignificant portion is \"planning cost\" so if you run it a lot you might \nbenefit from\na prepared statement.\n\n\n-- \nJesper\n", "msg_date": "Thu, 10 Jun 2010 20:12:41 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On Thu, Jun 10, 2010 at 10:50:40AM -0700, Anne Rosset wrote:\n> Any advice on how to make it run faster?\n\nFirst, let me ask a simple question - what runtime for this query will\nbe satisfactory for you?\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Thu, 10 Jun 2010 20:33:56 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Jesper Krogh wrote:\n> On 2010-06-10 19:50, Anne Rosset wrote:\n>> Any advice on how to make it run faster?\n>\n> What timing do you get if you run it with \\t (timing on) and without \n> explain analyze ?\n>\n> I would be surprised if you can get it much faster than what is is.. 
I \n> may be that a\n> significant portion is \"planning cost\" so if you run it a lot you \n> might benefit from\n> a prepared statement.\n>\n>\nHi Jesper,\nThanks your response:\npsrdb=# \\timing\nTiming is on.\npsrdb=# (SELECT\npsrdb(# MAX(item_rank.rank) AS maxRank\npsrdb(# FROM\npsrdb(# item_rank item_rank\npsrdb(# WHERE\npsrdb(# item_rank.project_id='proj2783'\npsrdb(# AND item_rank.pf_id IS NULL\npsrdb(#\npsrdb(# )\npsrdb-# ORDER BY\npsrdb-# maxRank DESC;\n maxrank\n-------------\n 20200000000\n(1 row)\n\nTime: 12.947 ms\n\nIt really seems to me that it should take less time.\n\n Specially when I see the result with a different where clause like \nthis one:\npsrdb=# SELECT\npsrdb-# MAX(item_rank.rank) AS maxRank\npsrdb-# FROM\npsrdb-# item_rank item_rank\npsrdb-# WHERE\npsrdb-# item_rank.pf_id='plan1408'\npsrdb-# ORDER BY\npsrdb-# maxRank DESC;\n maxrank\n-------------\n 20504000000\n(1 row)\n\nTime: 2.582 ms\n\n\nThanks,\nAnne\n", "msg_date": "Thu, 10 Jun 2010 11:36:08 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Thursday, June 10, 2010, 8:36:08 PM you wrote:\n\n> psrdb=# (SELECT\n> psrdb(# MAX(item_rank.rank) AS maxRank\n> psrdb(# FROM\n> psrdb(# item_rank item_rank\n> psrdb(# WHERE\n> psrdb(# item_rank.project_id='proj2783'\n> psrdb(# AND item_rank.pf_id IS NULL\n> psrdb(#\n> psrdb(# )\n> psrdb-# ORDER BY\n> psrdb-# maxRank DESC;\n\nDon't think it does really matter, but why do you sort a resultset \nconsisting of only one row?\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Thu, 10 Jun 2010 21:22:11 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Jochen Erwied wrote:\n> Thursday, June 10, 2010, 8:36:08 PM you wrote:\n>\n> \n>> psrdb=# (SELECT\n>> psrdb(# MAX(item_rank.rank) AS maxRank\n>> psrdb(# FROM\n>> psrdb(# item_rank item_rank\n>> psrdb(# WHERE\n>> psrdb(# item_rank.project_id='proj2783'\n>> psrdb(# AND item_rank.pf_id IS NULL\n>> psrdb(#\n>> psrdb(# )\n>> psrdb-# ORDER BY\n>> psrdb-# maxRank DESC;\n>> \n>\n> Don't think it does really matter, but why do you sort a resultset \n> consisting of only one row?\n>\n> \nSorry, I should have removed the ORDER by (the full query has a union).\nSo without the ORDER by, here are the results:\npsrdb=# SELECT\npsrdb-# MAX(item_rank.rank) AS maxRank\npsrdb-# FROM\npsrdb-# item_rank item_rank\npsrdb-# WHERE\npsrdb-# item_rank.pf_id='plan1408';\n maxrank\n-------------\n 20504000000\n(1 row)\n\nTime: 1.516 ms\npsrdb=# SELECT\npsrdb-# MAX(item_rank.rank) AS maxRank\npsrdb-# FROM\npsrdb-# item_rank item_rank\npsrdb-# WHERE\npsrdb-# item_rank.project_id='proj2783'\npsrdb-# AND item_rank.pf_id IS NULL;\n maxrank\n-------------\n 20200000000\n(1 row)\n\nTime: 13.177 ms\n\nIs there anything that can be done for the second one?\n\nThanks,\nAnne\n", "msg_date": "Thu, 10 Jun 2010 12:34:07 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On Thu, Jun 10, 2010 at 12:34:07PM -0700, Anne Rosset wrote:\n> Jochen Erwied wrote:\n>> Thursday, June 10, 2010, 8:36:08 PM you wrote:\n>>\n>> \n>>> psrdb=# (SELECT\n>>> psrdb(# MAX(item_rank.rank) AS 
maxRank\n>>> psrdb(# FROM\n>>> psrdb(# item_rank item_rank\n>>> psrdb(# WHERE\n>>> psrdb(# item_rank.project_id='proj2783'\n>>> psrdb(# AND item_rank.pf_id IS NULL\n>>> psrdb(#\n>>> psrdb(# )\n>>> psrdb-# ORDER BY\n>>> psrdb-# maxRank DESC;\n>>> \n>>\n>> Don't think it does really matter, but why do you sort a resultset \n>> consisting of only one row?\n>>\n>> \n> Sorry, I should have removed the ORDER by (the full query has a union).\n> So without the ORDER by, here are the results:\n> psrdb=# SELECT\n> psrdb-# MAX(item_rank.rank) AS maxRank\n> psrdb-# FROM\n> psrdb-# item_rank item_rank\n> psrdb-# WHERE\n> psrdb-# item_rank.pf_id='plan1408';\n> maxrank\n> -------------\n> 20504000000\n> (1 row)\n>\n> Time: 1.516 ms\n> psrdb=# SELECT\n> psrdb-# MAX(item_rank.rank) AS maxRank\n> psrdb-# FROM\n> psrdb-# item_rank item_rank\n> psrdb-# WHERE\n> psrdb-# item_rank.project_id='proj2783'\n> psrdb-# AND item_rank.pf_id IS NULL;\n> maxrank\n> -------------\n> 20200000000\n> (1 row)\n>\n> Time: 13.177 ms\n>\n> Is there anything that can be done for the second one?\n>\n> Thanks,\n> Anne\n>\nWhat about an IS NULL index on pf_id?\n\nRegards,\nKen\n", "msg_date": "Thu, 10 Jun 2010 14:37:12 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Thursday, June 10, 2010, 9:34:07 PM you wrote:\n\n> Time: 1.516 ms\n\n> Time: 13.177 ms\n\nI'd suppose the first query to scan a lot less rows than the second one. \nCould you supply an explained plan for the fast query?\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Thu, 10 Jun 2010 21:39:24 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Kenneth Marshall wrote:\n> On Thu, Jun 10, 2010 at 12:34:07PM -0700, Anne Rosset wrote:\n> \n>> Jochen Erwied wrote:\n>> \n>>> Thursday, June 10, 2010, 8:36:08 PM you wrote:\n>>>\n>>> \n>>> \n>>>> psrdb=# (SELECT\n>>>> psrdb(# MAX(item_rank.rank) AS maxRank\n>>>> psrdb(# FROM\n>>>> psrdb(# item_rank item_rank\n>>>> psrdb(# WHERE\n>>>> psrdb(# item_rank.project_id='proj2783'\n>>>> psrdb(# AND item_rank.pf_id IS NULL\n>>>> psrdb(#\n>>>> psrdb(# )\n>>>> psrdb-# ORDER BY\n>>>> psrdb-# maxRank DESC;\n>>>> \n>>>> \n>>> Don't think it does really matter, but why do you sort a resultset \n>>> consisting of only one row?\n>>>\n>>> \n>>> \n>> Sorry, I should have removed the ORDER by (the full query has a union).\n>> So without the ORDER by, here are the results:\n>> psrdb=# SELECT\n>> psrdb-# MAX(item_rank.rank) AS maxRank\n>> psrdb-# FROM\n>> psrdb-# item_rank item_rank\n>> psrdb-# WHERE\n>> psrdb-# item_rank.pf_id='plan1408';\n>> maxrank\n>> -------------\n>> 20504000000\n>> (1 row)\n>>\n>> Time: 1.516 ms\n>> psrdb=# SELECT\n>> psrdb-# MAX(item_rank.rank) AS maxRank\n>> psrdb-# FROM\n>> psrdb-# item_rank item_rank\n>> psrdb-# WHERE\n>> psrdb-# item_rank.project_id='proj2783'\n>> psrdb-# AND item_rank.pf_id IS NULL;\n>> maxrank\n>> -------------\n>> 20200000000\n>> (1 row)\n>>\n>> Time: 13.177 ms\n>>\n>> Is there anything that can be done for the second one?\n>>\n>> Thanks,\n>> Anne\n>>\n>> \n> What about an IS NULL index on pf_id?\n>\n> Regards,\n> Ken\n> \nHi Ken,\nI have the following index:\n\"item_rank_index2\" btree (project_id) 
WHERE (pf_id IS NULL)\n\nAre you suggesting something else?\nThanks,\nAnne\n\n\n", "msg_date": "Thu, 10 Jun 2010 12:42:21 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Jochen Erwied wrote:\n> Thursday, June 10, 2010, 9:34:07 PM you wrote:\n>\n> \n>> Time: 1.516 ms\n>> \n>\n> \n>> Time: 13.177 ms\n>> \n>\n> I'd suppose the first query to scan a lot less rows than the second one. \n> Could you supply an explained plan for the fast query?\n>\n> \nHi Jochen,\nHere is the explained plan for the fastest query:\npsrdb=# explain analyze ELECT\npsrdb-# MAX(item_rank.rank) AS maxRank\npsrdb-# FROM\npsrdb-# item_rank item_rank\npsrdb-# WHERE\npsrdb-# item_rank.pf_id='plan1408';\nERROR: syntax error at or near \"ELECT\" at character 17\npsrdb=# explain analyze SELECT\npsrdb-# MAX(item_rank.rank) AS maxRank\npsrdb-# FROM\npsrdb-# item_rank item_rank\npsrdb-# WHERE\npsrdb-# item_rank.pf_id='plan1408';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=8.28..8.29 rows=1 width=8) (actual time=0.708..0.709 \nrows=1 loops=1)\n -> Index Scan using item_rank_pf on item_rank (cost=0.00..8.27 \nrows=1 width=8) (actual time=0.052..0.407 rows=303 loops=1)\n Index Cond: ((pf_id)::text = 'plan1408'::text)\n Total runtime: 0.761 ms\n(4 rows)\n\nTime: 2.140 ms\n\n", "msg_date": "Thu, 10 Jun 2010 12:44:54 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On 6/10/10 12:34 PM, Anne Rosset wrote:\n> Jochen Erwied wrote:\n>> Thursday, June 10, 2010, 8:36:08 PM you wrote:\n>>\n>>> psrdb=# (SELECT\n>>> psrdb(# MAX(item_rank.rank) AS maxRank\n>>> psrdb(# FROM\n>>> psrdb(# item_rank item_rank\n>>> psrdb(# WHERE\n>>> psrdb(# item_rank.project_id='proj2783'\n>>> psrdb(# AND item_rank.pf_id IS NULL\n>>> psrdb(#\n>>> psrdb(# )\n>>> psrdb-# ORDER BY\n>>> psrdb-# maxRank DESC;\n>>\n>> Don't think it does really matter, but why do you sort a resultset\n>> consisting of only one row?\n>>\n> Sorry, I should have removed the ORDER by (the full query has a union).\n> So without the ORDER by, here are the results:\n> psrdb=# SELECT\n> psrdb-# MAX(item_rank.rank) AS maxRank\n> psrdb-# FROM\n> psrdb-# item_rank item_rank\n> psrdb-# WHERE\n> psrdb-# item_rank.pf_id='plan1408';\n> maxrank\n> -------------\n> 20504000000\n> (1 row)\n>\n> Time: 1.516 ms\n> psrdb=# SELECT\n> psrdb-# MAX(item_rank.rank) AS maxRank\n> psrdb-# FROM\n> psrdb-# item_rank item_rank\n> psrdb-# WHERE\n> psrdb-# item_rank.project_id='proj2783'\n> psrdb-# AND item_rank.pf_id IS NULL;\n> maxrank\n> -------------\n> 20200000000\n> (1 row)\n>\n> Time: 13.177 ms\n>\n> Is there anything that can be done for the second one?\n\nPostgres normally doesn't index NULL values even if the column is indexed, so it has to do a table scan when your query includes an IS NULL condition. 
You need to create an index that includes the \"IS NULL\" condition.\n\n create index item_rank_null_idx on item_rank(pf_id)\n where item_rank.pf_id is null;\n\nCraig\n", "msg_date": "Thu, 10 Jun 2010 12:47:07 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Craig James wrote:\n> On 6/10/10 12:34 PM, Anne Rosset wrote:\n>> Jochen Erwied wrote:\n>>> Thursday, June 10, 2010, 8:36:08 PM you wrote:\n>>>\n>>>> psrdb=# (SELECT\n>>>> psrdb(# MAX(item_rank.rank) AS maxRank\n>>>> psrdb(# FROM\n>>>> psrdb(# item_rank item_rank\n>>>> psrdb(# WHERE\n>>>> psrdb(# item_rank.project_id='proj2783'\n>>>> psrdb(# AND item_rank.pf_id IS NULL\n>>>> psrdb(#\n>>>> psrdb(# )\n>>>> psrdb-# ORDER BY\n>>>> psrdb-# maxRank DESC;\n>>>\n>>> Don't think it does really matter, but why do you sort a resultset\n>>> consisting of only one row?\n>>>\n>> Sorry, I should have removed the ORDER by (the full query has a union).\n>> So without the ORDER by, here are the results:\n>> psrdb=# SELECT\n>> psrdb-# MAX(item_rank.rank) AS maxRank\n>> psrdb-# FROM\n>> psrdb-# item_rank item_rank\n>> psrdb-# WHERE\n>> psrdb-# item_rank.pf_id='plan1408';\n>> maxrank\n>> -------------\n>> 20504000000\n>> (1 row)\n>>\n>> Time: 1.516 ms\n>> psrdb=# SELECT\n>> psrdb-# MAX(item_rank.rank) AS maxRank\n>> psrdb-# FROM\n>> psrdb-# item_rank item_rank\n>> psrdb-# WHERE\n>> psrdb-# item_rank.project_id='proj2783'\n>> psrdb-# AND item_rank.pf_id IS NULL;\n>> maxrank\n>> -------------\n>> 20200000000\n>> (1 row)\n>>\n>> Time: 13.177 ms\n>>\n>> Is there anything that can be done for the second one?\n>\n> Postgres normally doesn't index NULL values even if the column is \n> indexed, so it has to do a table scan when your query includes an IS \n> NULL condition. You need to create an index that includes the \"IS \n> NULL\" condition.\n>\n> create index item_rank_null_idx on item_rank(pf_id)\n> where item_rank.pf_id is null;\n>\n> Craig\n>\nHi Craig,\nI tried again after adding your suggested index but I didn't see any \nimprovements: (seems that the index is not used)\npsrdb=# explain analyze SELECT\npsrdb-# MAX(item_rank.rank) AS maxRank\npsrdb-# FROM\npsrdb-# item_rank item_rank\npsrdb-# WHERE\npsrdb-# item_rank.project_id='proj2783'\npsrdb-# AND item_rank.pf_id IS NULL;\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.17..0.18 rows=1 width=0) (actual time=11.942..11.943 \nrows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.17 rows=1 width=8) (actual \ntime=11.931..11.932 rows=1 loops=1)\n -> Index Scan Backward using item_rank_rank on item_rank \n(cost=0.00..2933.84 rows=17558 width=8) (actual time=11.926..11.926 \nrows=1 loops=1)\n Filter: ((rank IS NOT NULL) AND (pf_id IS NULL) AND \n((project_id)::text = 'proj2783'::text))\n Total runtime: 11.988 ms\n(6 rows)\n\nTime: 13.654 ms\n\n\nThanks,\nAnne\n", "msg_date": "Thu, 10 Jun 2010 12:56:49 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On 10/06/10 22:47, Craig James wrote:\n> Postgres normally doesn't index NULL values even if the column is\n> indexed, so it has to do a table scan when your query includes an IS\n> NULL condition.\n\nThat was addressed in version 8.3. 
8.3 and upwards can use an index for \nIS NULL.\n\nI believe the NULLs were stored in the index in earlier releases too, \nthey just couldn't be searched for.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 10 Jun 2010 22:57:16 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Heikki Linnakangas wrote:\n> On 10/06/10 22:47, Craig James wrote:\n>> Postgres normally doesn't index NULL values even if the column is\n>> indexed, so it has to do a table scan when your query includes an IS\n>> NULL condition.\n>\n> That was addressed in version 8.3. 8.3 and upwards can use an index \n> for IS NULL.\n>\n> I believe the NULLs were stored in the index in earlier releases too, \n> they just couldn't be searched for.\n>\nI am using postgres 8.3.6. So why doesn't it use my index?\nThanks,\nAnne\n", "msg_date": "Thu, 10 Jun 2010 13:08:27 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On 06/10/2010 12:56 PM, Anne Rosset wrote:\n> Craig James wrote:\n>> create index item_rank_null_idx on item_rank(pf_id)\n>> where item_rank.pf_id is null;\n>>\n>> Craig\n>>\n> Hi Craig,\n> I tried again after adding your suggested index but I didn't see any\n> improvements: (seems that the index is not used)\n\n> Filter: ((rank IS NOT NULL) AND (pf_id IS NULL) AND\n> ((project_id)::text = 'proj2783'::text))\n> Total runtime: 11.988 ms\n> (6 rows)\n> \n> Time: 13.654 ms\n\ntry:\n\ncreate index item_rank_null_idx on item_rank(pf_id)\nwhere rank IS NOT NULL AND pf_id IS NULL;\n\nJoe", "msg_date": "Thu, 10 Jun 2010 13:10:02 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On 06/10/2010 01:10 PM, Joe Conway wrote:\n> try:\n> \n> create index item_rank_null_idx on item_rank(pf_id)\n> where rank IS NOT NULL AND pf_id IS NULL;\n\noops -- that probably should be:\n\ncreate index item_rank_null_idx on item_rank(project_id)\nwhere rank IS NOT NULL AND pf_id IS NULL;\n\nJoe", "msg_date": "Thu, 10 Jun 2010 13:13:46 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Joe Conway wrote:\n> On 06/10/2010 01:10 PM, Joe Conway wrote:\n> \n>> try:\n>>\n>> create index item_rank_null_idx on item_rank(pf_id)\n>> where rank IS NOT NULL AND pf_id IS NULL;\n>> \n>\n> oops -- that probably should be:\n>\n> create index item_rank_null_idx on item_rank(project_id)\n> where rank IS NOT NULL AND pf_id IS NULL;\n>\n> Joe\n>\n> \nI tried that and it didn't make any difference. Same query plan.\n\nAnne\n", "msg_date": "Thu, 10 Jun 2010 13:21:44 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On 10/06/10 23:08, Anne Rosset wrote:\n> Heikki Linnakangas wrote:\n>> On 10/06/10 22:47, Craig James wrote:\n>>> Postgres normally doesn't index NULL values even if the column is\n>>> indexed, so it has to do a table scan when your query includes an IS\n>>> NULL condition.\n>>\n>> That was addressed in version 8.3. 
8.3 and upwards can use an index\n>> for IS NULL.\n>>\n>> I believe the NULLs were stored in the index in earlier releases too,\n>> they just couldn't be searched for.\n>>\n> I am using postgres 8.3.6. So why doesn't it use my index?\n\nWell, apparently the planner doesn't think it would be any cheaper.\n\nI wonder if this helps:\n\nCREATE INDEX item_rank_project_id ON item_rank(project_id, rank, pf_id);\n\nAnd make sure you drop any of the indexes that are not being used, to \nmake sure the planner doesn't choose them instead.\n\n(You should upgrade to 8.3.11, BTW. There's been a bunch of bug-fixes \nin-between, though I don't know if any are related to this, but there's \nother important fixes there)\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 10 Jun 2010 23:59:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "On 06/10/2010 01:21 PM, Anne Rosset wrote:\n>> \n> I tried that and it didn't make any difference. Same query plan.\n\nA little experimentation suggests this might work:\n\ncreate index item_rank_project on item_rank(project_id, rank) where\npf_id IS NULL;\n\nJoe", "msg_date": "Thu, 10 Jun 2010 14:06:24 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to increase performance of a query" }, { "msg_contents": "Joe Conway wrote:\n> On 06/10/2010 01:21 PM, Anne Rosset wrote:\n> \n>>> \n>>> \n>> I tried that and it didn't make any difference. Same query plan.\n>> \n>\n> A little experimentation suggests this might work:\n>\n> create index item_rank_project on item_rank(project_id, rank) where\n> pf_id IS NULL;\n>\n> Joe\n>\n> \nYes it does. Thanks a lot!\nAnne\n", "msg_date": "Thu, 10 Jun 2010 14:20:15 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to increase performance of a query" } ]
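
The fix that finally worked in this thread is worth restating as one self-contained example, since two details have to line up: the partial index's WHERE clause must match the query's predicate so the planner will consider it, and having rank as the trailing key column is what lets MAX(rank) be read off the index. The statements below are just the thread's own index and query put side by side (object names as posted), not a new recommendation.

CREATE INDEX item_rank_project ON item_rank (project_id, rank)
    WHERE pf_id IS NULL;

-- The pf_id IS NULL condition must appear in the query for the partial index to qualify
SELECT MAX(item_rank.rank) AS maxRank
FROM item_rank item_rank
WHERE item_rank.project_id = 'proj2783'
  AND item_rank.pf_id IS NULL;
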
[ { "msg_contents": "Hi,\n\nI found a slow part of the query:\n\nSELECT\n* date(extract(YEAR FROM m.taken)||'-1-1') d1,*\n* date(extract(YEAR FROM m.taken)||'-1-31') d2*\nFROM\n climate.city c,\n climate.station s,\n climate.station_category sc,\n climate.measurement m\nWHERE\n c.id = 5148 AND ...\n\nDate extraction is 3.2 seconds, but without is 1.5 seconds. The PL/pgSQL\ncode that actually runs (where p_month1, p_day1, and p_month2, p_day2 are\nintegers):\n\n* date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''')\nd1,\n date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''')\nd2\n*\nWhat is a better way to create those dates (without string concatenation, I\npresume)?\n\nDave\n\nHi,\n\nI found a slow part of the query:\n\nSELECT\n  date(extract(YEAR FROM m.taken)||'-1-1') d1,\n  date(extract(YEAR FROM m.taken)||'-1-31') d2FROM\n  climate.city c,\n  climate.station s,\n  climate.station_category sc,\n  climate.measurement mWHERE\n   c.id = 5148 AND ...\n\nDate extraction is 3.2 seconds, but without is 1.5 seconds. The PL/pgSQL code that actually runs (where p_month1, p_day1, and p_month2, p_day2 are integers):        date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1,\n        date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2What is a better way to create those dates (without string concatenation, I presume)?\nDave", "msg_date": "Thu, 10 Jun 2010 17:41:51 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "On 06/10/2010 07:41 PM, David Jarvis wrote:\n> Hi,\n>\n> I found a slow part of the query:\n>\n> SELECT\n> * date(extract(YEAR FROM m.taken)||'-1-1') d1,*\n> * date(extract(YEAR FROM m.taken)||'-1-31') d2*\n> FROM\n> climate.city c,\n> climate.station s,\n> climate.station_category sc,\n> climate.measurement m\n> WHERE\n> c.id <http://c.id> = 5148 AND ...\n>\n> Date extraction is 3.2 seconds, but without is 1.5 seconds. The PL/pgSQL\n> code that actually runs (where p_month1, p_day1, and p_month2, p_day2\n> are integers):\n>\n> * date(extract(YEAR FROM\n> m.taken)||''-'||p_month1||'-'||p_day1||''') d1,\n> date(extract(YEAR FROM\n> m.taken)||''-'||p_month2||'-'||p_day2||''') d2\n> *\n> What is a better way to create those dates (without string\n> concatenation, I presume)?\n>\n> Dave\n>\n\nI assume you are doing this in a loop? Many Many Many times? cuz:\n\nandy=# select date(extract(year from current_date) || '-1-1');\n date\n------------\n 2010-01-01\n(1 row)\n\nTime: 0.528 ms\n\nIts pretty quick. You say \"without\" its 1.5 seconds? Thats all you change? Can we see the sql and 'explain analyze' for both?\n\n-Andy\n", "msg_date": "Thu, 10 Jun 2010 21:11:10 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hi, Andy.\n\nI assume you are doing this in a loop? Many Many Many times? cuz:\n>\n\nYes. 
Here are the variations I have benchmarked (times are best of three):\n\nVariation #0\n-no date field-\nExplain: http://explain.depesz.com/s/Y9R\nTime: 2.2s\n\nVariation #1\ndate('1960-1-1')\nExplain: http://explain.depesz.com/s/DW2\nTime: 2.6s\n\nVariation #2\ndate('1960'||'-1-1')\nExplain: http://explain.depesz.com/s/YuX\nTime: 3.1s\n\nVariation #3\ndate(extract(YEAR FROM m.taken)||'-1-1')\nExplain: http://explain.depesz.com/s/1I\nTime: 4.3s\n\nVariation #4\nto_date( date_part('YEAR', m.taken)::text, 'YYYY' ) + interval '0 months' +\ninterval '0 days'\nExplain: http://explain.depesz.com/s/fIT\nTime: 4.4s\n\nWhat I would like is along Variation #5:\n\n*PGTYPESdate_mdyjul(taken_year, p_month1, p_day1)*\nTime: 2.3s\n\nI find it interesting that variation #2 is half a second slower than\nvariation #1.\n\nThe other question I have is: why does PG seem to discard the results? In\npgAdmin3, I can keep pressing F5 and (before 8.4.4?) the results came back\nin 4s for the first response then 1s in subsequent responses.\n\nDave\n\nHi, Andy.I assume you are doing this in a loop?  Many Many Many times?  cuz:\nYes. Here are the variations I have benchmarked (times are best of three):Variation #0-no date field-Explain: http://explain.depesz.com/s/Y9R\nTime: 2.2sVariation #1date('1960-1-1')Explain: http://explain.depesz.com/s/DW2Time: 2.6sVariation #2\ndate('1960'||'-1-1')Explain: http://explain.depesz.com/s/YuXTime: 3.1sVariation #3\ndate(extract(YEAR FROM m.taken)||'-1-1')Explain: http://explain.depesz.com/s/1ITime: 4.3sVariation #4to_date( date_part('YEAR', m.taken)::text, 'YYYY' ) + interval '0 months' + interval '0 days'\nExplain: http://explain.depesz.com/s/fITTime: 4.4sWhat I would like is along Variation #5:PGTYPESdate_mdyjul(taken_year, p_month1, p_day1)\nTime: 2.3sI find it interesting that variation #2 is half a second slower than variation #1.The other question I have is: why does PG seem to discard the results? In pgAdmin3, I can keep pressing F5 and (before 8.4.4?) the results came back in 4s for the first response then 1s in subsequent responses.\nDave", "msg_date": "Thu, 10 Jun 2010 19:56:35 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hi,\n\nTo avoid string concatenation using dates, I figured I could write a C\nfunction:\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"utils/date.h\"\n#include \"utils/nabstime.h\"\n\n#ifdef PG_MODULE_MAGIC\nPG_MODULE_MAGIC;\n#endif\n\nDatum dateserial (PG_FUNCTION_ARGS);\n\nPG_FUNCTION_INFO_V1 (dateserial);\n\nDatum dateserial (PG_FUNCTION_ARGS) {\n int32 p_year = PG_GETARG_INT32(0);\n int32 p_month = PG_GETARG_INT32(1);\n int32 p_day = PG_GETARG_INT32(2);\n\n DateADT d = date2j (p_year, p_month, p_day) - POSTGRES_EPOCH_JDATE;\n PG_RETURN_DATEADT(d);\n}\n\nCompiles without errors or warnings. The function is integrated as follows:\n\nCREATE OR REPLACE FUNCTION dateserial(integer, integer, integer)\n RETURNS text AS\n'ymd.so', 'dateserial'\n LANGUAGE 'c' IMMUTABLE STRICT\n COST 1;\n\nHowever, when I try to use it, the database segfaults:\n\nselect dateserial( 2007, 1, 3 )\n\nAny ideas why?\n\nThank you!\n\nDave\n\nP.S.\nI have successfully written a function that creates a YYYYmmDD formatted\nstring (using *sprintf*) when given three integers. It returns as expected;\nI ran it as follows:\n\n dateserial( extract(YEAR FROM m.taken)::int, 1, 1 )::date\n\nThis had a best-of-three time of 3.7s compared with 4.3s using string\nconcatenation. 
If I can eliminate all the typecasts, and pass in m.taken\ndirectly (rather than calling *extract*), I think the speed will be closer\nto 2.5s.\n\nAny hints would be greatly appreciated.\n\nHi,To avoid string concatenation using dates, I figured I could write a C function:#include \"postgres.h\"#include \"fmgr.h\"\n#include \"utils/date.h\"#include \"utils/nabstime.h\"#ifdef PG_MODULE_MAGICPG_MODULE_MAGIC;#endifDatum dateserial (PG_FUNCTION_ARGS);PG_FUNCTION_INFO_V1 (dateserial);\nDatum dateserial (PG_FUNCTION_ARGS) {  int32 p_year = PG_GETARG_INT32(0);  int32 p_month = PG_GETARG_INT32(1);  int32 p_day = PG_GETARG_INT32(2);  DateADT d = date2j (p_year, p_month, p_day) - POSTGRES_EPOCH_JDATE;\n  PG_RETURN_DATEADT(d);}Compiles without errors or warnings. The function is integrated as follows:CREATE OR REPLACE FUNCTION dateserial(integer, integer, integer)\n  RETURNS text AS'ymd.so', 'dateserial'  LANGUAGE 'c' IMMUTABLE STRICT  COST 1;However, when I try to use it, the database segfaults:\nselect dateserial( 2007, 1, 3 )Any ideas why?Thank you!DaveP.S.I have successfully written a function that creates a YYYYmmDD formatted string (using sprintf) when given three integers. It returns as expected; I ran it as follows:\n        dateserial( extract(YEAR FROM m.taken)::int, 1, 1 )::dateThis had a best-of-three time of 3.7s compared with 4.3s using string concatenation. If I can eliminate all the typecasts, and pass in m.taken directly (rather than calling extract), I think the speed will be closer to 2.5s.\nAny hints would be greatly appreciated.", "msg_date": "Fri, 11 Jun 2010 01:25:49 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "David Jarvis <[email protected]> wrote:\n\n> [...]\n> Yes. Here are the variations I have benchmarked (times are best of three):\n\n> Variation #0\n> -no date field-\n> Explain: http://explain.depesz.com/s/Y9R\n> Time: 2.2s\n\n> Variation #1\n> date('1960-1-1')\n> Explain: http://explain.depesz.com/s/DW2\n> Time: 2.6s\n\n> Variation #2\n> date('1960'||'-1-1')\n> Explain: http://explain.depesz.com/s/YuX\n> Time: 3.1s\n\n> Variation #3\n> date(extract(YEAR FROM m.taken)||'-1-1')\n> Explain: http://explain.depesz.com/s/1I\n> Time: 4.3s\n\n> Variation #4\n> to_date( date_part('YEAR', m.taken)::text, 'YYYY' ) + interval '0 months' +\n> interval '0 days'\n> Explain: http://explain.depesz.com/s/fIT\n> Time: 4.4s\n\n> What I would like is along Variation #5:\n\n> *PGTYPESdate_mdyjul(taken_year, p_month1, p_day1)*\n> Time: 2.3s\n\n> I find it interesting that variation #2 is half a second slower than\n> variation #1.\n> [...]\n\nHave you tested DATE_TRUNC()?\n\nTim\n\n", "msg_date": "Fri, 11 Jun 2010 08:57:43 +0000", "msg_from": "Tim Landscheidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hi, Tim.\n\nHave you tested DATE_TRUNC()?\n>\n\nNot really; it returns a full timestamp and I would still have to\nconcatenate strings. My goal is to speed up the following code (where\n*p_*parameters are user inputs):\n\n* date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''')\nd1,\n date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''')\nd2*\n\nUsing DATE_TRUNC() won't help here, as far as I can tell. Removing the\nconcatenation will halve the query's time. 
Such as:\n\ndateserial( m.taken, p_month1, p_day1 ) d1,\ndateserial( m.taken, p_month2, p_day2 ) d2\n\nMy testing so far has shown a modest improvement by using a C function (to\navoid concatenation).\n\nDave\n\nHi, Tim.\nHave you tested DATE_TRUNC()?\nNot really; it returns a full timestamp and I would still have to concatenate strings. My goal is to speed up the following code (where p_ parameters are user inputs):        date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1,\n\n        date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2Using DATE_TRUNC() won't help here, as far as I can tell. Removing the concatenation will halve the query's time. Such as:\ndateserial( m.taken, p_month1, p_day1 ) d1,dateserial( m.taken, p_month2, p_day2 ) d2\nMy testing so far has shown a modest improvement by using a C function (to avoid concatenation).Dave", "msg_date": "Fri, 11 Jun 2010 02:18:08 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "On 11/06/10 11:25, David Jarvis wrote:\n> Datum dateserial (PG_FUNCTION_ARGS) {\n> int32 p_year = PG_GETARG_INT32(0);\n> int32 p_month = PG_GETARG_INT32(1);\n> int32 p_day = PG_GETARG_INT32(2);\n>\n> DateADT d = date2j (p_year, p_month, p_day) - POSTGRES_EPOCH_JDATE;\n> PG_RETURN_DATEADT(d);\n> }\n>\n> Compiles without errors or warnings. The function is integrated as follows:\n>\n> CREATE OR REPLACE FUNCTION dateserial(integer, integer, integer)\n> RETURNS text AS\n> 'ymd.so', 'dateserial'\n> LANGUAGE 'c' IMMUTABLE STRICT\n> COST 1;\n>\n> However, when I try to use it, the database segfaults:\n>\n> select dateserial( 2007, 1, 3 )\n>\n> Any ideas why?\n\nThe C function returns a DateADT, which is a typedef for int32, but the \nCREATE FUNCTION statement claims that it returns 'text'.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 11 Jun 2010 12:42:34 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hello all,\n\nOne query about PostgreSQL's index usage. If I select just one column on \nwhich there is an index (or select only columns on which there is an \nindex), and the index is used by PostgreSQL, does PostgreSQL avoid table \naccess if possible? I am trying to understand the differences between \nOracle's data access patterns and PostgreSQL's. 
\nHere is how it works in Oracle.\n\nCase 1 - SELECT column which is not there in the index \n\nSQL> select name from myt where id = 13890;\n\nNAME\n---------------------------------------------------------------------------------------------------\nAAAA\n\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2609414407\n\n-------------------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| \nTime |\n-------------------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 1 | 65 | 2 (0)| \n00:00:01 |\n| 1 | TABLE ACCESS BY INDEX ROWID| MYT | 1 | 65 | 2 (0)| \n00:00:01 |\n|* 2 | INDEX RANGE SCAN | MYIDX | 1 | | 1 (0)| \n00:00:01 |\n-------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 2 - access(\"ID\"=13890)\n\nNote\n-----\n - dynamic sampling used for this statement\n\n\nStatistics\n----------------------------------------------------------\n 0 recursive calls\n 0 db block gets\n 4 consistent gets\n 0 physical reads\n 0 redo size\n 409 bytes sent via SQL*Net to client\n 384 bytes received via SQL*Net from client\n 2 SQL*Net roundtrips to/from client\n 0 sorts (memory)\n 0 sorts (disk)\n 1 rows processed\n\n \n \nCase 1 - SELECT column which is there in the index \n\nSQL> select id from myt where id = 13890;\n\n ID\n----------\n 13890\n\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2555454399\n\n--------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n--------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 1 | 13 | 1 (0)| 00:00:01 |\n|* 1 | INDEX RANGE SCAN| MYIDX | 1 | 13 | 1 (0)| 00:00:01 |\n--------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 1 - access(\"ID\"=13890)\n\nNote\n-----\n - dynamic sampling used for this statement\n\n\nStatistics\n----------------------------------------------------------\n 0 recursive calls\n 0 db block gets\n 3 consistent gets\n 0 physical reads\n 0 redo size\n 407 bytes sent via SQL*Net to client\n 384 bytes received via SQL*Net from client\n 2 SQL*Net roundtrips to/from client\n 0 sorts (memory)\n 0 sorts (disk)\n 1 rows processed\n\nIn the second query where id was selected, the table was not used at all. \nIn PosgreSQL, explain gives me similar output in both cases.\nTable structure - \n\npostgres=# \\d myt\n Table \"public.myt\"\n Column | Type | Modifiers\n--------+-----------------------+-----------\n id | integer |\n name | character varying(20) |\nIndexes:\n \"myidx\" btree (id)\n\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. 
\nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Fri, 11 Jun 2010 15:26:09 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Query about index usage" }, { "msg_contents": "David Jarvis <[email protected]> wrote:\n\n>> Have you tested DATE_TRUNC()?\n\n> Not really; it returns a full timestamp and I would still have to\n> concatenate strings. My goal is to speed up the following code (where\n> *p_*parameters are user inputs):\n\n> * date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''')\n> d1,\n> date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''')\n> d2*\n\n> Using DATE_TRUNC() won't help here, as far as I can tell. Removing the\n> concatenation will halve the query's time. Such as:\n\n> dateserial( m.taken, p_month1, p_day1 ) d1,\n> dateserial( m.taken, p_month2, p_day2 ) d2\n\n> My testing so far has shown a modest improvement by using a C function (to\n> avoid concatenation).\n\nYou could use:\n\n| (DATE_TRUNC('year', m.taken) + p_month1 * '1 month'::INTERVAL + p_day1 * '1 day'::INTERVAL)::DATE\n\nbut whether that is faster or slower I don't know. But I\ndon't see why this query needs to be fast in the first\nplace. It seems to be interactive, and therefore I wouldn't\ninvest too much time to have the user wait not 4.4, but\n2.2 seconds. You could also do the concatenation in the ap-\nplication if that is faster than PostgreSQL's date arithme-\ntics.\n\nTim\n\n", "msg_date": "Fri, 11 Jun 2010 11:18:37 +0000", "msg_from": "Tim Landscheidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Jayadevan M <[email protected]> wrote:\n \n> One query about PostgreSQL's index usage. If I select just one\n> column on which there is an index (or select only columns on which\n> there is an index), and the index is used by PostgreSQL, does\n> PostgreSQL avoid table access if possible?\n \nPostgreSQL can't currently avoid reading the table, because that's\nwhere the tuple visibility information is stored. We've been making\nprogress toward having some way to avoid reading the table for all\nexcept very recently written tuples, but we're not there yet (in any\nproduction version or in the 9.0 version to be released this\nsummer).\n \n-Kevin\n", "msg_date": "Fri, 11 Jun 2010 10:25:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "Jayadevan M wrote:\n> One query about PostgreSQL's index usage. If I select just one column on \n> which there is an index (or select only columns on which there is an \n> index), and the index is used by PostgreSQL, does PostgreSQL avoid table \n> access if possible?\n\nPostgreSQL keeps information about what rows are visible or not in with \nthe row data. It's therefore impossible at this time for it to answer \nqueries just based on what's in an index. 
Once candidate rows are found \nusing one, the database must then also retrieve the row(s) and do a \nsecond check as to whether it's visible to the running transaction or \nnot before returning them to the client.\n\nImproving this situation is high up on the list of things to improve in \nPostgreSQL and the value of it recognized, it just hasn't been built yet.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 11 Jun 2010 11:32:13 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "Hi,\n\nThe C function returns a DateADT, which is a typedef for int32, but the\n> CREATE FUNCTION statement claims that it returns 'text'.\n>\n\nThat'll do it. Thank you!\n\nbut whether that is faster or slower I don't know. But I\n> don't see why this query needs to be fast in the first\n> place. It seems to be interactive, and therefore I wouldn't\n>\n\nWhen users click the button, I want the result returned in in less under 4\nseconds. Right now it is closer to 10. Consequently, they click twice.\nShaving 2 seconds here and there will make a huge difference. It will also\nallow the computer to handle a higher volume of requests.\n\n\n> invest too much time to have the user wait not 4.4, but\n> 2.2 seconds. You could also do the concatenation in the ap-\n> plication if that is faster than PostgreSQL's date arithme-\n> tics.\n>\n\nNo, I cannot. The concatenation uses the year that the measurement was\nmade--from the database--and the month/day combination from the user. See\nalso:\n\nhttp://stackoverflow.com/questions/2947105/calculate-year-for-end-date-postgresql\n\nDave\n\nHi,\n\nThe C function returns a DateADT, which is a typedef for int32, but the CREATE FUNCTION statement claims that it returns 'text'.That'll do it. Thank you!\n\nbut whether that is faster or slower I don't know. But I\ndon't see why this query needs to be fast in the first\nplace. It seems to be interactive, and therefore I wouldn'tWhen users click the button, I want the result returned in in less under 4 seconds. Right now it is closer to 10. Consequently, they click twice. Shaving 2 seconds here and there will make a huge difference. It will also allow the computer to handle a higher volume of requests.\n \ninvest too much time to have the user wait not 4.4, but\n2.2 seconds. You could also do the concatenation in the ap-\nplication if that is faster than PostgreSQL's date arithme-\ntics.No, I cannot. The concatenation uses the year that the measurement was made--from the database--and the month/day combination from the user. 
See also:http://stackoverflow.com/questions/2947105/calculate-year-for-end-date-postgresql\nDave", "msg_date": "Fri, 11 Jun 2010 10:09:49 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hi,\n\nHere is code to convert dates from integers without string concatenation:\n\nEdit dateserial.c:\n\n#include \"postgres.h\"\n#include \"utils/date.h\"\n#include \"utils/nabstime.h\"\n\n#ifdef PG_MODULE_MAGIC\nPG_MODULE_MAGIC;\n#endif\n\nDatum dateserial(PG_FUNCTION_ARGS);\n\nPG_FUNCTION_INFO_V1 (dateserial);\n\nDatum\ndateserial(PG_FUNCTION_ARGS) {\n int32 p_year = (int32)PG_GETARG_FLOAT8(0);\n int32 p_month = PG_GETARG_INT32(1);\n int32 p_day = PG_GETARG_INT32(2);\n\n PG_RETURN_DATEADT( date2j( p_year, p_month, p_day ) - POSTGRES_EPOCH_JDATE );\n}\n\nEdit Makefile:\n\nMODULES = dateserial\nPGXS := $(shell pg_config --pgxs)\ninclude $(PGXS)\n\nEdit inst.sh (optional):\n\n#!/bin/bash\n\nmake clean && make && strip *.so && make install &&\n/etc/init.d/postgresql-8.4 restart\n\nRun bash inst.sh.\n\nCreate a SQL function dateserial:\n\nCREATE OR REPLACE FUNCTION dateserial(double precision, integer, integer)\n RETURNS date AS\n'$libdir/dateserial', 'dateserial'\n LANGUAGE 'c' IMMUTABLE STRICT\n COST 1;\nALTER FUNCTION dateserial(double precision, integer, integer) OWNER TO postgres;\n\nTest the function:\n\nSELECT dateserial( 2007, 5, 5 )\n\nUsing this function, performance increases from 4.4s to 2.8s..\n\nDave\n\nHi,Here is code to convert dates from integers without string concatenation:Edit dateserial.c:\n\n#include \"postgres.h\"#include \"utils/date.h\"\n#include \"utils/nabstime.h\"#ifdef PG_MODULE_MAGIC\nPG_MODULE_MAGIC;#endifDatum dateserial(PG_FUNCTION_ARGS);\nPG_FUNCTION_INFO_V1 (dateserial);Datumdateserial(PG_FUNCTION_ARGS) {\n  int32 p_year = (int32)PG_GETARG_FLOAT8(0);\n  int32 p_month = PG_GETARG_INT32(1);  int32 p_day = PG_GETARG_INT32(2);\n  PG_RETURN_DATEADT( date2j( p_year, p_month, p_day ) - POSTGRES_EPOCH_JDATE );\n}\nEdit Makefile:\nMODULES = dateserialPGXS := $(shell pg_config --pgxs)\ninclude $(PGXS)\nEdit inst.sh (optional):\n#!/bin/bashmake clean && make && strip *.so && make install && /etc/init.d/postgresql-8.4 restart\n\nRun bash inst.sh.\nCreate a SQL function dateserial:\nCREATE OR REPLACE FUNCTION dateserial(double precision, integer, integer)\n  RETURNS date AS'$libdir/dateserial', 'dateserial'  LANGUAGE 'c' IMMUTABLE STRICT\n  COST 1;ALTER FUNCTION dateserial(double precision, integer, integer) OWNER TO postgres;\n\nTest the function:\nSELECT dateserial( 2007, 5, 5 )\nUsing this function, performance increases from 4.4s to 2.8s..Dave", "msg_date": "Fri, 11 Jun 2010 11:17:18 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Jayadevan,\n\nPostgreSQL must go to the table to determine if the row you are requesting is visible to your transaction. This is an artifact of the MVCC implementation. Oracle can fetch the data from the index, since it doesn't keep multiple representations of the rows, but it may need to check the undo logs to determine the state that applies to your transaction. 
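A quick way to see the PostgreSQL side of that is to EXPLAIN the two queries from the start of this thread. Assuming the planner picks the index for such a selective predicate, both produce the same kind of plan (an index scan that still visits the table), which matches what you already observed:

  EXPLAIN SELECT name FROM myt WHERE id = 13890;
  EXPLAIN SELECT id FROM myt WHERE id = 13890;
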
Its just two different ways to accomplish the same thing.\n\nBob Lunney\n\n--- On Fri, 6/11/10, Jayadevan M <[email protected]> wrote:\n\n> From: Jayadevan M <[email protected]>\n> Subject: [PERFORM] Query about index usage\n> To: [email protected]\n> Date: Friday, June 11, 2010, 5:56 AM\n> Hello all,\n> \n> One query about PostgreSQL's index usage. If I select just\n> one column on \n> which there is an index (or select only columns on which\n> there is an \n> index), and the index is used by PostgreSQL, does\n> PostgreSQL avoid table \n> access if possible?  I am trying to understand the\n> differences between \n> Oracle's data access patterns and PostgreSQL's. \n> Here is how it works in Oracle.\n> \n> Case 1 - SELECT column which is not there in the index \n> \n> SQL> select name from myt where id = 13890;\n> \n> NAME\n> ---------------------------------------------------------------------------------------------------\n> AAAA\n> \n> \n> Execution Plan\n> ----------------------------------------------------------\n> Plan hash value: 2609414407\n> \n> -------------------------------------------------------------------------------------\n> | Id  | Operation         \n>          | Name  |\n> Rows  | Bytes | Cost (%CPU)| \n> Time    |\n> -------------------------------------------------------------------------------------\n> |   0 | SELECT STATEMENT     \n>       |   \n>    |     1 |   \n> 65 |     2   (0)| \n> 00:00:01 |\n> |   1 |  TABLE ACCESS BY INDEX ROWID|\n> MYT   |     1 | \n>   65 |     2   (0)|\n> \n> 00:00:01 |\n> |*  2 |   INDEX RANGE SCAN   \n>       | MYIDX |     1\n> |       | \n>    1   (0)| \n> 00:00:01 |\n> -------------------------------------------------------------------------------------\n> \n> Predicate Information (identified by operation id):\n> ---------------------------------------------------\n> \n>    2 - access(\"ID\"=13890)\n> \n> Note\n> -----\n>    - dynamic sampling used for this\n> statement\n> \n> \n> Statistics\n> ----------------------------------------------------------\n>           0  recursive calls\n>           0  db block gets\n>           4  consistent gets\n>           0  physical reads\n>           0  redo size\n>         409  bytes sent via\n> SQL*Net to client\n>         384  bytes received via\n> SQL*Net from client\n>           2  SQL*Net\n> roundtrips to/from client\n>           0  sorts (memory)\n>           0  sorts (disk)\n>           1  rows processed\n> \n> \n> \n> Case 1 - SELECT column which is there in the index \n> \n> SQL> select id from myt where id = 13890;\n> \n>         ID\n> ----------\n>      13890\n> \n> \n> Execution Plan\n> ----------------------------------------------------------\n> Plan hash value: 2555454399\n> \n> --------------------------------------------------------------------------\n> | Id  | Operation        |\n> Name  | Rows  | Bytes | Cost (%CPU)| Time \n>    |\n> --------------------------------------------------------------------------\n> |   0 | SELECT STATEMENT |   \n>    |     1 |   \n> 13 |     1   (0)|\n> 00:00:01 |\n> |*  1 |  INDEX RANGE SCAN| MYIDX | \n>    1 |    13 | \n>    1   (0)| 00:00:01 |\n> --------------------------------------------------------------------------\n> \n> Predicate Information (identified by operation id):\n> ---------------------------------------------------\n> \n>    1 - access(\"ID\"=13890)\n> \n> Note\n> -----\n>    - dynamic sampling used for this\n> statement\n> \n> \n> Statistics\n> ----------------------------------------------------------\n>           0  
recursive calls\n>           0  db block gets\n>           3  consistent gets\n>           0  physical reads\n>           0  redo size\n>         407  bytes sent via\n> SQL*Net to client\n>         384  bytes received via\n> SQL*Net from client\n>           2  SQL*Net\n> roundtrips to/from client\n>           0  sorts (memory)\n>           0  sorts (disk)\n>           1  rows processed\n> \n> In the second query where id was selected, the table was\n> not used at all. \n> In PosgreSQL, explain gives me similar output in both\n> cases.\n> Table structure - \n> \n> postgres=# \\d myt\n>              Table\n> \"public.myt\"\n> Column |         Type \n>         | Modifiers\n> --------+-----------------------+-----------\n> id     | integer     \n>          |\n> name   | character varying(20) |\n> Indexes:\n>     \"myidx\" btree (id)\n> \n> \n> Regards,\n> Jayadevan\n> \n> \n> \n> \n> \n> DISCLAIMER: \n> \n> \"The information in this e-mail and any attachment is\n> intended only for \n> the person to whom it is addressed and may contain\n> confidential and/or \n> privileged material. If you have received this e-mail in\n> error, kindly \n> contact the sender and destroy all copies of the original\n> communication. \n> IBS makes no warranty, express or implied, nor guarantees\n> the accuracy, \n> adequacy or completeness of the information contained in\n> this email or any \n> attachment and is not liable for any errors, defects,\n> omissions, viruses \n> or for resultant loss or damage, if any, direct or\n> indirect.\"\n> \n> \n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Fri, 11 Jun 2010 11:50:10 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "David Jarvis <[email protected]> wrote:\n\n> [...]\n>> invest too much time to have the user wait not 4.4, but\n>> 2.2 seconds. You could also do the concatenation in the ap-\n>> plication if that is faster than PostgreSQL's date arithme-\n>> tics.\n\n> No, I cannot. The concatenation uses the year that the measurement was\n> made--from the database--and the month/day combination from the user. See\n> also:\n\n> http://stackoverflow.com/questions/2947105/calculate-year-for-end-date-postgresql\n\nThat page doesn't deal with \"select year from database and\nmonth/day from user and present the results\", but *much*\ndifferent problems.\n\nTim\n\n", "msg_date": "Fri, 11 Jun 2010 18:53:44 +0000", "msg_from": "Tim Landscheidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "David Jarvis <[email protected]> writes:\n> dateserial(PG_FUNCTION_ARGS) {\n> int32 p_year = (int32)PG_GETARG_FLOAT8(0);\n> int32 p_month = PG_GETARG_INT32(1);\n> int32 p_day = PG_GETARG_INT32(2);\n\nEr ... why float? Integer is plenty for the range of years supported by\nthe PG datetime infrastructure. 
The above coding is pretty lousy in terms\nof its roundoff and overflow behavior, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jun 2010 16:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function " }, { "msg_contents": "Hi, Tom.\n\nextract(YEAR FROM m.taken)\n\nI thought that returned a double precision?\n\nDave\n\nHi, Tom.extract(YEAR FROM m.taken)I thought that returned a double precision?Dave", "msg_date": "Fri, 11 Jun 2010 13:30:58 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hi,\n\nI added an explicit cast in the SQL:\n\n dateserial(extract(YEAR FROM\nm.taken)::int,'||p_month1||','||p_day1||') d1,\n dateserial(extract(YEAR FROM\nm.taken)::int,'||p_month2||','||p_day2||') d2\n\nThe function now takes three integer parameters; there was no performance\nloss.\n\nThank you.\n\nDave\n\nHi,I added an explicit cast in the SQL:        dateserial(extract(YEAR FROM m.taken)::int,'||p_month1||','||p_day1||') d1,        dateserial(extract(YEAR FROM m.taken)::int,'||p_month2||','||p_day2||') d2\nThe function now takes three integer parameters; there was no performance loss.Thank you.Dave", "msg_date": "Fri, 11 Jun 2010 13:38:07 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "On 11/06/10 23:38, David Jarvis wrote:\n> I added an explicit cast in the SQL:\n>\n> dateserial(extract(YEAR FROM\n> m.taken)::int,'||p_month1||','||p_day1||') d1,\n> dateserial(extract(YEAR FROM\n> m.taken)::int,'||p_month2||','||p_day2||') d2\n>\n> The function now takes three integer parameters; there was no performance\n> loss.\n\nWe had a little chat about this with Magnus. It's pretty surprising that \nthere's no built-in function to do this, we should consider adding one.\n\nWe could have a function like:\n\nconstruct_timestamp(year int4, month int4, date int4, hour int4, minute \nint4, second int4, milliseconds int4, timezone text)\n\nNow that we have named parameter notation, callers can use it to \nconveniently fill in only the fields needed:\n\nSELECT construct_timestamp(year := 1999, month := 10, date := 22);\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 13 Jun 2010 09:02:23 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Hi,\n\nWe had a little chat about this with Magnus. It's pretty surprising that\n> there's no built-in function to do this, we should consider adding one.\n>\n\nI agree; you should be able to create a timestamp or a date from integer\nvalues. Others, apparently, have written code. The implementation I did was\npretty rudimentary, but I was going for speed.\n\nIf you could overload to_date and to_timestamp, that would be great. For\nexample:\n\nto_date( year ) = year-01-01\nto_date( year, month ) = year-month-01\nto_date( year, month, day ) = year-month-day\n\nto_timestamp( year, month, day, hour ) = year-month-day hour:00:00.0000 GMT\netc.\n\nconstruct_timestamp(year int4, month int4, date int4, hour int4, minute\n> int4, second int4, milliseconds int4, timezone text)\n>\n\nAlso, \"date int4\" should be \"day int4\", to avoid confusion with the date\ntype.\n\nDoes it makes sense to use named parameter notation for the first value (the\nyear)? 
This could be potentially confusing:\n\nto_date() - What would this return? now()? Jan 1st, 1970? 2000?\n\nSimilarly, to_timestamp() ...? Seems meaningless without at least a full\ndate and an hour.\n\nDave\n\nHi,We had a little chat about this with Magnus. It's pretty surprising that there's no built-in function to do this, we should consider adding one.\nI agree; you should be able to create a timestamp or a date from integer values. Others, apparently, have written code. The implementation I did was pretty rudimentary, but I was going for speed.If you could overload to_date and to_timestamp, that would be great. For example:\nto_date( year ) = year-01-01to_date( year, month ) = year-month-01to_date( year, month, day ) = year-month-dayto_timestamp( year, month, day, hour ) = year-month-day hour:00:00.0000 GMTetc.\n\nconstruct_timestamp(year int4, month int4, date int4, hour int4, minute int4, second int4, milliseconds int4, timezone text)Also, \"date int4\" should be \"day int4\", to avoid confusion with the date type.\nDoes it makes sense to use named parameter notation for the first value (the year)? This could be potentially confusing:to_date() - What would this return? now()? Jan 1st, 1970? 2000?Similarly, to_timestamp() ...? Seems meaningless without at least a full date and an hour.\nDave", "msg_date": "Sun, 13 Jun 2010 00:38:48 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "On Sun, Jun 13, 2010 at 09:38, David Jarvis <[email protected]> wrote:\n> Hi,\n>\n>> We had a little chat about this with Magnus. It's pretty surprising that\n>> there's no built-in function to do this, we should consider adding one.\n>\n> I agree; you should be able to create a timestamp or a date from integer\n> values. Others, apparently, have written code. The implementation I did was\n> pretty rudimentary, but I was going for speed.\n>\n> If you could overload to_date and to_timestamp, that would be great. For\n> example:\n>\n> to_date( year ) = year-01-01\n> to_date( year, month ) = year-month-01\n> to_date( year, month, day ) = year-month-day\n>\n> to_timestamp( year, month, day, hour ) = year-month-day hour:00:00.0000 GMT\n> etc.\n\nNot that it would make a huge difference over having to specify 1's\nand 0's there, but I agree that could be useful.\n\n\n>> construct_timestamp(year int4, month int4, date int4, hour int4, minute\n>> int4, second int4, milliseconds int4, timezone text)\n>\n> Also, \"date int4\" should be \"day int4\", to avoid confusion with the date\n> type.\n\nYes, absolutely.\n\n\n> Does it makes sense to use named parameter notation for the first value (the\n> year)? This could be potentially confusing:\n\nHow so? If it does named parameters, why not all?\n\n\n> to_date() - What would this return? now()? Jan 1st, 1970? 2000?\n\nERROR, IMHO. We have a function for now() already, and the others are\nso arbitrary there is no way to explain such a choice.\n\n\n> Similarly, to_timestamp() ...? 
Seems meaningless without at least a full\n> date and an hour.\n\nAgreed.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Sun, 13 Jun 2010 11:42:50 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> We could have a function like:\n\n> construct_timestamp(year int4, month int4, date int4, hour int4, minute \n> int4, second int4, milliseconds int4, timezone text)\n\nThis fails to allow specification to the microsecond level (and note\nthat with float timestamps even smaller fractions have potential use).\nI would suggest dropping the milliseconds argument and instead letting\nthe seconds arg be float8. That seems a closer match to the way people\nthink about the textual representation.\n\n> Now that we have named parameter notation, callers can use it to \n> conveniently fill in only the fields needed:\n\nIt's not immediately obvious what the default value of \"timezone\"\nwill be?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jun 2010 11:34:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function " }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Sun, Jun 13, 2010 at 09:38, David Jarvis <[email protected]> wrote:\n>> Does it makes sense to use named parameter notation for the first value (the\n>> year)? This could be potentially confusing:\n\n> How so? If it does named parameters, why not all?\n\nThere's no reason not to allow the year parameter to be named. What\nI think it shouldn't have is a default. OTOH I see no good reason\nnot to allow the other ones to have defaults. (We presumably want\ntimezone to default to the system timezone setting, but I wonder how\nwe should make that work --- should an empty string be treated as\nmeaning that?)\n\n>> Similarly, to_timestamp() ...? Seems meaningless without at least a full\n>> date and an hour.\n\n> Agreed.\n\nNo, I think it's perfectly sane to allow month/day to default to 1\nand h/m/s to zeroes.\n\nI do think it might be a good idea to have two functions,\nconstruct_timestamp yielding timestamptz and construct_date\nyielding date (and needing only 3 args). When you only want\na date, having to use construct_timestamp and cast will be\nawkward and much more expensive than is needed (timezone\nrotations aren't real cheap).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jun 2010 11:42:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function " }, { "msg_contents": "On Sun, Jun 13, 2010 at 17:42, Tom Lane <[email protected]> wrote:\n> Magnus Hagander <[email protected]> writes:\n>> On Sun, Jun 13, 2010 at 09:38, David Jarvis <[email protected]> wrote:\n>>> Does it makes sense to use named parameter notation for the first value (the\n>>> year)? This could be potentially confusing:\n>\n>> How so? If it does named parameters, why not all?\n>\n> There's no reason not to allow the year parameter to be named.  What\n> I think it shouldn't have is a default.  OTOH I see no good reason\n> not to allow the other ones to have defaults.  (We presumably want\n> timezone to default to the system timezone setting, but I wonder how\n> we should make that work --- should an empty string be treated as\n> meaning that?)\n\nUmm. 
NULL could be made to mean that, or we could provicde two\ndifferent versions - one that takes TZ and one that doesn't.\n\n\n>>> Similarly, to_timestamp() ...? Seems meaningless without at least a full\n>>> date and an hour.\n>\n>> Agreed.\n>\n> No, I think it's perfectly sane to allow month/day to default to 1\n> and h/m/s to zeroes.\n>\n> I do think it might be a good idea to have two functions,\n> construct_timestamp yielding timestamptz and construct_date\n> yielding date (and needing only 3 args).  When you only want\n> a date, having to use construct_timestamp and cast will be\n> awkward and much more expensive than is needed (timezone\n> rotations aren't real cheap).\n\nAnd a third, construct_time(), no?\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Sun, 13 Jun 2010 17:49:50 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Sun, Jun 13, 2010 at 17:42, Tom Lane <[email protected]> wrote:\n>> ... (We presumably want\n>> timezone to default to the system timezone setting, but I wonder how\n>> we should make that work --- should an empty string be treated as\n>> meaning that?)\n\n> Umm. NULL could be made to mean that, or we could provicde two\n> different versions - one that takes TZ and one that doesn't.\n\nUsing NULL like that seems a bit awkward: for one thing it'd mean the\nfunction couldn't be STRICT, and also it'd be bizarre that only this\none argument could be null without leading to a null result.\n\nAnd two separate functions isn't good either. Basically, I think it's\nimportant that there be a way to specify an explicit parameter value\nthat behaves identically to the default.\n\n> And a third, construct_time(), no?\n\nYeah, maybe ... do you think there's any demand for it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jun 2010 11:58:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function " }, { "msg_contents": "Hi,\n\nIt's not immediately obvious what the default value of \"timezone\"\n> will be?\n>\n\nThe system's locale, like now(); documentation can clarify.\n\nBy named parameter, I meant default value. You could construct a timestamp\nvariable using:\n\n construct_timestamp( year := 1900, hour := 1 )\n\nWhen I read that code, the first thing I think it should return is:\n\n 1900-01-01 01:00:00.0000-07\n\nI agree construct_timestamp( hour := 1 ) and construct_date() are errors:\nyear is required.\n\nDave\n\nP.S.\nI prefer to_timestamp and to_date over the more verbose construct_timestamp.\n\nHi,\nIt's not immediately obvious what the default value of \"timezone\"\n\nwill be?The system's locale, like now(); documentation can clarify.By named parameter, I meant default value. 
You could construct a timestamp variable using:  construct_timestamp( year := 1900, hour := 1 )\nWhen I read that code, the first thing I think it should return is:  1900-01-01 01:00:00.0000-07I agree construct_timestamp( hour := 1 ) and construct_date() are errors: year is required.Dave\nP.S.I prefer to_timestamp and to_date over the more verbose construct_timestamp.", "msg_date": "Sun, 13 Jun 2010 12:19:05 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "> PostgreSQL can't currently avoid reading the table, because that's\n> where the tuple visibility information is stored. We've been making\n> progress toward having some way to avoid reading the table for all\n> except very recently written tuples, but we're not there yet (in any\n> production version or in the 9.0 version to be released this\n> summer).\nThank you for all the replies. I am learning PostgreSQL and figuring out \nwhich of the standard techniques for tuning queries in Oracle works in \nPostgreSQL as well. Thank you. \nRegards,\nJayadevan \n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Mon, 14 Jun 2010 09:10:45 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "On Sun, Jun 13, 2010 at 17:58, Tom Lane <[email protected]> wrote:\n> Magnus Hagander <[email protected]> writes:\n>> On Sun, Jun 13, 2010 at 17:42, Tom Lane <[email protected]> wrote:\n>>> ... (We presumably want\n>>> timezone to default to the system timezone setting, but I wonder how\n>>> we should make that work --- should an empty string be treated as\n>>> meaning that?)\n>\n>> Umm. NULL could be made to mean that, or we could provicde two\n>> different versions - one that takes TZ and one that doesn't.\n>\n> Using NULL like that seems a bit awkward: for one thing it'd mean the\n> function couldn't be STRICT, and also it'd be bizarre that only this\n> one argument could be null without leading to a null result.\n\nHmm, yeah.\n\n\n> And two separate functions isn't good either.  Basically, I think it's\n> important that there be a way to specify an explicit parameter value\n> that behaves identically to the default.\n\nIn that case, empty string seems fairly reasonable - if you look at\nthe text based parsing, that's what we do if the timezone is an \"empty\nstring\" (meaning not specified).\n\n\n\n>> And a third, construct_time(), no?\n>\n> Yeah, maybe ... do you think there's any demand for it?\n\nYes, I think there is. 
Plus, it's for completeness :-)\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Mon, 14 Jun 2010 11:01:47 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "On Sun, Jun 13, 2010 at 21:19, David Jarvis <[email protected]> wrote:\n> Hi,\n>\n>> It's not immediately obvious what the default value of \"timezone\"\n>> will be?\n>\n> The system's locale, like now(); documentation can clarify.\n>\n> By named parameter, I meant default value. You could construct a timestamp\n> variable using:\n>\n>   construct_timestamp( year := 1900, hour := 1 )\n>\n> When I read that code, the first thing I think it should return is:\n>\n>   1900-01-01 01:00:00.0000-07\n>\n> I agree construct_timestamp( hour := 1 ) and construct_date() are errors:\n> year is required.\n\nDoes it make sense to allow minutes when hours isn't specified? Or\nshould we simply say that for each of the date and the time part, to\nspecify at <level n> you need to have everything from the top up to\n<level n-1> specified? E.g. month requires year to be specified, day\nrequires both year and month etc?\n\n\n> I prefer to_timestamp and to_date over the more verbose construct_timestamp.\n\nYeah, I agree with that.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Mon, 14 Jun 2010 11:03:37 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": ">\n> Does it make sense to allow minutes when hours isn't specified? Or\n>\n\nFor time, 00 seems a reasonable default for all values; clearly document the\ndefaults. Also, having a default makes the code simpler than <level n> plus\n<level n-1>. (Not to mention explaining it.) ;-)\n\nSELECT to_timestamp( minutes := 19 ) -- error (year not specified)\nSELECT to_timestamp( year := 2000, minutes := 19 ) -- 2000-01-01\n00:19:00.0000-07\n\nDave\n\nDoes it make sense to allow minutes when hours isn't specified? Or\nFor time, 00 seems a reasonable default for all values; clearly document the defaults. Also, having a default makes the code simpler than <level n> plus <level n-1>. (Not to mention explaining it.) ;-)\nSELECT to_timestamp( minutes := 19 ) -- error (year not specified)SELECT to_timestamp( year := 2000, minutes := 19 ) -- 2000-01-01 00:19:00.0000-07Dave", "msg_date": "Mon, 14 Jun 2010 04:10:41 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Sun, Jun 13, 2010 at 21:19, David Jarvis <[email protected]> wrote:\n>> I prefer to_timestamp and to_date over the more verbose construct_timestamp.\n\n> Yeah, I agree with that.\n\nThose names are already taken. 
It will cause confusion (of both people\nand machines) if you try to overload them with this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jun 2010 09:59:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function " }, { "msg_contents": "On Mon, Jun 14, 2010 at 15:59, Tom Lane <[email protected]> wrote:\n> Magnus Hagander <[email protected]> writes:\n>> On Sun, Jun 13, 2010 at 21:19, David Jarvis <[email protected]> wrote:\n>>> I prefer to_timestamp and to_date over the more verbose construct_timestamp.\n>\n>> Yeah, I agree with that.\n>\n> Those names are already taken.  It will cause confusion (of both people\n> and machines) if you try to overload them with this.\n\nFair enough. How about something like make_timestamp? It's at least\nshorter and easier than construct :-)\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Wed, 16 Jun 2010 08:36:25 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "> Fair enough. How about something like make_timestamp? It's at least\n> shorter and easier than construct :-)\n>\n\nAgreed.\n\nDave\n\n\nFair enough. How about something like make_timestamp? It's at least\n\nshorter and easier than construct :-)Agreed.Dave", "msg_date": "Wed, 16 Jun 2010 01:48:27 -0700", "msg_from": "David Jarvis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analysis Function" }, { "msg_contents": "David Jarvis <[email protected]> writes:\n>> Fair enough. How about something like make_timestamp? It's at least\n>> shorter and easier than construct :-)\n\n> Agreed.\n\nNo objection here either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Jun 2010 10:33:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function " }, { "msg_contents": "Hello,\n> PostgreSQL can't currently avoid reading the table, because that's\n> where the tuple visibility information is stored. We've been making\n> progress toward having some way to avoid reading the table for all\n> except very recently written tuples, but we're not there yet (in any\n> production version or in the 9.0 version to be released this\n> summer).\nMore doubts on how indexes are used by PostgreSQL. It is mentioned that \ntable data blocks have data about tuple visibility and hence table scans \nare always necessary. So how does PostgreSQL reduce the number of blocks \nto be read by using indexes? Does this mean that indexes will have \nreferences to all the 'possible' blocks which may contain the data one is \nsearching for, and then scans all those blocks and eliminates records \nwhich should not be 'visible' to the query being executed? Do index data \nget updated as and when data is committed and made 'visible' or is it \nthat index data get updated as soon as data is changed, before commit is \nissued and rollback of transaction results in a rollback of the index data \nchanges too?\nRegards,\nJayadevan \n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. 
\nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Wed, 23 Jun 2010 09:46:19 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "Jayadevan M wrote:\n> It is mentioned that table data blocks have data about tuple visibility and hence table scans \n> are always necessary. So how does PostgreSQL reduce the number of blocks \n> to be read by using indexes?\n\nTo be useful, a query utilizing an index must be selective: it must \nonly return a fraction of the possible rows in the table. Scanning the \nindex will produce a list of blocks that contain the potentially visible \ndata, then only those data blocks will be retrieved and tested for \nvisibility.\n\nLet's say you have a table that's 100 pages (pages are 8KB) and an index \nthat's 50 pages against it. You run a query that only selects 5% of the \nrows in the table, from a continuous section. Very rough estimate, it \nwill look at 5% * 50 = 3 index pages. Those will point to a matching \nset of 5% * 100 = 5 data pages. Now you've just found the right subset \nof the data by only retrieving 8 random pages of data instead of 100. \nWith random_page_cost=4.0, that would give this plan a cost of around \n32, while the sequential scan one would cost 100 * 1.0 (sequential \naccesses) for a cost of around 100 (Both of them would also have some \nsmaller row processing cost added in there too).\n\nIt's actually a bit more complicated than that--the way indexes are \nbuilt means you can't just linearly estimate their usage, and scans of \nnon-contiguous sections are harder to model simply--but that should give \nyou an idea. Only when using the index significantly narrows the number \nof data pages expected will it be an improvement over ignoring the index \nand just scanning the whole table.\n\nIf the expected use of the index was only 20% selective for another \nquery, you'd be getting 20% * 50 = 10 index pages, 20% * 100 = 20 data \npages, for a potential total of 30 random page lookups. That could end \nup costing 30 * 4.0 = 120, higher than the sequential scan. Usually \nthe breakpoint for how much of a table has to be scanned before just \nscanning the whole thing sequentially is considered cheaper happens near \n20% of it, and you can shift it around by adjusting random_page_cost. \nMake it lower, and you can end up preferring index scans even for 30 or \n40% of a table.\n\n> Do index data get updated as and when data is committed and made 'visible' or is it \n> that index data get updated as soon as data is changed, before commit is \n> issued and rollback of transaction results in a rollback of the index data\n\nIndex changes happen when the data goes into the table, including \nsituations where it might not be committed. The index change doesn't \never get deferred to commit time, like you can things like foreign key \nchecks. 
When a transaction is rolled back, the aborted row eventually \ngets marked as dead by vacuum, at which point any index records pointing \nto it can also be cleaned up.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 23 Jun 2010 02:05:44 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "Thank you for the detailed explanation.\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Wed, 23 Jun 2010 11:45:06 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query about index usage" }, { "msg_contents": "Tom Lane wrote:\n> David Jarvis <[email protected]> writes:\n> >> Fair enough. How about something like make_timestamp? It's at least\n> >> shorter and easier than construct :-)\n> \n> > Agreed.\n> \n> No objection here either.\n\nAdded to TODO:\n\n Add function to allow the creation of timestamps using parameters\n\n * http://archives.postgresql.org/pgsql-performance/2010-06/msg00232.php\n\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Wed, 30 Jun 2010 20:42:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analysis Function" } ]
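Until something like the proposed make_timestamp() exists, a plain-SQL stand-in along the lines discussed in the thread above can be built from date arithmetic and parameter defaults. The sketch below is hypothetical (the name make_timestamp_sketch, the choice of timestamp without time zone, and the fixed base value are mine, not part of the proposal), but it supports the named-parameter call style shown earlier and avoids text concatenation entirely:

CREATE FUNCTION make_timestamp_sketch(
    year   int,                  -- required, no default
    month  int    DEFAULT 1,
    day    int    DEFAULT 1,
    hour   int    DEFAULT 0,
    minute int    DEFAULT 0,
    second float8 DEFAULT 0.0    -- float8 so fractional seconds work
) RETURNS timestamp AS $$
    -- Offset from a fixed base timestamp instead of parsing text.
    -- NB: out-of-range month/day values roll over silently here,
    -- whereas a real constructor would presumably raise an error.
    SELECT TIMESTAMP '0001-01-01 00:00:00'
         + ($1 - 1) * INTERVAL '1 year'
         + ($2 - 1) * INTERVAL '1 month'
         + ($3 - 1) * INTERVAL '1 day'
         + $4 * INTERVAL '1 hour'
         + $5 * INTERVAL '1 minute'
         + $6 * INTERVAL '1 second';
$$ LANGUAGE sql IMMUTABLE STRICT;

-- Named-parameter call, leaving the unspecified fields at their defaults:
SELECT make_timestamp_sketch(year := 1999, month := 10, day := 22);
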
[ { "msg_contents": "Tom,\n\nFirst off, I wouldn't use a VM if I could help it, however, sometimes you have to make compromises. With a 16 Gb machine running 64-bit Ubuntu and only PostgreSQL, I'd start by allocating 4 Gb to shared_buffers. That should leave more than enough room for the OS and file system cache. Then I'd begin testing by measuring response times of representative queries with significant amounts of data.\n\nAlso, what is the disk setup for the box? Filesystem? Can WAL files have their own disk? Is the workload OLTP or OLAP, or a mixture of both? There is more that goes into tuning a PG server for good performance than simply installing the software, setting a couple of GUCs and running it.\n\nBob\n\n--- On Thu, 6/10/10, Tom Wilcox <[email protected]> wrote:\n\n> From: Tom Wilcox <[email protected]>\n> Subject: Re: [PERFORM] requested shared memory size overflows size_t\n> To: \"Bob Lunney\" <[email protected]>\n> Cc: \"Robert Haas\" <[email protected]>, [email protected]\n> Date: Thursday, June 10, 2010, 10:45 AM\n> Thanks guys. I am currently\n> installing Pg64 onto a Ubuntu Server 64-bit installation\n> running as a VM in VirtualBox with 16GB of RAM accessible.\n> If what you say is true then what do you suggest I do to\n> configure my new setup to best use the available 16GB (96GB\n> and native install eventually if the test goes well) of RAM\n> on Linux.\n> \n> I was considering starting by using Enterprise DBs tuner to\n> see if that optimises things to a better quality..\n> \n> Tom\n> \n> On 10/06/2010 15:41, Bob Lunney wrote:\n> > True, plus there are the other issues of increased\n> checkpoint times and I/O, bgwriter tuning, etc.  It may\n> be better to let the OS cache the files and size\n> shared_buffers to a smaller value.\n> > \n> > Bob Lunney\n> > \n> > --- On Wed, 6/9/10, Robert Haas<[email protected]> \n> wrote:\n> > \n> >    \n> >> From: Robert Haas<[email protected]>\n> >> Subject: Re: [PERFORM] requested shared memory\n> size overflows size_t\n> >> To: \"Bob Lunney\"<[email protected]>\n> >> Cc: [email protected],\n> \"Tom Wilcox\"<[email protected]>\n> >> Date: Wednesday, June 9, 2010, 9:49 PM\n> >> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n> >> Lunney<[email protected]>\n> >> wrote:\n> >>      \n> >>> Your other option, of course, is a nice 64-bit\n> linux\n> >>>        \n> >> variant, which won't have this problem at all.\n> >> \n> >> Although, even there, I think I've heard that\n> after 10GB\n> >> you don't get\n> >> much benefit from raising it further.  Not\n> sure if\n> >> that's accurate or\n> >> not...\n> >> \n> >> -- Robert Haas\n> >> EnterpriseDB: http://www.enterprisedb.com\n> >> The Enterprise Postgres Company\n> >> \n> >>      \n> > \n> > \n> >    \n> \n> \n\n\n \n", "msg_date": "Thu, 10 Jun 2010 23:25:47 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": true, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "\nHi Bob,\n\nThanks a lot. 
Here's my best attempt to answer your questions:\n\nThe VM is setup with a virtual disk image dynamically expanding to fill \nan allocation of 300GB on a fast, local hard drive (avg read speed = \n778MB/s ).\nWAL files can have their own disk, but how significantly would this \naffect our performance?\nThe filesystem of the host OS is NTFS (Windows Server 2008 OS 64), the \nguest filesystem is Ext2 (Ubuntu 64).\nThe workload is OLAP (lots of large, complex queries on large tables run \nin sequence).\n\nIn addition, I have reconfigured my server to use more memory. Here's a \ndetailed blow by blow of how I reconfigured my system to get better \nperformance (for anyone who might be interested)...\n\nIn order to increase the shared memory on Ubuntu I edited the System V \nIPC values using sysctl:\n\nsysctl -w kernel.shmmax=16106127360*\n*sysctl -w kernel.shmall=2097152\n\nI had some fun with permissions as I somehow managed to change the \nowner of the postgresql.conf to root where it needed to be postgres, \nresulting in failure to start the service.. (Fixed with chown \npostgres:postgres ./data/postgresql.conf and chmod u=rwx ./data -R).\n\nI changed the following params in my configuration file..\n\ndefault_statistics_target=10000\nmaintenance_work_mem=512MB\nwork_mem=512MB\nshared_buffers=512MB\nwal_buffers=128MB\n\nWith this config, the following command took 6,400,000ms:\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n\nWith plan:\n\"Seq Scan on match_data (cost=0.00..1392900.78 rows=32237278 width=232) \n(actual time=0.379..464270.682 rows=27777961 loops=1)\"\n\"Total runtime: 6398238.890 ms\"\n\nWith these changes to the previous config, the same command took \n5,610,000ms:\n\nmaintenance_work_mem=4GB\nwork_mem=4GB\nshared_buffers=4GB\neffective_cache_size=4GB\nwal_buffers=1GB\n\nResulting plan:\n\n\"Seq Scan on match_data (cost=0.00..2340147.72 rows=30888572 width=232) \n(actual time=0.094..452793.430 rows=27777961 loops=1)\"\n\"Total runtime: 5614140.786 ms\"\n\nThen I performed these changes to the postgresql.conf file:\n\nmax_connections=3\neffective_cache_size=15GB\nmaintenance_work_mem=5GB\nshared_buffers=7000MB\nwork_mem=5GB\n\nAnd ran this query (for a quick look - can't afford the time for the \nprevious tests..):\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id \n< 100000;\n\nResult:\n\n\"Index Scan using match_data_pkey1 on match_data (cost=0.00..15662.17 \nrows=4490 width=232) (actual time=27.055..1908.027 rows=99999 loops=1)\"\n\" Index Cond: (match_data_id < 100000)\"\n\"Total runtime: 25909.372 ms\"\n\nI then ran EntrepriseDB's Tuner on my postgres install (for a dedicated \nmachine) and got the following settings and results:\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id \n< 100000;\n\n\"Index Scan using match_data_pkey1 on match_data (cost=0.00..13734.54 \nrows=4495 width=232) (actual time=0.348..2928.844 rows=99999 loops=1)\"\n\" Index Cond: (match_data_id < 100000)\"\n\"Total runtime: 1066580.293 ms\"\n\nFor now, I will go with the config using 7000MB shared_buffers. Any \nsuggestions on how I can further optimise this config for a single \nsession, 64-bit install utilising ALL of 96GB RAM. I will spend the next \nweek making the case for a native install of Linux, but first we need to \nbe 100% sure that is the only way to get the most out of Postgres on \nthis machine.\n\nThanks very much. 
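(One small aside for anyone repeating these tests: after each restart it is worth confirming which values the server actually picked up before timing anything. Something along these lines does the job:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                 'effective_cache_size', 'wal_buffers');
)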
I now feel I am at a position where I can really \nexplore and find the optimal configuration for my system, but would \nstill appreciate any suggestions.\n\nCheers,\nTom\n\nOn 11/06/2010 07:25, Bob Lunney wrote:\n> Tom,\n>\n> First off, I wouldn't use a VM if I could help it, however, sometimes you have to make compromises. With a 16 Gb machine running 64-bit Ubuntu and only PostgreSQL, I'd start by allocating 4 Gb to shared_buffers. That should leave more than enough room for the OS and file system cache. Then I'd begin testing by measuring response times of representative queries with significant amounts of data.\n>\n> Also, what is the disk setup for the box? Filesystem? Can WAL files have their own disk? Is the workload OLTP or OLAP, or a mixture of both? There is more that goes into tuning a PG server for good performance than simply installing the software, setting a couple of GUCs and running it.\n>\n> Bob\n>\n> --- On Thu, 6/10/10, Tom Wilcox <[email protected]> wrote:\n>\n> \n>> From: Tom Wilcox <[email protected]>\n>> Subject: Re: [PERFORM] requested shared memory size overflows size_t\n>> To: \"Bob Lunney\" <[email protected]>\n>> Cc: \"Robert Haas\" <[email protected]>, [email protected]\n>> Date: Thursday, June 10, 2010, 10:45 AM\n>> Thanks guys. I am currently\n>> installing Pg64 onto a Ubuntu Server 64-bit installation\n>> running as a VM in VirtualBox with 16GB of RAM accessible.\n>> If what you say is true then what do you suggest I do to\n>> configure my new setup to best use the available 16GB (96GB\n>> and native install eventually if the test goes well) of RAM\n>> on Linux.\n>>\n>> I was considering starting by using Enterprise DBs tuner to\n>> see if that optimises things to a better quality..\n>>\n>> Tom\n>>\n>> On 10/06/2010 15:41, Bob Lunney wrote:\n>> \n>>> True, plus there are the other issues of increased\n>>> \n>> checkpoint times and I/O, bgwriter tuning, etc. It may\n>> be better to let the OS cache the files and size\n>> shared_buffers to a smaller value.\n>> \n>>> Bob Lunney\n>>>\n>>> --- On Wed, 6/9/10, Robert Haas<[email protected]> \n>>> \n>> wrote:\n>> \n>>> \n>>> \n>>>> From: Robert Haas<[email protected]>\n>>>> Subject: Re: [PERFORM] requested shared memory\n>>>> \n>> size overflows size_t\n>> \n>>>> To: \"Bob Lunney\"<[email protected]>\n>>>> Cc: [email protected],\n>>>> \n>> \"Tom Wilcox\"<[email protected]>\n>> \n>>>> Date: Wednesday, June 9, 2010, 9:49 PM\n>>>> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n>>>> Lunney<[email protected]>\n>>>> wrote:\n>>>> \n>>>> \n>>>>> Your other option, of course, is a nice 64-bit\n>>>>> \n>> linux\n>> \n>>>>> \n>>>>> \n>>>> variant, which won't have this problem at all.\n>>>>\n>>>> Although, even there, I think I've heard that\n>>>> \n>> after 10GB\n>> \n>>>> you don't get\n>>>> much benefit from raising it further. 
Not\n>>>> \n>> sure if\n>> \n>>>> that's accurate or\n>>>> not...\n>>>>\n>>>> -- Robert Haas\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>> The Enterprise Postgres Company\n>>>>\n>>>> \n>>>> \n>>> \n>>> \n>> \n> \n> \n\n", "msg_date": "Mon, 14 Jun 2010 19:53:01 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "On Mon, Jun 14, 2010 at 2:53 PM, Tom Wilcox <[email protected]> wrote:\n> maintenance_work_mem=4GB\n> work_mem=4GB\n> shared_buffers=4GB\n> effective_cache_size=4GB\n> wal_buffers=1GB\n\nIt's pretty easy to drive your system into swap with such a large\nvalue for work_mem - you'd better monitor that carefully.\n\nThe default value for wal_buffers is 64kB. I can't imagine why you'd\nneed to increase that by four orders of magnitude. I'm not sure\nwhether it will cause you a problem or not, but you're allocating\nquite a lot of shared memory that way that you might not really need.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Mon, 14 Jun 2010 19:08:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Thanks. I will try with a more sensible value of wal_buffers.. I was \nhoping to keep more in memory and therefore reduce the frequency of disk \nIOs..\n\nAny suggestions for good monitoring software for linux?\n\nOn 15/06/2010 00:08, Robert Haas wrote:\n> On Mon, Jun 14, 2010 at 2:53 PM, Tom Wilcox<[email protected]> wrote:\n> \n>> maintenance_work_mem=4GB\n>> work_mem=4GB\n>> shared_buffers=4GB\n>> effective_cache_size=4GB\n>> wal_buffers=1GB\n>> \n> It's pretty easy to drive your system into swap with such a large\n> value for work_mem - you'd better monitor that carefully.\n>\n> The default value for wal_buffers is 64kB. I can't imagine why you'd\n> need to increase that by four orders of magnitude. I'm not sure\n> whether it will cause you a problem or not, but you're allocating\n> quite a lot of shared memory that way that you might not really need.\n>\n> \n\n", "msg_date": "Tue, 15 Jun 2010 00:21:35 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Tom\n\nI always prefer to choose apps based on business needs, then the OS based on\nthe needs for the app.\n\nCynically, I often feel that the best answer to \"we have a policy that says\nwe're only allowed to use operating system x\" is to ignore the policy ....\nthe kind of people ignorant enough to be that blinkered are usually not\ntech-savvy enough to notice when it gets flouted :-)\n\nMore seriously, is the policy \"Windows only on the metal\" or could you run\ne.g. VMware ESX server? I/O is the area that takes the biggest hit in\nvirtualization, and ESX server has far less overhead loss than either\nHyper-V (which I presume you are using) or VMWare Workstation for NT\n(kernels).\n\nIf it's a Windows-only policy, then perhaps you can run those traps in\nreverse, and switch to a Windows database, i.e. Microsoft SQL Server.\n\nCheers\nDave\n\nOn Mon, Jun 14, 2010 at 1:53 PM, Tom Wilcox <[email protected]> wrote:\n\n>\n> Hi Bob,\n>\n> Thanks a lot. 
Here's my best attempt to answer your questions:\n>\n> The VM is setup with a virtual disk image dynamically expanding to fill an\n> allocation of 300GB on a fast, local hard drive (avg read speed = 778MB/s ).\n> WAL files can have their own disk, but how significantly would this affect\n> our performance?\n> The filesystem of the host OS is NTFS (Windows Server 2008 OS 64), the\n> guest filesystem is Ext2 (Ubuntu 64).\n> The workload is OLAP (lots of large, complex queries on large tables run in\n> sequence).\n>\n> In addition, I have reconfigured my server to use more memory. Here's a\n> detailed blow by blow of how I reconfigured my system to get better\n> performance (for anyone who might be interested)...\n>\n> In order to increase the shared memory on Ubuntu I edited the System V IPC\n> values using sysctl:\n>\n> sysctl -w kernel.shmmax=16106127360*\n> *sysctl -w kernel.shmall=2097152\n>\n> I had some fun with permissions as I somehow managed to change the owner\n> of the postgresql.conf to root where it needed to be postgres, resulting in\n> failure to start the service.. (Fixed with chown postgres:postgres\n> ./data/postgresql.conf and chmod u=rwx ./data -R).\n>\n> I changed the following params in my configuration file..\n>\n> default_statistics_target=10000\n> maintenance_work_mem=512MB\n> work_mem=512MB\n> shared_buffers=512MB\n> wal_buffers=128MB\n>\n> With this config, the following command took 6,400,000ms:\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n>\n> With plan:\n> \"Seq Scan on match_data (cost=0.00..1392900.78 rows=32237278 width=232)\n> (actual time=0.379..464270.682 rows=27777961 loops=1)\"\n> \"Total runtime: 6398238.890 ms\"\n>\n> With these changes to the previous config, the same command took\n> 5,610,000ms:\n>\n> maintenance_work_mem=4GB\n> work_mem=4GB\n> shared_buffers=4GB\n> effective_cache_size=4GB\n> wal_buffers=1GB\n>\n> Resulting plan:\n>\n> \"Seq Scan on match_data (cost=0.00..2340147.72 rows=30888572 width=232)\n> (actual time=0.094..452793.430 rows=27777961 loops=1)\"\n> \"Total runtime: 5614140.786 ms\"\n>\n> Then I performed these changes to the postgresql.conf file:\n>\n> max_connections=3\n> effective_cache_size=15GB\n> maintenance_work_mem=5GB\n> shared_buffers=7000MB\n> work_mem=5GB\n>\n> And ran this query (for a quick look - can't afford the time for the\n> previous tests..):\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id <\n> 100000;\n>\n> Result:\n>\n> \"Index Scan using match_data_pkey1 on match_data (cost=0.00..15662.17\n> rows=4490 width=232) (actual time=27.055..1908.027 rows=99999 loops=1)\"\n> \" Index Cond: (match_data_id < 100000)\"\n> \"Total runtime: 25909.372 ms\"\n>\n> I then ran EntrepriseDB's Tuner on my postgres install (for a dedicated\n> machine) and got the following settings and results:\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id <\n> 100000;\n>\n> \"Index Scan using match_data_pkey1 on match_data (cost=0.00..13734.54\n> rows=4495 width=232) (actual time=0.348..2928.844 rows=99999 loops=1)\"\n> \" Index Cond: (match_data_id < 100000)\"\n> \"Total runtime: 1066580.293 ms\"\n>\n> For now, I will go with the config using 7000MB shared_buffers. Any\n> suggestions on how I can further optimise this config for a single session,\n> 64-bit install utilising ALL of 96GB RAM. 
I will spend the next week making\n> the case for a native install of Linux, but first we need to be 100% sure\n> that is the only way to get the most out of Postgres on this machine.\n>\n> Thanks very much. I now feel I am at a position where I can really explore\n> and find the optimal configuration for my system, but would still appreciate\n> any suggestions.\n>\n> Cheers,\n> Tom\n>\n>\n> On 11/06/2010 07:25, Bob Lunney wrote:\n>\n>> Tom,\n>>\n>> First off, I wouldn't use a VM if I could help it, however, sometimes you\n>> have to make compromises. With a 16 Gb machine running 64-bit Ubuntu and\n>> only PostgreSQL, I'd start by allocating 4 Gb to shared_buffers. That\n>> should leave more than enough room for the OS and file system cache. Then\n>> I'd begin testing by measuring response times of representative queries with\n>> significant amounts of data.\n>>\n>> Also, what is the disk setup for the box? Filesystem? Can WAL files have\n>> their own disk? Is the workload OLTP or OLAP, or a mixture of both? There\n>> is more that goes into tuning a PG server for good performance than simply\n>> installing the software, setting a couple of GUCs and running it.\n>>\n>> Bob\n>>\n>> --- On Thu, 6/10/10, Tom Wilcox <[email protected]> wrote:\n>>\n>>\n>>\n>>> From: Tom Wilcox <[email protected]>\n>>> Subject: Re: [PERFORM] requested shared memory size overflows size_t\n>>> To: \"Bob Lunney\" <[email protected]>\n>>> Cc: \"Robert Haas\" <[email protected]>,\n>>> [email protected]\n>>> Date: Thursday, June 10, 2010, 10:45 AM\n>>> Thanks guys. I am currently\n>>> installing Pg64 onto a Ubuntu Server 64-bit installation\n>>> running as a VM in VirtualBox with 16GB of RAM accessible.\n>>> If what you say is true then what do you suggest I do to\n>>> configure my new setup to best use the available 16GB (96GB\n>>> and native install eventually if the test goes well) of RAM\n>>> on Linux.\n>>>\n>>> I was considering starting by using Enterprise DBs tuner to\n>>> see if that optimises things to a better quality..\n>>>\n>>> Tom\n>>>\n>>> On 10/06/2010 15:41, Bob Lunney wrote:\n>>>\n>>>\n>>>> True, plus there are the other issues of increased\n>>>>\n>>>>\n>>> checkpoint times and I/O, bgwriter tuning, etc. It may\n>>> be better to let the OS cache the files and size\n>>> shared_buffers to a smaller value.\n>>>\n>>>\n>>>> Bob Lunney\n>>>>\n>>>> --- On Wed, 6/9/10, Robert Haas<[email protected]>\n>>>>\n>>> wrote:\n>>>\n>>>\n>>>>\n>>>>\n>>>>> From: Robert Haas<[email protected]>\n>>>>> Subject: Re: [PERFORM] requested shared memory\n>>>>>\n>>>>>\n>>>> size overflows size_t\n>>>\n>>>\n>>>> To: \"Bob Lunney\"<[email protected]>\n>>>>> Cc: [email protected],\n>>>>>\n>>>>>\n>>>> \"Tom Wilcox\"<[email protected]>\n>>>\n>>>\n>>>> Date: Wednesday, June 9, 2010, 9:49 PM\n>>>>> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n>>>>> Lunney<[email protected]>\n>>>>> wrote:\n>>>>>\n>>>>>\n>>>>>> Your other option, of course, is a nice 64-bit\n>>>>>>\n>>>>>>\n>>>>> linux\n>>>\n>>>\n>>>>\n>>>>>>\n>>>>> variant, which won't have this problem at all.\n>>>>>\n>>>>> Although, even there, I think I've heard that\n>>>>>\n>>>>>\n>>>> after 10GB\n>>>\n>>>\n>>>> you don't get\n>>>>> much benefit from raising it further. 
Not\n>>>>>\n>>>>>\n>>>> sure if\n>>>\n>>>\n>>>> that's accurate or\n>>>>> not...\n>>>>>\n>>>>> -- Robert Haas\n>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>> The Enterprise Postgres Company\n>>>>>\n>>>>>\n>>>>>\n>>>>\n>>>>\n>>>\n>>>\n>>\n>>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nTomI always prefer to choose apps based on business needs, then the OS \nbased on the needs for the app.\nCynically, I often feel that the best answer to \"we have a policy that says we're only allowed to use operating system x\" is to ignore the policy .... the kind of people ignorant enough to be that blinkered are usually not tech-savvy enough to notice when it gets flouted :-)\nMore seriously, is the policy \"Windows only on the metal\" or could you run e.g. VMware ESX server? I/O is the area that takes the biggest hit in virtualization, and ESX server has far less overhead loss than either Hyper-V (which I presume you are using) or VMWare Workstation for NT (kernels).\nIf it's a Windows-only policy, then perhaps you can run those traps in reverse, and switch to a Windows database, i.e. Microsoft SQL Server.CheersDaveOn Mon, Jun 14, 2010 at 1:53 PM, Tom Wilcox <[email protected]> wrote:\n\nHi Bob,\n\nThanks a lot. Here's my best attempt to answer your questions:\n\nThe VM is setup with a virtual disk image dynamically expanding to fill an allocation of 300GB on a fast, local hard drive (avg read speed = 778MB/s ).\nWAL files can have their own disk, but how significantly would this affect our performance?\nThe filesystem of the host OS is NTFS (Windows Server 2008 OS 64), the guest filesystem is Ext2 (Ubuntu 64).\nThe workload is OLAP (lots of large, complex queries on large tables run in sequence).\n\nIn addition, I have reconfigured my server to use more memory. Here's a detailed blow by blow of how I reconfigured my system to get better performance (for anyone who might be interested)...\n\nIn order to increase the shared memory on Ubuntu I edited the System V IPC values using sysctl:\n\nsysctl -w kernel.shmmax=16106127360*\n*sysctl -w kernel.shmall=2097152\n\nI had some fun with permissions as I somehow managed to change the owner  of the postgresql.conf to root where it needed to be postgres, resulting in failure to start the service.. 
(Fixed with chown postgres:postgres ./data/postgresql.conf and chmod u=rwx ./data -R).\n\nI changed the following params in my configuration file..\n\ndefault_statistics_target=10000\nmaintenance_work_mem=512MB\nwork_mem=512MB\nshared_buffers=512MB\nwal_buffers=128MB\n\nWith this config, the following command took  6,400,000ms:\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n\nWith plan:\n\"Seq Scan on match_data  (cost=0.00..1392900.78 rows=32237278 width=232) (actual time=0.379..464270.682 rows=27777961 loops=1)\"\n\"Total runtime: 6398238.890 ms\"\n\nWith these changes to the previous config, the same command took  5,610,000ms:\n\nmaintenance_work_mem=4GB\nwork_mem=4GB\nshared_buffers=4GB\neffective_cache_size=4GB\nwal_buffers=1GB\n\nResulting plan:\n\n\"Seq Scan on match_data  (cost=0.00..2340147.72 rows=30888572 width=232) (actual time=0.094..452793.430 rows=27777961 loops=1)\"\n\"Total runtime: 5614140.786 ms\"\n\nThen I performed these changes to the postgresql.conf file:\n\nmax_connections=3\neffective_cache_size=15GB\nmaintenance_work_mem=5GB\nshared_buffers=7000MB\nwork_mem=5GB\n\nAnd ran this query (for a quick look - can't afford the time for the previous tests..):\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id < 100000;\n\nResult:\n\n\"Index Scan using match_data_pkey1 on match_data  (cost=0.00..15662.17 rows=4490 width=232) (actual time=27.055..1908.027 rows=99999 loops=1)\"\n\"  Index Cond: (match_data_id < 100000)\"\n\"Total runtime: 25909.372 ms\"\n\nI then ran EntrepriseDB's Tuner on my postgres install (for a dedicated machine) and got the following settings and results:\n\nEXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id < 100000;\n\n\"Index Scan using match_data_pkey1 on match_data  (cost=0.00..13734.54 rows=4495 width=232) (actual time=0.348..2928.844 rows=99999 loops=1)\"\n\"  Index Cond: (match_data_id < 100000)\"\n\"Total runtime: 1066580.293 ms\"\n\nFor now, I will go with the config using 7000MB shared_buffers. Any suggestions on how I can further optimise this config for a single session, 64-bit install utilising ALL of 96GB RAM. I will spend the next week making the case for a native install of Linux, but first we need to be 100% sure that is the only way to get the most out of Postgres on this machine.\n\nThanks very much. I now feel I am at a position where I can really explore and find the optimal configuration for my system, but would still appreciate any suggestions.\n\nCheers,\nTom\n\nOn 11/06/2010 07:25, Bob Lunney wrote:\n\nTom,\n\nFirst off, I wouldn't use a VM if I could help it, however, sometimes you have to make compromises.  With a 16 Gb machine running 64-bit Ubuntu and only PostgreSQL, I'd start by allocating 4 Gb to shared_buffers.  That should leave more than enough room for the OS and file system cache.  Then I'd begin testing by measuring response times of representative queries with significant amounts of data.\n\nAlso, what is the disk setup for the box?  Filesystem?  Can WAL files have their own disk?  Is the workload OLTP or OLAP, or a mixture of both?  
There is more that goes into tuning a PG server for good performance than simply installing the software, setting a couple of GUCs and running it.\n\nBob\n\n--- On Thu, 6/10/10, Tom Wilcox <[email protected]> wrote:\n\n  \n\nFrom: Tom Wilcox <[email protected]>\nSubject: Re: [PERFORM] requested shared memory size overflows size_t\nTo: \"Bob Lunney\" <[email protected]>\nCc: \"Robert Haas\" <[email protected]>, [email protected]\n\nDate: Thursday, June 10, 2010, 10:45 AM\nThanks guys. I am currently\ninstalling Pg64 onto a Ubuntu Server 64-bit installation\nrunning as a VM in VirtualBox with 16GB of RAM accessible.\nIf what you say is true then what do you suggest I do to\nconfigure my new setup to best use the available 16GB (96GB\nand native install eventually if the test goes well) of RAM\non Linux.\n\nI was considering starting by using Enterprise DBs tuner to\nsee if that optimises things to a better quality..\n\nTom\n\nOn 10/06/2010 15:41, Bob Lunney wrote:\n    \n\nTrue, plus there are the other issues of increased\n      \n\ncheckpoint times and I/O, bgwriter tuning, etc.  It may\nbe better to let the OS cache the files and size\nshared_buffers to a smaller value.\n    \n\nBob Lunney\n\n--- On Wed, 6/9/10, Robert Haas<[email protected]>       \n\nwrote:\n    \n\n          \n\nFrom: Robert Haas<[email protected]>\nSubject: Re: [PERFORM] requested shared memory\n        \n\nsize overflows size_t\n    \n\n\nTo: \"Bob Lunney\"<[email protected]>\nCc: [email protected],\n        \n\n\"Tom Wilcox\"<[email protected]>\n    \n\n\nDate: Wednesday, June 9, 2010, 9:49 PM\nOn Wed, Jun 2, 2010 at 9:26 PM, Bob\nLunney<[email protected]>\nwrote:\n              \n\nYour other option, of course, is a nice 64-bit\n          \n\nlinux\n    \n\n\n                  \n\nvariant, which won't have this problem at all.\n\nAlthough, even there, I think I've heard that\n        \n\nafter 10GB\n    \n\n\nyou don't get\nmuch benefit from raising it further.  Not\n        \n\nsure if\n    \n\n\nthat's accurate or\nnot...\n\n-- Robert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n\n              \n\n          \n\n    \n\n        \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 14 Jun 2010 19:26:51 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Hi Dave,\n\nI am definitely able to switch OS if it will get the most out of \nPostgres. So it is definitely a case of choosing the OS on the needs if \nthe app providing it is well justified.\n\nCurrently, we are running Ubuntu Server 64-bit in a VirtualBox VM.\n\nCheers,\nTom\n\n\nDave Crooke wrote:\n> Tom\n>\n> I always prefer to choose apps based on business needs, then the OS \n> based on the needs for the app.\n>\n> Cynically, I often feel that the best answer to \"we have a policy that \n> says we're only allowed to use operating system x\" is to ignore the \n> policy .... the kind of people ignorant enough to be that blinkered \n> are usually not tech-savvy enough to notice when it gets flouted :-)\n>\n> More seriously, is the policy \"Windows only on the metal\" or could you \n> run e.g. VMware ESX server? 
I/O is the area that takes the biggest hit \n> in virtualization, and ESX server has far less overhead loss than \n> either Hyper-V (which I presume you are using) or VMWare Workstation \n> for NT (kernels).\n>\n> If it's a Windows-only policy, then perhaps you can run those traps in \n> reverse, and switch to a Windows database, i.e. Microsoft SQL Server.\n>\n> Cheers\n> Dave\n>\n> On Mon, Jun 14, 2010 at 1:53 PM, Tom Wilcox <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> Hi Bob,\n>\n> Thanks a lot. Here's my best attempt to answer your questions:\n>\n> The VM is setup with a virtual disk image dynamically expanding to\n> fill an allocation of 300GB on a fast, local hard drive (avg read\n> speed = 778MB/s ).\n> WAL files can have their own disk, but how significantly would\n> this affect our performance?\n> The filesystem of the host OS is NTFS (Windows Server 2008 OS 64),\n> the guest filesystem is Ext2 (Ubuntu 64).\n> The workload is OLAP (lots of large, complex queries on large\n> tables run in sequence).\n>\n> In addition, I have reconfigured my server to use more memory.\n> Here's a detailed blow by blow of how I reconfigured my system to\n> get better performance (for anyone who might be interested)...\n>\n> In order to increase the shared memory on Ubuntu I edited the\n> System V IPC values using sysctl:\n>\n> sysctl -w kernel.shmmax=16106127360*\n> *sysctl -w kernel.shmall=2097152\n>\n> I had some fun with permissions as I somehow managed to change the\n> owner of the postgresql.conf to root where it needed to be\n> postgres, resulting in failure to start the service.. (Fixed with\n> chown postgres:postgres ./data/postgresql.conf and chmod u=rwx\n> ./data -R).\n>\n> I changed the following params in my configuration file..\n>\n> default_statistics_target=10000\n> maintenance_work_mem=512MB\n> work_mem=512MB\n> shared_buffers=512MB\n> wal_buffers=128MB\n>\n> With this config, the following command took 6,400,000ms:\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n>\n> With plan:\n> \"Seq Scan on match_data (cost=0.00..1392900.78 rows=32237278\n> width=232) (actual time=0.379..464270.682 rows=27777961 loops=1)\"\n> \"Total runtime: 6398238.890 ms\"\n>\n> With these changes to the previous config, the same command took\n> 5,610,000ms:\n>\n> maintenance_work_mem=4GB\n> work_mem=4GB\n> shared_buffers=4GB\n> effective_cache_size=4GB\n> wal_buffers=1GB\n>\n> Resulting plan:\n>\n> \"Seq Scan on match_data (cost=0.00..2340147.72 rows=30888572\n> width=232) (actual time=0.094..452793.430 rows=27777961 loops=1)\"\n> \"Total runtime: 5614140.786 ms\"\n>\n> Then I performed these changes to the postgresql.conf file:\n>\n> max_connections=3\n> effective_cache_size=15GB\n> maintenance_work_mem=5GB\n> shared_buffers=7000MB\n> work_mem=5GB\n>\n> And ran this query (for a quick look - can't afford the time for\n> the previous tests..):\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n> match_data_id < 100000;\n>\n> Result:\n>\n> \"Index Scan using match_data_pkey1 on match_data\n> (cost=0.00..15662.17 rows=4490 width=232) (actual\n> time=27.055..1908.027 rows=99999 loops=1)\"\n> \" Index Cond: (match_data_id < 100000)\"\n> \"Total runtime: 25909.372 ms\"\n>\n> I then ran EntrepriseDB's Tuner on my postgres install (for a\n> dedicated machine) and got the following settings and results:\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n> match_data_id < 100000;\n>\n> \"Index Scan using match_data_pkey1 on match_data\n> 
(cost=0.00..13734.54 rows=4495 width=232) (actual\n> time=0.348..2928.844 rows=99999 loops=1)\"\n> \" Index Cond: (match_data_id < 100000)\"\n> \"Total runtime: 1066580.293 ms\"\n>\n> For now, I will go with the config using 7000MB shared_buffers.\n> Any suggestions on how I can further optimise this config for a\n> single session, 64-bit install utilising ALL of 96GB RAM. I will\n> spend the next week making the case for a native install of Linux,\n> but first we need to be 100% sure that is the only way to get the\n> most out of Postgres on this machine.\n>\n> Thanks very much. I now feel I am at a position where I can really\n> explore and find the optimal configuration for my system, but\n> would still appreciate any suggestions.\n>\n> Cheers,\n> Tom\n>\n>\n> On 11/06/2010 07:25, Bob Lunney wrote:\n>\n> Tom,\n>\n> First off, I wouldn't use a VM if I could help it, however,\n> sometimes you have to make compromises. With a 16 Gb machine\n> running 64-bit Ubuntu and only PostgreSQL, I'd start by\n> allocating 4 Gb to shared_buffers. That should leave more\n> than enough room for the OS and file system cache. Then I'd\n> begin testing by measuring response times of representative\n> queries with significant amounts of data.\n>\n> Also, what is the disk setup for the box? Filesystem? Can\n> WAL files have their own disk? Is the workload OLTP or OLAP,\n> or a mixture of both? There is more that goes into tuning a\n> PG server for good performance than simply installing the\n> software, setting a couple of GUCs and running it.\n>\n> Bob\n>\n> --- On Thu, 6/10/10, Tom Wilcox <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> \n>\n> From: Tom Wilcox <[email protected]\n> <mailto:[email protected]>>\n> Subject: Re: [PERFORM] requested shared memory size\n> overflows size_t\n> To: \"Bob Lunney\" <[email protected]\n> <mailto:[email protected]>>\n> Cc: \"Robert Haas\" <[email protected]\n> <mailto:[email protected]>>,\n> [email protected]\n> <mailto:[email protected]>\n> Date: Thursday, June 10, 2010, 10:45 AM\n> Thanks guys. I am currently\n> installing Pg64 onto a Ubuntu Server 64-bit installation\n> running as a VM in VirtualBox with 16GB of RAM accessible.\n> If what you say is true then what do you suggest I do to\n> configure my new setup to best use the available 16GB (96GB\n> and native install eventually if the test goes well) of RAM\n> on Linux.\n>\n> I was considering starting by using Enterprise DBs tuner to\n> see if that optimises things to a better quality..\n>\n> Tom\n>\n> On 10/06/2010 15:41, Bob Lunney wrote:\n> \n>\n> True, plus there are the other issues of increased\n> \n>\n> checkpoint times and I/O, bgwriter tuning, etc. 
It may\n> be better to let the OS cache the files and size\n> shared_buffers to a smaller value.\n> \n>\n> Bob Lunney\n>\n> --- On Wed, 6/9/10, Robert Haas<[email protected]\n> <mailto:[email protected]>> \n>\n> wrote:\n> \n>\n> \n>\n> From: Robert Haas<[email protected]\n> <mailto:[email protected]>>\n> Subject: Re: [PERFORM] requested shared memory\n> \n>\n> size overflows size_t\n> \n>\n> To: \"Bob Lunney\"<[email protected]\n> <mailto:[email protected]>>\n> Cc: [email protected]\n> <mailto:[email protected]>,\n> \n>\n> \"Tom Wilcox\"<[email protected]\n> <mailto:[email protected]>>\n> \n>\n> Date: Wednesday, June 9, 2010, 9:49 PM\n> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n> Lunney<[email protected]\n> <mailto:[email protected]>>\n> wrote:\n> \n>\n> Your other option, of course, is a nice 64-bit\n> \n>\n> linux\n> \n>\n> \n>\n> variant, which won't have this problem at all.\n>\n> Although, even there, I think I've heard that\n> \n>\n> after 10GB\n> \n>\n> you don't get\n> much benefit from raising it further. Not\n> \n>\n> sure if\n> \n>\n> that's accurate or\n> not...\n>\n> -- Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n> \n>\n> \n>\n> \n>\n> \n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n", "msg_date": "Tue, 15 Jun 2010 01:41:01 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "With that clarification, I stand squarely behind what others are saying ...\nif performance is important to you, then you should always run databases on\ndedicated hardware, with the OS running on bare metal with no\nvirtualization. VirtualBox has even more I/O losses than Hyper-V. It's\nsimply not designed for this, and you're giving away a ton of performance.\n\nIf nothing else, my confusion should indicate to you how unconventional and\npoorly performing this virtualizaed setup is ... I simply assumed that the\nonly plausible reason you were piggybacking on virtualization on Windows was\na mandated lack of alternative options.\n\nReload the hardware with an OS which PGSQL supports well, and get rid of the\nVirtualBox and Windows layers. If you have hardware that only Windows\nsupports well, then you may need to make some hardware changes.\n\nI haven't said anything about which Unix-like OS .... you may find people\narguing passionately for BSD vs. Linux .... however, the difference between\nthese is negligible compared to \"virtualized vs. real system\", and at this\npoint considerations like support base, ease of use and familiarity also\ncome into play.\n\nIMHO Ubuntu would be a fine choice, and PGSQL is a \"first-class\" supported\npackage from the distributor ... 
however, at customer sites, I've typically\nused Red Hat AS because they have a corporate preference for it, even though\nit is less convenient to install and manage.\n\nOn Mon, Jun 14, 2010 at 7:41 PM, Tom Wilcox <[email protected]> wrote:\n\n> Hi Dave,\n>\n> I am definitely able to switch OS if it will get the most out of Postgres.\n> So it is definitely a case of choosing the OS on the needs if the app\n> providing it is well justified.\n>\n> Currently, we are running Ubuntu Server 64-bit in a VirtualBox VM.\n>\n> Cheers,\n> Tom\n>\n>\n> Dave Crooke wrote:\n>\n>> Tom\n>>\n>> I always prefer to choose apps based on business needs, then the OS based\n>> on the needs for the app.\n>>\n>> Cynically, I often feel that the best answer to \"we have a policy that\n>> says we're only allowed to use operating system x\" is to ignore the policy\n>> .... the kind of people ignorant enough to be that blinkered are usually not\n>> tech-savvy enough to notice when it gets flouted :-)\n>>\n>> More seriously, is the policy \"Windows only on the metal\" or could you run\n>> e.g. VMware ESX server? I/O is the area that takes the biggest hit in\n>> virtualization, and ESX server has far less overhead loss than either\n>> Hyper-V (which I presume you are using) or VMWare Workstation for NT\n>> (kernels).\n>>\n>> If it's a Windows-only policy, then perhaps you can run those traps in\n>> reverse, and switch to a Windows database, i.e. Microsoft SQL Server.\n>>\n>> Cheers\n>> Dave\n>>\n>> On Mon, Jun 14, 2010 at 1:53 PM, Tom Wilcox <[email protected] <mailto:\n>> [email protected]>> wrote:\n>>\n>>\n>> Hi Bob,\n>>\n>> Thanks a lot. Here's my best attempt to answer your questions:\n>>\n>> The VM is setup with a virtual disk image dynamically expanding to\n>> fill an allocation of 300GB on a fast, local hard drive (avg read\n>> speed = 778MB/s ).\n>> WAL files can have their own disk, but how significantly would\n>> this affect our performance?\n>> The filesystem of the host OS is NTFS (Windows Server 2008 OS 64),\n>> the guest filesystem is Ext2 (Ubuntu 64).\n>> The workload is OLAP (lots of large, complex queries on large\n>> tables run in sequence).\n>>\n>> In addition, I have reconfigured my server to use more memory.\n>> Here's a detailed blow by blow of how I reconfigured my system to\n>> get better performance (for anyone who might be interested)...\n>>\n>> In order to increase the shared memory on Ubuntu I edited the\n>> System V IPC values using sysctl:\n>>\n>> sysctl -w kernel.shmmax=16106127360*\n>> *sysctl -w kernel.shmall=2097152\n>>\n>> I had some fun with permissions as I somehow managed to change the\n>> owner of the postgresql.conf to root where it needed to be\n>> postgres, resulting in failure to start the service.. 
(Fixed with\n>> chown postgres:postgres ./data/postgresql.conf and chmod u=rwx\n>> ./data -R).\n>>\n>> I changed the following params in my configuration file..\n>>\n>> default_statistics_target=10000\n>> maintenance_work_mem=512MB\n>> work_mem=512MB\n>> shared_buffers=512MB\n>> wal_buffers=128MB\n>>\n>> With this config, the following command took 6,400,000ms:\n>>\n>> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n>>\n>> With plan:\n>> \"Seq Scan on match_data (cost=0.00..1392900.78 rows=32237278\n>> width=232) (actual time=0.379..464270.682 rows=27777961 loops=1)\"\n>> \"Total runtime: 6398238.890 ms\"\n>>\n>> With these changes to the previous config, the same command took\n>> 5,610,000ms:\n>>\n>> maintenance_work_mem=4GB\n>> work_mem=4GB\n>> shared_buffers=4GB\n>> effective_cache_size=4GB\n>> wal_buffers=1GB\n>>\n>> Resulting plan:\n>>\n>> \"Seq Scan on match_data (cost=0.00..2340147.72 rows=30888572\n>> width=232) (actual time=0.094..452793.430 rows=27777961 loops=1)\"\n>> \"Total runtime: 5614140.786 ms\"\n>>\n>> Then I performed these changes to the postgresql.conf file:\n>>\n>> max_connections=3\n>> effective_cache_size=15GB\n>> maintenance_work_mem=5GB\n>> shared_buffers=7000MB\n>> work_mem=5GB\n>>\n>> And ran this query (for a quick look - can't afford the time for\n>> the previous tests..):\n>>\n>> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n>> match_data_id < 100000;\n>>\n>> Result:\n>>\n>> \"Index Scan using match_data_pkey1 on match_data\n>> (cost=0.00..15662.17 rows=4490 width=232) (actual\n>> time=27.055..1908.027 rows=99999 loops=1)\"\n>> \" Index Cond: (match_data_id < 100000)\"\n>> \"Total runtime: 25909.372 ms\"\n>>\n>> I then ran EntrepriseDB's Tuner on my postgres install (for a\n>> dedicated machine) and got the following settings and results:\n>>\n>> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n>> match_data_id < 100000;\n>>\n>> \"Index Scan using match_data_pkey1 on match_data\n>> (cost=0.00..13734.54 rows=4495 width=232) (actual\n>> time=0.348..2928.844 rows=99999 loops=1)\"\n>> \" Index Cond: (match_data_id < 100000)\"\n>> \"Total runtime: 1066580.293 ms\"\n>>\n>> For now, I will go with the config using 7000MB shared_buffers.\n>> Any suggestions on how I can further optimise this config for a\n>> single session, 64-bit install utilising ALL of 96GB RAM. I will\n>> spend the next week making the case for a native install of Linux,\n>> but first we need to be 100% sure that is the only way to get the\n>> most out of Postgres on this machine.\n>>\n>> Thanks very much. I now feel I am at a position where I can really\n>> explore and find the optimal configuration for my system, but\n>> would still appreciate any suggestions.\n>>\n>> Cheers,\n>> Tom\n>>\n>>\n>> On 11/06/2010 07:25, Bob Lunney wrote:\n>>\n>> Tom,\n>>\n>> First off, I wouldn't use a VM if I could help it, however,\n>> sometimes you have to make compromises. With a 16 Gb machine\n>> running 64-bit Ubuntu and only PostgreSQL, I'd start by\n>> allocating 4 Gb to shared_buffers. That should leave more\n>> than enough room for the OS and file system cache. Then I'd\n>> begin testing by measuring response times of representative\n>> queries with significant amounts of data.\n>>\n>> Also, what is the disk setup for the box? Filesystem? Can\n>> WAL files have their own disk? Is the workload OLTP or OLAP,\n>> or a mixture of both? 
There is more that goes into tuning a\n>> PG server for good performance than simply installing the\n>> software, setting a couple of GUCs and running it.\n>>\n>> Bob\n>>\n>> --- On Thu, 6/10/10, Tom Wilcox <[email protected]\n>> <mailto:[email protected]>> wrote:\n>>\n>>\n>> From: Tom Wilcox <[email protected]\n>> <mailto:[email protected]>>\n>>\n>> Subject: Re: [PERFORM] requested shared memory size\n>> overflows size_t\n>> To: \"Bob Lunney\" <[email protected]\n>> <mailto:[email protected]>>\n>>\n>> Cc: \"Robert Haas\" <[email protected]\n>> <mailto:[email protected]>>,\n>> [email protected]\n>> <mailto:[email protected]>\n>>\n>> Date: Thursday, June 10, 2010, 10:45 AM\n>> Thanks guys. I am currently\n>> installing Pg64 onto a Ubuntu Server 64-bit installation\n>> running as a VM in VirtualBox with 16GB of RAM accessible.\n>> If what you say is true then what do you suggest I do to\n>> configure my new setup to best use the available 16GB (96GB\n>> and native install eventually if the test goes well) of RAM\n>> on Linux.\n>>\n>> I was considering starting by using Enterprise DBs tuner to\n>> see if that optimises things to a better quality..\n>>\n>> Tom\n>>\n>> On 10/06/2010 15:41, Bob Lunney wrote:\n>>\n>> True, plus there are the other issues of increased\n>>\n>> checkpoint times and I/O, bgwriter tuning, etc. It may\n>> be better to let the OS cache the files and size\n>> shared_buffers to a smaller value.\n>>\n>> Bob Lunney\n>>\n>> --- On Wed, 6/9/10, Robert Haas<[email protected]\n>> <mailto:[email protected]>>\n>>\n>> wrote:\n>>\n>>\n>> From: Robert Haas<[email protected]\n>> <mailto:[email protected]>>\n>>\n>> Subject: Re: [PERFORM] requested shared memory\n>>\n>> size overflows size_t\n>>\n>> To: \"Bob Lunney\"<[email protected]\n>> <mailto:[email protected]>>\n>>\n>> Cc: [email protected]\n>> <mailto:[email protected]>,\n>>\n>>\n>> \"Tom Wilcox\"<[email protected]\n>> <mailto:[email protected]>>\n>>\n>>\n>> Date: Wednesday, June 9, 2010, 9:49 PM\n>> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n>> Lunney<[email protected]\n>> <mailto:[email protected]>>\n>>\n>> wrote:\n>>\n>> Your other option, of course, is a nice 64-bit\n>>\n>> linux\n>>\n>>\n>> variant, which won't have this problem at all.\n>>\n>> Although, even there, I think I've heard that\n>>\n>> after 10GB\n>>\n>> you don't get\n>> much benefit from raising it further. Not\n>>\n>> sure if\n>>\n>> that's accurate or\n>> not...\n>>\n>> -- Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise Postgres Company\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> -- Sent via pgsql-performance mailing list\n>> ([email protected]\n>> <mailto:[email protected]>)\n>>\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>>\n>\n\nWith that clarification, I stand squarely behind what others are saying ... if performance is important to you, then you should always run databases on dedicated hardware, with the OS running on bare metal with no virtualization. VirtualBox has even more I/O losses than Hyper-V. It's simply not designed for this, and you're giving away a ton of performance. \nIf nothing else, my confusion should indicate to you how unconventional and poorly performing this virtualizaed setup is ... I simply assumed that the only plausible reason you were piggybacking on virtualization on Windows was a mandated lack of alternative options.\nReload the hardware with an OS which PGSQL supports well, and get rid of\n the VirtualBox and Windows layers. 
If you have hardware that only Windows supports well, then you may need to make some hardware changes.I haven't said anything about which Unix-like OS .... you may find people arguing passionately for BSD vs. Linux .... however, the difference between these is negligible compared to \"virtualized vs. real system\", and at this point considerations like support base, ease of use and familiarity also come into play. \nIMHO Ubuntu would be a fine choice, and PGSQL is a \"first-class\" supported package from the distributor ... however, at customer sites, I've typically used Red Hat AS because they have a corporate preference for it, even though it is less convenient to install and manage. \nOn Mon, Jun 14, 2010 at 7:41 PM, Tom Wilcox <[email protected]> wrote:\nHi Dave,\n\nI am definitely able to switch OS if it will get the most out of Postgres. So it is definitely a case of choosing the OS on the needs if the app providing it is well justified.\n\nCurrently, we are running Ubuntu Server 64-bit in a VirtualBox VM.\n\nCheers,\nTom\n\n\nDave Crooke wrote:\n\nTom\n\nI always prefer to choose apps based on business needs, then the OS based on the needs for the app.\n\nCynically, I often feel that the best answer to \"we have a policy that says we're only allowed to use operating system x\" is to ignore the policy .... the kind of people ignorant enough to be that blinkered are usually not tech-savvy enough to notice when it gets flouted :-)\n\nMore seriously, is the policy \"Windows only on the metal\" or could you run e.g. VMware ESX server? I/O is the area that takes the biggest hit in virtualization, and ESX server has far less overhead loss than either Hyper-V (which I presume you are using) or VMWare Workstation for NT (kernels).\n\nIf it's a Windows-only policy, then perhaps you can run those traps in reverse, and switch to a Windows database, i.e. Microsoft SQL Server.\n\nCheers\nDave\n\nOn Mon, Jun 14, 2010 at 1:53 PM, Tom Wilcox <[email protected] <mailto:[email protected]>> wrote:\n\n\n    Hi Bob,\n\n    Thanks a lot. Here's my best attempt to answer your questions:\n\n    The VM is setup with a virtual disk image dynamically expanding to\n    fill an allocation of 300GB on a fast, local hard drive (avg read\n    speed = 778MB/s ).\n    WAL files can have their own disk, but how significantly would\n    this affect our performance?\n    The filesystem of the host OS is NTFS (Windows Server 2008 OS 64),\n    the guest filesystem is Ext2 (Ubuntu 64).\n    The workload is OLAP (lots of large, complex queries on large\n    tables run in sequence).\n\n    In addition, I have reconfigured my server to use more memory.\n    Here's a detailed blow by blow of how I reconfigured my system to\n    get better performance (for anyone who might be interested)...\n\n    In order to increase the shared memory on Ubuntu I edited the\n    System V IPC values using sysctl:\n\n    sysctl -w kernel.shmmax=16106127360*\n    *sysctl -w kernel.shmall=2097152\n\n    I had some fun with permissions as I somehow managed to change the\n    owner  of the postgresql.conf to root where it needed to be\n    postgres, resulting in failure to start the service.. 
(Fixed with\n    chown postgres:postgres ./data/postgresql.conf and chmod u=rwx\n    ./data -R).\n\n    I changed the following params in my configuration file..\n\n    default_statistics_target=10000\n    maintenance_work_mem=512MB\n    work_mem=512MB\n    shared_buffers=512MB\n    wal_buffers=128MB\n\n    With this config, the following command took  6,400,000ms:\n\n    EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n\n    With plan:\n    \"Seq Scan on match_data  (cost=0.00..1392900.78 rows=32237278\n    width=232) (actual time=0.379..464270.682 rows=27777961 loops=1)\"\n    \"Total runtime: 6398238.890 ms\"\n\n    With these changes to the previous config, the same command took\n     5,610,000ms:\n\n    maintenance_work_mem=4GB\n    work_mem=4GB\n    shared_buffers=4GB\n    effective_cache_size=4GB\n    wal_buffers=1GB\n\n    Resulting plan:\n\n    \"Seq Scan on match_data  (cost=0.00..2340147.72 rows=30888572\n    width=232) (actual time=0.094..452793.430 rows=27777961 loops=1)\"\n    \"Total runtime: 5614140.786 ms\"\n\n    Then I performed these changes to the postgresql.conf file:\n\n    max_connections=3\n    effective_cache_size=15GB\n    maintenance_work_mem=5GB\n    shared_buffers=7000MB\n    work_mem=5GB\n\n    And ran this query (for a quick look - can't afford the time for\n    the previous tests..):\n\n    EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n    match_data_id < 100000;\n\n    Result:\n\n    \"Index Scan using match_data_pkey1 on match_data\n     (cost=0.00..15662.17 rows=4490 width=232) (actual\n    time=27.055..1908.027 rows=99999 loops=1)\"\n    \"  Index Cond: (match_data_id < 100000)\"\n    \"Total runtime: 25909.372 ms\"\n\n    I then ran EntrepriseDB's Tuner on my postgres install (for a\n    dedicated machine) and got the following settings and results:\n\n    EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n    match_data_id < 100000;\n\n    \"Index Scan using match_data_pkey1 on match_data\n     (cost=0.00..13734.54 rows=4495 width=232) (actual\n    time=0.348..2928.844 rows=99999 loops=1)\"\n    \"  Index Cond: (match_data_id < 100000)\"\n    \"Total runtime: 1066580.293 ms\"\n\n    For now, I will go with the config using 7000MB shared_buffers.\n    Any suggestions on how I can further optimise this config for a\n    single session, 64-bit install utilising ALL of 96GB RAM. I will\n    spend the next week making the case for a native install of Linux,\n    but first we need to be 100% sure that is the only way to get the\n    most out of Postgres on this machine.\n\n    Thanks very much. I now feel I am at a position where I can really\n    explore and find the optimal configuration for my system, but\n    would still appreciate any suggestions.\n\n    Cheers,\n    Tom\n\n\n    On 11/06/2010 07:25, Bob Lunney wrote:\n\n        Tom,\n\n        First off, I wouldn't use a VM if I could help it, however,\n        sometimes you have to make compromises.  With a 16 Gb machine\n        running 64-bit Ubuntu and only PostgreSQL, I'd start by\n        allocating 4 Gb to shared_buffers.  That should leave more\n        than enough room for the OS and file system cache.  Then I'd\n        begin testing by measuring response times of representative\n        queries with significant amounts of data.\n\n        Also, what is the disk setup for the box?  Filesystem?  Can\n        WAL files have their own disk?  Is the workload OLTP or OLAP,\n        or a mixture of both?  
There is more that goes into tuning a\n        PG server for good performance than simply installing the\n        software, setting a couple of GUCs and running it.\n\n        Bob\n\n        --- On Thu, 6/10/10, Tom Wilcox <[email protected]\n        <mailto:[email protected]>> wrote:\n\n         \n            From: Tom Wilcox <[email protected]\n            <mailto:[email protected]>>\n            Subject: Re: [PERFORM] requested shared memory size\n            overflows size_t\n            To: \"Bob Lunney\" <[email protected]\n            <mailto:[email protected]>>\n            Cc: \"Robert Haas\" <[email protected]\n            <mailto:[email protected]>>,\n            [email protected]\n            <mailto:[email protected]>\n            Date: Thursday, June 10, 2010, 10:45 AM\n            Thanks guys. I am currently\n            installing Pg64 onto a Ubuntu Server 64-bit installation\n            running as a VM in VirtualBox with 16GB of RAM accessible.\n            If what you say is true then what do you suggest I do to\n            configure my new setup to best use the available 16GB (96GB\n            and native install eventually if the test goes well) of RAM\n            on Linux.\n\n            I was considering starting by using Enterprise DBs tuner to\n            see if that optimises things to a better quality..\n\n            Tom\n\n            On 10/06/2010 15:41, Bob Lunney wrote:\n               \n                True, plus there are the other issues of increased\n                     \n            checkpoint times and I/O, bgwriter tuning, etc.  It may\n            be better to let the OS cache the files and size\n            shared_buffers to a smaller value.\n               \n                Bob Lunney\n\n                --- On Wed, 6/9/10, Robert Haas<[email protected]\n                <mailto:[email protected]>>      \n            wrote:\n               \n                         \n                    From: Robert Haas<[email protected]\n                    <mailto:[email protected]>>\n                    Subject: Re: [PERFORM] requested shared memory\n                           \n            size overflows size_t\n               \n                    To: \"Bob Lunney\"<[email protected]\n                    <mailto:[email protected]>>\n                    Cc: [email protected]\n                    <mailto:[email protected]>,\n                           \n            \"Tom Wilcox\"<[email protected]\n            <mailto:[email protected]>>\n               \n                    Date: Wednesday, June 9, 2010, 9:49 PM\n                    On Wed, Jun 2, 2010 at 9:26 PM, Bob\n                    Lunney<[email protected]\n                    <mailto:[email protected]>>\n                    wrote:\n                                 \n                        Your other option, of course, is a nice 64-bit\n                                 \n            linux\n               \n                                         \n                    variant, which won't have this problem at all.\n\n                    Although, even there, I think I've heard that\n                           \n            after 10GB\n               \n                    you don't get\n                    much benefit from raising it further.  
Not\n                           \n            sure if\n               \n                    that's accurate or\n                    not...\n\n                    -- Robert Haas\n                    EnterpriseDB: http://www.enterprisedb.com\n                    The Enterprise Postgres Company\n\n                                 \n                         \n               \n               \n\n\n    --     Sent via pgsql-performance mailing list\n    ([email protected]\n    <mailto:[email protected]>)\n    To make changes to your subscription:\n    http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 14 Jun 2010 19:56:16 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Thanks a lot Dave,\n\nThat's exactly the kind of answer I can use to justify the OS switch. \nMotivation for the previous setup was based on the fact that we will use \nthe same machine for other projects that will use SQL Server and most of \nour experience lies within the MS domain. However, these projects are \nnot a high priority currently and therefore I have been focusing on the \nbest solution for a Postgres-focused setup.\n\nThis does however mean that I will need to have the other projects \nrunning in a VM on Linux. However, they are less demanding in terms of \nresources.\n\nCheers,\nTom\n\nDave Crooke wrote:\n> With that clarification, I stand squarely behind what others are \n> saying ... if performance is important to you, then you should always \n> run databases on dedicated hardware, with the OS running on bare metal \n> with no virtualization. VirtualBox has even more I/O losses than \n> Hyper-V. It's simply not designed for this, and you're giving away a \n> ton of performance.\n>\n> If nothing else, my confusion should indicate to you how \n> unconventional and poorly performing this virtualizaed setup is ... I \n> simply assumed that the only plausible reason you were piggybacking on \n> virtualization on Windows was a mandated lack of alternative options.\n>\n> Reload the hardware with an OS which PGSQL supports well, and get rid \n> of the VirtualBox and Windows layers. If you have hardware that only \n> Windows supports well, then you may need to make some hardware changes.\n>\n> I haven't said anything about which Unix-like OS .... you may find \n> people arguing passionately for BSD vs. Linux .... however, the \n> difference between these is negligible compared to \"virtualized vs. \n> real system\", and at this point considerations like support base, ease \n> of use and familiarity also come into play.\n>\n> IMHO Ubuntu would be a fine choice, and PGSQL is a \"first-class\" \n> supported package from the distributor ... however, at customer sites, \n> I've typically used Red Hat AS because they have a corporate \n> preference for it, even though it is less convenient to install and \n> manage.\n>\n> On Mon, Jun 14, 2010 at 7:41 PM, Tom Wilcox <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi Dave,\n>\n> I am definitely able to switch OS if it will get the most out of\n> Postgres. 
So it is definitely a case of choosing the OS on the\n> needs if the app providing it is well justified.\n>\n> Currently, we are running Ubuntu Server 64-bit in a VirtualBox VM.\n>\n> Cheers,\n> Tom\n>\n>\n> Dave Crooke wrote:\n>\n> Tom\n>\n> I always prefer to choose apps based on business needs, then\n> the OS based on the needs for the app.\n>\n> Cynically, I often feel that the best answer to \"we have a\n> policy that says we're only allowed to use operating system x\"\n> is to ignore the policy .... the kind of people ignorant\n> enough to be that blinkered are usually not tech-savvy enough\n> to notice when it gets flouted :-)\n>\n> More seriously, is the policy \"Windows only on the metal\" or\n> could you run e.g. VMware ESX server? I/O is the area that\n> takes the biggest hit in virtualization, and ESX server has\n> far less overhead loss than either Hyper-V (which I presume\n> you are using) or VMWare Workstation for NT (kernels).\n>\n> If it's a Windows-only policy, then perhaps you can run those\n> traps in reverse, and switch to a Windows database, i.e.\n> Microsoft SQL Server.\n>\n> Cheers\n> Dave\n>\n> On Mon, Jun 14, 2010 at 1:53 PM, Tom Wilcox\n> <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>> wrote:\n>\n>\n> Hi Bob,\n>\n> Thanks a lot. Here's my best attempt to answer your questions:\n>\n> The VM is setup with a virtual disk image dynamically\n> expanding to\n> fill an allocation of 300GB on a fast, local hard drive\n> (avg read\n> speed = 778MB/s ).\n> WAL files can have their own disk, but how significantly would\n> this affect our performance?\n> The filesystem of the host OS is NTFS (Windows Server 2008\n> OS 64),\n> the guest filesystem is Ext2 (Ubuntu 64).\n> The workload is OLAP (lots of large, complex queries on large\n> tables run in sequence).\n>\n> In addition, I have reconfigured my server to use more memory.\n> Here's a detailed blow by blow of how I reconfigured my\n> system to\n> get better performance (for anyone who might be interested)...\n>\n> In order to increase the shared memory on Ubuntu I edited the\n> System V IPC values using sysctl:\n>\n> sysctl -w kernel.shmmax=16106127360*\n> *sysctl -w kernel.shmall=2097152\n>\n> I had some fun with permissions as I somehow managed to\n> change the\n> owner of the postgresql.conf to root where it needed to be\n> postgres, resulting in failure to start the service..\n> (Fixed with\n> chown postgres:postgres ./data/postgresql.conf and chmod u=rwx\n> ./data -R).\n>\n> I changed the following params in my configuration file..\n>\n> default_statistics_target=10000\n> maintenance_work_mem=512MB\n> work_mem=512MB\n> shared_buffers=512MB\n> wal_buffers=128MB\n>\n> With this config, the following command took 6,400,000ms:\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;\n>\n> With plan:\n> \"Seq Scan on match_data (cost=0.00..1392900.78 rows=32237278\n> width=232) (actual time=0.379..464270.682 rows=27777961\n> loops=1)\"\n> \"Total runtime: 6398238.890 ms\"\n>\n> With these changes to the previous config, the same command\n> took\n> 5,610,000ms:\n>\n> maintenance_work_mem=4GB\n> work_mem=4GB\n> shared_buffers=4GB\n> effective_cache_size=4GB\n> wal_buffers=1GB\n>\n> Resulting plan:\n>\n> \"Seq Scan on match_data (cost=0.00..2340147.72 rows=30888572\n> width=232) (actual time=0.094..452793.430 rows=27777961\n> loops=1)\"\n> \"Total runtime: 5614140.786 ms\"\n>\n> Then I performed these changes to the postgresql.conf file:\n>\n> max_connections=3\n> 
effective_cache_size=15GB\n> maintenance_work_mem=5GB\n> shared_buffers=7000MB\n> work_mem=5GB\n>\n> And ran this query (for a quick look - can't afford the\n> time for\n> the previous tests..):\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n> match_data_id < 100000;\n>\n> Result:\n>\n> \"Index Scan using match_data_pkey1 on match_data\n> (cost=0.00..15662.17 rows=4490 width=232) (actual\n> time=27.055..1908.027 rows=99999 loops=1)\"\n> \" Index Cond: (match_data_id < 100000)\"\n> \"Total runtime: 25909.372 ms\"\n>\n> I then ran EntrepriseDB's Tuner on my postgres install (for a\n> dedicated machine) and got the following settings and results:\n>\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE\n> match_data_id < 100000;\n>\n> \"Index Scan using match_data_pkey1 on match_data\n> (cost=0.00..13734.54 rows=4495 width=232) (actual\n> time=0.348..2928.844 rows=99999 loops=1)\"\n> \" Index Cond: (match_data_id < 100000)\"\n> \"Total runtime: 1066580.293 ms\"\n>\n> For now, I will go with the config using 7000MB shared_buffers.\n> Any suggestions on how I can further optimise this config for a\n> single session, 64-bit install utilising ALL of 96GB RAM. I\n> will\n> spend the next week making the case for a native install of\n> Linux,\n> but first we need to be 100% sure that is the only way to\n> get the\n> most out of Postgres on this machine.\n>\n> Thanks very much. I now feel I am at a position where I can\n> really\n> explore and find the optimal configuration for my system, but\n> would still appreciate any suggestions.\n>\n> Cheers,\n> Tom\n>\n>\n> On 11/06/2010 07:25, Bob Lunney wrote:\n>\n> Tom,\n>\n> First off, I wouldn't use a VM if I could help it, however,\n> sometimes you have to make compromises. With a 16 Gb\n> machine\n> running 64-bit Ubuntu and only PostgreSQL, I'd start by\n> allocating 4 Gb to shared_buffers. That should leave more\n> than enough room for the OS and file system cache.\n> Then I'd\n> begin testing by measuring response times of representative\n> queries with significant amounts of data.\n>\n> Also, what is the disk setup for the box? Filesystem? Can\n> WAL files have their own disk? Is the workload OLTP or\n> OLAP,\n> or a mixture of both? There is more that goes into\n> tuning a\n> PG server for good performance than simply installing the\n> software, setting a couple of GUCs and running it.\n>\n> Bob\n>\n> --- On Thu, 6/10/10, Tom Wilcox <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>> wrote:\n>\n> \n> From: Tom Wilcox <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n>\n> Subject: Re: [PERFORM] requested shared memory size\n> overflows size_t\n> To: \"Bob Lunney\" <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n>\n> Cc: \"Robert Haas\" <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>,\n> [email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>\n>\n> Date: Thursday, June 10, 2010, 10:45 AM\n> Thanks guys. 
I am currently\n> installing Pg64 onto a Ubuntu Server 64-bit\n> installation\n> running as a VM in VirtualBox with 16GB of RAM\n> accessible.\n> If what you say is true then what do you suggest I\n> do to\n> configure my new setup to best use the available\n> 16GB (96GB\n> and native install eventually if the test goes\n> well) of RAM\n> on Linux.\n>\n> I was considering starting by using Enterprise DBs\n> tuner to\n> see if that optimises things to a better quality..\n>\n> Tom\n>\n> On 10/06/2010 15:41, Bob Lunney wrote:\n> \n> True, plus there are the other issues of increased\n> \n> checkpoint times and I/O, bgwriter tuning, etc. It may\n> be better to let the OS cache the files and size\n> shared_buffers to a smaller value.\n> \n> Bob Lunney\n>\n> --- On Wed, 6/9/10, Robert\n> Haas<[email protected] <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>> \n>\n> wrote:\n> \n> \n> From: Robert Haas<[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n>\n> Subject: Re: [PERFORM] requested shared memory\n> \n> size overflows size_t\n> \n> To: \"Bob Lunney\"<[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n>\n> Cc: [email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>,\n>\n> \n> \"Tom Wilcox\"<[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n>\n> \n> Date: Wednesday, June 9, 2010, 9:49 PM\n> On Wed, Jun 2, 2010 at 9:26 PM, Bob\n> Lunney<[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n>\n> wrote:\n> \n> Your other option, of course, is a nice\n> 64-bit\n> \n> linux\n> \n> \n> variant, which won't have this problem at all.\n>\n> Although, even there, I think I've heard that\n> \n> after 10GB\n> \n> you don't get\n> much benefit from raising it further. Not\n> \n> sure if\n> \n> that's accurate or\n> not...\n>\n> -- Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n> \n> \n> \n> \n>\n>\n> -- Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>)\n>\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n\n", "msg_date": "Tue, 15 Jun 2010 02:12:31 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Tom Wilcox wrote:\n> default_statistics_target=10000\n> wal_buffers=1GB\n> max_connections=3\n> effective_cache_size=15GB\n> maintenance_work_mem=5GB\n> shared_buffers=7000MB\n> work_mem=5GB\n\nThat value for default_statistics_target means that every single query \nyou ever run will take a seriously long time to generate a plan for. \nEven on an OLAP system, I would consider 10,000 an appropriate setting \nfor a column or two in a particularly troublesome table. I wouldn't \nconsider a value of even 1,000 in the postgresql.conf to be a good \nidea. You should consider making the system default much lower, and \nincrease it only on columns that need it, not for every column on every \ntable.\n\nThere is no reason to set wal_buffers larger than 16MB, the size of a \nfull WAL segment. Have you read \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server yet? 
\ncheckpoint_segments is the main parameter you haven't touched yet you \nshould consider increasing. Even if you have a low write load, when \nVACUUM runs it will be very inefficient running against a large set of \ntables without the checkpoint frequency being decreased some. Something \nin the 16-32 range would be plenty for an OLAP setup.\n\nAt 3 connections, a work_mem of 5GB is possibly reasonable. I would \nnormally recommend that you make the default much smaller than that \nthough, and instead just increase to a large value for queries that \nbenefit from it. If someone later increases max_connections to \nsomething higher, your server could run completely out of memory if \nwork_mem isn't cut way back as part of that change.\n\nYou could consider setting effective_cache_size to something even larger \nthan that,\n\n> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE \n> match_data_id < 100000;\n\nBy the way--repeatedly running this form of query to test for \nimprovements in speed is not going to give you particularly good \nresults. Each run will execute a bunch of UPDATE statements that leave \nbehind dead rows. So the next run done for comparison sake will either \nhave to cope with that additional overhead, or it will end up triggering \nautovacuum and suffer from that. If you're going to use an UPDATE \nstatement as your benchmark, at a minimum run a manual VACUUM ANALYZE in \nbetween each test run, to level out the consistency of results a bit. \nIdeally you'd restore the whole database to an initial state before each \ntest.\n\n> I will spend the next week making the case for a native install of \n> Linux, but first we need to be 100% sure that is the only way to get \n> the most out of Postgres on this machine.\n\nI really cannot imagine taking a system as powerful as you're using here \nand crippling it by running through a VM. You should be running Ubuntu \ndirectly on the hardware, ext3 filesystem without LVM, split off RAID-1 \ndrive pairs dedicated to OS and WAL, then use the rest of them for the \ndatabase.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 14 Jun 2010 22:06:49 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "\nOn Jun 14, 2010, at 11:53 AM, Tom Wilcox wrote:\n\n> \n> \n> max_connections=3\n> effective_cache_size=15GB\n> maintenance_work_mem=5GB\n> shared_buffers=7000MB\n> work_mem=5GB\n> \n\nmaintenance_work_mem doesn't need to be so high, it certainly has no effect on your queries below. It would affect vacuum, reindex, etc.\n\nWith fast disk like this (assuming your 700MB/sec above was not a typo) make sure you tune autovacuum up to be much more aggressive than the default (increase the allowable cost per sleep by at least 10x).\n\nA big work_mem like above is OK if you know that no more than a couple sessions will be active at once. Worst case, a single connection ... probably ... won't use more than 2x that ammount. \n\n\n> For now, I will go with the config using 7000MB shared_buffers. Any \n> suggestions on how I can further optimise this config for a single \n> session, 64-bit install utilising ALL of 96GB RAM. 
I will spend the next \n> week making the case for a native install of Linux, but first we need to \n> be 100% sure that is the only way to get the most out of Postgres on \n> this machine.\n> \n\nGetting the most from the RAM does *_NOT_* mean making Postgres use all the RAM. Postgres relies on the OS file cache heavily. If there is a lot of free RAM for the OS to use to cache files, it will help the performance. Both Windows and Linux aggressively cache file pages and do a good job at it.\n\n\n\n", "msg_date": "Mon, 14 Jun 2010 20:27:11 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "\nOn Jun 14, 2010, at 7:06 PM, Greg Smith wrote:\n\n> I really cannot imagine taking a system as powerful as you're using here \n> and crippling it by running through a VM. You should be running Ubuntu \n> directly on the hardware, ext3 filesystem without LVM, split off RAID-1 \n> drive pairs dedicated to OS and WAL, then use the rest of them for the \n> database.\n> \n\nGreat points. There is one other option that is decent for the WAL:\nIf splitting out a volume is not acceptable for the OS and WAL -- absolutely split those two out into their own partitions. It is most important to make sure that WAL and data are not on the same filesystem, especially if ext3 is involved.\n\n\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 14 Jun 2010 20:49:40 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> Great points. There is one other option that is decent for the WAL:\n> If splitting out a volume is not acceptable for the OS and WAL -- absolutely split those two out into their own partitions. It is most important to make sure that WAL and data are not on the same filesystem, especially if ext3 is involved.\n\nUh, no, WAL really needs to be on its own *spindle*. The whole point\nhere is to have one disk head sitting on the WAL and not doing anything\nelse except writing to that file. Pushing WAL to a different partition\nbut still on the same physical disk is likely to be a net pessimization,\nbecause it'll increase the average seek distance whenever the head does\nhave to move between WAL and everything-else-in-the-database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jun 2010 23:57:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t " }, { "msg_contents": "Excerpts from Tom Lane's message of lun jun 14 23:57:11 -0400 2010:\n> Scott Carey <[email protected]> writes:\n> > Great points. There is one other option that is decent for the WAL:\n> > If splitting out a volume is not acceptable for the OS and WAL -- absolutely split those two out into their own partitions. It is most important to make sure that WAL and data are not on the same filesystem, especially if ext3 is involved.\n> \n> Uh, no, WAL really needs to be on its own *spindle*. 
The whole point\n> here is to have one disk head sitting on the WAL and not doing anything\n> else except writing to that file.\n\nHowever, there's another point here -- probably what Scott is on about:\non Linux (at least ext3), an fsync of any file does not limit to\nflushing that file's blocks -- it flushes *ALL* blocks on *ALL* files in\nthe filesystem. This is particularly problematic if you have pgsql_tmp\nin the same filesystem and do lots of disk-based sorts.\n\nSo if you have it in the same spindle but on a different filesystem, at\nleast you'll avoid that extra fsync work, even if you have to live with\nthe extra seeking.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 16 Jun 2010 16:53:52 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Tom Wilcox wrote:\n> Any suggestions for good monitoring software for linux?\n\nBy monitoring, do you mean for alerting purposes or for graphing \npurposes? Nagios is the only reasonable choice for the former, while \ndoing at best a mediocre job at the latter. For the later, I've found \nthat Munin does a good job of monitoring Linux and PostgreSQL in its out \nof the box configuration, in terms of providing useful activity graphs. \nAnd you can get it to play nice with Nagios.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 17 Jun 2010 17:41:54 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "On 17/06/2010 22:41, Greg Smith wrote:\n> Tom Wilcox wrote:\n>> Any suggestions for good monitoring software for linux?\n>\n> By monitoring, do you mean for alerting purposes or for graphing \n> purposes? Nagios is the only reasonable choice for the former, while \n> doing at best a mediocre job at the latter. For the later, I've found \n> that Munin does a good job of monitoring Linux and PostgreSQL in its \n> out of the box configuration, in terms of providing useful activity \n> graphs. And you can get it to play nice with Nagios.\n>\nThanks Greg. Ill check Munin and Nagios out. It is very much for \ngraphing purposes. I would like to be able to perform objective, \nplatform-independent style performance comparisons.\n\nCheers,\nTom\n", "msg_date": "Fri, 18 Jun 2010 00:46:11 +0100", "msg_from": "Tom Wilcox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "On Fri, Jun 18, 2010 at 12:46:11AM +0100, Tom Wilcox wrote:\n> On 17/06/2010 22:41, Greg Smith wrote:\n>> Tom Wilcox wrote:\n>>> Any suggestions for good monitoring software for linux?\n>>\n>> By monitoring, do you mean for alerting purposes or for graphing purposes? \n>> Nagios is the only reasonable choice for the former, while doing at best \n>> a mediocre job at the latter. For the later, I've found that Munin does a \n>> good job of monitoring Linux and PostgreSQL in its out of the box \n>> configuration, in terms of providing useful activity graphs. And you can \n>> get it to play nice with Nagios.\n>>\n> Thanks Greg. Ill check Munin and Nagios out. It is very much for graphing \n> purposes. 
I would like to be able to perform objective, \n> platform-independent style performance comparisons.\n>\n> Cheers,\n> Tom\n>\nZabbix-1.8+ is also worth taking a look at and it can run off our\nfavorite database. It allows for some very flexible monitoring and\ntrending data collection.\n\nRegards,\nKen\n", "msg_date": "Fri, 18 Jun 2010 07:48:26 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Kenneth Marshall wrote:\n> Zabbix-1.8+ is also worth taking a look at and it can run off our\n> favorite database. It allows for some very flexible monitoring and\n> trending data collection.\n> \n\nNote that while Zabbix is perfectly reasonable general solution, the \nnumber of things it monitors out of the box for PostgreSQL: \nhttp://www.zabbix.com/wiki/howto/monitor/db/postgresql is only a \nfraction of what Munin shows you. The main reason I've been suggesting \nMunin lately is because it seems to get all the basics right for new \nusers without them having to do anything but activate the PostgreSQL \nplug-in.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 18 Jun 2010 14:11:53 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "\nOn Jun 16, 2010, at 1:53 PM, Alvaro Herrera wrote:\n\n> Excerpts from Tom Lane's message of lun jun 14 23:57:11 -0400 2010:\n>> Scott Carey <[email protected]> writes:\n>>> Great points. There is one other option that is decent for the WAL:\n>>> If splitting out a volume is not acceptable for the OS and WAL -- absolutely split those two out into their own partitions. It is most important to make sure that WAL and data are not on the same filesystem, especially if ext3 is involved.\n>> \n>> Uh, no, WAL really needs to be on its own *spindle*. The whole point\n>> here is to have one disk head sitting on the WAL and not doing anything\n>> else except writing to that file.\n> \n> However, there's another point here -- probably what Scott is on about:\n> on Linux (at least ext3), an fsync of any file does not limit to\n> flushing that file's blocks -- it flushes *ALL* blocks on *ALL* files in\n> the filesystem. This is particularly problematic if you have pgsql_tmp\n> in the same filesystem and do lots of disk-based sorts.\n> \n> So if you have it in the same spindle but on a different filesystem, at\n> least you'll avoid that extra fsync work, even if you have to live with\n> the extra seeking.\n\nyes, especially with a battery backed up caching raid controller the whole \"own spindle\" thing doesn't really matter, the WAL log writes fairly slowly and linearly and any controller with a damn will batch those up efficiently.\n\nBy FAR, the most important thing is to have WAL on its own file system. If using EXT3 in a way that is safe for your data (data = ordered or better), even with just one SATA disk, performance will improve a LOT if data and xlog are separated into different file systems. Yes, an extra spindle is better.\n\nHowever with a decent RAID card or caching storage, 8 spindles for it all in one raid 10, with a partition for xlog and one for data, is often better performing than a mirrored pair for OS/xlog and 6 for data so long as the file systems are separated. 
With a dedicated xlog and caching reliable storage, you can even mount it direct to avoid polluting OS page cache.\n\n\n\n> \n> -- \n> Álvaro Herrera <[email protected]>\n> The PostgreSQL Company - Command Prompt, Inc.\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Fri, 18 Jun 2010 19:59:14 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Can anyone tell me what's going on here? I hope this doesn't mean my system tables are corrupt...\n\nThanks,\nCraig\n\n\nselect relname, pg_relation_size(relname) from pg_class\n where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n and relname not like 'pg_%'\n order by pg_relation_size(relname) desc;\nERROR: relation \"rownum_temp\" does not exist\n\nemol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';\n relname\n----------------------\n rownum_temp\n(1 row)\n\nemol_warehouse_1=> \\d rownum_temp\nDid not find any relation named \"rownum_temp\".\nemol_warehouse_1=> create table rownum_temp(i int);\nCREATE TABLE\nemol_warehouse_1=> drop table rownum_temp;\nDROP TABLE\nemol_warehouse_1=> select relname, pg_relation_size(relname) from pg_class\n where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n and relname not like 'pg_%'\n order by pg_relation_size(relname) desc;\nERROR: relation \"rownum_temp\" does not exist\n\nemol_warehouse_1=> select relname, pg_relation_size(relname) from pg_class;\nERROR: relation \"tables\" does not exist\n\n\n\n\n", "msg_date": "Thu, 24 Jun 2010 16:03:00 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:\n\n> select relname, pg_relation_size(relname) from pg_class\n> where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n> and relname not like 'pg_%'\n> order by pg_relation_size(relname) desc;\n> ERROR: relation \"rownum_temp\" does not exist\n> \n> emol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';\n> relname\n> ----------------------\n> rownum_temp\n> (1 row)\n\nWhat's the full row? I'd just add a \"WHERE relkind = 'r'\" to the above\nquery anyway.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 24 Jun 2010 19:19:25 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "On 6/24/10 4:19 PM, Alvaro Herrera wrote:\n> Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:\n>\n>> select relname, pg_relation_size(relname) from pg_class\n>> where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n>> and relname not like 'pg_%'\n>> order by pg_relation_size(relname) desc;\n>> ERROR: relation \"rownum_temp\" does not exist\n>>\n>> emol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';\n>> relname\n>> ----------------------\n>> rownum_temp\n>> (1 row)\n>\n> What's the full row? I'd just add a \"WHERE relkind = 'r'\" to the above\n> query anyway.\n\nThanks, in fact that works. But my concern is that these are system tables and system functions and yet they seem to be confused. I've used this query dozens of times and never seen this behavior before. 
It makes me really nervous...\n\nCraig\n\nP.S. Sorry I got the Subject wrong the first time by hitting the REPLY key mindlessly, I've changed it now.\n", "msg_date": "Thu, 24 Jun 2010 16:24:44 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System tables screwed up? (WAS requested shared memory\n\tsize overflows size_t)" }, { "msg_contents": "Excerpts from Craig James's message of jue jun 24 19:24:44 -0400 2010:\n> On 6/24/10 4:19 PM, Alvaro Herrera wrote:\n> > Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:\n> >\n> >> select relname, pg_relation_size(relname) from pg_class\n> >> where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n> >> and relname not like 'pg_%'\n> >> order by pg_relation_size(relname) desc;\n> >> ERROR: relation \"rownum_temp\" does not exist\n> >>\n> >> emol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';\n> >> relname\n> >> ----------------------\n> >> rownum_temp\n> >> (1 row)\n> >\n> > What's the full row? I'd just add a \"WHERE relkind = 'r'\" to the above\n> > query anyway.\n> \n> Thanks, in fact that works. But my concern is that these are system tables and system functions and yet they seem to be confused. I've used this query dozens of times and never seen this behavior before. It makes me really nervous...\n\nI think you're being bitten by lack of schema qualification. Perhaps\nyou ought to pass pg_class.oid to pg_relation_size instead of relname.\nWhat did you do to make pg_relation_size to work on type name?\n\nWhy is this a -performance question anyway?\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 24 Jun 2010 19:55:01 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System tables screwed up? (WAS requested shared memory size\n\toverflows size_t)" }, { "msg_contents": "On Thu, Jun 24, 2010 at 7:19 PM, Alvaro Herrera\n<[email protected]> wrote:\n> Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:\n>\n>> select relname, pg_relation_size(relname) from pg_class\n>>          where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n>>          and relname not like 'pg_%'\n>>          order by pg_relation_size(relname) desc;\n>> ERROR:  relation \"rownum_temp\" does not exist\n>>\n>> emol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';\n>>         relname\n>> ----------------------\n>>   rownum_temp\n>> (1 row)\n>\n> What's the full row?  
I'd just add a \"WHERE relkind = 'r'\" to the above\n> query anyway.\n\nYeah - also, it would probably be good to call pg_relation_size on\npg_class.oid rather than pg_class.relname, to avoid any chance of\nconfusion over which objects are in which schema.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 24 Jun 2010 23:05:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" }, { "msg_contents": "Remove me from your email traffic.\n \n> Date: Thu, 24 Jun 2010 23:05:06 -0400\n> Subject: Re: [PERFORM] requested shared memory size overflows size_t\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]\n> \n> On Thu, Jun 24, 2010 at 7:19 PM, Alvaro Herrera\n> <[email protected]> wrote:\n> > Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:\n> >\n> >> select relname, pg_relation_size(relname) from pg_class\n> >> where pg_get_userbyid(relowner) = 'emol_warehouse_1'\n> >> and relname not like 'pg_%'\n> >> order by pg_relation_size(relname) desc;\n> >> ERROR: relation \"rownum_temp\" does not exist\n> >>\n> >> emol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';\n> >> relname\n> >> ----------------------\n> >> rownum_temp\n> >> (1 row)\n> >\n> > What's the full row? I'd just add a \"WHERE relkind = 'r'\" to the above\n> > query anyway.\n> \n> Yeah - also, it would probably be good to call pg_relation_size on\n> pg_class.oid rather than pg_class.relname, to avoid any chance of\n> confusion over which objects are in which schema.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n_________________________________________________________________\nhttp://clk.atdmt.com/UKM/go/197222280/direct/01/\nWe want to hear all your funny, exciting and crazy Hotmail stories. Tell us now\n\n\n\n\n\nRemove me from your email traffic. > Date: Thu, 24 Jun 2010 23:05:06 -0400> Subject: Re: [PERFORM] requested shared memory size overflows size_t> From: [email protected]> To: [email protected]> CC: [email protected]; [email protected]> > On Thu, Jun 24, 2010 at 7:19 PM, Alvaro Herrera> <[email protected]> wrote:> > Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:> >> >> select relname, pg_relation_size(relname) from pg_class> >>          where pg_get_userbyid(relowner) = 'emol_warehouse_1'> >>          and relname not like 'pg_%'> >>          order by pg_relation_size(relname) desc;> >> ERROR:  relation \"rownum_temp\" does not exist> >>> >> emol_warehouse_1=> select relname from pg_class where relname = 'rownum_temp';> >>         relname> >> ----------------------> >>   rownum_temp> >> (1 row)> >> > What's the full row?  
I'd just add a \"WHERE relkind = 'r'\" to the above> > query anyway.> > Yeah - also, it would probably be good to call pg_relation_size on> pg_class.oid rather than pg_class.relname, to avoid any chance of> confusion over which objects are in which schema.> > -- > Robert Haas> EnterpriseDB: http://www.enterprisedb.com> The Enterprise Postgres Company> > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance Get a free e-mail account with Hotmail. Sign-up now.", "msg_date": "Fri, 25 Jun 2010 04:59:33 +0000", "msg_from": "Jim Montgomery <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" } ]
[ { "msg_contents": "Never say never with computer geeks ....\nhttp://www.youtube.com/watch?v=mJyAA0oPAwE\n\nOn Fri, Jun 11, 2010 at 7:44 AM, Kenneth Marshall <[email protected]> wrote:\n\n> Hi Anj,\n>\n> That is an indication that your system was less correctly\n> modeled with a random_page_cost=2 which means that the system\n> will assume that random I/O is cheaper than it is and will\n> choose plans based on that model. If this is not the case,\n> the plan chosen will almost certainly be slower for any\n> non-trivial query. You can put a 200mph speedometer in a\n> VW bug but it will never go 200mph.\n>\n> Regards,\n> Ken\n>\n> On Thu, Jun 10, 2010 at 07:54:01PM -0700, Anj Adu wrote:\n> > I changed random_page_cost=4 (earlier 2) and the performance issue is\n> gone\n> >\n> > I am not clear why a page_cost of 2 on really fast disks would perform\n> badly.\n> >\n> > Thank you for all your help and time.\n> >\n> > On Thu, Jun 10, 2010 at 8:32 AM, Anj Adu <[email protected]> wrote:\n> > > Attached\n> > >\n> > > Thank you\n> > >\n> > >\n> > > On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]>\n> wrote:\n> > >> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]>\n> wrote:\n> > >>> The plan is unaltered . There is a separate index on theDate as well\n> > >>> as one on node_id\n> > >>>\n> > >>> I have not specifically disabled sequential scans.\n> > >>\n> > >> Please do \"SHOW ALL\" and attach the results as a text file.\n> > >>\n> > >>> This query performs much better on 8.1.9 on a similar sized\n> > >>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n> > >>\n> > >> Well that could certainly matter...\n> > >>\n> > >> --\n> > >> Robert Haas\n> > >> EnterpriseDB: http://www.enterprisedb.com\n> > >> The Enterprise Postgres Company\n> > >>\n> > >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nNever say never with computer geeks .... http://www.youtube.com/watch?v=mJyAA0oPAwEOn Fri, Jun 11, 2010 at 7:44 AM, Kenneth Marshall <[email protected]> wrote:\nHi Anj,\n\nThat is an indication that your system was less correctly\nmodeled with a random_page_cost=2 which means that the system\nwill assume that random I/O is cheaper than it is and will\nchoose plans based on that model. If this is not the case,\nthe plan chosen will almost certainly be slower for any\nnon-trivial query. You can put a 200mph speedometer in a\nVW bug but it will never go 200mph.\n\nRegards,\nKen\n\nOn Thu, Jun 10, 2010 at 07:54:01PM -0700, Anj Adu wrote:\n> I changed random_page_cost=4 (earlier 2) and the performance issue is gone\n>\n> I am not clear why a page_cost of 2 on really fast disks would perform badly.\n>\n> Thank you for all your help and time.\n>\n> On Thu, Jun 10, 2010 at 8:32 AM, Anj Adu <[email protected]> wrote:\n> > Attached\n> >\n> > Thank you\n> >\n> >\n> > On Thu, Jun 10, 2010 at 6:28 AM, Robert Haas <[email protected]> wrote:\n> >> On Wed, Jun 9, 2010 at 11:17 PM, Anj Adu <[email protected]> wrote:\n> >>> The plan is unaltered . 
There is a separate index on theDate as well\n> >>> as one on node_id\n> >>>\n> >>> I have not specifically disabled sequential scans.\n> >>\n> >> Please do \"SHOW ALL\" and attach the results as a text file.\n> >>\n> >>> This query performs much better on 8.1.9 on a similar sized\n> >>> table.(althought the random_page_cost=4 on 8.1.9 and 2 on 8.4.0 )\n> >>\n> >> Well that could certainly matter...\n> >>\n> >> --\n> >> Robert Haas\n> >> EnterpriseDB: http://www.enterprisedb.com\n> >> The Enterprise Postgres Company\n> >>\n> >\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 11 Jun 2010 10:58:11 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "O/T: performance tuning cars" } ]
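For anyone wanting to try the random_page_cost change quoted above without touching postgresql.conf first, a session-level trial is enough to see whether the planner picks a better plan (a generic sketch, not specific to the original poster's query):

    SET random_page_cost = 4;
    -- re-run the slow query under EXPLAIN ANALYZE and compare the plan
    RESET random_page_cost;

Only once the per-session test confirms the improvement does it make sense to change the value in postgresql.conf for everyone.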
[ { "msg_contents": "Hello,\n\nWe are trying to optimize our box for Postgresql. We have i7, 8GB of\nram, 2xSATA RAID1 (software) running on XFS filesystem. We are running\nPostgresql and memcached on that box. Without any optimizations (just\nedited PG config) we got 50 TPS with pg_bench default run (1 client /\n10 transactions), then we've added to /home partition (where PGDATA\nis) logbuf=8 and nobarrier. With that fs setup TPS in default test is\nunstable, 150-300 TPS. So we've tested with -c 100 -t 10 and got\nstable ~400 TPS. Question is - is it decent result or we can get much\nmore from Postgres on that box setup? If yes, what we need to do? We\nare running Gentoo.\n\nHere's our config: http://paste.pocoo.org/show/224393/\n\nPS. pgbench scale is set to \"1\".\n\n-- \nGreetings,\nSzymon\n", "msg_date": "Sat, 12 Jun 2010 14:03:10 +0200", "msg_from": "Szymon Kosok <[email protected]>", "msg_from_op": true, "msg_subject": "~400 TPS - good or bad?" }, { "msg_contents": "2010/6/12 Szymon Kosok <[email protected]>:\n> PS. pgbench scale is set to \"1\".\n\nI've found in mailing list archive that scale = 1 is not good idea. So\nwe have ran pgbench -s 200 (our database is ~3 GB) -c 10 -t 3000 and\nget about ~600 TPS. Good or bad?\n\n-- \nGreetings,\nSzymon\n", "msg_date": "Sat, 12 Jun 2010 14:37:21 +0200", "msg_from": "Szymon Kosok <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ~400 TPS - good or bad?" }, { "msg_contents": "On Sat, Jun 12, 2010 at 8:37 AM, Szymon Kosok <[email protected]> wrote:\n> 2010/6/12 Szymon Kosok <[email protected]>:\n>> PS. pgbench scale is set to \"1\".\n>\n> I've found in mailing list archive that scale = 1 is not good idea. So\n> we have ran pgbench -s 200 (our database is ~3 GB) -c 10 -t 3000 and\n> get about ~600 TPS. Good or bad?\n\nYou are being bound by the performance of your disk drives. Since you\nhave 8gb ram, your database fit in memory once the cache warms up. To\nconfirm this, try running a 'select only' test with a longer\ntransaction count:\n\n pgbench -c 10 -t 10000 -S\n\nAnd compare the results. If you get much higher results (you should),\nthen we know for sure where the problem is. Your main lines of attack\non fixing disk performance issues are going to be:\n\n*) simply dealing with 400-600tps\n*) getting more/faster disk drives\n*) doing some speed/safety tradeoffs, for example synchronous_commit\n\nmerlin\n", "msg_date": "Sat, 12 Jun 2010 09:39:32 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~400 TPS - good or bad?" }, { "msg_contents": "Szymon Kosok wrote:\n> I've found in mailing list archive that scale = 1 is not good idea. So\n> we have ran pgbench -s 200 (our database is ~3 GB) -c 10 -t 3000 and\n> get about ~600 TPS. Good or bad?\n> \npgbench in its default only really tests commit rate, and often that's \nnot what is actually important to people. Your results are normal if \nyou don't have a battery-backed RAID controller. In that case, your \ndrives are only capable of committing once per disk rotation, so if you \nhave 7200RPM drives that's no more than 120 times per second. On each \nphysical disk commit, PostgreSQL will include any other pending \ntransactions that are waiting around too. So what I suspect you're \nseeing is about 100 commits/second, and on average 6 of the 10 clients \nhave something ready to commit each time. 
That's what I normally see \nwhen running pgbench on regular hard drives without a RAID controller, \nsomewhere around 500 commits/second.\n\nIf you change the number of clients to 1 you'll find out what the commit \nrate for a single client is, that should help validate whether my \nsuspicion is correct. I'd expect a fairly linear increase from 100 to \n~600 TPS as your client count goes from 1 to 10, topping out at under \n1000 TPS even with much higher client counts.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 12 Jun 2010 16:46:29 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~400 TPS - good or bad?" } ]
[ { "msg_contents": "Whenever I run this query, I get out of memory error:\n\n\nexplain analyze\n*select *\nemail_track.count AS \"Emails_Access_Count\",\nactivity.subject AS \"Emails_Subject\",\ncrmentity.crmid AS EntityId_crmentitycrmid\n*from *\n(select * from crmentity where deleted = 0 and createdtime between (now() -\ninterval '6 month') and now() ) as crmentity\ninner join\n(select * from activity where activitytype = 'Emails' and date_start\nbetween (now() - interval '6 month') and now()) as activity\non crmentity.crmid=activity.activityid\ninner join emaildetails on emaildetails.emailid = crmentity.crmid\ninner join vantage_email_track on\nvantage_email_track.mailid=emaildetails.emailid\nleft join seactivityrel on seactivityrel.activityid = emaildetails.emailid\n\nWhenever I run this query, I get out of memory error:explain analyzeselect email_track.count AS \"Emails_Access_Count\",activity.subject AS \"Emails_Subject\",crmentity.crmid AS EntityId_crmentitycrmid\nfrom (select * from crmentity where deleted = 0 and createdtime between (now() - interval '6 month') and now() ) as crmentityinner join (select * from activity where  activitytype = 'Emails' and date_start between (now() - interval '6 month')  and now()) as activity \non crmentity.crmid=activity.activityidinner join emaildetails on emaildetails.emailid = crmentity.crmidinner join vantage_email_track on vantage_email_track.mailid=emaildetails.emailidleft join seactivityrel on seactivityrel.activityid = emaildetails.emailid", "msg_date": "Sun, 13 Jun 2010 19:25:32 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "out of memory" }, { "msg_contents": "Can you provide these details\n\nwork_mem\nHow much physical memory there is on your system\n\nMost out of memory errors are associated with a high work_mem setting\n\nOn Sun, Jun 13, 2010 at 6:25 AM, AI Rumman <[email protected]> wrote:\n> Whenever I run this query, I get out of memory error:\n>\n>\n> explain analyze\n> select\n> email_track.count AS \"Emails_Access_Count\",\n> activity.subject AS \"Emails_Subject\",\n> crmentity.crmid AS EntityId_crmentitycrmid\n> from\n> (select * from crmentity where deleted = 0 and createdtime between (now() -\n> interval '6 month') and now() ) as crmentity\n> inner join\n> (select * from activity where  activitytype = 'Emails' and date_start\n> between (now() - interval '6 month')  and now()) as activity\n> on crmentity.crmid=activity.activityid\n> inner join emaildetails on emaildetails.emailid = crmentity.crmid\n> inner join vantage_email_track on\n> vantage_email_track.mailid=emaildetails.emailid\n> left join seactivityrel on seactivityrel.activityid = emaildetails.emailid\n>\n>\n", "msg_date": "Sun, 13 Jun 2010 06:59:58 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "Hello\n\nI think this SQL returns the following error.\n\nERROR: missing FROM-clause entry for table \"email_track\"\nLINE 3: email_track.count AS \"Emails_Access_Count\",\n ^\n\nFor a fact ,this SQL does not have the \"email_trac\" table in from-clause.\n\n1)Is this SQL right?\n2)If the SQL is right, can you write how make your table?\n I'd like to try your SQL.\n3)What version do you use?\n\nThis is my test.\n\n================================================================\n--PostgreSQL8.4.4. 
for CentOS5.3\nbegin;\n\ncreate schema email_track;\ncreate table email_track.crmentity\n(crmid int,\ndeleted int,\ncreatedtime date);\n\ncreate table email_track.activity\n(activityid int,\nactivitytype varchar(100),\ndate_start date,\nsubject varchar(100));\n\n\ncreate table email_track.emaildetails\n(emailid int\n);\n\ncreate table email_track.vantage_email_track\n(vantage_email_track int,\nmailid int);\n\ncreate table email_track.seactivityrel\n(activityid int);\n\nset search_path to email_track,public;\n\n\n\nexplain analyze\nselect\nemail_track.count AS \"Emails_Access_Count\",\nactivity.subject AS \"Emails_Subject\",\ncrmentity.crmid AS EntityId_crmentitycrmid\nfrom\n(select * from crmentity where deleted = 0 and createdtime between \n(now() - interval '6 month') and now() ) as crmentity\ninner join\n(select * from activity where activitytype = 'Emails' and date_start \nbetween (now() - interval '6 month') and now()) as activity\non crmentity.crmid=activity.activityid\ninner join emaildetails on emaildetails.emailid = crmentity.crmid\ninner join vantage_email_track on \nvantage_email_track.mailid=emaildetails.emailid\nleft join seactivityrel on seactivityrel.activityid = emaildetails.emailid;\n\n\nERROR: missing FROM-clause entry for table \"email_track\" at character 24\nSTATEMENT: explain analyze\n select\n email_track.count AS \"Emails_Access_Count\",\n activity.subject AS \"Emails_Subject\",\n crmentity.crmid AS EntityId_crmentitycrmid\n from\n (select * from crmentity where deleted = 0 and createdtime \nbetween (now() - interval '6 month') and now() ) as crmentity\n inner join\n (select * from activity where activitytype = 'Emails' and \ndate_start between (now() - interval '6 month') and now()) as activity\n on crmentity.crmid=activity.activityid\n inner join emaildetails on emaildetails.emailid = crmentity.crmid\n inner join vantage_email_track on \nvantage_email_track.mailid=emaildetails.emailid\n left join seactivityrel on seactivityrel.activityid = \nemaildetails.emailid;\nERROR: missing FROM-clause entry for table \"email_track\"\nLINE 3: email_track.count AS \"Emails_Access_Count\",\n ^\n\n--can not reproduce.\nabort;\n\n================================================================\n\n(2010/06/13 22:25), AI Rumman wrote:\n> Whenever I run this query, I get out of memory error:\n>\n>\n> explain analyze\n> *select *\n> email_track.count AS \"Emails_Access_Count\",\n> activity.subject AS \"Emails_Subject\",\n> crmentity.crmid AS EntityId_crmentitycrmid\n> *from *\n> (select * from crmentity where deleted = 0 and createdtime between \n> (now() - interval '6 month') and now() ) as crmentity\n> inner join\n> (select * from activity where activitytype = 'Emails' and date_start \n> between (now() - interval '6 month') and now()) as activity\n> on crmentity.crmid=activity.activityid\n> inner join emaildetails on emaildetails.emailid = crmentity.crmid\n> inner join vantage_email_track on \n> vantage_email_track.mailid=emaildetails.emailid\n> left join seactivityrel on seactivityrel.activityid = emaildetails.emailid\n>\n\n\n", "msg_date": "Mon, 14 Jun 2010 11:10:31 +0900", "msg_from": "Kenichiro Tanaka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" } ]
[ { "msg_contents": "Can any one please help me in tuning the query?\n\nexplain\nselect *\nfrom (select * from crmentity where deleted = 0 and createdtime between\n(now() - interval '6 month') and now() ) as crmentity\ninner join (select * from activity where activitytype = 'Emails' and\ndate_start between (now() - interval '6 month') and now()) as activity on\ncrmentity.crmid=activity.activityid\ninner join emaildetails on emaildetails.emailid = crmentity.crmid\ninner join vantage_email_track on\nvantage_email_track.mailid=emaildetails.emailid\nleft join seactivityrel on seactivityrel.activityid = emaildetails.emailid\n\n\n\nQUERY\nPLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=8725.27..17121.20 rows=197 width=581)\n -> Nested Loop (cost=8725.27..16805.64 rows=7 width=573)\n -> Hash Join (cost=8725.27..10643.08 rows=789 width=292)\n Hash Cond: (emaildetails.emailid =\npublic.activity.activityid)\n -> Seq Scan on emaildetails (cost=0.00..1686.95 rows=44595\nwidth=186)\n -> Hash (cost=8664.41..8664.41 rows=4869 width=106)\n -> Hash Join (cost=5288.61..8664.41 rows=4869\nwidth=106)\n Hash Cond: (vantage_email_track.mailid =\npublic.activity.activityid)\n -> Seq Scan on vantage_email_track\n(cost=0.00..1324.52 rows=88852 width=12)\n -> Hash (cost=4879.22..4879.22 rows=15071\nwidth=94)\n -> Bitmap Heap Scan on activity\n(cost=392.45..4879.22 rows=15071 width=94)\n Recheck Cond: (((activitytype)::text\n= 'Emails'::text) AND (date_start >= (now() - '6 mons'::interval)) AND\n(date_start <= now()))\n -> Bitmap Index Scan on\nactivity_activitytype_date_start_idx (cost=0.00..388.68 rows=15071 width=0)\n Index Cond:\n(((activitytype)::text = 'Emails'::text) AND (date_start >= (now() - '6\nmons'::interval)) AND (date_start <= now()))\n -> Index Scan using crmentity_pkey on crmentity (cost=0.00..7.80\nrows=1 width=281)\n Index Cond: (public.crmentity.crmid =\npublic.activity.activityid)\n Filter: ((public.crmentity.deleted = 0) AND\n(public.crmentity.createdtime <= now()) AND (public.crmentity.createdtime >=\n(now() - '6 mons'::interval)))\n -> Index Scan using seactivityrel_activityid_idx on seactivityrel\n(cost=0.00..39.57 rows=441 width=8)\n Index Cond: (seactivityrel.activityid = emaildetails.emailid)\n(19 rows)\n\nCan any one please help me in tuning the query?explain select *from (select * from crmentity where deleted = 0 and createdtime between (now() - interval '6 month') and now() ) as crmentityinner join (select * from activity where  activitytype = 'Emails' and date_start between (now() - interval '6 month')  and now()) as activity on crmentity.crmid=activity.activityid\ninner join emaildetails on emaildetails.emailid = crmentity.crmidinner join vantage_email_track on vantage_email_track.mailid=emaildetails.emailidleft join seactivityrel on seactivityrel.activityid = emaildetails.emailid\n                                                                                         QUERY PLAN                                                                                         --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join  (cost=8725.27..17121.20 rows=197 width=581)   ->  Nested Loop  (cost=8725.27..16805.64 rows=7 width=573)         ->  Hash Join  
(cost=8725.27..10643.08 rows=789 width=292)               Hash Cond: (emaildetails.emailid = public.activity.activityid)\n               ->  Seq Scan on emaildetails  (cost=0.00..1686.95 rows=44595 width=186)               ->  Hash  (cost=8664.41..8664.41 rows=4869 width=106)                     ->  Hash Join  (cost=5288.61..8664.41 rows=4869 width=106)\n                           Hash Cond: (vantage_email_track.mailid = public.activity.activityid)                           ->  Seq Scan on vantage_email_track  (cost=0.00..1324.52 rows=88852 width=12)                           ->  Hash  (cost=4879.22..4879.22 rows=15071 width=94)\n                                 ->  Bitmap Heap Scan on activity  (cost=392.45..4879.22 rows=15071 width=94)                                       Recheck Cond: (((activitytype)::text = 'Emails'::text) AND (date_start >= (now() - '6 mons'::interval)) AND (date_start <= now()))\n                                       ->  Bitmap Index Scan on activity_activitytype_date_start_idx  (cost=0.00..388.68 rows=15071 width=0)                                             Index Cond: (((activitytype)::text = 'Emails'::text) AND (date_start >= (now() - '6 mons'::interval)) AND (date_start <= now()))\n         ->  Index Scan using crmentity_pkey on crmentity  (cost=0.00..7.80 rows=1 width=281)               Index Cond: (public.crmentity.crmid = public.activity.activityid)               Filter: ((public.crmentity.deleted = 0) AND (public.crmentity.createdtime <= now()) AND (public.crmentity.createdtime >= (now() - '6 mons'::interval)))\n   ->  Index Scan using seactivityrel_activityid_idx on seactivityrel  (cost=0.00..39.57 rows=441 width=8)         Index Cond: (seactivityrel.activityid = emaildetails.emailid)(19 rows)", "msg_date": "Mon, 14 Jun 2010 16:41:26 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "query tuning help" }, { "msg_contents": "On 06/14/2010 05:41 AM, AI Rumman wrote:\n> Can any one please help me in tuning the query?\n>\n> explain\n> select *\n> from (select * from crmentity where deleted = 0 and createdtime between\n> (now() - interval '6 month') and now() ) as crmentity\n> inner join (select * from activity where activitytype = 'Emails' and\n> date_start between (now() - interval '6 month') and now()) as activity\n> on crmentity.crmid=activity.activityid\n> inner join emaildetails on emaildetails.emailid = crmentity.crmid\n> inner join vantage_email_track on\n> vantage_email_track.mailid=emaildetails.emailid\n> left join seactivityrel on seactivityrel.activityid = emaildetails.emailid\n>\n\nCan you send us 'explain analyze' too?\n\n> -> Seq Scan on emaildetails (cost=0.00..1686.95 rows=44595 width=186)\n> -> Seq Scan on vantage_email_track (cost=0.00..1324.52 rows=88852 width=12)\n\ndo you have indexes on emaildetails(emailid) and vantage_email_track(mailid)?\n\n-Andy\n", "msg_date": "Mon, 14 Jun 2010 08:50:40 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query tuning help" } ]
[ { "msg_contents": "Hi all,\nI have 2 data bases trying to perform an update query at the same time \non a same table in a third data base using db link.\nI'm getting a dead lock exception:\nERROR: deadlock detected\nDETAIL: Process 27305 waits for ShareLock on transaction 55575; blocked \nby process 27304.\nProcess 27304 waits for ShareLock on transaction 55576; blocked by \nprocess 27305.\nHINT: See server log for query details.\nActually the folowing function is installed on 2 dbs DB1 and DB2. This \nfunction issues an update query on DB3.\nWhen this function is running simultaneously on DB1 and DB2, it produces \na dead lock making one of the functions (in DB1 or DB2) stop with the \nabove exception:\nIs it normal? should'nt postgres be able to handle such situations, for \nex: let one transaction wait untill the other commits or rollback then \ncontinue with the first transaction?\nIs there a parameter that should be set in postgresql.conf to allow \nhandling of concurrent transaction...?\n\nCREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\nRETURNS VOID AS'\nDECLARE\nC INTEGER;\nP ALIAS FOR $1;\nDUMMY VARCHAR;\nBEGIN\n C:= 0;\n LOOP\n EXIT WHEN C > 15;\n C:= C+1;\n SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3', \n''UPDATE IN_FICHE_PRODUIT SET VALIDE = 1'');\n RAISE NOTICE ''%, %'', C,P;\n END LOOP;\nEND;'\nLANGUAGE 'plpgsql';\n\nThanks for your time.\n\n\n\n\n\n\nHi all,\nI have 2 data bases trying to perform an update query at the same time\non a same table in a third data base using db link.\nI'm getting a dead lock exception:\nERROR:  deadlock detected\nDETAIL:  Process 27305 waits for ShareLock on transaction 55575;\nblocked by process 27304.\nProcess 27304 waits for ShareLock on transaction 55576; blocked by\nprocess 27305.\nHINT:  See server log for query details.\nActually the folowing function is installed on 2 dbs DB1 and\nDB2. This function issues an update query on DB3.\nWhen this function is running simultaneously on DB1 and DB2, it\nproduces a dead lock making one of the functions (in DB1 or DB2) stop\nwith the above exception:\nIs it normal? 
should'nt postgres be able to handle such situations, for\nex:  let one transaction wait untill the other commits or rollback then\ncontinue with the first transaction?\nIs there a parameter that should be set in postgresql.conf to allow\nhandling of concurrent transaction...?\n\nCREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\nRETURNS VOID AS'\nDECLARE \nC INTEGER;\nP ALIAS FOR $1;\nDUMMY  VARCHAR;\nBEGIN    \n    C:= 0;\n    LOOP\n        EXIT WHEN C > 15;\n        C:= C+1;\n        SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3',\n''UPDATE IN_FICHE_PRODUIT SET VALIDE = 1'');\n    RAISE NOTICE ''%, %'', C,P;\n    END LOOP;\nEND;'\nLANGUAGE 'plpgsql';\n\nThanks for your time.", "msg_date": "Mon, 14 Jun 2010 14:50:43 +0300", "msg_from": "Elias Ghanem <[email protected]>", "msg_from_op": true, "msg_subject": "Dead lock" }, { "msg_contents": "On 14/06/10 12:50, Elias Ghanem wrote:\n> SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3', \n> ''UPDATE IN_FICHE_PRODUIT SET VALIDE = 1'');\n\nIf there's more than one value in that table, an explicit ORDER BY might \nhelp (otherwise you could get the situation where query A will update in \nthe order 1,2,3 and query B will do 3,2,1 so neither will be able to get \nthe requested locks until the other query has finished).\n\nTom\n\n", "msg_date": "Mon, 14 Jun 2010 13:54:24 +0100", "msg_from": "Tom Molesworth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dead lock" }, { "msg_contents": "On 06/14/2010 06:50 AM, Elias Ghanem wrote:\n> Hi all,\n> I have 2 data bases trying to perform an update query at the same time\n> on a same table in a third data base using db link.\n> I'm getting a dead lock exception:\n> ERROR: deadlock detected\n> DETAIL: Process 27305 waits for ShareLock on transaction 55575; blocked\n> by process 27304.\n> Process 27304 waits for ShareLock on transaction 55576; blocked by\n> process 27305.\n> HINT: See server log for query details.\n> Actually the folowing function is installed on 2 dbs DB1 and DB2. This\n> function issues an update query on DB3.\n> When this function is running simultaneously on DB1 and DB2, it produces\n> a dead lock making one of the functions (in DB1 or DB2) stop with the\n> above exception:\n> Is it normal? should'nt postgres be able to handle such situations, for\n> ex: let one transaction wait untill the other commits or rollback then\n> continue with the first transaction?\n> Is there a parameter that should be set in postgresql.conf to allow\n> handling of concurrent transaction...?\n>\n> CREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\n> RETURNS VOID AS'\n> DECLARE\n> C INTEGER;\n> P ALIAS FOR $1;\n> DUMMY VARCHAR;\n> BEGIN\n> C:= 0;\n> LOOP\n> EXIT WHEN C > 15;\n> C:= C+1;\n> SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3', ''UPDATE\n> IN_FICHE_PRODUIT SET VALIDE = 1'');\n> RAISE NOTICE ''%, %'', C,P;\n> END LOOP;\n> END;'\n> LANGUAGE 'plpgsql';\n>\n> Thanks for your time.\n\n\nI think PG is doing what you want.. if you think about it. You start two transactions at the same time. A transaction is defined as \"do this set of operations, all of which must succeed or fail atomicly\". One transaction cannot update the exact same row as another transaction because that would break the second transactions \"must succeed\" rule.\n\n\n-Andy\n", "msg_date": "Mon, 14 Jun 2010 08:59:40 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dead lock" } ]
[ { "msg_contents": "Hi,\nActually i guess the problem is related to the way PG uses to aquire \nlock on the rows that will be updated.\nSuppose the update query will affect 5 rows: A, B, C, D and E.\nApparently the folowing senario is happening:\n 1- Transaction1 locks row A\n 2- Trnasaction2 locks row B\n 3- Transaction1 updates row A\n 4- Tranasaction2 updates row B\n 5- Transaction1 *tries *to acquire lock on row B(and fail because \nrow B is still locked by transaction2)\n 6- Transaction2 *tries *to acquire lock on row A(and fail because \nrow A is still locked by transaction1)\nHence the dead lock.\nIs this a plausible explanation of what is going on?\nIf yes, what can be done to avoid the dead lock?\nThanks again.\n\n\n-------- Original Message --------\nSubject: \tDead lock\nDate: \tMon, 14 Jun 2010 14:50:43 +0300\nFrom: \tElias Ghanem <[email protected]>\nTo: \[email protected]\n\n\n\nHi all,\nI have 2 data bases trying to perform an update query at the same time \non a same table in a third data base using db link.\nI'm getting a dead lock exception:\nERROR: deadlock detected\nDETAIL: Process 27305 waits for ShareLock on transaction 55575; blocked \nby process 27304.\nProcess 27304 waits for ShareLock on transaction 55576; blocked by \nprocess 27305.\nHINT: See server log for query details.\nActually the folowing function is installed on 2 dbs DB1 and DB2. This \nfunction issues an update query on DB3.\nWhen this function is running simultaneously on DB1 and DB2, it produces \na dead lock making one of the functions (in DB1 or DB2) stop with the \nabove exception:\nIs it normal? should'nt postgres be able to handle such situations, for \nex: let one transaction wait untill the other commits or rollback then \ncontinue with the first transaction?\nIs there a parameter that should be set in postgresql.conf to allow \nhandling of concurrent transaction...?\n\nCREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\nRETURNS VOID AS'\nDECLARE\nC INTEGER;\nP ALIAS FOR $1;\nDUMMY VARCHAR;\nBEGIN\n C:= 0;\n LOOP\n EXIT WHEN C > 15;\n C:= C+1;\n SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3', \n''UPDATE IN_FICHE_PRODUIT SET VALIDE = 1'');\n RAISE NOTICE ''%, %'', C,P;\n END LOOP;\nEND;'\nLANGUAGE 'plpgsql';\n\nThanks for your time.\n\n\n\n\n\n\n\nHi,\nActually i guess the problem is related to the way PG uses to aquire\nlock on the rows that will be updated.\nSuppose the update query will affect 5 rows: A, B, C, D and E.\nApparently the folowing senario is happening:\n    1- Transaction1 locks row A\n    2- Trnasaction2 locks row B\n    3- Transaction1 updates row A\n    4- Tranasaction2 updates row B\n    5- Transaction1 tries to acquire lock on row B(and fail\nbecause row B is still locked by transaction2)\n    6- Transaction2 tries to acquire lock on row A(and fail\nbecause row A is still locked by transaction1)\nHence the dead lock.\nIs this a plausible explanation of what is going on?\nIf yes, what can be done to avoid the dead lock?\nThanks again.\n\n\n-------- Original Message --------\n\n\n\nSubject: \nDead lock\n\n\nDate: \nMon, 14 Jun 2010 14:50:43 +0300\n\n\nFrom: \nElias Ghanem <[email protected]>\n\n\nTo: \[email protected]\n\n\n\n\n\n\nHi all,\nI have 2 data bases trying to perform an update query at the same time\non a same table in a third data base using db link.\nI'm getting a dead lock exception:\nERROR:  deadlock detected\nDETAIL:  Process 27305 waits for ShareLock on transaction 55575;\nblocked by process 27304.\nProcess 27304 waits for ShareLock on transaction 
55576; blocked by\nprocess 27305.\nHINT:  See server log for query details.\nActually the folowing function is installed on 2 dbs DB1 and\nDB2. This function issues an update query on DB3.\nWhen this function is running simultaneously on DB1 and DB2, it\nproduces a dead lock making one of the functions (in DB1 or DB2) stop\nwith the above exception:\nIs it normal? should'nt postgres be able to handle such situations, for\nex:  let one transaction wait untill the other commits or rollback then\ncontinue with the first transaction?\nIs there a parameter that should be set in postgresql.conf to allow\nhandling of concurrent transaction...?\n\nCREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\nRETURNS VOID AS'\nDECLARE \nC INTEGER;\nP ALIAS FOR $1;\nDUMMY  VARCHAR;\nBEGIN    \n    C:= 0;\n    LOOP\n        EXIT WHEN C > 15;\n        C:= C+1;\n        SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3',\n''UPDATE IN_FICHE_PRODUIT SET VALIDE = 1'');\n    RAISE NOTICE ''%, %'', C,P;\n    END LOOP;\nEND;'\nLANGUAGE 'plpgsql';\n\nThanks for your time.", "msg_date": "Mon, 14 Jun 2010 18:36:58 +0300", "msg_from": "Elias Ghanem <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Dead lock" }, { "msg_contents": "It's a standard (indeed, required) best practice of concurrent database\nprogramming across any brand of database to ensure that multi-row\ntransactions always acquire the locks they use in a predictable order based\non row identities, e.g. for the classic banking debit-credit pair, doing\nsomething like this (Java / JDBC, simplified for brevity and clarity):\n\nPreparedStatement debit = conn.prepareStatement(\"update account set balance\n= balance - ? where acc_no = ? and balance > ?\");\ndebit.setLong(1, amount);\ndebit.setLong(2, debit_acct);\ndebit.setLong(3, amount);\n\nPreparedStatement credit = conn.prepareStatement(\"update account set balance\n= balance + ? where acc_no = ?\");\ncredit.setLong(1, amount);\ncredit.setLong(2, credit_acct);\n\ntry {\n // always acquire row locks in increasing account number order\n conn.beginTransaction();\n if (credit_acct < debit_acct) {\n credit.executeUpdate();\n if (debit.executeUpdate() < 1) throw new SQLException(\"Insufficient\nbalance\");\n }\n else {\n if (debit.executeUpdate() < 1) throw new SQLException(\"Insufficient\nbalance\");\n credit.executeUpdate();\n }\n}\ncatch (SQLException e) {\n System.err.println(\"Oops. transaction failed: \", e.getMessage());\n conn.rollback();\n}\nconn.commit();\n\nIf you're doing straight SQL bulk updates, then as someone suggested, you\ncould use an ORDER BY on a subquery, but I don't know if that is a\nguarantee, if you're not actually displaying the results then the DB may be\ntechnically allowed to optimize it out from underneath you. The only way to\nbe sure is a cursor / procedure.\n\nIn short, this boils down to learning more about database programming. 
PG is\nperforming as it should.\n\nCheers\nDave\n\nOn Mon, Jun 14, 2010 at 10:36 AM, Elias Ghanem <[email protected]> wrote:\n\n> Hi,\n> Actually i guess the problem is related to the way PG uses to aquire lock\n> on the rows that will be updated.\n> Suppose the update query will affect 5 rows: A, B, C, D and E.\n> Apparently the folowing senario is happening:\n> 1- Transaction1 locks row A\n> 2- Trnasaction2 locks row B\n> 3- Transaction1 updates row A\n> 4- Tranasaction2 updates row B\n> 5- Transaction1 *tries *to acquire lock on row B(and fail because row\n> B is still locked by transaction2)\n> 6- Transaction2 *tries *to acquire lock on row A(and fail because row\n> A is still locked by transaction1)\n> Hence the dead lock.\n> Is this a plausible explanation of what is going on?\n> If yes, what can be done to avoid the dead lock?\n> Thanks again.\n>\n>\n>\n> -------- Original Message -------- Subject: Dead lock Date: Mon, 14 Jun\n> 2010 14:50:43 +0300 From: Elias Ghanem <[email protected]><[email protected]> To:\n> [email protected]\n>\n> Hi all,\n> I have 2 data bases trying to perform an update query at the same time on a\n> same table in a third data base using db link.\n> I'm getting a dead lock exception:\n> ERROR: deadlock detected\n> DETAIL: Process 27305 waits for ShareLock on transaction 55575; blocked by\n> process 27304.\n> Process 27304 waits for ShareLock on transaction 55576; blocked by process\n> 27305.\n> HINT: See server log for query details.\n> Actually the folowing function is installed on 2 dbs DB1 and DB2. This\n> function issues an update query on DB3.\n> When this function is running simultaneously on DB1 and DB2, it produces a\n> dead lock making one of the functions (in DB1 or DB2) stop with the above\n> exception:\n> Is it normal? should'nt postgres be able to handle such situations, for\n> ex: let one transaction wait untill the other commits or rollback then\n> continue with the first transaction?\n> Is there a parameter that should be set in postgresql.conf to allow\n> handling of concurrent transaction...?\n>\n> CREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\n> RETURNS VOID AS'\n> DECLARE\n> C INTEGER;\n> P ALIAS FOR $1;\n> DUMMY VARCHAR;\n> BEGIN\n> C:= 0;\n> LOOP\n> EXIT WHEN C > 15;\n> C:= C+1;\n> SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3', ''UPDATE\n> IN_FICHE_PRODUIT SET VALIDE = 1'');\n> RAISE NOTICE ''%, %'', C,P;\n> END LOOP;\n> END;'\n> LANGUAGE 'plpgsql';\n>\n> Thanks for your time.\n>\n>\n\nIt's a standard (indeed, required) best practice of concurrent database programming across any brand of database to ensure that multi-row transactions always acquire the locks they use in a predictable order based on row identities, e.g. for the classic banking debit-credit pair, doing something like this (Java / JDBC, simplified for brevity and clarity):\nPreparedStatement debit = conn.prepareStatement(\"update account set \nbalance = balance - ? where acc_no = ? and balance > ?\");debit.setLong(1, amount);\ndebit.setLong(2, debit_acct);debit.setLong(3, amount);\nPreparedStatement credit = conn.prepareStatement(\"update account set \nbalance = balance + ? 
where acc_no = ?\");\ncredit.setLong(1, amount);credit.setLong(2, credit_acct);\ntry {   // always acquire row locks in increasing account number order\n   conn.beginTransaction();   if (credit_acct < debit_acct) {\n      credit.executeUpdate();      if \n(debit.executeUpdate() < 1) throw new SQLException(\"Insufficient \nbalance\");\n   }\n   else {      if \n(debit.executeUpdate() < 1) throw new SQLException(\"Insufficient \nbalance\");\n      credit.executeUpdate();\n   }}catch (SQLException e) {\n   System.err.println(\"Oops. transaction failed: \", e.getMessage());   conn.rollback();\n}conn.commit();\nIf you're doing straight SQL bulk updates, then as someone suggested, you could use an ORDER BY on a subquery, but I don't know if that is a guarantee, if you're not actually displaying the results then the DB may be technically allowed to optimize it out from underneath you. The only way to be sure is a cursor / procedure.\nIn short, this boils down to learning more about database programming. PG is performing as it should.CheersDaveOn Mon, Jun 14, 2010 at 10:36 AM, Elias Ghanem <[email protected]> wrote:\n\n\nHi,\nActually i guess the problem is related to the way PG uses to aquire\nlock on the rows that will be updated.\nSuppose the update query will affect 5 rows: A, B, C, D and E.\nApparently the folowing senario is happening:\n    1- Transaction1 locks row A\n    2- Trnasaction2 locks row B\n    3- Transaction1 updates row A\n    4- Tranasaction2 updates row B\n    5- Transaction1 tries to acquire lock on row B(and fail\nbecause row B is still locked by transaction2)\n    6- Transaction2 tries to acquire lock on row A(and fail\nbecause row A is still locked by transaction1)\nHence the dead lock.\nIs this a plausible explanation of what is going on?\nIf yes, what can be done to avoid the dead lock?\nThanks again.\n\n\n-------- Original Message --------\n\n\n\nSubject: \nDead lock\n\n\nDate: \nMon, 14 Jun 2010 14:50:43 +0300\n\n\nFrom: \nElias Ghanem <[email protected]>\n\n\nTo: \[email protected]\n\n\n\n\n\n\nHi all,\nI have 2 data bases trying to perform an update query at the same time\non a same table in a third data base using db link.\nI'm getting a dead lock exception:\nERROR:  deadlock detected\nDETAIL:  Process 27305 waits for ShareLock on transaction 55575;\nblocked by process 27304.\nProcess 27304 waits for ShareLock on transaction 55576; blocked by\nprocess 27305.\nHINT:  See server log for query details.\nActually the folowing function is installed on 2 dbs DB1 and\nDB2. This function issues an update query on DB3.\nWhen this function is running simultaneously on DB1 and DB2, it\nproduces a dead lock making one of the functions (in DB1 or DB2) stop\nwith the above exception:\nIs it normal? 
should'nt postgres be able to handle such situations, for\nex:  let one transaction wait untill the other commits or rollback then\ncontinue with the first transaction?\nIs there a parameter that should be set in postgresql.conf to allow\nhandling of concurrent transaction...?\n\nCREATE OR REPLACE FUNCTION TEST_DB_LINK(VARCHAR)\nRETURNS VOID AS'\nDECLARE \nC INTEGER;\nP ALIAS FOR $1;\nDUMMY  VARCHAR;\nBEGIN    \n    C:= 0;\n    LOOP\n        EXIT WHEN C > 15;\n        C:= C+1;\n        SELECT INTO DUMMY DBLINK_EXEC(''CONNECTION_STRING TO DB3',\n''UPDATE IN_FICHE_PRODUIT SET VALIDE = 1'');\n    RAISE NOTICE ''%, %'', C,P;\n    END LOOP;\nEND;'\nLANGUAGE 'plpgsql';\n\nThanks for your time.", "msg_date": "Mon, 14 Jun 2010 10:58:20 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Dead lock" }, { "msg_contents": "On Mon, Jun 14, 2010 at 11:58 AM, Dave Crooke <[email protected]> wrote:\n> If you're doing straight SQL bulk updates, then as someone suggested, you could use an ORDER BY on a subquery, but I don't know if that is a guarantee, if you're not actually displaying the results then the DB may be technically allowed to optimize it out from underneath you. The only way to be sure is a cursor / procedure.\n\n'order by' should be safe if you use SELECT...FOR UPDATE. update\ndoesn't have an order by clause. Using cursor/procedure vs a query\nis not the material point; you have to make sure locks are acquired in\na regular way.\n\nupdate foo set x=x where id in (select * from bar order by x) does\nlook dangerous.\n\nI think:\nupdate foo set x=x where id in (select * from bar order by x for update)\nshould be ok. I don't usually do it that way.\n\nmerlin\n", "msg_date": "Mon, 14 Jun 2010 16:44:26 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Dead lock" } ]
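The follow-up thread above converges on the same remedy: make the lock acquisition order deterministic, e.g. with SELECT ... ORDER BY ... FOR UPDATE. A complementary, purely client-side measure that the thread does not discuss -- so treat this as a hedged sketch rather than anything the participants endorsed -- is to catch the deadlock error when it does occur (SQLSTATE 40P01, which psycopg2 surfaces as extensions.TransactionRollbackError) and rerun the whole transaction. The table name below is the one from the thread; the dsn and helper name are assumptions for illustration.

import time
import psycopg2
import psycopg2.extensions

def run_with_retry(dsn, work, attempts=5):
    # work(cur) must contain the entire transaction body, because on a
    # deadlock the server aborts the transaction and we re-run it all.
    conn = psycopg2.connect(dsn)
    try:
        for attempt in range(attempts):
            try:
                cur = conn.cursor()
                work(cur)
                conn.commit()
                return
            except psycopg2.extensions.TransactionRollbackError:
                # deadlock_detected or serialization_failure: back off and retry
                conn.rollback()
                time.sleep(0.1 * (attempt + 1))
        raise RuntimeError("transaction still failing after %d attempts" % attempts)
    finally:
        conn.close()

# Example use (dsn assumed):
# run_with_retry("dbname=db3",
#                lambda cur: cur.execute("UPDATE IN_FICHE_PRODUIT SET VALIDE = 1"))

Retrying is only safe when the transaction body can be re-run from scratch, which a plain idempotent UPDATE like the one in the thread can be.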
[ { "msg_contents": "We have a fairly unique need for a local, in-memory cache. This will\nstore data aggregated from other sources. Generating the data only\ntakes a few minutes, and it is updated often. There will be some\nfairly expensive queries of arbitrary complexity run at a fairly high\nrate. We're looking for high concurrency and reasonable performance\nthroughout.\n\nThe entire data set is roughly 20 MB in size. We've tried Carbonado in\nfront of SleepycatJE only to discover that it chokes at a fairly low\nconcurrency and that Carbonado's rule-based optimizer is wholly\ninsufficient for our needs. We've also tried Carbonado's Map\nRepository which suffers the same problems.\n\nI've since moved the backend database to a local PostgreSQL instance\nhoping to take advantage of PostgreSQL's superior performance at high\nconcurrency. Of course, at the default settings, it performs quite\npoorly compares to the Map Repository and Sleepycat JE.\n\nMy question is how can I configure the database to run as quickly as\npossible if I don't care about data consistency or durability? That\nis, the data is updated so often and it can be reproduced fairly\nrapidly so that if there is a server crash or random particles from\nspace mess up memory we'd just restart the machine and move on.\n\nI've never configured PostgreSQL to work like this and I thought maybe\nsomeone here had some ideas on a good approach to this.\n", "msg_date": "Mon, 14 Jun 2010 19:14:46 -0700 (PDT)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL as a local in-memory cache" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> My question is how can I configure the database to run as quickly as\n> possible if I don't care about data consistency or durability? That\n> is, the data is updated so often and it can be reproduced fairly\n> rapidly so that if there is a server crash or random particles from\n> space mess up memory we'd just restart the machine and move on.\n\nFor such a scenario, I'd suggest you:\n\n- Set up a filesystem that is memory-backed. On Linux, RamFS or TmpFS\n are reasonable options for this.\n\n- The complication would be that your \"restart the machine and move\n on\" needs to consist of quite a few steps:\n\n - recreating the filesystem\n - fixing permissions as needed\n - running initdb to set up new PG instance\n - automating any needful fiddling with postgresql.conf, pg_hba.conf\n - starting up that PG instance\n - creating users, databases, schemas, ...\n\nWhen my desktop machine's not dead [as it is now :-(], I frequently\nuse this very kind of configuration to host databases where I'm doing\nfunctionality testing on continually-freshly-created DBs and therefore\ndon't actually care if they get thrown away.\n\nI have set up an \"init.d\"-style script which has an extra target to do\nthe database \"init\" in order to make the last few steps mentioned as\nquick as possible.\n\n ~/dbs/pgsql-head.sh init\n\ngoes an extra mile, using sed to rewrite postgresql.conf to change\ndefaults.\n\nI expect that, if running on a ramdisk, you'd want to fiddle some of\nthe disk performance parameters in postgresql.conf.\n\nIt's certainly worth trying out the ramdisk to see if it helps with\nthis case. 
Note that all you'll lose is durability under conditions\nof hardware outage - PostgreSQL will still care as much as always\nabout data consistency.\n\n[Thinking about wilder possibilities...]\n\nI wonder if this kind of installation \"comes into its own\" for more\nrealistic scenarios in the presence of streaming replication. If you\nknow the WAL files have gotten to disk on another server, that's a\npretty good guarantee :-).\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://cbbrowne.com/info/internet.html\n\"MS apparently now has a team dedicated to tracking problems with\nLinux and publicizing them. I guess eventually they'll figure out\nthis back fires... ;)\" -- William Burrow <[email protected]>\n", "msg_date": "Tue, 15 Jun 2010 11:47:15 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Chris Browne wrote:\n> \"[email protected]\" <[email protected]> writes:\n>> My question is how can I configure the database to run as quickly as\n>> possible if I don't care about data consistency or durability? That\n>> is, the data is updated so often and it can be reproduced fairly\n>> rapidly so that if there is a server crash or random particles from\n>> space mess up memory we'd just restart the machine and move on.\n> \n> For such a scenario, I'd suggest you:\n> \n> - Set up a filesystem that is memory-backed. On Linux, RamFS or TmpFS\n> are reasonable options for this.\n> \n> - The complication would be that your \"restart the machine and move\n> on\" needs to consist of quite a few steps:\n> \n> - recreating the filesystem\n> - fixing permissions as needed\n> - running initdb to set up new PG instance\n> - automating any needful fiddling with postgresql.conf, pg_hba.conf\n> - starting up that PG instance\n> - creating users, databases, schemas, ...\n\nDoesn't PG now support putting both WAL and user table files onto\nfile systems other than the one holding the PG config files and PG\n'admin' tables? Wouldn't doing so simplify the above considertably\nby allowing just the WAL and user tables on the memory-backed file\nsystems? I wouldn't think the performance impact of leaving\nthe rest of the stuff on disk would be that large.\n\nOr does losing WAL files mandate a new initdb?\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Tue, 15 Jun 2010 09:02:31 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Jun 15, 8:47 am, Chris Browne <[email protected]> wrote:\n> \"[email protected]\" <[email protected]> writes:\n> > My question is how can I configure the database to run as quickly as\n> > possible if I don't care about data consistency or durability? That\n> > is, the data is updated so often and it can be reproduced fairly\n> > rapidly so that if there is a server crash or random particles from\n> > space mess up memory we'd just restart the machine and move on.\n>\n> For such a scenario, I'd suggest you:\n>\n> - Set up a filesystem that is memory-backed.  On Linux, RamFS or TmpFS\n>   are reasonable options for this.\n>\n\nI had forgotten about this. 
I will try this out.\n\n> - The complication would be that your \"restart the machine and move\n>   on\" needs to consist of quite a few steps:\n>\n>   - recreating the filesystem\n>   - fixing permissions as needed\n>   - running initdb to set up new PG instance\n>   - automating any needful fiddling with postgresql.conf, pg_hba.conf\n>   - starting up that PG instance\n>   - creating users, databases, schemas, ...\n>\n\nI'm going to have a system in place to create these databases when I\nrestart the service.\n\n> ...\n>\n> I wonder if this kind of installation \"comes into its own\" for more\n> realistic scenarios in the presence of streaming replication.  If you\n> know the WAL files have gotten to disk on another server, that's a\n> pretty good guarantee :-).\n>\n\nI have found that pre-computing and storing values in a general\nrelational-type database without durability is an ideal use case to\nhelp improve services that need to return calculated results quickly.\nA simple hash lookup is no longer sufficient. Perhaps PostgreSQL\nrunning in this mode will be the ideal solution.\n\nNowadays, no one is really surprised that it takes 30 seconds or so to\nreplicate your data everywhere, but they do detest not getting answers\nto their complicated queries immediately.\n\n", "msg_date": "Tue, 15 Jun 2010 09:49:37 -0700 (PDT)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "[oops, didn't hit \"reply to list\" first time, resending...]\n\nOn 6/15/10 9:02 AM, Steve Wampler wrote:\n> Chris Browne wrote:\n>> \"[email protected]\" <[email protected]> writes:\n>>> My question is how can I configure the database to run as quickly as\n>>> possible if I don't care about data consistency or durability? That\n>>> is, the data is updated so often and it can be reproduced fairly\n>>> rapidly so that if there is a server crash or random particles from\n>>> space mess up memory we'd just restart the machine and move on.\n>>\n>> For such a scenario, I'd suggest you:\n>>\n>> - Set up a filesystem that is memory-backed. On Linux, RamFS or TmpFS\n>> are reasonable options for this.\n>>\n>> - The complication would be that your \"restart the machine and move\n>> on\" needs to consist of quite a few steps:\n>>\n>> - recreating the filesystem\n>> - fixing permissions as needed\n>> - running initdb to set up new PG instance\n>> - automating any needful fiddling with postgresql.conf, pg_hba.conf\n>> - starting up that PG instance\n>> - creating users, databases, schemas, ...\n\nHow about this: Set up a database entirely on a RAM disk, then install a WAL-logging warm standby. If the production computer goes down, you bring the warm standby online, shut it down, and use tar(1) to recreate the database on the production server when you bring it back online. You have speed and you have near-100% backup.\n\nCraig\n", "msg_date": "Tue, 15 Jun 2010 10:12:38 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "[email protected] (Steve Wampler) writes:\n> Or does losing WAL files mandate a new initdb?\n\nLosing WAL would mandate initdb, so I'd think this all fits into the\nset of stuff worth putting onto ramfs/tmpfs. 
Certainly it'll all be\nsignificant to the performance focus.\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://cbbrowne.com/info/internet.html\n\"MS apparently now has a team dedicated to tracking problems with\nLinux and publicizing them. I guess eventually they'll figure out\nthis back fires... ;)\" -- William Burrow <[email protected]>\n", "msg_date": "Tue, 15 Jun 2010 13:37:55 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Tue, Jun 15, 2010 at 12:37 PM, Chris Browne <[email protected]> wrote:\n> [email protected] (Steve Wampler) writes:\n>> Or does losing WAL files mandate a new initdb?\n>\n> Losing WAL would mandate initdb, so I'd think this all fits into the\n> set of stuff worth putting onto ramfs/tmpfs.  Certainly it'll all be\n> significant to the performance focus.\n\nwhy is that? isn't simply execute pg_resetxlog enough? specially\n'cause OP doesn't care about loosing some transactions\n\n-- \nJaime Casanova www.2ndQuadrant.com\nSoporte y capacitación de PostgreSQL\n", "msg_date": "Tue, 15 Jun 2010 18:09:58 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On 6/15/10 10:37 AM, Chris Browne wrote:\n> [email protected] (Steve Wampler) writes:\n>> Or does losing WAL files mandate a new initdb?\n> \n> Losing WAL would mandate initdb, so I'd think this all fits into the\n> set of stuff worth putting onto ramfs/tmpfs. Certainly it'll all be\n> significant to the performance focus.\n\nI'd like to see some figures about WAL on RAMfs vs. simply turning off\nfsync and full_page_writes. Per Gavin's tests, PostgreSQL is already\nclose to TokyoCabinet/MongoDB performance just with those turned off; I\nwonder if actually having the WAL on a memory partition would make any\nreal difference in throughput.\n\nI've seen a lot of call for this recently, especially since PostgreSQL\nseems to be increasingly in use as a reporting server for Hadoop. Might\nbe worth experimenting with just making wal writing a no-op. We'd also\nwant to disable checkpointing, of course.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 15 Jun 2010 16:18:19 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Jun 15, 4:18 pm, [email protected] (Josh Berkus) wrote:\n> On 6/15/10 10:37 AM, Chris Browne wrote:\n>\n> I'd like to see some figures about WAL on RAMfs vs. simply turning off\n> fsync and full_page_writes.  Per Gavin's tests, PostgreSQL is already\n> close to TokyoCabinet/MongoDB performance just with those turned off; I\n> wonder if actually having the WAL on a memory partition would make any\n> real difference in throughput.\n>\n> I've seen a lot of call for this recently, especially since PostgreSQL\n> seems to be increasingly in use as a reporting server for Hadoop.  Might\n> be worth experimenting with just making wal writing a no-op.  
We'd also\n> want to disable checkpointing, of course.\n>\n\nMy back-of-the-envelope experiment: Inserting single integers into a\ntable without indexes using a prepared query via psycopg2.\n\nPython Script:\nimport psycopg2\nfrom time import time\nconn = psycopg2.connect(database='jgardner')\ncursor = conn.cursor()\ncursor.execute(\"CREATE TABLE test (data int not null)\")\nconn.commit()\ncursor.execute(\"PREPARE ins AS INSERT INTO test VALUES ($1)\")\nconn.commit()\nstart = time()\ntx = 0\nwhile time() - start < 1.0:\n cursor.execute(\"EXECUTE ins(%s)\", (tx,));\n conn.commit()\n tx += 1\nprint tx\ncursor.execute(\"DROP TABLE test\");\nconn.commit();\n\nLocal disk, WAL on same FS:\n* Default config => 90\n* full_page_writes=off => 90\n* synchronous_commit=off => 4,500\n* fsync=off => 5,100\n* fsync=off and synchronous_commit=off => 5,500\n* fsync=off and full_page_writes=off => 5,150\n* fsync=off, synchronous_commit=off and full_page_writes=off => 5,500\n\ntmpfs, WAL on same tmpfs:\n* Default config: 5,200\n* full_page_writes=off => 5,200\n* fsync=off => 5,250\n* synchronous_commit=off => 5,200\n* fsync=off and synchronous_commit=off => 5,450\n* fsync=off and full_page_writes=off => 5,250\n* fsync=off, synchronous_commit=off and full_page_writes=off => 5,500\n\nNOTE: If I do one giant commit instead of lots of littler ones, I get\nmuch better speeds for the slower cases, but I never exceed 5,500\nwhich appears to be some kind of wall I can't break through.\n\nIf there's anything else I should tinker with, I'm all ears.\n", "msg_date": "Tue, 15 Jun 2010 23:30:30 -0700 (PDT)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On 16/06/10 18:30, [email protected] wrote:\n> On Jun 15, 4:18 pm, [email protected] (Josh Berkus) wrote:\n> \n>> On 6/15/10 10:37 AM, Chris Browne wrote:\n>>\n>> I'd like to see some figures about WAL on RAMfs vs. simply turning off\n>> fsync and full_page_writes. Per Gavin's tests, PostgreSQL is already\n>> close to TokyoCabinet/MongoDB performance just with those turned off; I\n>> wonder if actually having the WAL on a memory partition would make any\n>> real difference in throughput.\n>>\n>> I've seen a lot of call for this recently, especially since PostgreSQL\n>> seems to be increasingly in use as a reporting server for Hadoop. Might\n>> be worth experimenting with just making wal writing a no-op. 
We'd also\n>> want to disable checkpointing, of course.\n>>\n>> \n> My back-of-the-envelope experiment: Inserting single integers into a\n> table without indexes using a prepared query via psycopg2.\n>\n> Python Script:\n> import psycopg2\n> from time import time\n> conn = psycopg2.connect(database='jgardner')\n> cursor = conn.cursor()\n> cursor.execute(\"CREATE TABLE test (data int not null)\")\n> conn.commit()\n> cursor.execute(\"PREPARE ins AS INSERT INTO test VALUES ($1)\")\n> conn.commit()\n> start = time()\n> tx = 0\n> while time() - start< 1.0:\n> cursor.execute(\"EXECUTE ins(%s)\", (tx,));\n> conn.commit()\n> tx += 1\n> print tx\n> cursor.execute(\"DROP TABLE test\");\n> conn.commit();\n>\n> Local disk, WAL on same FS:\n> * Default config => 90\n> * full_page_writes=off => 90\n> * synchronous_commit=off => 4,500\n> * fsync=off => 5,100\n> * fsync=off and synchronous_commit=off => 5,500\n> * fsync=off and full_page_writes=off => 5,150\n> * fsync=off, synchronous_commit=off and full_page_writes=off => 5,500\n>\n> tmpfs, WAL on same tmpfs:\n> * Default config: 5,200\n> * full_page_writes=off => 5,200\n> * fsync=off => 5,250\n> * synchronous_commit=off => 5,200\n> * fsync=off and synchronous_commit=off => 5,450\n> * fsync=off and full_page_writes=off => 5,250\n> * fsync=off, synchronous_commit=off and full_page_writes=off => 5,500\n>\n> NOTE: If I do one giant commit instead of lots of littler ones, I get\n> much better speeds for the slower cases, but I never exceed 5,500\n> which appears to be some kind of wall I can't break through.\n>\n> If there's anything else I should tinker with, I'm all ears.\n>\n> \n\nSeeing some profiler output (e.g oprofile) for the fastest case (and \nmaybe 'em all later) might be informative about what limit is being hit \nhere.\n\nregards\n\nMark\n", "msg_date": "Wed, 16 Jun 2010 19:48:23 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\nHave you tried connecting using a UNIX socket instead of a TCP socket on \nlocalhost ? On such very short queries, the TCP overhead is significant.\n", "msg_date": "Wed, 16 Jun 2010 09:51:14 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> Have you tried connecting using a UNIX socket instead of a TCP socket on\n> localhost ? On such very short queries, the TCP overhead is significant.\n\nActually UNIX sockets are the default for psycopg2, had forgotten that.\n\nI get 7400 using UNIX sockets and 3000 using TCP (host=\"localhost\")\n", "msg_date": "Wed, 16 Jun 2010 09:53:58 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "[email protected] wrote:\n> NOTE: If I do one giant commit instead of lots of littler ones, I get\n> much better speeds for the slower cases, but I never exceed 5,500\n> which appears to be some kind of wall I can't break through.\n> \n\nThat's usually about where I run into the upper limit on how many \nstatements Python can execute against the database per second. 
Between \nthat and the GIL preventing better multi-core use, once you pull the \ndisk out and get CPU bound it's hard to use Python for load testing of \nsmall statements and bottleneck anywhere except in Python itself.\n\nI normally just write little performance test cases in the pgbench \nscripting language, then I get multiple clients and (in 9.0) multiple \ndriver threads all for free.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Wed, 16 Jun 2010 04:27:00 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\nFYI I've tweaked this program a bit :\n\nimport psycopg2\n from time import time\nconn = psycopg2.connect(database='peufeu')\ncursor = conn.cursor()\ncursor.execute(\"CREATE TEMPORARY TABLE test (data int not null)\")\nconn.commit()\ncursor.execute(\"PREPARE ins AS INSERT INTO test VALUES ($1)\")\ncursor.execute(\"PREPARE sel AS SELECT 1\")\nconn.commit()\nstart = time()\ntx = 0\nN = 100\nd = 0\nwhile d < 10:\n\tfor n in xrange( N ):\n\t\tcursor.execute(\"EXECUTE ins(%s)\", (tx,));\n\t\t#~ conn.commit()\n\t\t#~ cursor.execute(\"EXECUTE sel\" );\n\tconn.commit()\n\td = time() - start\n\ttx += N\nprint \"result : %d tps\" % (tx / d)\ncursor.execute(\"DROP TABLE test\");\nconn.commit();\n\nResults (Core 2 quad, ubuntu 10.04 64 bits) :\n\nSELECT 1 : 21000 queries/s (I'd say 50 us per query isn't bad !)\nINSERT with commit every 100 inserts : 17800 insets/s\nINSERT with commit every INSERT : 7650 tps\n\nfsync is on but not synchronous_commit.\n\n", "msg_date": "Wed, 16 Jun 2010 13:22:50 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Excerpts from [email protected]'s message of mié jun 16 02:30:30 -0400 2010:\n\n> NOTE: If I do one giant commit instead of lots of littler ones, I get\n> much better speeds for the slower cases, but I never exceed 5,500\n> which appears to be some kind of wall I can't break through.\n> \n> If there's anything else I should tinker with, I'm all ears.\n\nincrease wal_buffers?\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 16 Jun 2010 13:37:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> * fsync=off => 5,100\n> * fsync=off and synchronous_commit=off => 5,500\n\nNow, this *is* interesting ... why should synch_commit make a difference\nif fsync is off?\n\nAnyone have any ideas?\n\n> tmpfs, WAL on same tmpfs:\n> * Default config: 5,200\n> * full_page_writes=off => 5,200\n> * fsync=off => 5,250\n> * synchronous_commit=off => 5,200\n> * fsync=off and synchronous_commit=off => 5,450\n> * fsync=off and full_page_writes=off => 5,250\n> * fsync=off, synchronous_commit=off and full_page_writes=off => 5,500\n\nSo, in this test, it seems like having WAL on tmpfs doesn't make a\nsignificant difference for everything == off.\n\nI'll try running some tests on Amazon when I have a chance. 
It would be\nworthwhile to get figures without Python's \"ceiling\".\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 16 Jun 2010 12:00:22 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Wed, Jun 16, 2010 at 12:51 AM, Pierre C <[email protected]> wrote:\n>\n> Have you tried connecting using a UNIX socket instead of a TCP socket on\n> localhost ? On such very short queries, the TCP overhead is significant.\n>\n\nUnfortunately, this isn't an option for my use case. Carbonado only\nsupports TCP connections.\n\n\n-- \nJonathan Gardner\[email protected]\n", "msg_date": "Wed, 16 Jun 2010 12:11:27 -0700", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Wed, Jun 16, 2010 at 4:22 AM, Pierre C <[email protected]> wrote:\n>\n> import psycopg2\n> from time import time\n> conn = psycopg2.connect(database='peufeu')\n> cursor = conn.cursor()\n> cursor.execute(\"CREATE TEMPORARY TABLE test (data int not null)\")\n> conn.commit()\n> cursor.execute(\"PREPARE ins AS INSERT INTO test VALUES ($1)\")\n> cursor.execute(\"PREPARE sel AS SELECT 1\")\n> conn.commit()\n> start = time()\n> tx = 0\n> N = 100\n> d = 0\n> while d < 10:\n>        for n in xrange( N ):\n>                cursor.execute(\"EXECUTE ins(%s)\", (tx,));\n>                #~ conn.commit()\n>                #~ cursor.execute(\"EXECUTE sel\" );\n>        conn.commit()\n>        d = time() - start\n>        tx += N\n> print \"result : %d tps\" % (tx / d)\n> cursor.execute(\"DROP TABLE test\");\n> conn.commit();\n>\n\nI'm not surprised that Python add is so slow, but I am surprised that\nI didn't remember it was... ;-)\n\n-- \nJonathan Gardner\[email protected]\n", "msg_date": "Wed, 16 Jun 2010 12:17:11 -0700", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Wed, Jun 16, 2010 at 12:00 PM, Josh Berkus <[email protected]> wrote:\n>\n>> * fsync=off => 5,100\n>> * fsync=off and synchronous_commit=off => 5,500\n>\n> Now, this *is* interesting ... why should synch_commit make a difference\n> if fsync is off?\n>\n> Anyone have any ideas?\n>\n\nI may have stumbled upon this by my ignorance, but I thought I read\nthat synchronous_commit controlled whether it tries to line up commits\nor has a more free-for-all that may cause some intermediate weirdness.\n\n-- \nJonathan Gardner\[email protected]\n", "msg_date": "Wed, 16 Jun 2010 12:19:20 -0700", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Wed, Jun 16, 2010 at 1:27 AM, Greg Smith <[email protected]> wrote:\n>\n> I normally just write little performance test cases in the pgbench scripting\n> language, then I get multiple clients and (in 9.0) multiple driver threads\n> all for free.\n>\n\nSee, this is why I love these mailing lists. I totally forgot about\npgbench. 
I'm going to dump my cheesy python script and play with that\nfor a while.\n\n-- \nJonathan Gardner\[email protected]\n", "msg_date": "Wed, 16 Jun 2010 12:22:17 -0700", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" },
 { "msg_contents": "http://www.postgresql.org/docs/current/static/wal-async-commit.html\n\" the server waits for the transaction's WAL records to be flushed to permanent storage before returning a success indication to the client.\"\nI think with fynch=off, whether WAL gets written to disk or not is still controlled by synchronous_commit parameter. guessing here...\n\n> Date: Wed, 16 Jun 2010 12:19:20 -0700\n> Subject: Re: [PERFORM] PostgreSQL as a local in-memory cache\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> \n> On Wed, Jun 16, 2010 at 12:00 PM, Josh Berkus <[email protected]> wrote:\n> >\n> >> * fsync=off => 5,100\n> >> * fsync=off and synchronous_commit=off => 5,500\n> >\n> > Now, this *is* interesting ... why should synch_commit make a difference\n> > if fsync is off?\n> >\n> > Anyone have any ideas?\n> >\n> \n> I may have stumbled upon this by my ignorance, but I thought I read\n> that synchronous_commit controlled whether it tries to line up commits\n> or has a more free-for-all that may cause some intermediate weirdness.\n> \n> -- \n> Jonathan Gardner\n> [email protected]\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 16 Jun 2010 15:33:12 -0400", "msg_from": "Balkrishna Sharma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" },
 { "msg_contents": "On 6/16/10 12:00 PM, Josh Berkus wrote:\n>\n>> * fsync=off => 5,100\n>> * fsync=off and synchronous_commit=off => 5,500\n>\n> Now, this *is* interesting ... 
why should synch_commit make a difference\n> if fsync is off?\n>\n> Anyone have any ideas?\n\nI found that pgbench has \"noise\" of about 20% (I posted about this a couple days ago using data from 1000 identical pgbench runs). Unless you make a bunch of runs and average them, a difference of 5,100 to 5,500 appears to be meaningless.\n\nCraig\n\n>\n>> tmpfs, WAL on same tmpfs:\n>> * Default config: 5,200\n>> * full_page_writes=off => 5,200\n>> * fsync=off => 5,250\n>> * synchronous_commit=off => 5,200\n>> * fsync=off and synchronous_commit=off => 5,450\n>> * fsync=off and full_page_writes=off => 5,250\n>> * fsync=off, synchronous_commit=off and full_page_writes=off => 5,500\n>\n> So, in this test, it seems like having WAL on tmpfs doesn't make a\n> significant difference for everything == off.\n>\n> I'll try running some tests on Amazon when I have a chance. It would be\n> worthwhile to get figures without Python's \"ceiling\".\n>\n\n", "msg_date": "Wed, 16 Jun 2010 12:36:31 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> I'm not surprised that Python add is so slow, but I am surprised that\n> I didn't remember it was... ;-)\n\nit's not the add(), it's the time.time()...\n\n", "msg_date": "Wed, 16 Jun 2010 22:40:49 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Jun 14, 7:14 pm, \"[email protected]\"\n<[email protected]> wrote:\n> We have a fairly unique need for a local, in-memory cache. This will\n> store data aggregated from other sources. Generating the data only\n> takes a few minutes, and it is updated often. There will be some\n> fairly expensive queries of arbitrary complexity run at a fairly high\n> rate. We're looking for high concurrency and reasonable performance\n> throughout.\n>\n> The entire data set is roughly 20 MB in size. We've tried Carbonado in\n> front of SleepycatJE only to discover that it chokes at a fairly low\n> concurrency and that Carbonado's rule-based optimizer is wholly\n> insufficient for our needs. We've also tried Carbonado's Map\n> Repository which suffers the same problems.\n>\n> I've since moved the backend database to a local PostgreSQL instance\n> hoping to take advantage of PostgreSQL's superior performance at high\n> concurrency. Of course, at the default settings, it performs quite\n> poorly compares to the Map Repository and Sleepycat JE.\n>\n> My question is how can I configure the database to run as quickly as\n> possible if I don't care about data consistency or durability? That\n> is, the data is updated so often and it can be reproduced fairly\n> rapidly so that if there is a server crash or random particles from\n> space mess up memory we'd just restart the machine and move on.\n>\n> I've never configured PostgreSQL to work like this and I thought maybe\n> someone here had some ideas on a good approach to this.\n\nJust to summarize what I've been able to accomplish so far. By turning\nfsync and synchronize_commit off, and moving the data dir to tmpfs,\nI've been able to run the expensive queries much faster than BDB or\nthe MapRepository that comes with Carbonado. This is because\nPostgreSQL's planner is so much faster and better than whatever\nCarbonado has. Tweaking indexes has only made things run faster.\n\nRight now I'm wrapping up the project so that we can do some serious\nperformance benchmarks. 
I'll let you all know how it goes.\n\nAlso, just a note that setting up PostgreSQL for these weird scenarios\nturned out to be just a tiny bit harder than setting up SQLite. I\nremember several years ago when there was a push to simplify the\nconfiguration and installation of PostgreSQL, and I believe that that\nhas born fruit.\n", "msg_date": "Wed, 16 Jun 2010 19:33:13 -0700 (PDT)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "All,\n\nSo, I've been discussing this because using PostgreSQL on the caching \nlayer has become more common that I think most people realize. Jonathan \nis one of 4 companies I know of who are doing this, and with the growth \nof Hadoop and other large-scale data-processing technologies, I think \ndemand will increase.\n\nEspecially as, in repeated tests, PostgreSQL with persistence turned off \nis just as fast as the fastest nondurable NoSQL database. And it has a \nLOT more features.\n\nNow, while fsync=off and tmpfs for WAL more-or-less eliminate the IO for \ndurability, they don't eliminate the CPU time. Which means that a \ncaching version of PostgreSQL could be even faster. To do that, we'd \nneed to:\n\na) Eliminate WAL logging entirely\nb) Eliminate checkpointing\nc) Turn off the background writer\nd) Have PostgreSQL refuse to restart after a crash and instead call an \nexteral script (for reprovisioning)\n\nOf the three above, (a) is the most difficult codewise. (b)(c) and (d) \nshould be relatively straightforwards, although I believe that we now \nhave the bgwriter doing some other essential work besides syncing \nbuffers. There's also a narrower use-case in eliminating (a), since a \nnon-fsync'd server which was recording WAL could be used as part of a \nreplication chain.\n\nThis isn't on hackers because I'm not ready to start working on a patch, \nbut I'd like some feedback on the complexities of doing (b) and (c) as \nwell as how many people could use a non-persistant, in-memory postgres.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 17 Jun 2010 10:29:37 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> Especially as, in repeated tests, PostgreSQL with persistence turned off \n> is just as fast as the fastest nondurable NoSQL database. And it has a \n> LOT more features.\n\nAn option to completely disable WAL for such use cases would make it a lot \nfaster, especially in the case of heavy concurrent writes.\n\n> Now, while fsync=off and tmpfs for WAL more-or-less eliminate the IO for \n> durability, they don't eliminate the CPU time.\n\nActually the WAL overhead is some CPU and lots of locking.\n\n> Which means that a caching version of PostgreSQL could be even faster. 
\n> To do that, we'd need to:\n>\n> a) Eliminate WAL logging entirely\n> b) Eliminate checkpointing\n> c) Turn off the background writer\n> d) Have PostgreSQL refuse to restart after a crash and instead call an \n> exteral script (for reprovisioning)\n>\n> Of the three above, (a) is the most difficult codewise.\n\nActually, it's pretty easy, look in xlog.c\n\n", "msg_date": "Thu, 17 Jun 2010 20:44:04 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Hi,\n\nJosh Berkus <[email protected]> writes:\n> a) Eliminate WAL logging entirely\n> b) Eliminate checkpointing\n> c) Turn off the background writer\n> d) Have PostgreSQL refuse to restart after a crash and instead call an\n> exteral script (for reprovisioning)\n\nWell I guess I'd prefer a per-transaction setting, allowing to bypass\nWAL logging and checkpointing. Forcing the backend to care itself for\nwriting the data I'm not sure is a good thing, but if you say so.\n\nThen you could have the GUC set for a whole cluster, only a database\netc. We already have synchronous_commit to trade durability against\nperformances, we could maybe support protect_data = off too.\n\nThe d) point I'm not sure still applies if you have per transaction\nsetting, which I think makes the most sense. The data you choose not to\nprotect is missing at restart, just add some way to register a hook\nthere. We already have one (shared_preload_libraries) but it requires\ncoding in C. \n\nCalling a user function at the end of recovery and before accepting\nconnection would be good I think. A user function (per database) is\nbetter than a script because if you want to run it before accepting\nconnections and still cause changes in the database…\n\nRegards,\n-- \ndim\n", "msg_date": "Thu, 17 Jun 2010 21:01:19 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Dimitri Fontaine <[email protected]> writes:\n> Josh Berkus <[email protected]> writes:\n>> a) Eliminate WAL logging entirely\n>> b) Eliminate checkpointing\n>> c) Turn off the background writer\n>> d) Have PostgreSQL refuse to restart after a crash and instead call an\n>> exteral script (for reprovisioning)\n\n> Well I guess I'd prefer a per-transaction setting, allowing to bypass\n> WAL logging and checkpointing.\n\nNot going to happen; this is all or nothing.\n\n> Forcing the backend to care itself for\n> writing the data I'm not sure is a good thing, but if you say so.\n\nYeah, I think proposal (c) is likely to be a net loss.\n\n(a) and (d) are probably simple, if by \"reprovisioning\" you mean\n\"rm -rf $PGDATA; initdb\". Point (b) will be a bit trickier because\nthere are various housekeeping activities tied into checkpoints.\nI think you can't actually remove checkpoints altogether, just\nskip the flush-dirty-pages part.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Jun 2010 15:38:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache " }, { "msg_contents": "Josh Berkus wrote:\n> a) Eliminate WAL logging entirely\n> c) Turn off the background writer\n\nNote that if you turn off full_page_writes and set \nbgwriter_lru_maxpages=0, you'd get a substantial move in both these \ndirections without touching any code. Would help prove those as useful \ndirections to move toward or not. 
The difference in WAL writes just \nafter a checkpoint in particular, due to the full_page_writes behavior, \nis a significant portion of total WAL activity on most systems.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Thu, 17 Jun 2010 15:59:39 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> Well I guess I'd prefer a per-transaction setting, allowing to bypass\n> WAL logging and checkpointing. Forcing the backend to care itself for\n> writing the data I'm not sure is a good thing, but if you say so.\n\nWell if the transaction touches a system catalog it better be WAL-logged...\n\nA per-table (or per-index) setting makes more sense IMHO. For instance \"on \nrecovery, truncate this table\" (this was mentioned before).\nAnother option would be \"make the table data safe, but on recovery, \ndestroy and rebuild this index\" : because on a not so large, often updated \ntable, with often updated indexes, it may not take long to rebuild the \nindexes, but all those wal-logged index updates do add some overhead.\n\n", "msg_date": "Thu, 17 Jun 2010 22:12:25 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> Well I guess I'd prefer a per-transaction setting, allowing to bypass\n> WAL logging and checkpointing. \n\nNot even conceiveable. For this to work, we're talking about the whole\ndatabase installation. This is only a set of settings for a database\n*server* which is considered disposable and replaceable, where if it\nshuts down unexpectedly, you throw it away and replace it.\n\n> Forcing the backend to care itself for\n> writing the data I'm not sure is a good thing, but if you say so.\n\nOh, yeah, I guess we'd only be turning off the LRU cache operations of\nthe background writer. Same with checkpoints. Copying between\nshared_buffers and the LRU cache would still happen.\n\n> Calling a user function at the end of recovery and before accepting\n> connection would be good I think. A user function (per database) is\n> better than a script because if you want to run it before accepting\n> connections and still cause changes in the database…\n\nHmmm, you're not quite following my idea. There is no recovery. If the\ndatabase shuts down unexpectedly, it's toast and you replace it from\nanother copy somewhere else.\n\n> (a) and (d) are probably simple, if by \"reprovisioning\" you mean\n> \"rm -rf $PGDATA; initdb\".\n\nExactly. Followed by \"scp database_image\". 
Or heck, just replacing the\nwhole VM.\n\n> Point (b) will be a bit trickier because\n> there are various housekeeping activities tied into checkpoints.\n> I think you can't actually remove checkpoints altogether, just\n> skip the flush-dirty-pages part.\n\nYes, and we'd want to flush dirty pages on an actual shutdown command.\nWe do want to be able to shut down the DB on purpose.\n\n> Well if the transaction touches a system catalog it better be\n> WAL-logged...\n\nGiven the above, why?\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 17 Jun 2010 16:01:29 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> (a) and (d) are probably simple, if by \"reprovisioning\" you mean\n>> \"rm -rf $PGDATA; initdb\".\n\n> Exactly. Followed by \"scp database_image\". Or heck, just replacing the\n> whole VM.\n\nRight, that would work. I don't think you really need to implement that\ninside Postgres. I would envision having the startup script do it, ie\n\n\trm -rf $PGDATA\n\tcp -pr prepared-database-image $PGDATA\n\n\t# this loop exits when postmaster exits normally\n\twhile ! postmaster ...\n\tdo\n\t\trm -rf $PGDATA\n\t\tcp -pr prepared-database-image $PGDATA\n\tdone\n\nThen all you need is a tweak to make the postmaster exit(1) after\na crash instead of trying to launch recovery.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Jun 2010 19:25:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache " }, { "msg_contents": "Dimitri Fontaine wrote:\n>> Well I guess I'd prefer a per-transaction setting\n\nNot possible, as many others have said. As soon as you make an unsafe \ntransaction, all the other transactions have nothing to rely on.\n\nOn Thu, 17 Jun 2010, Pierre C wrote:\n> A per-table (or per-index) setting makes more sense IMHO. For instance \"on \n> recovery, truncate this table\" (this was mentioned before).\n\nThat would be much more valuable.\n\nI'd like to point out the costs involved in having a whole separate \n\"version\" of Postgres that has all this safety switched off. Package \nmanagers will not thank anyone for having to distribute another version of \nthe system, and woe betide the user who installs the wrong version because \n\"it runs faster\". No, this is much better as a configurable option.\n\nGoing back to the \"on recovery, truncate this table\". We already have a \nmechanism for skipping the WAL writes on an entire table - we do that for \ntables that have been created in the current transaction. It would surely \nbe a small step to allow this to be configurably permanent on a particular \ntable.\n\nMoreover, we already have a mechanism for taking a table that has had \nnon-logged changes, and turning it into a fully logged table - we do that \nto the above mentioned tables when the transaction commits. I would \nstrongly recommend providing an option to ALTER TABLE MAKE SAFE, which may \ninvolve some more acrobatics if the table is currently in use by multiple \ntransactions, but would be valuable.\n\nThis would allow users to create \"temporary tables\" that can be shared by \nseveral connections. 
It would also allow bulk loading in parallel of a \nsingle large table.\n\nWith these suggestions, we would still need to WAL-log all the metadata \nchanges, but I think in most circumstances that is not going to be a large \nburden on performance.\n\nMatthew\n\n-- \n Picard: I was just paid a visit from Q.\n Riker: Q! Any idea what he's up to?\n Picard: No. He said he wanted to be \"nice\" to me.\n Riker: I'll alert the crew.\n", "msg_date": "Fri, 18 Jun 2010 10:15:06 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> I'd like to point out the costs involved in having a whole separate \n> \"version\"\n\nIt must be a setting, not a version.\n\nFor instance suppose you have a session table for your website and a users \ntable.\n\n- Having ACID on the users table is of course a must ;\n- for the sessions table you can drop the \"D\"\n\nServer crash would force all users to re-login on your website but if your \nserver crashes enough that your users complain about that, you have \nanother problem anyway. Having the sessions table not WAL-logged (ie \nfaster) would not prevent you from having sessions.user_id REFERENCES \nusers( user_id ) ... so mixing safe and unsafe tables would be much more \npowerful than just having unsafe tables.\n\nAnd I really like the idea of non-WAL-logged indexes, too, since they can \nbe rebuilt as needed, the DBA could decide between faster index updates \nbut rebuild on crash, or normal updates and fast recovery.\n\nAlso materialized views etc, you can rebuild them on crash and the added \nupdate speed would be good.\n\n> Moreover, we already have a mechanism for taking a table that has had \n> non-logged changes, and turning it into a fully logged table - we do \n> that to the above mentioned tables when the transaction commits. I would \n> strongly recommend providing an option to ALTER TABLE MAKE SAFE, which \n> may involve some more acrobatics if the table is currently in use by \n> multiple transactions, but would be valuable.\n\nI believe the old discussions called this ALTER TABLE SET PERSISTENCE.\n\n> This would allow users to create \"temporary tables\" that can be shared \n> by several connections. It would also allow bulk loading in parallel of \n> a single large table.\n\nThis would need to WAL-log the entire table to send it to the slaves if \nreplication is enabled, but it's a lot faster than replicating each record.\n\n", "msg_date": "Fri, 18 Jun 2010 11:40:43 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> It must be a setting, not a version.\n> \n> For instance suppose you have a session table for your website and a\n> users table.\n> \n> - Having ACID on the users table is of course a must ;\n> - for the sessions table you can drop the \"D\"\n\nYou're trying to solve a different use-case than the one I am.\n\nYour use-case will be solved by global temporary tables. 
I suggest that\nyou give Robert Haas some help & feedback on that.\n\nMy use case is people using PostgreSQL as a cache, or relying entirely\non replication for durability.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 18 Jun 2010 13:55:28 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On 6/18/10 2:15 AM, Matthew Wakeling wrote:\n> I'd like to point out the costs involved in having a whole separate\n> \"version\" of Postgres that has all this safety switched off. Package\n> managers will not thank anyone for having to distribute another version\n> of the system, and woe betide the user who installs the wrong version\n> because \"it runs faster\". No, this is much better as a configurable option.\n\nAgreed, although initial alphas of this concept are likely to in fact be\na separate source code tree. Eventually when we have it working well it\ncould become an initdb-time option.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 18 Jun 2010 13:56:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Thu, Jun 17, 2010 at 1:29 PM, Josh Berkus <[email protected]> wrote:\n> a) Eliminate WAL logging entirely\n\nIn addition to global temporary tables, I am also planning to\nimplement unlogged tables, which are, precisely, tables for which no\nWAL is written. On restart, any such tables will be truncated. That\nshould give you the ability to do this (by making all your tables\nunlogged).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Mon, 21 Jun 2010 18:14:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Tom Lane wrote:\n> Dimitri Fontaine <[email protected]> writes:\n> > Josh Berkus <[email protected]> writes:\n> >> a) Eliminate WAL logging entirely\n\nIf we eliminate WAL logging, that means a reinstall is required for even\na postmaster crash, which is a new non-durable behavior.\n\nAlso, we just added wal_level = minimal, which might end up being a poor\nname choice if we want wal_level = off in PG 9.1. Perhaps we should\nhave used wal_level = crash_safe in 9.0.\n\nI have added the following TODO:\n\n\tConsider a non-crash-safe wal_level that eliminates WAL activity\n\t\n\t * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php \n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. 
+\n", "msg_date": "Wed, 23 Jun 2010 15:37:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Tom Lane wrote:\n> Dimitri Fontaine <[email protected]> writes:\n> > Josh Berkus <[email protected]> writes:\n> >> a) Eliminate WAL logging entirely\n> >> b) Eliminate checkpointing\n> >> c) Turn off the background writer\n> >> d) Have PostgreSQL refuse to restart after a crash and instead call an\n> >> exteral script (for reprovisioning)\n> \n> > Well I guess I'd prefer a per-transaction setting, allowing to bypass\n> > WAL logging and checkpointing.\n> \n> Not going to happen; this is all or nothing.\n> \n> > Forcing the backend to care itself for\n> > writing the data I'm not sure is a good thing, but if you say so.\n> \n> Yeah, I think proposal (c) is likely to be a net loss.\n> \n> (a) and (d) are probably simple, if by \"reprovisioning\" you mean\n> \"rm -rf $PGDATA; initdb\". Point (b) will be a bit trickier because\n> there are various housekeeping activities tied into checkpoints.\n> I think you can't actually remove checkpoints altogether, just\n> skip the flush-dirty-pages part.\n\nBased on this thread, I have developed the following documentation patch\nthat outlines the performance enhancements possible if durability is not\nrequired. The patch also documents that synchronous_commit = false has\npotential committed transaction loss from a database crash (as well as\nan OS crash).\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +", "msg_date": "Wed, 23 Jun 2010 15:40:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "2010/6/23 Bruce Momjian <[email protected]>:\n> Tom Lane wrote:\n>> Dimitri Fontaine <[email protected]> writes:\n>> > Josh Berkus <[email protected]> writes:\n>> >> a) Eliminate WAL logging entirely\n>\n> If we elimiate WAL logging, that means a reinstall is required for even\n> a postmaster crash, which is a new non-durable behavior.\n>\n> Also, we just added wal_level = minimal, which might end up being a poor\n> name choice of we want wal_level = off in PG 9.1.  Perhaps we should\n> have used wal_level = crash_safe in 9.0.\n>\n> I have added the following TODO:\n>\n>        Consider a non-crash-safe wal_level that eliminates WAL activity\n>\n>            * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php\n>\n> --\n\nisn't fsync to off enought?\n\nRegards\n\nPavel\n\n>  Bruce Momjian  <[email protected]>        http://momjian.us\n>  EnterpriseDB                             http://enterprisedb.com\n>\n>  + None of us is going to be here forever. 
+\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 23 Jun 2010 21:51:26 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Pavel Stehule wrote:\n> 2010/6/23 Bruce Momjian <[email protected]>:\n> > Tom Lane wrote:\n> >> Dimitri Fontaine <[email protected]> writes:\n> >> > Josh Berkus <[email protected]> writes:\n> >> >> a) Eliminate WAL logging entirely\n> >\n> > If we elimiate WAL logging, that means a reinstall is required for even\n> > a postmaster crash, which is a new non-durable behavior.\n> >\n> > Also, we just added wal_level = minimal, which might end up being a poor\n> > name choice of we want wal_level = off in PG 9.1. ?Perhaps we should\n> > have used wal_level = crash_safe in 9.0.\n> >\n> > I have added the following TODO:\n> >\n> > ? ? ? ?Consider a non-crash-safe wal_level that eliminates WAL activity\n> >\n> > ? ? ? ? ? ?* http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php\n> >\n> > --\n> \n> isn't fsync to off enought?\n\nWell, testing reported in the thread showed other settings also help,\nthough the checkpoint lengthening was not tested.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Wed, 23 Jun 2010 16:16:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian <[email protected]> wrote:\n> Tom Lane wrote:\n>> Dimitri Fontaine <[email protected]> writes:\n>> > Josh Berkus <[email protected]> writes:\n>> >> a) Eliminate WAL logging entirely\n>\n> If we elimiate WAL logging, that means a reinstall is required for even\n> a postmaster crash, which is a new non-durable behavior.\n>\n> Also, we just added wal_level = minimal, which might end up being a poor\n> name choice of we want wal_level = off in PG 9.1.  Perhaps we should\n> have used wal_level = crash_safe in 9.0.\n>\n> I have added the following TODO:\n>\n>        Consider a non-crash-safe wal_level that eliminates WAL activity\n>\n>            * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php\n\nI don't think we need a system-wide setting for that. I believe that\nthe unlogged tables I'm working on will handle that case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 23 Jun 2010 16:25:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas <[email protected]> wrote:\n> On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian <[email protected]> wrote:\n>> Tom Lane wrote:\n>>> Dimitri Fontaine <[email protected]> writes:\n>>> > Josh Berkus <[email protected]> writes:\n>>> >> a) Eliminate WAL logging entirely\n>>\n>> If we elimiate WAL logging, that means a reinstall is required for even\n>> a postmaster crash, which is a new non-durable behavior.\n>>\n>> Also, we just added wal_level = minimal, which might end up being a poor\n>> name choice of we want wal_level = off in PG 9.1.  
Perhaps we should\n>> have used wal_level = crash_safe in 9.0.\n>>\n>> I have added the following TODO:\n>>\n>>        Consider a non-crash-safe wal_level that eliminates WAL activity\n>>\n>>            * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php\n>\n> I don't think we need a system-wide setting for that.  I believe that\n> the unlogged tables I'm working on will handle that case.\n\nAren't they going to be truncated at startup? If the entire system is\nrunning without WAL, we would only need to do that in case of an\nunclean shutdown wouldn't we?\n\n\n-- \nDave Page\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 23 Jun 2010 21:27:38 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Robert Haas wrote:\n> On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian <[email protected]> wrote:\n> > Tom Lane wrote:\n> >> Dimitri Fontaine <[email protected]> writes:\n> >> > Josh Berkus <[email protected]> writes:\n> >> >> a) Eliminate WAL logging entirely\n> >\n> > If we elimiate WAL logging, that means a reinstall is required for even\n> > a postmaster crash, which is a new non-durable behavior.\n> >\n> > Also, we just added wal_level = minimal, which might end up being a poor\n> > name choice of we want wal_level = off in PG 9.1. ?Perhaps we should\n> > have used wal_level = crash_safe in 9.0.\n> >\n> > I have added the following TODO:\n> >\n> > ? ? ? ?Consider a non-crash-safe wal_level that eliminates WAL activity\n> >\n> > ? ? ? ? ? ?* http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php\n> \n> I don't think we need a system-wide setting for that. I believe that\n> the unlogged tables I'm working on will handle that case.\n\nUh, will we have some global unlogged setting, like for the system\ntables and stuff? It seems like an heavy burden to tell people they\nhave to create ever object as unlogged, and we would still generate log\nfor things like transaction commits.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Wed, 23 Jun 2010 16:27:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Dave Page <[email protected]> writes:\n> On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas <[email protected]> wrote:\n>> I don't think we need a system-wide setting for that. �I believe that\n>> the unlogged tables I'm working on will handle that case.\n\n> Aren't they going to be truncated at startup? If the entire system is\n> running without WAL, we would only need to do that in case of an\n> unclean shutdown wouldn't we?\n\nThe problem with a system-wide no-WAL setting is it means you can't\ntrust the system catalogs after a crash. Which means you are forced to\nuse initdb to recover from any crash, in return for not a lot of savings\n(for typical usages where there's not really much churn in the\ncatalogs). 
I tend to agree with Robert that a way to not log content\nupdates for individual user tables is likely to be much more useful in\npractice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Jun 2010 16:43:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache " }, { "msg_contents": "Tom Lane wrote:\n> Dave Page <[email protected]> writes:\n> > On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas <[email protected]> wrote:\n> >> I don't think we need a system-wide setting for that.  I believe that\n> >> the unlogged tables I'm working on will handle that case.\n> \n> > Aren't they going to be truncated at startup? If the entire system is\n> > running without WAL, we would only need to do that in case of an\n> > unclean shutdown wouldn't we?\n> \n> The problem with a system-wide no-WAL setting is it means you can't\n> trust the system catalogs after a crash. Which means you are forced to\n\nTrue, and in fact any postmaster crash could lead to corruption.\n\n> use initdb to recover from any crash, in return for not a lot of savings\n> (for typical usages where there's not really much churn in the\n> catalogs). I tend to agree with Robert that a way to not log content\n> updates for individual user tables is likely to be much more useful in\n> practice.\n\nOK, TODO removed.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Wed, 23 Jun 2010 16:45:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> The problem with a system-wide no-WAL setting is it means you can't\n> trust the system catalogs after a crash. Which means you are forced to\n> use initdb to recover from any crash, in return for not a lot of savings\n> (for typical usages where there's not really much churn in the\n> catalogs). \n\nWhat about having a \"catalog only\" WAL setting, userset?\n\nI'm not yet clear on the point but it seems that the per\ntransaction WAL setting is impossible because of catalogs (meaning\nmainly DDL support), though I can see us enforcing durability and crash\nsafety there.\n\nThat would probably mean that setting WAL level this low yet doing any\nkind of DDL would need to be either an ERROR, or better yet, a WARNING\nsaying that the WAL level cannot be that low and so has been raised by the\nsystem.\n\nRegards,\n-- \ndim\n", "msg_date": "Thu, 24 Jun 2010 10:25:23 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Fri, Jun 18, 2010 at 1:55 PM, Josh Berkus <[email protected]> wrote:\n>\n>> It must be a setting, not a version.\n>>\n>> For instance suppose you have a session table for your website and a\n>> users table.\n>>\n>> - Having ACID on the users table is of course a must ;\n>> - for the sessions table you can drop the \"D\"\n>\n> You're trying to solve a different use-case than the one I am.\n>\n> Your use-case will be solved by global temporary tables.  
I suggest that\n> you give Robert Haas some help & feedback on that.\n>\n> My use case is people using PostgreSQL as a cache, or relying entirely\n> on replication for durability.\n>\n> --\n>                                  -- Josh Berkus\n>                                     PostgreSQL Experts Inc.\n>                                     http://www.pgexperts.com\n>\n\n\nIs he? Wouldn't a global temporary table have content that is not\nvisible between db connections? A db session many not be the same as a\nuser session.\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Thu, 24 Jun 2010 01:40:23 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Thu, Jun 24, 2010 at 4:40 AM, Rob Wultsch <[email protected]> wrote:\n> On Fri, Jun 18, 2010 at 1:55 PM, Josh Berkus <[email protected]> wrote:\n>>\n>>> It must be a setting, not a version.\n>>>\n>>> For instance suppose you have a session table for your website and a\n>>> users table.\n>>>\n>>> - Having ACID on the users table is of course a must ;\n>>> - for the sessions table you can drop the \"D\"\n>>\n>> You're trying to solve a different use-case than the one I am.\n>>\n>> Your use-case will be solved by global temporary tables.  I suggest that\n>> you give Robert Haas some help & feedback on that.\n>>\n>> My use case is people using PostgreSQL as a cache, or relying entirely\n>> on replication for durability.\n>\n> Is he? Wouldn't a global temporary table have content that is not\n> visible between db connections? A db session many not be the same as a\n> user session.\n>\n\nI'm planning to implement global temporary tables, which can have\ndifferent contents for each user session.\n\nAnd I'm also planning to implement unlogged tables, which have the\nsame contents for all sessions but are not WAL-logged (and are\ntruncated on startup).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 24 Jun 2010 07:35:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> And I'm also planning to implement unlogged tables, which have the\n> same contents for all sessions but are not WAL-logged (and are\n> truncated on startup).\n\nYep. And it's quite possible that this will be adequate for most users.\n\nAnd it's also possible that the extra CPU which Robert isn't getting rid\nof (bgwriter, checkpointing, etc.) does not have a measurable impact on\nperformance. At this point, my idea (which I call\n\"RunningWithScissorsDB\") is only an idea for experimentation and\nperformance testing. It's pretty far off from being a TODO.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 24 Jun 2010 11:56:44 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "2010/6/24 Josh Berkus <[email protected]>:\n>\n>> And I'm also planning to implement unlogged tables, which have the\n>> same contents for all sessions but are not WAL-logged (and are\n>> truncated on startup).\n\nthis is similar MySQL's memory tables. 
Personally, I don't see any\npractical sense do same work on PostgreSQL now, when memcached exists.\nMuch more important is smarter cache controlling then we have now -\nmaybe with priorities for some tables and some operations\n(applications) - sometimes we don't need use cache for extra large\nscans.\n\nRegards\n\nPavel Stehule\n\n\n>\n> Yep.  And it's quite possible that this will be adequate for most users.\n>\n> And it's also possible that the extra CPU which Robert isn't getting rid\n> of (bgwriter, checkpointing, etc.) does not have a measurable impact on\n> performance.  At this point, my idea (which I call\n> \"RunningWithScissorsDB\") is only an idea for experimentation and\n> performance testing.  It's pretty far off from being a TODO.\n>\n\n> --\n>                                  -- Josh Berkus\n>                                     PostgreSQL Experts Inc.\n>                                     http://www.pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 24 Jun 2010 21:14:05 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:\n> 2010/6/24 Josh Berkus <[email protected]>:\n> >\n> >> And I'm also planning to implement unlogged tables, which have the\n> >> same contents for all sessions but are not WAL-logged (and are\n> >> truncated on startup).\n> \n> this is similar MySQL's memory tables. Personally, I don't see any\n> practical sense do same work on PostgreSQL now, when memcached exists.\n\nBecause memcache is yet another layer and increases overhead to the\napplication developers by adding yet another layer to work with. Non\nlogged tables would rock.\n\nSELECT * FROM foo;\n\n:D\n\nJD\n\n\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\n\n", "msg_date": "Thu, 24 Jun 2010 12:47:46 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "2010/6/24 Joshua D. Drake <[email protected]>:\n> On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:\n>> 2010/6/24 Josh Berkus <[email protected]>:\n>> >\n>> >> And I'm also planning to implement unlogged tables, which have the\n>> >> same contents for all sessions but are not WAL-logged (and are\n>> >> truncated on startup).\n>>\n>> this is similar MySQL's memory tables. Personally, I don't see any\n>> practical sense do same work on PostgreSQL now, when memcached exists.\n>\n> Because memcache is yet another layer and increases overhead to the\n> application developers by adding yet another layer to work with. Non\n> logged tables would rock.\n\nI see only one positive point - it can help to people with broken\ndesign application with migration to PostgreSQL.\n\nThere are different interesting feature - cached procedure's results\nlike Oracle 11. 
- it's more general.\n\nonly idea.\n\nFor me memory tables are nonsens, but what about memory cached\nmaterialised views (maybe periodically refreshed)?\n\nRegards\n\nPavel\n\n>\n> SELECT * FROM foo;\n>\n> :D\n\n:)\n>\n> JD\n>\n>\n>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n>\n>\n", "msg_date": "Thu, 24 Jun 2010 22:01:49 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\nOn Jun 24, 2010, at 4:01 PM, Pavel Stehule wrote:\n\n> 2010/6/24 Joshua D. Drake <[email protected]>:\n>> On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:\n>>> 2010/6/24 Josh Berkus <[email protected]>:\n>>>> \n>>>>> And I'm also planning to implement unlogged tables, which have the\n>>>>> same contents for all sessions but are not WAL-logged (and are\n>>>>> truncated on startup).\n>>> \n>>> this is similar MySQL's memory tables. Personally, I don't see any\n>>> practical sense do same work on PostgreSQL now, when memcached exists.\n>> \n>> Because memcache is yet another layer and increases overhead to the\n>> application developers by adding yet another layer to work with. Non\n>> logged tables would rock.\n> \n> I see only one positive point - it can help to people with broken\n> design application with migration to PostgreSQL.\n\nThe broken design is being required to work around PostgreSQL's lack of this optimization.\n\n> \n> There are different interesting feature - cached procedure's results\n> like Oracle 11. - it's more general.\n> \n> only idea.\n> \n> For me memory tables are nonsens, but what about memory cached\n> materialised views (maybe periodically refreshed)?\n\nNon-WAL-logged, non-fsynced tables are not equivalent to MySQL \"memory tables\". Such tables simply contain transient information. One can already make \"memory tables\" in PostgreSQL by making a tablespace in a tmpfs partition.\n\nI have been eagerly waiting for this feature for six years so that I can write proper queries against ever-changing session data with transactional semantics (which memcached cannot offer). The only restriction I see for these transient data tables is that they cannot be referenced by standard tables using foreign key constraints. Otherwise, these tables behave like any other. That's the benefit.\n\nCheers,\nM", "msg_date": "Thu, 24 Jun 2010 16:38:45 -0400", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "2010/6/24 A.M. <[email protected]>:\n>\n> On Jun 24, 2010, at 4:01 PM, Pavel Stehule wrote:\n>\n>> 2010/6/24 Joshua D. Drake <[email protected]>:\n>>> On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:\n>>>> 2010/6/24 Josh Berkus <[email protected]>:\n>>>>>\n>>>>>> And I'm also planning to implement unlogged tables, which have the\n>>>>>> same contents for all sessions but are not WAL-logged (and are\n>>>>>> truncated on startup).\n>>>>\n>>>> this is similar MySQL's memory tables. Personally, I don't see any\n>>>> practical sense do same work on PostgreSQL now, when memcached exists.\n>>>\n>>> Because memcache is yet another layer and increases overhead to the\n>>> application developers by adding yet another layer to work with. 
Non\n>>> logged tables would rock.\n>>\n>> I see only one positive point - it can help to people with broken\n>> design application with migration to PostgreSQL.\n>\n> The broken design is being required to work around PostgreSQL's lack of this optimization.\n>\n>>\n>> There are different interesting feature - cached procedure's results\n>> like Oracle 11. - it's more general.\n>>\n>> only idea.\n>>\n>> For me memory tables are nonsens, but what about memory cached\n>> materialised views (maybe periodically refreshed)?\n>\n> Non-WAL-logged, non-fsynced tables are not equivalent to MySQL \"memory tables\". Such tables simply contain transient information. One can already make \"memory tables\" in PostgreSQL by making a tablespace in a tmpfs partition.\n>\n> I have been eagerly waiting for this feature for six years so that I can write proper queries against ever-changing session data with transactional semantics (which memcached cannot offer). The only restriction I see for these transient data tables is that they cannot be referenced by standard tables using foreign key constraints. Otherwise, these tables behave like any other. That's the benefit.\n>\n\nif you remove WAL, then there are MVCC still - you have to do VACUUM,\nyou have to do ANALYZE, you have to thinking about indexes ...\nProcessing pipe for simple query is long too. The removing WAL doesn't\ndo memory database from Postgres. But You have to know best, what do\nyou do.\n\nRegards\n\nPavel Stehule\n\np.s. maybe memcached is too simply for you - there are more NoSQL db\n\n> Cheers,\n> M\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 24 Jun 2010 23:33:06 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "\n> this is similar MySQL's memory tables. Personally, I don't see any\n> practical sense do same work on PostgreSQL now, when memcached exists.\n\nThing is, if you only have one table (say, a sessions table) which you\ndon't want logged, you don't necessarily want to fire up a 2nd software\napplication just for that. Plus, recent testing seems to show that with\nno logging, memcached isn't really faster than PG.\n\nAlso, like for asynch_commit, this is something where users are\ncurrently turning off fsync. Any option where we can present users with\ncontrolled, predictable data loss instead of random corruption is a good\none.\n\n> Much more important is smarter cache controlling then we have now -\n> maybe with priorities for some tables and some operations\n> (applications) - sometimes we don't need use cache for extra large\n> scans.\n\nWell, that would be good *too*. You working on it? ;-)\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 24 Jun 2010 14:37:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "2010/6/24 Josh Berkus <[email protected]>:\n>\n>> this is similar MySQL's memory tables. Personally, I don't see any\n>> practical sense do same work on PostgreSQL now, when memcached exists.\n>\n> Thing is, if you only have one table (say, a sessions table) which you\n> don't want logged, you don't necessarily want to fire up a 2nd software\n> application just for that.  
Plus, recent testing seems to show that with\n> no logging, memcached isn't really faster than PG.\n\nsorry, I thinking some else. Not only WAL does significant overhead.\nYou need litlle bit more memory, much more processing time. With very\nfast operations, the bottle neck will be in interprocess communication\n- but it doesn't mean so pg isn't slower than memcached. I repeating\nit again - there are no any universal tool for all tasks.\n\n>\n> Also, like for asynch_commit, this is something where users are\n> currently turning off fsync.  Any option where we can present users with\n> controlled, predictable data loss instead of random corruption is a good\n> one.\n>\n\nit isn't too simple. What about statistics? These are used in system table.\n\n>> Much more important is smarter cache controlling then we have now -\n>> maybe with priorities for some tables and some operations\n>> (applications) - sometimes we don't need use cache for extra large\n>> scans.\n>\n> Well, that would be good *too*.  You working on it?  ;-)\n>\n\nno - just I know about possible problems with memory control.\n\n> --\n>                                  -- Josh Berkus\n>                                     PostgreSQL Experts Inc.\n>                                     http://www.pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 24 Jun 2010 23:55:48 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Dimitri Fontaine <[email protected]> writes:\n> > > Josh Berkus <[email protected]> writes:\n> > >> a) Eliminate WAL logging entirely\n> > >> b) Eliminate checkpointing\n> > >> c) Turn off the background writer\n> > >> d) Have PostgreSQL refuse to restart after a crash and instead call an\n> > >> exteral script (for reprovisioning)\n> > \n> > > Well I guess I'd prefer a per-transaction setting, allowing to bypass\n> > > WAL logging and checkpointing.\n> > \n> > Not going to happen; this is all or nothing.\n> > \n> > > Forcing the backend to care itself for\n> > > writing the data I'm not sure is a good thing, but if you say so.\n> > \n> > Yeah, I think proposal (c) is likely to be a net loss.\n> > \n> > (a) and (d) are probably simple, if by \"reprovisioning\" you mean\n> > \"rm -rf $PGDATA; initdb\". Point (b) will be a bit trickier because\n> > there are various housekeeping activities tied into checkpoints.\n> > I think you can't actually remove checkpoints altogether, just\n> > skip the flush-dirty-pages part.\n> \n> Based on this thread, I have developed the following documentation patch\n> that outlines the performance enhancements possible if durability is not\n> required. The patch also documents that synchronous_commit = false has\n> potential committed transaction loss from a database crash (as well as\n> an OS crash).\n\nApplied.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. 
+\n", "msg_date": "Mon, 28 Jun 2010 17:57:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian <[email protected]> wrote:\n>> The patch also documents that synchronous_commit = false has\n>> potential committed transaction loss from a database crash (as well as\n>> an OS crash).\n\nIs this actually true?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 29 Jun 2010 06:52:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Robert Haas wrote:\n> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian <[email protected]> wrote:\n> >> The patch also documents that synchronous_commit = false has\n> >> potential committed transaction loss from a database crash (as well as\n> >> an OS crash).\n> \n> Is this actually true?\n\nI asked on IRC and was told it is true, and looking at the C code it\nlooks true. What synchronous_commit = false does is to delay writing\nthe wal buffers to disk and fsyncing them, not just fsync, which is\nwhere the commit loss due to db process crash comes from.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Tue, 29 Jun 2010 09:32:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Bruce Momjian <[email protected]> wrote:\n \n> What synchronous_commit = false does is to delay writing\n> the wal buffers to disk and fsyncing them, not just fsync\n \nAh, that answers the question Josh Berkus asked here:\n \nhttp://archives.postgresql.org/pgsql-performance/2010-06/msg00285.php\n \n(which is something I was wondering about, too.)\n \n-Kevin\n", "msg_date": "Tue, 29 Jun 2010 09:14:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Tue, Jun 29, 2010 at 9:32 AM, Bruce Momjian <[email protected]> wrote:\n> Robert Haas wrote:\n>> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian <[email protected]> wrote:\n>> >> The patch also documents that synchronous_commit = false has\n>> >> potential committed transaction loss from a database crash (as well as\n>> >> an OS crash).\n>>\n>> Is this actually true?\n>\n> I asked on IRC and was told it is true, and looking at the C code it\n> looks true.  What synchronous_commit = false does is to delay writing\n> the wal buffers to disk and fsyncing them, not just fsync, which is\n> where the commit loss due to db process crash comes from.\n\nAh, I see. 
Thanks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 29 Jun 2010 13:09:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Robert Haas wrote:\n> On Tue, Jun 29, 2010 at 9:32 AM, Bruce Momjian <[email protected]> wrote:\n> > Robert Haas wrote:\n> >> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian <[email protected]> wrote:\n> >> >> The patch also documents that synchronous_commit = false has\n> >> >> potential committed transaction loss from a database crash (as well as\n> >> >> an OS crash).\n> >>\n> >> Is this actually true?\n> >\n> > I asked on IRC and was told it is true, and looking at the C code it\n> > looks true. ?What synchronous_commit = false does is to delay writing\n> > the wal buffers to disk and fsyncing them, not just fsync, which is\n> > where the commit loss due to db process crash comes from.\n> \n> Ah, I see. Thanks.\n\nI am personally surprised it was designed that way; I thought we would\njust delay fsync.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Tue, 29 Jun 2010 13:19:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>> I asked on IRC and was told it is true, and looking at the C code it\n>>> looks true. ?What synchronous_commit = false does is to delay writing\n>>> the wal buffers to disk and fsyncing them, not just fsync, which is\n>>> where the commit loss due to db process crash comes from.\n\n>> Ah, I see. Thanks.\n\n> I am personally surprised it was designed that way; I thought we would\n> just delay fsync.\n\nThat would require writing and syncing to be separable actions. If\nyou're using O_SYNC or similar, they aren't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Jun 2010 13:27:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> >>> I asked on IRC and was told it is true, and looking at the C code it\n> >>> looks true. ?What synchronous_commit = false does is to delay writing\n> >>> the wal buffers to disk and fsyncing them, not just fsync, which is\n> >>> where the commit loss due to db process crash comes from.\n> \n> >> Ah, I see. Thanks.\n> \n> > I am personally surprised it was designed that way; I thought we would\n> > just delay fsync.\n> \n> That would require writing and syncing to be separable actions. If\n> you're using O_SYNC or similar, they aren't.\n\nAh, very good point. I have added a C comment to clarify why this is\nthe current behavior; attached and applied.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. 
+", "msg_date": "Tue, 29 Jun 2010 14:45:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian <[email protected]> wrote:\n> Tom Lane wrote:\n>> Bruce Momjian <[email protected]> writes:\n>> >>> I asked on IRC and was told it is true, and looking at the C code it\n>> >>> looks true. ?What synchronous_commit = false does is to delay writing\n>> >>> the wal buffers to disk and fsyncing them, not just fsync, which is\n>> >>> where the commit loss due to db process crash comes from.\n>>\n>> >> Ah, I see.  Thanks.\n>>\n>> > I am personally surprised it was designed that way;  I thought we would\n>> > just delay fsync.\n>>\n>> That would require writing and syncing to be separable actions.  If\n>> you're using O_SYNC or similar, they aren't.\n>\n> Ah, very good point.  I have added a C comment to clarify why this is\n> the current behavior;  attached and applied.\n>\n> --\n>  Bruce Momjian  <[email protected]>        http://momjian.us\n>  EnterpriseDB                             http://enterprisedb.com\n\n\nThough has anybody seen a behaviour where synchronous_commit=off is\nslower than synchronous_commit=on ? Again there are two cases here\none with O_* flag and other with f*sync flags. But I had seen that\nbehavior with PostgreSQL 9.0 beta(2 I think) though havent really\ninvestigated it much yet .. (though now I dont remember which\nwal_sync_method flag) . Just curious if anybody has seen that\nbehavior..\n\nRegards,\nJignesh\n", "msg_date": "Tue, 29 Jun 2010 20:48:23 -0400", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Jignesh Shah wrote:\n> On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian <[email protected]> wrote:\n> > Tom Lane wrote:\n> >> Bruce Momjian <[email protected]> writes:\n> >> >>> I asked on IRC and was told it is true, and looking at the C code it\n> >> >>> looks true. ?What synchronous_commit = false does is to delay writing\n> >> >>> the wal buffers to disk and fsyncing them, not just fsync, which is\n> >> >>> where the commit loss due to db process crash comes from.\n> >>\n> >> >> Ah, I see. ?Thanks.\n> >>\n> >> > I am personally surprised it was designed that way; ?I thought we would\n> >> > just delay fsync.\n> >>\n> >> That would require writing and syncing to be separable actions. ?If\n> >> you're using O_SYNC or similar, they aren't.\n> >\n> > Ah, very good point. ?I have added a C comment to clarify why this is\n> > the current behavior; ?attached and applied.\n> >\n> > --\n> > ?Bruce Momjian ?<[email protected]> ? ? ? ?http://momjian.us\n> > ?EnterpriseDB ? ? ? ? ? ? ? ? ? ? ? ? ? ? http://enterprisedb.com\n> \n> \n> Though has anybody seen a behaviour where synchronous_commit=off is\n> slower than synchronous_commit=on ? Again there are two cases here\n> one with O_* flag and other with f*sync flags. But I had seen that\n> behavior with PostgreSQL 9.0 beta(2 I think) though havent really\n> investigated it much yet .. (though now I dont remember which\n> wal_sync_method flag) . Just curious if anybody has seen that\n> behavior..\n\nI have trouble believing how synchronous_commit=off could be slower than\n'on'.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. 
+\n", "msg_date": "Tue, 29 Jun 2010 21:39:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Tue, 2010-06-29 at 21:39 -0400, Bruce Momjian wrote:\n> Jignesh Shah wrote:\n> > On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian <[email protected]> wrote:\n> > > Tom Lane wrote:\n> > >> Bruce Momjian <[email protected]> writes:\n> > >> >>> I asked on IRC and was told it is true, and looking at the C code it\n> > >> >>> looks true. ?What synchronous_commit = false does is to delay writing\n> > >> >>> the wal buffers to disk and fsyncing them, not just fsync, which is\n> > >> >>> where the commit loss due to db process crash comes from.\n> > >>\n> > >> >> Ah, I see. ?Thanks.\n> > >>\n> > >> > I am personally surprised it was designed that way; ?I thought we would\n> > >> > just delay fsync.\n> > >>\n> > >> That would require writing and syncing to be separable actions. ?If\n> > >> you're using O_SYNC or similar, they aren't.\n> > >\n> > > Ah, very good point. ?I have added a C comment to clarify why this is\n> > > the current behavior; ?attached and applied.\n> > >\n> > > --\n> > > ?Bruce Momjian ?<[email protected]> ? ? ? ?http://momjian.us\n> > > ?EnterpriseDB ? ? ? ? ? ? ? ? ? ? ? ? ? ? http://enterprisedb.com\n> > \n> > \n> > Though has anybody seen a behaviour where synchronous_commit=off is\n> > slower than synchronous_commit=on ? Again there are two cases here\n> > one with O_* flag and other with f*sync flags. But I had seen that\n> > behavior with PostgreSQL 9.0 beta(2 I think) though havent really\n> > investigated it much yet .. (though now I dont remember which\n> > wal_sync_method flag) . Just curious if anybody has seen that\n> > behavior..\n> \n> I have trouble believing how synchronous_commit=off could be slower than\n> 'on'.\n> \n\nI wonder if it could be contention on wal buffers?\n\nSay I've turned synchronous_commit off, I drive enough traffic fill up\nmy wal_buffers. I assume that we would have to start writing buffers\ndown to disk before allocating to the new process.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Wed, 30 Jun 2010 10:30:57 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "Brad Nicholson wrote:\n> > > > Ah, very good point. ?I have added a C comment to clarify why this is\n> > > > the current behavior; ?attached and applied.\n> > > >\n> > > > --\n> > > > ?Bruce Momjian ?<[email protected]> ? ? ? ?http://momjian.us\n> > > > ?EnterpriseDB ? ? ? ? ? ? ? ? ? ? ? ? ? ? http://enterprisedb.com\n> > > \n> > > \n> > > Though has anybody seen a behaviour where synchronous_commit=off is\n> > > slower than synchronous_commit=on ? Again there are two cases here\n> > > one with O_* flag and other with f*sync flags. But I had seen that\n> > > behavior with PostgreSQL 9.0 beta(2 I think) though havent really\n> > > investigated it much yet .. (though now I dont remember which\n> > > wal_sync_method flag) . Just curious if anybody has seen that\n> > > behavior..\n> > \n> > I have trouble believing how synchronous_commit=off could be slower than\n> > 'on'.\n> > \n> \n> I wonder if it could be contention on wal buffers?\n> \n> Say I've turned synchronous_commit off, I drive enough traffic fill up\n> my wal_buffers. 
I assume that we would have to start writing buffers\n> down to disk before allocating to the new process.\n\nUh, good question. I know this report showed ynchronous_commit=off as\nfaster than 'on':\n\n\thttp://archives.postgresql.org/pgsql-performance/2010-06/msg00277.php\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Wed, 30 Jun 2010 11:45:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "I haven't jumped in yet on this thread, but here goes ....\n\nIf you're really looking for query performance, then any database which is\ndesigned with reliability and ACID consistency in mind is going to\ninherently have some mis-fit features.\n\nSome other ideas to consider, depending on your query mix:\n\n1. MySQL with the MyISAM database (non-ACID)\n\n2. Put an in-application generic query cache in front of the DB, that runs\nin the app address space, e.g. Cache' if using Java\n\n3. Using a DB is a good way to get generic querying capability, but if the\n\"where\" clause in the querying is over a small set of meta-data, and SQL\nsyntax is not a big requirement, consider non-RDBMS alternatives, e.g. use\nXPath over a W3C DOM object tree to get primary keys to in-memory hash\ntables (possibly distributed with something like memcached)\n\nOn Mon, Jun 14, 2010 at 9:14 PM, [email protected] <\[email protected]> wrote:\n\n> We have a fairly unique need for a local, in-memory cache. This will\n> store data aggregated from other sources. Generating the data only\n> takes a few minutes, and it is updated often. There will be some\n> fairly expensive queries of arbitrary complexity run at a fairly high\n> rate. We're looking for high concurrency and reasonable performance\n> throughout.\n>\n> The entire data set is roughly 20 MB in size. We've tried Carbonado in\n> front of SleepycatJE only to discover that it chokes at a fairly low\n> concurrency and that Carbonado's rule-based optimizer is wholly\n> insufficient for our needs. We've also tried Carbonado's Map\n> Repository which suffers the same problems.\n>\n> I've since moved the backend database to a local PostgreSQL instance\n> hoping to take advantage of PostgreSQL's superior performance at high\n> concurrency. Of course, at the default settings, it performs quite\n> poorly compares to the Map Repository and Sleepycat JE.\n>\n> My question is how can I configure the database to run as quickly as\n> possible if I don't care about data consistency or durability? That\n> is, the data is updated so often and it can be reproduced fairly\n> rapidly so that if there is a server crash or random particles from\n> space mess up memory we'd just restart the machine and move on.\n>\n> I've never configured PostgreSQL to work like this and I thought maybe\n> someone here had some ideas on a good approach to this.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI haven't jumped in yet on this thread, but here goes ....If you're really looking for query performance, then any database which is designed with reliability and ACID consistency in mind is going to inherently have some mis-fit features.\nSome other ideas to consider, depending on your query mix:1. MySQL with the MyISAM database (non-ACID)2. 
Put an in-application generic query cache in front of the DB, that runs in the app address space, e.g. Cache' if using Java\n3. Using a DB is a good way to get generic querying capability, but if the \"where\" clause in the querying is over a small set of meta-data, and SQL syntax is not a big requirement, consider non-RDBMS alternatives, e.g. use XPath over a W3C DOM object tree to get primary keys to in-memory hash tables (possibly distributed with something like memcached)\nOn Mon, Jun 14, 2010 at 9:14 PM, [email protected] <[email protected]> wrote:\nWe have a fairly unique need for a local, in-memory cache. This will\nstore data aggregated from other sources. Generating the data only\ntakes a few minutes, and it is updated often. There will be some\nfairly expensive queries of arbitrary complexity run at a fairly high\nrate. We're looking for high concurrency and reasonable performance\nthroughout.\n\nThe entire data set is roughly 20 MB in size. We've tried Carbonado in\nfront of SleepycatJE only to discover that it chokes at a fairly low\nconcurrency and that Carbonado's rule-based optimizer is wholly\ninsufficient for our needs. We've also tried Carbonado's Map\nRepository which suffers the same problems.\n\nI've since moved the backend database to a local PostgreSQL instance\nhoping to take advantage of PostgreSQL's superior performance at high\nconcurrency. Of course, at the default settings, it performs quite\npoorly compares to the Map Repository and Sleepycat JE.\n\nMy question is how can I configure the database to run as quickly as\npossible if I don't care about data consistency or durability? That\nis, the data is updated so often and it can be reproduced fairly\nrapidly so that if there is a server crash or random particles from\nspace mess up memory we'd just restart the machine and move on.\n\nI've never configured PostgreSQL to work like this and I thought maybe\nsomeone here had some ideas on a good approach to this.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 30 Jun 2010 11:42:50 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On 6/30/10 9:42 AM, Dave Crooke wrote:\n> I haven't jumped in yet on this thread, but here goes ....\n>\n> If you're really looking for query performance, then any database which\n> is designed with reliability and ACID consistency in mind is going to\n> inherently have some mis-fit features.\n>\n> Some other ideas to consider, depending on your query mix:\n>\n> 1. MySQL with the MyISAM database (non-ACID)\n>\n> 2. Put an in-application generic query cache in front of the DB, that\n> runs in the app address space, e.g. Cache' if using Java\n>\n> 3. Using a DB is a good way to get generic querying capability, but if\n> the \"where\" clause in the querying is over a small set of meta-data, and\n> SQL syntax is not a big requirement, consider non-RDBMS alternatives,\n> e.g. use XPath over a W3C DOM object tree to get primary keys to\n> in-memory hash tables (possibly distributed with something like memcached)\n\nThese would be good suggestions if the \"throwaway\" database was the only one. But in real life, these throwaway databases are built from other databases that are NOT throwaway, where the data matters and ACID is critical. 
In other words, they'll probably need Postgres anyway.\n\nSure, you could use both Postgres and MySQL/ISAM, but that means installing and maintaining both, plus building all of the other application layers to work on both systems.\n\nCraig\n", "msg_date": "Wed, 30 Jun 2010 10:06:22 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": "On Tue, Jun 29, 2010 at 9:39 PM, Bruce Momjian <[email protected]> wrote:\n> Jignesh Shah wrote:\n>> On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian <[email protected]> wrote:\n>> > Tom Lane wrote:\n>> >> Bruce Momjian <[email protected]> writes:\n>> >> >>> I asked on IRC and was told it is true, and looking at the C code it\n>> >> >>> looks true. ?What synchronous_commit = false does is to delay writing\n>> >> >>> the wal buffers to disk and fsyncing them, not just fsync, which is\n>> >> >>> where the commit loss due to db process crash comes from.\n>> >>\n>> >> >> Ah, I see. ?Thanks.\n>> >>\n>> >> > I am personally surprised it was designed that way; ?I thought we would\n>> >> > just delay fsync.\n>> >>\n>> >> That would require writing and syncing to be separable actions. ?If\n>> >> you're using O_SYNC or similar, they aren't.\n>> >\n>> > Ah, very good point. ?I have added a C comment to clarify why this is\n>> > the current behavior; ?attached and applied.\n>> >\n>> > --\n>> > ?Bruce Momjian ?<[email protected]> ? ? ? ?http://momjian.us\n>> > ?EnterpriseDB ? ? ? ? ? ? ? ? ? ? ? ? ? ? http://enterprisedb.com\n>>\n>>\n>> Though has anybody seen a behaviour where synchronous_commit=off is\n>> slower than synchronous_commit=on  ? Again there are two cases here\n>> one with O_* flag and other with f*sync flags. But I had seen that\n>> behavior with PostgreSQL 9.0 beta(2 I think) though havent really\n>> investigated it much yet .. (though now I dont remember which\n>> wal_sync_method flag) . Just curious if anybody has seen that\n>> behavior..\n>\n> I have trouble believing how synchronous_commit=off could be slower than\n> 'on'.\n>\n> --\n>  Bruce Momjian  <[email protected]>        http://momjian.us\n>  EnterpriseDB                             http://enterprisedb.com\n>\n>  + None of us is going to be here forever. +\n>\n\nHi Bruce,\n\nLet me clarify the problem a bit.. If the underlying WAL disk is SSD\nthen it seems I can get synchronous_commit=on to work faster than\nsynchronous_commit=off.. Yes sounds unintuitive to me. But the results\nseems to point in that direction. It could be that it hit some other\nbottleneck with synchronous_commit=off reaches that\nsynchronous_commit=on does not hit (or has not hit yet).\n\n Brads point of wal buffers could be valid. Though typically I havent\nseen the need to increase it beyond 1024kB yet.\n\nHopefully I will retry it with the latest PostgreSQL 9.0 bits and see\nit happens again.\nMore on that later.\n\nRegards,\nJignesh\n", "msg_date": "Wed, 30 Jun 2010 14:21:42 -0400", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" }, { "msg_contents": " On 6/30/2010 2:21 PM, Jignesh Shah wrote:\n> If the underlying WAL disk is SSD then it seems I can get \n> synchronous_commit=on to work faster than\n> synchronous_commit=off..\n\nThe first explanation that pops to mind is that synchronous_commit is \nwriting all the time, which doesn't have the same sort of penalty on \nSSD. 
Whereas if you turn it off, then there are some idle periods where \nthe SSD could be writing usefully, but instead it's buffering for the \nnext burst. The importance of that can be magnified on \noperating systems that do their own buffering and tend to lag behind \nwrites until they see an fsync call, as is the case on Linux with ext3.\n", "msg_date": "Thu, 01 Jul 2010 06:00:01 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL as a local in-memory cache" } ]
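Pulling together the knobs discussed in the thread above, a minimal postgresql.conf sketch for a deliberately non-durable, throwaway instance might look like the following. The parameter names are the standard 8.4/9.0-era settings the posters mention; the specific values are illustrative assumptions rather than recommendations made anywhere in the thread, and every one of them trades crash safety for speed.

	# Illustrative sketch only -- assumed values, not tested recommendations
	fsync = off                   # never force WAL to disk; an OS crash or power loss can corrupt the cluster
	synchronous_commit = off      # commits return before their WAL is written and flushed
	full_page_writes = off        # skip torn-page images in WAL
	wal_buffers = 16MB            # larger WAL buffer, fewer waits between the (rare) flushes
	checkpoint_segments = 64      # stretch checkpoints out...
	checkpoint_timeout = 30min    # ...so dirty pages are flushed as rarely as possible
	bgwriter_lru_maxpages = 0     # effectively disable background-writer cleaning

Bruce's documentation patch earlier in the thread covers the same ground in the manual, and the unlogged tables Robert describes would eventually make much of this unnecessary on a per-table basis.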
[ { "msg_contents": "Just curious if this would apply to PostgreSQL:\nhttp://queue.acm.org/detail.cfm?id=1814327\n\n<http://queue.acm.org/detail.cfm?id=1814327>Now that I've read it, it seems\nlike a no-brainer. So, how does PostgreSQL deal with the different latencies\ninvolved in accessing data on disk for searches / sorts vs. accessing data\nin memory? Is it allocated in a similar way as described in the article such\nthat disk access is reduced to a minimum?", "msg_date": "Mon, 14 Jun 2010 23:21:30 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "B-Heaps" }, { "msg_contents": "On 15/06/10 06:21, Eliot Gable wrote:\n> Just curious if this would apply to PostgreSQL:\n> http://queue.acm.org/detail.cfm?id=1814327\n>\n> <http://queue.acm.org/detail.cfm?id=1814327>Now that I've read it, it seems\n> like a no-brainer. So, how does PostgreSQL deal with the different latencies\n> involved in accessing data on disk for searches / sorts vs. accessing data\n> in memory? Is it allocated in a similar way as described in the article such\n> that disk access is reduced to a minimum?\n\nI don't think we have any binary heap structures that are large enough \nfor this to matter. We use a binary heap when merging tapes in the \ntuplesort code, for example, but that's tiny.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 15 Jun 2010 09:10:10 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Eliot Gable wrote:\n> Just curious if this would apply to \n> PostgreSQL: http://queue.acm.org/detail.cfm?id=1814327\n\nIt's hard to take this seriously at all when it's so ignorant of actual \nresearch in this area. Take a look at \nhttp://www.cc.gatech.edu/~bader/COURSES/UNM/ece637-Fall2003/papers/BFJ01.pdf \nfor a second, specifically page 9. See the \"van Emde Boas\" layout? \nThat's basically the same as what this article is calling a B-heap, and \nthe idea goes back to at least 1977. As you can see from that paper, \nthe idea of using it to optimize for multi-level caches goes back to at \nleast 2001. Based on the performance number, it seems a particularly \nhelpful optimization for the type of in-memory caching that his Varnish \ntool is good at, so kudos for reinventing the right wheel. But that's \nan environment with one level of cache: you're delivering something \nfrom memory, or not. You can't extrapolate from what works for that \nvery far.\n\n> So, how does PostgreSQL deal with the different latencies involved in \n> accessing data on disk for searches / sorts vs. accessing data in \n> memory? Is it allocated in a similar way as described in the article \n> such that disk access is reduced to a minimum?\n\nPostgreSQL is modeling a much more complicated situation where there are \nmany levels of caches, from CPU to disk. When executing a query, the \ndatabase tries to manage that by estimating the relative costs for CPU \noperations, row operations, sequential disk reads, and random disk \nreads. 
Those fundamental operations are then added up to build more \ncomplicated machinery like sorting. To minimize query execution cost, \nvarious query plans are considered, the cost computed for each one, and \nthe cheapest one gets executed. This has to take into account a wide \nvariety of subtle tradeoffs related to whether memory should be used for \nthings that would otherwise happen on disk. There are three primary \nways to search for a row, three main ways to do a join, two for how to \nsort, and they all need to have cost estimates made for them that \nbalance CPU time, memory, and disk access.\n\nThe problem Varnish is solving is most like how PostgreSQL decides what \ndisk pages to keep in memory, specifically the shared_buffers \nstructure. Even there the problem the database is trying to solve is \nquite a bit more complicated than what a HTTP cache has to deal with. \nFor details about what the database does there, see \"Inside the \nPostgreSQL Buffer Cache\" at http://projects.2ndquadrant.com/talks\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 15 Jun 2010 02:40:43 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "In response to Greg Smith :\n> For details about what the database does there, see \"Inside the \n> PostgreSQL Buffer Cache\" at http://projects.2ndquadrant.com/talks\n\nNice paper, btw., thanks for that!\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Tue, 15 Jun 2010 08:49:29 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "On Mon, 14 Jun 2010, Eliot Gable wrote:\n> Just curious if this would apply to PostgreSQL:\n> http://queue.acm.org/detail.cfm?id=1814327\n\nAbsolutely, and I said in \nhttp://archives.postgresql.org/pgsql-performance/2010-03/msg00272.php\nbut applied to the Postgres B-tree indexes instead of heaps. It's a pretty \nobvious performance improvement really - the principle is that when you do \nhave to fetch a page from a slower medium, you may as well make it count \nfor a lot.\n\nLots of research has already been done on this - the paper linked above is \nrather behind the times.\n\nHowever, AFAIK, Postgres has not implemented this in any of its indexing \nsystems.\n\nMatthew\n\n-- \n An ant doesn't have a lot of processing power available to it. I'm not trying\n to be speciesist - I wouldn't want to detract you from such a wonderful\n creature, but, well, there isn't a lot there, is there?\n -- Computer Science Lecturer\n", "msg_date": "Tue, 15 Jun 2010 13:23:43 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Greg Smith wrote:\n> Eliot Gable wrote:\n>> Just curious if this would apply to PostgreSQL: \n>> http://queue.acm.org/detail.cfm?id=1814327\n>\n> It's hard to take this seriously at all when it's so ignorant of \n> actual research in this area. Take a look at \n> http://www.cc.gatech.edu/~bader/COURSES/UNM/ece637-Fall2003/papers/BFJ01.pdf \n> for a second\nInteresting paper, thanks for the reference!\n> PostgreSQL is modeling a much more complicated situation where there \n> are many levels of caches, from CPU to disk. 
When executing a query, \n> the database tries to manage that by estimating the relative costs for \n> CPU operations, row operations, sequential disk reads, and random disk \n> reads. Those fundamental operations are then added up to build more \n> complicated machinery like sorting. To minimize query execution cost, \n> various query plans are considered, the cost computed for each one, \n> and the cheapest one gets executed. This has to take into account a \n> wide variety of subtle tradeoffs related to whether memory should be \n> used for things that would otherwise happen on disk. There are three \n> primary ways to search for a row, three main ways to do a join, two \n> for how to sort, and they all need to have cost estimates made for \n> them that balance CPU time, memory, and disk access.\nDo you think that the cache oblivious algorithm described in the paper \ncould speed up index scans hitting the disk Postgres (and os/hardware) \nmulti level memory case? (so e.g. random page cost could go down?)\n\nregards,\nYeb Havinga\n", "msg_date": "Tue, 15 Jun 2010 21:33:06 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "On Tue, Jun 15, 2010 at 8:23 AM, Matthew Wakeling <[email protected]> wrote:\n> Absolutely, and I said in\n> http://archives.postgresql.org/pgsql-performance/2010-03/msg00272.php\n> but applied to the Postgres B-tree indexes instead of heaps.\n\nThis is an interesting idea. I would guess that you could simulate\nthis to some degree by compiling PG with a larger block size. Have\nyou tried this to see whether/how much/for what kind of workloads it\nhelps?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 18 Jun 2010 07:54:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "On Fri, 18 Jun 2010, Robert Haas wrote:\n> On Tue, Jun 15, 2010 at 8:23 AM, Matthew Wakeling <[email protected]> wrote:\n>> Absolutely, and I said in\n>> http://archives.postgresql.org/pgsql-performance/2010-03/msg00272.php\n>> but applied to the Postgres B-tree indexes instead of heaps.\n>\n> This is an interesting idea. I would guess that you could simulate\n> this to some degree by compiling PG with a larger block size. Have\n> you tried this to see whether/how much/for what kind of workloads it\n> helps?\n\nTo a degree, that is the case. However, if you follow the thread a bit \nfurther back, you will find evidence that when the index is in memory, \nincreasing the page size actually decreases the performance, because it \nuses more CPU.\n\nTo make it clear - 8kB is not an optimal page size for either fully cached \ndata or sparsely cached data. For disc access, large pages are \nappropriate, on the order of 256kB. If the page size is much lower than \nthat, then the time taken to fetch it doesn't actually decrease much, and \nwe are trying to get the maximum amount of work done per fetch without \nslowing fetches down significantly.\n\nGiven such a large page size, it would then be appropriate to have a \nbetter data structure inside the page. Currently, our indexes (at least \nthe GiST ones - I haven't looked at the Btree ones) use a simple linear \narray in the index page. Using a proper tree inside the index page would \nimprove the CPU usage of the index lookups.\n\nOne detail that would need to be sorted out is the cache eviction policy. 
\nI don't know if it is best to evict whole 256kB pages, or to evict 8kB \npages. Probably the former, which would involve quite a bit of change to \nthe shared memory cache. I can see the cache efficiency decreasing as a \nresult of this, which is the only disadvantage I can see.\n\nThis sort of thing has been fairly well researched at an academic level, \nbut has not been implemented in that many real world situations. I would \nencourage its use in Postgres.\n\nMatthew\n\n-- \n Failure is not an option. It comes bundled with your Microsoft product. \n -- Ferenc Mantfeld\n", "msg_date": "Fri, 18 Jun 2010 17:33:58 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Matthew Wakeling wrote:\n> This sort of thing has been fairly well researched at an academic \n> level, but has not been implemented in that many real world \n> situations. I would encourage its use in Postgres.\n\nI guess, but don't forget that work on PostgreSQL is driven by what \nproblems people are actually running into. There's a long list of \nperformance improvements sitting in the TODO list waiting for people to \nfind time to work on them, ones that we're quite certain are useful. \nThat anyone is going to chase after any of these speculative ideas from \nacademic research instead of one of those is unlikely. Your \ncharacterization of the potential speed up here is \"Using a proper tree \ninside the index page would improve the CPU usage of the index lookups\", \nwhich seems quite reasonable. Regardless, when I consider \"is that \nsomething I have any reason to suspect is a bottleneck on common \nworkloads?\", I don't think of any, and return to working on one of \nthings I already know is instead.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 18 Jun 2010 13:53:24 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Greg Smith wrote:\n> Matthew Wakeling wrote:\n>> This sort of thing has been fairly well researched at an academic \n>> level, but has not been implemented in that many real world \n>> situations. I would encourage its use in Postgres.\n>\n> I guess, but don't forget that work on PostgreSQL is driven by what \n> problems people are actually running into. There's a long list of \n> performance improvements sitting in the TODO list waiting for people \n> to find time to work on them, ones that we're quite certain are \n> useful. That anyone is going to chase after any of these speculative \n> ideas from academic research instead of one of those is unlikely. \n> Your characterization of the potential speed up here is \"Using a \n> proper tree inside the index page would improve the CPU usage of the \n> index lookups\", which seems quite reasonable. Regardless, when I \n> consider \"is that something I have any reason to suspect is a \n> bottleneck on common workloads?\", I don't think of any, and return to \n> working on one of things I already know is instead.\n>\nThere are two different things concerning gist indexes:\n\n1) with larger block sizes and hence, larger # entries per gist page, \nresults in more generic keys of those pages. This in turn results in a \ngreater number of hits, when the index is queried, so a larger part of \nthe index is scanned. NB this has nothing to do with caching / cache \nsizes; it holds for every IO model. 
Tests performed by me showed \nperformance improvements of over 200%. Since then implementing a speedup \nhas been on my 'want to do list'.\n\n2) there are several approaches to get the # entries per page down. Two \nhave been suggested in the thread referred to by Matthew (virtual pages \n(but how to order these?) and tree within a page). It is interesting to \nsee if ideas from Prokop's cache oblivous algorithms match with this \nproblem to find a suitable virtual page format.\n\nregards,\nYeb Havinga\n\n", "msg_date": "Fri, 18 Jun 2010 20:30:17 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Yeb Havinga <[email protected]> wrote:\n \n> concerning gist indexes:\n> \n> 1) with larger block sizes and hence, larger # entries per gist\n> page, results in more generic keys of those pages. This in turn\n> results in a greater number of hits, when the index is queried, so\n> a larger part of the index is scanned. NB this has nothing to do\n> with caching / cache sizes; it holds for every IO model. Tests\n> performed by me showed performance improvements of over 200%.\n> Since then implementing a speedup has been on my 'want to do\n> list'.\n \nAs I recall, the better performance in your tests was with *smaller*\nGiST pages, right? (The above didn't seem entirely clear on that.)\n \n-Kevin\n", "msg_date": "Fri, 18 Jun 2010 13:44:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> Your characterization of the potential speed up here is \"Using a proper tree \n> inside the index page would improve the CPU usage of the index lookups\", \n> which seems quite reasonable. Regardless, when I consider \"is that \n> something I have any reason to suspect is a bottleneck on common \n> workloads?\", I don't think of any, and return to working on one of \n> things I already know is instead.\n\nNote also that this doesn't do a thing for b-tree indexes, which already\nhave an intelligent within-page structure. So that instantly makes it\nnot a mainstream issue. Perhaps somebody will be motivated to work on\nit, but most of us are chasing other things.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Jun 2010 15:41:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps " }, { "msg_contents": "Kevin Grittner wrote:\n> Yeb Havinga <[email protected]> wrote:\n> \n> \n>> concerning gist indexes:\n>>\n>> 1) with larger block sizes and hence, larger # entries per gist\n>> page, results in more generic keys of those pages. This in turn\n>> results in a greater number of hits, when the index is queried, so\n>> a larger part of the index is scanned. NB this has nothing to do\n>> with caching / cache sizes; it holds for every IO model. Tests\n>> performed by me showed performance improvements of over 200%.\n>> Since then implementing a speedup has been on my 'want to do\n>> list'.\n>> \n> \n> As I recall, the better performance in your tests was with *smaller*\n> GiST pages, right? 
(The above didn't seem entirely clear on that.)\n> \nYes, making pages smaller made index scanning faster.\n\n-- Yeb\n\n", "msg_date": "Fri, 18 Jun 2010 22:18:41 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "On Fri, Jun 18, 2010 at 1:53 PM, Greg Smith <[email protected]> wrote:\n> Matthew Wakeling wrote:\n>>\n>> This sort of thing has been fairly well researched at an academic level,\n>> but has not been implemented in that many real world situations. I would\n>> encourage its use in Postgres.\n>\n> I guess, but don't forget that work on PostgreSQL is driven by what problems\n> people are actually running into.  There's a long list of performance\n> improvements sitting in the TODO list waiting for people to find time to\n> work on them, ones that we're quite certain are useful.  That anyone is\n> going to chase after any of these speculative ideas from academic research\n> instead of one of those is unlikely.  Your characterization of the potential\n> speed up here is \"Using a proper tree inside the index page would improve\n> the CPU usage of the index lookups\", which seems quite reasonable.\n>  Regardless, when I consider \"is that something I have any reason to suspect\n> is a bottleneck on common workloads?\", I don't think of any, and return to\n> working on one of things I already know is instead.\n\nThis is drifting a bit off-topic for this thread, but it's not so easy\nto figure out from looking at the TODO which things are actually\nimportant. Performance-related improvements are mixed in with\nnon-performance related improvements, which are mixed in with things\nthat are probably not improvements at all. And even to the extent\nthat you can identify the stuff that's performance-related, it's far\nfrom obvious which things are most important. Any thoughts on that?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Sat, 19 Jun 2010 08:08:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" }, { "msg_contents": "Robert Haas wrote:\n> This is drifting a bit off-topic for this thread, but it's not so easy\n> to figure out from looking at the TODO which things are actually\n> important. Performance-related improvements are mixed in with\n> non-performance related improvements, which are mixed in with things\n> that are probably not improvements at all. And even to the extent\n> that you can identify the stuff that's performance-related, it's far\n> from obvious which things are most important. Any thoughts on that\n\nI don't think it's off topic at all actually, and as usually I'll be \nhappy to argue why. Reorganizing the TODO so that it's easier for \nnewcomers to consume is certainly a worthwhile but hard to \"fund\" (find \ntime to do relative to more important things) effort itself. My point \nwas more that statistically, *anything* on that list is likely a better \ncandidate for something to work on usefully than one of the random \ntheoretical performance improvements from research that pop on the lists \nfrom time to time. People get excited about these papers and blog posts \nsometimes, but the odds of those actually being in the critical path \nwhere it represents a solution to a current PostgreSQL bottleneck is \ndramatically lower than that you'll find one reading the list of *known* \nissues. Want to improve PostgreSQL performance? 
Spend more time \nreading the TODO, less looking around elsewhere for problems the \ndatabase may or may not have.\n\nI have a major time sink I'm due to free myself from this week, and the \nidea of providing some guidance for a \"low hanging performance fruit\" \nsection of the TODO is a good one I should take a look at. I have a \npersonal list of that sort already I should probably just make public, \nsince the ideas for improving things are not the valuable part I should \nworry about keeping private anyway.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 20 Jun 2010 15:57:13 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: B-Heaps" } ]
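For anyone wanting to experiment with the larger-block-size idea raised above: the page size is fixed when the server is built, but the value a given installation was compiled with can be checked from SQL. A minimal check, assuming a stock build:

    SHOW block_size;   -- compiled-in page size in bytes, 8192 on a default build

Changing it means recompiling PostgreSQL, after which heap and index pages (and therefore the per-page fan-out discussed in this thread) all use the new size.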
[ { "msg_contents": "Good morning List \r\n\r\nIn relation to the process of tuning the engine PostgreSQL database, especially \r\n7.3.7 version that is being used currently, agreaceria me clarify a procedure \r\n\r\nIf I have a server with 2 GB of RAM, it is said that the shared memory segment for \r\nthe engine of the database on a dedicated server should be between 25% and maximum 33% of the RAM. \r\n\r\nI'm doing the calculation would be the next agradeze I confirm if I am \r\ncorrect \r\n\r\n1 MB = 1,048,576 \r\n * 2 = 2097152 1048576 \r\n25% would be 2097152 * 25 / 100 = 524288 \r\nThen the shared_buffers should be 524288 / 8 = 65536 for 25% of 2 GB \r\nor should be 524 288? \r\n\r\n\r\nSincerely.\r\n\r\nJuan Pablo Sandoval Rivera\r\nTecnologo Prof. en Ing. de Sistemas\r\n\r\nLinux User : 322765 \r\nmsn : [email protected]\r\nyahoo : [email protected] (juan_pablos.rm)\r\nUIN : 276125187 (ICQ)\r\nJabber : [email protected]\r\nSkype : juan.pablo.sandoval.rivera\r\n\r\nAPOYA A ECOSEARCH.COM - Ayuda a salvar al Planeta.\r\n\r\n\r\n\r\n", "msg_date": "Wed, 16 Jun 2010 13:45:57 +0000", "msg_from": "Juan Pablo Sandoval Rivera <[email protected]>", "msg_from_op": true, "msg_subject": "Confirm calculus " } ]
[ { "msg_contents": "Good morning List \r\n\r\nIn relation to the process of tuning the engine PostgreSQL database, especially \r\n7.3.7 version that is being used currently, agreaceria me clarify a procedure \r\n\r\nIf I have a server with 2 GB of RAM, it is said that the shared memory segment for \r\nthe engine of the database on a dedicated server should be between 25% and maximum 33% of the RAM. \r\n\r\nI'm doing the calculation would be the next agradeze I confirm if I am \r\ncorrect \r\n\r\n1 MB = 1,048,576 \r\n * 2 = 2097152 1048576 \r\n25% would be 2097152 * 25 / 100 = 524288 \r\nThen the shared_buffers should be 524288 / 8 = 65536 for 25% of 2 GB \r\nor should be 524 288? \r\n\r\n\r\nSincerely.\r\n\r\nJuan Pablo Sandoval Rivera\r\nTecnologo Prof. en Ing. de Sistemas\r\n\r\nLinux User : 322765 \r\nmsn : [email protected]\r\nyahoo : [email protected] (juan_pablos.rm)\r\nUIN : 276125187 (ICQ)\r\nJabber : [email protected]\r\nSkype : juan.pablo.sandoval.rivera\r\n\r\nAPOYA A ECOSEARCH.COM - Ayuda a salvar al Planeta.\r\n\r\n\r\n\r\n", "msg_date": "Wed, 16 Jun 2010 13:46:07 +0000", "msg_from": "Juan Pablo Sandoval Rivera <[email protected]>", "msg_from_op": true, "msg_subject": "Confirm calculus " }, { "msg_contents": "Juan Pablo Sandoval Rivera <[email protected]> wrote:\n \n> In relation to the process of tuning the engine PostgreSQL\n> database, especially 7.3.7 version that is being used currently,\n \nDo you really mean PostgreSQL version 7.3.7 (not 8.3.7)? If so, you\nshould really consider upgrading. Performance is going to be much\nbetter, not to mention all the new features and bug fixes.\n \n> it is said that the shared memory segment for the engine of the\n> database on a dedicated server should be between 25% and maximum\n> 33% of the RAM.\n \nI think that started being a recommended practice with 8.1 or 8.2;\nin my limited experience with older versions, it didn't pay to go\nbeyond somewhere in the 100MB to 200MB range.\n \n-Kevin\n", "msg_date": "Wed, 16 Jun 2010 14:26:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirm calculus" }, { "msg_contents": "On Wed, Jun 16, 2010 at 7:46 AM, Juan Pablo Sandoval Rivera\n<[email protected]> wrote:\n> Good morning List\n>\n> In relation to the process of tuning the engine PostgreSQL database, especially\n> 7.3.7 version that is being used currently, agreaceria me clarify a procedure\n\nI concur with the other poster on keeping shared_mem lower on this\nolder version. Also, you should be running at the very least the\nlatest version of 7.3, which is 7.3.21 and available here:\n\nftp://ftp-archives.postgresql.org/pub/source/\n\nTuning 7.3 is more an effort in futility at this point, considering\nthat a non-tuned 8.3 or 8.4 install will still be much much faster.\nIf the lack of auto-casts in 8.3 impacts you negatively then at least\nlook at 8.2.latest.\n", "msg_date": "Wed, 16 Jun 2010 13:34:43 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confirm calculus" } ]
[ { "msg_contents": "Hello,I will have a web application having postgres 8.4+ as backend. At any given time, there will be max of 1000 parallel web-users interacting with the database (read/write)I wish to do performance testing of 1000 simultaneous read/write to the database.\nI can do a simple unix script on the postgres server and have parallel updates fired for example with an ampersand at the end. Example:\necho '\\timing \\\\update \"DAPP\".emp_data set f1 = 123where emp_id =0;' | \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid1=$!\t echo '\\timing \\\\update \"DAPP\".emp_data set f1 = 123 where emp_id =2;' | \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid2=$!\t echo '\\timing \\\\update \"DAPP\".emp_data set f1 = 123 where emp_id =4;' | \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid3=$!\t .............\n\nMy question is:Am I losing something by firing these queries directly off the server and should I look at firing the queries from different IP address (as it would happen in a web application). Would the way postgres opens sockets/allocates buffer etc change in the two approaches and I get non-realistic results by a unix script on the server ?It will be very tedious exercise to have 1000 different machines (IP address) and each firing a query; all the same time. But at the same time, I want to be absolutely sure my test would give the same result in production (requirements for latency for read/write is very very low)I am not interested in the network time; just the database read/write time.\n\nThanks for any tips !-Bala\n \t\t \t \t\t \n_________________________________________________________________\nThe New Busy think 9 to 5 is a cute idea. Combine multiple calendars with Hotmail. \nhttp://www.windowslive.com/campaign/thenewbusy?tile=multicalendar&ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_5\n\n\n\n\n\nHello,I will have a web application having postgres 8.4+ as backend. At any given time, there will be max of 1000 parallel web-users interacting with the database (read/write)I wish to do performance testing of 1000 simultaneous read/write to the database.I can do a simple unix script on the postgres server and have parallel updates fired for example with an ampersand at the end. Example:echo '\\timing \\\\update \"DAPP\".emp_data set f1 = 123where  emp_id =0;' | \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid1=$!   echo '\\timing \\\\update \"DAPP\".emp_data set f1 = 123 where  emp_id =2;' | \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid2=$!    echo '\\timing \\\\update \"DAPP\".emp_data set f1 = 123 where  emp_id =4;' | \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid3=$!    .............My question is:Am I losing something by firing these queries directly off the server and should I look at firing the queries from different IP address (as it would happen in a web application). Would the way postgres opens sockets/allocates buffer etc change in the two approaches and I get non-realistic results by a unix script on the server ?It will be very tedious  exercise to have 1000 different machines (IP address)  and each firing a query; all the same time. 
But at the same time, I want to be absolutely sure my test would give the same result in production (requirements for latency for read/write is very very low)I am not interested in the network time; just the database read/write time.Thanks for any tips !-Bala\nThe New Busy think 9 to 5 is a cute idea. Combine multiple calendars with Hotmail. Get busy.", "msg_date": "Wed, 16 Jun 2010 13:00:22 -0400", "msg_from": "Balkrishna Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel queries for a web-application |performance testing" }, { "msg_contents": "Balkrishna Sharma <[email protected]> wrote:\n \n> I wish to do performance testing of 1000 simultaneous read/write\n> to the database.\n \nYou should definitely be using a connection pool of some sort. Both\nyour throughput and response time will be better that way. You'll\nwant to test with different pool sizes, but I've found that a size\nwhich allows the number of active queries in PostgreSQL to be\nsomewhere around (number_of_cores * 2) + effective_spindle_count to\nbe near the optimal size.\n \n> My question is:Am I losing something by firing these queries\n> directly off the server and should I look at firing the queries\n> from different IP address (as it would happen in a web application).\n \nIf you run the client side of your test on the database server, the\nCPU time used by the client will probably distort your results. I\nwould try using one separate machine to generate the requests, but\nmonitor to make sure that the client machine isn't hitting some\nbottleneck (like CPU time). If the client is the limiting factor,\nyou may need to use more than one client machine. No need to use\n1000 different client machines. :-)\n \n-Kevin\n", "msg_date": "Wed, 16 Jun 2010 16:19:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel queries for a web-application\n\t |performance testing" }, { "msg_contents": "Balkrishna Sharma <[email protected]> writes:\n> I will have a web application having postgres 8.4+ as backend. At any given time, there will be max of 1000 parallel web-users interacting with the database (read/write)\n> I wish to do performance testing of 1000 simultaneous read/write to\n> the database.\n\nSee about tsung, and either benckmarck only the PostgreSQL side of\nthings, or at the HTTP side of things directly : that will run your\napplication code against PostgreSQL.\n\n http://tsung.erlang-projects.org/\n\nAnd as Kevin said, consider using a connection pool, such as\npgbouncer. Once you have setup the benchmark with Tsung, adding\npgbouncer and comparing the results will be easy.\n\nRegards,\n-- \ndim\n", "msg_date": "Thu, 17 Jun 2010 10:59:57 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel queries for a web-application |performance testing" }, { "msg_contents": "On Wed, 16 Jun 2010, Balkrishna Sharma wrote:\n> Hello,I will have a web application having postgres 8.4+ as backend. At \n> any given time, there will be max of 1000 parallel web-users interacting \n> with the database (read/write)I wish to do performance testing of 1000 \n> simultaneous read/write to the database.\n\nWhen you set up a server that has high throughput requirements, the last \nthing you want to do is use it in a manner that cripples its throughput. \nDon't try and have 1000 parallel Postgres backends - it will process those \nqueries slower than the optimal setup. 
You should aim to have \napproximately ((2 * cpu core count) + effective spindle count) number of \nbackends, as that is the point at which throughput is the greatest. You \ncan use pgbouncer to achieve this.\n\n> I can do a simple unix script on the postgres server and have parallel \n> updates fired for example with an ampersand at the end. Example:\n\n> echo '\\timing \\\\update \"DAPP\".emp_data set f1 = 123where emp_id =0;' | \n> \"psql\" test1 postgres|grep \"Time:\"|cut -d' ' -f2- >> \n> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid1=$! echo '\\timing \n> \\\\update \"DAPP\".emp_data set f1 = 123 where emp_id =2;' | \"psql\" test1 \n> postgres|grep \"Time:\"|cut -d' ' -f2- >> \n> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid2=$! echo '\\timing \n> \\\\update \"DAPP\".emp_data set f1 = 123 where emp_id =4;' | \"psql\" test1 \n> postgres|grep \"Time:\"|cut -d' ' -f2- >> \n> \"/home/user/Documents/temp/logs/$NUM.txt\" &pid3=$! .............\n\nDon't do that. The overhead of starting up an echo, a psql, and a grep \nwill limit the rate at which these queries can be fired at Postgres, and \nconsume quite a lot of CPU. Use a proper benchmarking tool, possibly on a \ndifferent server.\n\nAlso, you should be using a different username to \"postgres\" - that one is \nkind of reserved for superuser operations.\n\nMatthew\n\n-- \n People who love sausages, respect the law, and work with IT standards \n shouldn't watch any of them being made. -- Peter Gutmann\n", "msg_date": "Thu, 17 Jun 2010 10:41:44 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel queries for a web-application |performance\n testing" }, { "msg_contents": "\n> When you set up a server that has high throughput requirements, the last \n> thing you want to do is use it in a manner that cripples its throughput. \n> Don't try and have 1000 parallel Postgres backends - it will process \n> those queries slower than the optimal setup. You should aim to have \n> approximately ((2 * cpu core count) + effective spindle count) number of \n> backends, as that is the point at which throughput is the greatest. You \n> can use pgbouncer to achieve this.\n\nThe same is true of a web server : 1000 active php interpreters (each \neating several megabytes or more) are not ideal for performance !\n\nFor php, I like lighttpd with php-fastcgi : the webserver proxies requests \nto a small pool of php processes, which are only busy while generating the \npage. Once the page is generated the webserver handles all (slow) IO to \nthe client.\n\nAn interesting side effect is that the number of database connections is \nlimited to the number of PHP processes in the pool, so you don't even need \na postgres connection pooler (unless you have lots of php boxes)...\n", "msg_date": "Thu, 17 Jun 2010 13:12:22 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel queries for a web-application |performance\n testing" }, { "msg_contents": "\"Pierre C\" <[email protected]> writes:\n> The same is true of a web server : 1000 active php interpreters (each eating\n> several megabytes or more) are not ideal for performance !\n>\n> For php, I like lighttpd with php-fastcgi : the webserver proxies requests\n> to a small pool of php processes, which are only busy while generating the\n> page. 
Once the page is generated the webserver handles all (slow) IO to the\n> client.\n\nI use haproxy for that, it handles requests queues very effectively.\n-- \ndim\n", "msg_date": "Thu, 17 Jun 2010 13:31:51 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel queries for a web-application |performance testing" } ]
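One concrete way to apply the advice in this thread is to drive the test with pgbench (shipped in contrib for 8.4) instead of a shell loop of psql processes, keeping the connection count near the (2 * cores) + effective spindles figure rather than 1000. A rough sketch, reusing the table and column names from the original script; the key range and client count are placeholders to adjust for the real table size and hardware:

    -- emp_update.sql: a pgbench custom script, one randomised UPDATE per transaction
    \setrandom id 0 10000
    UPDATE "DAPP".emp_data SET f1 = 123 WHERE emp_id = :id;

    -- run from a separate client machine, for example:
    --   pgbench -n -c 20 -T 60 -f emp_update.sql test1

pgbench then reports overall transactions per second for the run, which is easier to compare between configurations (with and without a pooler such as pgbouncer, for instance) than grepping \timing output.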
[ { "msg_contents": "Hi,\n\nI am new to this list so please forgive me if it not fits the standards.\n\nI have the following query that I run agains postgresql 8.2:\n\nselect distinct \n m.koid,\n m.name, \n m.farbe, \n m.aktennummer, \n m.durchgefuehrt_von, \n m.durchgefuehrt_bis, \n rf.bezeichnung as rf_bezeichnung, \n mt.bezeichnung as mt_bezeichnung, \n wl_farben.wert, \n v_adr.text_lkr, \n v_adr.text_gemeinde \nfrom \n (((((( boden.massnahmeobjekt m left join boden.massnahmengruppe mg on\nm.massnahmengruppe_koid=mg.koid) \n left join boden.th_referate rf on mg.angelegt_von_referat=rf.th_id) \n left join boden.th_massnahmentyp mt on m.massnahmentyp=mt.th_id) \n left join boden.wl_farben wl_farben on m.farbe=wl_farben.wl_id) \n left join boden_views.v_z_lc_flst v_flst on m.koid=v_flst.koid)\n left join boden_views.v_z_lc_adresse v_adr on m.koid=v_adr.koid)\nwhere m.aktennummer ~* 'M\\\\-2009\\\\-1' \norder by koid asc limit 100\n\n-----------------\nIt takes a around 10 secs to complete with the following plan:\n\n----------------\n\nLimit (cost=128494.42..128494.69 rows=9 width=1212) (actual\ntime=12463.236..12464.675 rows=100 loops=1)\n -> Unique (cost=128494.42..128494.69 rows=9 width=1212) (actual\ntime=12463.206..12464.183 rows=100 loops=1)\n -> Sort (cost=128494.42..128494.44 rows=9 width=1212) (actual\ntime=12463.178..12463.490 rows=123 loops=1)\n Sort Key: m.koid, m.name, m.farbe, m.aktennummer,\nm.durchgefuehrt_von, m.durchgefuehrt_bis, rf.bezeichnung,\nmt.bezeichnung, wl_farben.wert, t2.bezeichnung, t3.bezeichnung\n -> Hash Left Join (cost=119377.13..128494.28 rows=9\nwidth=1212) (actual time=10475.870..12416.672 rows=3922 loops=1)\n Hash Cond: (m.koid = lc.koid)\n -> Nested Loop Left Join (cost=26.59..5848.52\nrows=3 width=1148) (actual time=1.697..1711.535 rows=3813 loops=1)\n -> Nested Loop Left Join\n(cost=26.59..5847.53 rows=3 width=1156) (actual time=1.664..1632.871\nrows=3813 loops=1)\n -> Nested Loop Left Join\n(cost=26.59..5846.68 rows=3 width=1152) (actual time=1.617..1538.819\nrows=3813 loops=1)\n -> Nested Loop Left Join\n(cost=0.00..3283.05 rows=1 width=1148) (actual time=1.267..1352.254\nrows=3694 loops=1)\n -> Nested Loop Left Join\n(cost=0.00..3282.77 rows=1 width=1120) (actual time=1.230..1232.264\nrows=3694 loops=1)\n -> Nested Loop Left\nJoin (cost=0.00..3274.48 rows=1 width=1124) (actual\ntime=1.089..1143.501 rows=3694 loops=1)\n Join Filter:\n(m.massnahmentyp = mt.th_id)\n -> Nested Loop\nLeft Join (cost=0.00..3273.03 rows=1 width=1100) (actual\ntime=0.999..671.405 rows=3694 loops=1)\n Join\nFilter: (m.farbe = wl_farben.wl_id)\n -> Seq\nScan on massnahmeobjekt m (cost=0.00..3271.88 rows=1 width=1068)\n(actual time=0.909..425.324 rows=3694 loops=1)\n \nFilter: ((aktennummer)::text ~* 'M\\\\-2009\\\\-1'::text)\n -> Seq\nScan on wl_farben (cost=0.00..1.07 rows=7 width=36) (actual\ntime=0.005..0.024 rows=7 loops=3694)\n -> Seq Scan on\nth_massnahmentyp mt (cost=0.00..1.20 rows=20 width=40) (actual\ntime=0.003..0.060 rows=20 loops=3694)\n -> Index Scan using\nidx_massnahmengruppe_koid on massnahmengruppe mg (cost=0.00..8.28\nrows=1 width=12) (actual time=0.009..0.012 rows=1 loops=3694)\n\n\n--------------------------\nBut when I run analyse the same query runs for hours. 
(See eyplain\noutput below)\n--------------------\n\n\n\nLimit (cost=111795.21..111795.24 rows=1 width=149) (actual\ntime=10954094.322..10954095.612 rows=100 loops=1)\n -> Unique (cost=111795.21..111795.24 rows=1 width=149) (actual\ntime=10954094.316..10954095.165 rows=100 loops=1)\n -> Sort (cost=111795.21..111795.22 rows=1 width=149) (actual\ntime=10954094.310..10954094.600 rows=123 loops=1)\n Sort Key: m.koid, m.name, m.farbe, m.aktennummer,\nm.durchgefuehrt_von, m.durchgefuehrt_bis, rf.bezeichnung,\nmt.bezeichnung, wl_farben.wert, t2.bezeichnung, t3.bezeichnung\n -> Nested Loop Left Join (cost=101312.40..111795.20\nrows=1 width=149) (actual time=7983.197..10954019.963 rows=3922 loops=1)\n Join Filter: (m.koid = lc.koid)\n -> Nested Loop Left Join (cost=0.00..3291.97\nrows=1 width=119) (actual time=1.083..2115.512 rows=3813 loops=1)\n -> Nested Loop Left Join (cost=0.00..3291.69\nrows=1 width=115) (actual time=0.980..2018.008 rows=3813 loops=1)\n -> Nested Loop Left Join\n(cost=0.00..3283.41 rows=1 width=119) (actual time=0.868..1874.309\nrows=3813 loops=1)\n Join Filter: (m.massnahmentyp =\nmt.th_id)\n -> Nested Loop Left Join\n(cost=0.00..3281.96 rows=1 width=105) (actual time=0.844..1394.628\nrows=3813 loops=1)\n Join Filter: (m.farbe =\nwl_farben.wl_id)\n -> Nested Loop Left Join\n(cost=0.00..3280.80 rows=1 width=94) (actual time=0.825..1168.177\nrows=3813 loops=1)\n -> Nested Loop Left\nJoin (cost=0.00..3280.47 rows=1 width=102) (actual time=0.808..1069.334\nrows=3813 loops=1)\n -> Nested Loop\nLeft Join (cost=0.00..3280.18 rows=1 width=98) (actual\ntime=0.694..918.863 rows=3813 loops=1)\n -> Seq\nScan on massnahmeobjekt m (cost=0.00..3271.88 rows=1 width=94) (actual\ntime=0.387..577.771 rows=3694 loops=1)\n \nFilter: ((aktennummer)::text ~* 'M\\\\-2009\\\\-1'::text)\n -> Index\nScan using idx_boden_lc_flst_koid on lc_flst lc (cost=0.00..8.30 rows=1\nwidth=12) (actual time=0.060..0.065 rows=1 loops=3694)\n \nIndex Cond: (m.koid = lc.koid)\n -> Index Scan\nusing th_meta_vagmk_pkey on th_meta_vagmk t1 (cost=0.00..0.27 rows=1\nwidth=16) (actual time=0.022..0.025 rows=1 loops=3813)\n\n----------------------\nThanks in advance for any help.\nChristian Kaufhold\n\n\n\n\n\n\n\n\nQuery slow after analyse on postgresql 8.2\n\n\n\nHi,\n\nI am new to this list so please forgive me if it not fits the standards.\n\nI have the following query that I run agains postgresql 8.2:\n\nselect distinct \n  m.koid,\n  m.name, \n  m.farbe, \n  m.aktennummer, \n  m.durchgefuehrt_von, \n  m.durchgefuehrt_bis, \n  rf.bezeichnung as rf_bezeichnung, \n  mt.bezeichnung as mt_bezeichnung, \n  wl_farben.wert, \n  v_adr.text_lkr, \n  v_adr.text_gemeinde \nfrom \n (((((( boden.massnahmeobjekt m left join boden.massnahmengruppe mg on m.massnahmengruppe_koid=mg.koid) \n left join boden.th_referate rf on mg.angelegt_von_referat=rf.th_id) \n left join boden.th_massnahmentyp mt on m.massnahmentyp=mt.th_id) \n left join boden.wl_farben wl_farben on m.farbe=wl_farben.wl_id) \n left join boden_views.v_z_lc_flst v_flst on m.koid=v_flst.koid)\n left join boden_views.v_z_lc_adresse v_adr on m.koid=v_adr.koid)\nwhere m.aktennummer ~* 'M\\\\-2009\\\\-1'    \norder by koid asc limit 100\n\n-----------------\nIt takes a around 10 secs to complete with the following plan:\n\n----------------\n\nLimit  (cost=128494.42..128494.69 rows=9 width=1212) (actual time=12463.236..12464.675 rows=100 loops=1)\n  ->  Unique  (cost=128494.42..128494.69 rows=9 width=1212) (actual time=12463.206..12464.183 rows=100 loops=1)\n        ->  Sort  
(cost=128494.42..128494.44 rows=9 width=1212) (actual time=12463.178..12463.490 rows=123 loops=1)\n              Sort Key: m.koid, m.name, m.farbe, m.aktennummer, m.durchgefuehrt_von, m.durchgefuehrt_bis, rf.bezeichnung, mt.bezeichnung, wl_farben.wert, t2.bezeichnung, t3.bezeichnung\n              ->  Hash Left Join  (cost=119377.13..128494.28 rows=9 width=1212) (actual time=10475.870..12416.672 rows=3922 loops=1)\n                    Hash Cond: (m.koid = lc.koid)\n                    ->  Nested Loop Left Join  (cost=26.59..5848.52 rows=3 width=1148) (actual time=1.697..1711.535 rows=3813 loops=1)\n                          ->  Nested Loop Left Join  (cost=26.59..5847.53 rows=3 width=1156) (actual time=1.664..1632.871 rows=3813 loops=1)\n                                ->  Nested Loop Left Join  (cost=26.59..5846.68 rows=3 width=1152) (actual time=1.617..1538.819 rows=3813 loops=1)\n                                      ->  Nested Loop Left Join  (cost=0.00..3283.05 rows=1 width=1148) (actual time=1.267..1352.254 rows=3694 loops=1)\n                                            ->  Nested Loop Left Join  (cost=0.00..3282.77 rows=1 width=1120) (actual time=1.230..1232.264 rows=3694 loops=1)\n                                                  ->  Nested Loop Left Join  (cost=0.00..3274.48 rows=1 width=1124) (actual time=1.089..1143.501 rows=3694 loops=1)\n                                                        Join Filter: (m.massnahmentyp = mt.th_id)\n                                                        ->  Nested Loop Left Join  (cost=0.00..3273.03 rows=1 width=1100) (actual time=0.999..671.405 rows=3694 loops=1)\n                                                              Join Filter: (m.farbe = wl_farben.wl_id)\n                                                              ->  Seq Scan on massnahmeobjekt m  (cost=0.00..3271.88 rows=1 width=1068) (actual time=0.909..425.324 rows=3694 loops=1)\n                                                                    Filter: ((aktennummer)::text ~* 'M\\\\-2009\\\\-1'::text)\n                                                              ->  Seq Scan on wl_farben  (cost=0.00..1.07 rows=7 width=36) (actual time=0.005..0.024 rows=7 loops=3694)\n                                                        ->  Seq Scan on th_massnahmentyp mt  (cost=0.00..1.20 rows=20 width=40) (actual time=0.003..0.060 rows=20 loops=3694)\n                                                  ->  Index Scan using idx_massnahmengruppe_koid on massnahmengruppe mg  (cost=0.00..8.28 rows=1 width=12) (actual time=0.009..0.012 rows=1 loops=3694)\n\n--------------------------\nBut when I run analyse the same query runs for hours. 
(See eyplain output below)\n--------------------\n\n\n\nLimit  (cost=111795.21..111795.24 rows=1 width=149) (actual time=10954094.322..10954095.612 rows=100 loops=1)\n  ->  Unique  (cost=111795.21..111795.24 rows=1 width=149) (actual time=10954094.316..10954095.165 rows=100 loops=1)\n        ->  Sort  (cost=111795.21..111795.22 rows=1 width=149) (actual time=10954094.310..10954094.600 rows=123 loops=1)\n              Sort Key: m.koid, m.name, m.farbe, m.aktennummer, m.durchgefuehrt_von, m.durchgefuehrt_bis, rf.bezeichnung, mt.bezeichnung, wl_farben.wert, t2.bezeichnung, t3.bezeichnung\n              ->  Nested Loop Left Join  (cost=101312.40..111795.20 rows=1 width=149) (actual time=7983.197..10954019.963 rows=3922 loops=1)\n                    Join Filter: (m.koid = lc.koid)\n                    ->  Nested Loop Left Join  (cost=0.00..3291.97 rows=1 width=119) (actual time=1.083..2115.512 rows=3813 loops=1)\n                          ->  Nested Loop Left Join  (cost=0.00..3291.69 rows=1 width=115) (actual time=0.980..2018.008 rows=3813 loops=1)\n                                ->  Nested Loop Left Join  (cost=0.00..3283.41 rows=1 width=119) (actual time=0.868..1874.309 rows=3813 loops=1)\n                                      Join Filter: (m.massnahmentyp = mt.th_id)\n                                      ->  Nested Loop Left Join  (cost=0.00..3281.96 rows=1 width=105) (actual time=0.844..1394.628 rows=3813 loops=1)\n                                            Join Filter: (m.farbe = wl_farben.wl_id)\n                                            ->  Nested Loop Left Join  (cost=0.00..3280.80 rows=1 width=94) (actual time=0.825..1168.177 rows=3813 loops=1)\n                                                  ->  Nested Loop Left Join  (cost=0.00..3280.47 rows=1 width=102) (actual time=0.808..1069.334 rows=3813 loops=1)\n                                                        ->  Nested Loop Left Join  (cost=0.00..3280.18 rows=1 width=98) (actual time=0.694..918.863 rows=3813 loops=1)\n                                                              ->  Seq Scan on massnahmeobjekt m  (cost=0.00..3271.88 rows=1 width=94) (actual time=0.387..577.771 rows=3694 loops=1)\n                                                                    Filter: ((aktennummer)::text ~* 'M\\\\-2009\\\\-1'::text)\n                                                              ->  Index Scan using idx_boden_lc_flst_koid on lc_flst lc  (cost=0.00..8.30 rows=1 width=12) (actual time=0.060..0.065 rows=1 loops=3694)\n                                                                    Index Cond: (m.koid = lc.koid)\n                                                        ->  Index Scan using th_meta_vagmk_pkey on th_meta_vagmk t1  (cost=0.00..0.27 rows=1 width=16) (actual time=0.022..0.025 rows=1 loops=3813)\n----------------------\nThanks in advance for any help.\nChristian Kaufhold", "msg_date": "Thu, 17 Jun 2010 09:52:33 +0200", "msg_from": "\"Kaufhold, Christian (LFD)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query slow after analyse on postgresql 8.2" }, { "msg_contents": "\"Kaufhold, Christian (LFD)\" <[email protected]> writes:\n> I have the following query that I run agains postgresql 8.2:\n> ...\n> But when I run analyse the same query runs for hours.\n\nSeems like the core of the problem is here:\n\n> -> Seq\n> Scan on massnahmeobjekt m (cost=0.00..3271.88 rows=1 width=94) (actual\n> time=0.387..577.771 rows=3694 loops=1)\n> Filter: ((aktennummer)::text ~* 'M\\\\-2009\\\\-1'::text)\n\nIf that 
rowcount estimate weren't off by three orders of magnitude you\nprobably would be getting a more appropriate plan. The first thing you\ncould try is increasing the statistics target for aktennummer. Also,\nif you're running in a non-C locale and this is 8.2.5 or older, try a\nmore recent 8.2.x. Updating to 8.3 or 8.4 might help even more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Jun 2010 10:58:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slow after analyse on postgresql 8.2 " }, { "msg_contents": " \nThanks Tom,\n\nalter table boden.massnahmeobjekt alter column aktennummer set statistics 1000;\n\nfixed it.\n\nRegards\nChristian\n\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] [mailto:[email protected]] Im Auftrag von Tom Lane\nGesendet: Donnerstag, 17. Juni 2010 16:59\nAn: Kaufhold, Christian (LFD)\nCc: [email protected]\nBetreff: Re: [PERFORM] Query slow after analyse on postgresql 8.2 \n\n\"Kaufhold, Christian (LFD)\" <[email protected]> writes:\n> I have the following query that I run agains postgresql 8.2:\n> ...\n> But when I run analyse the same query runs for hours.\n\nSeems like the core of the problem is here:\n\n> -> Seq \n> Scan on massnahmeobjekt m (cost=0.00..3271.88 rows=1 width=94) \n> (actual\n> time=0.387..577.771 rows=3694 loops=1)\n> Filter: ((aktennummer)::text ~* 'M\\\\-2009\\\\-1'::text)\n\nIf that rowcount estimate weren't off by three orders of magnitude you probably would be getting a more appropriate plan. The first thing you could try is increasing the statistics target for aktennummer. Also, if you're running in a non-C locale and this is 8.2.5 or older, try a more recent 8.2.x. Updating to 8.3 or 8.4 might help even more.\n\n\t\t\tregards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Jun 2010 17:30:25 +0200", "msg_from": "\"Kaufhold, Christian (LFD)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query slow after analyse on postgresql 8.2 " } ]
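One step worth making explicit for anyone repeating this fix: the raised statistics target only takes effect once new statistics have been gathered for the column, so the ALTER needs to be followed by an ANALYZE before re-checking the plan. A minimal sequence, using the table and column names from the thread:

    ALTER TABLE boden.massnahmeobjekt ALTER COLUMN aktennummer SET STATISTICS 1000;
    ANALYZE boden.massnahmeobjekt;
    -- then re-run EXPLAIN ANALYZE and confirm the row estimate for the
    -- aktennummer ~* 'M\\-2009\\-1' filter is now close to the ~3700 actual rows

Setting the target per column keeps the extra ANALYZE and planning overhead confined to the one column that needs it, rather than raising default_statistics_target for the whole database.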
[ { "msg_contents": "Hi All,\n I am using Postgres 8.1.9 for my application. My application also has\na clean up module which cleans up specified percentage of total database\nsize at regular intervals. Now the problem is I use *pg_database_size* to\nobtain the size of the database. After deleting the records, we run *Vacuum\nAnalyze* to reorder the indexes. The problem here is even though some\nrecords are cleared, it still shows the original DB Size. Is there any way\nto find out the actual DB Size or it would be more useful, if I can get the\nsize of each table.\n I can't run *Vacuum Full* because the application should be run 24*7\nwithout downtime.\n\nCan someone please help me in solving this.\n\nPlease let me know if you need any clarifications.\n\nThank you,\nVenu.\n\nHi All,      I am using Postgres 8.1.9 for my application. My application also has a clean up module which cleans up specified percentage of total database size at regular intervals. Now the problem is I use pg_database_size to obtain the size of the database. After deleting the records, we run Vacuum Analyze to reorder the indexes. The problem here is even though some records are cleared, it still shows the original DB Size. Is there any way to find out the actual DB Size or it would be more useful, if I can get the size of each table. \n     I can't run Vacuum Full because the application should be run 24*7 without downtime.Can someone please help me in solving this.Please let me know if you need any clarifications.Thank you,\nVenu.", "msg_date": "Thu, 17 Jun 2010 14:38:05 +0530", "msg_from": "venu madhav <[email protected]>", "msg_from_op": true, "msg_subject": "Obtaining the exact size of the database." }, { "msg_contents": "Hi there\n\n1. PG 8.1.9 is ancient ... you should upgrade.\n\n2. The database gross size on disk is not affected by VACUUM ANALYZE ... all\nthis does is return space used by deleted row-versions to PG for re-use. The\nonly way to reduce it and thus return disk space to the OS is to do a VACUUM\nFULL, or to delete the entire table.\n\n3. If you can suspend writes for a while, you can pull off an \"online\"\nVACCUM FULL, or copy and delete the table in order to repack it. Check out\nthe CLUSTER command.\n\n4. If you're trying to figure out the net size of the table, i.e. how much\nfree space is inside the table files for reuse by PG, then you need the\npg_stat_tuple function ... this is built in to PG 8.4, and a plug-in\nactivated by a script for PG 8.3, don't know if it exists in 8.1 or not.\nLike SELECT COUNT(*) this requires a full table scan.\n\nCheers\nDave\n\nsent from my Android phone\n\nOn Jun 20, 2010 6:18 AM, \"venu madhav\" <[email protected]> wrote:\n\nHi All,\n I am using Postgres 8.1.9 for my application. My application also has\na clean up module which cleans up specified percentage of total database\nsize at regular intervals. Now the problem is I use *pg_database_size* to\nobtain the size of the database. After deleting the records, we run *Vacuum\nAnalyze* to reorder the indexes. The problem here is even though some\nrecords are cleared, it still shows the original DB Size. Is there any way\nto find out the actual DB Size or it would be more useful, if I can get the\nsize of each table.\n I can't run *Vacuum Full* because the application should be run 24*7\nwithout downtime.\n\nCan someone please help me in solving this.\n\nPlease let me know if you need any clarifications.\n\nThank you,\nVenu.\n\nHi there\n1. PG 8.1.9 is ancient ... you should upgrade.\n2. 
The database gross size on disk is not affected by VACUUM ANALYZE ... all this does is return space used by deleted row-versions to PG for re-use. The only way to reduce it and thus return disk space to the OS is to do a VACUUM FULL, or to delete the entire table. \n3. If you can suspend writes for a while, you can pull off an \"online\" VACCUM FULL, or copy and delete the table in order to repack it. Check out the CLUSTER command.\n4. If you're trying to figure out the net size of the table, i.e. how much free space is inside the table files for reuse by PG, then you need the pg_stat_tuple function ... this is built in to PG 8.4, and a plug-in activated by a script for PG 8.3, don't know if it exists in 8.1 or not. Like SELECT COUNT(*) this requires a full table scan.\nCheers\nDave\nsent from my Android phone\nOn Jun 20, 2010 6:18 AM, \"venu madhav\" <[email protected]> wrote:Hi All,      I am using Postgres 8.1.9 for my application. My application also has a clean up module which cleans up specified percentage of total database size at regular intervals. Now the problem is I use pg_database_size to obtain the size of the database. After deleting the records, we run Vacuum Analyze to reorder the indexes. The problem here is even though some records are cleared, it still shows the original DB Size. Is there any way to find out the actual DB Size or it would be more useful, if I can get the size of each table. \n\n     I can't run Vacuum Full because the application should be run 24*7 without downtime.Can someone please help me in solving this.Please let me know if you need any clarifications.Thank you,\n\nVenu.", "msg_date": "Sun, 20 Jun 2010 09:22:13 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Obtaining the exact size of the database." }, { "msg_contents": "Dave Crooke <[email protected]> writes:\n> 4. If you're trying to figure out the net size of the table, i.e. how much\n> free space is inside the table files for reuse by PG, then you need the\n> pg_stat_tuple function ... this is built in to PG 8.4, and a plug-in\n> activated by a script for PG 8.3, don't know if it exists in 8.1 or not.\n> Like SELECT COUNT(*) this requires a full table scan.\n\nI think what the OP actually wants is the number of live rows, so plain\nold SELECT COUNT(*) would do it. If that's too slow, a good alternative\nis to ANALYZE the table and then look at its pg_class.reltuples entry\n--- of course that will only be an approximate count.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Jun 2010 11:34:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Obtaining the exact size of the database. " }, { "msg_contents": "venu madhav wrote:\n> The problem here is even though some records are cleared, it still \n> shows the original DB Size. Is there any way to find out the actual DB \n> Size or it would be more useful, if I can get the size of each table.\n\nOne of the queries at http://wiki.postgresql.org/wiki/Disk_Usage should \ngive you the breakdown per table. Regular VACUUM doesn't ever shrink \nthe database from the operating system perspective unless you hit a very \nunusual situation (all of the free space is at the end). There is no \nway to do that without system downtime of sorts in the form a \npotentially long database lock, such as VACUUM FULL (the main option on \n8.1, the alternative of using CLUSTER isn't a good idea until 8.3). 
The \nbest you can do is making sure you VACUUM often enough that space is \nregularly reused.\n\nIt's hard to run a 24x7 environment on 8.1. Much easier on 8.4, where \nthe major things that regularly left people with quite bad VACUUM \ncleanup situations are all less likely to occur than on any previous \nversion.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 20 Jun 2010 16:04:09 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Obtaining the exact size of the database." }, { "msg_contents": "On Sun, Jun 20, 2010 at 2:04 PM, Greg Smith <[email protected]> wrote:\n\n> It's hard to run a 24x7 environment on 8.1.  Much easier on 8.4, where the\n> major things that regularly left people with quite bad VACUUM cleanup\n> situations are all less likely to occur than on any previous version.\n\nHere here. keeping anything before 8.2 fed and happy is pretty\ndifficult in 24/7 environments. 8.2 and 8.3 are ok if you keep a\nclose eye on them. And it just gets better from there.\n", "msg_date": "Sun, 20 Jun 2010 14:31:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Obtaining the exact size of the database." } ]
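For reference, the per-table breakdown mentioned above can be pulled from the size functions that already exist in 8.1, along the lines of the queries on the wiki page Greg links to; 'your_table' below is a placeholder:

    SELECT n.nspname, c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'r'
       AND n.nspname NOT IN ('pg_catalog', 'information_schema')
     ORDER BY pg_total_relation_size(c.oid) DESC
     LIMIT 20;

    -- Tom's approximate live-row count, refreshed each time the table is analyzed:
    SELECT relname, reltuples FROM pg_class WHERE relname = 'your_table';

pg_total_relation_size() counts indexes and TOAST data as well as the heap; as noted in the replies, the on-disk figures will not shrink after a plain VACUUM, they only stop growing once the freed space is being reused.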
[ { "msg_contents": "Hi,\n\nI've noticed something that I find strange with the hash-aggregate\nfeature of Postgres. I'm currently running Postgres v8.4.1 on Debian\nLinux 64-bit.\n\nI have a simple query that when planned either uses hash-aggregates or a\nsort depending on the amount of working memory available. The problem is\nthat when it uses the hash-aggregates, the query runs 25% slower than\nwhen using the sort method.\n\nThe table in question contains about 60 columns, many of which are\nboolean, 32-bit integers and some are 64-bit integers. Many fields are\ntext - and some of these can be quite long (eg 32Kb).\n\n\n\nThe SQL is as follows:\n\nexplain analyse\nselect distinct T1.*\n from role T1\n where T1.endDate is null and T1.latest=true and T1.active=true and\n T1.deceased=false and T1.desk in (BIG LIST OF INTEGERS);\n\n\nselect version() --> \"PostgreSQL 8.4.1 on x86_64-unknown-linux-gnu,\ncompiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\"\nshow enable_hashagg --> \"on\"\nset work_mem='8MB'\nshow work_mem --> \"8MB\"\n\nExplain analyse of the SQL above:\nUnique (cost=47033.71..48410.27 rows=8881 width=1057) (actual\ntime=18.803..38.969 rows=6449 loops=1)\n -> Sort (cost=47033.71..47055.91 rows=8881 width=1057) (actual\ntime=18.801..20.560 rows=6449 loops=1)\n Sort Key: id, version, latest, active, deceased, person,\nformalnotes, informalnotes, description, desk, rolelevel, roletype,\npromotiondate, primaryrole, headofplace, careergrading, startdate,\nenddate, percentsalary, deskf, rolelevelf, roletypef, promotiondatef,\nprimaryrolef, headofplacef, careergradingf, startdatef, enddatef,\npercentsalaryf, descriptionf, deskmv, rolelevelmv, roletypemv,\npromotiondatemv, primaryrolemv, headofplacemv, careergradingmv,\nstartdatemv, enddatemv, percentsalarymv, descriptionmv, hasattachments,\nhasrelationships, hasprojects, audwho, audwhen, audcreated, costcentre,\nreportsto, manages, startdateest, enddateest, hasstarperformers,\nprojectnames, sourcefrom, sourceto, checkedwho, checkedwhen,\ncheckednotes, hasqueries, querytitles\n Sort Method: quicksort Memory: 2001kB\n -> Bitmap Heap Scan on role t1 (cost=4888.59..42321.27\nrows=8881 width=1057) (actual time=7.041..12.504 rows=6449 loops=1)\n Recheck Cond: (desk = ANY ('BIG LIST OF\nINTEGERS'::bigint[]))\n Filter: ((enddate IS NULL) AND latest AND active AND (NOT\ndeceased))\n -> Bitmap Index Scan on role_ix2 (cost=0.00..4886.37\nrows=10984 width=0) (actual time=6.948..6.948 rows=9296 loops=1)\n Index Cond: ((latest = true) AND (active = true) AND\n(deceased = false) AND (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])))\nTotal runtime: 40.777 ms\n\n\n\nThis execution of the query used a sort to perform the \"distinct\".\n\n\n\nNow for the second run:\n\nselect version() --> \"PostgreSQL 8.4.1 on x86_64-unknown-linux-gnu,\ncompiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\"\nshow enable_hashagg --> \"on\"\nset work_mem='64MB'\nshow work_mem --> \"64MB\"\n\nExplain analyse of the SQL above:\nHashAggregate (cost=43675.63..43764.44 rows=8881 width=1057) (actual\ntime=46.556..55.694 rows=6449 loops=1)\n -> Bitmap Heap Scan on role t1 (cost=4888.59..42321.27 rows=8881\nwidth=1057) (actual time=7.179..13.023 rows=6449 loops=1)\n Recheck Cond: (desk = ANY ('BIG LIST OF INTEGERS'::bigint[]))\n Filter: ((enddate IS NULL) AND latest AND active AND (NOT\ndeceased))\n -> Bitmap Index Scan on role_ix2 (cost=0.00..4886.37\nrows=10984 width=0) (actual time=7.086..7.086 rows=9296 loops=1)\n Index Cond: ((latest = 
true) AND (active = true) AND\n(deceased = false) AND (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])))\nTotal runtime: 57.536 ms\n\n\n\n\nI've tested this with v8.4.4 as well with the same results. I also\ntested the same query with our previous production version of Postgres\n(v8.3.8) and that version only appears to use sorting not\nhash-aggregates.\n\n\n\nObviously, I can re-write the query to use a \"distinct on (...)\" clause\nto improve performance - which is what I've done, but my question is:\nWhy is the hash-aggregate slower than the sort?\n\n\nIs it something to do with the number of columns? ie. When sorting, the\nfirst few columns defined on the table (id, version) make the row unique\n- but when using the hash-aggregate feature, presumably every column\nneeds to be hashed which takes longer especially for long text fields?\n\nThanks,\n--Jatinder\n\n\n", "msg_date": "Thu, 17 Jun 2010 17:57:15 +0100", "msg_from": "\"Jatinder Sangha\" <[email protected]>", "msg_from_op": true, "msg_subject": "HashAggregate slower than sort?" } ]
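Since the question above mentions the DISTINCT ON rewrite as the workaround, a minimal sketch of that form follows, assuming (id, version) really is the unique key described in the message; the IN-list stays as the same placeholder used in the original:

    SELECT DISTINCT ON (id, version) T1.*
      FROM role T1
     WHERE T1.endDate IS NULL AND T1.latest = true AND T1.active = true
       AND T1.deceased = false AND T1.desk IN (BIG LIST OF INTEGERS)
     ORDER BY id, version;

Because DISTINCT ON only compares the leading key columns, it avoids sorting or hashing all sixty columns, including the long text fields.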
[ { "msg_contents": "Hello there,\n\nI've searched the web and can find very little on this issue, so I was\nhoping those on this list would be able to shed some light on it.\n\nPerformance has dropped through the floor after converting my db from ASCI\nto UTF8. Is this normal behavior on 8.4.x?\n\nI'm mystified as to the problem. Thanks for any input you can provide.\n\n-- \nBrant Fitzsimmons\n\"Everything should be made as simple as possible, but not simpler.\" --\nAlbert Einstein\n\nHello there,I've searched the web and can find very little on this issue, so I was hoping those on this list would be able to shed some light on it.Performance has dropped through the floor after converting my db from ASCI to UTF8.  Is this normal behavior on 8.4.x?\nI'm mystified as to the problem. Thanks for any input you can provide.-- Brant Fitzsimmons\"Everything should be made as simple as possible, but not simpler.\" -- Albert Einstein", "msg_date": "Thu, 17 Jun 2010 18:28:53 -0400", "msg_from": "Brant Fitzsimmons <[email protected]>", "msg_from_op": true, "msg_subject": "Add slowdown after conversion to UTF8" }, { "msg_contents": "Brant Fitzsimmons <[email protected]> writes:\n> I've searched the web and can find very little on this issue, so I was\n> hoping those on this list would be able to shed some light on it.\n\n> Performance has dropped through the floor after converting my db from ASCI\n> to UTF8. Is this normal behavior on 8.4.x?\n\nWell, with no specifics on performance of *what*, it's hard to say.\nThere are certain operations that could be quite a bit slower, yes.\nI haven't heard of too many people noticing a problem though.\n\nIt's probably worth noting that locale could be at least as much of a\nfactor as encoding ... but perhaps I'm jumping to conclusions about\nwhat your slow operations are.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Jun 2010 19:17:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add slowdown after conversion to UTF8 " }, { "msg_contents": "On tor, 2010-06-17 at 18:28 -0400, Brant Fitzsimmons wrote:\n> Performance has dropped through the floor after converting my db from\n> ASCI to UTF8.\n\nConverting from ASCII to UTF8 is a noop.\n\nIf you did some configuration changes, you need to tell us which.\n\n", "msg_date": "Fri, 18 Jun 2010 00:09:04 -0400", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add slowdown after conversion to UTF8" } ]
[ { "msg_contents": "Some more on the RHEL 5.5 system I'm helping to setup. Some benchmarking \nusing pgbench appeared to suggest that wal_sync_method=open_sync was a \nlittle faster than fdatasync [1]. Now I recall some discussion about \nthis enabling direct io and the general flakiness of this on Linux, so \nis the option regarded as safe?\n\n[1] The workout:\n\n$ pgbench -i -s 1000 bench\n$ pgbench -c [1,2,4,8,32,64,128] -t 10000\n\nPerformance peaked around 2500 tps @32 clients using open_sync and 2200 \nwith fdatasync. However the disk arrays are on a SAN and I suspect that \nwhen testing with fdatasync later in the day there may have been \nworkload 'leakage' from other hosts hitting the SAN.\n\n\n\n\n\n\nSome more on the RHEL 5.5 system\nI'm helping to setup. Some benchmarking using pgbench appeared to\nsuggest that wal_sync_method=open_sync was a little faster than\nfdatasync [1]. Now I recall some discussion about this enabling direct\nio and the general flakiness of this on Linux, so is the option\nregarded as safe?\n\n[1] The workout: \n\n$ pgbench -i -s 1000 bench\n$ pgbench -c [1,2,4,8,32,64,128] -t 10000\n\nPerformance peaked around 2500 tps @32 clients using open_sync and 2200\nwith fdatasync. However the disk arrays are on a SAN and I suspect that\nwhen testing with fdatasync later in the day there may have been\nworkload 'leakage' from other hosts hitting the SAN.", "msg_date": "Fri, 18 Jun 2010 13:39:43 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "wal_synch_method = open_sync safe on RHEL 5.5?" }, { "msg_contents": "Mark Kirkwood wrote:\n> Now I recall some discussion about this enabling direct io and the \n> general flakiness of this on Linux, so is the option regarded as safe?\n\nNo one has ever refuted the claims in \nhttp://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php that \nit can be unsafe under a heavy enough level of mixed load on RHEL5. \nGiven the performance benefits are marginal on ext3, I haven't ever \nconsidered it worth the risk. (I've seen much larger gains on \nLinux+Veritas VxFS). From what I've seen, recent Linux kernel work has \nreinforced that the old O_SYNC implementation was full of bugs now that \nmore work is being done to improve that area. My suspicion (based on no \nparticular data, just what I've seen it tested with) is that it only \nreally worked before in the very specific way that Oracle does O_SYNC \nwrites, which is different from what PostgreSQL does.\n\nP.S. Be wary of expecting pgbench to give you useful numbers on a single \nrun. For the default write-heavy test, I recommend three runs of 10 \nminutes each (-T 600 on recent PostgreSQL versions) before I trust any \nresults it gives. You can get useful data from the select-only test in \nonly a few seconds, but not the one that writes a bunch.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\nMark Kirkwood wrote:\n\n\nNow I recall some discussion\nabout this enabling direct\nio and the general flakiness of this on Linux, so is the option\nregarded as safe?\n\n\nNo one has ever refuted the claims in\nhttp://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php that\nit can be unsafe under a heavy enough level of mixed load on RHEL5. \nGiven the performance benefits are marginal on ext3, I haven't ever\nconsidered it worth the risk.  (I've seen much larger gains on\nLinux+Veritas VxFS).  
From what I've seen, recent Linux kernel work has\nreinforced that the old O_SYNC implementation was full of bugs now that\nmore work is being done to improve that area.  My suspicion (based on\nno particular data, just what I've seen it tested with) is that it only\nreally worked before in the very specific way that Oracle does O_SYNC\nwrites, which is different from what PostgreSQL does.\n\nP.S. Be wary of expecting pgbench to give you useful numbers on a\nsingle run.  For the default write-heavy test, I recommend three runs\nof 10 minutes each (-T 600 on recent PostgreSQL versions) before I\ntrust any results it gives.  You can get useful data from the\nselect-only test in only a few seconds, but not the one that writes a\nbunch.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us", "msg_date": "Thu, 17 Jun 2010 23:29:36 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_synch_method = open_sync safe on RHEL 5.5?" }, { "msg_contents": "The conclusion I read was that Linux O_SYNC behaves like O_DSYNC on \nother systems. For WAL, this seems satisfactory?\n\nPersonally, I use fdatasync(). I wasn't able to measure a reliable \ndifference for my far more smaller databases, and fdatasync() seems \nreliable and fast enough, that fighting with O_SYNC doesn't seem to be \nworth it. Also, technically speaking, fdatasync() appeals more to me, as \nit allows the system to buffer while it can, and the application to \ninstruct it across what boundaries it should not buffer. O_SYNC / \nO_DSYNC seem to imply a requirement that it does a synch on every block. \nMy gut tells me that fdatasync() gives the operating system more \nopportunities to optimize (whether it does or not is a different issue \n:-) ).\n\nCheers,\nmark\n\n\nOn 06/17/2010 11:29 PM, Greg Smith wrote:\n> Mark Kirkwood wrote:\n>> Now I recall some discussion about this enabling direct io and the \n>> general flakiness of this on Linux, so is the option regarded as safe?\n>\n> No one has ever refuted the claims in \n> http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php that \n> it can be unsafe under a heavy enough level of mixed load on RHEL5. \n> Given the performance benefits are marginal on ext3, I haven't ever \n> considered it worth the risk. (I've seen much larger gains on \n> Linux+Veritas VxFS). From what I've seen, recent Linux kernel work \n> has reinforced that the old O_SYNC implementation was full of bugs now \n> that more work is being done to improve that area. My suspicion \n> (based on no particular data, just what I've seen it tested with) is \n> that it only really worked before in the very specific way that Oracle \n> does O_SYNC writes, which is different from what PostgreSQL does.\n>\n> P.S. Be wary of expecting pgbench to give you useful numbers on a \n> single run. For the default write-heavy test, I recommend three runs \n> of 10 minutes each (-T 600 on recent PostgreSQL versions) before I \n> trust any results it gives. You can get useful data from the \n> select-only test in only a few seconds, but not the one that writes a \n> bunch.\n>\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n\n-- \nMark Mielke<[email protected]>\n\n\n\n\n\n\n\nThe conclusion I read was that Linux O_SYNC behaves like O_DSYNC on\nother systems. 
For WAL, this seems satisfactory?\n\nPersonally, I use fdatasync(). I wasn't able to measure a reliable\ndifference for my far more smaller databases, and fdatasync() seems\nreliable and fast enough, that fighting with O_SYNC doesn't seem to be\nworth it. Also, technically speaking, fdatasync() appeals more to me,\nas it allows the system to buffer while it can, and the application to\ninstruct it across what boundaries it should not buffer. O_SYNC /\nO_DSYNC seem to imply a requirement that it does a synch on every\nblock. My gut tells me that fdatasync() gives the operating system more\nopportunities to optimize (whether it does or not is a different issue\n:-) ).\n\nCheers,\nmark\n\n\nOn 06/17/2010 11:29 PM, Greg Smith wrote:\n\n\nMark Kirkwood wrote:\n \n\nNow I recall some discussion\nabout this enabling direct\nio and the general flakiness of this on Linux, so is the option\nregarded as safe?\n\n\nNo one has ever refuted the claims in\n http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php\nthat\nit can be unsafe under a heavy enough level of mixed load on RHEL5. \nGiven the performance benefits are marginal on ext3, I haven't ever\nconsidered it worth the risk.  (I've seen much larger gains on\nLinux+Veritas VxFS).  From what I've seen, recent Linux kernel work has\nreinforced that the old O_SYNC implementation was full of bugs now that\nmore work is being done to improve that area.  My suspicion (based on\nno particular data, just what I've seen it tested with) is that it only\nreally worked before in the very specific way that Oracle does O_SYNC\nwrites, which is different from what PostgreSQL does.\n\nP.S. Be wary of expecting pgbench to give you useful numbers on a\nsingle run.  For the default write-heavy test, I recommend three runs\nof 10 minutes each (-T 600 on recent PostgreSQL versions) before I\ntrust any results it gives.  You can get useful data from the\nselect-only test in only a few seconds, but not the one that writes a\nbunch.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n \n\n\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Thu, 17 Jun 2010 23:36:03 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_synch_method = open_sync safe on RHEL 5.5?" }, { "msg_contents": "On 18/06/10 15:29, Greg Smith wrote:\n>\n> P.S. Be wary of expecting pgbench to give you useful numbers on a \n> single run. For the default write-heavy test, I recommend three runs \n> of 10 minutes each (-T 600 on recent PostgreSQL versions) before I \n> trust any results it gives. You can get useful data from the \n> select-only test in only a few seconds, but not the one that writes a \n> bunch.\n>\n\nYeah, I did several runs of each, and a couple with -c 128 and -t 100000 \nto give the setup a good workout (also 2000-2400 tps, nice to see a well \nbehaved SAN).\n\n\nCheers\n\nMark\n", "msg_date": "Fri, 18 Jun 2010 16:19:52 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal_synch_method = open_sync safe on RHEL 5.5?" }, { "msg_contents": "Mark Mielke wrote:\n> The conclusion I read was that Linux O_SYNC behaves like O_DSYNC on \n> other systems. For WAL, this seems satisfactory?\n\nIt would be if it didn't have any bugs or limitiations, but it does. 
\nThe one pointed out in the message I linked to suggests that a mix of \nbuffered and O_SYNC direct I/O can cause a write error, with the exact \nbehavior you get depending on the kernel version. That's a path better \nnot explored as I see it.\n\nThe kernels that have made some effort to implement this correctly \nactually expose O_DSYNC, on newer Linux systems. My current opinion is \nthat if you only have Linux O_SYNC, don't use it. The ones with O_DSYNC \nhaven't been around for long enough to be proven or disproven as \neffective yet.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Fri, 18 Jun 2010 02:02:57 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_synch_method = open_sync safe on RHEL 5.5?" } ]
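For reference, a sketch of how the comparison in this thread is usually run, with Greg's caveat about longer runs applied. The values are illustrative only, and nothing here should be read as a recommendation to use open_sync on RHEL5:

    # postgresql.conf: change one setting at a time, restart, then benchmark
    wal_sync_method = fdatasync

    # three 10-minute write-heavy runs per setting (-T requires a recent pgbench)
    pgbench -i -s 1000 bench
    pgbench -c 32 -T 600 bench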
[ { "msg_contents": "Hi, \n\nI've noticed something that I find strange with the hash-aggregate feature of Postgres. I'm currently running Postgres v8.4.1 on Debian Linux 64-bit.\n\nI have a simple query that when planned either uses hash-aggregates or a sort depending on the amount of working memory available. The problem is that when it uses the hash-aggregates, the query runs 25% slower than when using the sort method.\n\nThe table in question contains about 60 columns, many of which are boolean, 32-bit integers and some are 64-bit integers. Many fields are text - and some of these can be quite long (eg 32Kb).\n\n\n\nThe SQL is as follows: \n\nexplain analyse \nselect distinct T1.* \n from role T1 \n where T1.endDate is null and T1.latest=true and T1.active=true and \n T1.deceased=false and T1.desk in (BIG LIST OF INTEGERS); \n\n\nselect version() --> \"PostgreSQL 8.4.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\"\n\nshow enable_hashagg --> \"on\" \nset work_mem='8MB' \nshow work_mem --> \"8MB\" \n\nExplain analyse of the SQL above: \nUnique (cost=47033.71..48410.27 rows=8881 width=1057) (actual time=18.803..38.969 rows=6449 loops=1) \n -> Sort (cost=47033.71..47055.91 rows=8881 width=1057) (actual time=18.801..20.560 rows=6449 loops=1) \n Sort Key: id, version, latest, active, deceased, person, formalnotes, informalnotes, description, desk, rolelevel, roletype, promotiondate, primaryrole, headofplace, careergrading, startdate, enddate, percentsalary, deskf, rolelevelf, roletypef, promotiondatef, primaryrolef, headofplacef, careergradingf, startdatef, enddatef, percentsalaryf, descriptionf, deskmv, rolelevelmv, roletypemv, promotiondatemv, primaryrolemv, headofplacemv, careergradingmv, startdatemv, enddatemv, percentsalarymv, descriptionmv, hasattachments, hasrelationships, hasprojects, audwho, audwhen, audcreated, costcentre, reportsto, manages, startdateest, enddateest, hasstarperformers, projectnames, sourcefrom, sourceto, checkedwho, checkedwhen, checkednotes, hasqueries, querytitles\n\n Sort Method: quicksort Memory: 2001kB \n -> Bitmap Heap Scan on role t1 (cost=4888.59..42321.27 rows=8881 width=1057) (actual time=7.041..12.504 rows=6449 loops=1)\n\n Recheck Cond: (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])) \n Filter: ((enddate IS NULL) AND latest AND active AND (NOT deceased)) \n -> Bitmap Index Scan on role_ix2 (cost=0.00..4886.37 rows=10984 width=0) (actual time=6.948..6.948 rows=9296 loops=1)\n\n Index Cond: ((latest = true) AND (active = true) AND (deceased = false) AND (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])))\n\nTotal runtime: 40.777 ms \n\n\n\nThis execution of the query used a sort to perform the \"distinct\". 
\n\n\n\nNow for the second run: \n\nselect version() --> \"PostgreSQL 8.4.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\"\n\nshow enable_hashagg --> \"on\" \nset work_mem='64MB' \nshow work_mem --> \"64MB\" \n\nExplain analyse of the SQL above: \nHashAggregate (cost=43675.63..43764.44 rows=8881 width=1057) (actual time=46.556..55.694 rows=6449 loops=1) \n -> Bitmap Heap Scan on role t1 (cost=4888.59..42321.27 rows=8881 width=1057) (actual time=7.179..13.023 rows=6449 loops=1)\n\n Recheck Cond: (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])) \n Filter: ((enddate IS NULL) AND latest AND active AND (NOT deceased)) \n -> Bitmap Index Scan on role_ix2 (cost=0.00..4886.37 rows=10984 width=0) (actual time=7.086..7.086 rows=9296 loops=1)\n\n Index Cond: ((latest = true) AND (active = true) AND (deceased = false) AND (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])))\n\nTotal runtime: 57.536 ms \n\n\n\n\nI've tested this with v8.4.4 as well with the same results. I also tested the same query with our previous production version of Postgres (v8.3.8) and that version only appears to use sorting not hash-aggregates.\n\n\n\nObviously, I can re-write the query to use a \"distinct on (...)\" clause to improve performance - which is what I've done, but my question is: Why is the hash-aggregate slower than the sort?\n\n\nIs it something to do with the number of columns? ie. When sorting, the first few columns defined on the table (id, version) make the row unique - but when using the hash-aggregate feature, presumably every column needs to be hashed which takes longer especially for long text fields?\n\nThanks, \n--Jatinder \n\n\n\n\n\nCoalition Development Ltd 1st Floor, One Newhams Row, London, United Kingdom, SE1 3UZ\nRegistration Number - 04328897 Registered Office - Direct Control 3rd Floor, Marvic House, Bishops Road, London, United Kingdom, SW6 7AD\n\n\nHashAggregate slower than sort?\n\n\n\nHi, \n\nI've noticed something that I find strange with the hash-aggregate feature of Postgres. I'm currently running Postgres v8.4.1 on Debian Linux 64-bit.\nI have a simple query that when planned either uses hash-aggregates or a sort depending on the amount of working memory available. The problem is that when it uses the hash-aggregates, the query runs 25% slower than when using the sort method.\nThe table in question contains about 60 columns, many of which are boolean, 32-bit integers and some are 64-bit integers. 
Many fields are text - and some of these can be quite long (eg 32Kb).\nThe SQL is as follows: \nexplain analyse select distinct T1.*   from role T1  where T1.endDate is null and T1.latest=true and T1.active=true and        T1.deceased=false and T1.desk in (BIG LIST OF INTEGERS); \nselect version() --> \"PostgreSQL 8.4.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\"\nshow enable_hashagg --> \"on\" set work_mem='8MB' show work_mem --> \"8MB\" \nExplain analyse of the SQL above: Unique  (cost=47033.71..48410.27 rows=8881 width=1057) (actual time=18.803..38.969 rows=6449 loops=1)   ->  Sort  (cost=47033.71..47055.91 rows=8881 width=1057) (actual time=18.801..20.560 rows=6449 loops=1)         Sort Key: id, version, latest, active, deceased, person, formalnotes, informalnotes, description, desk, rolelevel, roletype, promotiondate, primaryrole, headofplace, careergrading, startdate, enddate, percentsalary, deskf, rolelevelf, roletypef, promotiondatef, primaryrolef, headofplacef, careergradingf, startdatef, enddatef, percentsalaryf, descriptionf, deskmv, rolelevelmv, roletypemv, promotiondatemv, primaryrolemv, headofplacemv, careergradingmv, startdatemv, enddatemv, percentsalarymv, descriptionmv, hasattachments, hasrelationships, hasprojects, audwho, audwhen, audcreated, costcentre, reportsto, manages, startdateest, enddateest, hasstarperformers, projectnames, sourcefrom, sourceto, checkedwho, checkedwhen, checkednotes, hasqueries, querytitles\n        Sort Method:  quicksort  Memory: 2001kB         ->  Bitmap Heap Scan on role t1  (cost=4888.59..42321.27 rows=8881 width=1057) (actual time=7.041..12.504 rows=6449 loops=1)\n              Recheck Cond: (desk = ANY ('BIG LIST OF INTEGERS'::bigint[]))               Filter: ((enddate IS NULL) AND latest AND active AND (NOT deceased))               ->  Bitmap Index Scan on role_ix2  (cost=0.00..4886.37 rows=10984 width=0) (actual time=6.948..6.948 rows=9296 loops=1)\n                    Index Cond: ((latest = true) AND (active = true) AND (deceased = false) AND (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])))\nTotal runtime: 40.777 ms \nThis execution of the query used a sort to perform the \"distinct\". \nNow for the second run: \nselect version() --> \"PostgreSQL 8.4.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit\"\nshow enable_hashagg --> \"on\" set work_mem='64MB' show work_mem --> \"64MB\" \nExplain analyse of the SQL above: HashAggregate  (cost=43675.63..43764.44 rows=8881 width=1057) (actual time=46.556..55.694 rows=6449 loops=1)   ->  Bitmap Heap Scan on role t1  (cost=4888.59..42321.27 rows=8881 width=1057) (actual time=7.179..13.023 rows=6449 loops=1)\n        Recheck Cond: (desk = ANY ('BIG LIST OF INTEGERS'::bigint[]))         Filter: ((enddate IS NULL) AND latest AND active AND (NOT deceased))         ->  Bitmap Index Scan on role_ix2  (cost=0.00..4886.37 rows=10984 width=0) (actual time=7.086..7.086 rows=9296 loops=1)\n              Index Cond: ((latest = true) AND (active = true) AND (deceased = false) AND (desk = ANY ('BIG LIST OF INTEGERS'::bigint[])))\nTotal runtime: 57.536 ms \nI've tested this with v8.4.4 as well with the same results. 
I also tested the same query with our previous production version of Postgres (v8.3.8) and that version only appears to use sorting not hash-aggregates.\nObviously, I can re-write the query to use a \"distinct on (...)\" clause to improve performance - which is what I've done, but my question is: Why is the hash-aggregate slower than the sort?\nIs it something to do with the number of columns? ie. When sorting, the first few columns defined on the table (id, version) make the row unique - but when using the hash-aggregate feature, presumably every column needs to be hashed which takes longer especially for long text fields?\nThanks, --Jatinder Coalition Development Ltd 1st  Floor, One Newhams Row, London, United Kingdom, SE1 3UZ\nRegistration Number - 04328897 Registered Office - Direct Control 3rd Floor, Marvic House, Bishops Road, London, United Kingdom, SW6 7AD", "msg_date": "Fri, 18 Jun 2010 17:02:55 +0100", "msg_from": "\"Jatinder Sangha\" <[email protected]>", "msg_from_op": true, "msg_subject": "HashAggregate slower than sort?" }, { "msg_contents": "\"Jatinder Sangha\" <[email protected]> wrote:\n \n> I have a simple query that when planned either uses hash-\n> aggregates or a sort depending on the amount of working memory\n> available. The problem is that when it uses the hash-aggregates,\n> the query runs 25% slower than when using the sort method.\n> \n> The table in question contains about 60 columns, many of which are\n> boolean, 32-bit integers and some are 64-bit integers. Many fields\n> are text - and some of these can be quite long (eg 32Kb).\n \n> Obviously, I can re-write the query to use a \"distinct on (...)\"\n> clause\n \nYeah, that seems prudent, to say the least.\n \n> Why is the hash-aggregate slower than the sort?\n> \n> Is it something to do with the number of columns? ie. When\n> sorting, the first few columns defined on the table (id, version)\n> make the row unique - but when using the hash-aggregate feature,\n> presumably every column needs to be hashed which takes longer\n> especially for long text fields?\n \nSounds like a reasonable guess to me. But since you're apparently\nretrieving about 9,000 wide rows in (worst case) 56 ms, it would\nseem that your active data set may be fully cached. If so, you\ncould try reducing both random_page_cost and seq_page_cost to\nsomething in the 0.1 to 0.005 range and see if it improves the\naccuracy of the cost estimates. Not that you should go back to\nusing DISTINCT on all 60 column, including big text columns; but\nthese cost factors might help other queries pick faster plans.\n \n-Kevin\n", "msg_date": "Fri, 18 Jun 2010 12:59:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HashAggregate slower than sort?" }, { "msg_contents": "Hi Kevin,\n\nThanks for the suggestions.\n\nI've already converted all of my SQL to use \"distinct on (...)\" and this\nis now always faster using the hash-aggregates than when using sorting.\nThe queries now only use sorting if the hashing would take up too much\nmemory.\n\nThanks,\n--Jatinder \n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: 18 June 2010 18:59\nTo: Jatinder Sangha; [email protected]\nSubject: Re: [PERFORM] HashAggregate slower than sort?\n\n\"Jatinder Sangha\" <[email protected]> wrote:\n \n> I have a simple query that when planned either uses hash- aggregates \n> or a sort depending on the amount of working memory available. 
The \n> problem is that when it uses the hash-aggregates, the query runs 25% \n> slower than when using the sort method.\n> \n> The table in question contains about 60 columns, many of which are \n> boolean, 32-bit integers and some are 64-bit integers. Many fields are\n\n> text - and some of these can be quite long (eg 32Kb).\n \n> Obviously, I can re-write the query to use a \"distinct on (...)\"\n> clause\n \nYeah, that seems prudent, to say the least.\n \n> Why is the hash-aggregate slower than the sort?\n> \n> Is it something to do with the number of columns? ie. When sorting, \n> the first few columns defined on the table (id, version) make the row \n> unique - but when using the hash-aggregate feature, presumably every \n> column needs to be hashed which takes longer especially for long text \n> fields?\n \nSounds like a reasonable guess to me. But since you're apparently\nretrieving about 9,000 wide rows in (worst case) 56 ms, it would seem\nthat your active data set may be fully cached. If so, you could try\nreducing both random_page_cost and seq_page_cost to something in the 0.1\nto 0.005 range and see if it improves the accuracy of the cost\nestimates. Not that you should go back to using DISTINCT on all 60\ncolumn, including big text columns; but these cost factors might help\nother queries pick faster plans.\n \n-Kevin\n\n\n\nCoalition Development Ltd 1st Floor, One Newhams Row, London, United Kingdom, SE1 3UZ\nRegistration Number - 04328897 Registered Office - Direct Control 3rd Floor, Marvic House, Bishops Road, London, United Kingdom, SW6 7AD\n\n", "msg_date": "Mon, 21 Jun 2010 10:58:34 +0100", "msg_from": "\"Jatinder Sangha\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HashAggregate slower than sort?" }, { "msg_contents": "\"Jatinder Sangha\" <[email protected]> wrote:\n \n> I've already converted all of my SQL to use \"distinct on (...)\"\n> and this is now always faster using the hash-aggregates than when\n> using sorting. The queries now only use sorting if the hashing\n> would take up too much memory.\n \nIt's great that you have a solution to your immediate problem, but\nif the active portion of your database is really as fully cached as\nyour problem case indicates, you should probably still tweak the\ncosting factors. Doing so will help the optimizer pick good plans\nfor any arbitrary query you choose to run. If the caching was\nunusual, and was just showing up because were repeatedly running\nthat one test case, then never mind. :-)\n \n-Kevin\n", "msg_date": "Mon, 21 Jun 2010 08:53:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HashAggregate slower than sort?" } ]
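A minimal sketch of the cost-factor experiment Kevin suggests for a fully cached working set. These are session-level settings to try before changing postgresql.conf, and the exact values need testing against your own plans:

    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;
    -- then re-run EXPLAIN ANALYZE on the DISTINCT ON query and compare the chosen plans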
[ { "msg_contents": "I think I have read what is to be read about queries being prepared in \nplpgsql functions, but I still can not explain the following, so I thought \nto post it here:\n\nSuppose 2 functions: factor(int,int) and offset(int, int).\nSuppose a third function: convert(float,int,int) which simply returns \n$1*factor($2,$3)+offset($2,$3)\nAll three functions are IMMUTABLE.\n\nVery simple, right? Now I have very fast AND very slow executing queries on \nsome 150k records:\n\nVERY FAST (half a second):\n----------------\nSELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n\nVERY SLOW (a minute):\n----------------\nSELECT convert(data, 1, 2) FROM tbl_data;\n\nThe slowness cannot be due to calling a function 150k times. If I define \nconvert2(float,int,int) to return a constant value, then it executes in \nabout a second. (still half as slow as the VERY FAST query).\n\nI assume that factor and offset are cached in the VERY FAST query, and not \nin the slow one? If so, why not and how can I \"force\" it? Currently I need \nonly one function for conversions.\n\nRegards,\nDavor \n\n\n", "msg_date": "Sat, 19 Jun 2010 21:38:14 +0200", "msg_from": "\"Davor J.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow function in queries SELECT clause." }, { "msg_contents": "2010/6/19 Davor J. <[email protected]>\n\n> I think I have read what is to be read about queries being prepared in\n> plpgsql functions, but I still can not explain the following, so I thought\n> to post it here:\n>\n> Suppose 2 functions: factor(int,int) and offset(int, int).\n> Suppose a third function: convert(float,int,int) which simply returns\n> $1*factor($2,$3)+offset($2,$3)\n> All three functions are IMMUTABLE.\n>\n> Very simple, right? Now I have very fast AND very slow executing queries on\n> some 150k records:\n>\n> VERY FAST (half a second):\n> ----------------\n> SELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n>\n> VERY SLOW (a minute):\n> ----------------\n> SELECT convert(data, 1, 2) FROM tbl_data;\n>\n> The slowness cannot be due to calling a function 150k times. If I define\n> convert2(float,int,int) to return a constant value, then it executes in\n> about a second. (still half as slow as the VERY FAST query).\n>\n> I assume that factor and offset are cached in the VERY FAST query, and not\n> in the slow one? If so, why not and how can I \"force\" it? Currently I need\n> only one function for conversions.\n>\n> Regards,\n> Davor\n>\n>\n>\n>\nHi,\nshow us the code of those two functions and explain analyze of those\nqueries.\n\nregards\nSzymon Guz\n\n2010/6/19 Davor J. <[email protected]>\nI think I have read what is to be read about queries being prepared in\nplpgsql functions, but I still can not explain the following, so I thought\nto post it here:\n\nSuppose 2 functions: factor(int,int) and offset(int, int).\nSuppose a third function: convert(float,int,int) which simply returns\n$1*factor($2,$3)+offset($2,$3)\nAll three functions are IMMUTABLE.\n\nVery simple, right? Now I have very fast AND very slow executing queries on\nsome 150k records:\n\nVERY FAST (half a second):\n----------------\nSELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n\nVERY SLOW (a minute):\n----------------\nSELECT convert(data, 1, 2) FROM tbl_data;\n\nThe slowness cannot be due to calling a function 150k times. If I define\nconvert2(float,int,int) to return a constant value, then it executes in\nabout a second. 
(still half as slow as the VERY FAST query).\n\nI assume that factor and offset are cached in the VERY FAST query, and not\nin the slow one? If so, why not and how can I \"force\" it? Currently I need\nonly one function for conversions.\n\nRegards,\nDavor\n\nHi,show us the code of those two functions and explain analyze of those queries.\nregardsSzymon Guz", "msg_date": "Sun, 20 Jun 2010 13:23:33 +0200", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow function in queries SELECT clause." }, { "msg_contents": "I didn't consider them to be important as they showed the same, only the execution time was different. Also, they are a bit more complex than the ones put in the previous post. But here they are:\n\nDefinitions:\n-----------------------------------------------------------\nCREATE OR REPLACE FUNCTION appfunctions.fnc_unit_conversion_factor(_tree_id integer, _unit_to_id integer)\n RETURNS real AS\n$BODY$ \nDECLARE\nBEGIN \nRETURN (SELECT unit_conv_factor AS factor\n FROM vew_unit_conversions AS c\n INNER JOIN tbl_sensors AS s ON (s.unit_id = c.unit_id_from)\n INNER JOIN tbl_trees USING (sens_id)\n WHERE tree_id = _tree_id AND unit_id_to = _unit_to_id)::real;\nEND; \n$BODY$\n LANGUAGE 'plpgsql' IMMUTABLE\n--------------------------\nCREATE OR REPLACE FUNCTION appfunctions.fnc_unit_conversion_offset(_tree_id integer, _unit_to_id integer)\n RETURNS real AS\n$BODY$ \nDECLARE\nBEGIN \nRETURN (SELECT unit_conv_offset AS offset\n FROM vew_unit_conversions AS c\n INNER JOIN tbl_sensors AS s ON (s.unit_id = c.unit_id_from)\n INNER JOIN tbl_trees USING (sens_id)\n WHERE tree_id = _tree_id AND unit_id_to = _unit_to_id)::real;\nEND; \n$BODY$\n LANGUAGE 'plpgsql' IMMUTABLE\n--------------------------\nCREATE OR REPLACE FUNCTION appfunctions.fnc_unit_convert(_rawdata real, _tree_id integer, _unit_to_id integer)\n RETURNS real AS\n$BODY$ \nDECLARE \nBEGIN \nRETURN _rawdata\n * fnc_unit_conversion_factor(_tree_id, _unit_to_id) \n + fnc_unit_conversion_offset(_tree_id, _unit_to_id);\nEND; \n$BODY$\n LANGUAGE 'plpgsql' IMMUTABLE\n\n\n\nExecutions:\n-----------------------------------------------------------\nEXPLAIN ANALYSE SELECT timestamp,\n\ndata_from_tree_id_70 AS \"flow_11\" \n\n FROM \n(SELECT sens_chan_data_timestamp AS timestamp, sens_chan_data_data AS data_from_tree_id_70 FROM tbl_sensor_channel_data WHERE tree_id = 70 AND sens_chan_data_timestamp >= '2008-06-11T00:00:00' AND sens_chan_data_timestamp <= '2008-06-18T00:00:00' ) AS \"70\" \n\n ORDER BY timestamp;\n\n\"Sort (cost=175531.00..175794.64 rows=105456 width=12) (actual time=598.454..638.400 rows=150678 loops=1)\"\n\" Sort Key: tbl_sensor_channel_data.sens_chan_data_timestamp\"\n\" Sort Method: external sort Disk: 3240kB\"\n\" -> Bitmap Heap Scan on tbl_sensor_channel_data (cost=3005.29..166732.66 rows=105456 width=12) (actual time=34.810..371.099 rows=150678 loops=1)\"\n\" Recheck Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= '2008-06-18 00:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on tbl_sensor_channel_data_pkey (cost=0.00..2978.92 rows=105456 width=0) (actual time=28.008..28.008 rows=150678 loops=1)\"\n\" Index Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= '2008-06-18 00:00:00'::timestamp without time zone))\"\n\"Total runtime: 663.478 
ms\"\n-----------------------------------------------------------\nEXPLAIN ANALYSE SELECT timestamp,\n\nfnc_unit_convert(data_from_tree_id_70, 70, 7) AS \"flow_11\" \n\n FROM \n(SELECT sens_chan_data_timestamp AS timestamp, sens_chan_data_data AS data_from_tree_id_70 FROM tbl_sensor_channel_data WHERE tree_id = 70 AND sens_chan_data_timestamp >= '2008-06-11T00:00:00' AND sens_chan_data_timestamp <= '2008-06-18T00:00:00' ) AS \"70\" \n\n ORDER BY timestamp;\n\n\"Sort (cost=201895.00..202158.64 rows=105456 width=12) (actual time=35334.017..35372.977 rows=150678 loops=1)\"\n\" Sort Key: tbl_sensor_channel_data.sens_chan_data_timestamp\"\n\" Sort Method: external sort Disk: 3240kB\"\n\" -> Bitmap Heap Scan on tbl_sensor_channel_data (cost=3005.29..193096.66 rows=105456 width=12) (actual time=60.012..35037.129 rows=150678 loops=1)\"\n\" Recheck Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= '2008-06-18 00:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on tbl_sensor_channel_data_pkey (cost=0.00..2978.92 rows=105456 width=0) (actual time=21.884..21.884 rows=150678 loops=1)\"\n\" Index Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= '2008-06-18 00:00:00'::timestamp without time zone))\"\n\"Total runtime: 35397.841 ms\"\n-----------------------------------------------------------\nEXPLAIN ANALYSE SELECT timestamp,\n\ndata_from_tree_id_70*fnc_unit_conversion_factor(70, 7)+ fnc_unit_conversion_offset(70, 7) AS \"flow_11\" \n\n FROM \n(SELECT sens_chan_data_timestamp AS timestamp, sens_chan_data_data AS data_from_tree_id_70 FROM tbl_sensor_channel_data WHERE tree_id = 70 AND sens_chan_data_timestamp >= '2008-06-11T00:00:00' AND sens_chan_data_timestamp <= '2008-06-18T00:00:00' ) AS \"70\" \n\n ORDER BY timestamp;\n\nEXPLAIN ANALYSE SELECT timestamp,\n\n\"Sort (cost=176058.28..176321.92 rows=105456 width=12) (actual time=630.350..669.843 rows=150678 loops=1)\"\n\" Sort Key: tbl_sensor_channel_data.sens_chan_data_timestamp\"\n\" Sort Method: external sort Disk: 3240kB\"\n\" -> Bitmap Heap Scan on tbl_sensor_channel_data (cost=3005.29..167259.94 rows=105456 width=12) (actual time=35.498..399.726 rows=150678 loops=1)\"\n\" Recheck Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= '2008-06-18 00:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on tbl_sensor_channel_data_pkey (cost=0.00..2978.92 rows=105456 width=0) (actual time=27.433..27.433 rows=150678 loops=1)\"\n\" Index Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= '2008-06-18 00:00:00'::timestamp without time zone))\"\n\"Total runtime: 694.968 ms\"\n\n\n\n\n\n\"Szymon Guz\" <[email protected]> wrote in message news:[email protected]...\n\n\n\n 2010/6/19 Davor J. <[email protected]>\n\n I think I have read what is to be read about queries being prepared in\n plpgsql functions, but I still can not explain the following, so I thought\n to post it here:\n\n Suppose 2 functions: factor(int,int) and offset(int, int).\n Suppose a third function: convert(float,int,int) which simply returns\n $1*factor($2,$3)+offset($2,$3)\n All three functions are IMMUTABLE.\n\n Very simple, right? 
Now I have very fast AND very slow executing queries on\n some 150k records:\n\n VERY FAST (half a second):\n ----------------\n SELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n\n VERY SLOW (a minute):\n ----------------\n SELECT convert(data, 1, 2) FROM tbl_data;\n\n The slowness cannot be due to calling a function 150k times. If I define\n convert2(float,int,int) to return a constant value, then it executes in\n about a second. (still half as slow as the VERY FAST query).\n\n I assume that factor and offset are cached in the VERY FAST query, and not\n in the slow one? If so, why not and how can I \"force\" it? Currently I need\n only one function for conversions.\n\n Regards,\n Davor\n\n\n\n\n\n\n Hi,\n show us the code of those two functions and explain analyze of those queries.\n\n\n regards\n Szymon Guz\n\n\n\n\n\n\nI didn't consider them to be important as they \nshowed the same, only the execution time was different. Also, they are a bit \nmore complex than the ones put in the previous post. But here they \nare:\n \nDefinitions:-----------------------------------------------------------CREATE \nOR REPLACE FUNCTION appfunctions.fnc_unit_conversion_factor(_tree_id integer, \n_unit_to_id integer)  RETURNS real AS$BODY$ DECLAREBEGIN \nRETURN (SELECT unit_conv_factor AS factor  FROM \nvew_unit_conversions AS c  INNER JOIN tbl_sensors AS s ON (s.unit_id = \nc.unit_id_from)  INNER JOIN tbl_trees USING (sens_id)  WHERE \ntree_id = _tree_id AND unit_id_to = _unit_to_id)::real;END; \n$BODY$  LANGUAGE 'plpgsql' \nIMMUTABLE--------------------------CREATE OR REPLACE FUNCTION \nappfunctions.fnc_unit_conversion_offset(_tree_id integer, _unit_to_id \ninteger)  RETURNS real AS$BODY$ DECLAREBEGIN RETURN \n(SELECT unit_conv_offset AS offset  FROM vew_unit_conversions AS \nc  INNER JOIN tbl_sensors AS s ON (s.unit_id = \nc.unit_id_from)  INNER JOIN tbl_trees USING (sens_id)  WHERE \ntree_id = _tree_id AND unit_id_to = _unit_to_id)::real;END; \n$BODY$  LANGUAGE 'plpgsql' \nIMMUTABLE--------------------------CREATE OR REPLACE FUNCTION \nappfunctions.fnc_unit_convert(_rawdata real, _tree_id integer, _unit_to_id \ninteger)  RETURNS real AS$BODY$ DECLARE BEGIN RETURN \n_rawdata * fnc_unit_conversion_factor(_tree_id, _unit_to_id) \n + fnc_unit_conversion_offset(_tree_id, _unit_to_id);END; \n$BODY$  LANGUAGE 'plpgsql' IMMUTABLE\n \n \n \nExecutions:-----------------------------------------------------------EXPLAIN \nANALYSE SELECT timestamp,\n \ndata_from_tree_id_70 AS \"flow_11\" \n \n FROM (SELECT sens_chan_data_timestamp AS \ntimestamp, sens_chan_data_data AS data_from_tree_id_70 FROM \ntbl_sensor_channel_data WHERE tree_id = 70 AND sens_chan_data_timestamp >= \n'2008-06-11T00:00:00' AND sens_chan_data_timestamp <= '2008-06-18T00:00:00' ) \nAS \"70\" \n \n ORDER BY timestamp;\n \n\"Sort  (cost=175531.00..175794.64 rows=105456 \nwidth=12) (actual time=598.454..638.400 rows=150678 loops=1)\"\"  Sort \nKey: tbl_sensor_channel_data.sens_chan_data_timestamp\"\"  Sort \nMethod:  external sort  Disk: 3240kB\"\"  ->  Bitmap \nHeap Scan on tbl_sensor_channel_data  (cost=3005.29..166732.66 rows=105456 \nwidth=12) (actual time=34.810..371.099 rows=150678 \nloops=1)\"\"        Recheck Cond: ((tree_id \n= 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp \nwithout time zone) AND (sens_chan_data_timestamp <= '2008-06-18 \n00:00:00'::timestamp without time \nzone))\"\"        ->  Bitmap Index \nScan on tbl_sensor_channel_data_pkey  (cost=0.00..2978.92 rows=105456 \nwidth=0) (actual 
time=28.008..28.008 rows=150678 \nloops=1)\"\"              \nIndex Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 \n00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= \n'2008-06-18 00:00:00'::timestamp without time zone))\"\"Total runtime: 663.478 \nms\"-----------------------------------------------------------EXPLAIN \nANALYSE SELECT timestamp,\n \nfnc_unit_convert(data_from_tree_id_70, 70, 7) AS \n\"flow_11\" \n \n FROM (SELECT sens_chan_data_timestamp AS \ntimestamp, sens_chan_data_data AS data_from_tree_id_70 FROM \ntbl_sensor_channel_data WHERE tree_id = 70 AND sens_chan_data_timestamp >= \n'2008-06-11T00:00:00' AND sens_chan_data_timestamp <= '2008-06-18T00:00:00' ) \nAS \"70\" \n \n ORDER BY timestamp;\n \n\"Sort  (cost=201895.00..202158.64 rows=105456 \nwidth=12) (actual time=35334.017..35372.977 rows=150678 loops=1)\"\"  \nSort Key: tbl_sensor_channel_data.sens_chan_data_timestamp\"\"  Sort \nMethod:  external sort  Disk: 3240kB\"\"  ->  Bitmap \nHeap Scan on tbl_sensor_channel_data  (cost=3005.29..193096.66 rows=105456 \nwidth=12) (actual time=60.012..35037.129 rows=150678 \nloops=1)\"\"        Recheck Cond: ((tree_id \n= 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp \nwithout time zone) AND (sens_chan_data_timestamp <= '2008-06-18 \n00:00:00'::timestamp without time \nzone))\"\"        ->  Bitmap Index \nScan on tbl_sensor_channel_data_pkey  (cost=0.00..2978.92 rows=105456 \nwidth=0) (actual time=21.884..21.884 rows=150678 \nloops=1)\"\"              \nIndex Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 \n00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= \n'2008-06-18 00:00:00'::timestamp without time zone))\"\"Total runtime: \n35397.841 \nms\"-----------------------------------------------------------EXPLAIN \nANALYSE SELECT timestamp,\n \ndata_from_tree_id_70*fnc_unit_conversion_factor(70, \n7)+ fnc_unit_conversion_offset(70, 7) AS \"flow_11\" \n \n FROM (SELECT sens_chan_data_timestamp AS \ntimestamp, sens_chan_data_data AS data_from_tree_id_70 FROM \ntbl_sensor_channel_data WHERE tree_id = 70 AND sens_chan_data_timestamp >= \n'2008-06-11T00:00:00' AND sens_chan_data_timestamp <= '2008-06-18T00:00:00' ) \nAS \"70\" \n \n ORDER BY timestamp;\n \nEXPLAIN ANALYSE SELECT timestamp,\n \n\"Sort  (cost=176058.28..176321.92 rows=105456 \nwidth=12) (actual time=630.350..669.843 rows=150678 loops=1)\"\"  Sort \nKey: tbl_sensor_channel_data.sens_chan_data_timestamp\"\"  Sort \nMethod:  external sort  Disk: 3240kB\"\"  ->  Bitmap \nHeap Scan on tbl_sensor_channel_data  (cost=3005.29..167259.94 rows=105456 \nwidth=12) (actual time=35.498..399.726 rows=150678 \nloops=1)\"\"        Recheck Cond: ((tree_id \n= 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00'::timestamp \nwithout time zone) AND (sens_chan_data_timestamp <= '2008-06-18 \n00:00:00'::timestamp without time \nzone))\"\"        ->  Bitmap Index \nScan on tbl_sensor_channel_data_pkey  (cost=0.00..2978.92 rows=105456 \nwidth=0) (actual time=27.433..27.433 rows=150678 \nloops=1)\"\"              \nIndex Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 \n00:00:00'::timestamp without time zone) AND (sens_chan_data_timestamp <= \n'2008-06-18 00:00:00'::timestamp without time zone))\"\"Total runtime: 694.968 \nms\"\n \n \n \n \n \n\"Szymon Guz\" <[email protected]> wrote in message news:[email protected]...\n\n2010/6/19 Davor J. 
<[email protected]>\nI think I have read what is to be read about queries being \n prepared inplpgsql functions, but I still can not explain the following, \n so I thoughtto post it here:Suppose 2 functions: factor(int,int) \n and offset(int, int).Suppose a third function: convert(float,int,int) \n which simply returns$1*factor($2,$3)+offset($2,$3)All three \n functions are IMMUTABLE.Very simple, right? Now I have very fast AND \n very slow executing queries onsome 150k records:VERY FAST (half \n a second):----------------SELECT data*factor(1,2)+offset(1,2) FROM \n tbl_data;VERY SLOW (a minute):----------------SELECT \n convert(data, 1, 2) FROM tbl_data;The slowness cannot be due to \n calling a function 150k times. If I defineconvert2(float,int,int) to \n return a constant value, then it executes inabout a second. (still half \n as slow as the VERY FAST query).I assume that factor and offset are \n cached in the VERY FAST query, and notin the slow one? If so, why not \n and how can I \"force\" it? Currently I needonly one function for \n conversions.Regards,Davor\n\nHi,\nshow us the code of those two functions and explain analyze of those \n queries.\n\nregards\nSzymon Guz", "msg_date": "Sun, 20 Jun 2010 13:53:52 +0200", "msg_from": "\"Davor J.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow function in queries SELECT clause." }, { "msg_contents": "\"Davor J.\" <[email protected]> writes:\n> Suppose 2 functions: factor(int,int) and offset(int, int).\n> Suppose a third function: convert(float,int,int) which simply returns \n> $1*factor($2,$3)+offset($2,$3)\n> All three functions are IMMUTABLE.\n\nYou should write the third function as a SQL function, which'd allow it\nto be inlined.\n\n> VERY FAST (half a second):\n> ----------------\n> SELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n\nIn this case both factor() calls are folded to constants, hence executed\nonly once.\n\n> VERY SLOW (a minute):\n> ----------------\n> SELECT convert(data, 1, 2) FROM tbl_data;\n\nWithout inlining, there's no hope of any constant-folding here.\nThe optimizer just sees the plpgsql function as a black box and\ncan't do anything with it.\n\nBTW, your later mail shows that the factor() functions are not really\nIMMUTABLE, since they select from tables that presumably are subject to\nchange. The \"correct\" declaration would be STABLE. If you're relying\non constant-folding to get reasonable application performance, you're\ngoing to have to continue to mislabel them as IMMUTABLE; but be aware\nthat you're likely to have issues any time you do change the table\ncontents. The changes won't get reflected into existing query plans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Jun 2010 11:21:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow function in queries SELECT clause. " }, { "msg_contents": "Thanks Tom,\n\nYour concepts of \"inlining\" and \"black box\" really cleared things up for me. \nWith fnc_unit_convert() written in SQL and declared as STABLE I indeed have \nfast performance now.\n\nI appreciate the note on the IMMUTABLE part. The table contents should not \nchange in a way to affect the functions. 
So, as far as I understand the \nPostgres workings, this shouldn't pose a problem.\n\nRegards,\nDavor\n\n\"Tom Lane\" <[email protected]> wrote in message \nnews:[email protected]...\n> \"Davor J.\" <[email protected]> writes:\n>> Suppose 2 functions: factor(int,int) and offset(int, int).\n>> Suppose a third function: convert(float,int,int) which simply returns\n>> $1*factor($2,$3)+offset($2,$3)\n>> All three functions are IMMUTABLE.\n>\n> You should write the third function as a SQL function, which'd allow it\n> to be inlined.\n>\n>> VERY FAST (half a second):\n>> ----------------\n>> SELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n>\n> In this case both factor() calls are folded to constants, hence executed\n> only once.\n>\n>> VERY SLOW (a minute):\n>> ----------------\n>> SELECT convert(data, 1, 2) FROM tbl_data;\n>\n> Without inlining, there's no hope of any constant-folding here.\n> The optimizer just sees the plpgsql function as a black box and\n> can't do anything with it.\n>\n> BTW, your later mail shows that the factor() functions are not really\n> IMMUTABLE, since they select from tables that presumably are subject to\n> change. The \"correct\" declaration would be STABLE. If you're relying\n> on constant-folding to get reasonable application performance, you're\n> going to have to continue to mislabel them as IMMUTABLE; but be aware\n> that you're likely to have issues any time you do change the table\n> contents. The changes won't get reflected into existing query plans.\n>\n> regards, tom lane\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n", "msg_date": "Mon, 21 Jun 2010 08:57:41 +0200", "msg_from": "\"Davor J.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow function in queries SELECT clause." }, { "msg_contents": "> \"Tom Lane\" <[email protected]> wrote in message \n> news:[email protected]...\n>> \"Davor J.\" <[email protected]> writes:\n>>> Suppose 2 functions: factor(int,int) and offset(int, int).\n>>> Suppose a third function: convert(float,int,int) which simply returns\n>>> $1*factor($2,$3)+offset($2,$3)\n>>> All three functions are IMMUTABLE.\n>>\n>> You should write the third function as a SQL function, which'd allow it\n>> to be inlined.\n>>\n>>> VERY FAST (half a second):\n>>> ----------------\n>>> SELECT data*factor(1,2)+offset(1,2) FROM tbl_data;\n>>\n>> In this case both factor() calls are folded to constants, hence executed\n>> only once.\n>>\n>>> VERY SLOW (a minute):\n>>> ----------------\n>>> SELECT convert(data, 1, 2) FROM tbl_data;\n>>\n>> Without inlining, there's no hope of any constant-folding here.\n>> The optimizer just sees the plpgsql function as a black box and\n>> can't do anything with it.\n>>\n> Your concepts of \"inlining\" and \"black box\" really cleared things up for \n> me. With fnc_unit_convert() written in SQL and declared as STABLE I indeed \n> have fast performance now.\n\nA note on performance here: If I declare the fast SQL function \nfnc_unit_convert() as STRICT or as SECURITY DEFINER, then I suddenly get \nslow performance again (i.e. no apparent inlining). \n\n\n", "msg_date": "Thu, 12 Aug 2010 14:00:36 +0200", "msg_from": "\"Davor J.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow function in queries SELECT clause." } ]
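A sketch of the inlinable SQL-language rewrite Tom recommends, using the function names from Davor's later message. Parameters are referenced positionally in the body, as SQL-language functions on 8.4 require, and per Davor's follow-up it must not be declared STRICT or SECURITY DEFINER or inlining is lost:

    CREATE OR REPLACE FUNCTION fnc_unit_convert(_rawdata real, _tree_id integer, _unit_to_id integer)
      RETURNS real AS $$
        -- replaces the plpgsql wrapper from the thread with a plain SQL body
        SELECT $1 * fnc_unit_conversion_factor($2, $3)
                  + fnc_unit_conversion_offset($2, $3);
    $$ LANGUAGE sql STABLE;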
[ { "msg_contents": "Which one is good - join between table or using exists in where condition?\n\nQuery 1;\n\nSelect a.*\nfrom a\nwhere exists\n(\nselect 1 from b inner join c on b.id1 = c.id where a.id = b.id)\n\nQuery 2:\nselect a.*\nfrom a\ninner join\n(select b.id from b inner join c on b.id1 = c.id) as q\non a.id = q.id\n\nAny suggestion please.\n\nWhich one is good - join between table or using exists in where condition?Query 1;Select a.*from awhere exists(select 1 from b inner join c on b.id1 = c.id where a.id = b.id)\nQuery 2:select a.*from ainner join (select b.id from b inner join c on b.id1 = c.id) as qon a.id = q.id\nAny suggestion please.", "msg_date": "Sun, 20 Jun 2010 14:55:45 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "join vs exists" } ]
[ { "msg_contents": "AI Rumman wrote:\n \n> Which one is good - join between table or using exists in where\n> condition?\n \nYour example wouldn't return the same results unless there was at\nmost one matching row in b and one matching row in c, at least\nwithout resorting to DISTINCT (which you don't show). So, be careful\nof not getting the wrong results in an attempt to optimize.\n \nYou don't say which version of PostgreSQL you're using, but if its a\nfairly recent major version, I would expect nearly identical\nperformance if the queries returned the same results without\nDISTINCT, and would usually expect better results for the EXISTS than\nthe JOIN with DISTINCT.\n \n-Kevin\n\n", "msg_date": "Sun, 20 Jun 2010 10:02:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join vs exists" } ]
[ { "msg_contents": "Hi.\n\nI have been wondering if anyone has been experimenting with \"really \nagressive\"\nautovacuuming. The database I'm adminstrating rarely have \"long running\" \ntransactions\n(over several minutes). And a fair amount of buffercache and an OS cache of\n(at best 64GB). A lot of the OS cache is being used for read-caching.\n\nMy thought was that if I tuned autovacuum to be \"really aggressive\" then\nI could get autovacuum to actually vacuum the tuples before they\nget evicted from the OS cache thus effectively \"saving\" the IO-overhead\nof vacuuming.\n\nThe largest consequence I can see at the moment is that when I get a\nfull vacuum (for preventing transaction-id wraparound) it would be\nrun with the same aggressive settings, thus giving a real performance\nhit in that situation.\n\nHas anyone tried to do similar? What is your experience?\nIs the idea totally bogus?\n\nJesper\n\n-- \nJesper Krogh\n", "msg_date": "Sun, 20 Jun 2010 19:44:29 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Aggressive autovacuuming ?" }, { "msg_contents": "On Sun, Jun 20, 2010 at 11:44 AM, Jesper Krogh <[email protected]> wrote:\n> Hi.\n>\n> I have been wondering if anyone has been experimenting with \"really\n> agressive\"\n> autovacuuming.\n\nI have been using moderately aggressive autovac, with 6 or more\nthreads running with 1ms sleep, then keeping track of them to see if\nthey're being too aggresive. Basically as long as io utilization\ndoesn't hit 100% it doesn't seem to have any negative or even\nnoticeable effect.\n\nI head more in the direction of running a few more threads than I\nabsolutely need to keep up with bloat. If I'm set for 5 threads and I\nalways have five threads running, I go to 6, 7, 8 or wherever they're\nnever all active.\n\nBut you need the IO capability to use aggresive vacuuming.\n\n> The database I'm adminstrating rarely have \"long running\"\n> transactions\n> (over several minutes). And a fair amount of buffercache and an OS cache of\n> (at best 64GB). A lot of the OS cache is being used for read-caching.\n>\n> My thought was that if I tuned autovacuum to be \"really aggressive\" then\n> I could get autovacuum to actually vacuum the tuples before they\n> get evicted from the OS cache thus effectively \"saving\" the IO-overhead\n> of vacuuming.\n\nBut vacuuming by design has to write out and that's the real resource\nyou're likely to use up first.\n\n> The largest consequence I can see at the moment is that when I get a\n> full vacuum (for preventing transaction-id wraparound) it would be\n\nI assume you mean the automatic database wide vacuum. I don't think\n8.4 and above need that anymore. I thnk 8.3 does that too, but I'm\nnot 100% sure.\n\n> run with the same aggressive settings, thus giving a real performance\n> hit in that situation.\n>\n> Has anyone tried to do similar? What is your experience?\n> Is the idea totally bogus?\n\nCranking up autovacuum is not a bogus idea, but it directly impacts\nyour IO subsystem, and if you get too aggressive (zero naptime is way\naggressive) you have to back off on the number of threads to keep\nthings sane. If your IO subsystem is one 7200RPM SATA drive with\nwrite cache disabled / fsync properly working, you're not gonna be\nable to get very aggresive before you make your IO subsystem bog down.\n", "msg_date": "Sun, 20 Jun 2010 14:13:15 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggressive autovacuuming ?" 
}, { "msg_contents": "Excerpts from Scott Marlowe's message of dom jun 20 16:13:15 -0400 2010:\n> On Sun, Jun 20, 2010 at 11:44 AM, Jesper Krogh <[email protected]> wrote:\n> > Hi.\n> >\n> > I have been wondering if anyone has been experimenting with \"really\n> > agressive\"\n> > autovacuuming.\n> \n> I have been using moderately aggressive autovac, with 6 or more\n> threads running with 1ms sleep, then keeping track of them to see if\n> they're being too aggresive. Basically as long as io utilization\n> doesn't hit 100% it doesn't seem to have any negative or even\n> noticeable effect.\n\nKeep in mind that autovacuum scales down the cost limit the more workers\nthere are. So if you have 10ms sleeps and 1 worker, it should roughly\nuse a similar amount of I/O than if you have 10ms sleeps and 10 workers\n(each worker would sleep 10 times more frequently).\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 21 Jun 2010 12:36:54 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggressive autovacuuming ?" }, { "msg_contents": "Jesper Krogh <[email protected]> wrote:\n \n> My thought was that if I tuned autovacuum to be \"really\n> aggressive\" then I could get autovacuum to actually vacuum the\n> tuples before they get evicted from the OS cache thus effectively\n> \"saving\" the IO-overhead of vacuuming.\n \nInteresting concept. That might be a way to avoid the extra disk\nI/O to set hint bits, and then some. I haven't tried it, but I'm\ngoing to make a note to take a look when (if???) I get some free\ntime. If you give it a try, please post the results. If you're I/O\nbound (rather than CPU bound) and you choose *extremely* aggressive\nsettings, the multiple writes to pages *might* collapse in cache and\nsignificantly reduce I/O.\n \nI don't think I'd try it on a release prior to 8.4, however. Nor\nwould I consider trying this in a production environment without a\ngood set of tests.\n \n-Kevin\n", "msg_date": "Mon, 21 Jun 2010 12:01:38 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggressive autovacuuming ?" }, { "msg_contents": "On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe <[email protected]> wrote:\n>> The largest consequence I can see at the moment is that when I get a\n>> full vacuum (for preventing transaction-id wraparound) it would be\n>\n> I assume you mean the automatic database wide vacuum.  I don't think\n> 8.4 and above need that anymore.  I thnk 8.3 does that too, but I'm\n> not 100% sure.\n\n8.4 (and 9.0) do still need to do vacuums to freeze tuples before\ntransaction ID wraparound occurs. This is not to be confused with\nVACUUM FULL, which is something else altogether.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 23 Jun 2010 13:58:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggressive autovacuuming ?" }, { "msg_contents": "On Wed, Jun 23, 2010 at 1:58 PM, Robert Haas <[email protected]> wrote:\n> On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe <[email protected]> wrote:\n>>> The largest consequence I can see at the moment is that when I get a\n>>> full vacuum (for preventing transaction-id wraparound) it would be\n>>\n>> I assume you mean the automatic database wide vacuum.  
I don't think\n>> 8.4 and above need that anymore.  I thnk 8.3 does that too, but I'm\n>> not 100% sure.\n>\n> 8.4 (and 9.0) do still need to do vacuums to freeze tuples before\n> transaction ID wraparound occurs.  This is not to be confused with\n> VACUUM FULL, which is something else altogether.\n\nMy point was that modern pgsql doesn't need db wide vacuum to prevent\nwrap around anymore, but can vacuum individual relations to prevent\nwraparound.\n", "msg_date": "Wed, 23 Jun 2010 14:20:46 -0400", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggressive autovacuuming ?" }, { "msg_contents": "On Wed, Jun 23, 2010 at 2:20 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Jun 23, 2010 at 1:58 PM, Robert Haas <[email protected]> wrote:\n>> On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe <[email protected]> wrote:\n>>>> The largest consequence I can see at the moment is that when I get a\n>>>> full vacuum (for preventing transaction-id wraparound) it would be\n>>>\n>>> I assume you mean the automatic database wide vacuum.  I don't think\n>>> 8.4 and above need that anymore.  I thnk 8.3 does that too, but I'm\n>>> not 100% sure.\n>>\n>> 8.4 (and 9.0) do still need to do vacuums to freeze tuples before\n>> transaction ID wraparound occurs.  This is not to be confused with\n>> VACUUM FULL, which is something else altogether.\n>\n> My point was that modern pgsql doesn't need db wide vacuum to prevent\n> wrap around anymore, but can vacuum individual relations to prevent\n> wraparound.\n\nOh, I see. I didn't realize we used to do that. Looks like that\nchange was committed 11/5/2006.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 23 Jun 2010 14:49:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Aggressive autovacuuming ?" } ]
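To make the tuning discussed in this thread concrete, here is a rough sketch of how per-table autovacuum settings can be applied on 8.4 or later, where the autovacuum knobs are exposed as table storage parameters; the table name is hypothetical and the values are only examples to experiment with, not recommendations:

-- hypothetical high-churn table: trigger vacuum after ~1% of rows change
-- and let its worker pause only 1 ms between cost-limited rounds
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay = 1
);

-- the matching cluster-wide knobs live in postgresql.conf
SHOW autovacuum_max_workers;
SHOW autovacuum_naptime;
SHOW autovacuum_vacuum_cost_limit;

As Alvaro notes above, the cost limit is shared among the active workers, so adding workers by itself does not buy a larger total I/O budget; watch disk utilization while ramping these up, as Scott suggests.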
[ { "msg_contents": "Hi,\n\nI'm getting low performance on SUM and GROUP BY queries.\nHow can I improve my database to perform such queries.\n\nHere is my table schema:\n=> \\d acct_2010_25\n Tabela \"public.acct_2010_25\"\n Coluna | Tipo |\nModificadores\n----------------+-----------------------------+------------------------------------------------------------------------\n ip_src | inet | not null default\n'0.0.0.0'::inet\n ip_dst | inet | not null default\n'0.0.0.0'::inet\n as_src | bigint | not null default 0\n as_dst | bigint | not null default 0\n port_src | integer | not null default 0\n port_dst | integer | not null default 0\n tcp_flags | smallint | not null default 0\n ip_proto | smallint | not null default 0\n packets | integer | not null\n flows | integer | not null default 0\n bytes | bigint | not null\n stamp_inserted | timestamp without time zone | not null default '0001-01-01\n00:00:00 BC'::timestamp without time zone\n stamp_updated | timestamp without time zone |\nÍndices:\n \"acct_2010_25_pk\" PRIMARY KEY, btree (stamp_inserted, ip_src, ip_dst,\nport_src, port_dst, ip_proto)\n \"ibytes_acct_2010_25\" btree (bytes)\n\nHere is my one query example (could add pk to flow and packet fields):\n\n=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags,\nip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM \"acct_2010_25\" WHERE\n\"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21 10:10'\nGROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto order by\nSUM(bytes) desc LIMIT 50 OFFSET 0;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3998662.81..3998662.94 rows=50 width=50) (actual\ntime=276981.107..276981.133 rows=50 loops=1)\n -> Sort (cost=3998662.81..4001046.07 rows=953305 width=50) (actual\ntime=276981.105..276981.107 rows=50 loops=1)\n Sort Key: sum(bytes)\n -> GroupAggregate (cost=3499863.27..3754872.33 rows=953305\nwidth=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)\n -> Sort (cost=3499863.27..3523695.89 rows=9533049 width=50)\n(actual time=165468.022..168908.828 rows=9494165 loops=1)\n Sort Key: ip_src, port_src, ip_dst, port_dst,\ntcp_flags, ip_proto\n -> Seq Scan on acct_2010_25 (cost=0.00..352648.10\nrows=9533049 width=50) (actual time=0.038..50860.391 rows=9494165 loops=1)\n Filter: ((stamp_inserted >= '2010-06-20\n10:10:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-21\n10:10:00'::timestamp without time zone))\n Total runtime: 278791.661 ms\n(9 registros)\n\nAnother one just summing bytes (still low):\n\n=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags,\nip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n\"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21 10:10'\nGROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto LIMIT 50\nOFFSET 0;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3395202.50..3395213.12 rows=50 width=42) (actual\ntime=106261.359..106261.451 rows=50 loops=1)\n -> GroupAggregate (cost=3395202.50..3602225.48 rows=974226 width=42)\n(actual time=106261.357..106261.435 rows=50 loops=1)\n -> Sort (cost=3395202.50..3419558.14 rows=9742258 width=42)\n(actual time=106261.107..106261.169 
rows=176 loops=1)\n Sort Key: ip_src, port_src, ip_dst, port_dst, tcp_flags,\nip_proto\n -> Seq Scan on acct_2010_25 (cost=0.00..367529.72\nrows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1)\n Filter: ((stamp_inserted >= '2010-06-20\n10:10:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-21\n10:10:00'::timestamp without time zone))\n Total runtime: 109911.882 ms\n(7 registros)\n\n\nThe server has 2 Intel(R) Xeon(R) CPU E5430 @ 2.66GHz and 16GB RAM.\nI'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just\nincreased checkpoint_segments to 50).\n\nWhat can I change to increase performance?\n\nThanks in advance.\n\nCheers.\n\n-- \nSergio Roberto Charpinel Jr.\n\nHi,I'm getting low performance on SUM and GROUP BY queries.How can I improve my database to perform such queries.Here is my table schema:=> \\d acct_2010_25\n                                             Tabela \"public.acct_2010_25\"     Coluna     |            Tipo             |                             Modificadores                              \n----------------+-----------------------------+------------------------------------------------------------------------ ip_src         | inet                        | not null default '0.0.0.0'::inet\n ip_dst         | inet                        | not null default '0.0.0.0'::inet as_src         | bigint                      | not null default 0 as_dst         | bigint                      | not null default 0\n port_src       | integer                     | not null default 0 port_dst       | integer                     | not null default 0 tcp_flags      | smallint                    | not null default 0\n ip_proto       | smallint                    | not null default 0 packets        | integer                     | not null flows          | integer                     | not null default 0\n bytes          | bigint                      | not null stamp_inserted | timestamp without time zone | not null default '0001-01-01 00:00:00 BC'::timestamp without time zone stamp_updated  | timestamp without time zone | \nÍndices:    \"acct_2010_25_pk\" PRIMARY KEY, btree (stamp_inserted, ip_src, ip_dst, port_src, port_dst, ip_proto)    \"ibytes_acct_2010_25\" btree (bytes)\nHere is my one query example (could add pk to flow and packet fields):=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET 0;\n                                                                                      QUERY PLAN                                                                                      --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=3998662.81..3998662.94 rows=50 width=50) (actual time=276981.107..276981.133 rows=50 loops=1)   ->  Sort  (cost=3998662.81..4001046.07 rows=953305 width=50) (actual time=276981.105..276981.107 rows=50 loops=1)\n         Sort Key: sum(bytes)         ->  GroupAggregate  (cost=3499863.27..3754872.33 rows=953305 width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)               ->  Sort  (cost=3499863.27..3523695.89 rows=9533049 width=50) (actual time=165468.022..168908.828 
rows=9494165 loops=1)\n                     Sort Key: ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto                     ->  Seq Scan on acct_2010_25  (cost=0.00..352648.10 rows=9533049 width=50) (actual time=0.038..50860.391 rows=9494165 loops=1)\n                           Filter: ((stamp_inserted >= '2010-06-20 10:10:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-21 10:10:00'::timestamp without time zone)) Total runtime: 278791.661 ms\n(9 registros)Another one just summing bytes (still low):=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto LIMIT 50 OFFSET 0;\n                                                                                   QUERY PLAN                                                                                   --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=3395202.50..3395213.12 rows=50 width=42) (actual time=106261.359..106261.451 rows=50 loops=1)   ->  GroupAggregate  (cost=3395202.50..3602225.48 rows=974226 width=42) (actual time=106261.357..106261.435 rows=50 loops=1)\n         ->  Sort  (cost=3395202.50..3419558.14 rows=9742258 width=42) (actual time=106261.107..106261.169 rows=176 loops=1)               Sort Key: ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto\n               ->  Seq Scan on acct_2010_25  (cost=0.00..367529.72 rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1)                     Filter: ((stamp_inserted >= '2010-06-20 10:10:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-21 10:10:00'::timestamp without time zone))\n Total runtime: 109911.882 ms(7 registros)The server has 2 Intel(R) Xeon(R) CPU  E5430 @ 2.66GHz and 16GB RAM.I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just increased checkpoint_segments to 50).\nWhat can I change to increase performance?Thanks in advance.Cheers.-- Sergio Roberto Charpinel Jr.", "msg_date": "Mon, 21 Jun 2010 11:42:14 -0300", "msg_from": "\"Sergio Charpinel Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Low perfomance SUM and Group by large databse" }, { "msg_contents": "On 21/06/10 22:42, Sergio Charpinel Jr. wrote:\n> Hi,\n> \n> I'm getting low performance on SUM and GROUP BY queries.\n> How can I improve my database to perform such queries.\n\n> -> Sort (cost=3499863.27..3523695.89 rows=9533049\n> width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1)\n> Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n> -> Seq Scan on acct_2010_25 (cost=0.00..352648.10\n> rows=9533049 width=50) (actual time=0.038..50860.391 rows=9494165 loops=1)\n> Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n\nProvide an index on at least (ip_src,port_src,ip_dst,port_dst). 
If you\nfrequently do other queries that only want some of that information you\ncould create several individual indexes for those columns instead, as Pg\nwill combine them for a query, but that is much less efficient than an\nindex across all four columns.\n\nCREATE INDEX ip_peers_idx ON acct_2010_25(ip_src,port_src,ip_dst_port_dst);\n\nEvery index added costs you insert/update/delete speed, so try to find\nthe smallest/simplest index that gives you acceptable performance.\n\n> Another one just summing bytes (still low):\n> \n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags,\n> ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n> \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21\n> 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto\n> LIMIT 50 OFFSET 0;\n\nSame deal. You have no suitable index, so Pg has to do a sequential scan\nof the table. Since you appear to query on stamp_inserted a lot, you\nshould index it.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 22 Jun 2010 06:06:47 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low perfomance SUM and Group by large databse" }, { "msg_contents": "On 22/06/10 00:42, Sergio Charpinel Jr. wrote:\n> Hi,\n>\n[snip]\n>\n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM\n> \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND\n> \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst,\n> port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET 0;\n> \n> QUERY PLAN \n> \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=3998662.81..3998662.94 rows=50 width=50) (actual\n> time=276981.107..276981.133 rows=50 loops=1)\n> -> Sort (cost=3998662.81..4001046.07 rows=953305 width=50)\n> (actual time=276981.105..276981.107 rows=50 loops=1)\n> Sort Key: sum(bytes)\n> -> GroupAggregate (cost=3499863.27..3754872.33 rows=953305\n> width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)\n> -> Sort (cost=3499863.27..3523695.89 rows=9533049\n> width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1)\n> Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n\nYou are having to sort and aggregate a large number of rows before you\ncan get the top 50. That's 9 million rows in this case, width 50 =\n400MB+ sort. That's going to be slow as you are going to have to sort\nit on disk unless you bump up sort mem to 500Mb (bad idea). So unless\nyou have really fast storage for temporary tables it's going to take a\nwhile. About 2.5 minutes you are experiencing at the moment is probably\nnot too bad.\n\nI'm sure improvements have been made in the area since 8.1 and if you\nare able to upgrade to 8.4 which is also offered by Centos5 now, you\nmight get benefit there. 
I can't remember the specific benefits, but I\nbelieve sorting speed has improved, your explain analyze will also give\nyou more information about what's going on with disk/memory sorting.\n\n> -> Seq Scan on acct_2010_25\n> (cost=0.00..352648.10 rows=9533049 width=50) (actual\n> time=0.038..50860.391 rows=9494165 loops=1)\n> Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n> Total runtime: 278791.661 ms\n> (9 registros)\n>\n> Another one just summing bytes (still low):\n>\n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n> \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21\n> 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags,\n> ip_proto LIMIT 50 OFFSET 0;\n> \n> QUERY PLAN \n> \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=3395202.50..3395213.12 rows=50 width=42) (actual\n> time=106261.359..106261.451 rows=50 loops=1)\n> -> GroupAggregate (cost=3395202.50..3602225.48 rows=974226\n> width=42) (actual time=106261.357..106261.435 rows=50 loops=1)\n> -> Sort (cost=3395202.50..3419558.14 rows=9742258 width=42)\n> (actual time=106261.107..106261.169 rows=176 loops=1)\n> Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n> -> Seq Scan on acct_2010_25 (cost=0.00..367529.72\n> rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1)\n> Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n> Total runtime: 109911.882 ms\n> (7 registros)\n>\n>\n> The server has 2 Intel(R) Xeon(R) CPU E5430 @ 2.66GHz and 16GB RAM.\n> I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just\n> increased checkpoint_segments to 50).\n\nCheckpoint segments won't help you as the number of segments is about\nwriting to the database and how fast that can happen.\n\n>\n> What can I change to increase performance?\n\nIncreasing sort-memory (work_mem) will give you speed benefits even\nthough you are going to disk. I don't know how much spare memory you\nhave, but trying other values between 8MB and 128MB may be useful just\nfor the specific query runs. If you can afford 512Mb for each of the\ntwo sorts, go for that, but it's dangerous as mentioned due to the risk\nof using more RAM than you have. work_mem allocates that amount of\nmemory per sort.\n\nIf you are running these queries all the time, a summary table the\nproduces there reports on a regular basis, maybe daily or even hourly\nwould be useful. Basically the large amount of information that needs\nto be processed and sorted is what's taking all the time here. \n\nRegards\n\nRussell\n", "msg_date": "Tue, 22 Jun 2010 19:45:19 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low perfomance SUM and Group by large databse" }, { "msg_contents": "Craig, Russel,\n\nI appreciate your help.\n\nThanks.\n\n2010/6/22 Russell Smith <[email protected]>\n\n> On 22/06/10 00:42, Sergio Charpinel Jr. 
wrote:\n> > Hi,\n> >\n> [snip]\n> >\n> > => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> > tcp_flags, ip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM\n> > \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND\n> > \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst,\n> > port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET 0;\n> >\n> > QUERY PLAN\n> >\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=3998662.81..3998662.94 rows=50 width=50) (actual\n> > time=276981.107..276981.133 rows=50 loops=1)\n> > -> Sort (cost=3998662.81..4001046.07 rows=953305 width=50)\n> > (actual time=276981.105..276981.107 rows=50 loops=1)\n> > Sort Key: sum(bytes)\n> > -> GroupAggregate (cost=3499863.27..3754872.33 rows=953305\n> > width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)\n> > -> Sort (cost=3499863.27..3523695.89 rows=9533049\n> > width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1)\n> > Sort Key: ip_src, port_src, ip_dst, port_dst,\n> > tcp_flags, ip_proto\n>\n> You are having to sort and aggregate a large number of rows before you\n> can get the top 50. That's 9 million rows in this case, width 50 =\n> 400MB+ sort. That's going to be slow as you are going to have to sort\n> it on disk unless you bump up sort mem to 500Mb (bad idea). So unless\n> you have really fast storage for temporary tables it's going to take a\n> while. About 2.5 minutes you are experiencing at the moment is probably\n> not too bad.\n>\n> I'm sure improvements have been made in the area since 8.1 and if you\n> are able to upgrade to 8.4 which is also offered by Centos5 now, you\n> might get benefit there. 
I can't remember the specific benefits, but I\n> believe sorting speed has improved, your explain analyze will also give\n> you more information about what's going on with disk/memory sorting.\n>\n> > -> Seq Scan on acct_2010_25\n> > (cost=0.00..352648.10 rows=9533049 width=50) (actual\n> > time=0.038..50860.391 rows=9494165 loops=1)\n> > Filter: ((stamp_inserted >= '2010-06-20\n> > 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> > '2010-06-21 10:10:00'::timestamp without time zone))\n> > Total runtime: 278791.661 ms\n> > (9 registros)\n> >\n> > Another one just summing bytes (still low):\n> >\n> > => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> > tcp_flags, ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n> > \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21\n> > 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags,\n> > ip_proto LIMIT 50 OFFSET 0;\n> >\n> > QUERY PLAN\n> >\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=3395202.50..3395213.12 rows=50 width=42) (actual\n> > time=106261.359..106261.451 rows=50 loops=1)\n> > -> GroupAggregate (cost=3395202.50..3602225.48 rows=974226\n> > width=42) (actual time=106261.357..106261.435 rows=50 loops=1)\n> > -> Sort (cost=3395202.50..3419558.14 rows=9742258 width=42)\n> > (actual time=106261.107..106261.169 rows=176 loops=1)\n> > Sort Key: ip_src, port_src, ip_dst, port_dst,\n> > tcp_flags, ip_proto\n> > -> Seq Scan on acct_2010_25 (cost=0.00..367529.72\n> > rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1)\n> > Filter: ((stamp_inserted >= '2010-06-20\n> > 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> > '2010-06-21 10:10:00'::timestamp without time zone))\n> > Total runtime: 109911.882 ms\n> > (7 registros)\n> >\n> >\n> > The server has 2 Intel(R) Xeon(R) CPU E5430 @ 2.66GHz and 16GB RAM.\n> > I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just\n> > increased checkpoint_segments to 50).\n>\n> Checkpoint segments won't help you as the number of segments is about\n> writing to the database and how fast that can happen.\n>\n> >\n> > What can I change to increase performance?\n>\n> Increasing sort-memory (work_mem) will give you speed benefits even\n> though you are going to disk. I don't know how much spare memory you\n> have, but trying other values between 8MB and 128MB may be useful just\n> for the specific query runs. If you can afford 512Mb for each of the\n> two sorts, go for that, but it's dangerous as mentioned due to the risk\n> of using more RAM than you have. work_mem allocates that amount of\n> memory per sort.\n>\n> If you are running these queries all the time, a summary table the\n> produces there reports on a regular basis, maybe daily or even hourly\n> would be useful. Basically the large amount of information that needs\n> to be processed and sorted is what's taking all the time here.\n>\n> Regards\n>\n> Russell\n>\n\n\n\n-- \nSergio Roberto Charpinel Jr.\n\nCraig, Russel,I appreciate your help.Thanks.2010/6/22 Russell Smith <[email protected]>\nOn 22/06/10 00:42, Sergio Charpinel Jr. 
wrote:\n> Hi,\n>\n[snip]\n>\n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM\n> \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND\n> \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst,\n> port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET 0;\n>\n>                QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=3998662.81..3998662.94 rows=50 width=50) (actual\n> time=276981.107..276981.133 rows=50 loops=1)\n>    ->  Sort  (cost=3998662.81..4001046.07 rows=953305 width=50)\n> (actual time=276981.105..276981.107 rows=50 loops=1)\n>          Sort Key: sum(bytes)\n>          ->  GroupAggregate  (cost=3499863.27..3754872.33 rows=953305\n> width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)\n>                ->  Sort  (cost=3499863.27..3523695.89 rows=9533049\n> width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1)\n>                      Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n\nYou are having to sort and aggregate a large number of rows before you\ncan get the top 50.  That's 9 million rows in this case, width 50 =\n400MB+ sort.  That's going to be slow as you are going to have to sort\nit on disk unless you bump up sort mem to 500Mb (bad idea).  So unless\nyou have really fast storage for temporary tables it's going to take a\nwhile.  About 2.5 minutes you are experiencing at the moment is probably\nnot too bad.\n\nI'm sure improvements have been made in the area since 8.1 and if you\nare able to upgrade to 8.4 which is also offered by Centos5 now, you\nmight get benefit there.  
I can't remember the specific benefits, but I\nbelieve sorting speed has improved, your explain analyze will also give\nyou more information about what's going on with disk/memory sorting.\n\n>                      ->  Seq Scan on acct_2010_25\n>  (cost=0.00..352648.10 rows=9533049 width=50) (actual\n> time=0.038..50860.391 rows=9494165 loops=1)\n>                            Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n>  Total runtime: 278791.661 ms\n> (9 registros)\n>\n> Another one just summing bytes (still low):\n>\n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n> \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21\n> 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags,\n> ip_proto LIMIT 50 OFFSET 0;\n>\n>             QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=3395202.50..3395213.12 rows=50 width=42) (actual\n> time=106261.359..106261.451 rows=50 loops=1)\n>    ->  GroupAggregate  (cost=3395202.50..3602225.48 rows=974226\n> width=42) (actual time=106261.357..106261.435 rows=50 loops=1)\n>          ->  Sort  (cost=3395202.50..3419558.14 rows=9742258 width=42)\n> (actual time=106261.107..106261.169 rows=176 loops=1)\n>                Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n>                ->  Seq Scan on acct_2010_25  (cost=0.00..367529.72\n> rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1)\n>                      Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n>  Total runtime: 109911.882 ms\n> (7 registros)\n>\n>\n> The server has 2 Intel(R) Xeon(R) CPU  E5430 @ 2.66GHz and 16GB RAM.\n> I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just\n> increased checkpoint_segments to 50).\n\nCheckpoint segments won't help you as the number of segments is about\nwriting to the database and how fast that can happen.\n\n>\n> What can I change to increase performance?\n\nIncreasing sort-memory (work_mem) will give you speed benefits even\nthough you are going to disk.  I don't know how much spare memory you\nhave, but trying other values between 8MB and 128MB may be useful just\nfor the specific query runs.  If you can afford 512Mb for each of the\ntwo sorts, go for that, but it's dangerous as mentioned due to the risk\nof using more RAM than you have.  work_mem allocates that amount of\nmemory per sort.\n\nIf you are running these queries all the time, a summary table the\nproduces there reports on a regular basis, maybe daily or even hourly\nwould be useful.  Basically the large amount of information that needs\nto be processed and sorted is what's taking all the time here.\n\nRegards\n\nRussell\n-- Sergio Roberto Charpinel Jr.", "msg_date": "Wed, 23 Jun 2010 08:40:39 -0300", "msg_from": "\"Sergio Charpinel Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low perfomance SUM and Group by large databse" }, { "msg_contents": "Hi,\n\nOne more question about two specifics query behavior: If I add \"AND (ip_dst\n= x.x.x.x)\", it uses another plan and take a much more time. 
In both of\nthem, I'm using WHERE clause. Why this behavior?\n\n=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags,\nip_proto, bytes, packets, flows FROM \"acct_2010_26\" WHERE\n\"stamp_inserted\">='2010-06-28 09:07' AND \"stamp_inserted\"<'2010-06-29 08:07'\nAND (ip_dst = '8.8.8.8') ORDER BY bytes DESC LIMIT 50 OFFSET 0;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=496332.56..496332.69 rows=50 width=50) (actual\ntime=125390.523..125390.540 rows=50 loops=1)\n -> Sort (cost=496332.56..496351.35 rows=7517 width=50) (actual\ntime=125390.520..125390.525 rows=50 loops=1)\n Sort Key: bytes\n -> Index Scan using acct_2010_26_pk on acct_2010_26\n (cost=0.00..495848.62 rows=7517 width=50) (actual time=0.589..125385.680\nrows=1011 loops=1)\n Index Cond: ((stamp_inserted >= '2010-06-28\n09:07:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-29\n08:07:00'::timestamp without time zone) AND (ip_dst = '8.8.8.8'::inet))\n Total runtime: 125390.711 ms\n(6 registros)\n\n\n=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags,\nip_proto, bytes, packets, flows FROM \"acct_2010_26\" WHERE\n\"stamp_inserted\">='2010-06-28 09:07' AND \"stamp_inserted\"<'2010-06-29 08:07'\nORDER BY bytes DESC LIMIT 50 OFFSET 0;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..268.25 rows=50 width=50) (actual time=0.150..70.780\nrows=50 loops=1)\n -> Index Scan Backward using ibytes_acct_2010_26 on acct_2010_26\n (cost=0.00..133240575.70 rows=24835384 width=50) (actual time=0.149..70.762\nrows=50 loops=1)\n Filter: ((stamp_inserted >= '2010-06-28 09:07:00'::timestamp\nwithout time zone) AND (stamp_inserted < '2010-06-29 08:07:00'::timestamp\nwithout time zone))\n Total runtime: 70.830 ms\n(4 registros)\n\n\nThanks in advance.\n\n2010/6/23 Sergio Charpinel Jr. <[email protected]>\n\n> Craig, Russel,\n>\n> I appreciate your help.\n>\n> Thanks.\n>\n> 2010/6/22 Russell Smith <[email protected]>\n>\n> On 22/06/10 00:42, Sergio Charpinel Jr. 
wrote:\n>> > Hi,\n>> >\n>> [snip]\n>> >\n>> > => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n>> > tcp_flags, ip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM\n>> > \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND\n>> > \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst,\n>> > port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET\n>> 0;\n>> >\n>> > QUERY PLAN\n>> >\n>> >\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> > Limit (cost=3998662.81..3998662.94 rows=50 width=50) (actual\n>> > time=276981.107..276981.133 rows=50 loops=1)\n>> > -> Sort (cost=3998662.81..4001046.07 rows=953305 width=50)\n>> > (actual time=276981.105..276981.107 rows=50 loops=1)\n>> > Sort Key: sum(bytes)\n>> > -> GroupAggregate (cost=3499863.27..3754872.33 rows=953305\n>> > width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)\n>> > -> Sort (cost=3499863.27..3523695.89 rows=9533049\n>> > width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1)\n>> > Sort Key: ip_src, port_src, ip_dst, port_dst,\n>> > tcp_flags, ip_proto\n>>\n>> You are having to sort and aggregate a large number of rows before you\n>> can get the top 50. That's 9 million rows in this case, width 50 =\n>> 400MB+ sort. That's going to be slow as you are going to have to sort\n>> it on disk unless you bump up sort mem to 500Mb (bad idea). So unless\n>> you have really fast storage for temporary tables it's going to take a\n>> while. About 2.5 minutes you are experiencing at the moment is probably\n>> not too bad.\n>>\n>> I'm sure improvements have been made in the area since 8.1 and if you\n>> are able to upgrade to 8.4 which is also offered by Centos5 now, you\n>> might get benefit there. 
I can't remember the specific benefits, but I\n>> believe sorting speed has improved, your explain analyze will also give\n>> you more information about what's going on with disk/memory sorting.\n>>\n>> > -> Seq Scan on acct_2010_25\n>> > (cost=0.00..352648.10 rows=9533049 width=50) (actual\n>> > time=0.038..50860.391 rows=9494165 loops=1)\n>> > Filter: ((stamp_inserted >= '2010-06-20\n>> > 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n>> > '2010-06-21 10:10:00'::timestamp without time zone))\n>> > Total runtime: 278791.661 ms\n>> > (9 registros)\n>> >\n>> > Another one just summing bytes (still low):\n>> >\n>> > => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n>> > tcp_flags, ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n>> > \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21\n>> > 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags,\n>> > ip_proto LIMIT 50 OFFSET 0;\n>> >\n>> > QUERY PLAN\n>> >\n>> >\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> > Limit (cost=3395202.50..3395213.12 rows=50 width=42) (actual\n>> > time=106261.359..106261.451 rows=50 loops=1)\n>> > -> GroupAggregate (cost=3395202.50..3602225.48 rows=974226\n>> > width=42) (actual time=106261.357..106261.435 rows=50 loops=1)\n>> > -> Sort (cost=3395202.50..3419558.14 rows=9742258 width=42)\n>> > (actual time=106261.107..106261.169 rows=176 loops=1)\n>> > Sort Key: ip_src, port_src, ip_dst, port_dst,\n>> > tcp_flags, ip_proto\n>> > -> Seq Scan on acct_2010_25 (cost=0.00..367529.72\n>> > rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165\n>> loops=1)\n>> > Filter: ((stamp_inserted >= '2010-06-20\n>> > 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n>> > '2010-06-21 10:10:00'::timestamp without time zone))\n>> > Total runtime: 109911.882 ms\n>> > (7 registros)\n>> >\n>> >\n>> > The server has 2 Intel(R) Xeon(R) CPU E5430 @ 2.66GHz and 16GB RAM.\n>> > I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just\n>> > increased checkpoint_segments to 50).\n>>\n>> Checkpoint segments won't help you as the number of segments is about\n>> writing to the database and how fast that can happen.\n>>\n>> >\n>> > What can I change to increase performance?\n>>\n>> Increasing sort-memory (work_mem) will give you speed benefits even\n>> though you are going to disk. I don't know how much spare memory you\n>> have, but trying other values between 8MB and 128MB may be useful just\n>> for the specific query runs. If you can afford 512Mb for each of the\n>> two sorts, go for that, but it's dangerous as mentioned due to the risk\n>> of using more RAM than you have. work_mem allocates that amount of\n>> memory per sort.\n>>\n>> If you are running these queries all the time, a summary table the\n>> produces there reports on a regular basis, maybe daily or even hourly\n>> would be useful. Basically the large amount of information that needs\n>> to be processed and sorted is what's taking all the time here.\n>>\n>> Regards\n>>\n>> Russell\n>>\n>\n>\n>\n> --\n> Sergio Roberto Charpinel Jr.\n>\n\n\n\n-- \nSergio Roberto Charpinel Jr.\n\nHi,One more question about two specifics query behavior: If I add \"AND (ip_dst = x.x.x.x)\", it uses another plan and take a much more time. In both of them, I'm using WHERE clause. 
Why this behavior?\n=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto, bytes, packets, flows FROM \"acct_2010_26\" WHERE \"stamp_inserted\">='2010-06-28 09:07' AND \"stamp_inserted\"<'2010-06-29 08:07' AND (ip_dst = '8.8.8.8') ORDER BY bytes DESC LIMIT 50 OFFSET 0;\n                                                                                                     QUERY PLAN                                                                                                     \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=496332.56..496332.69 rows=50 width=50) (actual time=125390.523..125390.540 rows=50 loops=1)   ->  Sort  (cost=496332.56..496351.35 rows=7517 width=50) (actual time=125390.520..125390.525 rows=50 loops=1)\n         Sort Key: bytes         ->  Index Scan using acct_2010_26_pk on acct_2010_26  (cost=0.00..495848.62 rows=7517 width=50) (actual time=0.589..125385.680 rows=1011 loops=1)               Index Cond: ((stamp_inserted >= '2010-06-28 09:07:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-29 08:07:00'::timestamp without time zone) AND (ip_dst = '8.8.8.8'::inet))\n Total runtime: 125390.711 ms(6 registros)=> explain analyze SELECT ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto, bytes, packets, flows FROM \"acct_2010_26\" WHERE \"stamp_inserted\">='2010-06-28 09:07' AND \"stamp_inserted\"<'2010-06-29 08:07' ORDER BY bytes DESC LIMIT 50 OFFSET 0;\n                                                                             QUERY PLAN                                                                             --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..268.25 rows=50 width=50) (actual time=0.150..70.780 rows=50 loops=1)   ->  Index Scan Backward using ibytes_acct_2010_26 on acct_2010_26  (cost=0.00..133240575.70 rows=24835384 width=50) (actual time=0.149..70.762 rows=50 loops=1)\n         Filter: ((stamp_inserted >= '2010-06-28 09:07:00'::timestamp without time zone) AND (stamp_inserted < '2010-06-29 08:07:00'::timestamp without time zone)) Total runtime: 70.830 ms\n(4 registros)Thanks in advance.2010/6/23 Sergio Charpinel Jr. <[email protected]>\nCraig, Russel,I appreciate your help.Thanks.\n2010/6/22 Russell Smith <[email protected]>\nOn 22/06/10 00:42, Sergio Charpinel Jr. 
wrote:\n> Hi,\n>\n[snip]\n>\n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto,SUM(\"bytes\"),SUM(\"packets\"),SUM(\"flows\") FROM\n> \"acct_2010_25\" WHERE \"stamp_inserted\">='2010-06-20 10:10' AND\n> \"stamp_inserted\"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst,\n> port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET 0;\n>\n>                QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=3998662.81..3998662.94 rows=50 width=50) (actual\n> time=276981.107..276981.133 rows=50 loops=1)\n>    ->  Sort  (cost=3998662.81..4001046.07 rows=953305 width=50)\n> (actual time=276981.105..276981.107 rows=50 loops=1)\n>          Sort Key: sum(bytes)\n>          ->  GroupAggregate  (cost=3499863.27..3754872.33 rows=953305\n> width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1)\n>                ->  Sort  (cost=3499863.27..3523695.89 rows=9533049\n> width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1)\n>                      Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n\nYou are having to sort and aggregate a large number of rows before you\ncan get the top 50.  That's 9 million rows in this case, width 50 =\n400MB+ sort.  That's going to be slow as you are going to have to sort\nit on disk unless you bump up sort mem to 500Mb (bad idea).  So unless\nyou have really fast storage for temporary tables it's going to take a\nwhile.  About 2.5 minutes you are experiencing at the moment is probably\nnot too bad.\n\nI'm sure improvements have been made in the area since 8.1 and if you\nare able to upgrade to 8.4 which is also offered by Centos5 now, you\nmight get benefit there.  
I can't remember the specific benefits, but I\nbelieve sorting speed has improved, your explain analyze will also give\nyou more information about what's going on with disk/memory sorting.\n\n>                      ->  Seq Scan on acct_2010_25\n>  (cost=0.00..352648.10 rows=9533049 width=50) (actual\n> time=0.038..50860.391 rows=9494165 loops=1)\n>                            Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n>  Total runtime: 278791.661 ms\n> (9 registros)\n>\n> Another one just summing bytes (still low):\n>\n> => explain analyze SELECT ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto,SUM(\"bytes\") FROM \"acct_2010_25\" WHERE\n> \"stamp_inserted\">='2010-06-20 10:10' AND \"stamp_inserted\"<'2010-06-21\n> 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags,\n> ip_proto LIMIT 50 OFFSET 0;\n>\n>             QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=3395202.50..3395213.12 rows=50 width=42) (actual\n> time=106261.359..106261.451 rows=50 loops=1)\n>    ->  GroupAggregate  (cost=3395202.50..3602225.48 rows=974226\n> width=42) (actual time=106261.357..106261.435 rows=50 loops=1)\n>          ->  Sort  (cost=3395202.50..3419558.14 rows=9742258 width=42)\n> (actual time=106261.107..106261.169 rows=176 loops=1)\n>                Sort Key: ip_src, port_src, ip_dst, port_dst,\n> tcp_flags, ip_proto\n>                ->  Seq Scan on acct_2010_25  (cost=0.00..367529.72\n> rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1)\n>                      Filter: ((stamp_inserted >= '2010-06-20\n> 10:10:00'::timestamp without time zone) AND (stamp_inserted <\n> '2010-06-21 10:10:00'::timestamp without time zone))\n>  Total runtime: 109911.882 ms\n> (7 registros)\n>\n>\n> The server has 2 Intel(R) Xeon(R) CPU  E5430 @ 2.66GHz and 16GB RAM.\n> I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just\n> increased checkpoint_segments to 50).\n\nCheckpoint segments won't help you as the number of segments is about\nwriting to the database and how fast that can happen.\n\n>\n> What can I change to increase performance?\n\nIncreasing sort-memory (work_mem) will give you speed benefits even\nthough you are going to disk.  I don't know how much spare memory you\nhave, but trying other values between 8MB and 128MB may be useful just\nfor the specific query runs.  If you can afford 512Mb for each of the\ntwo sorts, go for that, but it's dangerous as mentioned due to the risk\nof using more RAM than you have.  work_mem allocates that amount of\nmemory per sort.\n\nIf you are running these queries all the time, a summary table the\nproduces there reports on a regular basis, maybe daily or even hourly\nwould be useful.  
Basically the large amount of information that needs\nto be processed and sorted is what's taking all the time here.\n\nRegards\n\nRussell\n-- Sergio Roberto Charpinel Jr.\n\n-- Sergio Roberto Charpinel Jr.", "msg_date": "Tue, 29 Jun 2010 08:59:37 -0300", "msg_from": "\"Sergio Charpinel Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low perfomance SUM and Group by large databse" }, { "msg_contents": "On Tue, Jun 29, 2010 at 7:59 AM, Sergio Charpinel Jr.\n<[email protected]> wrote:\n> One more question about two specifics query behavior: If I add \"AND (ip_dst\n> = x.x.x.x)\", it uses another plan and take a much more time. In both of\n> them, I'm using WHERE clause. Why this behavior?\n\nWith either query, the planner is choosing to scan backward through\nthe acct_2010_26_pk index to get the rows in descending order by the\n\"bytes\" column. It keeps scanning until it finds 50 rows that match\nthe WHERE clause. With just the critieria on stamp_inserted, matches\nare pretty common, so it doesn't have to scan very far before finding\n50 suitable rows. But when you add the ip_dst = 'x.x.x.x' criterion,\nsuddenly a much smaller percentage of the rows match and so it has to\nread much further into the index before it finds 50 that do.\n\nA second index on just the ip_dst column might help a lot - then it\ncould consider index-scanning for the matching rows and sorting them\nafterwards.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 2 Jul 2010 13:21:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low perfomance SUM and Group by large databse" } ]
[ { "msg_contents": "Hi folks,\n\nis there a general problem with raid10 performance when postgresql is on it?\nWe see very low performance on writes (2-3x slower than on less\nperformant servers). I wonder if it is solely a problem of raid10\nconfiguration, or if it is postgresql's thing.\n\nWould moving WAL dir to separate disk help potentially ?\n\nWe're running centos 5.4, and server config is:\n\nx3550 M2, xeon 4c e5530 2.4ghz , 6GB of ram\ndisks: ibm 300gb 2.5 SAS\n\nraid: serveRAID M5014 SAS/SATA controller\n\nstrip size is the default 128k\n\n\nthanks.\n\n-- \nGJ\n", "msg_date": "Tue, 22 Jun 2010 10:31:08 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "raid10 write performance" }, { "msg_contents": "On 6/22/2010 4:31 AM, Grzegorz Jaśkiewicz wrote:\n> Hi folks,\n>\n> is there a general problem with raid10 performance when postgresql is on it?\n> We see very low performance on writes (2-3x slower than on less\n> performant servers). I wonder if it is solely a problem of raid10\n> configuration, or if it is postgresql's thing.\n> \n\nRAID 10 is the commonly suggested layout for DBs, as its balance of \nperformance and redundancy is good.\nThe question that begs to be asked is what the IO layout is on the other \nservers you're comparing against.\n\n\n> Would moving WAL dir to separate disk help potentially ?\n> \n\nYes it can have a big impact.\nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\nhttp://wiki.postgresql.org/wiki/Performance_Analysis_Tools\n\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\n\n> We're running centos 5.4, and server config is:\n>\n> x3550 M2, xeon 4c e5530 2.4ghz , 6GB of ram\n> disks: ibm 300gb 2.5 SAS\n>\n> raid: serveRAID M5014 SAS/SATA controller\n>\n> strip size is the default 128k\n>\n>\n> thanks.\n>\n> \n", "msg_date": "Tue, 22 Jun 2010 08:40:03 -0500", "msg_from": "Justin Graf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "Justin Graf wrote:\n> On 6/22/2010 4:31 AM, Grzegorz Jaśkiewicz wrote:\n> \n>> Would moving WAL dir to separate disk help potentially ?\n>> \n>> \n>\n> Yes it can have a big impact.\nWAL on a separate spindle will make a HUGE difference in performance. \nTPS rates frequently double OR BETTER with WAL on a dedicated spindle.\n\nStrongly recommended.\n\nBe aware that you must pay CLOSE ATTENTION to your backup strategy if\nWAL is on a different physical disk. Snapshotting the data disk where\nWAL is on a separate spindle and backing it up **WILL NOT WORK** and\n**WILL** result in an non-restoreable backup.\n\nThe manual discusses this but it's easy to miss.... 
don't or you'll get\na NASTY surprise if something goes wrong.....\n\n-- Karl", "msg_date": "Tue, 22 Jun 2010 09:29:53 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "Grzegorz Jaśkiewicz wrote:\n> raid: serveRAID M5014 SAS/SATA controller\n> \n\nDo the \"performant servers\" have a different RAID card? This one has \nterrible performance, and could alone be the source of your issue. The \nServeRAID cards are slow in general, and certainly slow running RAID10.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 22 Jun 2010 10:40:31 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "Of course, no backup strategy is complete without testing a full restore\nonto bare hardware :-)\n\nOn Tue, Jun 22, 2010 at 9:29 AM, Karl Denninger <[email protected]> wrote:\n\n> Justin Graf wrote:\n>\n> On 6/22/2010 4:31 AM, Grzegorz Jaśkiewicz wrote:\n>\n>\n> Would moving WAL dir to separate disk help potentially ?\n>\n>\n>\n> Yes it can have a big impact.\n>\n> WAL on a separate spindle will make a HUGE difference in performance. TPS\n> rates frequently double OR BETTER with WAL on a dedicated spindle.\n>\n> Strongly recommended.\n>\n> Be aware that you must pay CLOSE ATTENTION to your backup strategy if WAL\n> is on a different physical disk. Snapshotting the data disk where WAL is on\n> a separate spindle and backing it up **WILL NOT WORK** and **WILL** result\n> in an non-restoreable backup.\n>\n> The manual discusses this but it's easy to miss.... don't or you'll get a\n> NASTY surprise if something goes wrong.....\n>\n> -- Karl\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nOf course, no backup strategy is complete without testing a full restore onto bare hardware :-)On Tue, Jun 22, 2010 at 9:29 AM, Karl Denninger <[email protected]> wrote:\n\n\nJustin Graf wrote:\n\nOn 6/22/2010 4:31 AM, Grzegorz Jaśkiewicz wrote:\n \n\nWould moving WAL dir to separate disk help potentially ?\n \n \n\nYes it can have a big impact.\n\nWAL on a separate spindle will make a HUGE difference in performance. \nTPS rates frequently double OR BETTER with WAL on a dedicated spindle.\n\nStrongly recommended.\n\nBe aware that you must pay CLOSE ATTENTION to your backup strategy if\nWAL is on a different physical disk.  Snapshotting the data disk where\nWAL is on a separate spindle and backing it up **WILL NOT WORK** and\n**WILL** result in an non-restoreable backup.\n\nThe manual discusses this but it's easy to miss.... 
don't or you'll get\na NASTY surprise if something goes wrong.....\n\n-- Karl\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 22 Jun 2010 11:01:28 -0500", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "\r\nOn Jun 22, 2010, at 7:29 AM, Karl Denninger wrote:\r\n\r\n> Justin Graf wrote:\r\n>> \r\n>> On 6/22/2010 4:31 AM, Grzegorz Jaśkiewicz wrote:\r\n>> \r\n>>> Would moving WAL dir to separate disk help potentially ?\r\n>>> \r\n>>> \r\n>> \r\n>> Yes it can have a big impact.\r\n> WAL on a separate spindle will make a HUGE difference in performance. TPS rates frequently double OR BETTER with WAL on a dedicated spindle.\r\n> \r\n> Strongly recommended.\r\n> \r\n\r\nMost of the performance increase on Linux, if your RAID card has a data-safe write-back cache (battery or solid state cache persistence in case of power failure), is a separate file system, not separate spindle. Especially if ext3 is used with the default \"ordered\" journal, it is absolutely caustic to performance to have WAL and data on the same file system. \r\n\r\nThe whole 'separate spindle' thing is important if you don't have a good caching raid card, otherwise it doesn't matter that much.\r\n\r\n\r\n> Be aware that you must pay CLOSE ATTENTION to your backup strategy if WAL is on a different physical disk. Snapshotting the data disk where WAL is on a separate spindle and backing it up **WILL NOT WORK** and **WILL** result in an non-restoreable backup.\r\n> \r\n> The manual discusses this but it's easy to miss.... don't or you'll get a NASTY surprise if something goes wrong.....\r\n> \r\n> -- Karl\r\n> <karl.vcf><ATT00001..txt>\r\n\r\n", "msg_date": "Tue, 22 Jun 2010 10:19:19 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "On 06/22/10 16:40, Greg Smith wrote:\n> Grzegorz Jaśkiewicz wrote:\n>> raid: serveRAID M5014 SAS/SATA controller\n>> \n> \n> Do the \"performant servers\" have a different RAID card? This one has\n> terrible performance, and could alone be the source of your issue. 
The\n> ServeRAID cards are slow in general, and certainly slow running RAID10.\n\nWhat are some good RAID10 cards nowadays?\n\nOn the other hand, RAID10 is simple enough that soft-RAID\nimplementations should be more than adequate - any ideas why a dedicated\ncard has it \"slow\"?\n\n", "msg_date": "Wed, 23 Jun 2010 12:06:56 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "* Ivan Voras:\n\n> On the other hand, RAID10 is simple enough that soft-RAID\n> implementations should be more than adequate - any ideas why a dedicated\n> card has it \"slow\"?\n\nBarrier support on RAID10 seems to require some smallish amount of\nnon-volatile storage which supports a high number of write operations\nper second, so a software-only solution might not be available.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Wed, 23 Jun 2010 12:00:21 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "On 06/23/10 14:00, Florian Weimer wrote:\n> * Ivan Voras:\n> \n>> On the other hand, RAID10 is simple enough that soft-RAID\n>> implementations should be more than adequate - any ideas why a dedicated\n>> card has it \"slow\"?\n> \n> Barrier support on RAID10 seems to require some smallish amount of\n> non-volatile storage which supports a high number of write operations\n> per second, so a software-only solution might not be available.\n\nIf I understand you correctly, this can be said in general for all\nspinning-disk usage and is not specific to RAID10. (And in the case of\nhigh, constant TPS, no amount of NVRAM will help you).\n\n", "msg_date": "Wed, 23 Jun 2010 14:25:05 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "On Wed, 23 Jun 2010, Ivan Voras wrote:\n> On 06/23/10 14:00, Florian Weimer wrote:\n>> Barrier support on RAID10 seems to require some smallish amount of\n>> non-volatile storage which supports a high number of write operations\n>> per second, so a software-only solution might not be available.\n>\n> If I understand you correctly, this can be said in general for all\n> spinning-disk usage and is not specific to RAID10. (And in the case of\n> high, constant TPS, no amount of NVRAM will help you).\n\nNo. Write barriers work fine with a single disc, assuming it is set up \ncorrectly. The barrier is a command telling the disc to make sure that one \npiece of data is safe before starting to write another piece of data.\n\nHowever, as soon as you have multiple discs, the individual discs do not \nhave a way of communicating with each other to make sure that the first \npiece of data is written before the other. 
That's why you need a little \nbit of non-volatile storage to mediate that to properly support barriers.\n\nOf course, from a performance point of view, yes, you need some NVRAM on \nany kind of spinning storage to maintain high commit rates.\n\nMatthew\n\n-- \n I wouldn't be so paranoid if you weren't all out to get me!!\n", "msg_date": "Wed, 23 Jun 2010 13:46:13 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "On Wed, Jun 23, 2010 at 8:25 AM, Ivan Voras <[email protected]> wrote:\n> On 06/23/10 14:00, Florian Weimer wrote:\n>> * Ivan Voras:\n>>\n>>> On the other hand, RAID10 is simple enough that soft-RAID\n>>> implementations should be more than adequate - any ideas why a dedicated\n>>> card has it \"slow\"?\n>>\n>> Barrier support on RAID10 seems to require some smallish amount of\n>> non-volatile storage which supports a high number of write operations\n>> per second, so a software-only solution might not be available.\n>\n> If I understand you correctly, this can be said in general for all\n> spinning-disk usage and is not specific to RAID10. (And in the case of\n> high, constant TPS, no amount of NVRAM will help you).\n\nNot entirely true. Let's say you have enough battery backed cache to\nhold 10,000 transaction writes in memory at once. The RAID controller\ncan now re-order those writes so that they go from one side of the\ndisk to the other, instead of randomly all over the place. That will\nmost certainly help improve your throughput.\n", "msg_date": "Wed, 23 Jun 2010 08:54:27 -0400", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" }, { "msg_contents": "On Wed, Jun 23, 2010 at 6:06 AM, Ivan Voras <[email protected]> wrote:\n> On 06/22/10 16:40, Greg Smith wrote:\n>> Grzegorz Jaśkiewicz wrote:\n>>> raid: serveRAID M5014 SAS/SATA controller\n>>>\n>>\n>> Do the \"performant servers\" have a different RAID card?  This one has\n>> terrible performance, and could alone be the source of your issue.  The\n>> ServeRAID cards are slow in general, and certainly slow running RAID10.\n>\n> What are some good RAID10 cards nowadays?\n\nLSI, Areca, 3Ware (now LSI I believe)\n\n> On the other hand, RAID10 is simple enough that soft-RAID\n> implementations should be more than adequate - any ideas why a dedicated\n> card has it \"slow\"?\n\nThis is mostly a problem with some older cards that focused on RAID-5\nperformance, and RAID-10 was an afterthought. On many of these cards\n(older PERCs for instance) it was faster to either use a bunch of\nRAID-1 pairs in hardware with RAID-0 in software on top, or put the\nthing into JBOD mode and do it all in software.\n", "msg_date": "Wed, 23 Jun 2010 08:56:39 -0400", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: raid10 write performance" } ]
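The backup warning in the thread above comes down to using PostgreSQL's online base-backup protocol rather than snapshotting only the data volume once WAL lives on a different device. A minimal sketch of that sequence, assuming an 8.x server with archive_mode already enabled; the backup label is arbitrary and the filesystem copy step is whatever tool is already in use:

SELECT pg_start_backup('nightly_base');  -- forces a checkpoint and marks the start of the backup
-- copy the data directory (and any tablespaces) at the filesystem level here
SELECT pg_stop_backup();                 -- ends the backup; the WAL archived between the two calls is needed for restore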
[ { "msg_contents": "v. 8.4.3 \n\nI have a table that has several indexes, one of which the table is clustered on. If I do an ALTER TABLE Foo ADD COLUMN bar integer not null default -1;\n\nIt re-writes the whole table.\n\n* Does it adhere to the CLUSTER property of the table and write the new version clustered?\n* Does it properly write it with the FILLFACTOR setting?\n* Are all the indexes re-created too, or are they bloated and need a REINDEX?\n\nhttp://www.postgresql.org/docs/8.4/static/sql-altertable.html \n does not seem to answer the above, it mentions the conditions that cause a rewrite but does not say what the state is after the rewrite with respect to CLUSTER, FILLFACTOR, and index bloat.\n\nThanks in advance!\n\n", "msg_date": "Tue, 22 Jun 2010 10:30:35 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER Table and CLUSTER does adding a new column rewrite clustered?\n\t(8.4.3)" }, { "msg_contents": "Scott Carey wrote:\n> v. 8.4.3 \n> \n> I have a table that has several indexes, one of which the table is\n> clustered on. If I do an ALTER TABLE Foo ADD COLUMN bar integer not\n> null default -1;\n> \n> It re-writes the whole table.\n\nAll good questions:\n\n> * Does it adhere to the CLUSTER property of the table and write the new\n> version clustered?\n\nThe new table is the exact same heap ordering as the old table; it does\nnot refresh the clustering if the table has become unclustered.\n\n> * Does it properly write it with the FILLFACTOR setting?\n\nYes, inserts are used to populate the new table, and inserts honor\nFILLFACTOR.\n\n> * Are all the indexes re-created too, or are they bloated and need a REINDEX?\n\nThey are recreated.\n\n> http://www.postgresql.org/docs/8.4/static/sql-altertable.html \n> does not seem to answer the above, it mentions the conditions that\n> cause a rewrite but does not say what the state is after the rewrite\n> with respect to CLUSTER, FILLFACTOR, and index bloat.\n\nI have added a documentation patch to mention the indexes are rebuilt; \napplied patch attached.\n\nThe gory details can be found in src/backend/commands/tablecmds.c.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +", "msg_date": "Thu, 24 Jun 2010 11:03:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER Table and CLUSTER does adding a new\n\tcolumn rewrite clustered? (8.4.3)" } ]
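A compact illustration of the behaviour described above, in 8.4 syntax; the table and column names are placeholders only. The ADD COLUMN rewrite keeps the heap in whatever order it already had, so a CLUSTER afterwards is only worthwhile if the table had drifted from its clustered order, and an ANALYZE refreshes planner statistics after the rewrite:

ALTER TABLE foo ADD COLUMN bar integer NOT NULL DEFAULT -1;  -- rewrites the heap in its current order and rebuilds all indexes
CLUSTER foo;  -- optional: re-sort on the previously chosen cluster index
ANALYZE foo;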
[ { "msg_contents": "This query seems unreasonable slow on a well-indexed table (13 million\nrows). Separate indexes are present on guardid_id , from_num and\ntargetprt columns.\nThe table was analyzed with a default stats target of 600.\nPostgres 8.1.9 on 2 cpu quad core 5430 with 32G RAM (work_mem=502400)\n 6 x 450G 15K disks on a RAID 10 setup. (RHEL 5 )\n\nThe table size is 3.6GB (table + indexes)\n\nexplain analyze select 1 from mydev_tr_hr_dimension_2010_06_13 where\nguardid_id=19 and from_num=184091764 and targetprt=25 limit 1;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..323.36 rows=1 width=0) (actual\ntime=19238.104..19238.104 rows=0 loops=1)\n -> Index Scan using mydev_tr_hr_dimension_2010_06_13_from_num on\nmydev_tr_hr_dimension_2010_06_13 (cost=0.00..26515.46 rows=82\nwidth=0) (actual time=19238.103..19238.103 rows=0 loops=1)\n Index Cond: (from_num = 184091764)\n Filter: ((guardid_id = 19) AND (targetprt = 25))\n Total runtime: 19238.126 ms\n", "msg_date": "Tue, 22 Jun 2010 14:44:39 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "slow index lookup" }, { "msg_contents": "Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n> This query seems unreasonable slow on a well-indexed table (13 million\n> rows). Separate indexes are present on guardid_id , from_num and\n> targetprt columns.\n\nMaybe you need to vacuum or reindex?\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 22 Jun 2010 19:44:15 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow index lookup" }, { "msg_contents": "i have several partitions like this (similar size ...similar data\ndistribution)..these partitions are only \"inserted\"..never updated.\nWhy would I need to vacuum..\n\nI can reindex..just curious what can cause the index to go out of whack.\n\nOn Tue, Jun 22, 2010 at 4:44 PM, Alvaro Herrera\n<[email protected]> wrote:\n> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n>> This query seems unreasonable slow on a well-indexed table (13 million\n>> rows). Separate indexes are present on guardid_id , from_num and\n>> targetprt columns.\n>\n> Maybe you need to vacuum or reindex?\n>\n> --\n> Álvaro Herrera <[email protected]>\n> The PostgreSQL Company - Command Prompt, Inc.\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n", "msg_date": "Tue, 22 Jun 2010 18:00:29 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow index lookup" }, { "msg_contents": "On Tue, 2010-06-22 at 18:00 -0700, Anj Adu wrote:\n> i have several partitions like this (similar size ...similar data\n> distribution)..these partitions are only \"inserted\"..never updated.\n> Why would I need to vacuum..\n> \n\nAn explain analyze is what is in order for further diagnosis.\n\nJD\n\n\n> I can reindex..just curious what can cause the index to go out of whack.\n> \n> On Tue, Jun 22, 2010 at 4:44 PM, Alvaro Herrera\n> <[email protected]> wrote:\n> > Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n> >> This query seems unreasonable slow on a well-indexed table (13 million\n> >> rows). 
Separate indexes are present on guardid_id , from_num and\n> >> targetprt columns.\n> >\n> > Maybe you need to vacuum or reindex?\n> >\n> > --\n> > Álvaro Herrera <[email protected]>\n> > The PostgreSQL Company - Command Prompt, Inc.\n> > PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> >\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\n\n", "msg_date": "Tue, 22 Jun 2010 18:10:15 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow index lookup" }, { "msg_contents": "I did post the explain analyze..can you please clarify\n\nOn Tue, Jun 22, 2010 at 6:10 PM, Joshua D. Drake <[email protected]> wrote:\n> On Tue, 2010-06-22 at 18:00 -0700, Anj Adu wrote:\n>> i have several partitions like this (similar size ...similar data\n>> distribution)..these partitions are only \"inserted\"..never updated.\n>> Why would I need to vacuum..\n>>\n>\n> An explain analyze is what is in order for further diagnosis.\n>\n> JD\n>\n>\n>> I can reindex..just curious what can cause the index to go out of whack.\n>>\n>> On Tue, Jun 22, 2010 at 4:44 PM, Alvaro Herrera\n>> <[email protected]> wrote:\n>> > Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n>> >> This query seems unreasonable slow on a well-indexed table (13 million\n>> >> rows). Separate indexes are present on guardid_id , from_num and\n>> >> targetprt columns.\n>> >\n>> > Maybe you need to vacuum or reindex?\n>> >\n>> > --\n>> > Álvaro Herrera <[email protected]>\n>> > The PostgreSQL Company - Command Prompt, Inc.\n>> > PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>> >\n>>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n>\n>\n", "msg_date": "Tue, 22 Jun 2010 18:21:46 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow index lookup" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n>> This query seems unreasonable slow on a well-indexed table (13 million\n>> rows). Separate indexes are present on guardid_id , from_num and\n>> targetprt columns.\n\n> Maybe you need to vacuum or reindex?\n\nRethinking the set of indexes is probably a more appropriate suggestion.\nSeparate indexes aren't usefully combinable for a case like this --- in\nprinciple the thing could do a BitmapAnd, but the startup time would be\npretty horrid, and the LIMIT 1 is discouraging it from trying that.\nIf this is an important case to optimize then you need a 3-column index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Jun 2010 22:01:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow index lookup " }, { "msg_contents": "Appears to have helped with the combination index. I'll need to\neliminate caching effects before making sure its the right choice.\n\nThanks for the suggestion.\n\nOn Tue, Jun 22, 2010 at 7:01 PM, Tom Lane <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n>>> This query seems unreasonable slow on a well-indexed table (13 million\n>>> rows). 
Separate indexes are present on guardid_id , from_num and\n>>> targetprt columns.\n>\n>> Maybe you need to vacuum or reindex?\n>\n> Rethinking the set of indexes is probably a more appropriate suggestion.\n> Separate indexes aren't usefully combinable for a case like this --- in\n> principle the thing could do a BitmapAnd, but the startup time would be\n> pretty horrid, and the LIMIT 1 is discouraging it from trying that.\n> If this is an important case to optimize then you need a 3-column index.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Tue, 22 Jun 2010 20:05:20 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow index lookup" }, { "msg_contents": "The combination index works great. Would adding the combination index\nguarantee that the optimizer will choose that index for these kind of\nqueries involving the columns in the combination. I verified a couple\nof times and it picked the right index. Just wanted to make sure it\ndoes that consistently.\n\nOn Tue, Jun 22, 2010 at 7:01 PM, Tom Lane <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:\n>>> This query seems unreasonable slow on a well-indexed table (13 million\n>>> rows). Separate indexes are present on guardid_id , from_num and\n>>> targetprt columns.\n>\n>> Maybe you need to vacuum or reindex?\n>\n> Rethinking the set of indexes is probably a more appropriate suggestion.\n> Separate indexes aren't usefully combinable for a case like this --- in\n> principle the thing could do a BitmapAnd, but the startup time would be\n> pretty horrid, and the LIMIT 1 is discouraging it from trying that.\n> If this is an important case to optimize then you need a 3-column index.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Wed, 23 Jun 2010 09:59:46 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow index lookup" }, { "msg_contents": "Anj Adu <[email protected]> wrote:\n \n> The combination index works great. Would adding the combination\n> index guarantee that the optimizer will choose that index for\n> these kind of queries involving the columns in the combination. I\n> verified a couple of times and it picked the right index. Just\n> wanted to make sure it does that consistently.\n \nIt's cost based -- as long as it thinks that approach will be\nfaster, it will use it.\n \n-Kevin\n", "msg_date": "Wed, 23 Jun 2010 12:08:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow index lookup" } ]
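A sketch of the three-column index suggested above, spelled out against the column names from the thread; the index name is made up. On the 8.1 server in question a plain CREATE INDEX (which blocks writes while it builds) is the available form, while 8.2 and later could use CREATE INDEX CONCURRENTLY:

CREATE INDEX mydev_tr_hr_dim_2010_06_13_guard_from_prt
    ON mydev_tr_hr_dimension_2010_06_13 (guardid_id, from_num, targetprt);
EXPLAIN ANALYZE
SELECT 1 FROM mydev_tr_hr_dimension_2010_06_13
 WHERE guardid_id = 19 AND from_num = 184091764 AND targetprt = 25 LIMIT 1;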
[ { "msg_contents": "PasteBin for the vmstat output\nhttp://pastebin.com/mpHCW9gt\n\nOn Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah\n<[email protected]> wrote:\n> Dear List ,\n>\n> I observe that my postgresql (ver 8.4.2) dedicated server has turned cpu\n> bound and there is a high load average in the server > 50 usually.\n> The server has\n> 2 Quad Core CPUs already and there are 6 or 8 drives in raid 10 , there is\n> negligable i/o wait. There is 32GB ram and no swapping.\n>\n> When i strace processes at random i see lot of lseek (XXX,0,SEEK_END) calls\n> which i feel were not that frequent before. can any pointers be got\n> for investigating\n> the high cpu usage by postgresql processes.\n>\n> attached is strace out in strace.txt file (sorry if that was not\n> allowed, i am not sure)\n>\n> vmstat output\n>\n> # vmstat 10\n>\n> output.\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu-----------\n> r b swpd free buff cache si so bi bo in\n> cs us sy id wa st\n> 13 2 150876 2694612 4804 24915540 1 0 443 203 0 0\n> 50 6 39 5 0\n> 17 1 150868 3580472 4824 24931312 1 0 1395 803 12951 15403\n> 63 11 22 4 0\n> 20 5 150868 3369892 4840 24938180 0 0 1948 1827 12691 14542\n> 79 13 6 2 0\n> 8 0 150868 2771920 4856 24968016 0 0 2680 1254 13890 14329\n> 72 11 11 5 0\n> 18 2 150864 2454008 4872 24995640 0 0 2530 923 13968 15434\n> 63 10 20 7 0\n> 45 3 150860 2367760 4888 25011756 0 0 1338 1327 13203 14580\n> 71 11 16 3 0\n> 5 6 150860 1949212 4904 25033052 0 0 1727 1981 13960 15079\n> 73 11 12 5 0\n> 27 0 150860 1723104 4920 25049588 0 0 1484 794 13199 13676\n> 73 10 13 3 0\n> 28 6 150860 1503888 4928 25069724 0 0 1650 981 12625 14867\n> 75 9 14 2 0\n> 8 3 150860 1807744 4944 25087404 0 0 1521 791 13110 15421\n> 69 9 18 4 0\n>\n> Rajesh Kumar Mallah.\n> Avid/Loyal-PostgreSQL user for (past 10 years)\n>\n", "msg_date": "Wed, 23 Jun 2010 20:24:42 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> wrote:\n> PasteBin for the vmstat output\n> http://pastebin.com/mpHCW9gt\n> \n> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah\n> <[email protected]> wrote:\n>> Dear List ,\n>>\n>> I observe that my postgresql (ver 8.4.2) dedicated server has\n>> turned cpu bound and there is a high load average in the server >\n>> 50 usually.\n>> The server has 2 Quad Core CPUs already and there are 6 or 8\n>> drives in raid 10 , there is negligable i/o wait. There is 32GB\n>> ram and no swapping.\n>>\n>> When i strace processes at random i see lot of lseek\n>> (XXX,0,SEEK_END) calls which i feel were not that frequent\n>> before. can any pointers be got for investigating the high cpu\n>> usage by postgresql processes.\n \nI'm not clear on what problem you are experiencing. Using a lot of\nyour hardware's capacity isn't a problem in itself -- are you\ngetting poor response time? Poor throughput? Some other problem? \nIs it continuous, or only when certain queries run?\n \nOne thing that is apparent is that you might want to use a\nconnection pool, or if you're already using one you might want to\nconfigure it to reduce the maximum number of active queries. 
With\neight cores and eight drives, your best throughput is going to be at\nsomewhere around 24 active connections, and you appear to be going\nto at least twice that.\n \nIf you can provide a copy of your postgresql.conf settings (without\ncomments) and an EXPLAIN ANALYZE of a slow query, along with the\nschema information for the tables used by the query, you'll probably\nget useful advice on how to adjust your configuration, indexing, or\nquery code to improve performance.\n \n-Kevin\n", "msg_date": "Wed, 23 Jun 2010 10:29:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." } ]
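A quick way to see how far past the suggested ~24 active connections a server actually runs is to count the non-idle backends; this assumes the 8.4 pg_stat_activity layout, where idle sessions report '<IDLE>' as their current query:

SELECT count(*) AS active_backends
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>';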
[ { "msg_contents": "On 6/23/10, Kevin Grittner <[email protected]> wrote:\n> Rajesh Kumar Mallah <[email protected]> wrote:\n>> PasteBin for the vmstat output\n>> http://pastebin.com/mpHCW9gt\n>>\n>> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah\n>> <[email protected]> wrote:\n>>> Dear List ,\n>>>\n>>> I observe that my postgresql (ver 8.4.2) dedicated server has\n>>> turned cpu bound and there is a high load average in the server >\n>>> 50 usually.\n>>> The server has 2 Quad Core CPUs already and there are 6 or 8\n>>> drives in raid 10 , there is negligable i/o wait. There is 32GB\n>>> ram and no swapping.\n>>>\n>>> When i strace processes at random i see lot of lseek\n>>> (XXX,0,SEEK_END) calls which i feel were not that frequent\n>>> before. can any pointers be got for investigating the high cpu\n>>> usage by postgresql processes.\n>\n> I'm not clear on what problem you are experiencing. Using a lot of\n> your hardware's capacity isn't a problem in itself -- are you\n> getting poor response time? Poor throughput? Some other problem?\n> Is it continuous, or only when certain queries run?\n>\n> One thing that is apparent is that you might want to use a\n> connection pool, or if you're already using one you might want to\n> configure it to reduce the maximum number of active queries. With\n> eight cores and eight drives, your best throughput is going to be at\n> somewhere around 24 active connections, and you appear to be going\n> to at least twice that.\n>\n> If you can provide a copy of your postgresql.conf settings (without\n> comments) and an EXPLAIN ANALYZE of a slow query, along with the\n> schema information for the tables used by the query, you'll probably\n> get useful advice on how to adjust your configuration, indexing, or\n> query code to improve performance.\n>\n> -Kevin\n>\n\n-- \nSent from Gmail for mobile | mobile.google.com\n", "msg_date": "Wed, 23 Jun 2010 23:00:01 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu bound postgresql setup. Firstly many thanks for responding. I\n\tam concerned because the load averages have increased and users\n\tcomplaining\n\tof slowness. I do not change settings frequenly. I was curious if there\n\tis any half dead component in th" }, { "msg_contents": "Your response somehow landed in the subject line, apparently\ntruncated. I'll extract that to the message body and reply to what\nmade it through.\n \nRajesh Kumar Mallah <[email protected]> wrote:\n \n> Firstly many thanks for responding. I am concerned because the\n> load averages have increased and users complaining of slowness.\n \nIf performance has gotten worse, then something has changed. It\nwould be helpful to know what. More users? New software? Database\ngrowth? Database bloat? (etc.)\n \n> I do not change settings frequenly.\n \nThat doesn't mean your current settings can't be changed to make\nthings better.\n \n> I was curious if there is any half dead component in th\n \nHave you reviewed what shows up if you run (as a database\nsuperuser)?:\n \n select * from pg_stat_activity;\n \nYou might want to review this page:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Wed, 23 Jun 2010 12:43:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Dear List,\n\n1. 
It was found that too many stray queries were getting generated\nfrom rouge users and bots\n we controlled using some manual methods.\n\n2. We have made application changes and some significant changes have been done.\n\n3. we use xfs and our controller has BBU , we changed barriers=1 to\nbarriers=0 as\n i learnt that having barriers=1 on xfs and fsync as the sync\nmethod, the advantage\n of BBU is lost unless barriers is = 0 (correct me if my\nunderstanding is wrong)\n\n4. We had implemented partitioning using exclusion constraints ,\nparent relnship\n was removed from quite a lot of old partition tables.\n\nour postgresql.conf\n\n--------------------------------------\n# cat postgresql.conf | grep -v \"^\\s*#\" | grep -v \"^\\s*$\"\n\nlisten_addresses = '*' # what IP address(es) to listen on;\nport = 5432 # (change requires restart)\nmax_connections = 300 # (change requires restart)\nshared_buffers = 10GB # min 128kB\nwork_mem = 4GB # min 64kB\nfsync = on # turns forced synchronization on or off\nsynchronous_commit = on # immediate fsync at commit\ncheckpoint_segments = 30 # in logfile segments, min 1, 16MB each\narchive_mode = on # allows archiving to be done\narchive_command = '/opt/scripts/archive_wal.sh %p %f '\narchive_timeout = 600 # force a logfile segment switch after this\neffective_cache_size = 18GB\nconstraint_exclusion = on # on, off, or partition\nlogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = '/var/log/postgresql' # directory where log\nfiles are written,\nlog_filename = 'postgresql.log' # log file name pattern,\nlog_truncate_on_rotation = on # If on, an existing log file of the\nlog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_error_verbosity = verbose # terse, default, or verbose messages\nlog_min_duration_statement = 5000 # -1 is disabled, 0 logs all statements\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system\nerror message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\nadd_missing_from = on\ncustom_variable_classes = 'general' # list of custom\nvariable class names\ngeneral.report_level = ''\ngeneral.disable_audittrail2 = ''\ngeneral.employee=''\n\n\nAlso i would like to apologize that some of the discussions on this problem\n inadvertently became private between me & kevin.\n\n\nOn Thu, Jun 24, 2010 at 12:10 AM, Rajesh Kumar Mallah\n<[email protected]> wrote:\n> It was nice to go through the interesting posting guidelines. i shall\n> be analyzing the slow queries more objectively tomorrow during the\n> peak hours. I really hope it sould be possible to track down the\n> problem.\n>\n> On 6/23/10, Kevin Grittner <[email protected]> wrote:\n>> Rajesh Kumar Mallah <[email protected]> wrote:\n>>\n>>> did you suggest at some point that number of backend per core\n>>> should be preferebly 3 ?\n>>\n>> I've found the number of *active* backends is optimal around (2 *\n>> cores) + spindles. You said you had eight cores and eight or ten\n>> spindles, so I figure a connection pool limited to somewhere around\n>> 24 active connections is ideal. 
(Depending on how you set up your\n>> pool, you may need a higher total number of connections to keep 24\n>> active.)\n>>\n>> -Kevin\n>>\n>\n> --\n> Sent from Gmail for mobile | mobile.google.com\n>\n", "msg_date": "Thu, 24 Jun 2010 20:26:02 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "I'm not clear whether you still have a problem, or whether the\nchanges you mention solved your issues. I'll comment on potential\nissues that leap out at me.\n \nRajesh Kumar Mallah <[email protected]> wrote:\n \n> 3. we use xfs and our controller has BBU , we changed barriers=1\n> to barriers=0 as i learnt that having barriers=1 on xfs and fsync\n> as the sync method, the advantage of BBU is lost unless barriers\n> is = 0 (correct me if my understanding is wrong)\n \nWe use noatime,nobarrier in /etc/fstab. I'm not sure where you're\nsetting that, but if you have a controller with BBU, you want to set\nit to whichever disables write barriers.\n \n> max_connections = 300\n \nAs I've previously mentioned, I would use a connection pool, in\nwhich case this wouldn't need to be that high.\n \n> work_mem = 4GB\n \nThat's pretty high. That much memory can be used by each active\nconnection, potentially for each of several parts of the active\nquery on each connection. You should probably set this much lower\nin postgresql.conf and boost it if necessary for individual queries.\n \n> effective_cache_size = 18GB\n \nWith 32GB RAM on the machine, I would probably set this higher --\nsomewhere in the 24GB to 30GB range, unless you have specific\nreasons to believe otherwise. It's not that critical, though.\n \n> add_missing_from = on\n \nWhy? There has been discussion of eliminating this option -- do you\nhave queries which rely on the non-standard syntax this enables?\n \n> Also i would like to apologize that some of the discussions on\n> this problem inadvertently became private between me & kevin.\n \nOops. I failed to notice that. Thanks for bringing it back to the\nlist. (It's definitely in your best interest to keep it in front of\nall the other folks here, some of whom regularly catch things I miss\nor get wrong.)\n \nIf you still do have slow queries, please follow up with details.\n \n-Kevin\n", "msg_date": "Thu, 24 Jun 2010 10:27:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner\n<[email protected]> wrote:\n> I'm not clear whether you still have a problem, or whether the\n> changes you mention solved your issues. I'll comment on potential\n> issues that leap out at me.\n\nIt shall require more observation to know if the \"problem\" is solved.\nmy \"problem\" was high load average in the server . We find that\nwhen ldavg is between 10-20 responses of applications were acceptable\nldavg > 40 makes things slower.\n\nWhat prompted me to post to list is that the server transitioned from\nbeing IO bound to CPU bound and 90% of syscalls being\nlseek(XXX, 0, SEEK_END) = YYYYYYY\n\n>\n> Rajesh Kumar Mallah <[email protected]> wrote:\n>\n>> 3. we use xfs and our controller has BBU , we changed barriers=1\n>> to barriers=0 as i learnt that having barriers=1 on xfs and fsync\n>> as the sync method, the advantage of BBU is lost unless barriers\n>> is = 0 (correct me if my understanding is wrong)\n>\n> We use noatime,nobarrier in /etc/fstab. 
I'm not sure where you're\n> setting that, but if you have a controller with BBU, you want to set\n> it to whichever disables write barriers.\n\nas per suggestion in discussions on some other thread I set it\nin /etc/fstab.\n\n>\n>> max_connections = 300\n>\n> As I've previously mentioned, I would use a connection pool, in\n> which case this wouldn't need to be that high.\n\nWe do use connection pooling provided to mod_perl server\nvia Apache::DBI::Cache. If i reduce this i *get* \"too many\nconnections from non-superuser ... \" error. Will pgpool - I/II\nstill applicable in this scenario ?\n\n\n>\n>> work_mem = 4GB\n>\n> That's pretty high. That much memory can be used by each active\n> connection, potentially for each of several parts of the active\n> query on each connection. You should probably set this much lower\n> in postgresql.conf and boost it if necessary for individual queries.\n\nhmmm.. it was 8GB for many months !\n\ni shall reduce it further, but will it not result in usage of too many\ntemp files\nand saturate i/o?\n\n\n\n>\n>> effective_cache_size = 18GB\n>\n> With 32GB RAM on the machine, I would probably set this higher --\n> somewhere in the 24GB to 30GB range, unless you have specific\n> reasons to believe otherwise. It's not that critical, though.\n\ni do not remember well but there is a system view that (i think)\nguides at what stage the marginal returns of increasing it\nstarts disappearing , i had set it a few years back.\n\n\n>\n>> add_missing_from = on\n>\n> Why? There has been discussion of eliminating this option -- do you\n> have queries which rely on the non-standard syntax this enables?\n\nunfortunately yes.\n\n>\n>> Also i would like to apologize that some of the discussions on\n>> this problem inadvertently became private between me & kevin.\n>\n> Oops. I failed to notice that. Thanks for bringing it back to the\n> list. (It's definitely in your best interest to keep it in front of\n> all the other folks here, some of whom regularly catch things I miss\n> or get wrong.)\n>\n> If you still do have slow queries, please follow up with details.\n\n\nI have now set log_min_duration_statement = 5000\nand there are few queries that come to logs.\n\nplease comment on the connection pooling aspect.\n\nWarm Regards\nRajesh Kumar Mallah.\n\n>\n> -Kevin\n>\n", "msg_date": "Thu, 24 Jun 2010 22:55:32 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": ">i do not remember well but there is a system view that (i think)\n>guides at what stage the marginal returns of increasing it\n>starts disappearing , i had set it a few years back.\n\nSorry the above comment was regarding setting shared_buffers\nnot effective_cache_size.\n\n\n\nOn Thu, Jun 24, 2010 at 10:55 PM, Rajesh Kumar Mallah\n<[email protected]> wrote:\n> On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> I'm not clear whether you still have a problem, or whether the\n>> changes you mention solved your issues. I'll comment on potential\n>> issues that leap out at me.\n>\n> It shall require more observation to know if the \"problem\" is solved.\n> my \"problem\" was high load average in the server . 
We find that\n> when ldavg is between 10-20 responses of applications were acceptable\n> ldavg > 40 makes things slower.\n>\n> What prompted me to post to list is that the server transitioned from\n> being IO bound to CPU bound and 90% of syscalls being\n> lseek(XXX, 0, SEEK_END) = YYYYYYY\n>\n>>\n>> Rajesh Kumar Mallah <[email protected]> wrote:\n>>\n>>> 3. we use xfs and our controller has BBU , we changed barriers=1\n>>> to barriers=0 as i learnt that having barriers=1 on xfs and fsync\n>>> as the sync method, the advantage of BBU is lost unless barriers\n>>> is = 0 (correct me if my understanding is wrong)\n>>\n>> We use noatime,nobarrier in /etc/fstab. I'm not sure where you're\n>> setting that, but if you have a controller with BBU, you want to set\n>> it to whichever disables write barriers.\n>\n> as per suggestion in discussions on some other thread I set it\n> in /etc/fstab.\n>\n>>\n>>> max_connections = 300\n>>\n>> As I've previously mentioned, I would use a connection pool, in\n>> which case this wouldn't need to be that high.\n>\n> We do use connection pooling provided to mod_perl server\n> via Apache::DBI::Cache. If i reduce this i *get* \"too many\n> connections from non-superuser ... \" error. Will pgpool - I/II\n> still applicable in this scenario ?\n>\n>\n>>\n>>> work_mem = 4GB\n>>\n>> That's pretty high. That much memory can be used by each active\n>> connection, potentially for each of several parts of the active\n>> query on each connection. You should probably set this much lower\n>> in postgresql.conf and boost it if necessary for individual queries.\n>\n> hmmm.. it was 8GB for many months !\n>\n> i shall reduce it further, but will it not result in usage of too many\n> temp files\n> and saturate i/o?\n>\n>\n>\n>>\n>>> effective_cache_size = 18GB\n>>\n>> With 32GB RAM on the machine, I would probably set this higher --\n>> somewhere in the 24GB to 30GB range, unless you have specific\n>> reasons to believe otherwise. It's not that critical, though.\n>\n> i do not remember well but there is a system view that (i think)\n> guides at what stage the marginal returns of increasing it\n> starts disappearing , i had set it a few years back.\n>\n>\n>>\n>>> add_missing_from = on\n>>\n>> Why? There has been discussion of eliminating this option -- do you\n>> have queries which rely on the non-standard syntax this enables?\n>\n> unfortunately yes.\n>\n>>\n>>> Also i would like to apologize that some of the discussions on\n>>> this problem inadvertently became private between me & kevin.\n>>\n>> Oops. I failed to notice that. Thanks for bringing it back to the\n>> list. (It's definitely in your best interest to keep it in front of\n>> all the other folks here, some of whom regularly catch things I miss\n>> or get wrong.)\n>>\n>> If you still do have slow queries, please follow up with details.\n>\n>\n> I have now set log_min_duration_statement = 5000\n> and there are few queries that come to logs.\n>\n> please comment on the connection pooling aspect.\n>\n> Warm Regards\n> Rajesh Kumar Mallah.\n>\n>>\n>> -Kevin\n>>\n>\n", "msg_date": "Thu, 24 Jun 2010 22:57:23 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpu bound postgresql setup." 
}, { "msg_contents": "Rajesh,\n\nI had a similar situation a few weeks ago whereby performance all of a\nsudden decreased.\nThe one tunable which resolved the problem in my case was increasing the\nnumber of checkpoint segments.\nAfter increasing them, everything went back to its normal state.\n\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Rajesh Kumar Mallah\n> Sent: Thursday, June 24, 2010 11:27 AM\n> To: Kevin Grittner\n> Cc: [email protected]\n> Subject: Re: [PERFORM] cpu bound postgresql setup.\n> \n> >i do not remember well but there is a system view that (i think)\n> >guides at what stage the marginal returns of increasing it\n> >starts disappearing , i had set it a few years back.\n> \n> Sorry the above comment was regarding setting shared_buffers\n> not effective_cache_size.\n> \n> \n> \n> On Thu, Jun 24, 2010 at 10:55 PM, Rajesh Kumar Mallah\n> <[email protected]> wrote:\n> > On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner\n> > <[email protected]> wrote:\n> >> I'm not clear whether you still have a problem, or whether the\n> >> changes you mention solved your issues. I'll comment on potential\n> >> issues that leap out at me.\n> >\n> > It shall require more observation to know if the \"problem\" is\nsolved.\n> > my \"problem\" was high load average in the server . We find that\n> > when ldavg is between 10-20 responses of applications were\nacceptable\n> > ldavg > 40 makes things slower.\n> >\n> > What prompted me to post to list is that the server transitioned\nfrom\n> > being IO bound to CPU bound and 90% of syscalls being\n> > lseek(XXX, 0, SEEK_END) = YYYYYYY\n> >\n> >>\n> >> Rajesh Kumar Mallah <[email protected]> wrote:\n> >>\n> >>> 3. we use xfs and our controller has BBU , we changed barriers=1\n> >>> to barriers=0 as i learnt that having barriers=1 on xfs and fsync\n> >>> as the sync method, the advantage of BBU is lost unless barriers\n> >>> is = 0 (correct me if my understanding is wrong)\n> >>\n> >> We use noatime,nobarrier in /etc/fstab. I'm not sure where you're\n> >> setting that, but if you have a controller with BBU, you want to\nset\n> >> it to whichever disables write barriers.\n> >\n> > as per suggestion in discussions on some other thread I set it\n> > in /etc/fstab.\n> >\n> >>\n> >>> max_connections = 300\n> >>\n> >> As I've previously mentioned, I would use a connection pool, in\n> >> which case this wouldn't need to be that high.\n> >\n> > We do use connection pooling provided to mod_perl server\n> > via Apache::DBI::Cache. If i reduce this i *get* \"too many\n> > connections from non-superuser ... \" error. Will pgpool - I/II\n> > still applicable in this scenario ?\n> >\n> >\n> >>\n> >>> work_mem = 4GB\n> >>\n> >> That's pretty high. That much memory can be used by each active\n> >> connection, potentially for each of several parts of the active\n> >> query on each connection. You should probably set this much lower\n> >> in postgresql.conf and boost it if necessary for individual\nqueries.\n> >\n> > hmmm.. it was 8GB for many months !\n> >\n> > i shall reduce it further, but will it not result in usage of too\n> many\n> > temp files\n> > and saturate i/o?\n> >\n> >\n> >\n> >>\n> >>> effective_cache_size = 18GB\n> >>\n> >> With 32GB RAM on the machine, I would probably set this higher --\n> >> somewhere in the 24GB to 30GB range, unless you have specific\n> >> reasons to believe otherwise. 
It's not that critical, though.\n> >\n> > i do not remember well but there is a system view that (i think)\n> > guides at what stage the marginal returns of increasing it\n> > starts disappearing , i had set it a few years back.\n> >\n> >\n> >>\n> >>> add_missing_from = on\n> >>\n> >> Why? There has been discussion of eliminating this option -- do\nyou\n> >> have queries which rely on the non-standard syntax this enables?\n> >\n> > unfortunately yes.\n> >\n> >>\n> >>> Also i would like to apologize that some of the discussions on\n> >>> this problem inadvertently became private between me & kevin.\n> >>\n> >> Oops. I failed to notice that. Thanks for bringing it back to the\n> >> list. (It's definitely in your best interest to keep it in front\nof\n> >> all the other folks here, some of whom regularly catch things I\nmiss\n> >> or get wrong.)\n> >>\n> >> If you still do have slow queries, please follow up with details.\n> >\n> >\n> > I have now set log_min_duration_statement = 5000\n> > and there are few queries that come to logs.\n> >\n> > please comment on the connection pooling aspect.\n> >\n> > Warm Regards\n> > Rajesh Kumar Mallah.\n> >\n> >>\n> >> -Kevin\n> >>\n> >\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 24 Jun 2010 12:37:57 -0600", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Excerpts from Rajesh Kumar Mallah's message of jue jun 24 13:25:32 -0400 2010:\n\n> What prompted me to post to list is that the server transitioned from\n> being IO bound to CPU bound and 90% of syscalls being\n> lseek(XXX, 0, SEEK_END) = YYYYYYY\n\nIt could be useful to find out what file is being seeked. Correlate the\nXXX with files in /proc/<pid>/fd (at least on Linux) to find out more.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 24 Jun 2010 14:58:23 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n \n>>> max_connections = 300\n>>\n>> As I've previously mentioned, I would use a connection pool, in\n>> which case this wouldn't need to be that high.\n> \n> We do use connection pooling provided to mod_perl server\n> via Apache::DBI::Cache. If i reduce this i *get* \"too many\n> connections from non-superuser ... \" error. Will pgpool - I/II\n> still applicable in this scenario ?\n \nYeah, you can't reduce this setting without first having a\nconnection pool in place which will limit how many connections are\nin use. We haven't used any of the external connection pool\nproducts for PostgreSQL yet, because when we converted to PostgreSQL\nwe were already using a pool built into our application framework. \nThis pool queues requests for database transactions and has one\nthread per connection in the database pool to pull and service\nobjects which encapsulate the logic of the database transaction.\n \nWe're moving to new development techniques, since that framework is\nover ten years old now, but the overall approach is going to stay\nthe same -- because it has worked so well for us. 
By queuing\nrequests beyond the number which can keep all the server's resources\nbusy, we avoid wasting resources on excessive context switching and\n(probably more significant) contention for locks. At one point our\nbusiest server started to suffer performance problems under load,\nand we were able to fix them by simple configuring the connection\npool to half its previous size -- both response time and throughput\nimproved.\n \n-Kevin\n", "msg_date": "Thu, 24 Jun 2010 14:00:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Benjamin Krajmalnik wrote:\n> Rajesh,\n> \n> I had a similar situation a few weeks ago whereby performance all of a\n> sudden decreased.\n> The one tunable which resolved the problem in my case was increasing the\n> number of checkpoint segments.\n> After increasing them, everything went back to its normal state.\n\nDid you get a db server log message suggesting in increasing that\nsetting? I hope so.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Mon, 28 Jun 2010 17:44:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Bruce,\nUnfortunately not. The behavior I had was ebbs and flows. On FreeBSD,\nI was seeing a lot of kernel wait states in top. So every few minutes,\nresponsiveness of the db was pretty bad. 8.4.4/amd64 on FreeBSD 7.2\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, June 28, 2010 3:45 PM\n> To: Benjamin Krajmalnik\n> Cc: Rajesh Kumar Mallah; Kevin Grittner; pgsql-\n> [email protected]\n> Subject: Re: [PERFORM] cpu bound postgresql setup.\n> \n> Benjamin Krajmalnik wrote:\n> > Rajesh,\n> >\n> > I had a similar situation a few weeks ago whereby performance all of\n> a\n> > sudden decreased.\n> > The one tunable which resolved the problem in my case was increasing\n> the\n> > number of checkpoint segments.\n> > After increasing them, everything went back to its normal state.\n> \n> Did you get a db server log message suggesting in increasing that\n> setting? I hope so.\n> \n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> + None of us is going to be here forever. +\n", "msg_date": "Tue, 29 Jun 2010 09:11:53 -0600", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." }, { "msg_contents": "Benjamin Krajmalnik wrote:\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Monday, June 28, 2010 3:45 PM\n> > To: Benjamin Krajmalnik\n> > Cc: Rajesh Kumar Mallah; Kevin Grittner; pgsql-\n> > [email protected]\n> > Subject: Re: [PERFORM] cpu bound postgresql setup.\n> > \n> > Benjamin Krajmalnik wrote:\n> > > Rajesh,\n> > >\n> > > I had a similar situation a few weeks ago whereby performance all of\n> > a\n> > > sudden decreased.\n> > > The one tunable which resolved the problem in my case was increasing\n> > the\n> > > number of checkpoint segments.\n> > > After increasing them, everything went back to its normal state.\n> > \n> > Did you get a db server log message suggesting in increasing that\n> > setting? I hope so.\n\n> Unfortunately not. The behavior I had was ebbs and flows. 
On FreeBSD,\n> I was seeing a lot of kernel wait states in top. So every few minutes,\n> responsiveness of the db was pretty bad. 8.4.4/amd64 on FreeBSD 7.2\n\nBummer. What is supposed to happen is if you are checkpointing too\nfrequently ( < 30 seconds), you get the suggestion about increasing\ncheckpoint_segments. I have not heard of cases where you are not\ncheckpointing too frequently, and increasing checkpoint_segments helps.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Tue, 29 Jun 2010 11:29:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpu bound postgresql setup." } ]
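Two checks that follow from the checkpoint and work_mem discussion above, written as 8.4-flavoured sketches rather than prescriptions: pg_stat_bgwriter shows whether checkpoints are being requested because checkpoint_segments keeps filling up (checkpoints_req climbing much faster than checkpoints_timed is the hint to raise it), and work_mem can be raised for the one session that needs it instead of server-wide:

SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;  -- req much higher than timed suggests too few checkpoint_segments
SET work_mem = '64MB';  -- session-local override; the value here is only an example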
[ { "msg_contents": "I have a situation where we are limited by the chassis on the box (and cost).\n\nWe have a 12 x 600G hot swappable disk system (raid 10)\nand 2 internal disk ( 2x 146G)\n\nWe would like to maximize storage on the large disks .\n\nDoes it make sense to put the WAL and OS on the internal disks and use\nthe 12 large disks only for data or should we put the WAL along with\ndata and leave the OS on the internal disks.\n\nOn our current systems..everything is on a single RAID 10 volume (and\nperformance is good)\n\nWe are just considering options now that we have the 2 extra disks to spare.\n", "msg_date": "Wed, 23 Jun 2010 12:01:32 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "WAL+Os on a single disk" }, { "msg_contents": "On Wed, Jun 23, 2010 at 3:01 PM, Anj Adu <[email protected]> wrote:\n> I have a situation where we are limited by the chassis on the box (and cost).\n>\n> We have a 12 x 600G hot swappable disk system (raid 10)\n> and 2 internal disk  ( 2x 146G)\n>\n> We would like to maximize storage on the large disks .\n>\n> Does it make sense to put the WAL and OS on the internal disks and use\n> the 12 large disks only for data or should we put the WAL along with\n> data and leave the OS on the internal disks.\n>\n> On our current systems..everything is on a single RAID 10 volume (and\n> performance is good)\n>\n> We are just considering options now that we have the 2 extra disks to spare.\n\nI have 16 disks in a server, 2 hot spares, 2 for OS and WAL and 12 for\nRAID-10. The RAID-10 array hits 100% utilization long before the 2 in\na RAID-1 for OS and WAL do. And we log all modifying SQL statements\nonto the same disk set. So for us, the WAL and OS and logging on the\nsame data set works well.\n", "msg_date": "Wed, 23 Jun 2010 15:34:33 -0400", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL+Os on a single disk" }, { "msg_contents": "On Wed, 23 Jun 2010, Scott Marlowe wrote:\n>> We have a 12 x 600G hot swappable disk system (raid 10)\n>> and 2 internal disk  ( 2x 146G)\n>>\n>> Does it make sense to put the WAL and OS on the internal disks\n>\n> So for us, the WAL and OS and logging on the same data set works well.\n\nGenerally, it is recommended that you put the WAL onto a separate disc to \nthe data. However, in this case, I would be careful. It may be that the 12 \ndisc array is more capable. Specifically, it is likely that the 12-disc \narray has a battery backed cache, but the two internal drives (RAID 1 \npresumably) do not. If this is the case, then putting the WAL on the \ninternal drives will reduce performance, as you will only be able to \ncommit a transaction once per revolution of the internal discs. In \ncontrast, if the WAL is on a battery backed cache array, then you can \ncommit much more frequently.\n\nTest it and see.\n\nMatthew\n\n-- \n I don't want the truth. I want something I can tell parliament!\n -- Rt. Hon. 
Jim Hacker MP", "msg_date": "Thu, 24 Jun 2010 10:14:00 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL+Os on a single disk" }, { "msg_contents": "On Thu, Jun 24, 2010 at 5:14 AM, Matthew Wakeling <[email protected]> wrote:\n> On Wed, 23 Jun 2010, Scott Marlowe wrote:\n>>>\n>>> We have a 12 x 600G hot swappable disk system (raid 10)\n>>> and 2 internal disk  ( 2x 146G)\n>>>\n>>> Does it make sense to put the WAL and OS on the internal disks\n>>\n>> So for us, the WAL and OS and logging on the same data set works well.\n>\n> Generally, it is recommended that you put the WAL onto a separate disc to\n> the data. However, in this case, I would be careful. It may be that the 12\n> disc array is more capable. Specifically, it is likely that the 12-disc\n> array has a battery backed cache, but the two internal drives (RAID 1\n> presumably) do not. If this is the case, then putting the WAL on the\n> internal drives will reduce performance, as you will only be able to commit\n> a transaction once per revolution of the internal discs. In contrast, if the\n> WAL is on a battery backed cache array, then you can commit much more\n> frequently.\n\nThis is not strictly true of the WAL, which writes sequentially and\nmore than one transaction at a time. As you said though, test it to\nbe sure.\n", "msg_date": "Thu, 24 Jun 2010 09:31:50 -0400", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL+Os on a single disk" }, { "msg_contents": "What would you recommend to do a quick test for this? (i.e WAL on\ninternal disk vs WALon the 12 disk raid array )?\n\nOn Thu, Jun 24, 2010 at 6:31 AM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Jun 24, 2010 at 5:14 AM, Matthew Wakeling <[email protected]> wrote:\n>> On Wed, 23 Jun 2010, Scott Marlowe wrote:\n>>>>\n>>>> We have a 12 x 600G hot swappable disk system (raid 10)\n>>>> and 2 internal disk  ( 2x 146G)\n>>>>\n>>>> Does it make sense to put the WAL and OS on the internal disks\n>>>\n>>> So for us, the WAL and OS and logging on the same data set works well.\n>>\n>> Generally, it is recommended that you put the WAL onto a separate disc to\n>> the data. However, in this case, I would be careful. It may be that the 12\n>> disc array is more capable. Specifically, it is likely that the 12-disc\n>> array has a battery backed cache, but the two internal drives (RAID 1\n>> presumably) do not. If this is the case, then putting the WAL on the\n>> internal drives will reduce performance, as you will only be able to commit\n>> a transaction once per revolution of the internal discs. In contrast, if the\n>> WAL is on a battery backed cache array, then you can commit much more\n>> frequently.\n>\n> This is not strictly true of the WAL, which writes sequentially and\n> more than one transaction at a time.  As you said though, test it to\n> be sure.\n>\n", "msg_date": "Thu, 24 Jun 2010 07:55:04 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL+Os on a single disk" }, { "msg_contents": "On Thu, Jun 24, 2010 at 10:55 AM, Anj Adu <[email protected]> wrote:\n> What would you recommend to do a quick test for this? 
(i.e WAL on\n> internal disk vs WALon the 12 disk raid array )?\n\nMaybe just pgbench?\n\nhttp://archives.postgresql.org/pgsql-performance/2010-06/msg00223.php\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 25 Jun 2010 21:15:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL+Os on a single disk" } ]
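Alongside the pgbench run suggested above, a crude single-connection check of WAL commit latency can be made from psql with \timing turned on: with autocommit left enabled, each INSERT below is its own transaction and must wait for a WAL flush, so repeating it a few hundred times against each candidate WAL location gives a rough feel for the difference. The table is a throwaway and its name is arbitrary:

CREATE TABLE wal_latency_test (id serial PRIMARY KEY, note text);
INSERT INTO wal_latency_test (note) VALUES ('ping');  -- repeat many times; each statement commits and syncs WAL
DROP TABLE wal_latency_test;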