[
{
"msg_contents": "I am using postgresql to be the central database for a variety of tools for\nour testing infrastructure. We have web tools and CLI tools that require access\nto machine configuration and other states for automation. We have one tool that\nuses a table that looks like this:\n\nsystest_live=# \\d cuty\n Table \"public.cuty\"\n Column | Type | Modifiers \n-------------+--------------------------+-----------\n resource_id | integer | not null\n lock_start | timestamp with time zone | \n lock_by | character varying(12) | \n frozen | timestamp with time zone | \n freeze_end | timestamp with time zone | \n freeze_by | character varying(12) | \n state | character varying(15) | \nIndexes:\n \"cuty_pkey\" PRIMARY KEY, btree (resource_id)\n \"cuty_main_idx\" btree (resource_id, lock_start)\nForeign-key constraints:\n \"cuty_resource_id_fkey\" FOREIGN KEY (resource_id) REFERENCES resource(resource_id) ON UPDATE CASCADE ON DELETE CASCADE\n\nVarious users run a tool that updates this table to determine if the particular\nresource is available or not. Within a course of a few days, this table can\nbe updated up to 200,000 times. There are only about 3500 records in this\ntable, but the update and select queries against this table start to slow\ndown considerablly after a few days. Ideally, this table doesn't even need\nto be stored and written to the filesystem. After I run a vacuum against this\ntable, the overall database performance seems to rise again. When database\nis running with recent vacuum the average server load is about .40, but after\nthis table is updated 200,000+ times, the server load can go up to 5.0.\n\nhere is a typical update query:\n2006-04-03 10:53:39 PDT testtool systest_live kyoto.englab.juniper.net(4888) LOG: duration: 2263.741 ms statement: UPDATE cuty SET\n lock_start = NOW(),\n lock_by = 'tlim'\n WHERE resource_id='2262' and (lock_start IS NULL OR lock_start < (NOW() - interval '3600 second'))\n\nWe used to use MySQL for these tools and we never had any issues, but I believe\nit is due to the transactional nature of Postgres that is adding an overhead\nto this problem. Are there any table options that enables the table contents\nto be maintained in ram only or have delayed writes for this particular table?\n\nThanks in advance,\nKenji\n",
"msg_date": "Mon, 3 Apr 2006 11:24:03 -0700",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizing db for small table with tons of updates"
},
{
"msg_contents": "Kenji,\n\n> We used to use MySQL for these tools and we never had any issues, but I\n> believe it is due to the transactional nature of Postgres that is adding\n> an overhead to this problem. \n\nYou're correct.\n\n> Are there any table options that enables \n> the table contents to be maintained in ram only or have delayed writes\n> for this particular table?\n\nNo. That's not really the right solution anyway; if you want \nnon-transactional data, why not just use a flat file? Or Memcached?\n\nPossible solutions:\n1) if the data is non-transactional, consider using pgmemcached.\n2) if you want to maintain transactions, use a combination of autovacuum \nand vacuum delay to do more-or-less continuous low-level vacuuming of the \ntable. Using Postgres 8.1 will help you to be able to manage this.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Apr 2006 11:29:42 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing db for small table with tons of updates"
},
{
"msg_contents": "Dear Kenji,\n\nwe had similar issuse with a banner impression update system,\nthat had high concurrency. we modfied the system to use insert\ninstead of update of the same row. performance wise things are\nmuch better , but you have to keep deleting old data.\n\nhope you extrapolate what i mean if its applicable to your case.\n\nRegds\nRajesh Kumar Mallah\n\nOn 4/3/06, Kenji Morishige <[email protected]> wrote:\n> I am using postgresql to be the central database for a variety of tools for\n> our testing infrastructure. We have web tools and CLI tools that require access\n> to machine configuration and other states for automation. We have one tool that\n> uses a table that looks like this:\n>\n> systest_live=# \\d cuty\n> Table \"public.cuty\"\n> Column | Type | Modifiers\n> -------------+--------------------------+-----------\n> resource_id | integer | not null\n> lock_start | timestamp with time zone |\n> lock_by | character varying(12) |\n> frozen | timestamp with time zone |\n> freeze_end | timestamp with time zone |\n> freeze_by | character varying(12) |\n> state | character varying(15) |\n> Indexes:\n> \"cuty_pkey\" PRIMARY KEY, btree (resource_id)\n> \"cuty_main_idx\" btree (resource_id, lock_start)\n> Foreign-key constraints:\n> \"cuty_resource_id_fkey\" FOREIGN KEY (resource_id) REFERENCES resource(resource_id) ON UPDATE CASCADE ON DELETE CASCADE\n>\n> Various users run a tool that updates this table to determine if the particular\n> resource is available or not. Within a course of a few days, this table can\n> be updated up to 200,000 times. There are only about 3500 records in this\n> table, but the update and select queries against this table start to slow\n> down considerablly after a few days. Ideally, this table doesn't even need\n> to be stored and written to the filesystem. After I run a vacuum against this\n> table, the overall database performance seems to rise again. When database\n> is running with recent vacuum the average server load is about .40, but after\n> this table is updated 200,000+ times, the server load can go up to 5.0.\n>\n> here is a typical update query:\n> 2006-04-03 10:53:39 PDT testtool systest_live kyoto.englab.juniper.net(4888) LOG: duration: 2263.741 ms statement: UPDATE cuty SET\n> lock_start = NOW(),\n> lock_by = 'tlim'\n> WHERE resource_id='2262' and (lock_start IS NULL OR lock_start < (NOW() - interval '3600 second'))\n>\n> We used to use MySQL for these tools and we never had any issues, but I believe\n> it is due to the transactional nature of Postgres that is adding an overhead\n> to this problem. Are there any table options that enables the table contents\n> to be maintained in ram only or have delayed writes for this particular table?\n>\n> Thanks in advance,\n> Kenji\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n",
"msg_date": "Tue, 4 Apr 2006 00:06:45 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing db for small table with tons of updates"
},
{
"msg_contents": "Kenji Morishige <[email protected]> writes:\n> Various users run a tool that updates this table to determine if the particular\n> resource is available or not. Within a course of a few days, this table can\n> be updated up to 200,000 times. There are only about 3500 records in this\n> table, but the update and select queries against this table start to slow\n> down considerablly after a few days. Ideally, this table doesn't even need\n> to be stored and written to the filesystem. After I run a vacuum against this\n> table, the overall database performance seems to rise again.\n\nYou should never have let such a table go that long without vacuuming.\n\nYou might consider using autovac to take care of it for you. If you\ndon't want to use autovac, set up a cron job that will vacuum the table\nat least once per every few thousand updates.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 03 Apr 2006 14:39:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing db for small table with tons of updates "
},
{
"msg_contents": "I've been stumped as to how to call psql from the command line without it \nprompting me for a password. Is there a enviornoment variable I can specify for\nthe password or something I can place in .pgsql? I could write a perl wrapper\naround it, but I've been wondering how I can call psql -c without it prompting\nme. Is it possible?\n\n-Kenji\n\nOn Mon, Apr 03, 2006 at 02:39:10PM -0400, Tom Lane wrote:\n> Kenji Morishige <[email protected]> writes:\n> > Various users run a tool that updates this table to determine if the particular\n> > resource is available or not. Within a course of a few days, this table can\n> > be updated up to 200,000 times. There are only about 3500 records in this\n> > table, but the update and select queries against this table start to slow\n> > down considerablly after a few days. Ideally, this table doesn't even need\n> > to be stored and written to the filesystem. After I run a vacuum against this\n> > table, the overall database performance seems to rise again.\n> \n> You should never have let such a table go that long without vacuuming.\n> \n> You might consider using autovac to take care of it for you. If you\n> don't want to use autovac, set up a cron job that will vacuum the table\n> at least once per every few thousand updates.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Mon, 3 Apr 2006 12:02:13 -0700",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing db for small table with tons of updates"
},
{
"msg_contents": "Kenji Morishige wrote:\n> I've been stumped as to how to call psql from the command line without it \n> prompting me for a password. Is there a enviornoment variable I can specify for\n> the password or something I can place in .pgsql? I could write a perl wrapper\n> around it, but I've been wondering how I can call psql -c without it prompting\n> me. Is it possible?\n\nSure it is. Set up a .pgpass file.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 3 Apr 2006 15:03:54 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing db for small table with tons of updates"
},
{
"msg_contents": "Sweet! Thanks.\n-Kenji\n\nOn Mon, Apr 03, 2006 at 03:03:54PM -0400, Alvaro Herrera wrote:\n> Kenji Morishige wrote:\n> > I've been stumped as to how to call psql from the command line without it \n> > prompting me for a password. Is there a enviornoment variable I can specify for\n> > the password or something I can place in .pgsql? I could write a perl wrapper\n> > around it, but I've been wondering how I can call psql -c without it prompting\n> > me. Is it possible?\n> \n> Sure it is. Set up a .pgpass file.\n> \n> -- \n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 3 Apr 2006 12:08:36 -0700",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing db for small table with tons of updates"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Kenji Morishige wrote:\n>> I've been stumped as to how to call psql from the command line without it \n>> prompting me for a password. Is there a enviornoment variable I can specify for\n>> the password or something I can place in .pgsql? I could write a perl wrapper\n>> around it, but I've been wondering how I can call psql -c without it prompting\n>> me. Is it possible?\n\n> Sure it is. Set up a .pgpass file.\n\nAlso, consider whether a non-password-based auth method (eg, ident)\nmight work for you. Personally, I wouldn't trust ident over TCP, but\nif your kernel supports it on unix-socket connections it is secure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 03 Apr 2006 15:23:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing db for small table with tons of updates "
},
{
"msg_contents": "Cool, looks like I had tried the .pgpass thing a while back and wasn't working,\nI realized I had a typo or something in there. It works like a charm. Security\nin our intranet is not a big issue at the moment. Thanks for the help!\n-Kenji\n\nOn Mon, Apr 03, 2006 at 03:23:50PM -0400, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Kenji Morishige wrote:\n> >> I've been stumped as to how to call psql from the command line without it \n> >> prompting me for a password. Is there a enviornoment variable I can specify for\n> >> the password or something I can place in .pgsql? I could write a perl wrapper\n> >> around it, but I've been wondering how I can call psql -c without it prompting\n> >> me. Is it possible?\n> \n> > Sure it is. Set up a .pgpass file.\n> \n> Also, consider whether a non-password-based auth method (eg, ident)\n> might work for you. Personally, I wouldn't trust ident over TCP, but\n> if your kernel supports it on unix-socket connections it is secure.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Mon, 3 Apr 2006 12:28:51 -0700",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing db for small table with tons of updates"
}
]
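A minimal sketch of the two fixes that come out of this thread, assuming the database and table names above (the host, user and password are placeholders): a ~/.pgpass file so that `psql -c` runs without a password prompt, and a cron entry that vacuums the hot table often enough that dead row versions from the constant UPDATEs never pile up (autovacuum, as Tom and Josh suggest, is the other option).

```sh
# ~/.pgpass lets psql -c run non-interactively; the format is
# hostname:port:database:username:password and libpq ignores the file
# unless it is mode 0600.  Host, user and password are placeholders.
cat > ~/.pgpass <<'EOF'
dbhost.example.com:5432:systest_live:testtool:secret
EOF
chmod 0600 ~/.pgpass

# Crontab entry (add with crontab -e): plain VACUUM every 10 minutes on the
# heavily updated table; it does not lock out readers or writers, and keeps
# the table from bloating between less frequent full maintenance runs.
# */10 * * * * psql -h dbhost.example.com -U testtool -d systest_live -c 'VACUUM ANALYZE cuty;'
```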
[
{
"msg_contents": " version \n\n------------------------------------------------------------------------\n PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n3.3.6\n(1 row)\n\n\n-- The order of fields around the \"=\" in the WHERE conditions\n-- affects the query plan. I would rather not have to worry about\n-- that. It seems that it puts me back in the place of having to\n-- figure what join order is best. Here are two sql statements and\n-- the query plan that is generated for each. The worst of the two\n-- is first and the best one is second.\n-- Mike Quinn\n\n-- the worst way --\n\nEXPLAIN ANALYZE\nSELECT\nLocts.id,\nCommtypes.name\nFROM\nGrowers\n,\nLocts\n,\nCrops\n,\nCommtypes\nWHERE\nGrowers.id = '0401606'\nAND\n-- Commtypes.number = Crops.Commtype\nCrops.Commtype = Commtypes.number\nAND\nLocts.number = Crops.Loct\n-- Crops.Loct = Locts.number\nAND\nGrowers.number = Locts.Grower\n-- Locts.Grower = Growers.number\n;\n QUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=18934.81..647002.69 rows=1045 width=20) (actual\ntime=525.267..4079.051 rows=69 loops=1)\n Join Filter: (\"outer\".commtype = \"inner\".number)\n -> Nested Loop (cost=18923.21..631988.31 rows=1310 width=18)\n(actual time=523.867..4036.005 rows=69 loops=1)\n Join Filter: (\"inner\".number = \"outer\".loct)\n -> Seq Scan on crops (cost=0.00..7599.46 rows=258746\nwidth=24) (actual time=0.006..278.656 rows=258746 loops=1)\n -> Materialize (cost=18923.21..18924.25 rows=104 width=18)\n(actual time=0.001..0.007 rows=9 loops=258746)\n -> Nested Loop (cost=5503.02..18923.11 rows=104\nwidth=18) (actual time=0.061..523.703 rows=9 loops=1)\n Join Filter: (\"outer\".number = \"inner\".grower)\n -> Index Scan using growers_id on growers \n(cost=0.00..3.05 rows=4 width=12) (actual time=0.016..0.024 rows=1\nloops=1)\n Index Cond: ((id)::text = '0401606'::text)\n -> Materialize (cost=5503.02..7451.58\nrows=112456 width=30) (actual time=0.007..433.970 rows=112456 loops=1)\n -> Seq Scan on locts (cost=0.00..4566.56\nrows=112456 width=30) (actual time=0.003..176.771 rows=112456 loops=1)\n -> Materialize (cost=11.60..16.69 rows=509 width=26) (actual\ntime=0.001..0.287 rows=509 loops=69)\n -> Seq Scan on commtypes (cost=0.00..11.09 rows=509\nwidth=26) (actual time=0.021..0.672 rows=509 loops=1)\n Total runtime: 4081.766 ms\n(15 rows)\n\n-- the best way --\n\nEXPLAIN ANALYZE\nSELECT\nLocts.id,\nCommtypes.name\nFROM\nGrowers\n,\nLocts\n,\nCrops\n,\nCommtypes\nWHERE\nGrowers.id = '0401606'\nAND\nCommtypes.number = Crops.Commtype\n-- Crops.Commtype = Commtypes.number\nAND\n-- Locts.number = Crops.Loct\nCrops.Loct = Locts.number\nAND\n-- Growers.number = Locts.Grower\nLocts.Grower = Growers.number\n;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..11224.18 rows=1045 width=20) (actual\ntime=0.259..1.172 rows=69 loops=1)\n -> Nested Loop (cost=0.00..5717.09 rows=1310 width=18) (actual\ntime=0.205..0.466 rows=69 loops=1)\n -> Nested Loop (cost=0.00..31.90 rows=104 width=18) (actual\ntime=0.141..0.171 rows=9 loops=1)\n -> Index Scan using growers_id on growers \n(cost=0.00..3.05 rows=4 width=12) (actual time=0.078..0.080 rows=1\nloops=1)\n Index Cond: ((id)::text = '0401606'::text)\n -> Index Scan using locts_grower on locts \n(cost=0.00..6.15 rows=85 width=30) (actual 
time=0.058..0.070 rows=9\nloops=1)\n Index Cond: (locts.grower = \"outer\".number)\n -> Index Scan using crops_loct on crops (cost=0.00..54.13\nrows=43 width=24) (actual time=0.012..0.022 rows=8 loops=9)\n Index Cond: (crops.loct = \"outer\".number)\n -> Index Scan using commtypes_number_key on commtypes \n(cost=0.00..4.19 rows=1 width=26) (actual time=0.006..0.007 rows=1\nloops=69)\n Index Cond: (commtypes.number = \"outer\".commtype)\n Total runtime: 1.308 ms\n(12 rows)\n\n\n",
"msg_date": "Mon, 03 Apr 2006 14:24:48 -0700",
"msg_from": "\"Mike Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "The order of fields around the \"=\" in the WHERE conditions"
},
{
"msg_contents": "\"Mike Quinn\" <[email protected]> writes:\n> -- The order of fields around the \"=\" in the WHERE conditions\n> -- affects the query plan.\n\nThat absolutely should not be happening. Could we see a complete test\ncase?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 03 Apr 2006 17:35:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The order of fields around the \"=\" in the WHERE conditions "
},
{
"msg_contents": "\"Mike Quinn\" <[email protected]> writes:\n> -- The order of fields around the \"=\" in the WHERE conditions\n> -- affects the query plan.\n\nBTW, what's the datatype(s) of the join columns? The behavior looks\nconsistent with the idea that the planner doesn't think it can commute\nthe join conditions, which would be a bug/omission in the set of\noperators existing for the datatype(s). I believe we've got commutators\nfor all the standard '=' operators, but a contrib or third-party\ndatatype might be missing this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 03 Apr 2006 17:58:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The order of fields around the \"=\" in the WHERE conditions "
},
{
"msg_contents": "The datatype of the join columns is a user defined type and there are no\ncommutators defined. I will fix that and retest. Thanks for the\ninsight.\n\nMike Quinn\n",
"msg_date": "Tue, 04 Apr 2006 10:18:30 -0700",
"msg_from": "\"Mike Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The order of fields around the \"=\" in the WHERE"
},
{
"msg_contents": " version \n\n------------------------------------------------------------------------\n PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n3.3.6\n(1 row)\n\n-- After commutator added to operators of user defined type,\n-- the order of fields around the \"=\" in WHERE conditions\n-- no longer affect the query plan.\n\n-- previously the worst way --\n\nEXPLAIN ANALYZE\nSELECT\nLocts.id,\nCommtypes.name\nFROM\nGrowers\n,\nLocts\n,\nCrops\n,\nCommtypes\nWHERE\nGrowers.id = '0401606'\nAND\n-- Commtypes.number = Crops.Commtype\nCrops.Commtype = Commtypes.number\nAND\nLocts.number = Crops.Loct\n-- Crops.Loct = Locts.number\nAND\nGrowers.number = Locts.Grower\n-- Locts.Grower = Growers.number\n;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..17467.00 rows=954 width=20) (actual\ntime=0.253..1.155 rows=69 loops=1)\n -> Nested Loop (cost=0.00..12413.93 rows=1202 width=18) (actual\ntime=0.191..0.472 rows=69 loops=1)\n -> Nested Loop (cost=0.00..32.51 rows=104 width=18) (actual\ntime=0.142..0.171 rows=9 loops=1)\n -> Index Scan using growers_id on growers \n(cost=0.00..3.05 rows=4 width=12) (actual time=0.065..0.067 rows=1\nloops=1)\n Index Cond: ((id)::text = '0401606'::text)\n -> Index Scan using locts_grower on locts \n(cost=0.00..6.23 rows=91 width=30) (actual time=0.070..0.085 rows=9\nloops=1)\n Index Cond: (\"outer\".number = locts.grower)\n -> Index Scan using crops_loct on crops (cost=0.00..118.53\nrows=42 width=24) (actual time=0.011..0.021 rows=8 loops=9)\n Index Cond: (\"outer\".number = crops.loct)\n -> Index Scan using commtypes_number_key on commtypes \n(cost=0.00..4.19 rows=1 width=26) (actual time=0.006..0.007 rows=1\nloops=69)\n Index Cond: (\"outer\".commtype = commtypes.number)\n Total runtime: 1.299 ms\n(12 rows)\n\n-- previously the best way --\n\nEXPLAIN ANALYZE\nSELECT\nLocts.id,\nCommtypes.name\nFROM\nGrowers\n,\nLocts\n,\nCrops\n,\nCommtypes\nWHERE\nGrowers.id = 0401606\nAND\nCommtypes.number = Crops.Commtype\n-- Crops.Commtype = Commtypes.number\nAND\n-- Locts.number = Crops.Loct\nCrops.Loct = Locts.number\nAND\n-- Growers.number = Locts.Grower\nLocts.Grower = Growers.number\n;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..17467.00 rows=954 width=20) (actual\ntime=0.063..0.947 rows=69 loops=1)\n -> Nested Loop (cost=0.00..12413.93 rows=1202 width=18) (actual\ntime=0.050..0.318 rows=69 loops=1)\n -> Nested Loop (cost=0.00..32.51 rows=104 width=18) (actual\ntime=0.036..0.064 rows=9 loops=1)\n -> Index Scan using growers_id on growers \n(cost=0.00..3.05 rows=4 width=12) (actual time=0.018..0.020 rows=1\nloops=1)\n Index Cond: ((id)::text = '0401606'::text)\n -> Index Scan using locts_grower on locts \n(cost=0.00..6.23 rows=91 width=30) (actual time=0.012..0.023 rows=9\nloops=1)\n Index Cond: (locts.grower = \"outer\".number)\n -> Index Scan using crops_loct on crops (cost=0.00..118.53\nrows=42 width=24) (actual time=0.007..0.018 rows=8 loops=9)\n Index Cond: (crops.loct = \"outer\".number)\n -> Index Scan using commtypes_number_key on commtypes \n(cost=0.00..4.19 rows=1 width=26) (actual time=0.005..0.006 rows=1\nloops=69)\n Index Cond: (commtypes.number = \"outer\".commtype)\n Total runtime: 1.091 ms\n(12 rows)\n\n\n\n>>> \"Mike Quinn\" <[email protected]> 4/4/06 10:18:30 AM >>>\nThe 
datatype of the join columns is a user defined type and there are\nno\ncommutators defined. I will fix that and retest. Thanks for the\ninsight.\n\nMike Quinn\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n",
"msg_date": "Tue, 04 Apr 2006 15:07:25 -0700",
"msg_from": "\"Mike Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The order of fields around the \"=\" in the WHERE"
}
]
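A hypothetical sketch of the fix Tom Lane points at above: the user-defined type's equality operator needs a COMMUTATOR declaration so the planner can rewrite "a = b" as "b = a" and use an index on either side of the join. The type and function names below are invented for illustration; the thread does not show the real definitions.

```sql
-- Sketch only: equality operator for a made-up user-defined type.
-- The COMMUTATOR clause is the part that lets the planner commute the
-- join condition; '=' is declared as its own commutator.
CREATE OPERATOR = (
    LEFTARG    = growernum,        -- hypothetical user-defined type
    RIGHTARG   = growernum,
    PROCEDURE  = growernum_eq,     -- boolean function comparing two values
    COMMUTATOR = =,
    RESTRICT   = eqsel,            -- standard selectivity estimators for '='
    JOIN       = eqjoinsel
);
```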
[
{
"msg_contents": "Hi,\n\nI've got a somewhat puzzling performance problem here.\n\nI'm trying to do a few tests with PostgreSQL 8.1.3 under Solaris\n(an OS I'm sort of a newbie in).\n\nThe machine is a X4100 and the OS is Solaris 10 1/06 fresh install\naccording to manual. It's got two SAS disks in RAID 1, 4GB of RAM.\n\nNow the problem is: this box is *much* slower than I expect.\n\nI've got a libpg test program that happily inserts data\nusing PQputCopyData().\n\nIt performs an order of magnitude worse than the same thing\non a small Sun (Ultra20) running Linux. Or 4 times slower than\nan iBook (sic!) running MacOS X.\n\nSo, I've this very bad feeling that there is something basic\nI'm missing here.\n\nFollowing are some stats:\n\n\"sync; dd; sync\" show these disks write at 53 MB/s => good.\n\niostat 1 while my test is running says:\n\n tty sd0 sd1 sd2 sd5\ncpu\n tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy\nwt id\n 1 57 0 0 0 0 0 0 0 0 0 1809 23 70 0\n1 0 99\n 0 235 0 0 0 0 0 0 0 0 0 2186 223 14 1\n1 0 99\n 0 81 0 0 0 0 0 0 0 0 0 2488 251 13 1\n1 0 98\n 0 81 0 0 0 0 0 0 0 0 0 2296 232 15 1\n0 0 99\n 0 81 0 0 0 0 0 0 0 0 0 2416 166 9 1\n0 0 98\n 0 81 0 0 0 0 0 0 0 0 0 2528 218 14 1\n1 0 99\n 0 81 0 0 0 0 0 0 0 0 0 2272 223 15 1\n0 0 99\n\nIf I interpret this correctly the disk writes at not more than 2.5\nMB/sec while the Opterons do nothing => this is bad.\n\nI've tried both, a hand compile with gcc and the solarispackages\nfrom pgfoundry.org => same result.\n\nEons ago PCs had those \"turbo\" switches (it was never totally clear\nwhy they put them there in the first place, anyway). I've this bad\nfeeling there's a secret \"turbo\" switch I can't spot hidden somewhere\nin Solaris :/\n\n\nBye, Chris.\n\n",
"msg_date": "Tue, 04 Apr 2006 02:39:38 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "bad performance on Solaris 10"
},
{
"msg_contents": "Chris,\n\n> Eons ago PCs had those \"turbo\" switches (it was never totally clear\n> why they put them there in the first place, anyway). I've this bad\n> feeling there's a secret \"turbo\" switch I can't spot hidden somewhere\n> in Solaris :/\n\nYes. Check out Jignesh's configuration advice .... ach, this is down. \nHold on, I will get you instructions on how to turn on filesystem caching \nand readahead in Solaris.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Apr 2006 17:49:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Jignesh¹s blog has some of the good stuff in it:\n http://blogs.sun.com/roller/page/jkshah\n\n- Luke\n\n\nOn 4/3/06 5:49 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n> Chris,\n> \n>> > Eons ago PCs had those \"turbo\" switches (it was never totally clear\n>> > why they put them there in the first place, anyway). I've this bad\n>> > feeling there's a secret \"turbo\" switch I can't spot hidden somewhere\n>> > in Solaris :/\n> \n> Yes. Check out Jignesh's configuration advice .... ach, this is down.\n> Hold on, I will get you instructions on how to turn on filesystem caching\n> and readahead in Solaris.\n> \n> --\n> --Josh\n> \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n\n\n\n\n\nRe: [PERFORM] bad performance on Solaris 10\n\n\nJignesh’s blog has some of the good stuff in it:\n http://blogs.sun.com/roller/page/jkshah\n\n- Luke\n\n\nOn 4/3/06 5:49 PM, \"Josh Berkus\" <[email protected]> wrote:\n\nChris,\n\n> Eons ago PCs had those \"turbo\" switches (it was never totally clear\n> why they put them there in the first place, anyway). I've this bad\n> feeling there's a secret \"turbo\" switch I can't spot hidden somewhere\n> in Solaris :/\n\nYes. Check out Jignesh's configuration advice .... ach, this is down. \nHold on, I will get you instructions on how to turn on filesystem caching\nand readahead in Solaris.\n\n--\n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings",
"msg_date": "Mon, 03 Apr 2006 20:46:28 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Chris Mair wrote:\n> Hi,\n> \n> I've got a somewhat puzzling performance problem here.\n> \n> I'm trying to do a few tests with PostgreSQL 8.1.3 under Solaris\n> (an OS I'm sort of a newbie in).\n> \n> The machine is a X4100 and the OS is Solaris 10 1/06 fresh install\n> according to manual. It's got two SAS disks in RAID 1, 4GB of RAM.\n> \n> Now the problem is: this box is *much* slower than I expect.\n> \n> I've got a libpg test program that happily inserts data\n> using PQputCopyData().\n> \n> It performs an order of magnitude worse than the same thing\n> on a small Sun (Ultra20) running Linux. Or 4 times slower than\n> an iBook (sic!) running MacOS X.\n> \n> So, I've this very bad feeling that there is something basic\n> I'm missing here.\n> \n> Following are some stats:\n> \n> \"sync; dd; sync\" show these disks write at 53 MB/s => good.\n> \n> iostat 1 while my test is running says:\n> \n> tty sd0 sd1 sd2 sd5\n> cpu\n> tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy\n> wt id\n> 1 57 0 0 0 0 0 0 0 0 0 1809 23 70 0\n> 1 0 99\n> 0 235 0 0 0 0 0 0 0 0 0 2186 223 14 1\n> 1 0 99\n> 0 81 0 0 0 0 0 0 0 0 0 2488 251 13 1\n> 1 0 98\n> 0 81 0 0 0 0 0 0 0 0 0 2296 232 15 1\n> 0 0 99\n> 0 81 0 0 0 0 0 0 0 0 0 2416 166 9 1\n> 0 0 98\n> 0 81 0 0 0 0 0 0 0 0 0 2528 218 14 1\n> 1 0 99\n> 0 81 0 0 0 0 0 0 0 0 0 2272 223 15 1\n> 0 0 99\n> \n> If I interpret this correctly the disk writes at not more than 2.5\n> MB/sec while the Opterons do nothing => this is bad.\n> \n> I've tried both, a hand compile with gcc and the solarispackages\n> from pgfoundry.org => same result.\n> \n> Eons ago PCs had those \"turbo\" switches (it was never totally clear\n> why they put them there in the first place, anyway). I've this bad\n> feeling there's a secret \"turbo\" switch I can't spot hidden somewhere\n> in Solaris :/\n> \n\nI ran across something like this on a Solaris 8, RAID1 system, and \nswitching off logging on filesystem containing postgres made a huge \ndifference!\n\nNow solaris 8 is ancient history, however see:\n\nhttp://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6238533\n\nApparently there can still be issues with logging without forcedirectio \n(which is the default I think).\n\nI suspect that making a *separate* filesystem for the pg_xlog directory \nand mounting that logging + forcedirectio would be a nice way to also \nget performance while keeping the advantages of logging + file \nbuffercache for the *rest* of the postgres components.\nCheers\n\nMark\n",
"msg_date": "Tue, 04 Apr 2006 17:12:04 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Mark,\n\n> I suspect that making a *separate* filesystem for the pg_xlog directory\n> and mounting that logging + forcedirectio would be a nice way to also\n> get performance while keeping the advantages of logging + file\n> buffercache for the *rest* of the postgres components.\n> Cheers\n\nYes, we tested this. It makes a huge difference in WAL speed.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Apr 2006 23:09:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Hi,\n\nthanks for all replys.\n\nI've done a few tests.\n\nRemounting the fs where $PGDATA lives with \"forcedirectio\"\n(together with logging, that is default) did not help\n(if not harm...) performance.\n\nDoing what http://blogs.sun.com/roller/page/jkshah suggests:\n wal_sync_method = fsync (unchanged)\n wal_buffers = 128 (was 8)\n checkpoint_segments = 128 (was 3)\n bgwriter_all_percent = 0 (was 0.333)\n bgwriter_all_maxpages = 0 (was 5)\nand leaving everything else default (solarispackages from pgfoundry)\nincreased performance ~ 7 times!\n\nPlaying around with these modifications I find that it's\nactually just the\n wal_buffers = 128\nalone which makes all the difference!\n\nQuickly playing around with wal_buffers on Linux and Mac OS X\nI see it influences the performance of my test a bit, maybe in the\n10-20% range (I'm really doing quick tests, nothing systematic),\nbut nowhere near as spectacularly as on Solaris.\n\nI'm happy so far, but I find it very surprising that this single\nparameter has such an impact (only on) Solaris 10.\n\n(my test program is a bulk inserts using PQputCopyData in large\ntransactions - all test were 8.1.3).\n\nBye, Chris\n\n\n\n\n",
"msg_date": "Wed, 05 Apr 2006 23:31:25 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Chris,\n\nOn 4/5/06 2:31 PM, \"Chris Mair\" <[email protected]> wrote:\n\n> Doing what http://blogs.sun.com/roller/page/jkshah suggests:\n> wal_sync_method = fsync (unchanged)\n> wal_buffers = 128 (was 8)\n> checkpoint_segments = 128 (was 3)\n> bgwriter_all_percent = 0 (was 0.333)\n> bgwriter_all_maxpages = 0 (was 5)\n> and leaving everything else default (solarispackages from pgfoundry)\n> increased performance ~ 7 times!\n\nIn the recent past, Jignesh Shaw of Sun MDE discovered that changing the\nbgwriter_* parameters to zero had a dramatic positive impact on performance.\n\nThere are also some critical UFS kernel tuning parameters to set, you should\nfind those in his blog.\n\nWe found and fixed some libpq issues with Solaris that were also critical -\nthey should be in 8.1.3 I think.\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Apr 2006 14:40:14 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Chris,\n> \n> On 4/5/06 2:31 PM, \"Chris Mair\" <[email protected]> wrote:\n> \n> > Doing what http://blogs.sun.com/roller/page/jkshah suggests:\n> > wal_sync_method = fsync (unchanged)\n> > wal_buffers = 128 (was 8)\n> > checkpoint_segments = 128 (was 3)\n> > bgwriter_all_percent = 0 (was 0.333)\n> > bgwriter_all_maxpages = 0 (was 5)\n> > and leaving everything else default (solarispackages from pgfoundry)\n> > increased performance ~ 7 times!\n> \n> In the recent past, Jignesh Shaw of Sun MDE discovered that changing the\n> bgwriter_* parameters to zero had a dramatic positive impact on performance.\n\nThis essentially means stopping all bgwriter activity, thereby deferring\nall I/O until checkpoint. Was this considered? With\ncheckpoint_segments to 128, it wouldn't surprise me that there wasn't\nany checkpoint executed at all during the whole test ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 5 Apr 2006 17:48:24 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Alvaro,\n\nOn 4/5/06 2:48 PM, \"Alvaro Herrera\" <[email protected]> wrote:\n\n> This essentially means stopping all bgwriter activity, thereby deferring\n> all I/O until checkpoint. Was this considered? With\n> checkpoint_segments to 128, it wouldn't surprise me that there wasn't\n> any checkpoint executed at all during the whole test ...\n\nYes, many things about the Solaris UFS filesystem caused a great deal of\npain over the 10 months of experiments we ran with Sun MDE. Ultimately, the\nconclusion was that ZFS is going to make all of the pain go away.\n\nIn the meantime, all you can do is tweak up UFS and avoid I/O as much as\npossible.\n\n- Luke \n\n\n",
"msg_date": "Wed, 05 Apr 2006 14:56:40 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\n> > > Doing what http://blogs.sun.com/roller/page/jkshah suggests:\n> > > wal_sync_method = fsync (unchanged)\n> > > wal_buffers = 128 (was 8)\n> > > checkpoint_segments = 128 (was 3)\n> > > bgwriter_all_percent = 0 (was 0.333)\n> > > bgwriter_all_maxpages = 0 (was 5)\n> > > and leaving everything else default (solarispackages from pgfoundry)\n> > > increased performance ~ 7 times!\n\nOk, so I could quite believe my own benchmarks and I decided\nto do a fresh initdb and retry everything.\n\nAt first it looked like I coudn't reproduce the speed up I just saw.\n\nThen I realized it was the \nwal_sync_method = fsync\nline that makes all the difference!\n\nNormally parameters that are commented are default values, but for\nwal_sync_method it actually says (note the comment):\n\nwal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n\nSo Im my last mail I drew the wrong conclusion, because i didn't comment\nwal_sync_method to double check.\n\nTo the point: the default wal_sync_method choosen on Solaris 10 appears\nto be a very bad one - for me, picking fsync increases performance ~\ntimes 7, all other parameters unchanged!\n\nWould it be a good idea to change this in the default install?\n\nBye, Chris.\n\nPS: yes I did a fresh initdb again to double check ;)\n\n",
"msg_date": "Thu, 06 Apr 2006 01:13:55 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Chris Mair wrote:\n> Hi,\n> \n> thanks for all replys.\n> \n> I've done a few tests.\n> \n> Remounting the fs where $PGDATA lives with \"forcedirectio\"\n> (together with logging, that is default) did not help\n> (if not harm...) performance.\n> \n>\n\nSure - forcedirectio on the entire $PGDATA is a definite loss, you only \nwant it on $PGDATA/pg_xlog. The usual way this is accomplished is by \nmaking a separate filsystem for pg_xlog and symlinking from $PGDATA.\n\nDid you try the other option of remounting the fs for $PGDATA without \nlogging or forcedirectio?\n\nCheers\n\nMark\n\n\n",
"msg_date": "Thu, 06 Apr 2006 11:25:36 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\n> > I've done a few tests.\n> > \n> > Remounting the fs where $PGDATA lives with \"forcedirectio\"\n> > (together with logging, that is default) did not help\n> > (if not harm...) performance.\n> > \n> >\n> \n> Sure - forcedirectio on the entire $PGDATA is a definite loss, you only \n> want it on $PGDATA/pg_xlog. The usual way this is accomplished is by \n> making a separate filsystem for pg_xlog and symlinking from $PGDATA.\n> \n> Did you try the other option of remounting the fs for $PGDATA without \n> logging or forcedirectio?\n\nnot yet, I'm not on the final disk set yet.\n\nwhen I get there I'll have two separate filesystems for pg_xlog and base\nand will try what you suggest.\n\n(but note the other mail about wal_sync_method = fsync)\n\nbye, chris.\n\n",
"msg_date": "Thu, 06 Apr 2006 01:29:40 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "appears this didn't make it to the list... resending to the list\ndirectly...\n---\n\n> > > Doing what http://blogs.sun.com/roller/page/jkshah suggests:\n> > > wal_sync_method = fsync (unchanged)\n> > > wal_buffers = 128 (was 8)\n> > > checkpoint_segments = 128 (was 3)\n> > > bgwriter_all_percent = 0 (was 0.333)\n> > > bgwriter_all_maxpages = 0 (was 5)\n> > > and leaving everything else default (solarispackages from\npgfoundry)\n> > > increased performance ~ 7 times!\n\nOk, so I could quite believe my own benchmarks and I decided\nto do a fresh initdb and retry everything.\n\nAt first it looked like I coudn't reproduce the speed up I just saw.\n\nThen I realized it was the \nwal_sync_method = fsync\nline that makes all the difference!\n\nNormally parameters that are commented are default values, but for\nwal_sync_method it actually says (note the comment):\n\nwal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n\nSo Im my last mail I drew the wrong conclusion, because i didn't comment\nwal_sync_method to double check.\n\nTo the point: the default wal_sync_method choosen on Solaris 10 appears\nto be a very bad one - for me, picking fsync increases performance ~\ntimes 7, all other parameters unchanged!\n\nWould it be a good idea to change this in the default install?\n\nBye, Chris.\n\nPS: yes I did a fresh initdb again to double check ;)\n\n",
"msg_date": "Thu, 06 Apr 2006 01:33:07 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Chris Mair wrote:\n\n> \n> (but note the other mail about wal_sync_method = fsync)\n> \n\nYeah - looks good! (is the default open_datasync still?). Might be worth \ntrying out the fdatasync method too (ISTR this being quite good... again \non Solaris 8, so things might have changed)!\n\nCheers\n\nMark\n\n",
"msg_date": "Thu, 06 Apr 2006 13:23:54 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Chris,\n\n> Remounting the fs where $PGDATA lives with \"forcedirectio\"\n> (together with logging, that is default) did not help\n> (if not harm...) performance.\n\nNot all of PG. JUST pg_xlog. forcedirectio is only a good idea for the xlog.\n\n> Quickly playing around with wal_buffers on Linux and Mac OS X\n> I see it influences the performance of my test a bit, maybe in the\n> 10-20% range (I'm really doing quick tests, nothing systematic),\n> but nowhere near as spectacularly as on Solaris.\n>\n> I'm happy so far, but I find it very surprising that this single\n> parameter has such an impact (only on) Solaris 10.\n\nThat *is* interesting. I hadn't tested this previously specifically on \nSolaris.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 5 Apr 2006 21:32:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Mark, Chris,\n\n> Yeah - looks good! (is the default open_datasync still?). Might be worth\n> trying out the fdatasync method too (ISTR this being quite good... again\n> on Solaris 8, so things might have changed)!\n\nI was just talking to a member of the Solaris-UFS team who recommended that we \ntest fdatasync.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 5 Apr 2006 21:35:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\n> > Yeah - looks good! (is the default open_datasync still?). Might be worth\n> > trying out the fdatasync method too (ISTR this being quite good... again\n> > on Solaris 8, so things might have changed)!\n> \n> I was just talking to a member of the Solaris-UFS team who recommended that we \n> test fdatasync.\n\nOk, so I did a few runs for each of the sync methods, keeping all the\nrest constant and got this:\n\nopen_datasync 0.7\nfdatasync 4.6\nfsync 4.5\nfsync_writethrough not supported\nopen_sync 0.6\n\nin arbitrary units - higher is faster.\n\nQuite impressive!\n\nBye, Chris.\n\n\n\n",
"msg_date": "Thu, 06 Apr 2006 08:01:09 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Chris Mair wrote:\n\n>Ok, so I did a few runs for each of the sync methods, keeping all the\n>rest constant and got this:\n>\n>open_datasync 0.7\n>fdatasync 4.6\n>fsync 4.5\n>fsync_writethrough not supported\n>open_sync 0.6\n>\n>in arbitrary units - higher is faster.\n>\n>Quite impressive!\n>\n>\n> \n>\nChris,\nJust to make sure the x4100 config is similar to your Linux system, can \nyou verify the default setting for disk write cache and make sure they \nare both enabled or disabled. Here's how to check in Solaris.\nAs root, run \"format -e\" -> pick a disk -> cache -> write_cache -> display\n\nNot sure how to do it on Linux though!\n\nRegards,\n-Robert\n",
"msg_date": "Thu, 06 Apr 2006 00:36:37 -0700",
"msg_from": "Robert Lor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\n> >Ok, so I did a few runs for each of the sync methods, keeping all the\n> >rest constant and got this:\n> >\n> >open_datasync 0.7\n> >fdatasync 4.6\n> >fsync 4.5\n> >fsync_writethrough not supported\n> >open_sync 0.6\n> >\n> >in arbitrary units - higher is faster.\n> >\n> >Quite impressive!\n> >\n> >\n> > \n> >\n> Chris,\n> Just to make sure the x4100 config is similar to your Linux system, can \n> you verify the default setting for disk write cache and make sure they \n> are both enabled or disabled. Here's how to check in Solaris.\n> As root, run \"format -e\" -> pick a disk -> cache -> write_cache -> display\n> \n> Not sure how to do it on Linux though!\n> \n> Regards,\n> -Robert\n\nI don't have access to the machine for the next few days due to eh...\nlet's call it firewall accident ;), but it might very well be that it\nwas off on the x4100 (I know it's on the smaller Linux box).\n\nThat together with the bad default sync method can definitely explain\nthe strangely slow out of box performance I got.\n\nSo thanks again for explaining this to me :)\n\nBye, Chris.\n\n\n\n",
"msg_date": "Fri, 07 Apr 2006 21:29:34 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\n> > Chris,\n> > Just to make sure the x4100 config is similar to your Linux system, can \n> > you verify the default setting for disk write cache and make sure they \n> > are both enabled or disabled. Here's how to check in Solaris.\n> > As root, run \"format -e\" -> pick a disk -> cache -> write_cache -> display\n> > \n> > Not sure how to do it on Linux though!\n> > \n> > Regards,\n> > -Robert\n> \n> I don't have access to the machine for the next few days due to eh...\n> let's call it firewall accident ;), but it might very well be that it\n> was off on the x4100 (I know it's on the smaller Linux box).\n> \n> That together with the bad default sync method can definitely explain\n> the strangely slow out of box performance I got.\n> \n> So thanks again for explaining this to me :)\n> \n> Bye, Chris.\n\nJust for completeness:\nI checked now using the above commands and can confirm the write cache\nwas disabled on the x4100 and was on on Linux. \n\nBye, Chris.\n\n\n\n\n\n",
"msg_date": "Mon, 10 Apr 2006 23:05:22 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alvaro,\n> \n> On 4/5/06 2:48 PM, \"Alvaro Herrera\" <[email protected]> wrote:\n> \n> > This essentially means stopping all bgwriter activity, thereby deferring\n> > all I/O until checkpoint. Was this considered? With\n> > checkpoint_segments to 128, it wouldn't surprise me that there wasn't\n> > any checkpoint executed at all during the whole test ...\n> \n> Yes, many things about the Solaris UFS filesystem caused a great deal of\n> pain over the 10 months of experiments we ran with Sun MDE. Ultimately, the\n> conclusion was that ZFS is going to make all of the pain go away.\n> \n> In the meantime, all you can do is tweak up UFS and avoid I/O as much as\n> possible.\n\nIt is hard to imagine why people spend so much time modifying Sun\nmachines run with acceptable performance when non-Sun operating systems\nwork fine without such hurtles.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Wed, 12 Apr 2006 15:56:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Bruce,\n\nOn 4/12/06 12:56 PM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> It is hard to imagine why people spend so much time modifying Sun\n> machines run with acceptable performance when non-Sun operating systems\n> work fine without such hurtles.\n\nThere are a lot of Solaris customers that we support and that we'd like to\nsupport. To many of them, Solaris has many advantages other than speed,\nthough they expect a reasonably comparable performance, perhaps within a\nfactor of 2 of other options.\n\nOracle has spent a great deal of time (a decade!) optimizing their software\nfor Solaris, and it shows. There are also some typical strategies that\nSolaris people used to use to make Solaris perform better, like using VxFS\n(Veritas Filesystem), or Oracle Raw IO to make their systems perform better.\n\nLately I find people are not so receptive to VxFS, and Sun is promoting ZFS,\nand we don't have a reasonable near term option for Raw IO in Postgres, so\nwe need to work to find a reasonable path for Solaris users IMO. The long\ndelays in ZFS production haven't helped us there, as the problems with UFS\nare severe.\n\nWe at Greenplum have worked hard over the last year to find options for\nPostgres on Solaris and have the best configuration setup that we think is\npossible now on UFS, and our customers benefit from that. However, Linux on\nXFS or even ext3 is definitely the performance leader.\n\n- Luke \n\n\n",
"msg_date": "Wed, 12 Apr 2006 15:35:33 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "People,\n\n> Lately I find people are not so receptive to VxFS, and Sun is promoting\n> ZFS, and we don't have a reasonable near term option for Raw IO in\n> Postgres, so we need to work to find a reasonable path for Solaris users\n> IMO. The long delays in ZFS production haven't helped us there, as the\n> problems with UFS are severe.\n\nFWIW, I'm testing on ZFS now. But it's not stable yet. People are welcome \nto join the Solaris 11 beta program.\n\nIn the near term, there are fixes to be made both in PostgreSQL \nconfiguration and in Solaris configuration. Also, some of the work being \ndone for 8.2 ... the external sort work done by Simon and sponsored by \nGreenPlum, and the internal sort work which Jonah and others are doing ... \nwill improve things on Solaris as our sort issues hit Solaris harder than \nother OSes.\n\nExpect lots more info on performance config for Solaris from me & Robert in \nthe next few weeks. \n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Wed, 12 Apr 2006 15:49:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\nBruce,\n\nHard to answer that... People like me who know and love PostgreSQL and \nSolaris finds this as an opportunity to make their favorite database \nwork best on their favorite operating system.\n\nMany times PostgreSQL has many things based on assumption that it will \nrun on Linux and it is left to Solaris to emulate that behavior.That \nsaid there are ways to improve performance even on UFS on Solaris, it \njust requires more tweaks.\n\nHopefully this will lead to few Solaris friendly default values like \nfsync/odatasync :-)\n\nRegards,\nJignesh\n\n\nBruce Momjian wrote:\n\n>\n>It is hard to imagine why people spend so much time modifying Sun\n>machines run with acceptable performance when non-Sun operating systems\n>work fine without such hurtles.\n> \n>\n",
"msg_date": "Wed, 12 Apr 2006 21:53:33 -0400",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> Many times PostgreSQL has many things based on assumption that it will \n> run on Linux and it is left to Solaris to emulate that behavior.\n\nAu contraire --- PG tries its best to be OS-agnostic. I've personally\nresisted people trying to optimize it by putting in Linux-specific\nbehavior. The above sounds to me like making excuses for a poor OS.\n\n(And yes, I will equally much resist any requests to put in Solaris-\nspecific behavior...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 00:52:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10 "
},
{
"msg_contents": "Jignesh K. Shah wrote:\n> \n> Bruce,\n> \n> Hard to answer that... People like me who know and love PostgreSQL and \n> Solaris finds this as an opportunity to make their favorite database \n> work best on their favorite operating system.\n> \n> Many times PostgreSQL has many things based on assumption that it will \n> run on Linux and it is left to Solaris to emulate that behavior.That \n> said there are ways to improve performance even on UFS on Solaris, it \n> just requires more tweaks.\n> \n> Hopefully this will lead to few Solaris friendly default values like \n> fsync/odatasync :-)\n\nYes, if someone wants to give us a clear answer on which wal_sync method\nis best on all versions of Solaris, we can easily make that change.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 13 Apr 2006 04:39:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "\nBruce Momjian wrote On 04/13/06 01:39 AM,:\n> \n> Yes, if someone wants to give us a clear answer on which wal_sync method\n> is best on all versions of Solaris, we can easily make that change.\n> \n\nWe're doing tests to see how various parameters in postgresql.conf\naffect performance on Solaris and will share the results shortly.\n\nRegards,\n-Robert\n\n",
"msg_date": "Thu, 13 Apr 2006 10:40:08 -0700",
"msg_from": "Robert Lor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "On 4/12/06, Josh Berkus <[email protected]> wrote:\n> People,\n>\n> > Lately I find people are not so receptive to VxFS, and Sun is promoting\n> > ZFS, and we don't have a reasonable near term option for Raw IO in\n> > Postgres, so we need to work to find a reasonable path for Solaris users\n> > IMO. The long delays in ZFS production haven't helped us there, as the\n> > problems with UFS are severe.\n\nI just recently worked with sun solaris 10 and found it to be\nreasonably performant without much tuning. This was on a dual sparc\nsunblade workstation which i felt was very well engineered. I was\nable (with zero solaris experience) to get postgresql up and crunching\naway at some really data intensive tasks while running an application\ncompiled their very excellent fortran compiler.\n\nIn the enterprise world I am finding that the only linux distrubutions\nsupported are redhat and suse, meaning if you have a problem with your\nsan running against your gentoo box you have a serious problem.\nSolaris OTOH, is generally very well supported (especially on sun\nhardware) and is free. So I give sun great credit for providing a\nfree if not necessarily completely open platform for developing open\nsource applications in an enterprise environment.\n\nMerlin\n",
"msg_date": "Thu, 13 Apr 2006 15:38:21 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
{
"msg_contents": "Hi Bruce,\n\n\nI saw even on this alias also that people assumed that the default \nwal_sync_method was fsync on Solaris.\n\nI would select fsync or fdsync as the default on Solaris. (I prefer \nfsync as it is already highlighted as default in postgresql)\n\nAnother thing to improve the defaults on Solaris will be to increase the \ndefaults of\nwal_buffers\nand\ncheckpoint_segments\n\n(I think in 8.1 checkpoint_segments have been already improved to a \ndefault of 8 from the previous 3 and that may be already some help in \nperformance out there. )\n\nThese three changes improve out-of-box performance of PostgreSQL quite a \nbit on Solaris (SPARC as well as x64 platforms).\n\nThen you will suddenly see decrease in the number of people PostgreSQL \ncommunity complaining about Solaris 10, as well as Solaris community \ncomplaining about PostgreSQL. (The benefits are mutual)\n\nDon't get me wrong. As Luke mentioned it took a while to get the \npotential of PostgreSQL on Solaris and people like me start doing other \ncomplex workarounds in Solaris like \"forcedirectio\", etc. (Yeah I did a \ntest, if you force fsync as wal_sync_method while on Solaris, then \nyou may not even require to do forcedirectio of your $PGDATA/pg_xlogs to \nget back the lost performance)\n\nIf we had realized that fsync/odatasync difference was the culprit we \ncould have saved couple of months of efforts.\n\nYes I agree that putting OS specific things in PostgreSQL hurts \ncommunity and sticking to POSIX standards help.\n\nJust my two cents.\n\nRegards,\nJignesh\n\n\nBruce Momjian wrote:\n\n>Jignesh K. Shah wrote:\n> \n>\n>>Bruce,\n>>\n>>Hard to answer that... People like me who know and love PostgreSQL and \n>>Solaris finds this as an opportunity to make their favorite database \n>>work best on their favorite operating system.\n>>\n>>Many times PostgreSQL has many things based on assumption that it will \n>>run on Linux and it is left to Solaris to emulate that behavior.That \n>>said there are ways to improve performance even on UFS on Solaris, it \n>>just requires more tweaks.\n>>\n>>Hopefully this will lead to few Solaris friendly default values like \n>>fsync/odatasync :-)\n>> \n>>\n>\n>Yes, if someone wants to give us a clear answer on which wal_sync method\n>is best on all versions of Solaris, we can easily make that change.\n>\n> \n>\n",
"msg_date": "Fri, 14 Apr 2006 10:02:10 -0400",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
},
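Whether fsync really beats the platform default is something to verify on the box in question rather than assume; the three settings named above are ordinary GUCs, so a quick way to see what an 8.1 server is actually running with is a query along these lines (illustrative only):

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('wal_sync_method', 'wal_buffers', 'checkpoint_segments');

On 8.1 the values themselves are changed in postgresql.conf and picked up at reload or restart; which values are right is workload- and hardware-dependent, which is exactly what this thread is debating.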
{
"msg_contents": "Jignesh,\n\n> Don't get me wrong. As Luke mentioned it took a while to get the\n> potential of PostgreSQL on Solaris and people like me start doing other\n> complex workarounds in Solaris like \"forcedirectio\", etc. (Yeah I did a\n> test, if you force fsync as wal_sync_method while on Solaris, then\n> you may not even require to do forcedirectio of your $PGDATA/pg_xlogs to\n> get back the lost performance)\n\nI didn't see these later test results. Can you link? \n\nAlso, I presume this was on DW, and not on OLTP, yes?\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 14 Apr 2006 12:01:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on Solaris 10"
}
] |
[
{
"msg_contents": "Hi, reading the archives i cant find a clear answer about softupdates in \nfreebsd, is it recommended to enable it for the data directory?\n\n---\nmiguel\n",
"msg_date": "Mon, 03 Apr 2006 18:58:28 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": true,
"msg_subject": "freebsd/softupdates for data dir"
},
{
"msg_contents": "Miguel wrote:\n> Hi, reading the archives i cant find a clear answer about softupdates in \n> freebsd, is it recommended to enable it for the data directory?\n> \n\nThere is a pretty good article about softupdates and journelling here:\n\nhttp://www.usenix.org/publications/library/proceedings/usenix2000/general/full_papers/seltzer/seltzer_html/index.html\n\nand in the freebsd docs here:\n\nhttp://www.freebsd.org/doc/handbook/configtuning-disk.html\n\nPostgres does not do a lot of file meta-data operations (unless you do a \n*lot* of CREATE/DROP INDEX/TABLE/DATABASE), so the performance gains \nassociated with softupdates will probably be minimal.\n\nI've always left them on, and never had any issues...(even after \nunscheduled power loss - which happened here yesterday). As I understand \nit, the softupdate code reorders *metadata* operations, and does not \nalter data operations - so the effect of fysnc(2) on a preexisting file \nis not changed by softupdates being on or off.\n\nCheers\n\nMark\n",
"msg_date": "Tue, 04 Apr 2006 14:10:25 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: freebsd/softupdates for data dir"
},
{
"msg_contents": "\nOn Apr 3, 2006, at 10:10 PM, Mark Kirkwood wrote:\n\n> I've always left them on, and never had any issues...(even after \n> unscheduled power loss - which happened here yesterday). As I \n> understand it, the softupdate code reorders *metadata* operations, \n> and does not alter data operations - so the effect of fysnc(2) on a \n> preexisting file is not changed by softupdates being on or off.\n\nThis is also my understanding, and I also leave softupdates on for \nthe data partition. Even if it doesn't improve performance, it will \nnot reduce it, and otherwise does no harm with respect to postgres' \ndisk usage.\n\n",
"msg_date": "Tue, 4 Apr 2006 10:41:32 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: freebsd/softupdates for data dir"
},
{
"msg_contents": "On Apr 4, 2006, at 10:41 AM, Vivek Khera wrote:\n> On Apr 3, 2006, at 10:10 PM, Mark Kirkwood wrote:\n>\n>> I've always left them on, and never had any issues...(even after \n>> unscheduled power loss - which happened here yesterday). As I \n>> understand it, the softupdate code reorders *metadata* operations, \n>> and does not alter data operations - so the effect of fysnc(2) on \n>> a preexisting file is not changed by softupdates being on or off.\n>\n> This is also my understanding, and I also leave softupdates on for \n> the data partition. Even if it doesn't improve performance, it \n> will not reduce it, and otherwise does no harm with respect to \n> postgres' disk usage.\n\nMore importantly, it allows the system to come up and do fsck in the \nbackground. If you've got a large database that's a pretty big benefit.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Wed, 5 Apr 2006 18:07:30 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: freebsd/softupdates for data dir"
},
{
"msg_contents": "\nOn Apr 5, 2006, at 6:07 PM, Jim Nasby wrote:\n\n>\n> More importantly, it allows the system to come up and do fsck in \n> the background. If you've got a large database that's a pretty big \n> benefit.\n\nThat's a UFS2 feature, not a soft-updates feature.\n\n",
"msg_date": "Thu, 6 Apr 2006 09:45:34 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: freebsd/softupdates for data dir"
},
{
"msg_contents": "On Thu, Apr 06, 2006 at 09:45:34AM -0400, Vivek Khera wrote:\n> \n> On Apr 5, 2006, at 6:07 PM, Jim Nasby wrote:\n> \n> >\n> >More importantly, it allows the system to come up and do fsck in \n> >the background. If you've got a large database that's a pretty big \n> >benefit.\n> \n> That's a UFS2 feature, not a soft-updates feature.\n\nIt's both. You can't background fsck with softupdates disabled.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 11 Apr 2006 17:59:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: freebsd/softupdates for data dir"
}
] |
[
{
"msg_contents": "I was wondering if anyone on the list has a successful installation of\npgmemcache running\nthat uses LISTEN/NOTIFY to signal a successfully completed transaction,\ni.e., to get around the fact\nthat TRIGGERS are transaction unaware. Or perhaps any other\ninformation regarding a successful\ndeployment of pgmemcache.\n\nThe information available on the web/groups is pretty scant. Any\ninformation would be greatly appreciated!\n\n",
"msg_date": "4 Apr 2006 00:24:42 -0700",
"msg_from": "\"C Storm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgmemcache"
},
{
"msg_contents": "On Tue, Apr 04, 2006 at 12:24:42AM -0700, C Storm wrote:\n> I was wondering if anyone on the list has a successful installation of\n> pgmemcache running\n> that uses LISTEN/NOTIFY to signal a successfully completed transaction,\n> i.e., to get around the fact\n> that TRIGGERS are transaction unaware. Or perhaps any other\n> information regarding a successful\n> deployment of pgmemcache.\n\nThe problem with attempting that is that you'd have a window between\ntransaction commit and when the cache was invalidated. If that's\nacceptable then it shouldn't be too difficult to set something up using\nLISTEN/NOTIFY like you describe.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 11 Apr 2006 17:55:16 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache"
},
{
"msg_contents": "\n\tIt would be nice to have ON COMMIT triggers for this use.\n\n\tHowever you can emulate ON COMMIT triggers with a modification of the \nmemcache update process :\n\n\t- A standard trigger sends the data to update to memcache\n\t- The trigger also sends the PID\n\t- Instead of being used immediately, this data is kept in a buffer\n\t- Notify is issued\n\n\tOn commit :\n\t- postgres sends the NOTIFY signal\n\t- the memcache updater reads the NOTIFY (which embeds the PID I believe) \n; it finds the buffered data sent above and uses it to update memcached\n\n\tOn rollback :\n\t- Interesting problem ;)))\n\n\tOK, it's a kludge. When can we get ON COMMIT triggers ?\n\n\n\n\n> On Tue, Apr 04, 2006 at 12:24:42AM -0700, C Storm wrote:\n>> I was wondering if anyone on the list has a successful installation of\n>> pgmemcache running\n>> that uses LISTEN/NOTIFY to signal a successfully completed transaction,\n>> i.e., to get around the fact\n>> that TRIGGERS are transaction unaware. Or perhaps any other\n>> information regarding a successful\n>> deployment of pgmemcache.\n>\n> The problem with attempting that is that you'd have a window between\n> transaction commit and when the cache was invalidated. If that's\n> acceptable then it shouldn't be too difficult to set something up using\n> LISTEN/NOTIFY like you describe.\n\n\n",
"msg_date": "Wed, 12 Apr 2006 09:19:18 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache"
},
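A minimal sketch of the buffered-invalidation scheme described above, using entirely hypothetical table, column, channel and function names, and assuming plpgsql is installed. Because both the queued rows and the NOTIFY only become visible to other sessions when the transaction commits, a rolled-back transaction never reaches memcached:

CREATE TABLE memcache_queue (
    id      serial  PRIMARY KEY,
    backend integer NOT NULL,   -- pg_backend_pid() of the writing session
    key     text    NOT NULL,
    value   text
);

CREATE OR REPLACE FUNCTION queue_memcache_update() RETURNS trigger AS $$
BEGIN
    -- buffer the change; it stays invisible to the listener until commit
    INSERT INTO memcache_queue (backend, key, value)
         VALUES (pg_backend_pid(), NEW.id::text, NEW.payload);
    NOTIFY memcache_update;     -- only delivered if the transaction commits
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER some_table_memcache
AFTER INSERT OR UPDATE ON some_table
FOR EACH ROW EXECUTE PROCEDURE queue_memcache_update();

The external updater simply runs LISTEN memcache_update and, on each notification, reads and deletes the queued rows and pushes them into memcached.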
{
"msg_contents": "PFC,\n\n> \tIt would be nice to have ON COMMIT triggers for this use.\n>\n> \tHowever you can emulate ON COMMIT triggers with a modification of the\n> memcache update process :\n\nWell, I'm back in touch with the GORDA project so possibly we can have \nBEFORE COMMIT triggers after all.\n\nBTW, it's important to note that AFTER COMMIT triggers are logically \nimpossible, so please use BEFORE COMMIT so that it's clear what we're \ntalking about.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Wed, 12 Apr 2006 16:03:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache"
},
{
"msg_contents": "On Wed, Apr 12, 2006 at 04:03:43PM -0700, Josh Berkus wrote:\n> PFC,\n> \n> > \tIt would be nice to have ON COMMIT triggers for this use.\n> >\n> > \tHowever you can emulate ON COMMIT triggers with a modification of the\n> > memcache update process :\n> \n> Well, I'm back in touch with the GORDA project so possibly we can have \n> BEFORE COMMIT triggers after all.\n> \n> BTW, it's important to note that AFTER COMMIT triggers are logically \n> impossible, so please use BEFORE COMMIT so that it's clear what we're \n> talking about.\n\nWhy are AFTER COMMIT triggers impossible? ISTM they would be useful as a\nmeans to indicate to some external process if a transaction succeeded or\nnot. And for some things you might only want to fire the trigger after\nyou knew that the transaction had committed.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 12 Apr 2006 19:10:25 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Why are AFTER COMMIT triggers impossible?\n\nWhat happens if such a trigger gets an error? You can't un-commit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Apr 2006 20:35:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
},
{
"msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n\n>>Why are AFTER COMMIT triggers impossible?\n> \n> What happens if such a trigger gets an error? You can't un-commit.\n\nThen it must be specified that those triggers are in their own\ntransaction, and cannot abort the transaction.\n\nOr use the 2-phase-commit infrastructure for them.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 13 Apr 2006 16:03:13 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache"
},
{
"msg_contents": "Not sure if I follow why this is a problem. Seems like it would be \nbeneficial to have both BEFORE and AFTER COMMIT triggers.\nWith the BEFORE COMMIT trigger you would have the ability to 'un- \ncommit' (rollback) the transaction. With\nthe AFTER COMMIT trigger you wouldn't have that option because the \ncommit has already been successful. However,\nwith an AFTER COMMIT you would be able to trigger other downstream \nevents that rely on a transaction successfully committing.\nIf the trigger fails it is the triggers problem, it isn't the \ncommit's problem, i.e., you wouldn't want to 'un-commit'. If the \ntrigger\ngets an error it has to gracefully deal with that error programatically.\n\nWhere have I gone astray with this logic?\n\nOn Apr 12, 2006, at 5:35 PM, Tom Lane wrote:\n\n> \"Jim C. Nasby\" <[email protected]> writes:\n>> Why are AFTER COMMIT triggers impossible?\n>\n> What happens if such a trigger gets an error? You can't un-commit.\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 13 Apr 2006 10:29:28 -0700",
"msg_from": "Christian Storm <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
},
{
"msg_contents": "Christian Storm <[email protected]> writes:\n> Not sure if I follow why this is a problem. Seems like it would be \n> beneficial to have both BEFORE and AFTER COMMIT triggers.\n> With the BEFORE COMMIT trigger you would have the ability to 'un- \n> commit' (rollback) the transaction. With\n> the AFTER COMMIT trigger you wouldn't have that option because the \n> commit has already been successful. However,\n> with an AFTER COMMIT you would be able to trigger other downstream \n> events that rely on a transaction successfully committing.\n\nAn AFTER COMMIT trigger would have to be in a separate transaction.\nWhat happens if there's more than one, and one of them fails? Even\nmore to the point, if it's a separate transaction, don't you have\nto fire all these triggers again when you commit that transaction?\nThe idea seems circular.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 13:38:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
},
{
"msg_contents": "On Apr 13, 2006, at 12:38 PM, Tom Lane wrote:\n\n> Christian Storm <[email protected]> writes:\n>> Not sure if I follow why this is a problem. Seems like it would be\n>> beneficial to have both BEFORE and AFTER COMMIT triggers.\n>> With the BEFORE COMMIT trigger you would have the ability to 'un-\n>> commit' (rollback) the transaction. With\n>> the AFTER COMMIT trigger you wouldn't have that option because the\n>> commit has already been successful. However,\n>> with an AFTER COMMIT you would be able to trigger other downstream\n>> events that rely on a transaction successfully committing.\n>\n> An AFTER COMMIT trigger would have to be in a separate transaction.\n> What happens if there's more than one, and one of them fails? Even\n> more to the point, if it's a separate transaction, don't you have\n> to fire all these triggers again when you commit that transaction?\n> The idea seems circular.\n\nI suspect that in reality you'd probably want each on-commit trigger \nto be it's own transaction, but it depends on what you're doing. \nAlso, I can't see any use for them where you'd actually be \ninteracting with the database, only if you were calling something \nexternally via a function. One example would be sending an email out \nwhen a certain table changes; in many cases it's better to let the \nchange happen even if the email can't be sent, and you'd rather not \nsend an email if the transaction just ends up rolling back for some \nreason. And yes, you'd have to ensure you didn't code yourself up a \ntrigger loop.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Thu, 13 Apr 2006 13:29:25 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
},
{
"msg_contents": "\n> An AFTER COMMIT trigger would have to be in a separate transaction.\n\n\tI guess AFTER COMMIT triggers would be like a NOTIFY, but more powerful. \nWhile NOTIFY can't transmit information to another process, this trigger \ncould, and the other process could then view the results of the commited \ntransaction.\n\tAlso, implementing a long process (something involving network \nroundtrips, for instance) in a BEFORE COMMIT trigger would delay the \ntransaction and any locks it holds with no benefit.\n\n> What happens if there's more than one, and one of them fails?\n\n\tEach one in its own transaction ?\n\n> Even more to the point, if it's a separate transaction, don't you have\n> to fire all these triggers again when you commit that transaction?\n> The idea seems circular.\n\n\tI guess AFTER COMMIT triggers are most useful when coupled to a trigger \non a modification to a table. So, the \"before / after commit\" could be an \nattribute of an AFTER INSERT/UPDATE/DELETE trigger. If the AFTER COMMIT \ntrigger doesn't do any modifications to the target table, there will be no \ninfinite loop.\n\n\tThe before/after commit could also be implemented not via triggers, but \nvia deferred actions, by telling postgres to execute a specific query just \nbefore/after the transaction commits. This could be used to implement the \ntriggers, but would also be more generic : a trigger on INSERT could then \ndefer a call to memcache update once the transaction is commited. It gets \nlisp-ish, but it would be really cool?\n",
"msg_date": "Thu, 13 Apr 2006 22:23:31 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Christian Storm <[email protected]> writes:\n> > Not sure if I follow why this is a problem. Seems like it would be \n> > beneficial to have both BEFORE and AFTER COMMIT triggers.\n> > With the BEFORE COMMIT trigger you would have the ability to 'un- \n> > commit' (rollback) the transaction. With\n> > the AFTER COMMIT trigger you wouldn't have that option because the \n> > commit has already been successful. However,\n> > with an AFTER COMMIT you would be able to trigger other downstream \n> > events that rely on a transaction successfully committing.\n> \n> An AFTER COMMIT trigger would have to be in a separate transaction.\n> What happens if there's more than one, and one of them fails? Even\n> more to the point, if it's a separate transaction, don't you have\n> to fire all these triggers again when you commit that transaction?\n> The idea seems circular.\n\nMaybe it just means they would have to be limited to not making any database\nmodifications. Ie, all they can do is notify the outside world that the\ntransaction committed. \n\nPresumably if you wanted to make any database modifications you would just do\nit in the transaction anyways since they wouldn't show up until the\ntransaction commits.\n\n-- \ngreg\n\n",
"msg_date": "13 Apr 2006 16:52:37 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache"
},
{
"msg_contents": "PFC <[email protected]> writes:\n> \tI guess AFTER COMMIT triggers would be like a NOTIFY, but more powerful. \n\nI'll let you in on a secret: NOTIFY is actually a before-commit\noperation. This is good enough because it never, or hardly ever,\nfails. I would argue that anything you want to do in an AFTER COMMIT\ntrigger could just as well be done in a BEFORE COMMIT trigger; if that's\nnot reliable enough then you need to fix your trigger.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 20:43:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
},
{
"msg_contents": "> I'll let you in on a secret: NOTIFY is actually a before-commit\n> operation. This is good enough because it never, or hardly ever,\n> fails. I would argue that anything you want to do in an AFTER COMMIT\n> trigger could just as well be done in a BEFORE COMMIT trigger; if \n> that's\n> not reliable enough then you need to fix your trigger.\n\nSo, looping back to pgmemcache. It doesn't seem like it's about the \ntrigger failing; it's about\nthe transaction failing. How could I 'fix my trigger' so that if \nthe transaction fails\nthe cache wouldn't be updated?\n\n> An AFTER COMMIT trigger would have to be in a separate transaction.\n\nAgreed. Each trigger being in its own transaction.\n\n> What happens if there's more than one, and one of them fails?\n\nEach AFTER COMMIT trigger would be in its own transaction. So if it\nfails, let it fail. It isn't the original transactions fault. If \nyou want it to bundled\nwith the original transaction then do a BEFORE COMMIT.\n\nIf there is more than one, so be it. One may fail, another may not.\nIf that is not okay or the intended behavior, then create a new \ntrigger than\ncombines them into one transaction so that they are coupled.\n\n> Even\n> more to the point, if it's a separate transaction, don't you have\n> to fire all these triggers again when you commit that transaction?\n> The idea seems circular.\n\nWhy would it have to do this? Not sure if I follow. In my mind,\nthe only way this would happen is if the AFTER COMMIT trigger acted\nupon the very same table it is triggered off of. Then you could\nhave a circular reference. However, this is true for the BEFORE COMMIT\nsituation too. I don't see the difference. What am I missing?\n\n\nCheers,\n\nChristian\n\n",
"msg_date": "Fri, 14 Apr 2006 11:01:10 -0700",
"msg_from": "Christian Storm <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgmemcache "
}
] |
[
{
"msg_contents": "I have a table with 1 live row that I found has 115000 dead rows in it ( \nfrom a testing run ). I'm trying to VACUUM FULL the table and it has \nrun for over 18 hours without completion. Considering the hardware on \nthis box and the fact that performance seems reasonable in all other \naspects, I'm confused as to why this would happen. The database other \nthan this table is quite large ( 70 gigs on disk ) and I would expect to \ntake days to complete but I just did 'vacuum full table_stats'. That \nshould only do that table, correct? I'm running 8.0.3.\n\n Table \"public.table_stats\"\n Column | Type | Modifiers\n---------------------+-----------------------------+-----------\n count_cfs | integer |\n count_ncfs | integer |\n count_unitactivity | integer |\n count_eventactivity | integer |\n min_eventmain | timestamp without time zone |\n max_eventmain | timestamp without time zone |\n min_eventactivity | timestamp without time zone |\n max_eventactivity | timestamp without time zone |\n geocoding_hitrate | double precision |\n recent_load | timestamp without time zone |\n count_eventmain | integer |\n\n\nThis is the table structure.\n\nAny ideas where to begin troubleshooting this?\n\nThanks.\n\n",
"msg_date": "Tue, 04 Apr 2006 08:59:47 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum full seems to hang on very small table"
},
{
"msg_contents": "Dan Harris wrote:\n> I have a table with 1 live row that I found has 115000 dead rows in it (\n> from a testing run ). I'm trying to VACUUM FULL the table and it has\n> run for over 18 hours without completion. Considering the hardware on\n> this box and the fact that performance seems reasonable in all other\n> aspects, I'm confused as to why this would happen. The database other\n> than this table is quite large ( 70 gigs on disk ) and I would expect to\n> take days to complete but I just did 'vacuum full table_stats'. That\n> should only do that table, correct? I'm running 8.0.3.\n\nVACUUM FULL requires an exclusive lock on the table that it's vacuuming.\n Chances are something else has a lock on the table is blocking the\nvacuum from obtaining the necessary lock. Check pg_locks for ungranted\nlocks, you'll probably find that the request from the vacuum is ungranted.\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Tue, 04 Apr 2006 11:07:51 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full seems to hang on very small table"
},
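For reference, a rough version of that check against 8.0-era catalogs (current_query is only populated when stats_command_string is enabled):

SELECT l.pid, l.relation::regclass AS rel, l.mode, l.granted, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE l.relation = 'table_stats'::regclass
 ORDER BY l.granted, l.pid;

An ungranted ACCESS EXCLUSIVE request from the VACUUM FULL sitting behind a granted lock held by some long-lived session (often a client left idle in transaction) would confirm that the vacuum is simply waiting for its lock.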
{
"msg_contents": "On Tue, 2006-04-04 at 08:59 -0600, Dan Harris wrote:\n> I have a table with 1 live row that I found has 115000 dead rows in it ( \n> from a testing run ). I'm trying to VACUUM FULL the table and it has \n> run for over 18 hours without completion. Considering the hardware on \n> this box and the fact that performance seems reasonable in all other \n> aspects, I'm confused as to why this would happen. The database other \n> than this table is quite large ( 70 gigs on disk ) and I would expect to \n> take days to complete but I just did 'vacuum full table_stats'. That \n> should only do that table, correct? I'm running 8.0.3.\n\nRead this http://www.postgresql.org/docs/8.0/static/release-8-0-5.html\nand you'll probably decide to upgrade.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 04 Apr 2006 16:35:14 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full seems to hang on very small table"
}
] |
[
{
"msg_contents": "I have relatively small tables (toode and rid) in fast server.\nBoth tables are indexed on toode field.\n\nFollowing query takes long time to run.\ntoode field type is char(20). It is difficult to change this field type.\n\nAny idea how to speed up this query ?\n\nUPDATE firma1.rid SET toode=NULL\n WHERE toode IS NOT NULL AND\n toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n\nQuery returned successfully: 0 rows affected, 594813 ms execution time.\n\nexplain window shows:\n\nSeq Scan on rid (cost=2581.07..20862553.77 rows=51848 width=1207)\n Filter: ((toode IS NOT NULL) AND (NOT (subplan)))\n SubPlan\n -> Materialize (cost=2581.07..2944.41 rows=14734 width=84)\n -> Seq Scan on toode (cost=0.00..2350.34 rows=14734 width=84)\n\n\nAndrus. \n\n\n",
"msg_date": "Tue, 4 Apr 2006 22:37:18 +0300",
"msg_from": "\"Andrus\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query runs too long for indexed tables"
},
{
"msg_contents": "On Tue, 2006-04-04 at 14:37, Andrus wrote:\n> I have relatively small tables (toode and rid) in fast server.\n> Both tables are indexed on toode field.\n> \n> Following query takes long time to run.\n> toode field type is char(20). It is difficult to change this field type.\n> \n> Any idea how to speed up this query ?\n> \n> UPDATE firma1.rid SET toode=NULL\n> WHERE toode IS NOT NULL AND\n> toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n> \n> Query returned successfully: 0 rows affected, 594813 ms execution time.\n> \n> explain window shows:\n> \n> Seq Scan on rid (cost=2581.07..20862553.77 rows=51848 width=1207)\n> Filter: ((toode IS NOT NULL) AND (NOT (subplan)))\n> SubPlan\n> -> Materialize (cost=2581.07..2944.41 rows=14734 width=84)\n> -> Seq Scan on toode (cost=0.00..2350.34 rows=14734 width=84)\n\nLet me guess, you've updated it a lot and aren't familiar with Vacuum?\n\nrun a vacuum full on your database. schedule a vacuum (plain one) to\nrun every so often (hours or days are a good interval for most folks)\n\nIf that's NOT your problem, then please, let us know. \n",
"msg_date": "Tue, 04 Apr 2006 14:48:48 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query runs too long for indexed tables"
},
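Spelled out as statements (the scheduling itself is left to cron or similar), that advice is roughly:

-- one-off, to reclaim the bloat that is already there; takes exclusive locks
VACUUM FULL ANALYZE;

-- periodic, non-exclusive maintenance pass
VACUUM ANALYZE;

On 8.1 the integrated autovacuum daemon (off by default) can take over the periodic plain vacuum.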
{
"msg_contents": "> Let me guess, you've updated it a lot and aren't familiar with Vacuum?\n>\n> run a vacuum full on your database. schedule a vacuum (plain one) to\n> run every so often (hours or days are a good interval for most folks)\n>\n> If that's NOT your problem, then please, let us know.\n\nScot, thank you. Excellent. If database is created and VACUUM ANALYZE is \nissued, this query runs fast.\nHowever, I need to speed up it during running script.\n\nThis is a database creation script. Script does the following:\n\n1. CREATE DATABASE foo;\n2. START TRANSACTION;\n3. Create 145 tables with primary keys. Add data to those tables.\n4. Create some additional indexes\n5. ANALYZE\n6. Clear bad bad foreign keys fields using commands like\n\nUPDATE firma1.rid SET toode=NULL\n WHERE toode IS NOT NULL AND\n toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n\n7. Create foreign key references\n8. COMMIT\n\nThis script runs about 1 hour in modern server with fsync off.\nLargest table has 100000 records, few other tables have 15000 records and \nremaining have fewer records.\n\nHow to speed this up ?\nIs'nt running ANALYZE sufficient to speed up foreign key clearing ?\n\nIt seems that ANALYZE does'nt work. Should I isse COMMIT before running \nANALYZE or issue more commits?\n\nServer has 4 GB RAM\n\npostgres.conf file is default from 8.1.3 window zip file except the \nfollowing settings are added to end:\n\nfsync=off\nshared_buffers = 30000\nredirect_stderr = on\nlog_min_error_statement = error\nautovacuum = on\n... also 2 stats settings from aurtovacuur\nmax_fsm_pages = 30000\n\nAndrus. \n\n\n",
"msg_date": "Wed, 5 Apr 2006 12:39:46 +0300",
"msg_from": "\"Andrus\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query runs too long for indexed tables"
},
{
"msg_contents": "\n\n> UPDATE firma1.rid SET toode=NULL\n> WHERE toode IS NOT NULL AND\n> toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n\n\tWhy not use a LEFT JOIN for this ?\n",
"msg_date": "Wed, 05 Apr 2006 12:22:53 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query runs too long for indexed tables"
},
{
"msg_contents": "> Why not use a LEFT JOIN for this ?\n\nUPDATE firma1.rid SET rid.toode=NULL\nLEFT join firma1.toode using(toode)\n WHERE rid.toode IS NOT NULL AND toode.toode IS NULL;\n\nCauses:\n\nERROR: syntax error at or near \"LEFT\" at character 41\n\nouter joins are not supported in Postgres UPDATE command.\n\nAndrus. \n\n\n",
"msg_date": "Wed, 5 Apr 2006 13:40:25 +0300",
"msg_from": "\"Andrus\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query runs too long for indexed tables"
},
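Since UPDATE has no LEFT JOIN clause of its own, the same anti-join can be previewed with a plain SELECT and then written with NOT EXISTS; a sketch using the table names from this thread:

-- preview the rows that would be cleared
SELECT r.toode
  FROM firma1.rid r
  LEFT JOIN firma1.toode t ON t.toode = r.toode
 WHERE r.toode IS NOT NULL
   AND t.toode IS NULL;

-- the update itself, without NOT IN
UPDATE firma1.rid
   SET toode = NULL
 WHERE toode IS NOT NULL
   AND NOT EXISTS (SELECT 1 FROM firma1.toode t WHERE t.toode = rid.toode);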
{
"msg_contents": "> outer joins are not supported in Postgres UPDATE command.\n\n\tTrue (and sad).\n\n\tYou can try the following script to play with the various options :\n\n\nDROP TABLE one;\nDROP TABLE two;\n\nCREATE TABLE one (a SERIAL PRIMARY KEY, b INT NULL);\nCREATE TABLE two (b INT NOT NULL PRIMARY KEY);\nINSERT INTO two (b) SELECT x*2 FROM generate_series( 1, 50000 ) AS x;\nINSERT INTO one (b) SELECT x FROM generate_series( 1, 100000 ) AS x;\n\nEXPLAIN ANALYZE SELECT count(*) FROM one LEFT JOIN two ON one.b=two.b \nWHERE two.b IS NULL;\n\n--Try with and without...\n--CREATE INDEX one_b ON one(b);\n\nVACUUM ANALYZE one;\nVACUUM ANALYZE two;\n\nEXPLAIN ANALYZE SELECT count(*) FROM one LEFT JOIN two ON one.b=two.b \nWHERE two.b IS NULL;\n\nBEGIN;\nEXPLAIN ANALYZE UPDATE one SET b=NULL WHERE b NOT IN (SELECT b FROM two );\nSELECT * FROM one ORDER BY a LIMIT 5;\nROLLBACK;\nVACUUM one;\n\nBEGIN;\nEXPLAIN ANALYZE UPDATE one SET b=NULL WHERE b IN (SELECT one.b FROM one \nLEFT JOIN two ON one.b=two.b WHERE two.b IS NULL);\nSELECT * FROM one ORDER BY a LIMIT 5;\nROLLBACK;\nVACUUM one;\n\nBEGIN;\nEXPLAIN ANALYZE UPDATE one SET b=NULL FROM one x LEFT JOIN two ON \nx.b=two.b WHERE two.b IS NULL AND one.a=x.a;\nSELECT * FROM one ORDER BY a LIMIT 5;\nROLLBACK;\nVACUUM one;\n\nBEGIN;\nEXPLAIN ANALYZE UPDATE one SET b=NULL FROM one x LEFT JOIN two ON \nx.b=two.b WHERE two.b IS NULL AND one.b=x.b;\nSELECT * FROM one ORDER BY a LIMIT 5;\nROLLBACK;\nVACUUM one;\n\nBEGIN;\nEXPLAIN ANALYZE UPDATE one SET b=(SELECT two.b FROM two WHERE two.b=one.b);\nSELECT * FROM one ORDER BY a LIMIT 5;\nROLLBACK;\nVACUUM one;\n\nBEGIN;\nEXPLAIN ANALYZE UPDATE one SET b=NULL WHERE NOT EXISTS (SELECT 1 FROM two \nWHERE two.b = one.b);\nSELECT * FROM one ORDER BY a LIMIT 5;\nROLLBACK;\nVACUUM one;\n\nBEGIN;\nCREATE TABLE tmp AS SELECT one.a, two.b FROM one LEFT JOIN two ON \none.b=two.b;\nSELECT * FROM tmp ORDER BY a LIMIT 5;\nDROP TABLE one;\nALTER TABLE tmp RENAME TO one;\nROLLBACK;\n\n\n",
"msg_date": "Wed, 05 Apr 2006 13:25:05 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query runs too long for indexed tables"
},
{
"msg_contents": "\"Andrus\" <[email protected]> writes:\n> UPDATE firma1.rid SET toode=NULL\n> WHERE toode IS NOT NULL AND\n> toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n\n> How to speed this up ?\n\nIncreasing work_mem to the point where you get a hashed NOT-IN would\nhelp, probably. Have you tried using EXPLAIN to see what the plan is\nfor the UPDATEs?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 05 Apr 2006 10:09:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query runs too long for indexed tables "
}
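To check whether a larger work_mem actually buys a hashed subplan here, something along these lines (the value is illustrative; in 8.1 a bare integer work_mem is in kilobytes):

SET work_mem = 65536;  -- 64 MB, for this session only

EXPLAIN
UPDATE firma1.rid SET toode = NULL
 WHERE toode IS NOT NULL
   AND toode NOT IN (SELECT toode FROM firma1.toode);

If the plan's filter switches from the Materialize subplan shown earlier to a hashed subplan, the NOT IN stops rescanning firma1.toode for every row of firma1.rid.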
] |
[
{
"msg_contents": "Wondering if\n\nUpdate firma1.rid set toode=null where toode is not null and not\nexists(select 1 from firma1.toode where toode=rid.toode); \n\nWould be faster... Problem appears to be the seqscan of seqscan... No?\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Scott Marlowe\n> Sent: Tuesday, April 04, 2006 3:49 PM\n> To: Andrus\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query runs too long for indexed tables\n> \n> On Tue, 2006-04-04 at 14:37, Andrus wrote:\n> > I have relatively small tables (toode and rid) in fast server.\n> > Both tables are indexed on toode field.\n> > \n> > Following query takes long time to run.\n> > toode field type is char(20). It is difficult to change \n> this field type.\n> > \n> > Any idea how to speed up this query ?\n> > \n> > UPDATE firma1.rid SET toode=NULL\n> > WHERE toode IS NOT NULL AND\n> > toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n> > \n> > Query returned successfully: 0 rows affected, 594813 ms \n> execution time.\n> > \n> > explain window shows:\n> > \n> > Seq Scan on rid (cost=2581.07..20862553.77 rows=51848 width=1207)\n> > Filter: ((toode IS NOT NULL) AND (NOT (subplan)))\n> > SubPlan\n> > -> Materialize (cost=2581.07..2944.41 rows=14734 width=84)\n> > -> Seq Scan on toode (cost=0.00..2350.34 rows=14734 \n> > width=84)\n> \n> Let me guess, you've updated it a lot and aren't familiar with Vacuum?\n> \n> run a vacuum full on your database. schedule a vacuum (plain \n> one) to run every so often (hours or days are a good interval \n> for most folks)\n> \n> If that's NOT your problem, then please, let us know. \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n",
"msg_date": "Tue, 4 Apr 2006 16:00:13 -0400",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query runs too long for indexed tables"
}
] |
[
{
"msg_contents": "Explain analyze would be nice ;-) \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Andrus\n> Sent: Tuesday, April 04, 2006 3:37 PM\n> To: [email protected]\n> Subject: [PERFORM] Query runs too long for indexed tables\n> \n> I have relatively small tables (toode and rid) in fast server.\n> Both tables are indexed on toode field.\n> \n> Following query takes long time to run.\n> toode field type is char(20). It is difficult to change this \n> field type.\n> \n> Any idea how to speed up this query ?\n> \n> UPDATE firma1.rid SET toode=NULL\n> WHERE toode IS NOT NULL AND\n> toode NOT IN (SELECT TOODE FROM firma1.TOODE);\n> \n> Query returned successfully: 0 rows affected, 594813 ms \n> execution time.\n> \n> explain window shows:\n> \n> Seq Scan on rid (cost=2581.07..20862553.77 rows=51848 width=1207)\n> Filter: ((toode IS NOT NULL) AND (NOT (subplan)))\n> SubPlan\n> -> Materialize (cost=2581.07..2944.41 rows=14734 width=84)\n> -> Seq Scan on toode (cost=0.00..2350.34 \n> rows=14734 width=84)\n> \n> \n> Andrus. \n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n",
"msg_date": "Tue, 4 Apr 2006 16:03:30 -0400",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query runs too long for indexed tables"
}
] |
[
{
"msg_contents": "Greetings -\n \nI am testing a Sun Microsystems Sun Fire T2000 demo server at our\ncompany. I want to know if anyone here has any experience with this\nhardware and postgresql 8.1.3. I installed the copy of postgresql 8.1.3\nfrom blastwave.org onto this demo box and loaded our production database\ninto it. This box has a single Ultrasparc T1 cpu with six execution\npiplelines that can each handle 4 threads. With the Unix top utility\nthe postgresql server appears to bounce around between the available\nthreads on the system. For example I run a single large query and I can\nsee the postgresql server sometimes running on cpu/0, other times on\ncpu/1, cpu/3,....etc up to cpu/23. However, never is the load for the\npostgres server reported to be higher than 4.16%. I did the math and\n4.16% x 24 threads = 98.84% cpu load. So I wonder if the Solaris 10\nkernel is somehow throttling the processes so that any single virtual\nprocessor can do no more than 4.16% load. We got this server last week\nand I was able to install it in our rack just yesterday. Now I need to\nsee how I can optimize the postgresql server to work on this box. Does\nanyone have any suggestions? I know the postgresql server is not smp\naware but I believe parts of it are. In particular the buffer manager\nis supposed to scale the performance almost linearly with the number of\ncpu's (including virtual ones). I don't know however, if I need to\nrecompile the postgresql server myself to get those benefits. I am\nusing the version of postgresql 8.1.3 that is available on\nblastwave.org. I am also working with the 64 bit version of the\ndatabase server. This machine has over 8GB of ram so I was thinking of\nusing the 64 bit version of the postgresql server so I can access ram\nbeyong the 4gb limit imposed by 32 bit addressing. Any help or\nrecommendations for performance tweaking of postgresql is very much\nappreciated.\n \n \nThanks,\nJuan\n\n\n\n\n\nGreetings -\n \nI am testing a Sun \nMicrosystems Sun Fire T2000 demo server at our company. I want to \nknow if anyone here has any experience with this hardware and postgresql \n8.1.3. I installed the copy of postgresql 8.1.3 from blastwave.org onto \nthis demo box and loaded our production database into it. This box has a \nsingle Ultrasparc T1 cpu with six execution piplelines that can each handle 4 \nthreads. With the Unix top utility the postgresql server appears to bounce \naround between the available threads on the system. For example I run a \nsingle large query and I can see the postgresql server sometimes running on \ncpu/0, other times on cpu/1, cpu/3,....etc up to cpu/23. However, \nnever is the load for the postgres server reported to be higher than \n4.16%. I did the math and 4.16% x 24 threads = 98.84% cpu load. So I \nwonder if the Solaris 10 kernel is somehow throttling the processes so that \nany single virtual processor can do no more than 4.16% load. We got this \nserver last week and I was able to install it in our rack just \nyesterday. Now I need to see how I can optimize the postgresql \nserver to work on this box. Does anyone have any suggestions? \nI know the postgresql server is not smp aware but I believe parts of it \nare. In particular the buffer manager is supposed to scale the performance \nalmost linearly with the number of cpu's (including virtual ones). I don't \nknow however, if I need to recompile the postgresql server myself to get those \nbenefits. 
I am using the version of postgresql 8.1.3 that is \navailable on blastwave.org. I am also working with the 64 bit version of \nthe database server. This machine has over 8GB of ram so I was \nthinking of using the 64 bit version of the postgresql server so I can \naccess ram beyong the 4gb limit imposed by 32 bit addressing. Any help or \nrecommendations for performance tweaking of postgresql is very much \nappreciated.\n \n \nThanks,\nJuan",
"msg_date": "Wed, 5 Apr 2006 13:12:54 -0500",
"msg_from": "\"Juan Casero \\(FL FLC\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "\"Juan Casero \\(FL FLC\\)\" <[email protected]> writes:\n> ... This box has a single Ultrasparc T1 cpu with six execution\n> piplelines that can each handle 4 threads. With the Unix top utility\n> the postgresql server appears to bounce around between the available\n> threads on the system.\n\nTry sending it more than one query at a time? If you're testing with\njust one client connection issuing queries, that's about what I'd expect.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 05 Apr 2006 15:04:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3 "
},
{
"msg_contents": "Juan,\n\nOn 4/5/06 11:12 AM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> I know the postgresql server is not smp aware but I believe\n> parts of it are. In particular the buffer manager is supposed to scale the\n> performance almost linearly with the number of cpu's (including virtual ones).\n> I don't know however, if I need to recompile the postgresql server myself to\n> get those benefits.\n\nAs Tom said, to get the benefits of parallelism on one query, you would need\na parallelizing database like Teradata, Oracle Parallel Query option,\nNetezza, or Bizgres MPP.\n\nThe announcement about Postgres linear scalability for SMP is only relevant\nto statement throughput for highly concurrent environments (web sites, OLTP,\netc).\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Apr 2006 13:43:29 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "\n\n\n\n\nHi Juan Casero,\n\nI've found that serial query clients are best served by PostgreSQL\nrunning on fast single or dual core processors, ( such as the Athlon\nFX60 ) rather than expensive n-way systems. I was orginally using an\n8-way Xeon computer for a similar serial throughput problem. and i\nwasn't supprised to find that at least 6 of the 8 processors were idle.\nThe point is, for this type client, you are better off spending the\nmoney on the fastest single or dual core processors than a multiway box.\n\nAnthony.\n\nJuan Casero (FL FLC) wrote:\n\n\n\nGreetings -\n \nI\nam testing a Sun Microsystems Sun Fire T2000 demo server at our\ncompany. I want to know if anyone here has any experience with this\nhardware and postgresql 8.1.3. I installed the copy of postgresql\n8.1.3 from blastwave.org onto this demo box and loaded our production\ndatabase into it. This box has a single Ultrasparc T1 cpu with six\nexecution piplelines that can each handle 4 threads. With the Unix top\nutility the postgresql server appears to bounce around between the\navailable threads on the system. For example I run a single large\nquery and I can see the postgresql server sometimes running on cpu/0,\nother times on cpu/1, cpu/3,....etc up to cpu/23. However, never is\nthe load for the postgres server reported to be higher than 4.16%. I\ndid the math and 4.16% x 24 threads = 98.84% cpu load. So I wonder if\nthe Solaris 10 kernel is somehow throttling the processes so that any\nsingle virtual processor can do no more than 4.16% load. We got this\nserver last week and I was able to install it in our rack just\nyesterday. Now I need to see how I can optimize the postgresql server\nto work on this box. Does anyone have any suggestions? I know the\npostgresql server is not smp aware but I believe parts of it are. In\nparticular the buffer manager is supposed to scale the performance\nalmost linearly with the number of cpu's (including virtual ones). I\ndon't know however, if I need to recompile the postgresql server myself\nto get those benefits. I am using the version of postgresql 8.1.3\nthat is available on blastwave.org. I am also working with the 64 bit\nversion of the database server. This machine has over 8GB of ram so I\nwas thinking of using the 64 bit version of the postgresql server so I\ncan access ram beyong the 4gb limit imposed by 32 bit addressing. Any\nhelp or recommendations for performance tweaking of postgresql is very\nmuch appreciated.\n \n \nThanks,\nJuan\n\n\n\n\n",
"msg_date": "Thu, 06 Apr 2006 13:35:47 +1000",
"msg_from": "Anthony Ransley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Tom is right. Unless your workload can generate lots of simultaneous \nqueries, you will not reap the full benefit of the Sun Fire T2000 \nsystem. I have tested 8.1.3 with an OLTP workload on an 8 cores system. \nWith 1500-2000 client connections, the CPU was only about 30% utilized. \nThe UltraSPARC T1 processor was designed for throughput with many cores \nrunning at lower frequency (1-1.2 GHz) to reduce power consumption. To \nspeed up a single big query, you'd be better off with a parallelize DB \nor an Opteron system with higher clock speed like this one \nhttp://www.sun.com/servers/entry/x4200/\n\nRegards,\n-Robert\n\nTom Lane wrote:\n\n>\"Juan Casero \\(FL FLC\\)\" <[email protected]> writes:\n> \n>\n>>... This box has a single Ultrasparc T1 cpu with six execution\n>>piplelines that can each handle 4 threads. With the Unix top utility\n>>the postgresql server appears to bounce around between the available\n>>threads on the system.\n>> \n>>\n>\n>Try sending it more than one query at a time? If you're testing with\n>just one client connection issuing queries, that's about what I'd expect.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n> \n>\n\n",
"msg_date": "Wed, 05 Apr 2006 20:41:16 -0700",
"msg_from": "Robert Lor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
}
] |
[
{
"msg_contents": "I am not sure about this. I mean I have postgresql 8.1.3 running on my\nWindows XP P4 HT laptop that I use for testing my webapps. When I hit\nthis pgsql on this laptop with a large query I can see the load spike up\nreally high on both of my virtual processors. Whatever, pgsql is doing\nit looks like both cpu's are being used indepently. The usage curve is\nnot identical on both of them that makes me think that parts of the\nserver are multithreaded. Admittedly I am not familiar with the source\ncode fo postgresql so I was hoping maybe one of the developers who is\ncould definitely answer this question.\n\nThanks,\nJuan\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Wednesday, April 05, 2006 4:43 PM\nTo: Juan Casero (FL FLC); [email protected]\nSubject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n\nJuan,\n\nOn 4/5/06 11:12 AM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> I know the postgresql server is not smp aware but I believe parts of \n> it are. In particular the buffer manager is supposed to scale the \n> performance almost linearly with the number of cpu's (including\nvirtual ones).\n> I don't know however, if I need to recompile the postgresql server \n> myself to get those benefits.\n\nAs Tom said, to get the benefits of parallelism on one query, you would\nneed a parallelizing database like Teradata, Oracle Parallel Query\noption, Netezza, or Bizgres MPP.\n\nThe announcement about Postgres linear scalability for SMP is only\nrelevant to statement throughput for highly concurrent environments (web\nsites, OLTP, etc).\n\n- Luke\n",
"msg_date": "Wed, 5 Apr 2006 15:54:49 -0500",
"msg_from": "\"Juan Casero \\(FL FLC\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan,\n\nOn 4/5/06 1:54 PM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> I am not sure about this. I mean I have postgresql 8.1.3 running on my\n> Windows XP P4 HT laptop that I use for testing my webapps. When I hit\n> this pgsql on this laptop with a large query I can see the load spike up\n> really high on both of my virtual processors. Whatever, pgsql is doing\n> it looks like both cpu's are being used indepently. The usage curve is\n> not identical on both of them that makes me think that parts of the\n> server are multithreaded. Admittedly I am not familiar with the source\n> code fo postgresql so I was hoping maybe one of the developers who is\n> could definitely answer this question.\n\nThere's no part of the Postgres backend that is threaded or multi-processed.\nA reasonable explanation for your windows experience is that your web server\nor the psql client may be taking some CPU cycles while the backend is\nprocessing your query. Also, depending on how the CPU load is reported, if\nthe OS is doing prefetching of I/O, it might show up as load.\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Apr 2006 14:37:28 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan Casero (FL FLC) wrote:\n> When I hit\n> this pgsql on this laptop with a large query I can see the load spike up\n> really high on both of my virtual processors. Whatever, pgsql is doing\n> it looks like both cpu's are being used indepently. \n\nIntel HT was partly a marketing thing, you don't really get two CPU's /\ncores etc. The virtual CPU is really that, there is only one cpu doing\nthe actual work with some extra glue to help the \"hyperthreading\".\n\nAs to how hyper intel's hyperthreading is, OSDL did some testing (I\nthink the dbt2 workload) and I remember HT reducing performance for\npgsql by about 15%. Worth looking up, benchmarks are subject to a lot of\nissues but was interesting.\n\nThere have been some seriously good recommendations in this newsgroup\nfor nice high powered servers, including good disk subsystems.\n\nMost involve some AMD Opertons, lots of spindles with a good raid\ncontroller preferred to one or two large disks and a good helping of\nram. Be interesting to get some numbers on the sunfire machine.\n\n- August\n\n\n",
"msg_date": "Wed, 05 Apr 2006 14:58:49 -0700",
"msg_from": "August Zajonc <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan,\n\n> When I hit\n> this pgsql on this laptop with a large query I can see the load spike up\n> really high on both of my virtual processors. Whatever, pgsql is doing\n> it looks like both cpu's are being used indepently. \n\nNope, sorry, you're being decieved. Postgres is strictly one process, one \nquery. \n\nYou can use Bizgres MPP to achieve multithreading; it's proprietary and you \nhave to pay for it. It does work well, though.\n\nMore importantly, though, you haven't really explained why you care about \nmultithreading.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 5 Apr 2006 15:01:39 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "On Wed, 2006-04-05 at 15:54, Juan Casero (FL FLC) wrote:\n> I am not sure about this. I mean I have postgresql 8.1.3 running on my\n> Windows XP P4 HT laptop that I use for testing my webapps. When I hit\n> this pgsql on this laptop with a large query I can see the load spike up\n> really high on both of my virtual processors. Whatever, pgsql is doing\n> it looks like both cpu's are being used indepently. The usage curve is\n> not identical on both of them that makes me think that parts of the\n> server are multithreaded. Admittedly I am not familiar with the source\n> code fo postgresql so I was hoping maybe one of the developers who is\n> could definitely answer this question.\n\nI think that really depends on your workload.\n\nAre you going to have a dozen or so transactions running at a time? \nthen regular postgresql is probably ok.\n\nIf you're gonna be running only one or two big, fat, hairy reporting\nqueries, then you might wanna look at the bizgress mpp version.\n\nNote that some queries lend themselves to parallel processing more than\nothers.\n",
"msg_date": "Wed, 05 Apr 2006 17:08:29 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "[email protected] (\"Juan Casero \\(FL FLC\\)\") writes:\n> I am not sure about this. I mean I have postgresql 8.1.3 running on\n> my Windows XP P4 HT laptop that I use for testing my webapps. When\n> I hit this pgsql on this laptop with a large query I can see the\n> load spike up really high on both of my virtual processors.\n> Whatever, pgsql is doing it looks like both cpu's are being used\n> indepently. The usage curve is not identical on both of them that\n> makes me think that parts of the server are multithreaded.\n\nThis is almost certainly a function of the fact that you're running\nthe single-threaded backend process, which then feeds a\nsingle-threaded front end process, namely the application that is\nbeing fed data.\n\nFor a query that returns a large return set, that will indeed make\nboth processors get pretty busy; one for the DB server, one for\nwhatever program is processing the results.\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\nIt's hard to tell if someone is inconspicuous. \n",
"msg_date": "Wed, 05 Apr 2006 18:11:54 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "[email protected] (Josh Berkus) writes:\n> Juan,\n>\n>> When I hit\n>> this pgsql on this laptop with a large query I can see the load spike up\n>> really high on both of my virtual processors. Whatever, pgsql is doing\n>> it looks like both cpu's are being used indepently. \n>\n> Nope, sorry, you're being decieved. Postgres is strictly one process, one \n> query. \n\nIt's not entirely deception; there is indeed independent use of both\nCPUs, it's just that it isn't from multithreading...\n-- \noutput = reverse(\"gro.mca\" \"@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/internet.html\n\"Don't use C; In my opinion, C is a library programming language not\nan app programming language.\" -- Owen Taylor (GTK+ and ORBit\ndeveloper)\n",
"msg_date": "Wed, 05 Apr 2006 18:12:43 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Juan,\n>> When I hit\n>> this pgsql on this laptop with a large query I can see the load spike up\n>> really high on both of my virtual processors. Whatever, pgsql is doing\n>> it looks like both cpu's are being used indepently. \n\n> Nope, sorry, you're being decieved. Postgres is strictly one process, one \n> query. \n\nThis is not strictly true: we have for instance pushed off some work\ninto a \"background writer\" process, and even just having both a client\nand a server process active allows some small amount of parallelism.\nBut you're certainly not going to see effective use of more than about\ntwo CPUs on a single query stream ... at least not without Bizgres or\nsome other add-on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 05 Apr 2006 18:24:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3 "
},
{
"msg_contents": "\nOn Apr 5, 2006, at 5:58 PM, August Zajonc wrote:\n\n> Most involve some AMD Opertons, lots of spindles with a good raid\n> controller preferred to one or two large disks and a good helping of\n> ram. Be interesting to get some numbers on the sunfire machine.\n\nI can highly recommend the SunFire X4100, however the only dual- \nchannel RAID card that fits in the box is the Adaptec 2230SLP. It is \nnot quite as fast as the LSI 320-2x when running freebsd, but is \nsufficient for my purpose.\n\n\n",
"msg_date": "Thu, 6 Apr 2006 11:20:17 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
}
] |
[
{
"msg_contents": "Luke (or anyone else who may be following this thread)\n\nDo you think bizgres might be a good choice of database server for the\nUltrasparc T1 based T2000? I have downloaded the source code but I was\nhoping to find out if the potential performance gains were worth the\neffort to compile and install the code.\n\nThanks,\nJuan \n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Wednesday, April 05, 2006 5:37 PM\nTo: Juan Casero (FL FLC); [email protected]\nSubject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n\nJuan,\n\nOn 4/5/06 1:54 PM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> I am not sure about this. I mean I have postgresql 8.1.3 running on \n> my Windows XP P4 HT laptop that I use for testing my webapps. When I \n> hit this pgsql on this laptop with a large query I can see the load \n> spike up really high on both of my virtual processors. Whatever, \n> pgsql is doing it looks like both cpu's are being used indepently. The\n\n> usage curve is not identical on both of them that makes me think that \n> parts of the server are multithreaded. Admittedly I am not familiar \n> with the source code fo postgresql so I was hoping maybe one of the \n> developers who is could definitely answer this question.\n\nThere's no part of the Postgres backend that is threaded or\nmulti-processed.\nA reasonable explanation for your windows experience is that your web\nserver or the psql client may be taking some CPU cycles while the\nbackend is processing your query. Also, depending on how the CPU load\nis reported, if the OS is doing prefetching of I/O, it might show up as\nload.\n\n- Luke\n",
"msg_date": "Wed, 5 Apr 2006 16:45:03 -0500",
"msg_from": "\"Juan Casero \\(FL FLC\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan,\n\nOn 4/5/06 2:45 PM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> Do you think bizgres might be a good choice of database server for the\n> Ultrasparc T1 based T2000? I have downloaded the source code but I was\n> hoping to find out if the potential performance gains were worth the\n> effort to compile and install the code.\n\nBizgres (non-MPP) does not do any multi-CPU parallelism, so it won't use\nmore CPUs in your T2000. It does have the faster sort performance and an\non-disk bitmap index, both of which will make it run many times (3-6) faster\nthan 8.1 Postgres depending if your queries use larger data or involve\nsorts.\n\nBizgres MPP is closed source and unfortunately for your T2000 experiment it\ndoesn't currently support Solaris SPARC CPUs, only Solaris x86. It would\nuse all of your CPUs and I/O channels on one or more machines for every\nquery. Again, it's optimized for queries where that use a lot of data or\nhave a lot of complexity (sorts, aggregations, joins, etc).\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Apr 2006 14:54:13 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
}
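A concrete sketch of the on-disk bitmap index Luke mentions, for readers who want to try Bizgres: the table and column names below are hypothetical, and the "USING bitmap" access-method name is an assumption about how Bizgres exposes the feature (plain PostgreSQL 8.1 does not ship it).

    -- A low-cardinality column on a large fact table is the typical candidate
    CREATE TABLE sales_fact (
        sale_date  date,
        store_id   integer,   -- only a few thousand distinct values
        amount     numeric
    );

    -- Bitmap indexes pay off when the column has well under 10,000 distinct values
    CREATE INDEX sales_fact_store_bmx ON sales_fact USING bitmap (store_id);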
] |
[
{
"msg_contents": "I am evaluating this SunFire T2000 as a replacement for an Intel P3 1Ghz\npostgresql server. This intel server runs a retail reporting database\non postgresql 8.1.3. I need to realize significant performance gains on\nT2000 server to justify the expense. So I need to tune the postgresql\nserver as much as I can for it. Right now the operating system (solaris\n10) sees each thread as a single cpu and only allows each thread 4.16%\nof the available cpu resources for processing queries. Since postgresql\nis not multithreaded and since I cannot apparently break past the\noperating system imposed limits on a single thread I can't fully realize\nthe performance benefits of the T2000 server unless and until I start\ngetting lots of people hitting the database server with requests. This\ndoesn't happen right now. It may happen later on as I write more\napplications for the server but I am looking to see if the performance\nbenefit we can get from this server is worth the price tag right now.\nThat is why I am looking for ways to tweak postgres on it. \n\n\nThanks,\nJuan \n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Wednesday, April 05, 2006 6:02 PM\nTo: [email protected]\nCc: Juan Casero (FL FLC); Luke Lonergan\nSubject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n\nJuan,\n\n> When I hit\n> this pgsql on this laptop with a large query I can see the load spike \n> up really high on both of my virtual processors. Whatever, pgsql is \n> doing it looks like both cpu's are being used indepently.\n\nNope, sorry, you're being decieved. Postgres is strictly one process,\none \nquery. \n\nYou can use Bizgres MPP to achieve multithreading; it's proprietary and\nyou have to pay for it. It does work well, though.\n\nMore importantly, though, you haven't really explained why you care\nabout multithreading.\n\n--\n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 5 Apr 2006 19:33:49 -0500",
"msg_from": "\"Juan Casero \\(FL FLC\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan,\n\nOn 4/5/06 5:33 PM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> ... but I am looking to see if the performance\n> benefit we can get from this server is worth the price tag right now.\n\nWhile many people here will look forward to performance results on the\nT2000, I can guarantee that your server money will go much further for a\nreporting application with an Opteron based system. Buy a Sun Galaxy with a\npair of Opteron 275s, run Linux on it, and I predict you will see\nperformance 4-5 times faster than the T2000 running Solaris for handling\nsingle queries, and 2-3 times faster when handling multiple queries.\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Apr 2006 17:49:21 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan, I think that AMD Opteron is more flex (OS and Hardware Upgrade) \nand then, the best solution.�\n\nWhat are you think about the Sun Fire X64 X4200 Server?\n\nTake a look in this analysis and performance benchmark�.\n\nRegards,\nMTada\n\n� http://www.anandtech.com/systems/showdoc.aspx?i=2727&p=2\n� http://www.anandtech.com/systems/showdoc.aspx?i=2727&p=7\n\nJuan Casero (FL FLC) wrote:\n\n>I am evaluating this SunFire T2000 as a replacement for an Intel P3 1Ghz\n>postgresql server. This intel server runs a retail reporting database\n>on postgresql 8.1.3. I need to realize significant performance gains on\n>T2000 server to justify the expense. So I need to tune the postgresql\n>server as much as I can for it. Right now the operating system (solaris\n>10) sees each thread as a single cpu and only allows each thread 4.16%\n>of the available cpu resources for processing queries. Since postgresql\n>is not multithreaded and since I cannot apparently break past the\n>operating system imposed limits on a single thread I can't fully realize\n>the performance benefits of the T2000 server unless and until I start\n>getting lots of people hitting the database server with requests. This\n>doesn't happen right now. It may happen later on as I write more\n>applications for the server but I am looking to see if the performance\n>benefit we can get from this server is worth the price tag right now.\n>That is why I am looking for ways to tweak postgres on it. \n>\n>\n>Thanks,\n>Juan \n>\n>-----Original Message-----\n>From: Josh Berkus [mailto:[email protected]] \n>Sent: Wednesday, April 05, 2006 6:02 PM\n>To: [email protected]\n>Cc: Juan Casero (FL FLC); Luke Lonergan\n>Subject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n>\n>Juan,\n>\n> \n>\n>>When I hit\n>>this pgsql on this laptop with a large query I can see the load spike \n>>up really high on both of my virtual processors. Whatever, pgsql is \n>>doing it looks like both cpu's are being used indepently.\n>> \n>>\n>\n>Nope, sorry, you're being decieved. Postgres is strictly one process,\n>one \n>query. \n>\n>You can use Bizgres MPP to achieve multithreading; it's proprietary and\n>you have to pay for it. It does work well, though.\n>\n>More importantly, though, you haven't really explained why you care\n>about multithreading.\n>\n>--\n>--Josh\n>\n>Josh Berkus\n>Aglio Database Solutions\n>San Francisco\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n>\n\n",
"msg_date": "Wed, 05 Apr 2006 22:11:32 -0300",
"msg_from": "Marcelo Tada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan Casero (FL FLC) wrote:\n\n>I can't fully realize\n>the performance benefits of the T2000 server unless and until I start\n>getting lots of people hitting the database server with requests. This\n>doesn't happen right now. It may happen later on as I write more\n>applications for the server but I am looking to see if the performance\n>benefit we can get from this server is worth the price tag right now.\n>That is why I am looking for ways to tweak postgres on it. \n>\n> \n>\nIf you have a need to use the system for other purposes with the extra \nCPU bandwidth, you can partition it using the Solaris Containers \nfeature. Doing this will give you room to grow your DB usage later and \nmake full use of the system now.\n\nJust a thought!\n\nRegards,\nRobert\n",
"msg_date": "Wed, 05 Apr 2006 21:15:45 -0700",
"msg_from": "Robert Lor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Juan,\n> \n> On 4/5/06 5:33 PM, \"Juan Casero (FL FLC)\" <[email protected]>\n> wrote:\n> \n>> ... but I am looking to see if the performance\n>> benefit we can get from this server is worth the price tag right now.\n> \n> While many people here will look forward to performance results on the\n> T2000, I can guarantee that your server money will go much further for a\n> reporting application with an Opteron based system. Buy a Sun Galaxy with a\n> pair of Opteron 275s, run Linux on it, and I predict you will see\n> performance 4-5 times faster than the T2000 running Solaris for handling\n> single queries, and 2-3 times faster when handling multiple queries.\n\nWe've got a Sun Fire V40z and it's quite a nice machine -- 6x 15krpm \ndrives, 4GB RAM, and a pair of Opteron 850s. This gives us more than \nenough power now for what we need, but it's nice to know that we can \nshoehorn a lot more RAM, and up it to eight CPU cores if needed.\n\nThe newer Sun Opteron systems look nice too, but unless you're using \nexternal storage, their little 2.5\" hard drives may not be ideal.\n\nThanks\nLeigh\n\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Thu, 06 Apr 2006 14:23:07 +1000",
"msg_from": "Leigh Dyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Leigh,\n\nOn 4/5/06 9:23 PM, \"Leigh Dyer\" <[email protected]> wrote:\n\n> We've got a Sun Fire V40z and it's quite a nice machine -- 6x 15krpm\n> drives, 4GB RAM, and a pair of Opteron 850s. This gives us more than\n> enough power now for what we need, but it's nice to know that we can\n> shoehorn a lot more RAM, and up it to eight CPU cores if needed.\n\nWe have one of these too - ours is signed by Scott McNealy.\n \n> The newer Sun Opteron systems look nice too, but unless you're using\n> external storage, their little 2.5\" hard drives may not be ideal.\n\nYes - but they end-of-lifed the V20z and V40z!\n\nOne big problem with the sun line in general is the tiny internal storage\ncapacity - already too small on the V40z at 5/6 drives, now ridiculous at 4\nSAS drives on the galaxy series.\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Apr 2006 21:27:27 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Leigh,\n> \n> On 4/5/06 9:23 PM, \"Leigh Dyer\" <[email protected]> wrote:\n> \n>> We've got a Sun Fire V40z and it's quite a nice machine -- 6x 15krpm\n>> drives, 4GB RAM, and a pair of Opteron 850s. This gives us more than\n>> enough power now for what we need, but it's nice to know that we can\n>> shoehorn a lot more RAM, and up it to eight CPU cores if needed.\n> \n> We have one of these too - ours is signed by Scott McNealy.\n> \nNice :)\n\n>> The newer Sun Opteron systems look nice too, but unless you're using\n>> external storage, their little 2.5\" hard drives may not be ideal.\n> \n> Yes - but they end-of-lifed the V20z and V40z!\n> \nThat's quite disappointing to hear -- our V40z isn't even six months \nold! We're not a big company, so external storage solutions are outside \nour price range, but we still wanted a nice brand-name box, and the V40z \nwas a great deal compared to smaller boxes like the HP DL385.\n\n> One big problem with the sun line in general is the tiny internal storage\n> capacity - already too small on the V40z at 5/6 drives, now ridiculous at 4\n> SAS drives on the galaxy series.\nI'm sure those little SAS drives would be great for web servers and \nother non-IO-intensive tasks though -- I'd love to get some X4100s in to \nreplace our Poweredge 1750s for that. It's a smart move overall IMHO, \nbut it's certainly not great for database serving.\n\nThanks\nLeigh\n> \n> - Luke\n> \n\n",
"msg_date": "Thu, 06 Apr 2006 14:47:00 +1000",
"msg_from": "Leigh Dyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Leigh Dyer wrote:\n> Luke Lonergan wrote:\n> \n>> Leigh,\n>>\n>> On 4/5/06 9:23 PM, \"Leigh Dyer\" <[email protected]> wrote:\n>>\n>>> We've got a Sun Fire V40z and it's quite a nice machine -- 6x 15krpm\n>>> drives, 4GB RAM, and a pair of Opteron 850s. This gives us more than\n>>> enough power now for what we need, but it's nice to know that we can\n>>> shoehorn a lot more RAM, and up it to eight CPU cores if needed.\n>>\n>>\n>> We have one of these too - ours is signed by Scott McNealy.\n>> \n> \n> Nice :)\n> \n>>> The newer Sun Opteron systems look nice too, but unless you're using\n>>> external storage, their little 2.5\" hard drives may not be ideal.\n>>\n>>\n>> Yes - but they end-of-lifed the V20z and V40z!\n>>\n> That's quite disappointing to hear -- our V40z isn't even six months \n> old! We're not a big company, so external storage solutions are outside \n> our price range, but we still wanted a nice brand-name box, and the V40z \n> was a great deal compared to smaller boxes like the HP DL385.\n> \n>> One big problem with the sun line in general is the tiny internal storage\n>> capacity - already too small on the V40z at 5/6 drives, now ridiculous \n>> at 4\n>> SAS drives on the galaxy series.\n> \n> I'm sure those little SAS drives would be great for web servers and \n> other non-IO-intensive tasks though -- I'd love to get some X4100s in to \n> replace our Poweredge 1750s for that. It's a smart move overall IMHO, \n> but it's certainly not great for database serving.\n> \n>\n\nI notice that Supermicro have recently brought out some Opteron systems, \nthey are hiding them here:\n\nhttp://www.supermicro.com/Aplus/system/\n\n\nThe 4U's have 8 SATA/SCSI drive bays - maybe still not enough, but \nbetter than 6!\n\nCheers\n\nMark\n\n\n\n",
"msg_date": "Thu, 06 Apr 2006 18:44:52 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Mark Kirkwood wrote:\n\n>>\n>>>> The newer Sun Opteron systems look nice too, but unless you're using\n>>>> external storage, their little 2.5\" hard drives may not be ideal.\n>>>\n>>>\n>>>\n>>> Yes - but they end-of-lifed the V20z and V40z!\n>>>\n>> That's quite disappointing to hear -- our V40z isn't even six months \n>> old! We're not a big company, so external storage solutions are \n>> outside our price range, but we still wanted a nice brand-name box, \n>> and the V40z was a great deal compared to smaller boxes like the HP \n>> DL385.\n>>\n>>> One big problem with the sun line in general is the tiny internal \n>>> storage\n>>> capacity - already too small on the V40z at 5/6 drives, now \n>>> ridiculous at 4\n>>> SAS drives on the galaxy series.\n>>\n>>\n>> I'm sure those little SAS drives would be great for web servers and \n>> other non-IO-intensive tasks though -- I'd love to get some X4100s in \n>> to replace our Poweredge 1750s for that. It's a smart move overall \n>> IMHO, but it's certainly not great for database serving.\n>>\n>>\n>\n\nExcuse me for this off topic, but i notice that you are very excited \nabout the sun's hardware, what os do you install on them , slowlaris?, \nhas that os improved in some espectacular way that i should take a look \nagain?, i used it until solaris 9 and the performance was horrible.\nim a happy freebsd user now (using hp and dell hardware though)\n\n---\nMiguel\n",
"msg_date": "Thu, 06 Apr 2006 01:32:41 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Miguel wrote:\n> \n> Excuse me for this off topic, but i notice that you are very excited \n> about the sun's hardware, what os do you install on them , slowlaris?, \n> has that os improved in some espectacular way that i should take a look \n> again?, i used it until solaris 9 and the performance was horrible.\n> im a happy freebsd user now (using hp and dell hardware though)\n\nI'm running Debian Sarge AMD64 on mine, and it works wonderfully. I'm \nnot a Solaris person, and I never plan on becoming one, but Sun's \nOpteron hardware is quite nice. The remote management was one of the \nfeatures that sold me -- full control over power, etc. and \nserial-over-LAN, through an SSH interface.\n\nSun don't support Debian officially (but we don't have a software \nsupport contract anyway, so I'm not too fussed), but I'm pretty sure \nthey support at least SLES and RHEL.\n\nThanks\nLeigh\n\n",
"msg_date": "Thu, 06 Apr 2006 17:42:27 +1000",
"msg_from": "Leigh Dyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "\nOn Apr 5, 2006, at 9:11 PM, Marcelo Tada wrote:\n\n> What are you think about the Sun Fire X64 X4200 Server?\n\nI use the X4100 and like it a lot. I'm about to buy another. I see \nno advantage to the X4200 unless you want the extra internal disks. \nI use an external array.\n\n",
"msg_date": "Thu, 6 Apr 2006 11:36:41 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "\nOn Apr 6, 2006, at 12:47 AM, Leigh Dyer wrote:\n\n> I'm sure those little SAS drives would be great for web servers and \n> other non-IO-intensive tasks though -- I'd love to get some X4100s \n> in to replace our Poweredge 1750s for that. It's a smart move \n> overall IMHO,\n\nFor this purpose, bang for the buck would lead me to getting Dell \n1850 with hardware RAID and U320 drives. The quick-n-dirty tests \ni've seen on the FreeBSD mailing list shows the disks are much faster \non the 1850 than the X4100 with its SAS disks.\n\n",
"msg_date": "Thu, 6 Apr 2006 11:39:07 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Hi Leigh\n\ninline comments\n\n\nLeigh Dyer wrote:\n\n> Luke Lonergan wrote:\n>\n>> Juan,\n>\n> We've got a Sun Fire V40z and it's quite a nice machine -- 6x 15krpm \n> drives, 4GB RAM, and a pair of Opteron 850s. This gives us more than \n> enough power now for what we need, but it's nice to know that we can \n> shoehorn a lot more RAM, and up it to eight CPU cores if needed.\n>\n> The newer Sun Opteron systems look nice too, but unless you're using \n> external storage, their little 2.5\" hard drives may not be ideal.\n>\n\nThats because Sun Fire V40z had write cache turned on while the \n4200/4100 has the write cache turned off. There is a religious belief \naround the \"write cache\" on the disk in Sun :-) To really compare the \nperformance, you have to turn on the write cache (I believe it was \nformat -e and the cache option.. but that could have changed.. need to \nverify that again.. Same goes for T2000 SAS disks too.. Write cache is \nturned off on it so be careful before you compare benchmarks on internal \ndrives :-)\n\n-Jignesh\n\n\n\n> Thanks\n> Leigh\n>\n>>\n>> - Luke\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Thu, 06 Apr 2006 17:55:30 -0400",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
}
] |
[
{
"msg_contents": "Ok that is beginning to become clear to me. Now I need to determine if\nthis server is worth the investment for us. Maybe it is not a speed\ndaemon but to be honest the licensing costs of an SMP aware RDBMS is\noutside our budget. When postgresql starts does it start up a super\nserver process and then forks copies of itself to handle incoming\nrequests? Or do I have to specify how many server processes should be\nstarted up? I figured maybe I can take advantage of the multiple cpu's\non this system by starting up enough postgres server processes to handle\nlarge numbers of incoming connections. I have this server available for\nsixty days so I may as well explore the performance of postgresql on it.\n\n\n\nThanks,\nJuan \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Luke\nLonergan\nSent: Wednesday, April 05, 2006 5:37 PM\nTo: Juan Casero (FL FLC); [email protected]\nSubject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n\nJuan,\n\nOn 4/5/06 1:54 PM, \"Juan Casero (FL FLC)\" <[email protected]>\nwrote:\n\n> I am not sure about this. I mean I have postgresql 8.1.3 running on \n> my Windows XP P4 HT laptop that I use for testing my webapps. When I \n> hit this pgsql on this laptop with a large query I can see the load \n> spike up really high on both of my virtual processors. Whatever, \n> pgsql is doing it looks like both cpu's are being used indepently. The\n\n> usage curve is not identical on both of them that makes me think that \n> parts of the server are multithreaded. Admittedly I am not familiar \n> with the source code fo postgresql so I was hoping maybe one of the \n> developers who is could definitely answer this question.\n\nThere's no part of the Postgres backend that is threaded or\nmulti-processed.\nA reasonable explanation for your windows experience is that your web\nserver or the psql client may be taking some CPU cycles while the\nbackend is processing your query. Also, depending on how the CPU load\nis reported, if the OS is doing prefetching of I/O, it might show up as\nload.\n\n- Luke\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n",
"msg_date": "Wed, 5 Apr 2006 21:14:46 -0500",
"msg_from": "\"Juan Casero \\(FL FLC\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan,\n\n> Ok that is beginning to become clear to me. Now I need to determine if\n> this server is worth the investment for us. Maybe it is not a speed\n> daemon but to be honest the licensing costs of an SMP aware RDBMS is\n> outside our budget. \n\nYou still haven't explained why you want multi-threaded queries. This is \nsounding like keeping up with the Joneses.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 5 Apr 2006 23:56:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Hi, Juan,\n\nJuan Casero (FL FLC) wrote:\n> Ok that is beginning to become clear to me. Now I need to determine if\n> this server is worth the investment for us. Maybe it is not a speed\n> daemon but to be honest the licensing costs of an SMP aware RDBMS is\n> outside our budget. When postgresql starts does it start up a super\n> server process and then forks copies of itself to handle incoming\n> requests? \n\nIt starts a super server process (Postmaster) and some background\nprocesses (background writer, stats collector). For each incoming\nconnection, the postmaster forks a single-threaded backend process,\nwhich handles all queries and transactions on this connection, and\nterminates when the connection terminates.\n\nSo as a thumb-rule, each connection can utilize only a single CPU. You\ncan utilize a few more CPUs than you have simultaneous connections, due\nto the background processes and the OS needing CPU for I/O, but thats\nrather marginal.\n\nAFAIK, Bizgres MPP has extended the backend processes to be multi\nthreaded, so a single connection can utilize several CPUs for some types\nof queries (large data sets, sorting/joining/hashing etc.). Btw, I\npresume that they might offer you a free test license, and I also\npresume their license fee is much lower than Oracle or DB/2.\n\n> Or do I have to specify how many server processes should be\n> started up? \n\nYou can limit the number of server processes by setting the maximum\nconnection limit.\n\n> I figured maybe I can take advantage of the multiple cpu's\n> on this system by starting up enough postgres server processes to handle\n> large numbers of incoming connections. I have this server available for\n> sixty days so I may as well explore the performance of postgresql on it.\n\nYes, you can take advantage if you have multiple clients (e. G. a wep\napp, that's what the T2000 / Niagara were made for). You have a Tomcat\nor Jboss sitting on it, each http connection forks its own thread. Each\ncustomer has its own CPU :-)\n\nThen use a connection pool to PostgreSQL, and you're fine. The more\ncustomers, the more CPUs are utilized.\n\nBut beware, if you have floating point maths, it will be very slow. All\n8 CPUs / 32 Threads share a single FPU. So if you need floating point\n(e. G. Mapserver, PostGIS geoprocessing, Java2D chart drawing or\nsomething), T2000 is not the right thing for you.\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 06 Apr 2006 11:08:14 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
}
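To see the one-backend-per-connection model Markus describes, here is a small sketch using standard PostgreSQL functions (nothing in it is specific to the T2000):

    -- Run this from two different client sessions: each returns a different
    -- operating-system process id, i.e. every connection gets its own backend.
    SELECT pg_backend_pid();

    -- The number of simultaneous backends (and therefore the number of CPUs a
    -- connection pool can keep busy) is capped by this postgresql.conf setting:
    SHOW max_connections;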
] |
[
{
"msg_contents": "Hi,\n\nI have a problem with the choice of index made by the query planner.\n\nMy table looks like this:\n\nCREATE TABLE t\n(\n p1 varchar not null,\n p2 varchar not null,\n p3 varchar not null,\n i1 integer,\n i2 integer,\n i3 integer,\n i4 integer,\n i5 integer,\n d1 date,\n d2 date,\n d3 date,\n PRIMARY KEY (p1, p2, p3)\n);\n\nI have also created an index on (p2, p3), as some of my lookups are on these\nonly.\nAll the integers and dates are data values.\nThe table has around 9 million rows.\nI am using postgresl 7.4.7\n\nI have set statistics to 1000 on the p1, p2 and p3 columns, and run vacuum full\nanalyse. However, I still see\nquery plans like this:\n\ndb=# explain select * from t where p1 = 'something' and p2 = 'fairly_common'\nand p3 = 'fairly_common'; \n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using p2p3 on t (cost=0.00..6.01 rows=1 width=102)\n Index Cond: (((p2)::text = 'fairly_common'::text) AND ((p3)::text =\n'fairly_common'::text))\n Filter: ((p1)::text = 'something'::text)\n(3 rows)\n\nThe problem appears to be this:\n\ndb=# explain select * from t where p2 = 'fairly_common' and p3 =\n'fairly_common'; \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using p2p3 on t (cost=0.00..6.01 rows=1 width=102)\n Index Cond: (((p2)::text = 'fairly_common'::text) AND ((p3)::text =\n'fairly_common'::text))\n(3 rows)\n\nThe query planner thinks that this will return only 1 row.\nIn fact, these index lookups sometimes return up to 500 rows, which then must\nbe filtered by p1.\nThis can take 2 or 3 seconds to execute for what should be a simple primary key\nlookup.\n\nFor VERY common values of p2 and p3, the query planner chooses the primary key,\nbecause these values are stored\nexplicitly in the analyse results. For rare values there is no problem,\nbecause the query runs quickly.\nBut for \"fairly common\" values, there is a problem.\n\nI would like the query planner to use the primary key for all of these lookups.\n How can I enforce this?\n\nThanks,\nBrian\n",
"msg_date": "Thu, 6 Apr 2006 12:35:44 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner is using wrong index."
},
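For readers following along, the statistics setup described above (a target of 1000 on the three key columns, then re-analysing) looks roughly like this; a sketch against the table definition in the post:

    -- Collect more detailed statistics for the key columns
    ALTER TABLE t ALTER COLUMN p1 SET STATISTICS 1000;
    ALTER TABLE t ALTER COLUMN p2 SET STATISTICS 1000;
    ALTER TABLE t ALTER COLUMN p3 SET STATISTICS 1000;

    -- Refresh the planner statistics (the post used VACUUM FULL ANALYSE;
    -- plain ANALYZE is enough to rebuild the statistics themselves)
    ANALYZE t;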
{
"msg_contents": "On fim, 2006-04-06 at 12:35 +1000, Brian Herlihy wrote:\n\n> I have a problem with the choice of index made by the query planner.\n> \n> My table looks like this:\n> \n> CREATE TABLE t\n> (\n> p1 varchar not null,\n> p2 varchar not null,\n> p3 varchar not null,\n> i1 integer,\n> i2 integer,\n> i3 integer,\n> i4 integer,\n> i5 integer,\n> d1 date,\n> d2 date,\n> d3 date,\n> PRIMARY KEY (p1, p2, p3)\n> );\n> \n> I have also created an index on (p2, p3), as some of my lookups are on these\n> only.\n\n> All the integers and dates are data values.\n> The table has around 9 million rows.\n> I am using postgresl 7.4.7\n> \n> I have set statistics to 1000 on the p1, p2 and p3 columns, and run vacuum full\n> analyse. However, I still see\n> query plans like this:\n> \n...\n> db=# explain select * from t where p2 = 'fairly_common' and p3 =\n> 'fairly_common'; \n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using p2p3 on t (cost=0.00..6.01 rows=1 width=102)\n> Index Cond: (((p2)::text = 'fairly_common'::text) AND ((p3)::text =\n> 'fairly_common'::text))\n> (3 rows)\n\nplease show us an actual EXPLAIN ANALYZE\nthis will show us more.\n\n> I would like the query planner to use the primary key for all of these lookups.\n> How can I enforce this?\n\nHow would that help? have you tested to see if it would \nactualy be better?\n\ngnari\n\n\n",
"msg_date": "Thu, 06 Apr 2006 08:52:54 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "\n--- Ragnar <[email protected]> wrote:\n\n> On fim, 2006-04-06 at 12:35 +1000, Brian Herlihy wrote:\n> \n> > I have a problem with the choice of index made by the query planner.\n> > \n> > My table looks like this:\n> > \n> > CREATE TABLE t\n> > (\n> > p1 varchar not null,\n> > p2 varchar not null,\n> > p3 varchar not null,\n> > i1 integer,\n> > i2 integer,\n> > i3 integer,\n> > i4 integer,\n> > i5 integer,\n> > d1 date,\n> > d2 date,\n> > d3 date,\n> > PRIMARY KEY (p1, p2, p3)\n> > );\n> > \n> > I have also created an index on (p2, p3), as some of my lookups are on\n> these\n> > only.\n> \n> > All the integers and dates are data values.\n> > The table has around 9 million rows.\n> > I am using postgresl 7.4.7\n> > \n> > I have set statistics to 1000 on the p1, p2 and p3 columns, and run vacuum\n> full\n> > analyse. However, I still see\n> > query plans like this:\n> > \n> ...\n> > db=# explain select * from t where p2 = 'fairly_common' and p3 =\n> > 'fairly_common'; \n> \n> > QUERY PLAN \n> >\n>\n-----------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using p2p3 on t (cost=0.00..6.01 rows=1 width=102)\n> > Index Cond: (((p2)::text = 'fairly_common'::text) AND ((p3)::text =\n> > 'fairly_common'::text))\n> > (3 rows)\n> \n> please show us an actual EXPLAIN ANALYZE\n> this will show us more.\n> \n> > I would like the query planner to use the primary key for all of these\n> lookups.\n> > How can I enforce this?\n> \n> How would that help? have you tested to see if it would \n> actualy be better?\n> \n> gnari\n> \n\nYes, the primary key is far better. I gave it the ultimate test - I dropped\nthe (p2, p3) index. It's blindingly fast when using the PK, which is what I\nexpect from Postgresql :) This query is part of an import process, which has\nbeen getting increasingly slow as the table has grown.\n\nI first discovered the problem when I noticed queries which should be simple PK\nlookups taking up to 2.5 seconds on an idle system. I discussed this problem\nin the Postgres IRC channel, and it turns out to be due to an inaccurate\nselectivity estimate.\n\nThe columns p2 and p3 are highly correlated, which is why I often get hundreds\nof rows even after specifying values for both these columns. However, the\nquery optimizer assumes the columns are not correlated. It calculates the\nselectivity for each column seperately, then multiplies them to get the\ncombined selectivity for specifying both p2 and p3. This results in an\nestimate of 1 row, which makes the (p2,p3) index look as good as the (p1,p2,p3)\nindex.\n\nI'm aware now that there is no way to force use of a particular index in\nPostgres. I've also been told that there is no way to have the optimizer take\ninto account correlation between column values.\n\nMy options seem to be\n - Fudge the analysis results so that the selectivity estimate changes. I\nhave tested reducing n_distinct, but this doesn't seem to help.\n - Combine the columns into one column, allowing postgres to calculate the\ncombined selectivity.\n - Drop the (p2, p3) index. But I need this for other queries.\n\nNone of these are good solutions. So I am hoping that there is a better way to\ngo about this!\n\nThanks,\nBrian\n",
"msg_date": "Thu, 6 Apr 2006 19:27:11 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner is using wrong index."
},
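The per-column estimates that get multiplied together are visible in pg_stats; a sketch of how to inspect them for the two correlated columns from the thread:

    -- n_distinct and the most-common-value lists are what the planner combines,
    -- under the assumption that p2 and p3 vary independently -- exactly the
    -- assumption that breaks down here.
    SELECT attname, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 't'
      AND attname IN ('p2', 'p3');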
{
"msg_contents": "On fim, 2006-04-06 at 19:27 +1000, Brian Herlihy wrote:\n> --- Ragnar <[email protected]> wrote:\n> \n> > On fim, 2006-04-06 at 12:35 +1000, Brian Herlihy wrote:\n> >\n...\n> > > PRIMARY KEY (p1, p2, p3)\n...\n> > > \n> > > I have also created an index on (p2, p3), as some of my lookups are on\n> > > these only.\n...\n> > > db=# explain select * from t where p2 = 'fairly_common' and p3 =\n> > > 'fairly_common';\n \n> > please show us an actual EXPLAIN ANALYZE\n \n> > > I would like the query planner to use the primary key for all of these\n> > lookups.\n> > \n> > have you tested to see if it would actualy be better?\n> > \n\n> Yes, the primary key is far better. I gave it the ultimate test - I dropped\n> the (p2, p3) index. It's blindingly fast when using the PK, \n\nI have problems understanding exactly how an index on \n(p1,p2,p3) can be faster than and index on (p2,p3) for\na query not involving p1.\ncan you demonstrate this with actual EXPLAIN ANALYZES ?\nsomething like:\nEXPLAIN ANALYZE select * from t where p2 = ? and p3 = ?;\nBEGIN;\nDROP INDEX p2p3;\nEXPLAIN ANALYZE select * from t where p2 = ? and p3 = ?;\nROLLBACK;\n\nmaybe your p2p3 index needs REINDEX ?\n\n\n> My options seem to be\n> - Fudge the analysis results so that the selectivity estimate changes. I\n> have tested reducing n_distinct, but this doesn't seem to help.\n> - Combine the columns into one column, allowing postgres to calculate the\n> combined selectivity.\n> - Drop the (p2, p3) index. But I need this for other queries.\n> \n> None of these are good solutions. So I am hoping that there is a better way to\n> go about this!\n\nI think we must detemine exactly what the problem is\nbefore devising complex solutions\n\ngnari\n\n\n",
"msg_date": "Thu, 06 Apr 2006 11:49:56 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "\n--- Ragnar <[email protected]> wrote:\n\n> On fim, 2006-04-06 at 19:27 +1000, Brian Herlihy wrote:\n> \n> > Yes, the primary key is far better. I gave it the ultimate test - I\n> dropped\n> > the (p2, p3) index. It's blindingly fast when using the PK, \n> \n> I have problems understanding exactly how an index on \n> (p1,p2,p3) can be faster than and index on (p2,p3) for\n> a query not involving p1.\n> can you demonstrate this with actual EXPLAIN ANALYZES ?\n> something like:\n> EXPLAIN ANALYZE select * from t where p2 = ? and p3 = ?;\n> BEGIN;\n> DROP INDEX p2p3;\n> EXPLAIN ANALYZE select * from t where p2 = ? and p3 = ?;\n> ROLLBACK;\n> \n> maybe your p2p3 index needs REINDEX ?\n> \n\nHere's the output. The timings after caching are repeatable (varying only by\n10% or so). \n\nQuery before caching:\n\ndb# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\nAND p3 = 'web/results?itag=&q=&kgs=&kls=';\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using p2_p3_idx on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=2793.247..2793.247 rows=0 loops=1)\n Index Cond: (((p2)::text = 'uk.altavista.com'::text) AND ((p3)::text =\n'web/results?itag=&q=&kgs=&kls='::text))\n Filter: ((p1)::text = 'a'::text)\n Total runtime: 2793.303 ms\n(4 rows)\n\nQuery after caching:\n\ndb# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\nAND p3 = 'web/results?itag=&q=&kgs=&kls=';\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using p2_p3_idx on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=0.617..0.617 rows=0 loops=1)\n Index Cond: (((p2)::text = 'uk.altavista.com'::text) AND ((p3)::text =\n'web/results?itag=&q=&kgs=&kls='::text))\n Filter: ((p1)::text = 'a'::text)\n Total runtime: 0.665 ms\n(4 rows)\n\n=== At this point I did \"DROP INDEX p2_p3_idx\"\n\nQuery after dropping index:\n\ndb# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\nAND p3 = 'web/results?itag=&q=&kgs=&kls=';\n \nQUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_pkey on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=95.188..95.188 rows=0 loops=1)\n Index Cond: (((p1)::text = 'a'::text) AND ((p2)::text =\n'uk.altavista.com'::text) AND ((p3)::text =\n'web/results?itag=&q=&kgs=&kls='::text))\n Total runtime: 95.239 ms\n(3 rows)\n\nQuery after dropping index, fully cached:\n\ndb# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\nAND p3 = 'web/results?itag=&q=&kgs=&kls=';\n \nQUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_pkey on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=0.030..0.030 rows=0 loops=1)\n Index Cond: (((p1)::text = 'a'::text) AND ((p2)::text =\n'uk.altavista.com'::text) AND ((p3)::text =\n'web/results?itag=&q=&kgs=&kls='::text))\n Total runtime: 0.077 ms\n(3 rows)\n\n\n\nAnd one where the query planner chooses the primary key instead. 
Both p2 and\np3 are present as Most Common Values in pg_statistics:\n\nQuery before fully cached:\n\ndb# explain analyze SELECT * FROM t WHERE p1 = 'b' AND p2 = 'www.google.com'\nAND p3 = 'search?hl=&lr=&q=';\n\n \nQUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_pkey on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=212.092..212.100 rows=1 loops=1)\n Index Cond: (((p1)::text = 'b'::text) AND ((p2)::text =\n'www.google.com'::text) AND ((p3)::text = 'search?hl=&lr=&q='::text))\n Total runtime: 212.159 ms\n(3 rows)\n\nQuery after fully cached:\n\ndb# explain analyze SELECT * FROM t WHERE p1 = 'b' AND p2 = 'www.google.com'\nAND p3 = 'search?hl=&lr=&q=';\n \nQUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_pkey on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=0.034..0.039 rows=1 loops=1)\n Index Cond: (((p1)::text = 'b'::text) AND ((p2)::text =\n'www.google.com'::text) AND ((p3)::text = 'search?hl=&lr=&q='::text))\n Total runtime: 0.094 ms\n(3 rows)\n\n\nI have set statistics to 1000 on all of p1, p2 and p3. The table was recently\nvacuumed and analyzed, and the index was recreated (after being dropped) before\nthese tests were run. The tests are 100% reproducible, both in postgresql\n7.4.7 and 8.1.3.\n\nThe indexes are:\n\nt_pkey (p1, p2, p3) -- UNIQUE, PRIMARY KEY\np2_p3_idx (p2, p3) -- NOT UNIQUE\n\nThe problem is that a lookup which specifies p2 and p3 can return as many as\n500 rows. The optimizer assumes that such a lookup will return 1 row, and so\nit chooses a bad plan. That sums it up.\n\nWhat I need is a way to make it choose the primary key.\n\nThanks in advance,\nBrian\n",
"msg_date": "Fri, 7 Apr 2006 00:01:51 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "On f�s, 2006-04-07 at 00:01 +1000, Brian Herlihy wrote:\n> --- Ragnar <[email protected]> wrote:\n> \n> > On fim, 2006-04-06 at 19:27 +1000, Brian Herlihy wrote:\n> > \n> > > Yes, the primary key is far better. I gave it the ultimate test - I\n> > dropped\n> > > the (p2, p3) index. It's blindingly fast when using the PK, \n> > \n> > I have problems understanding exactly how an index on \n> > (p1,p2,p3) can be faster than and index on (p2,p3) for\n> > a query not involving p1.\n\n> db# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\n> AND p3 = 'web/results?itag=&q=&kgs=&kls=';\n\nthis is different from what you said earlier. in your \noriginal post you showed a problem query without any\nreference to p1 in the WHERE clause. this confused me.\n\n> Index Scan using p2_p3_idx on t (cost=0.00..6.02 rows=1 width=102) (actual\n> time=2793.247..2793.247 rows=0 loops=1)\n> Index Cond: (((p2)::text = 'uk.altavista.com'::text) AND ((p3)::text =\n> 'web/results?itag=&q=&kgs=&kls='::text))\n> Filter: ((p1)::text = 'a'::text)\n> Total runtime: 2793.303 ms\n> (4 rows)\n\ntry to add an ORDER BY clause:\n\nexplain analyze \n select * from t \n WHERE p1 = 'a'\n and p2 = 'uk.altavista.com'\n AND p3 = 'web/results?itag=&q=&kgs=&kls='\n ORDER BY p1,p2,p3;\n\nthis might push the planner into using the primary key\n\ngnari\n\n\n\n\n",
"msg_date": "Thu, 06 Apr 2006 16:20:29 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "\n--- Ragnar <[email protected]> wrote:\n\n> On f�s, 2006-04-07 at 00:01 +1000, Brian Herlihy wrote:\n> > Index Scan using p2_p3_idx on t (cost=0.00..6.02 rows=1 width=102)\n> (actual\n> > time=2793.247..2793.247 rows=0 loops=1)\n> > Index Cond: (((p2)::text = 'uk.altavista.com'::text) AND ((p3)::text =\n> > 'web/results?itag=&q=&kgs=&kls='::text))\n> > Filter: ((p1)::text = 'a'::text)\n> > Total runtime: 2793.303 ms\n> > (4 rows)\n> \n> try to add an ORDER BY clause:\n> \n> explain analyze \n> select * from t \n> WHERE p1 = 'a'\n> and p2 = 'uk.altavista.com'\n> AND p3 = 'web/results?itag=&q=&kgs=&kls='\n> ORDER BY p1,p2,p3;\n> \n> this might push the planner into using the primary key\n> \n> gnari\n> \n\nThankyou very much, that works very well for select. However, I need it to\nwork for update as well. Is there an equivalent way to force use of an index\nfor updates?\n\nHere are the results for select:\n\ndb# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\nAND p3 = 'web/results?itag=&q=&kgs=&kls=' order by p1,p2,p3;\n \nQUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t_pkey on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=32.519..32.519 rows=0 loops=1)\n Index Cond: (((p1)::text = 'a'::text) AND ((p2)::text =\n'uk.altavista.com'::text) AND ((p3)::text =\n'web/results?itag=&q=&kgs=&kls='::text))\n Total runtime: 32.569 ms\n(3 rows)\n\ndb# explain analyze select * from t WHERE p1 = 'a' and p2 = 'uk.altavista.com'\nAND p3 = 'web/results?itag=&q=&kgs=&kls=';\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using p2_p3_idx on t (cost=0.00..6.02 rows=1 width=102) (actual\ntime=2790.364..2790.364 rows=0 loops=1)\n Index Cond: (((p2)::text = 'uk.altavista.com'::text) AND ((p3)::text =\n'web/results?itag=&q=&kgs=&kls='::text))\n Filter: ((p1)::text = 'a'::text)\n Total runtime: 2790.420 ms\n(4 rows)\n\n\nBut I cannot add an \"order by\" to an update.\n\nThe other idea I came up with last night was to change p2_p3_idx so it indexes\na value derived from p2 and p3, rather than p2 and p3 themselves. This would\n\"hide\" this index from the optimizer, forcing it to use the primary key.\n\nI am really surprised that I have to go through such contortions just to use\nthe primary key! This area of Postgres needs improvement.\n\nThanks,\nBrian\n",
"msg_date": "Fri, 7 Apr 2006 09:56:21 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Brian Herlihy\n> Sent: Thursday, April 06, 2006 6:56 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Query planner is using wrong index.\n[Snip]\n> I am really surprised that I have to go through such contortions just\nto\n> use\n> the primary key! This area of Postgres needs improvement.\n> \n\n\nOf course you mentioned that you are using 7.4.7. You might want to try\nupgrading to 8.1.3. There have been a lot of improvements to the\nperformance since 7.4. I don't know if your specific problem was fixed,\nbut it's worth a try.\n\nAlso you might want to at least upgrade to 7.4.12 for the bug fixes.\n\n\n",
"msg_date": "Thu, 6 Apr 2006 19:09:33 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "\n--- Dave Dutcher <[email protected]> wrote:\n> > -----Original Message-----\n> > To: [email protected]\n> > Subject: Re: [PERFORM] Query planner is using wrong index.\n> [Snip]\n> > I am really surprised that I have to go through such contortions just\n> to\n> > use\n> > the primary key! This area of Postgres needs improvement.\n> > \n> \n> \n> Of course you mentioned that you are using 7.4.7. You might want to try\n> upgrading to 8.1.3. There have been a lot of improvements to the\n> performance since 7.4. I don't know if your specific problem was fixed,\n> but it's worth a try.\n> \n> Also you might want to at least upgrade to 7.4.12 for the bug fixes.\n\nThanks for the suggestions. I've verified the same problem in 8.1.3 as well,\nafter my initial post. It was actually in 8.1.3 that I first discovered the\nproblem.\n\nI noticed this item in the TODO list:\n\n- Allow accurate statistics to be collected on indexes with more than one\ncolumn or expression indexes, perhaps using per-index statistics\n\nThis is what I need! But until that is added, I need a way to use the primary\nkey with the current version :)\n\nThanks,\nBrian\n",
"msg_date": "Fri, 7 Apr 2006 10:15:38 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner is using wrong index."
},
{
"msg_contents": "Brian Herlihy <[email protected]> writes:\n> My options seem to be\n> - Fudge the analysis results so that the selectivity estimate changes. I\n> have tested reducing n_distinct, but this doesn't seem to help.\n> - Combine the columns into one column, allowing postgres to calculate the\n> combined selectivity.\n> - Drop the (p2, p3) index. But I need this for other queries.\n\nHave you considered reordering the pkey to be (p2,p3,p1) and then\ndropping the (p2,p3) index?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 Apr 2006 23:17:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner is using wrong index. "
},
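For completeness, Tom's suggested reordering might look like the sketch below; it assumes the default constraint name t_pkey, takes an exclusive lock while the index is rebuilt, and (as the next message explains) was not the route ultimately taken because lookups on (p1) and (p1, p2) are also needed:

    BEGIN;
    -- Replace the (p1, p2, p3) primary key with one led by p2, p3 ...
    ALTER TABLE t DROP CONSTRAINT t_pkey;
    ALTER TABLE t ADD PRIMARY KEY (p2, p3, p1);
    -- ... which makes the separate (p2, p3) index redundant
    DROP INDEX p2_p3_idx;
    COMMIT;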
{
"msg_contents": "--- Tom Lane <[email protected]> wrote:\n\n> Brian Herlihy <[email protected]> writes:\n> > My options seem to be\n> > - Fudge the analysis results so that the selectivity estimate changes. I\n> > have tested reducing n_distinct, but this doesn't seem to help.\n> > - Combine the columns into one column, allowing postgres to calculate the\n> > combined selectivity.\n> > - Drop the (p2, p3) index. But I need this for other queries.\n> \n> Have you considered reordering the pkey to be (p2,p3,p1) and then\n> dropping the (p2,p3) index?\n> \n> \t\t\tregards, tom lane\n\nHi Tom,\n\nI've considered it. Unfortunately I need to do lookups on (p1) and (p1,p2) as\nwell as (p1, p2, p3).\n\nThe solution I've gone with is to create an index on (p2 || '/' || p3). This\nis unique for each p2/p3 combination, because p2 cannot contain the '/'\ncharacter. I'm assuming that this index will be no slower to generate than one\non (p2, p3), as concatenation is very cheap. Having the index on an expression\n\"hides\" it from the optimizer, which is then forced to use the primary key\ninstead.\n\nIt works perfectly now! There were only 2 queries in the system which need\nthis index, so it was no problem to change them.\n\nThankyou very much for all your time and patience!\n\nBefore I go, I have a question - From discussions on the Postgresql irc\nchannel, and from reading the TODO list on the website, I am under the\nimpression that there are no plans to allow optimizer hints, such as \"use index\ntable_pkey\". Is this really true? Such a feature would make life inestimably\neasier for your end-users, particularly me :)\n\nThanks,\nBrian\n",
"msg_date": "Fri, 7 Apr 2006 16:41:08 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner is using wrong index. "
},
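A minimal sketch of the expression-index workaround described above, assuming p2 can never contain '/' (the index name and the literal value are illustrative):

    -- The planner will not consider this index for a plain (p2, p3) equality
    -- search, so the primary key is used for the three-column lookups ...
    DROP INDEX p2_p3_idx;
    CREATE INDEX p2_p3_expr_idx ON t ((p2 || '/' || p3));
    ANALYZE t;

    -- ... while the two queries that relied on (p2, p3) are rewritten to match
    -- the indexed expression
    SELECT * FROM t
    WHERE p2 || '/' || p3 = 'uk.altavista.com/web/results?itag=&q=&kgs=&kls=';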
{
"msg_contents": "Brian Herlihy <[email protected]> writes:\n> Before I go, I have a question - From discussions on the Postgresql irc\n> channel, and from reading the TODO list on the website, I am under the\n> impression that there are no plans to allow optimizer hints, such as \"use index\n> table_pkey\". Is this really true?\n\nI personally don't think it's a good idea: the time spent in designing,\nimplementing, and maintaining a usable hint system would be significant,\nand IMHO the effort is better spent on *fixing* the optimizer problems\nthan working around them. There are also good arguments about how hints\nwouldn't be future-proof --- for instance, the recent addition of bitmap\nindexscan capability would've obsoleted an awful lot of hints, had\nanyone had any on their queries. We'd then be faced with either turning\noff the hints or being forced by them to adopt inferior plans.\n\nThe direction I'd like to see us go to solve your problem is maintaining\ncross-column statistics. It's not practical to store joint stats for\nevery set of columns, but the existence of an index on (p2,p3) ought to\ncue us that p2/p3 stats would be worth having.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 10:34:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner is using wrong index. "
},
{
"msg_contents": "Tom Lane wrote:\n> Brian Herlihy <[email protected]> writes:\n>> Before I go, I have a question - From discussions on the Postgresql irc\n>> channel, and from reading the TODO list on the website, I am under the\n>> impression that there are no plans to allow optimizer hints, such as \"use index\n>> table_pkey\". Is this really true?\n> \n> I personally don't think it's a good idea: the time spent in designing,\n> implementing, and maintaining a usable hint system would be significant,\n> and IMHO the effort is better spent on *fixing* the optimizer problems\n> than working around them.\n\nTom - does the planner/executor know it's got row estimates wrong? That \nis, if I'm not running an EXPLAIN ANALYSE is there a point at which we \ncould log \"planner estimate for X out by factor of Y\"?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Apr 2006 16:08:10 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Spotting planner errors (was Re: Query planner is using wrong index.)"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> Tom - does the planner/executor know it's got row estimates wrong? That \n> is, if I'm not running an EXPLAIN ANALYSE is there a point at which we \n> could log \"planner estimate for X out by factor of Y\"?\n\nNot at the moment, but you could certainly imagine changing the executor\nto count rows even without EXPLAIN ANALYZE, and then complain during\nplan shutdown.\n\nNot sure how helpful that would be; there would be a lot of noise from\ncommon cases such as executing underneath a LIMIT node.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 11:12:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spotting planner errors (was Re: Query planner is using wrong\n\tindex.)"
},
{
"msg_contents": "Tom Lane wrote:\n> Richard Huxton <[email protected]> writes:\n>> Tom - does the planner/executor know it's got row estimates wrong? That \n>> is, if I'm not running an EXPLAIN ANALYSE is there a point at which we \n>> could log \"planner estimate for X out by factor of Y\"?\n> \n> Not at the moment, but you could certainly imagine changing the executor\n> to count rows even without EXPLAIN ANALYZE, and then complain during\n> plan shutdown.\n> \n> Not sure how helpful that would be; there would be a lot of noise from\n> common cases such as executing underneath a LIMIT node.\n\nHmm - thinking about it you'd probably want to record it similarly to \nstats too. It's the fact that the planner *repeatedly* gets an estimate \nwrong that's of interest.\n\nWould it be prohibitive to total actions taken - to act as raw data for \nrandom_page_cost / cpu_xxx_cost? If you could get a ratio of estimated \nvs actual time vs the various page-fetches/index-fetches etc. we could \nactually plug some meaningful numbers in.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Apr 2006 16:24:56 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spotting planner errors (was Re: Query planner is using"
}
] |
[
{
"msg_contents": "Hello,\nI am doing some test with differents values for the parameter maintenance_work_mem in order to\nameliorate the time for the creation of index and and the use of vacuum and analyse.\nI read in the doc that this parameter is just for create index, vacuum and analyse and foreign key .\nBut when i test 2 queries with differents values the result are twice big :\nfor mwm to 64 Mo the query 1 last 34 min and the query 2 41 min\nfor mwm to 512 mo the query 1 last 17 min and the query 2 21 min\nSo my question is in what condition the parameter maintenance_work_mem influence on the execution of queries. \n Thanks ,\nHello,\nI am doing some test with differents values for the parameter maintenance_work_mem in order to\nameliorate the time for the creation of index and and the use of vacuum and analyse.\nI read in the doc that this parameter is just for create index, vacuum and analyse and foreign key .\nBut when i test 2 queries with differents values the result are twice big :\nfor mwm to 64 Mo the query 1 last 34 min and the query 2 41 min\nfor mwm to 512 mo the query 1 last 17 min and the query 2 21 min\nSo my question is in what condition the parameter maintenance_work_mem influence on the execution of queries. \n Thanks ,",
"msg_date": "Thu, 6 Apr 2006 10:57:41 +0200 (CEST)",
"msg_from": "luchot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Maintenance_work_mem influence on queries"
}
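maintenance_work_mem is documented as applying to CREATE INDEX, VACUUM, ANALYZE and ALTER TABLE ADD FOREIGN KEY; ordinary queries take their sort and hash memory from work_mem instead, so a change in plain query times usually points at something else (caching, concurrent maintenance, work_mem). A sketch of setting both per session on 8.1, where the values are given in kilobytes (the object names are hypothetical):

    -- 512 MB for maintenance commands in this session
    SET maintenance_work_mem = 524288;
    CREATE INDEX big_idx ON big_table (some_column);
    VACUUM ANALYZE big_table;

    -- 64 MB per sort/hash step for ordinary queries in this session
    SET work_mem = 65536;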
] |
[
{
"msg_contents": "Because I plan to develop a rather large (for us anyway) data warehouse\nwith PostgreSQL. I am looking for the right hardware that can handle\nqueries on a database that might grow to over a 100 gigabytes. Right\nnow our decision support system based on postgresql 8.1.3 stores retail\nsales information for about 4 four years back *but* only as weekly\nsummaries. I want to build the system so it can handle daily sales\ntransactions also. You can imagine how many more records this will\ninvolve so I am looking for hardware that can give me the performance I\nneed to make this project useable. In other words parsing and loading\nthe daily transaction logs for our stores is likely to take huge amounts\nof effort. I need a machine that can complete the task in a reasonable\namount of time. As people start to query the database to find sales\nrelated reports and information I need to make sure the queries will run\nreasonably fast for them. I have already hand optimized all of my\nqueries on the current system. But currently I only have weekly sales\nsummaries. Other divisions in our company have done a similar project\nusing MS SQL Server on SMP hardware far outclassing the database server\nI currently use and they report heavy loads on the server with less than\nideal query run times. I am sure I can do my part to optimize the\nqueries once I start this project but there is only so much you can do.\nAt some point you just need more powerful hardware. This is where I am\nat right now. Apart from that since I will only get this one chance to\nbuy a new server for data processing I need to make sure that I buy\nsomething that can grow over time as our needs change. I don't want to\nbuy a server only to find out later that it cannot meet our needs with\nfuture database projects. I have to balance a limited budget, room for\nfuture performance growth, and current system requirements. Trust me it\nisn't easy. \n\n\nJuan\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Thursday, April 06, 2006 2:57 AM\nTo: [email protected]\nCc: Juan Casero (FL FLC); Luke Lonergan\nSubject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n\nJuan,\n\n> Ok that is beginning to become clear to me. Now I need to determine \n> if this server is worth the investment for us. Maybe it is not a \n> speed daemon but to be honest the licensing costs of an SMP aware \n> RDBMS is outside our budget.\n\nYou still haven't explained why you want multi-threaded queries. This\nis sounding like keeping up with the Joneses.\n\n--\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 6 Apr 2006 06:32:45 -0500",
"msg_from": "\"Juan Casero \\(FL FLC\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "On 4/6/06, Juan Casero (FL FLC) <[email protected]> wrote:\n> Because I plan to develop a rather large (for us anyway) data warehouse\n> with PostgreSQL. I am looking for the right hardware that can handle\n> queries on a database that might grow to over a 100 gigabytes.\n\nYou need to look for a server that has fast I/O. 100 GB of data will\ntake a long time to scan through and won't fit in RAM.\n\n> Right\n> now our decision support system based on postgresql 8.1.3 stores retail\n> sales information for about 4 four years back *but* only as weekly\n> summaries. I want to build the system so it can handle daily sales\n> transactions also. You can imagine how many more records this will\n> involve so I am looking for hardware that can give me the performance I\n> need to make this project useable.\n\nSounds like you need to be doing a few heavy queries when you do this,\nnot tons of small queries. That likely means you need fewer CPUs that\nare very fast.\n\n> In other words parsing and loading\n> the daily transaction logs for our stores is likely to take huge amounts\n> of effort. I need a machine that can complete the task in a reasonable\n> amount of time.\n\nSee my previous comment\n\n> As people start to query the database to find sales\n> related reports and information I need to make sure the queries will run\n> reasonably fast for them.\n\nGet more than one CPU core and make sure you have a lot of drive\nspindles. You will definately want to be able to ensure a long running\nquery doesn't hog your i/o system. I have a server with a single disk\nand when we do a long query the server load will jump from about .2 to\n10 until the long query finishes. More cpus won't help this because\nthe bottle neck is the disk.\n\n> I have already hand optimized all of my\n> queries on the current system. But currently I only have weekly sales\n> summaries. Other divisions in our company have done a similar project\n> using MS SQL Server on SMP hardware far outclassing the database server\n> I currently use and they report heavy loads on the server with less than\n> ideal query run times. I am sure I can do my part to optimize the\n> queries once I start this project but there is only so much you can do.\n> At some point you just need more powerful hardware. This is where I am\n> at right now.\n\nYou say \"this is where I am at right __now__\" but where will you be in\n9 months? Sounds like you will be i/o bound by the time you get above\n10GB.\n\n> Apart from that since I will only get this one chance to\n> buy a new server for data processing I need to make sure that I buy\n> something that can grow over time as our needs change. I don't want to\n> buy a server only to find out later that it cannot meet our needs with\n> future database projects. I have to balance a limited budget, room for\n> future performance growth, and current system requirements. Trust me it\n> isn't easy.\n\nIsn't it about time we had our annual \"what kind of server can I get\nfor $8k\" thread?\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Thu, 6 Apr 2006 09:01:45 -0500",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "Juan,\n\nOn 4/6/06 7:01 AM, \"Matthew Nuzum\" <[email protected]> wrote:\n\n>> Apart from that since I will only get this one chance to\n>> buy a new server for data processing I need to make sure that I buy\n>> something that can grow over time as our needs change. I don't want to\n>> buy a server only to find out later that it cannot meet our needs with\n>> future database projects. I have to balance a limited budget, room for\n>> future performance growth, and current system requirements. Trust me it\n>> isn't easy.\n> \n> Isn't it about time we had our annual \"what kind of server can I get\n> for $8k\" thread?\n\nBased on Juan's description, here's a config that will *definitely* be the\nfastest possible in an $8K budget:\n\nBuy a dual opteron server with 8 x 400GB SATA II disks on a 3Ware 9550SX\nRAID controller with 16GB of RAM pre-installed with Centos 4.3 for $6,000\nhere:\n http://www.asacomputers.com/\n\nDownload the *free* open source Bizgres here:\n http://bgn.greenplum.com/\n\nUse bitmap indexes for columns with less than 10,000 unique values, and your\nsystem will fly through 100GB.\n\nThis is the fastest OSS business intelligence kit for the money, guaranteed.\n\n- Luke\n\n\n",
"msg_date": "Thu, 06 Apr 2006 09:17:39 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "For a DSS type workload with PostgreSQL where you end up with single \nlong running queries on postgresql with about 100GB, you better use \nsomething like Sun Fire V40z with those fast Ultra320 internal drives. \nThis might be perfect low cost complete database in a box.\n\nSun Fire T2000 is great for OLTP where you can end up with hundreds of \nusers doing quick and small lookups and T2000 can crank simple thread \nexecutions far better than others. However when it comes to long running \nqueries you end up using 1/32 of the power and may not live up to your \nexpectations. For example consider your PostgreSQL talking to Apache \nWebServer all on T2000... You can put them in separate zones if you have \ndifferent administrators for them. :-)\n\nAs for PostgreSQL on Solaris, I already have the best parameters to use \non Solaris based on my tests, the default odatasync hurts performance on \nSolaris, so does checkpoint segments, others are tweaked so that they \nare set for bigger databases and hence may not show much difference on \nperformances...\n\nThat said I will still be interested to see your app performance with \npostgreSQL on Sun Fire T2000 as there are always ways of perseverence to \nimprove performance :-)\n\n\nRegards,\nJignesh\n\n\nJuan Casero (FL FLC) wrote:\n\n>Because I plan to develop a rather large (for us anyway) data warehouse\n>with PostgreSQL. I am looking for the right hardware that can handle\n>queries on a database that might grow to over a 100 gigabytes. Right\n>now our decision support system based on postgresql 8.1.3 stores retail\n>sales information for about 4 four years back *but* only as weekly\n>summaries. I want to build the system so it can handle daily sales\n>transactions also. You can imagine how many more records this will\n>involve so I am looking for hardware that can give me the performance I\n>need to make this project useable. In other words parsing and loading\n>the daily transaction logs for our stores is likely to take huge amounts\n>of effort. I need a machine that can complete the task in a reasonable\n>amount of time. As people start to query the database to find sales\n>related reports and information I need to make sure the queries will run\n>reasonably fast for them. I have already hand optimized all of my\n>queries on the current system. But currently I only have weekly sales\n>summaries. Other divisions in our company have done a similar project\n>using MS SQL Server on SMP hardware far outclassing the database server\n>I currently use and they report heavy loads on the server with less than\n>ideal query run times. I am sure I can do my part to optimize the\n>queries once I start this project but there is only so much you can do.\n>At some point you just need more powerful hardware. This is where I am\n>at right now. Apart from that since I will only get this one chance to\n>buy a new server for data processing I need to make sure that I buy\n>something that can grow over time as our needs change. I don't want to\n>buy a server only to find out later that it cannot meet our needs with\n>future database projects. I have to balance a limited budget, room for\n>future performance growth, and current system requirements. Trust me it\n>isn't easy. 
\n>\n>\n>Juan\n>\n>-----Original Message-----\n>From: Josh Berkus [mailto:[email protected]] \n>Sent: Thursday, April 06, 2006 2:57 AM\n>To: [email protected]\n>Cc: Juan Casero (FL FLC); Luke Lonergan\n>Subject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n>\n>Juan,\n>\n> \n>\n>>Ok that is beginning to become clear to me. Now I need to determine \n>>if this server is worth the investment for us. Maybe it is not a \n>>speed daemon but to be honest the licensing costs of an SMP aware \n>>RDBMS is outside our budget.\n>> \n>>\n>\n>You still haven't explained why you want multi-threaded queries. This\n>is sounding like keeping up with the Joneses.\n>\n>--\n>Josh Berkus\n>Aglio Database Solutions\n>San Francisco\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n> \n>\n",
"msg_date": "Thu, 06 Apr 2006 18:13:07 -0400",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
},
{
"msg_contents": "\nI am thinking the most flexible solution would be to get a dual Operon\nmachine, and initially do both data loading and queries on the same\nmachine. When the load gets too high, buy a second machine and set it\nup as a Slony slave and run your queries on that, and do the data loads\non the original machine as master.\n\n---------------------------------------------------------------------------\n\nJuan Casero (FL FLC) wrote:\n> Because I plan to develop a rather large (for us anyway) data warehouse\n> with PostgreSQL. I am looking for the right hardware that can handle\n> queries on a database that might grow to over a 100 gigabytes. Right\n> now our decision support system based on postgresql 8.1.3 stores retail\n> sales information for about 4 four years back *but* only as weekly\n> summaries. I want to build the system so it can handle daily sales\n> transactions also. You can imagine how many more records this will\n> involve so I am looking for hardware that can give me the performance I\n> need to make this project useable. In other words parsing and loading\n> the daily transaction logs for our stores is likely to take huge amounts\n> of effort. I need a machine that can complete the task in a reasonable\n> amount of time. As people start to query the database to find sales\n> related reports and information I need to make sure the queries will run\n> reasonably fast for them. I have already hand optimized all of my\n> queries on the current system. But currently I only have weekly sales\n> summaries. Other divisions in our company have done a similar project\n> using MS SQL Server on SMP hardware far outclassing the database server\n> I currently use and they report heavy loads on the server with less than\n> ideal query run times. I am sure I can do my part to optimize the\n> queries once I start this project but there is only so much you can do.\n> At some point you just need more powerful hardware. This is where I am\n> at right now. Apart from that since I will only get this one chance to\n> buy a new server for data processing I need to make sure that I buy\n> something that can grow over time as our needs change. I don't want to\n> buy a server only to find out later that it cannot meet our needs with\n> future database projects. I have to balance a limited budget, room for\n> future performance growth, and current system requirements. Trust me it\n> isn't easy. \n> \n> \n> Juan\n> \n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]] \n> Sent: Thursday, April 06, 2006 2:57 AM\n> To: [email protected]\n> Cc: Juan Casero (FL FLC); Luke Lonergan\n> Subject: Re: [PERFORM] Sun Fire T2000 and PostgreSQL 8.1.3\n> \n> Juan,\n> \n> > Ok that is beginning to become clear to me. Now I need to determine \n> > if this server is worth the investment for us. Maybe it is not a \n> > speed daemon but to be honest the licensing costs of an SMP aware \n> > RDBMS is outside our budget.\n> \n> You still haven't explained why you want multi-threaded queries. This\n> is sounding like keeping up with the Joneses.\n> \n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Wed, 12 Apr 2006 22:55:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sun Fire T2000 and PostgreSQL 8.1.3"
}
] |
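Luke's rule of thumb above, bitmap indexes for columns with fewer than roughly 10,000 distinct values, can be checked against the actual data before committing to a design. A sketch in plain SQL; "daily_sales", "store_id" and "product_id" are placeholder names, not tables from the thread.

-- Count distinct values per candidate column; columns that land well under
-- ~10,000 are the bitmap-index candidates.
SELECT count(DISTINCT store_id)   AS store_cardinality,
       count(DISTINCT product_id) AS product_cardinality
FROM daily_sales;

The Bizgres-specific syntax for actually creating a bitmap index is not reproduced here, since it is product-specific.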
[
{
"msg_contents": "> > On 3/22/06 5:56 AM, \"Spiegelberg, Greg\" <[email protected]> wrote:\n> >\n> > > Has anyone tested PostgreSQL 8.1.x compiled with Intel's Linux C/C++\n> > > compiler?\n> >\n> > We used to compile 8.0 with icc and 7.x before that. We found very good\n> > performance gains for Intel P4 architecture processors and some gains for\n> > AMD Athlon.\n> >\n> > Lately, the gcc compilers have caught up with icc on pipelining\n> > optimizations and they generate better code for Opteron than icc, so we\n> > found that icc was significantly slower than gcc on Opteron and no\n> > different\n> > on P4/Xeon.\n> >\n> > Maybe things have changed in newer versions of icc, the last tests I did\n> > were about 1 year ago.\n\nEnterpriseDB is seeing the same thing, that gcc4 now has the same\nperformance as icc, and is more flexible.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 6 Apr 2006 09:30:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intel C/C++ Compiler Tests (fwd)"
}
] |
[
{
"msg_contents": "Hi,\n\nI am working on Web Based application using Perl and Apache.\nI have to show to the users some query results by pages.\nSome time the result can be over 1000 rows (but can be more).\nThe question is how to make this.\n\nThe one way is to use OFFSET and LIMIT. That's OK but every time the \nwhole query must be parsed and executed.\n\nIf I use cursors it's better but my problem is that cursors live only in \nthe current transaction.\nSo when the Web Server finish I've lost the transaction and the cursor.\n\nThere is some software written from my coleagues that on every server \nrequest open a transaction and cursor. Move to the requested\npage and show the result(After that the script finishes, so is the \ntransaction). So my question is.\nShould I rewrte this by using OFFSET/LIMIT or it is better every time to \ncreate the cursor and use it to get the rows.\nIs there a way to save the cursor between separe Browser request (and to \ngive it time to live)? Or After all OFFSET and LIMIT?\n\nThanks in advance.\n\nKaloyan Iliev\n\n",
"msg_date": "Thu, 06 Apr 2006 17:48:05 +0300",
"msg_from": "Kaloyan Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "CURSOR OR OFFSET/LIMIT"
},
{
"msg_contents": "\nOn Apr 6, 2006, at 10:48 AM, Kaloyan Iliev wrote:\n\n> If I use cursors it's better but my problem is that cursors live \n> only in the current transaction.\n> So when the Web Server finish I've lost the transaction and the \n> cursor.\n\n\nCursors can live outside the transaction if you declare them WITH \nHOLD specified. But that still may not help you in a web environment \nif you want to break the results into pages served on separate \nrequests (and possibly different connections).\n\nhttp://www.postgresql.org/docs/8.1/interactive/sql-declare.html\n\n> Is there a way to save the cursor between separe Browser request \n> (and to give it time to live)?\n\nSure, but you need to add a lot of connection management to do this. \nYou would need to keep track of the cursors and make sure a \nsubsequent request uses the right connection.\n\n\n\n\n\n\nJohn DeSoi, Ph.D.\nhttp://pgedit.com/\nPower Tools for PostgreSQL\n\n",
"msg_date": "Thu, 6 Apr 2006 13:11:33 -0400",
"msg_from": "John DeSoi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CURSOR OR OFFSET/LIMIT"
}
] |
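The two alternatives discussed above look roughly like this in plain SQL; "results" and its "id" ordering column are placeholders. As John points out, the held cursor only helps if later page requests can be routed back to the same connection.

-- OFFSET/LIMIT: each page re-plans and re-executes the whole query.
SELECT * FROM results ORDER BY id LIMIT 100 OFFSET 200;

-- Cursor declared WITH HOLD: it survives the COMMIT that created it.
BEGIN;
DECLARE page_cur CURSOR WITH HOLD FOR SELECT * FROM results ORDER BY id;
COMMIT;

FETCH FORWARD 100 FROM page_cur;   -- first page
FETCH FORWARD 100 FROM page_cur;   -- next page
CLOSE page_cur;                    -- release the held result set when done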
[
{
"msg_contents": "Hello\n\nI have a sql statement that takes 108489.780 ms with 8.0.7 in a\nRHEL4/amd64linux server with 2xAMD Opteron(tm) Processor 275 2.00GHz /\n8GB RAM and only 4193.588 ms with 7.4.12 in a RHEL3/386linux server with\n2xIntel(R) Xeon(TM) CPU 2.40GHz / 4GB RAM.\n\nSome information:\n\n- There is no IO when I am running the sql, but it uses 99% of the cpu. \n- I run VACUUM VERBOSE ANALYZE in both databases before the test.\n- The databases are identical.\n- No other jobs running when testing.\n- Some different parameters between 7.4.12 and 8.0.7 :\n\n7.4.12:\n-------\nshared_buffers = 114966 #(15% of ram) \nsort_mem = 16384 \nvacuum_mem = 524288 \nwal_buffers = 64 \ncheckpoint_segments = 16 \neffective_cache_size = 383220 #(50% ram)\nrandom_page_cost = 3 \ndefault_statistics_target = 100 \n\n8.0.7:\n------\nshared_buffers = 250160 #(25% ram) \nwork_mem = 8192 \nmaintenance_work_mem = 131072 \nwal_buffers = 128 \ncheckpoint_segments = 64 \neffective_cache_size = 500321 #(50% ram)\nrandom_page_cost = 3 \ndefault_statistics_target = 100\n \nAny ideas of what I can test/configurate to find out why this happens?\nThanks in advance.\n\n\n*******************\nWith 7.4.12\n*******************\nrtprod=# explain analyze SELECT DISTINCT main.* FROM Users main ,\nPrincipals Principals_1, ACL ACL_2, Groups Groups_3, CachedGroupMembers\nCachedGroupMembers_4 WHERE ((ACL_2.RightName = 'OwnTicket')) AND\n((CachedGroupMembers_4.MemberId = Principals_1.id)) AND ((Groups_3.id =\nCachedGroupMembers_4.GroupId)) AND ((Principals_1.Disabled =\n'0')OR(Principals_1.Disabled = '0')) AND ((Principals_1.id != '1')) AND\n((main.id = Principals_1.id)) AND ( ( ACL_2.PrincipalId =\nGroups_3.id AND ACL_2.PrincipalType = 'Group' AND ( Groups_3.Domain =\n'SystemInternal' OR Groups_3.Domain = 'UserDefined' OR Groups_3.Domain =\n'ACLEquivalence')) OR ( ( (Groups_3.Domain = 'RT::Queue-Role' ) ) AND\nGroups_3.Type = ACL_2.PrincipalType) ) AND (ACL_2.ObjectType =\n'RT::System' OR (ACL_2.ObjectType = 'RT::Queue') ) ORDER BY main.Name\nASC;\n \n QUERYPLAN \n\n Unique (cost=40250.00..40250.09 rows=1 width=695) (actual\ntime=3974.528..4182.343 rows=264 loops=1)\n -> Sort (cost=40250.00..40250.00 rows=1 width=695) (actual\ntime=3974.522..3992.487 rows=24697 loops=1)\n Sort Key: main.name, main.id, main.\"password\", main.comments,\nmain.signature, main.emailaddress, main.freeformcontactinfo,\nmain.organization, main.realname, main.nickname, main.lang,\nmain.emailencoding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.pgpkey, main.creator,\nmain.created, main.lastupdatedby, main.lastupdated\n -> Nested Loop (cost=33.67..40249.99 rows=1 width=695)\n(actual time=37.793..3240.146 rows=24697 loops=1)\n -> Nested Loop (cost=33.67..40246.95 rows=1 width=699)\n(actual time=37.754..2635.812 rows=24697 loops=1)\n -> Nested Loop (cost=33.67..40242.47 rows=1\nwidth=4) (actual time=37.689..2091.634 rows=24755 loops=1)\n -> Nested Loop (cost=33.67..40225.72 rows=1\nwidth=4) (actual time=37.663..1967.388 rows=54 loops=1)\n Join Filter:\n((((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text) OR\n(\"inner\".principalid = \"outer\".id)) AND (((\"outer\".\"type\")::text =\n(\"inner\".principaltype)::text) OR (\"inner\".principalid = \"outer\".id))\nAND (((\"outer\".\"domain\")::text = 
'RT::Queue-Role'::text) OR\n((\"inner\".principaltype)::text = 'Group'::text)) AND\n(((\"outer\".\"type\")::text = (\"inner\".principaltype)::text) OR\n((\"inner\".principaltype)::text = 'Group'::text)) AND\n(((\"outer\".\"type\")::text = (\"inner\".principaltype)::text) OR\n((\"outer\".\"domain\")::text = 'SystemInternal'::text) OR\n((\"outer\".\"domain\")::text = 'UserDefined'::text) OR\n((\"outer\".\"domain\")::text = 'ACLEquivalence'::text)))\n -> Index Scan using groups4, groups4,\ngroups4, groups4 on groups groups_3 (cost=0.00..2164.05 rows=15845\nwidth=32) (actual time=0.041..43.636 rows=16160 loops=1)\n Index Cond: (((\"domain\")::text =\n'RT::Queue-Role'::text) OR ((\"domain\")::text = 'SystemInternal'::text)\nOR ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text =\n'ACLEquivalence'::text))\n -> Materialize (cost=33.67..34.15\nrows=48 width=13) (actual time=0.001..0.040 rows=54 loops=16160)\n -> Seq Scan on acl acl_2 \n(cost=0.00..33.67 rows=48 width=13) (actual time=0.016..0.989 rows=54\nloops=1)\n Filter: (((rightname)::text\n= 'OwnTicket'::text) AND (((objecttype)::text = 'RT::System'::text) OR\n((objecttype)::text = 'RT::Queue'::text)))\n -> Index Scan using cachedgroupmembers3 on\ncachedgroupmembers cachedgroupmembers_4 (cost=0.00..16.45 rows=23\nwidth=8) (actual time=0.015..1.296 rows=458 loops=54)\n Index Cond: (\"outer\".id =\ncachedgroupmembers_4.groupid)\n -> Index Scan using users_pkey on users main \n(cost=0.00..4.47 rows=1 width=695) (actual time=0.007..0.009 rows=1\nloops=24755)\n Index Cond: (main.id = \"outer\".memberid)\n -> Index Scan using principals_pkey on principals\nprincipals_1 (cost=0.00..3.02 rows=1 width=4) (actual time=0.010..0.012\nrows=1 loops=24697)\n Index Cond: (\"outer\".id = principals_1.id)\n Filter: ((disabled = 0::smallint) AND (id <> 1))\n Total runtime: 4193.588 ms\n(21 rows)\n\n\n*****************************\nWith 8.0.7\n*****************************\n\nrtprod=# explain analyze SELECT DISTINCT main.* FROM Users main ,\nPrincipals Principals_1, ACL ACL_2, Groups Groups_3, CachedGroupMembers\nCachedGroupMembers_4 WHERE ((ACL_2.RightName = 'OwnTicket')) AND\n((CachedGroupMembers_4.MemberId = Principals_1.id)) AND ((Groups_3.id =\nCachedGroupMembers_4.GroupId)) AND ((Principals_1.Disabled =\n'0')OR(Principals_1.Disabled = '0')) AND ((Principals_1.id != '1')) AND\n((main.id = Principals_1.id)) AND ( ( ACL_2.PrincipalId =\nGroups_3.id AND ACL_2.PrincipalType = 'Group' AND ( Groups_3.Domain =\n'SystemInternal' OR Groups_3.Domain = 'UserDefined' OR Groups_3.Domain =\n'ACLEquivalence')) OR ( ( (Groups_3.Domain = 'RT::Queue-Role' ) ) AND\nGroups_3.Type = ACL_2.PrincipalType) ) AND (ACL_2.ObjectType =\n'RT::System' OR (ACL_2.ObjectType = 'RT::Queue') ) ORDER BY main.Name\nASC;\n\n\n QUERYPLAN \n Unique (cost=164248.03..164250.91 rows=33 width=695) (actual\ntime=108249.642..108479.808 rows=264 loops=1)\n -> Sort (cost=164248.03..164248.11 rows=33 width=695) (actual\ntime=108249.637..108293.474 rows=24697 loops=1)\n Sort Key: main.name, main.id, main.\"password\", main.comments,\nmain.signature, main.emailaddress, main.freeformcontactinfo,\nmain.organization, main.realname, main.nickname, main.lang,\nmain.emailencoding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.pgpkey, 
main.creator,\nmain.created, main.lastupdatedby, main.lastupdated\n -> Nested Loop (cost=4702.57..164247.19 rows=33 width=695)\n(actual time=2949.010..107407.145 rows=24697 loops=1)\n Join Filter: (((\"inner\".principalid = \"outer\".id) AND\n((\"inner\".principaltype)::text = 'Group'::text) AND\n(((\"outer\".\"domain\")::text = 'SystemInternal'::text) OR\n((\"outer\".\"domain\")::text = 'UserDefined'::text) OR\n((\"outer\".\"domain\")::text = 'ACLEquivalence'::text))) OR\n(((\"outer\".\"domain\")::text = 'RT::Queue-Role'::text) AND\n((\"outer\".\"type\")::text = (\"inner\".principaltype)::text)))\n -> Hash Join (cost=4667.85..51078.88 rows=62852\nwidth=727) (actual time=649.028..13602.451 rows=513264 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".id)\n -> Merge Join (cost=0.00..32353.73 rows=62852\nwidth=699) (actual time=0.809..6644.928 rows=513264 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".memberid)\n -> Merge Join (cost=0.00..6379.54\nrows=15877 width=699) (actual time=0.118..911.395 rows=15866 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using users_pkey on\nusers main (cost=0.00..1361.01 rows=15880 width=695) (actual\ntime=0.016..49.141 rows=15880 loops=1)\n -> Index Scan using principals_pkey on\nprincipals principals_1 (cost=0.00..4399.08 rows=168394 width=4)\n(actual time=0.026..412.688 rows=168409 loops=1)\n Filter: ((disabled = 0::smallint)\nAND (id <> 1))\n -> Index Scan using cachedgroupmembers2 on\ncachedgroupmembers cachedgroupmembers_4 (cost=0.00..18647.25\nrows=666758 width=8) (actual time=0.008..1513.877 rows=666754 loops=1)\n -> Hash (cost=3094.48..3094.48 rows=152548\nwidth=32) (actual time=637.618..637.618 rows=0 loops=1)\n -> Seq Scan on groups groups_3 \n(cost=0.00..3094.48 rows=152548 width=32) (actual time=0.017..333.669\nrows=152548 loops=1)\n -> Materialize (cost=34.72..35.20 rows=48 width=13)\n(actual time=0.001..0.077 rows=54 loops=513264)\n -> Seq Scan on acl acl_2 (cost=0.00..34.67\nrows=48 width=13) (actual time=0.013..0.850 rows=54 loops=1)\n Filter: (((rightname)::text =\n'OwnTicket'::text) AND (((objecttype)::text = 'RT::System'::text) OR\n((objecttype)::text = 'RT::Queue'::text)))\n Total runtime: 108486.306 ms\n(21 rows)\n\n\n\n\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n",
"msg_date": "Fri, 07 Apr 2006 14:45:33 +0200",
"msg_from": "Rafael Martinez Guerrero <[email protected]>",
"msg_from_op": true,
"msg_subject": "Same SQL, 104296ms of difference between 7.4.12 and 8.0.7"
},
{
"msg_contents": "Rafael Martinez Guerrero wrote:\n> Hello\n> \n> I have a sql statement that takes 108489.780 ms with 8.0.7 in a\n> RHEL4/amd64linux server with 2xAMD Opteron(tm) Processor 275 2.00GHz /\n> 8GB RAM and only 4193.588 ms with 7.4.12 in a RHEL3/386linux server with\n> 2xIntel(R) Xeon(TM) CPU 2.40GHz / 4GB RAM.\n> \n> Some information:\n> \n> - There is no IO when I am running the sql, but it uses 99% of the cpu. \n> - I run VACUUM VERBOSE ANALYZE in both databases before the test.\n> - The databases are identical.\n> - No other jobs running when testing.\n> - Some different parameters between 7.4.12 and 8.0.7 :\n> \n> 7.4.12:\n> -------\n> shared_buffers = 114966 #(15% of ram) \n> sort_mem = 16384 \n> vacuum_mem = 524288 \n> wal_buffers = 64 \n> checkpoint_segments = 16 \n> effective_cache_size = 383220 #(50% ram)\n> random_page_cost = 3 \n> default_statistics_target = 100 \n> \n> 8.0.7:\n> ------\n> shared_buffers = 250160 #(25% ram) \n> work_mem = 8192 \n> maintenance_work_mem = 131072 \n> wal_buffers = 128 \n> checkpoint_segments = 64 \n> effective_cache_size = 500321 #(50% ram)\n> random_page_cost = 3 \n> default_statistics_target = 100\n> \n> Any ideas of what I can test/configurate to find out why this happens?\n> Thanks in advance.\n\nI haven't looked in detail at the plans, but what stands out to me is \nthat you've got a sort with a lot of columns and you've halved sort_mem \n(work_mem). Try increasing it (perhaps to 32000 even).\n\tset work_mem = 32000;\n\nGive that a quick go and see what happens. If it doesn't work, we'll \nlook at the plans in more detail.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Apr 2006 14:31:17 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and"
},
{
"msg_contents": "On Fri, 2006-04-07 at 15:31, Richard Huxton wrote:\n> Rafael Martinez Guerrero wrote:\n> > Hello\n> > \n> > I have a sql statement that takes 108489.780 ms with 8.0.7 in a\n> > RHEL4/amd64linux server with 2xAMD Opteron(tm) Processor 275 2.00GHz /\n> > 8GB RAM and only 4193.588 ms with 7.4.12 in a RHEL3/386linux server with\n> > 2xIntel(R) Xeon(TM) CPU 2.40GHz / 4GB RAM.\n> > \n> > Some information:\n> > \n> > - There is no IO when I am running the sql, but it uses 99% of the cpu. \n> > - I run VACUUM VERBOSE ANALYZE in both databases before the test.\n> > - The databases are identical.\n> > - No other jobs running when testing.\n> > - Some different parameters between 7.4.12 and 8.0.7 :\n> > \n> > 7.4.12:\n> > -------\n> > shared_buffers = 114966 #(15% of ram) \n> > sort_mem = 16384 \n> > vacuum_mem = 524288 \n> > wal_buffers = 64 \n> > checkpoint_segments = 16 \n> > effective_cache_size = 383220 #(50% ram)\n> > random_page_cost = 3 \n> > default_statistics_target = 100 \n> > \n> > 8.0.7:\n> > ------\n> > shared_buffers = 250160 #(25% ram) \n> > work_mem = 8192 \n> > maintenance_work_mem = 131072 \n> > wal_buffers = 128 \n> > checkpoint_segments = 64 \n> > effective_cache_size = 500321 #(50% ram)\n> > random_page_cost = 3 \n> > default_statistics_target = 100\n> > \n> > Any ideas of what I can test/configurate to find out why this happens?\n> > Thanks in advance.\n> \n> I haven't looked in detail at the plans, but what stands out to me is \n> that you've got a sort with a lot of columns and you've halved sort_mem \n> (work_mem). Try increasing it (perhaps to 32000 even).\n> \tset work_mem = 32000;\n> \n> Give that a quick go and see what happens. If it doesn't work, we'll \n> look at the plans in more detail.\n\nI know that this SQL could be done in a much better way, but I can not\nchange it at the moment. \n\nwork_mem = 16384:\n-----------------\nAfter restarting the database and running the explain two times:\n107911.229 ms\n\nwork_mem = 32768:\n-----------------\nAfter restarting the database and running the explain two times:\n103988.337 ms\n\n\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n",
"msg_date": "Fri, 07 Apr 2006 16:00:44 +0200",
"msg_from": "Rafael Martinez Guerrero <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and"
},
{
"msg_contents": "Rafael Martinez Guerrero wrote:\n>>> Any ideas of what I can test/configurate to find out why this happens?\n>>> Thanks in advance.\n>> I haven't looked in detail at the plans, but what stands out to me is \n>> that you've got a sort with a lot of columns and you've halved sort_mem \n>> (work_mem). Try increasing it (perhaps to 32000 even).\n>> \tset work_mem = 32000;\n>>\n>> Give that a quick go and see what happens. If it doesn't work, we'll \n>> look at the plans in more detail.\n> \n> I know that this SQL could be done in a much better way, but I can not\n> change it at the moment. \n> \n> work_mem = 16384:\n> -----------------\n> After restarting the database and running the explain two times:\n> 107911.229 ms\n> \n> work_mem = 32768:\n> -----------------\n> After restarting the database and running the explain two times:\n> 103988.337 ms\n\nDamn! I hate it when I have to actually work at a problem :-)\n\n\nWell, the first thing that strikes me is that the row estimates are \nterrible for 7.4.12 (which runs quickly) and much better for 8.0.7 \n(which runs slowly). Which suggests you were lucky before.\n\nThe second thing I notice is the bit that goes: Materialize ... Seq Scan \non acl acl_2. If you compare the two you'll see that the 7.4 version \nloops 16,160 times but 8.0 loops 513,264 times.\n\nThis is a bad choice, and I'm guessing it's made because it gets the row \nestimate wrong:\nHash Join (cost=4667.85..51078.88 rows=62852 width=727) (actual \ntime=649.028..13602.451 rows=513264 loops=1)\n\nThat's the comparison Groups_3.id = CachedGroupMembers_4.GroupId if I'm \nreading this correctly. Is there anything unusual about those two columns?\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Apr 2006 15:40:51 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and"
},
{
"msg_contents": "On Fri, 2006-04-07 at 16:41 +0200, Gábriel Ákos wrote:\n\n> > \n> > Any ideas of what I can test/configurate to find out why this happens?\n> > Thanks in advance.\n> \n> Increase work_mem to 50% of memory, and don't care about \n> maintenance_work_mem and effective_cache_size, they don't matter in this \n> case.\n> \n\nThe problem is not the amount of memory. It works much faster with only\n16M and 7.4.12 than 8.0.7.\n\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n",
"msg_date": "Fri, 07 Apr 2006 16:53:40 +0200",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and"
},
{
"msg_contents": "Rafael Martinez Guerrero <[email protected]> writes:\n> I have a sql statement that takes 108489.780 ms with 8.0.7 in a\n> RHEL4/amd64linux server with 2xAMD Opteron(tm) Processor 275 2.00GHz /\n> 8GB RAM and only 4193.588 ms with 7.4.12 in a RHEL3/386linux server with\n> 2xIntel(R) Xeon(TM) CPU 2.40GHz / 4GB RAM.\n\nI think you've discovered a planner regression.\nSimplified test case using the regression database:\n\nexplain select * from tenk1 a, tenk1 b\nwhere (a.ten = b.ten and (a.unique1 = 100 or a.unique1 = 101))\n or (a.hundred = b.hundred and a.unique1 = 42);\n\n7.4:\n Nested Loop (cost=0.00..2219.74 rows=4 width=488)\n Join Filter: (((\"outer\".hundred = \"inner\".hundred) OR (\"outer\".ten = \"inner\".ten)) AND ((\"outer\".unique1 = 42) OR (\"outer\".ten = \"inner\".ten)) AND ((\"outer\".hundred = \"inner\".hundred) OR (\"outer\".unique1 = 100) OR (\"outer\".unique1 = 101)))\n -> Index Scan using tenk1_unique1, tenk1_unique1, tenk1_unique1 on tenk1 a (cost=0.00..18.04 rows=3 width=244)\n Index Cond: ((unique1 = 42) OR (unique1 = 100) OR (unique1 = 101))\n -> Seq Scan on tenk1 b (cost=0.00..458.24 rows=10024 width=244)\n(5 rows)\n\n8.0:\n Nested Loop (cost=810.00..6671268.00 rows=2103 width=488)\n Join Filter: (((\"outer\".ten = \"inner\".ten) AND ((\"outer\".unique1 = 100) OR (\"outer\".unique1 = 101))) OR ((\"outer\".hundred = \"inner\".hundred) AND (\"outer\".unique1 = 42)))\n -> Seq Scan on tenk1 a (cost=0.00..458.00 rows=10000 width=244)\n -> Materialize (cost=810.00..1252.00 rows=10000 width=244)\n -> Seq Scan on tenk1 b (cost=0.00..458.00 rows=10000 width=244)\n(5 rows)\n\nNote the failure to pull out the unique1 conditions from the join clause\nand use them with the index. I didn't bother to do EXPLAIN ANALYZE;\nthis plan obviously sucks compared to the other.\n\n8.1:\nTRAP: FailedAssertion(\"!(!restriction_is_or_clause((RestrictInfo *) orarg))\", File: \"indxpath.c\", Line: 479)\nLOG: server process (PID 12201) was terminated by signal 6\nserver closed the connection unexpectedly\n\nOh dear.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 11:29:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and 8.0.7 "
},
{
"msg_contents": "I wrote:\n> Rafael Martinez Guerrero <[email protected]> writes:\n>> I have a sql statement that takes 108489.780 ms with 8.0.7 in a\n>> RHEL4/amd64linux server with 2xAMD Opteron(tm) Processor 275 2.00GHz /\n>> 8GB RAM and only 4193.588 ms with 7.4.12 in a RHEL3/386linux server with\n>> 2xIntel(R) Xeon(TM) CPU 2.40GHz / 4GB RAM.\n\n> I think you've discovered a planner regression.\n> Simplified test case using the regression database:\n\n> explain select * from tenk1 a, tenk1 b\n> where (a.ten = b.ten and (a.unique1 = 100 or a.unique1 = 101))\n> or (a.hundred = b.hundred and a.unique1 = 42);\n\nI've repaired the assertion crash in 8.1/HEAD, but I don't think it's\npractical to teach 8.0 to optimize queries like this nicely. The reason\n7.4 can do it is that 7.4 forces the WHERE condition into CNF, ie\n\n (a.hundred = b.hundred OR a.ten = b.ten) AND\n (a.unique1 = 42 OR a.ten = b.ten) AND\n (a.hundred = b.hundred OR a.unique1 = 100 OR a.unique1 = 101) AND\n (a.unique1 = 42 OR a.unique1 = 100 OR a.unique1 = 101)\n\nfrom which it's easy to extract the index condition for A. We decided\nthat forcing to CNF wasn't such a hot idea, so 8.0 and later don't do\nit, but 8.0's logic for extracting index conditions from joinquals isn't\nup to the problem of handling sub-ORs. Fixing that looks like a larger\nchange than I care to back-patch into an old release.\n\nMy recommendation is to update to 8.1.4 when it comes out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 13:36:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and 8.0.7 "
},
{
"msg_contents": "On Fri, 2006-04-07 at 13:36 -0400, Tom Lane wrote:\n> I wrote:\n> > Rafael Martinez Guerrero <[email protected]> writes:\n> >> I have a sql statement that takes 108489.780 ms with 8.0.7 in a\n> >> RHEL4/amd64linux server with 2xAMD Opteron(tm) Processor 275 2.00GHz /\n> >> 8GB RAM and only 4193.588 ms with 7.4.12 in a RHEL3/386linux server with\n> >> 2xIntel(R) Xeon(TM) CPU 2.40GHz / 4GB RAM.\n> \n> > I think you've discovered a planner regression.\n> > Simplified test case using the regression database:\n> \n> > explain select * from tenk1 a, tenk1 b\n> > where (a.ten = b.ten and (a.unique1 = 100 or a.unique1 = 101))\n> > or (a.hundred = b.hundred and a.unique1 = 42);\n> \n> I've repaired the assertion crash in 8.1/HEAD, but I don't think it's\n> practical to teach 8.0 to optimize queries like this nicely. The reason\n> 7.4 can do it is that 7.4 forces the WHERE condition into CNF, ie\n> \n[..................]\n\nTom, thank you very much for your help. As I suspected this was a more\ncomplicated problem than the configuration of some parameters :( . Good\nthat we have found out this now and not after the upgrade.\n\nAll our upgrade plans and testing for all our databases have been done\nfor/with 8.0.x (yes, I know 8.1.x is much better, but I am working in a\nconservative place from the sysadm point of view). We will have to\nchange our plans and go for 8.1 if we want this to work. \n\n> My recommendation is to update to 8.1.4 when it comes out.\n\nAny idea about when 8.1.4 will be released?\nThanks again.\n\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n",
"msg_date": "Fri, 07 Apr 2006 21:04:51 +0200",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same SQL, 104296ms of difference between 7.4.12 and"
}
] |
[
{
"msg_contents": "I have a web server with PostgreSQL and RHEL. It hosts a search\nengine, and each time some one makes a query, it uses the HDD Raid\narray. The DB is not very big, it is less than a GB. I plan to add\nmore RAM anyway.\n\nWhat I'd like to do is find out how to keep the whole DB in RAM so\nthat each time some one does a query, it doesn't use the HDD. Is it\npossible, if so, how?\nThanks,\n\nCharles.\n",
"msg_date": "Fri, 7 Apr 2006 11:37:34 -0300",
"msg_from": "\"Charles A. Landemaine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Loading the entire DB into RAM"
},
{
"msg_contents": "If memory serves me correctly I have seen several posts about this in \nthe past.\n\nI'll try to recall highlights.\n\n1. Create a md in linux sufficiently large enough to handle the data set \nyou are wanting to store.\n2. Create a HD based copy somewhere as your permanent storage mechanism.\n3. Start up your PostgreSQL instance with the MD as the data store\n4. Load your data to the MD instance.\n5. Figure out how you will change indexes _and_ ensure that your disk \nstorage is consistent with your MD instance.\n\nI haven't done so, but it would be interesting to have a secondary \ndatabase somewhere that is your primary storage. It needn't be \nespecially powerful, or even available. It serves as the place to \ngenerate your indexing data. You could then use SLONY to propogate the \ndata to the MD production system.\n\nOf course, if you are updating your system that resides in ram, you \nshould be thinking the other way. Have SLONY replicate changes to the \nother, permanent storage, system.\n\nEither way you do it, I can't think of an out of the box method to doing \nit. Somehow one has to transfer data from permanent storage to the md \ninstance, and, likewise, back to permanent storage.\n\nOut of curiosity, what are you using as the search engine?\n\n\nCharles A. Landemaine wrote:\n> I have a web server with PostgreSQL and RHEL. It hosts a search\n> engine, and each time some one makes a query, it uses the HDD Raid\n> array. The DB is not very big, it is less than a GB. I plan to add\n> more RAM anyway.\n>\n> What I'd like to do is find out how to keep the whole DB in RAM so\n> that each time some one does a query, it doesn't use the HDD. Is it\n> possible, if so, how?\n> Thanks,\n>\n> Charles.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n>\n> \n\n",
"msg_date": "Fri, 07 Apr 2006 08:54:26 -0600",
"msg_from": "Matt Davies | Postgresql List <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading the entire DB into RAM"
},
{
"msg_contents": "On 4/7/06, Matt Davies | Postgresql List <[email protected]> wrote:\n> Out of curiosity, what are you using as the search engine?\n\nThank you. We designed the search engine ourself (we didn't use a\nready-to-use solution).\n\n--\nCharles A. Landemaine.\n",
"msg_date": "Fri, 7 Apr 2006 12:00:59 -0300",
"msg_from": "\"Charles A. Landemaine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Loading the entire DB into RAM"
},
{
"msg_contents": "\"Charles A. Landemaine\" <[email protected]> writes:\n> What I'd like to do is find out how to keep the whole DB in RAM so\n> that each time some one does a query, it doesn't use the HDD. Is it\n> possible, if so, how?\n\nThat should happen essentially for free, if the kernel doesn't have any\nbetter use for the memory --- anything read from disk once will stay in\nkernel disk cache. Perhaps you need to take a closer look at your\nkernel VM parameters. Or maybe you don't have enough RAM yet for both\nthe DB contents and the processes you need to run.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 11:25:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading the entire DB into RAM "
},
{
"msg_contents": "On 4/7/06, Charles A. Landemaine <[email protected]> wrote:\n> I have a web server with PostgreSQL and RHEL. It hosts a search\n> engine, and each time some one makes a query, it uses the HDD Raid\n> array. The DB is not very big, it is less than a GB. I plan to add\n> more RAM anyway.\n>\n> What I'd like to do is find out how to keep the whole DB in RAM so\n> that each time some one does a query, it doesn't use the HDD. Is it\n> possible, if so, how?\n\ndon't bother.\n\nIf your database is smaller than ram on the box, the operating will\ncache it quite effectively. All you should be worrying about is to\nset fsync=on (you care about your data) or off (you don't). If your\ndata is truly static you might get better performance out of a\nin-process data storage, like sqlite for example.\n\nMerlin\n",
"msg_date": "Fri, 7 Apr 2006 11:29:57 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading the entire DB into RAM"
},
{
"msg_contents": "On Fri, 2006-04-07 at 09:54, Matt Davies | Postgresql List wrote:\n> If memory serves me correctly I have seen several posts about this in \n> the past.\n> \n> I'll try to recall highlights.\n> \n> 1. Create a md in linux sufficiently large enough to handle the data set \n> you are wanting to store.\n> 2. Create a HD based copy somewhere as your permanent storage mechanism.\n> 3. Start up your PostgreSQL instance with the MD as the data store\n> 4. Load your data to the MD instance.\n> 5. Figure out how you will change indexes _and_ ensure that your disk \n> storage is consistent with your MD instance.\n\nSNIP\n\n> Either way you do it, I can't think of an out of the box method to doing \n> it. Somehow one has to transfer data from permanent storage to the md \n> instance, and, likewise, back to permanent storage.\n\ndd could do that. Just have a third partition that holds the drive\nimage. Start up the mirror set, dd the file system into place on the md\ndevice. When you're ready to shut the machine down or back it up, shut\ndown the postmaster, sync the md drive, dd the filesystem back off to\nthe image backup drive.\n\nBut I'd really just recommend getting a LOT of RAM and letting the\nkernel do all the caching. If you've got a 2 gig database and 4 gigs of\nram, you should be gold.\n",
"msg_date": "Fri, 07 Apr 2006 10:49:14 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading the entire DB into RAM"
},
{
"msg_contents": "** This has not been tested.\n\n\tCreate a ramdisk of required size\n\tCreate a Linux software RAID mirror between the ramdisk, and a partition \nof the same size.\n\tMark the physical-disk as write-mostly (reads will go to the ramdisk)\n\tFormat it and load data...\n\n\tOn reboot you'll get a RAID1 mirror with 1 failed drive (because the \nramdisk is dead of course). Just recreate the ramdisk and resync.\n\n\t\n",
"msg_date": "Fri, 07 Apr 2006 19:12:12 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loading the entire DB into RAM"
}
] |
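Before adding RAM or building ramdisks, the premise of the thread above, namely that the whole database fits in memory and can simply live in the kernel's page cache, can be verified with a size query like the following. pg_database_size() and pg_size_pretty() are assumed to be available (they are in the 8.1-era servers discussed here).

SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;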
[
{
"msg_contents": "Bing-bong, passenger announcement.. the panic train is now pulling into\nplatform 8.1.3. Bing-bong. =)\n\nOK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650\n(8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot\nup and our website is next to unusable. The IBM is not swapping (not\nwith 16GB of RAM!), disk i/o is low, but there must be something\ncritically wrong for this monster to be performing so badly..\n\nThere is little IO (maybe 500KB/sec), but the CPUs are often at 100%\nusage.\n\nVACUUM VERBOSE ANALYZE shows me 40000 page slots are needed to track\nall free space. I have 160000 page slots configured, and this machine is\ndedicated to pg.\n\nThe thing that really winds me up about this, is that aside from all\nthe normal 'my postgres is slow l0lz!' troubleshooting is the previous\nmachine (Debian sarge on four 3GHz Xeons) is using 8.1.3 also, with an\ninferior I/O subsystem, and it churns through the workload very\nmerrily, only reaching a full loadavg of 4 at peak times, and running\nour main 'hotelsearch' function in ~1000ms.. \n\nThis IBM on the other hand is often taking 5-10 seconds to do the same\nthing - although just by watching the logs it's clear to see the\nworkload coming in waves, and then calming down again. (this\ncorrelation is matched by watching the load-balancer's logs as it takes\nunresponsive webservers out of the cluster)\n\nHere's the differences (I've removed obvious things like file/socket\npaths) in \"select name,setting from pg_catalog.pg_settings\" between the\ntwo:\n\n--- cayenne 2006-04-07 18:43:48.000000000 +0100 # quad xeon\n+++ jalapeno 2006-04-07 18:44:08.000000000 +0100 # ibm 650\n- effective_cache_size | 320000\n+ effective_cache_size | 640000\n- integer_datetimes | on\n+ integer_datetimes | off\n- maintenance_work_mem | 262144\n+ maintenance_work_mem | 1048576\n- max_connections | 150\n+ max_connections | 100\n- max_fsm_pages | 66000\n+ max_fsm_pages | 160000\n- max_stack_depth | 2048\n+ max_stack_depth | 16384\n- tcp_keepalives_count | 0\n- tcp_keepalives_idle | 0\n- tcp_keepalives_interval | 0\n- temp_buffers | 1000\n- TimeZone | GB\n+ tcp_keepalives_count | 8\n+ tcp_keepalives_idle | 7200\n+ tcp_keepalives_interval | 75\n+ temp_buffers | 4000\n+ TimeZone | GMT0BST,M3.5.0,M10.5.0\n- wal_sync_method | fdatasync\n- work_mem | 4096\n+ wal_sync_method | open_datasync\n+ work_mem | 16384\n\nSo, jalapeno really should have much more room to move. shared_buffers\nis 60000 on both machines.\n\nI'm reaching the end of my tether here - our search functions are just\nso extensive and my pg knowledge is so small that it's overwhelming to\ntry and step through it to find any bottlenecks :(\n\nJust to reiterate, it all runs great on cayenne since we trimmed a lot\nof the fat out of the search, and I can't understand why the IBM box\nisn't absolutely throwing queries out the door :)\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 7 Apr 2006 18:58:23 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> OK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650\n> (8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot\n> up and our website is next to unusable. The IBM is not swapping (not\n> with 16GB of RAM!), disk i/o is low, but there must be something\n> critically wrong for this monster to be performing so badly..\n\nHave you vacuumed/analyzed since reloading your data? Compare some\nEXPLAIN ANALYZE outputs for identical queries on the two machines,\nthat usually helps figure out what's wrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 14:41:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
{
"msg_contents": "On Fri, 2006-04-07 at 12:58, Gavin Hamill wrote:\n> Bing-bong, passenger announcement.. the panic train is now pulling into\n> platform 8.1.3. Bing-bong. =)\n> \n> OK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650\n> (8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot\n> up and our website is next to unusable. The IBM is not swapping (not\n> with 16GB of RAM!), disk i/o is low, but there must be something\n> critically wrong for this monster to be performing so badly..\n> \n> There is little IO (maybe 500KB/sec), but the CPUs are often at 100%\n> usage.\n\nCan you test your AIX box with linux on it? It may well be that\nsomething in AIX is causing this performance problem. I know that on\nthe same SPARC hardware, a postgresql database is 2 or more times faster\non top of linux or BSD than it is on solaris, at least it was back a few\nyears ago when I tested it.\n\nAre the same queries getting the same basic execution plan on both\nboxes? Turn on logging for slow queries, and explain analyze them on\nboth machines to see if they are.\n\nIf they aren't, figure out why.\n\nI'd put the old 4 way Xeon back in production and do some serious\ntesting of this pSeries machine. IBM should be willing to help you, I\nhope.\n\nMy guess is that this is an OS issue. Maybe there are AIX tweaks that\nwill get it up to the same or higher level of performance as your four\nway xeon. Maybe there aren't.\n\nMyself, I'd throw a spare drive in for the OS, put some flavor of linux\non it\n\nhttp://www-1.ibm.com/partnerworld/pwhome.nsf/weblook/pat_linux_learn_why_power.html\n\nand do some load testing there. If the machine can't perform up to\nsnuff with the same basic OS and a similar setup to your Xeon, send it\nback to IBM and buy one of these:\n\nhttp://www.asaservers.com/system_dept.asp?dept_id=SD-002\n\nor something similar. I can't imagine it costing more than a pSeries.\n",
"msg_date": "Fri, 07 Apr 2006 13:54:21 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 07 Apr 2006 14:41:39 -0400\nTom Lane <[email protected]> wrote:\n\n> Gavin Hamill <[email protected]> writes:\n> > OK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650\n> > (8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot\n> > up and our website is next to unusable. The IBM is not swapping (not\n> > with 16GB of RAM!), disk i/o is low, but there must be something\n> > critically wrong for this monster to be performing so badly..\n> \n> Have you vacuumed/analyzed since reloading your data? \n\nAbsolutely - a VACUUM FULL was the first thing I did, and have VACUUM ANALYZE VERBOSE'd a couple of times since. I have plenty of overhead to keep the entire free space map in RAM.\n\n> Compare some\n> EXPLAIN ANALYZE outputs for identical queries on the two machines,\n> that usually helps figure out what's wrong.\n\nIf only :)\n\nSince 90% of the db work is the 'hotelsearch' function (which is 350 lines-worth that I'm not permitted to share :(( ), an EXPLAIN ANALYZE reveals practically nothing:\n\n##### jalapeno (IBM)\nlaterooms=# EXPLAIN ANALYZE select * from hotelsearch(12.48333::numeric, 41.90000::numeric, 5::int4, '2006-04-13'::date, 5::int4, NULL::int4, 1::int4, NULL::int4, NULL::int4, TRUE::bool, FALSE::bool, FALSE::bool, 1::int2, 'GBP'::text, 'Roma'::text, 7::int4, NULL::int4, NULL::int4) limit 500;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..6.25 rows=500 width=1587) (actual time=2922.282..2922.908 rows=255 loops=1)\n -> Function Scan on hotelsearch (cost=0.00..12.50 rows=1000 width=1587) (actual time=2922.277..2922.494 rows=255 loops=1)\n Total runtime: 2923.296 ms\n(3 rows)\n\n##### cayenne (xeon)\nlaterooms=# EXPLAIN ANALYZE select * from hotelsearch(12.48333::numeric, 41.90000::numeric, 5::int4, '2006-04-13'::date, 5::int4, NULL::int4, 1::int4, NULL::int4, NULL::int4, TRUE::bool, FALSE::bool, FALSE::bool, 1::int2, 'GBP'::text, 'Roma'::text, 7::int4, NULL::int4, NULL::int4) limit 500;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..6.25 rows=500 width=1587) (actual time=1929.483..1930.103 rows=255 loops=1)\n -> Function Scan on hotelsearch (cost=0.00..12.50 rows=1000 width=1587) (actual time=1929.479..1929.693 rows=255 loops=1)\n Total runtime: 1930.506 ms\n(3 rows)\n\n\nThe 'LIMIT 500' is a red herring since the function body will get all data, so reducing the LIMIT in the call to hotelsearch doesn't reduce the amount of work being done.\n\nThe killer in it all is tail'ing the postgres log (which I have set only to log queries at 1000ms or up) is things will be returning at 1000-2000ms.. then suddenly shoot up to 8000ms.. and if I try a few of those 8000ms queries on the xeon box, they exec in ~1500ms.. and if I try them again a few moments later on the ibm, they'll also exec in maybe ~2500ms.\n\nThis is one hell of a moving target and I can't help but think I'm just missing something that's right in front of my nose, too close to see. \n\nCheers,\nGavin.\n\n",
"msg_date": "Fri, 7 Apr 2006 20:45:26 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 07 Apr 2006 13:54:21 -0500\nScott Marlowe <[email protected]> wrote:\n\n> Are the same queries getting the same basic execution plan on both\n> boxes? Turn on logging for slow queries, and explain analyze them on\n> both machines to see if they are.\n\nSee reply to Tom Lane :)\n\n> I'd put the old 4 way Xeon back in production and do some serious\n> testing of this pSeries machine. IBM should be willing to help you, I\n> hope.\n\nThey probably would if this had been bought new - as it is, we have\nrented the machine for a month from a 2nd-user dealer to see if it's\ncapable of taking the load. I'm now glad we did this. \n\n> My guess is that this is an OS issue. Maybe there are AIX tweaks that\n> will get it up to the same or higher level of performance as your four\n> way xeon. Maybe there aren't.\n\nThe pSeries isn't much older than our Xeon machine, and I expected the\nperformance level to be exemplary out of the box.. we've enabled the\n64-bit kernel+userspace, and compiled pg for 64-bitness with the gcc\nflags as reccommended by Senica Cunningham on this very list..\n\n> Myself, I'd throw a spare drive in for the OS, put some flavor of\n> linux on it\n\nTerrifying given I know nothing about the pSeries boot system, but at\nthis stage I'm game for nearly anything. \n\n> http://www.asaservers.com/system_dept.asp?dept_id=SD-002\n\nMulti-Opteron was the other thing we considered but decided to give\n'Big Iron' UNIX a whirl...\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 7 Apr 2006 20:59:19 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> Scott Marlowe <[email protected]> wrote:\n>> My guess is that this is an OS issue. Maybe there are AIX tweaks that\n>> will get it up to the same or higher level of performance as your four\n>> way xeon. Maybe there aren't.\n\n> The pSeries isn't much older than our Xeon machine, and I expected the\n> performance level to be exemplary out of the box..\n\nI'm fairly surprised too. One thing I note from your comparison of\nsettings is that the default WAL sync method is different on the two\noperating systems. If the query load is update-heavy then it would be\nvery worth your while to experiment with the sync method. However,\nif the bottleneck is pure-SELECT transactions then WAL sync should not\nbe a factor at all.\n\nDoes AIX have anything comparable to oprofile or dtrace? It'd be\ninteresting to try to monitor things at that level and see what we can\nlearn. Failing a low-level profiler, there should at least be something\ncomparable to strace --- you should try watching some of the backends\nwith strace and see what their behavior is when the performance goes\nsouth. Lots of delaying select()s or semop()s would be a red flag.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 16:06:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
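The WAL sync difference Tom mentions can be checked from any session without changing anything; actually switching wal_sync_method means editing postgresql.conf and reloading, so the statements below are only the read side of that experiment (a sketch, not a recommendation of any particular value):

    SHOW wal_sync_method;   -- default differs between the two operating systems
    SHOW fsync;             -- whether commits wait for WAL to reach disk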
{
"msg_contents": "On Fri, 7 Apr 2006 20:59:19 +0100\nGavin Hamill <[email protected]> wrote:\n> > I'd put the old 4 way Xeon back in production and do some serious\n> > testing of this pSeries machine. IBM should be willing to help you, I\n> > hope.\n> \n> They probably would if this had been bought new - as it is, we have\n> rented the machine for a month from a 2nd-user dealer to see if it's\n> capable of taking the load. I'm now glad we did this. \n\nWe also had problems with a high end AIX system and we got no help from\nIBM. They expected you to put Oracle on and if you used anything else\nyou were on your own. We had exactly the same issue. We expected to\nget an order of magnitude improvement and instead the app bogged down.\nIt also got worse over time. We had to reboot every night to get\nanything out of it. Needless to say, they got their system back.\n\n> \n> > My guess is that this is an OS issue. Maybe there are AIX tweaks that\n> > will get it up to the same or higher level of performance as your four\n> > way xeon. Maybe there aren't.\n> \n> The pSeries isn't much older than our Xeon machine, and I expected the\n> performance level to be exemplary out of the box.. we've enabled the\n> 64-bit kernel+userspace, and compiled pg for 64-bitness with the gcc\n> flags as reccommended by Senica Cunningham on this very list..\n\nThat's Seneca.\n\nWe found that our money was better spent on multiple servers running\nNetBSD with a home grown multi-master replication system. Need more\npower? Just add more servers.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 7 Apr 2006 16:16:02 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 2006-04-07 at 14:59, Gavin Hamill wrote:\n> On Fri, 07 Apr 2006 13:54:21 -0500\n> Scott Marlowe <[email protected]> wrote:\n> \n> > Are the same queries getting the same basic execution plan on both\n> > boxes? Turn on logging for slow queries, and explain analyze them on\n> > both machines to see if they are.\n> \n> See reply to Tom Lane :)\n\nI didn't see one go by yet... Could be sitting in the queue.\n\n> They probably would if this had been bought new - as it is, we have\n> rented the machine for a month from a 2nd-user dealer to see if it's\n> capable of taking the load. I'm now glad we did this. \n\nThank god. I had a picture of you sitting on top of a brand new very\nexpensive pSeries \n\nLet us know if changing the fsync setting helps. Hopefully that's all\nthe problem is.\n\nOff on a tangent. If the aggregate memory bandwidth of the pSeries is\nno greater than you Xeon you might not see a big improvement if you were\nmemory bound before. If you were CPU bound, you may or may not see an\nimprovement.\n\nCan you describe the disc subsystems in the two machines for us? What\nkind of read / write load you have? It could be the older box was\nrunning on IDE drives with fake fsync responses which would lie, be\nfast, but not reliable in case of a power outage.\n\nDo you have hardware RAID for your pSeries? how many discs, how much\nbattery backed cache, etc?\n\n> Multi-Opteron was the other thing we considered but decided to give\n> 'Big Iron' UNIX a whirl...\n\nIt still might be a good choice, if it's a simple misconfiguration\nissue.\n\nBut man, those new multiple core opterons can make some impressive\nmachines for very little money.\n",
"msg_date": "Fri, 07 Apr 2006 15:24:18 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "\nGavin Hamill <[email protected]> writes:\n\n> This is one hell of a moving target and I can't help but think I'm just\n> missing something that's right in front of my nose, too close to see.\n\nI'm assuming you compiled postgres yourself? Do you have the output from the\nconfigure script? I'm wondering if it failed to find a good spinlock match for\nthe architecture. Not sure if that's really likely but it's a possibility.\n\nAlso, I'm pretty sure IBM has tools that would let you disable some of the\nprocessors to see if maybe it's a shared memory bus issue. If I understand you\nright the machine isn't in production yet? In which case I would get timing\ninformation for a single processor, two processors, four processors, and eight\nprocessors. If you see it max out and start dropping then that would point\ntowards a hardware/low level postgres issue like spinlocks or shared memory\nrather than a high level database issue like stats.\n\n-- \ngreg\n\n",
"msg_date": "07 Apr 2006 16:30:15 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill wrote:\n> Bing-bong, passenger announcement.. the panic train is now pulling into\n> platform 8.1.3. Bing-bong. =)\n> \n> OK, having moved from our quad-xeon to an 8-CPU IBM pSeries 650\n> (8x1.45GHz POWER4 instead of 4 x 3GHz Xeon), our query times have shot\n> up and our website is next to unusable. The IBM is not swapping (not\n\nI would say running _one_ query at a time depends on the power of _one_ \ncpu. PPCs aren't that fast, I'd say they are slower than Xeons. Moreover \nI'm sure that AMD Opterons are faster than Xeons. I'd say you should go \nand test an opteron-based configuration. You'll get much more power for \nthe same (much likely for less) money.\n\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n",
"msg_date": "Fri, 07 Apr 2006 22:39:41 +0200",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 7 Apr 2006 16:16:02 -0400\n\"D'Arcy J.M. Cain\" <[email protected]> wrote:\n\n> We also had problems with a high end AIX system and we got no help\n> from IBM. They expected you to put Oracle on and if you used\n> anything else you were on your own. \n\nUrk, I thought IBM were supposedly Linux sycophants thesedays...\n\n> We had exactly the same issue.\n> We expected to get an order of magnitude improvement and instead the\n> app bogged down.\n\nThat's kind of encouraging, I suppose - that it might not be something\nmind-bogglingly stupid I'm doing.\n\n> It also got worse over time. We had to reboot every\n> night to get anything out of it. Needless to say, they got their\n> system back.\n\n<nod>\n \n> That's Seneca.\n\nOops - meant to check the spelling before I sent that =)\n \n> We found that our money was better spent on multiple servers running\n> NetBSD with a home grown multi-master replication system. Need more\n> power? Just add more servers.\n\nAye, I originally suggested multiple servers, but was talked round to\none giant db so that our devels didn't have to rewrite code to deal\nwith read/write + read-only db handles...\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 7 Apr 2006 22:19:34 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 07 Apr 2006 15:24:18 -0500\nScott Marlowe <[email protected]> wrote:\n\n> > See reply to Tom Lane :)\n> \n> I didn't see one go by yet... Could be sitting in the queue.\n\nIf it's not arrived by now - EXPLAIN ANALYZE doesn't tell me\nanything :)\n\n> Let us know if changing the fsync setting helps. Hopefully that's all\n> the problem is.\n\nfsync's already off - yes a bit scary, but our I/O is only about\n500KB/sec writing.. the whole db fits in RAM / kernel disk cache, and\nI'd rather have performance than security at this exact moment..\n \n> Off on a tangent. If the aggregate memory bandwidth of the pSeries is\n> no greater than you Xeon you might not see a big improvement if you\n> were memory bound before. If you were CPU bound, you may or may not\n> see an improvement.\n\nI did look into the specs of the system, and the memory bw on the\npSeries was /much/ greater than the Xeon - it's one of the things that\nreally pushed me towards it in the end. I forget the figures, but it\nwas 3 or 4 times greater.\n\n> Can you describe the disc subsystems in the two machines for us? What\n> kind of read / write load you have? It could be the older box was\n> running on IDE drives with fake fsync responses which would lie, be\n> fast, but not reliable in case of a power outage.\n\nAgain, I'm confident that I/O's not the killer here.. the Xeon is a Dell\n6850- hardware RAID1.. SCSI drives.\n\n> > Multi-Opteron was the other thing we considered but decided to give\n> > 'Big Iron' UNIX a whirl...\n> \n> It still might be a good choice, if it's a simple misconfiguration\n> issue.\n> \n> But man, those new multiple core opterons can make some impressive\n> machines for very little money.\n\nSo I see - we could buy two quad-opterons for the cost of renting this\npSeries for a month....\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 7 Apr 2006 22:24:43 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
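Since the argument above rests on the whole database fitting in RAM, a quick sanity check is possible with the size functions built into 8.1 (assuming pg_database_size() accepts a database name on this build; otherwise pass the oid from pg_database):

    SELECT pg_size_pretty(pg_database_size(current_database()));
    -- compare the result against the 16GB of RAM, minus shared_buffers and OS overhead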
{
"msg_contents": "On Fri, 07 Apr 2006 16:06:02 -0400\nTom Lane <[email protected]> wrote:\n\n> > The pSeries isn't much older than our Xeon machine, and I expected\n> > the performance level to be exemplary out of the box..\n> \n> I'm fairly surprised too. One thing I note from your comparison of\n> settings is that the default WAL sync method is different on the two\n> operating systems. \n\nWe're very read-focussed.. there's update activity, sure, but the IO is\nonly pushing about 500KByte/sec on average, usually much less. I also\nhave fsync switched off - yes dangerous, but I just want to eliminate\nIO completely as a contributing factor.\n\n> Does AIX have anything comparable to oprofile or dtrace? \n\nI've used neither on Linux, but a quick google showed up a few articles\nalong the lines of 'in theory it shouldn't be hard to port to AIX....'\nbut nothing concrete. My guess is IBM sell a tool to do this. Hell, the\nC++ compiler is £1200... (hence our use of GCC 4.1 to compile pg)\n\n\n> Failing a low-level profiler, there should at least be\n> something comparable to strace --- you should try watching some of\n> the backends with strace and see what their behavior is when the\n> performance goes south. Lots of delaying select()s or semop()s would\n> be a red flag.\n\nThere's truss installed which seems to do the same as strace on\nLinux... and here's a wildly non-scientific glance.. I watched the\n'topas' output (top for AIX) , identified a PID that was doing a lot of\nwork, then attached truss to that pid. In addition to lots of send\n(), recv() and lseek()s... about once a minute I saw hundreds of calls\nto __semop() interspersed with _select(), followed by tons of lseek()\n+kread()+__semop() and then I can see the kwrite() to the pg logfile\n\n246170: kwrite(2, \" L O G : d u\", 8) = 8 etc.\n\nCheers,\nGavin.\n\n",
"msg_date": "Fri, 7 Apr 2006 22:36:35 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin,\n\nOn 4/7/06 2:24 PM, \"Gavin Hamill\" <[email protected]> wrote:\n\n> I did look into the specs of the system, and the memory bw on the\n> pSeries was /much/ greater than the Xeon - it's one of the things that\n> really pushed me towards it in the end. I forget the figures, but it\n> was 3 or 4 times greater.\n\n From the literature at:\n \nhttp://www-03.ibm.com/servers/eserver/pseries/hardware/midrange/p650_desc.ht\nml\n\n³The pSeries 650 features a peak aggregate memory to L3 cache bandwidth of\n25.6GB/second for an 8way configuration. In addition, aggregate I/O\nbandwidth is up to 16GB/second. The result is a remarkable combination of\nsystem architecture, speed and power that delivers efficient and\ncost-effective data sharing and application throughput.²\n\nThat¹s a total of 25.6GB/s for 8 CPUs, or 3.2GB/s per CPU. 3GHz P4 Xeons\ntypically have an 800MHz memory bus with double the speed at 6.4GB/s result\n(800MHz x 8 bytes per L2 cache line = 6.4GB/s). Furthermore, the speed at\nwhich the P4 Xeon can load data into L2 cache from memory is matched to the\nbus because the L2 cache line width is 8 bytes wide and can stream data to\nL2 at full bus speed.\n\nThat said, I find typical memory bandwidth for the P4 in applications is\nlimited at about 2GB/s. See here for more detail:\nhttp://www.cs.virginia.edu/stream/standard/Bandwidth.html\n\nIn fact, looking at the results there, the IBM 650m2 only gets 6GB/s on all\n8 CPUs. I wouldn¹t be surprised if the strange L3 cache architecture of the\nIBM 650 is holding it back from streaming memory access efficiently.\n\nWhether this has anything to do with your problem or not, I have no idea!\n\n- Luke \n\n\n\nRe: [PERFORM] pg 8.1.3, AIX, huge box, painfully slow.\n\n\nGavin,\n\nOn 4/7/06 2:24 PM, \"Gavin Hamill\" <[email protected]> wrote:\n\n> I did look into the specs of the system, and the memory bw on the\n> pSeries was /much/ greater than the Xeon - it's one of the things that\n> really pushed me towards it in the end. I forget the figures, but it\n> was 3 or 4 times greater.\n\n From the literature at: \n http://www-03.ibm.com/servers/eserver/pseries/hardware/midrange/p650_desc.html\n\n“The pSeries 650 features a peak aggregate memory to L3 cache bandwidth of 25.6GB/second for an 8way configuration. In addition, aggregate I/O bandwidth is up to 16GB/second. The result is a remarkable combination of system architecture, speed and power that delivers efficient and cost-effective data sharing and application throughput.”\n\nThat’s a total of 25.6GB/s for 8 CPUs, or 3.2GB/s per CPU. 3GHz P4 Xeons typically have an 800MHz memory bus with double the speed at 6.4GB/s result (800MHz x 8 bytes per L2 cache line = 6.4GB/s). Furthermore, the speed at which the P4 Xeon can load data into L2 cache from memory is matched to the bus because the L2 cache line width is 8 bytes wide and can stream data to L2 at full bus speed.\n\nThat said, I find typical memory bandwidth for the P4 in applications is limited at about 2GB/s. See here for more detail: http://www.cs.virginia.edu/stream/standard/Bandwidth.html\n\nIn fact, looking at the results there, the IBM 650m2 only gets 6GB/s on all 8 CPUs. I wouldn’t be surprised if the strange L3 cache architecture of the IBM 650 is holding it back from streaming memory access efficiently.\n\nWhether this has anything to do with your problem or not, I have no idea!\n\n- Luke",
"msg_date": "Fri, 07 Apr 2006 14:50:31 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> There's truss installed which seems to do the same as strace on\n> Linux... and here's a wildly non-scientific glance.. I watched the\n> 'topas' output (top for AIX) , identified a PID that was doing a lot of\n> work, then attached truss to that pid. In addition to lots of send\n> (), recv() and lseek()s...\n\nThose are good, they represent real work getting done.\n\n> about once a minute I saw hundreds of calls\n> to __semop() interspersed with _select(),\n\nThis is not good. Did the semop storms coincide with visible slowdown?\n(I'd assume so, but you didn't actually say...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 17:56:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> That said, I find typical memory bandwidth for the P4 in applications is\n> limited at about 2GB/s. See here for more detail:\n> http://www.cs.virginia.edu/stream/standard/Bandwidth.html\n\n> In fact, looking at the results there, the IBM 650m2 only gets 6GB/s\n> on all 8 CPUs. I wouldn't be surprised if the strange L3 cache\n> architecture of the IBM 650 is holding it back from streaming memory\n> access efficiently.\n\nGiven Gavin's latest report, I'm wondering how much the IBM slows down\nwhen a spinlock operation is involved. If the memory architecture isn't\ngood about supporting serialized access to memory, that gaudy sounding\nbandwidth number might have little to do with PG's real-world behavior.\nOn the other hand, we already know that Xeons suck about as badly as\ncan be on that same measure; could the pSeries really be worse?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 18:02:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
{
"msg_contents": "Tom,\n\nOn 4/7/06 3:02 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> On the other hand, we already know that Xeons suck about as badly as\n> can be on that same measure; could the pSeries really be worse?\n\nI wouldn't be too surprised, but it sounds like it needs a test. Do we have\na test for this? Is there a contention-prone query stream that we can think\nup?\n\n- Luke\n\n\n",
"msg_date": "Fri, 07 Apr 2006 15:06:48 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> On 4/7/06 3:02 PM, \"Tom Lane\" <[email protected]> wrote:\n>> On the other hand, we already know that Xeons suck about as badly as\n>> can be on that same measure; could the pSeries really be worse?\n\n> I wouldn't be too surprised, but it sounds like it needs a test. Do we have\n> a test for this? Is there a contention-prone query stream that we can think\n> up?\n\nIf you want you could install a pre-8.1 PG and then try one of the\nqueries that we were using as test cases a year ago for spinlock\ninvestigations. I don't recall details right now but I remember\nhaving posted a pretty trivial test case that would send a\nmultiprocessor machine into context-swap storm, which sounds a whole\nlot like what Gavin is seeing.\n\nI think that 8.1 ought to be relatively free of buffer-manager spinlock\ncontention, which is why I doubt that test case would be interesting\nagainst 8.1. The interesting question is what else is he seeing\ncontention for, if it's not the BufMgrLock?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 18:12:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
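The old test cases Tom refers to are not reproduced in this thread, but the general shape of such a contention test is simple: hammer one tiny, fully cached table from as many concurrent sessions as there are CPUs and watch for the context-switch storm. A made-up example using contrib's pgbench with a custom script (the table name is hypothetical, and this is not the original test case):

    -- contention.sql: any small table that fits in shared_buffers will do
    SELECT count(*) FROM tiny_table;

    -- run with something like: pgbench -c 8 -t 10000 -f contention.sql dbname

Comparing the transaction rate at 1, 2, 4 and 8 clients on the pSeries and the Xeon would show whether throughput collapses as CPUs are added, which is the signature Greg Stark describes above.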
{
"msg_contents": "On Fri, 07 Apr 2006 17:56:49 -0400\nTom Lane <[email protected]> wrote:\n\n> This is not good. Did the semop storms coincide with visible\n> slowdown? (I'd assume so, but you didn't actually say...)\n\nIf I'd been able to tell, then I'd tell you =) I'll have another go...\n\nYes, there's a definate correlation here.. I attached truss to the\nmain postmaster..\n\n$ truss -Ff -p 340344 2>&1 | grep semop\n\nhere's a snippet\n\n278774: __semop(15728650, 0x0FFFFFFFFFFF7E80, 1) = 0\n155712: __semop(15728650, 0x0FFFFFFFFFFF5920, 1) = 0\n278774: __semop(15728649, 0x0FFFFFFFFFFF6F10, 1)\n114914: __semop(15728649, 0x0FFFFFFFFFFF6A40, 1) = 0 = 0 \n114914: __semop(15728650, 0x0FFFFFFFFFFF61E0, 1)\n155712: __semop(15728650, 0x0FFFFFFFFFFF6850, 1) = 0 = 0 \n155712: __semop(15728650, 0x0FFFFFFFFFFF6890, 1) = 0 1\n55712: __semop(15728650, 0x0FFFFFFFFFFF5920, 1)\n278774: __semop(15728650, 0x0FFFFFFFFFFF6F10, 1) \n155712: __semop(15728650, 0x0FFFFFFFFFFF6850, 1) = 0 = 0\n278774: __semop(15728649, 0x0FFFFFFFFFFF7E40, 1)\n114914: __semop(15728649, 0x0FFFFFFFFFFF6A80, 1) = 0 = 0\n278774: __semop(15728650, 0x0FFFFFFFFFFF7E80, 1) \n\nAnd when I saw a flood of semop's for any particular PID, a second later\nin the 'topas' process list would show that PID at a 100% CPU ...\n\nMost intriguing :)\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 7 Apr 2006 23:27:59 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> On Fri, 07 Apr 2006 17:56:49 -0400\n> Tom Lane <[email protected]> wrote:\n>> This is not good. Did the semop storms coincide with visible\n>> slowdown? (I'd assume so, but you didn't actually say...)\n\n> Yes, there's a definate correlation here.. I attached truss to the\n> main postmaster..\n> ...\n> And when I saw a flood of semop's for any particular PID, a second later\n> in the 'topas' process list would show that PID at a 100% CPU ...\n\nSo apparently we've still got a problem with multiprocess contention for\nan LWLock somewhere. It's not the BufMgrLock because that's gone in 8.1.\nIt could be one of the finer-grain locks that are still there, or it\ncould be someplace else.\n\nAre you in a position to try your workload using PG CVS tip? There's a\nnontrivial possibility that we've already fixed this --- a couple months\nago I did some work to reduce contention in the lock manager:\n\n2005-12-11 16:02 tgl\n\n\t* src/: backend/access/transam/twophase.c,\n\tbackend/storage/ipc/procarray.c, backend/storage/lmgr/README,\n\tbackend/storage/lmgr/deadlock.c, backend/storage/lmgr/lock.c,\n\tbackend/storage/lmgr/lwlock.c, backend/storage/lmgr/proc.c,\n\tinclude/storage/lock.h, include/storage/lwlock.h,\n\tinclude/storage/proc.h: Divide the lock manager's shared state into\n\t'partitions', so as to reduce contention for the former single\n\tLockMgrLock. Per my recent proposal. I set it up for 16\n\tpartitions, but on a pgbench test this gives only a marginal\n\tfurther improvement over 4 partitions --- we need to test more\n\tscenarios to choose the number of partitions.\n\nThis is unfortunately not going to help you as far as getting that\nmachine into production now (unless you're brave enough to run CVS tip\nas production, which I certainly am not). I'm afraid you're most likely\ngoing to have to ship that pSeries back at the end of the month, but\nwhile you've got it it'd be awfully nice if we could use it as a testbed\n...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 18:52:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
{
"msg_contents": "Gavin,\n\nOn 4/7/06 3:27 PM, \"Gavin Hamill\" <[email protected]> wrote:\n\n> 278774: __semop(15728650, 0x0FFFFFFFFFFF7E80, 1) = 0\n> 155712: __semop(15728650, 0x0FFFFFFFFFFF5920, 1) = 0\n> 278774: __semop(15728649, 0x0FFFFFFFFFFF6F10, 1)\n> 114914: __semop(15728649, 0x0FFFFFFFFFFF6A40, 1) = 0 = 0\n> 114914: __semop(15728650, 0x0FFFFFFFFFFF61E0, 1)\n> 155712: __semop(15728650, 0x0FFFFFFFFFFF6850, 1) = 0 = 0\n> 155712: __semop(15728650, 0x0FFFFFFFFFFF6890, 1) = 0 1\n> 55712: __semop(15728650, 0x0FFFFFFFFFFF5920, 1)\n> 278774: __semop(15728650, 0x0FFFFFFFFFFF6F10, 1)\n> 155712: __semop(15728650, 0x0FFFFFFFFFFF6850, 1) = 0 = 0\n> 278774: __semop(15728649, 0x0FFFFFFFFFFF7E40, 1)\n> 114914: __semop(15728649, 0x0FFFFFFFFFFF6A80, 1) = 0 = 0\n> 278774: __semop(15728650, 0x0FFFFFFFFFFF7E80, 1)\n\nSeems like you're hitting a very small target in RAM with these semop calls.\nI wonder what part of the code is doing this - Tom would know better how to\ntrace it, but the equivalent of oprofile output would be nice.\n\nThe other thing that I'd like to see is an evaluation of the memory access\nlatency of this machine from Register to RAM. I couldn't find a\nbenchmarking tool that was UNIX friendly out there, maybe I'll write one\nreal quick. I suspect this machine has a heinous latency and a storm of\nsemops to the same spot of RAM might be a far worse performance problem on\nthis machine than on others...\n\n- Luke \n\n\n\n\n",
"msg_date": "Fri, 07 Apr 2006 15:56:52 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> On 4/7/06 3:27 PM, \"Gavin Hamill\" <[email protected]> wrote:\n\n>> 278774: __semop(15728650, 0x0FFFFFFFFFFF7E80, 1) = 0\n>> 155712: __semop(15728650, 0x0FFFFFFFFFFF5920, 1) = 0\n>> 278774: __semop(15728649, 0x0FFFFFFFFFFF6F10, 1)\n\n> Seems like you're hitting a very small target in RAM with these semop calls.\n\nIIRC the address passed to semop() in our code is always a local struct\non the stack, so that's a bit of a red herring --- there won't be\ncross-processor contention for that.\n\nIt's plausible though that we are seeing contention across members of\nthe LWLock array, with the semop storm just being a higher-level symptom\nof the real hardware-level problem. You might try increasing\nLWLOCK_PADDED_SIZE to 64 or even 128, see\nsrc/backend/storage/lmgr/lwlock.c (this is something that does exist in\n8.1, so it'd be easy to try).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Apr 2006 19:05:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
{
"msg_contents": "On Fri, 07 Apr 2006 18:52:20 -0400\nTom Lane <[email protected]> wrote:\n\n> Are you in a position to try your workload using PG CVS tip? There's\n> a nontrivial possibility that we've already fixed this --- a couple\n> months ago I did some work to reduce contention in the lock manager:\n\nWell, there's a question. At the moment it's still live - but I'll need\nto swap back to the Xeon machine since I can't afford to have a Saturday\nwith the db firing on three cylinders (out of eight :)\n\nAt that point you're welcome to twiddle, compile, throw anything you\nwant at it. If it helps us as much as the greater pg world, then that's\nperfect.\n\n> This is unfortunately not going to help you as far as getting that\n> machine into production now (unless you're brave enough to run CVS tip\n> as production, which I certainly am not). \n\n.. if the problem can actually be boiled down to the locking/threading\nissues, surely it should be straightforward to backport those changes\nto 8.1.3 mainline?\n\n> I'm afraid you're most\n> likely going to have to ship that pSeries back at the end of the\n> month, but while you've got it it'd be awfully nice if we could use\n> it as a testbed ...\n\nWe have it for the next 2 weeks, and whilst I can't guarantee access for\nall that time, you're welcome to hammer away at it over this weekend if\nthat's any help? Mail me privately and I'll sort out login details if\nthis is interesting.\n\nCheers,\nGavin.\n",
"msg_date": "Sat, 8 Apr 2006 00:39:19 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 07 Apr 2006 15:56:52 -0700\n\"Luke Lonergan\" <[email protected]> wrote:\n\n> Seems like you're hitting a very small target in RAM with these semop\n> calls. I wonder what part of the code is doing this - Tom would know\n> better how to trace it, but the equivalent of oprofile output would\n> be nice.\n\nI'm happy to test whatever I can, but I simply don't know enough AIX to\nbe able to tell whether a similar kernel-level profiler is\navailable/possible.\n \n> The other thing that I'd like to see is an evaluation of the memory\n> access latency of this machine from Register to RAM. I couldn't find\n> a benchmarking tool that was UNIX friendly out there, maybe I'll\n> write one real quick. I suspect this machine has a heinous latency\n> and a storm of semops to the same spot of RAM might be a far worse\n> performance problem on this machine than on others...\n\nWell, as I said to Tom, the machine is available for running tests\non :) If it helps us, and helps pg become more AIX friendly, then I'm\nall for whatever needs done...\n\nCheers,\nGavin.\n",
"msg_date": "Sat, 8 Apr 2006 02:26:12 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill wrote:\n> On Fri, 07 Apr 2006 15:24:18 -0500\n> Scott Marlowe <[email protected]> wrote:\n> \n>>> See reply to Tom Lane :)\n>> I didn't see one go by yet... Could be sitting in the queue.\n> \n> If it's not arrived by now - EXPLAIN ANALYZE doesn't tell me\n> anything :)\n> \n>> Let us know if changing the fsync setting helps. Hopefully that's all\n>> the problem is.\n> \n> fsync's already off - yes a bit scary, but our I/O is only about\n> 500KB/sec writing.. the whole db fits in RAM / kernel disk cache, and\n> I'd rather have performance than security at this exact moment..\n\n>>> Multi-Opteron was the other thing we considered but decided to give\n>>> 'Big Iron' UNIX a whirl...\n>> It still might be a good choice, if it's a simple misconfiguration\n>> issue.\n>>\n>> But man, those new multiple core opterons can make some impressive\n>> machines for very little money.\n> \n> So I see - we could buy two quad-opterons for the cost of renting this\n> pSeries for a month....\n\nI don't know about the pSeries, but I had a client the other week with \nthe same usage pattern. They made the switch from a quad Xeon to dual \n(+dual-core) Opteron and were extremely impressed.\n\nI can probably put you in touch with their sysadmin - contact me \noff-list if you'd like that.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 10 Apr 2006 10:06:41 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "On Fri, 2006-04-07 at 19:05 -0400, Tom Lane wrote:\n\n> It's plausible though that we are seeing contention across members of\n> the LWLock array, with the semop storm just being a higher-level symptom\n> of the real hardware-level problem. You might try increasing\n> LWLOCK_PADDED_SIZE to 64 or even 128, see\n> src/backend/storage/lmgr/lwlock.c (this is something that does exist in\n> 8.1, so it'd be easy to try).\n\npSeries cache lines are 128 bytes wide, so I'd go straight to 128.\n\nIf you're renting all 8 CPUs, I'd drop to 4 and try that instead. With 8\nCPUs the contention will vary according to what each CPU is doing at any\none time - when they all hit the contention spot, things will get worse.\n\nThe pSeries has good CPUs and great caching, so I'd expect contention to\nbe somewhat more apparent as a bottleneck.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 10 Apr 2006 10:26:49 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Simon Riggs wrote:\n\n> pSeries cache lines are 128 bytes wide, so I'd go straight to 128.\n>\nHello :)\n\nOK, that line of code is:\n\n#define LWLOCK_PADDED_SIZE (sizeof(LWLock) <= 16 ? 16 : 32)\n\nWhat should I change this to? I don't understand the syntax of the <= 16 \n? : stuff...\n\nwould a simple \"#define LWLOCK_PADDED_SIZE 128\" be sufficient?\n\n>If you're renting all 8 CPUs, I'd drop to 4 and try that instead. With 8\n>CPUs the contention will vary according to what each CPU is doing at any\n>one time - when they all hit the contention spot, things will get worse.\n>\n> \n>\nWe have a physical machine installed in our rack at the data centre, \nrather than renting a virtual partition of a real machine... I'm not \nsure how to enable/disable CPUs even with the help of 'smitty' :)\n\n>The pSeries has good CPUs and great caching, so I'd expect contention to\n>be somewhat more apparent as a bottleneck.\n>\n> \n>\nYep, I expected 32MB of 'L3' cache would yield impressive results :)\n\nCheers,\nGavin.\n\n\n",
"msg_date": "Mon, 10 Apr 2006 10:41:42 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> would a simple \"#define LWLOCK_PADDED_SIZE 128\" be sufficient?\n\nYeah, that's fine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Apr 2006 11:35:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow. "
},
{
"msg_contents": "Tom Lane wrote:\n\n> This is unfortunately not going to help you as far as getting that\n> machine into production now (unless you're brave enough to run CVS tip\n> as production, which I certainly am not). I'm afraid you're most likely\n> going to have to ship that pSeries back at the end of the month, but\n> while you've got it it'd be awfully nice if we could use it as a testbed\n\nWe have PSeries boxes here that won't be going away anytime soon. If\nthere are any specific test cases that need to run, I should be able to\nfind the time to do it.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Mon, 10 Apr 2006 15:40:58 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
},
{
"msg_contents": "Tom Lane wrote:\n\n>Gavin Hamill <[email protected]> writes:\n> \n>\n>>would a simple \"#define LWLOCK_PADDED_SIZE 128\" be sufficient?\n>> \n>>\n>\n>Yeah, that's fine.\n> \n>\n\nOK I tried that but noticed no real improvement... in the interim I've \ninstalled Debian on the pSeries (using \nhttp://debian.gonicus.de/debian/dists/sarge/main/disks-powerpc/current/pseries/install.txt \n) and using a simple load-test script - it picks a 'hotelsearch' select \nat random from a big file and just does a pg_query on that via PHP...\n\nUsing apachebench with 10 clients gave a loadavg of about 10 after a few \nminutes, and the logs showed typical query times of 8 seconds. Again, no \ndisk activity, normal context-switching, just full-out CPU usage...\n\nWe're improving the quality + efficiency of the hotelsearch function all \nthe time (Simon will certainly be able to vouch for its complexity) - am \nreally uncertain what to do next tho! :/\n\nCheers,\nGavin.\n\n",
"msg_date": "Thu, 13 Apr 2006 15:45:42 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 8.1.3, AIX, huge box, painfully slow."
}
] |
[
{
"msg_contents": "Hi,\n\nFirst of all, the reason I'm posting on the PostgreSQL Performance \nlist is we have a performance issue with one of our applications and \nit's related to the speed at which PostgreSQL can do counts. But it's \nalso related to the data structure we've designed to develop our \ncomparison shopping engine. It's very normalized and the speed of our \nqueries are slowing down considerably as we add more and more data.\n\nSo I've been looking at some of the other comparison shopping engines \nand I'm trying to figure out how they manage to get the counts of \ntheir products for a set of attributes and attribute values so quickly.\n\nFor example, on the following page, they have 1,260,658 products:\n\nhttp://www.mysimon.com/Home-Furnishings/9000-10975_8-0.html?tag=glnav\n\nThey display 3 attributes with values on the page: Price Range, Home \nFurnishing Type, and Store. Plus there are a selection of other \nattributes not displaying the value choices.\n\nFor Price Range, they have the following values and product counts \n(in brackets):\n\n# Below $20 (204,315)\n# $20 - $50 (234,694)\n# $50 - $80 (188,811)\n# $80 - $130 (182,721)\n# $130 - $240 (222,519)\n\nFor Home Furnishing Type they have:\n\n# Wall Art and Decor (438,493)\n# Lighting (243,098)\n# Bathroom Furnishings (132,441)\n# Rugs (113,216)\n# Decorative Accents (65,418)\n\nAnd for Store they have:\n\n# Art.com (360,933)\n# HomeAnnex (130,410)\n# AllPosters.com (72,529)\n# HomeClick.com (61,423)\n# 1STOPlighting Superstore (32,074)\n\nNow, initially I thought they would just pre-compute these counts, \nbut the problem is, when you click on any of the above attribute \nvalues, they reduce the remaining possible set of matching products \n(and set of possible remaining attributes and attribute values) by \nthe amount displayed next to the attribute value selected. You can \nclick on any combination of attribute values to filter down the \nremaining set of matching products, so there's a large combination of \npaths you can take to arrive at a set of products you might be \ninterested in.\n\nDo you think they are pre-computed? Or do you think they might use a \nquery similar to the following?:\n\nselect pav.attribute_value_id, count(p.product_id)\nfrom product_attribute_value pav,\n\t attribute a,\n\t product p\nwhere a.attribute_id in (some set of attribute ids) and\npav.product_id = p.product_id and\npav.attribute_id = a.attribute_id and p.product_id in\n\t(select product_id\n\t from category_product\n\t where category_id = some category id) and\np.is_active = 'true'\ngroup by pav.attribute_value_id;\n\nIt would seem to me that although the above query suggests a \nnormalized database structure, that joining with 3 tables plus a 4th \ntable in the sub-query with an IN qualifier and grouping to get the \nproduct counts would take a VERY long time, especially on a possible \nresult set of 1,260,658 products.\n\nThe other issue is trying to figure out what the remaining set of \npossible attribute and attribute values are in order to reach the \nremaining set of products.\n\nDoes anyone have any insights into what kind of data structures would \nbe necessary to accomplish such a feat? I know that PostgreSQL has \nperformance issues with counting rows, so I can't imagine being able \nto use the above kind of query to get the results we need. I don't \nknow what kind of database backend mysimon is using either. 
It would \nalso seem to be logical that having a flattened data structure would \nseem to be necessary in order to get the performance required. Their \npages seem to load pretty fast.\n\nWe are possibly considering some kind of pre-computed decision-tree \ntype data structure to get the counts of the set of products that \ncould be reached by selecting any combination of attributes and \nattribute values. Does this seem like a reasonable idea?\n\nPerhaps there's a better way?\n\nI also apologize if this isn't the appropriate list to publish this \nkind of question to. The reason I posted here was because it is a \nperformance related question, but not necessarily directly related to \nPostgreSQL. It's just that we are using PostgreSQL for our product \ncomparison engine so I thought there could be some PostgreSQL \nspecific optimizations that could be made. If not, please let me know \nand I'll move it elsewhere.\n\n\nThanks very much,\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com",
"msg_date": "Sat, 8 Apr 2006 23:49:10 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "OT: Data structure design question: How do they count so fast?"
},
{
"msg_contents": "Hi Brandon,\n\nThanks for your suggestion. I'll think about that one. Part of the \nproblem is also trying to figure out what the remaining set of \nattributes and attribute values are, so that slows it down \nconsiderably too. There are many many combinations of attribute \nvalues that can be clicked on.\n\nMore work to do!\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 9, 2006, at 9:56 PM, Brandon Hines wrote:\n\n> Brendan,\n>\n> I have a number of applications the require similar functionality. \n> What I typically do is to create a count table that gets updated \n> with a trigger. But instead of keeping absolute counts, I keep \n> counts of the smallest necessary element. For example, I have a \n> table with approx 12 million elements, each element belongs to one \n> of a thousand classes of elements. The materialized view table \n> with only about a thousand rows is small enough for sum() queries \n> of various classes fast enough for web pages.\n>\n> -Brandon\n>\n> Brendan Duddridge wrote:\n>> Hi,\n>> First of all, the reason I'm posting on the PostgreSQL Performance \n>> list is we have a performance issue with one of our applications \n>> and it's related to the speed at which PostgreSQL can do counts. \n>> But it's also related to the data structure we've designed to \n>> develop our comparison shopping engine. It's very normalized and \n>> the speed of our queries are slowing down considerably as we add \n>> more and more data.\n>> So I've been looking at some of the other comparison shopping \n>> engines and I'm trying to figure out how they manage to get the \n>> counts of their products for a set of attributes and attribute \n>> values so quickly.\n>> For example, on the following page, they have 1,260,658 products:\n>> http://www.mysimon.com/Home-Furnishings/9000-10975_8-0.html?tag=glnav\n>> They display 3 attributes with values on the page: Price Range, \n>> Home Furnishing Type, and Store. Plus there are a selection of \n>> other attributes not displaying the value choices.\n>> For Price Range, they have the following values and product counts \n>> (in brackets):\n>> # Below $20 (204,315)\n>> # $20 - $50 (234,694)\n>> # $50 - $80 (188,811)\n>> # $80 - $130 (182,721)\n>> # $130 - $240 (222,519)\n>> For Home Furnishing Type they have:\n>> # Wall Art and Decor (438,493)\n>> # Lighting (243,098)\n>> # Bathroom Furnishings (132,441)\n>> # Rugs (113,216)\n>> # Decorative Accents (65,418)\n>> And for Store they have:\n>> # Art.com (360,933)\n>> # HomeAnnex (130,410)\n>> # AllPosters.com (72,529)\n>> # HomeClick.com (61,423)\n>> # 1STOPlighting Superstore (32,074)\n>> Now, initially I thought they would just pre-compute these counts, \n>> but the problem is, when you click on any of the above attribute \n>> values, they reduce the remaining possible set of matching \n>> products (and set of possible remaining attributes and attribute \n>> values) by the amount displayed next to the attribute value \n>> selected. You can click on any combination of attribute values to \n>> filter down the remaining set of matching products, so there's a \n>> large combination of paths you can take to arrive at a set of \n>> products you might be interested in.\n>> Do you think they are pre-computed? 
Or do you think they might use \n>> a query similar to the following?:\n>> select pav.attribute_value_id, count(p.product_id)\n>> from product_attribute_value pav,\n>> attribute a,\n>> product p\n>> where a.attribute_id in (some set of attribute ids) and\n>> pav.product_id = p.product_id and\n>> pav.attribute_id = a.attribute_id and p.product_id in\n>> (select product_id\n>> from category_product\n>> where category_id = some category id) and\n>> p.is_active = 'true'\n>> group by pav.attribute_value_id;\n>> It would seem to me that although the above query suggests a \n>> normalized database structure, that joining with 3 tables plus a \n>> 4th table in the sub-query with an IN qualifier and grouping to \n>> get the product counts would take a VERY long time, especially on \n>> a possible result set of 1,260,658 products. The other issue is \n>> trying to figure out what the remaining set of possible attribute \n>> and attribute values are in order to reach the remaining set of \n>> products.\n>> Does anyone have any insights into what kind of data structures \n>> would be necessary to accomplish such a feat? I know that \n>> PostgreSQL has performance issues with counting rows, so I can't \n>> imagine being able to use the above kind of query to get the \n>> results we need. I don't know what kind of database backend \n>> mysimon is using either. It would also seem to be logical that \n>> having a flattened data structure would seem to be necessary in \n>> order to get the performance required. Their pages seem to load \n>> pretty fast.\n>> We are possibly considering some kind of pre-computed decision- \n>> tree type data structure to get the counts of the set of products \n>> that could be reached by selecting any combination of attributes \n>> and attribute values. Does this seem like a reasonable idea?\n>> Perhaps there's a better way?\n>> I also apologize if this isn't the appropriate list to publish \n>> this kind of question to. The reason I posted here was because it \n>> is a performance related question, but not necessarily directly \n>> related to PostgreSQL. It's just that we are using PostgreSQL for \n>> our product comparison engine so I thought there could be some \n>> PostgreSQL specific optimizations that could be made. If not, \n>> please let me know and I'll move it elsewhere. Thanks very much,\n>> *\n>> *____________________________________________________________________\n>> *Brendan Duddridge* | CTO | 403-277-5591 x24 | \n>> [email protected] <mailto:[email protected]>\n>> *\n>> *ClickSpace Interactive Inc.\n>> Suite L100, 239 - 10th Ave. SE\n>> Calgary, AB T2G 0V9\n>> http://www.clickspace.com\n>\n>",
"msg_date": "Sun, 9 Apr 2006 23:50:38 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OT: Data structure design question: How do they count so fast?"
},
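Brandon's trigger-maintained count table can be sketched in a few lines of SQL. Every name below is invented for illustration except product_attribute_value and attribute_value_id, which appear in Brendan's query above; a real version would need one row per combination of dimensions rather than the single dimension shown:

    -- Hypothetical summary table, one row per attribute value
    CREATE TABLE attribute_value_counts (
        attribute_value_id integer PRIMARY KEY,
        product_count      bigint NOT NULL DEFAULT 0
    );

    CREATE OR REPLACE FUNCTION maintain_av_counts() RETURNS trigger AS '
    BEGIN
        -- assumes a row already exists for every attribute_value_id
        IF TG_OP = ''INSERT'' THEN
            UPDATE attribute_value_counts
               SET product_count = product_count + 1
             WHERE attribute_value_id = NEW.attribute_value_id;
        ELSIF TG_OP = ''DELETE'' THEN
            UPDATE attribute_value_counts
               SET product_count = product_count - 1
             WHERE attribute_value_id = OLD.attribute_value_id;
        END IF;
        RETURN NULL;   -- AFTER trigger, so the return value is ignored
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER product_attribute_value_counts
        AFTER INSERT OR DELETE ON product_attribute_value
        FOR EACH ROW EXECUTE PROCEDURE maintain_av_counts();

This keeps single-attribute counts cheap to read; counts for arbitrary combinations of selected values still need either a query against the real tables or a wider summary table like the one Richard sketches below.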
{
"msg_contents": "Brendan Duddridge wrote:\n> \n> Now, initially I thought they would just pre-compute these counts, but \n> the problem is, when you click on any of the above attribute values, \n> they reduce the remaining possible set of matching products (and set of \n> possible remaining attributes and attribute values) by the amount \n> displayed next to the attribute value selected. You can click on any \n> combination of attribute values to filter down the remaining set of \n> matching products, so there's a large combination of paths you can take \n> to arrive at a set of products you might be interested in.\n> \n> Do you think they are pre-computed? Or do you think they might use a \n> query similar to the following?:\n\nPre-computed almost certainly, but at what level of granularity? And \nwith application-level caching?\n\n> select pav.attribute_value_id, count(p.product_id)\n> from product_attribute_value pav,\n> attribute a,\n> product p\n> where a.attribute_id in (some set of attribute ids) and\n> pav.product_id = p.product_id and\n> pav.attribute_id = a.attribute_id and p.product_id in\n> (select product_id\n> from category_product\n> where category_id = some category id) and\n> p.is_active = 'true'\n> group by pav.attribute_value_id;\n> \n> It would seem to me that although the above query suggests a normalized \n> database structure, that joining with 3 tables plus a 4th table in the \n> sub-query with an IN qualifier and grouping to get the product counts \n> would take a VERY long time, especially on a possible result set of \n> 1,260,658 products.\n\nHmm - I'm not sure I'd say this was necessarily normalised. In the \nexample you gave there were three definite types of attribute:\n 1. Price range (< 20, 20-50, ...)\n 2. Product type (lighting, rugs, ...)\n 3. Store (art.com, homeannex, ...)\nYour example discards this type information.\n\nI'm also not sure it lets store A sell widgets for 19.99 and B for 25.99\n\nSo - let's look at how we might break this down into simple relations:\n product_types (product_id, prod_type, prod_subtype)\n product_availability (product_id, store_id, price_range)\nand so on for each set of parameters.\n\nThen, if PG isn't calculating fast enough I'd be tempted to throw in a \nsummary table:\n product_counts(store_id, price_range, prod_type, prod_subtype, ..., \nnum_products)\nThen total over this for the top-level queries.\n\nI'd also cache common top-level queries at the applicaton level anyway.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 10 Apr 2006 10:23:48 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OT: Data structure design question: How do they count"
},
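Richard's product_counts idea pairs naturally with one GROUP BY per facet once a shopper has fixed some of the other dimensions. The sketch below assumes his hypothetical product_counts(store_id, price_range, prod_type, prod_subtype, ..., num_products) table; none of the names or values come from a real schema:

    -- Store facet, given that a price range and a product type are already selected
    SELECT store_id, sum(num_products) AS products
      FROM product_counts
     WHERE price_range = '20-50'
       AND prod_type = 'Lighting'
     GROUP BY store_id
     ORDER BY products DESC;

Each of the other facets shown on such a page is the same query with a different GROUP BY column, so the cost scales with the size of the summary table rather than with the 1.2 million products.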
{
"msg_contents": "Hi Richard (and anyone else who want's to comment!),\n\nI'm not sure it will really work pre-computed. At least not in an \nobvious way (for me! :-)) It's fine to display a pre-computed list of \nproduct counts for the initial set of attribute and attribute values, \nbut we need to be able to display the counts of products for any \ncombination of selected attribute values.\n\nOnce an attribute value is picked that reduces the set of products to \na small enough set, the queries are fast, so, perhaps we just need to \npre-compute the counts for combinations of attribute values that lead \nto a high count of products. Hence, our thought on building a \ndecision-tree type data structure. This would store the various \ncombinations of attributes and attribute values, along with the \nproduct count, along with a list of attributes and attribute values \nthat are applicable for the current set of selected attribute values. \nThis sounds like it could get rather complicated, so we were hoping \nsomeone might have an idea on a much simpler solution.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 10, 2006, at 3:23 AM, Richard Huxton wrote:\n\n> Brendan Duddridge wrote:\n>> Now, initially I thought they would just pre-compute these counts, \n>> but the problem is, when you click on any of the above attribute \n>> values, they reduce the remaining possible set of matching \n>> products (and set of possible remaining attributes and attribute \n>> values) by the amount displayed next to the attribute value \n>> selected. You can click on any combination of attribute values to \n>> filter down the remaining set of matching products, so there's a \n>> large combination of paths you can take to arrive at a set of \n>> products you might be interested in.\n>> Do you think they are pre-computed? Or do you think they might use \n>> a query similar to the following?:\n>\n> Pre-computed almost certainly, but at what level of granularity? \n> And with application-level caching?\n>\n>> select pav.attribute_value_id, count(p.product_id)\n>> from product_attribute_value pav,\n>> attribute a,\n>> product p\n>> where a.attribute_id in (some set of attribute ids) and\n>> pav.product_id = p.product_id and\n>> pav.attribute_id = a.attribute_id and p.product_id in\n>> (select product_id\n>> from category_product\n>> where category_id = some category id) and\n>> p.is_active = 'true'\n>> group by pav.attribute_value_id;\n>> It would seem to me that although the above query suggests a \n>> normalized database structure, that joining with 3 tables plus a \n>> 4th table in the sub-query with an IN qualifier and grouping to \n>> get the product counts would take a VERY long time, especially on \n>> a possible result set of 1,260,658 products.\n>\n> Hmm - I'm not sure I'd say this was necessarily normalised. In the \n> example you gave there were three definite types of attribute:\n> 1. Price range (< 20, 20-50, ...)\n> 2. Product type (lighting, rugs, ...)\n> 3. 
Store (art.com, homeannex, ...)\n> Your example discards this type information.\n>\n> I'm also not sure it lets store A sell widgets for 19.99 and B for \n> 25.99\n>\n> So - let's look at how we might break this down into simple relations:\n> product_types (product_id, prod_type, prod_subtype)\n> product_availability (product_id, store_id, price_range)\n> and so on for each set of parameters.\n>\n> Then, if PG isn't calculating fast enough I'd be tempted to throw \n> in a summary table:\n> product_counts(store_id, price_range, prod_type, \n> prod_subtype, ..., num_products)\n> Then total over this for the top-level queries.\n>\n> I'd also cache common top-level queries at the applicaton level \n> anyway.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n\n\n",
"msg_date": "Mon, 10 Apr 2006 14:34:02 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OT: Data structure design question: How do they count so fast?"
}
] |
[
{
"msg_contents": "Hello!\n\n \n\nKindly go through the following ,\n\n \n\n \n\nI wanted to know whether, the command line arguments(function arguments)\n------ $1 $2 $3 -- can be used as in the following , like, ----\n\n \n\n \n\nCREATE TYPE TT\nAS(something,something,.................................................\n.................etc..................);\n\nCREATE OR REPLACE FUNCTION f1(varchar,varchar,varchar,varchar) RETURNS\n.......................(something).............\n\n \n\nBEGIN\n\n SELECT a1,a2,a3,a4,a5,a6\n\n FROM (SELECT * FROM T1, T2......WHERE ............etc... Flag = 0\n$1 $2 $3 $4)\n\n ORDER BY ........................\n\n\n...............................\n\nRETURN NEXT .........;\n\n END LOOP;\n\n RETURN;\n\nEND;\n\n' LANGUAGE 'plpgsql';\n\n \n\nNOTE : The values for $1 $2 $3 $4 will be passed when the function\nis invoked(called) from the command prompt.\n\n \n\nI tried implementing the above, but this type of usage is not\nsupported , how should use it? \n\n \n\n I am converting from (sprintf, \"SELECT query stmts (which uses %s %s\n%s %s ...... ) to functions.\n\n \n\n \n\nAny help will be deeply appreciated. Thank you.\n\n \n\n \n\nKind regards,\n\n Chethana.",
"msg_date": "Sun, 9 Apr 2006 02:27:57 -0700",
"msg_from": "\"Chethana, Rao (IE10)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pls reply ASAP"
},
{
"msg_contents": "On 4/9/06, Chethana, Rao (IE10) <[email protected]> wrote:\n>\n>\n>\n>\n> Hello!\n>\n>\n>\n> Kindly go through the following ,\n>\n>\n>\n>\n>\n> I wanted to know whether, the command line arguments(function arguments)\n------ $1 $2 $3 -- can be used as in the following , like, ----\n>\n>\n>\n>\n>\n> CREATE TYPE TT AS(something,something,…………………………………………………………etc………………);\n>\n> CREATE OR REPLACE FUNCTION f1(varchar,varchar,varchar,varchar) RETURNS\n…………………..(something)………….\n>\n\nthe overall idea expressed is doable.\nfollowing are comments\n\n1. you have to put RETURNS setof TT (if you plan to return TT) since you\nused RETURN NEXT\n2. you have to use SELECT INTO rec in the function where rec is rowtype TT\n\nhope it helps\n\n------- non technical comments\n------------------------------------------------------\n3. its not a performance question , it shud have been marked more\nappropriately to pgsql-sql i think.\n4. its not a good etiquette to address email to someone and mark Cc to a\nlist.\n\nkind regds\nmallah.\n\n>\n>\n> BEGIN\n>\n> SELECT a1,a2,a3,a4,a5,a6\n>\n> FROM (SELECT * FROM T1, T2……WHERE …………etc… Flag = 0 $1 $2 $3 $4)\n>\n> ORDER BY\n……………………\n>\n> ………………………….\n>\n> RETURN NEXT ………;\n>\n> END LOOP;\n>\n> RETURN;\n>\n> END;\n>\n> ' LANGUAGE 'plpgsql';\n>\n>\n>\n> NOTE : The values for $1 $2 $3 $4 will be passed when the function is\ninvoked(called) from the command prompt.\n>\n>\n>\n> I tried implementing the above, but this type of usage is not supported\n, how should use it?\n>\n>\n>\n> I am converting from (sprintf, \"SELECT query stmts (which uses %s %s\n%s %s …… ) to functions.\n>\n>\n>\n>\n>\n> Any help will be deeply appreciated. Thank you.\n>\n>\n>\n>\n>\n> Kind regards,\n>\n> Chethana.\n>\n>\n>\n>\n>\n>\n\nOn 4/9/06, Chethana, Rao (IE10) <[email protected]> wrote:> > > > > Hello! > > \n> > Kindly go through the following , > > > > > >\nI wanted to know whether, the command line arguments(function\narguments) ------ $1 $2 $3 -- can be\nused as in the following , like, ---- > > > > > > CREATE TYPE TT AS(something,something,…………………………………………………………etc………………); > > CREATE OR REPLACE FUNCTION f1(varchar,varchar,varchar,varchar) RETURNS …………………..(something)…………. \n>\n\nthe overall idea expressed is doable.\nfollowing are comments\n\n1. you have to put RETURNS setof TT (if you plan to return TT) since you used RETURN NEXT\n2. you have to use SELECT INTO rec in the function where rec is rowtype TT\n\nhope it helps\n\n------- non technical comments ------------------------------------------------------\n3. its not a performance question , it shud have been marked more appropriately to pgsql-sql i think.\n4. its not a good etiquette to address email to someone and mark Cc to a list.\n\nkind regds\nmallah.\n> > > BEGIN > > SELECT a1,a2,a3,a4,a5,a6 > > FROM (SELECT * FROM T1, T2……WHERE …………etc… Flag = 0 $1 $2 $3 $4) > > ORDER\nBY\n…………………… > > …………………………. > > RETURN NEXT ………; > > END LOOP; > > RETURN; > > END; \n> > ' LANGUAGE 'plpgsql'; > > > >\nNOTE : The values for $1 $2 $3\n$4 will be passed when the function is\ninvoked(called) from the command prompt. > > > >\nI tried implementing the above, but this type of\nusage is not supported , how should use it? > > > > I\nam converting from (sprintf, \"SELECT query stmts (which\nuses %s %s %s %s ……\n) to\nfunctions. > > > > > > Any help will be deeply appreciated. Thank you. > > > > > > Kind regards, > > Chethana. \n> > > > > >",
"msg_date": "Sun, 9 Apr 2006 21:04:43 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pls reply ASAP"
}
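A minimal sketch of the kind of function being discussed, following the advice above (RETURNS SETOF plus RETURN NEXT); the type, table, and column names (tt, t1, c1..c4, a1, a2, flag) are placeholders rather than names from the original post:

    CREATE TYPE tt AS (a1 integer, a2 varchar);

    CREATE OR REPLACE FUNCTION f1(varchar, varchar, varchar, varchar)
    RETURNS SETOF tt AS '
    DECLARE
        rec RECORD;
    BEGIN
        FOR rec IN
            SELECT a1, a2
              FROM t1
             WHERE flag = 0
               AND c1 = $1 AND c2 = $2 AND c3 = $3 AND c4 = $4
             ORDER BY a1
        LOOP
            RETURN NEXT rec;
        END LOOP;
        RETURN;
    END;
    ' LANGUAGE 'plpgsql';

    -- the argument values are supplied at call time, e.g.:
    -- SELECT * FROM f1('a', 'b', 'c', 'd');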
] |
[
{
"msg_contents": "It'd be helpful if posted with the EXPLAIN of the slow running queries on\nthe respective table.\n\ncool.\nL.\n\n\nOn 4/9/06, Doron Baranes <[email protected]> wrote:\n>\n> Hi\n>\n>\n>\n> I am new at postgres and I'm having performance issues.\n>\n> I am running on postgres 7.4.6 on a pineapp with 512MB RAM.\n>\n> I did a database vacuum analyze and rebuild my indexes.\n>\n> When I perform queries on tables of 2M-10M of rows it takes several\n> minutes and\n>\n> I see at sar and top that the cpu and memory is heavily used.\n>\n>\n>\n> I would be glad for guidance on server parameters or other configurations\n> which would help.\n>\n>\n>\n> 10x.\n>\n>\n> Doron.\n>\n\nIt'd be helpful if posted with the EXPLAIN of the slow running queries on the respective table.\n \ncool.\nL.\n \nOn 4/9/06, Doron Baranes <[email protected]> wrote:\n\n\n\nHi\n \nI am new at postgres and I'm having performance issues.\nI am running on postgres 7.4.6 on a pineapp with 512MB RAM.\nI did a database vacuum analyze and rebuild my indexes.\nWhen I perform queries on tables of 2M-10M of rows it takes several minutes and\n\nI see at sar and top that the cpu and memory is heavily used.\n \nI would be glad for guidance on server parameters or other configurations which would help.\n\n \n10x.\nDoron.",
"msg_date": "Sun, 9 Apr 2006 14:42:36 +0400",
"msg_from": "Luckys <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Hi\n\n \n\nI am new at postgres and I'm having performance issues.\n\nI am running on postgres 7.4.6 on a pineapp with 512MB RAM.\n\nI did a database vacuum analyze and rebuild my indexes.\n\nWhen I perform queries on tables of 2M-10M of rows it takes several\nminutes and\n\nI see at sar and top that the cpu and memory is heavily used.\n\n \n\nI would be glad for guidance on server parameters or other\nconfigurations which would help.\n\n \n\n10x.\n\n\nDoron.\n\n\n\n\n\n\n\n\n\n\nHi\n \nI\nam new at postgres and I'm having performance issues.\nI\nam running on postgres 7.4.6 on a pineapp with 512MB RAM.\nI\ndid a database vacuum analyze and rebuild my indexes.\nWhen\nI perform queries on tables of 2M-10M of rows it takes several minutes and\nI\nsee at sar and top that the cpu and memory is heavily used.\n \nI\nwould be glad for guidance on server parameters or other configurations which\nwould help.\n \n10x.\n\nDoron.",
"msg_date": "Sun, 9 Apr 2006 12:47:52 +0200",
"msg_from": "\"Doron Baranes\" <[email protected]>",
"msg_from_op": false,
"msg_subject": ""
},
{
"msg_contents": "On sun, 2006-04-09 at 12:47 +0200, Doron Baranes wrote:\n> Hi\n> \n\n> I am running on postgres 7.4.6 on a pineapp with 512MB RAM.\n> \n> I did a database vacuum analyze and rebuild my indexes.\n\nIf you have previously done a lot of deletes or updates\nwithout regular vacuums, you may have to do a\n VACUUM FULL ANALYZE\nonce to get the table into normal state.\n\nAfter this, regular normal VACUUM ANALYZE should be\nenough.\n\n> When I perform queries on tables of 2M-10M of rows it takes several\n> minutes and\n\nWe would need to see the output of EXPLAIN ANALYZE\nfor your query, along with some information about\nthe schema of the tables involved, such as what indexes\nhave been created.\n\nAlso, let us know about any non-default configuration. \n\ngnari\n\n\n",
"msg_date": "Sun, 09 Apr 2006 11:10:16 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
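For reference, the maintenance and diagnostic commands being suggested look roughly like this; mytable and some_column are placeholder names, not objects from the original report:

    -- one-off recovery of a heavily updated/deleted table (takes an exclusive lock)
    VACUUM FULL ANALYZE mytable;

    -- routine maintenance from then on
    VACUUM ANALYZE mytable;

    -- show the plan plus actual row counts and timings for a slow query
    EXPLAIN ANALYZE SELECT count(*) FROM mytable WHERE some_column = 42;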
] |
[
{
"msg_contents": "Hello,\n\nwe have some performance problems with postgres 8.0.4, more precisely\nwith vacuuming 'large' database with a lot of deleted rows.\n\nWe had a 3.2 GB database, consisting mainly from 4 large tables, two of\nthem (say table A and B) having about 14.000.000 of rows and 1 GB of\nsize each, and two (say C and D) having about 4.000.000 of rows and 500\nMB each. The rest of the database is not important.\n\nWe've decided to remove unneeded 'old' data, which means removing about\n99.999% of rows from tables A, C and D (about 2 GB of data). At the\nbeginning, the B table (containing aggregated from A, C and D) was\nemptied (dropped and created) and filled in with current data. Then,\nbefore the deletion the data from tables A, C, D were backed up using\nanother tables (say A_old, C_old, D_old) filled in using\n\n INSERT INTO A SELECT * FROM A_old ...\n\nand fixed so there are no duplicities (rows both in A and A_old). Then\nthese data were deleted from A, C, D and tables A_old, C_old and D_old\nwere dumped, truncated and all the tables were vacuumed (with FULL\nANALYZE options). So the procedure was this\n\n1) drop, create and fill table B (aggregated data from A, C, D)\n2) copy 'old' data from A, C and D to A_old, C_old a D_old\n3) delete old data from A, C, D\n4) dump data from A_old, C_old and D_old\n5) truncate tables A, C, D\n6) vacuum full analyze tables A, C, D, A_old, C_old and D_old\n\nSo the dump of the fatabase has about 1.2 GB of data, from which about\n1 GB is in the B table (the one rebuilt in step 1). This was done yesterday.\n\nThe problem is this - today, we run a scheduled VACUUM FULL ANALYZE for\nthe whole database, and it runs for about 10 hours already, which is\nmuch more than usual (and it is still running).\n\nThe hardware is not too bad - it's Dell server with 2 x 3.0 GHz P4 HT,\n4GB of RAM, 2x15k SCSI drives in hw RAID etc.\n\nThe question is why this happens and how to get round that. I guess it's\ncaused by a huge amount of data deleted yesterday, but on the other side\nall the modified tables were vacuumed at the end. But I guess dropping\nand reloading the whole database would be much faster (at most 1.5 hour\nincluding creating indexes etc.)\n\nthanks for your advices\nTomas\n",
"msg_date": "Sun, 09 Apr 2006 20:22:51 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "serious problems with vacuuming databases"
},
{
"msg_contents": "Tomas Vondra wrote:\n\nHi,\n\n> Then\n> these data were deleted from A, C, D and tables A_old, C_old and D_old\n> were dumped, truncated and all the tables were vacuumed (with FULL\n> ANALYZE options). So the procedure was this\n> \n> 1) drop, create and fill table B (aggregated data from A, C, D)\n> 2) copy 'old' data from A, C and D to A_old, C_old a D_old\n> 3) delete old data from A, C, D\n> 4) dump data from A_old, C_old and D_old\n> 5) truncate tables A, C, D\n> 6) vacuum full analyze tables A, C, D, A_old, C_old and D_old\n> \n> So the dump of the fatabase has about 1.2 GB of data, from which about\n> 1 GB is in the B table (the one rebuilt in step 1). This was done yesterday.\n> \n> The problem is this - today, we run a scheduled VACUUM FULL ANALYZE for\n> the whole database, and it runs for about 10 hours already, which is\n> much more than usual (and it is still running).\n\nProbably the indexes are bloated after the vacuum full. I think the\nbest way to get rid of the \"fat\" is to recreate both tables and indexes\nanew. For this the best tool would be to CLUSTER the tables on some\nindex, probably the primary key. This will be much faster than\nVACUUMing the tables, and the indexes will be much smaller as result.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Sun, 9 Apr 2006 14:37:47 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serious problems with vacuuming databases"
},
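A sketch of the CLUSTER approach in the 8.0-era syntax; the table name a and index name a_pkey stand in for the bloated tables and their primary keys:

    -- rewrite the table in primary-key order; the heap and all of its
    -- indexes are rebuilt compactly as part of the operation
    CLUSTER a_pkey ON a;

    -- CLUSTER does not update planner statistics, so analyze afterwards
    ANALYZE a;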
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> 1) drop, create and fill table B (aggregated data from A, C, D)\n> 2) copy 'old' data from A, C and D to A_old, C_old a D_old\n> 3) delete old data from A, C, D\n> 4) dump data from A_old, C_old and D_old\n> 5) truncate tables A, C, D\n> 6) vacuum full analyze tables A, C, D, A_old, C_old and D_old\n\nSteps 3/5/6 make no sense at all to me: why bother deleting data retail\nwhen you are about to truncate the tables, and why bother vacuuming a\ntable you just truncated? Is the above *really* what you did?\n\n> The problem is this - today, we run a scheduled VACUUM FULL ANALYZE for\n> the whole database, and it runs for about 10 hours already, which is\n> much more than usual (and it is still running).\n\nIs it actually grinding the disk, or is it just blocked waiting for\nsomeone's lock? If it's actually doing work, which table is it working\non? (You should be able to figure that out by looking in pg_locks,\nor by strace'ing the process to see which files it's touching.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 Apr 2006 14:53:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serious problems with vacuuming databases "
},
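One generic way to check whether the VACUUM is blocked on a lock rather than doing work, roughly along the lines suggested here; the query is an illustration, not one taken from the thread:

    -- list the locks each backend holds or is waiting for
    SELECT pid, relation::regclass AS rel, mode, granted
      FROM pg_locks
     WHERE relation IS NOT NULL
     ORDER BY pid;

    -- an ungranted lock (granted = false) on one of the vacuumed tables
    -- would mean the VACUUM is waiting rather than grinding the disk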
{
"msg_contents": "Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> 1) drop, create and fill table B (aggregated data from A, C, D)\n>> 2) copy 'old' data from A, C and D to A_old, C_old a D_old\n>> 3) delete old data from A, C, D\n>> 4) dump data from A_old, C_old and D_old\n>> 5) truncate tables A, C, D\n>> 6) vacuum full analyze tables A, C, D, A_old, C_old and D_old\n> \n> Steps 3/5/6 make no sense at all to me: why bother deleting data retail\n> when you are about to truncate the tables, and why bother vacuuming a\n> table you just truncated? Is the above *really* what you did?\n\nYes, the above is exactly what I did with the exception that there's an\nerror in the step (5) - there should be truncation of the _old tables.\nThe reasons that led me to this particular steps are two:\n\n(a) I don't want to delete all the data, just data older than two days.\n Until today we've kept all the data (containing two years access log\n for one of our production websites), but now we've decided to remove\n the data we don't need and leave just the aggregated version. That's\n why I have used DELETE rather than TRUNCATE.\n\n(b) I want to create 'incremental' backups, so once I'll need the data\n I can take several packages (dumps of _old tables) and import them\n one after another. Using pg_dump doesn't allow me this - dumping the\n whole tables A, C and D is not an option, because I want to leave\n some of the data in the tables.\n\n From now on, the tables will be cleared on a daily (or maybe weekly)\n basis, which means much smaller amount of data (about 50.000 rows\n a day).\n> \n>> The problem is this - today, we run a scheduled VACUUM FULL ANALYZE for\n>> the whole database, and it runs for about 10 hours already, which is\n>> much more than usual (and it is still running).\n> \n> Is it actually grinding the disk, or is it just blocked waiting for\n> someone's lock? If it's actually doing work, which table is it working\n> on? (You should be able to figure that out by looking in pg_locks,\n> or by strace'ing the process to see which files it's touching.)\n\nThanks for the hint, I'll try to figure that in case the dump/reload\nrecommended by Alvaro Herrera doesn't help. But as far as I know the\ndisks are not grinded right now, so I guess it's the problem with indexes.\n\nt.v.\n",
"msg_date": "Sun, 09 Apr 2006 22:33:58 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serious problems with vacuuming databases"
},
{
"msg_contents": "> Probably the indexes are bloated after the vacuum full. I think the\n> best way to get rid of the \"fat\" is to recreate both tables and indexes\n> anew. For this the best tool would be to CLUSTER the tables on some\n> index, probably the primary key. This will be much faster than\n> VACUUMing the tables, and the indexes will be much smaller as result.\n\nI guess you're right. I forgot to mention there are 12 composed indexes\non the largest (and not deleted) table B, having about 14.000.000 rows\nand 1 GB of data. I'll try to dump/reload the database ...\n\nt.v.\n",
"msg_date": "Sun, 09 Apr 2006 22:37:05 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serious problems with vacuuming databases"
},
{
"msg_contents": "> I guess you're right. I forgot to mention there are 12 composed indexes\n> on the largest (and not deleted) table B, having about 14.000.000 rows\n> and 1 GB of data. I'll try to dump/reload the database ...\n\nAaargh, the problem probably is not caused by the largest table, as it\nwas dropped, filled in with the data and after that all the indexes were\ncreated. The problem could be caused by the tables with deleted data, of\ncourse.\n\nt.v.\n",
"msg_date": "Sun, 09 Apr 2006 22:44:51 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serious problems with vacuuming databases"
},
{
"msg_contents": "Tomas Vondra wrote:\n> > Probably the indexes are bloated after the vacuum full. I think the\n> > best way to get rid of the \"fat\" is to recreate both tables and indexes\n> > anew. For this the best tool would be to CLUSTER the tables on some\n> > index, probably the primary key. This will be much faster than\n> > VACUUMing the tables, and the indexes will be much smaller as result.\n> \n> I guess you're right. I forgot to mention there are 12 composed indexes\n> on the largest (and not deleted) table B, having about 14.000.000 rows\n> and 1 GB of data. I'll try to dump/reload the database ...\n\nHuh, I didn't suggest to dump/reload. I suggested CLUSTER. You need to\napply it only to tables where you have lots of dead tuples, which IIRC\nare A, C and D.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Sun, 9 Apr 2006 16:45:55 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serious problems with vacuuming databases"
},
{
"msg_contents": "> Huh, I didn't suggest to dump/reload. I suggested CLUSTER. You need to\n> apply it only to tables where you have lots of dead tuples, which IIRC\n> are A, C and D.\n\nSorry, I should read more carefully. Will clustering a table according\nto one index solve problems with all the indexes on the table (if the\ntable has for example two indexes?).\n\nt.v.\n",
"msg_date": "Sun, 09 Apr 2006 23:49:22 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serious problems with vacuuming databases"
},
{
"msg_contents": "Tomas Vondra wrote:\n> > Huh, I didn't suggest to dump/reload. I suggested CLUSTER. You need to\n> > apply it only to tables where you have lots of dead tuples, which IIRC\n> > are A, C and D.\n> \n> Sorry, I should read more carefully. Will clustering a table according\n> to one index solve problems with all the indexes on the table (if the\n> table has for example two indexes?).\n\nYes, it will rebuild all indexes.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Sun, 9 Apr 2006 18:01:53 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serious problems with vacuuming databases"
},
{
"msg_contents": "Hi Tomas,\n\nTomas wrote:\nWe've decided to remove unneeded 'old' data, which means removing about\n99.999% of rows from tables A, C and D (about 2 GB of data). At the\nbeginning, the B table (containing aggregated from A, C and D) was emptied\n(dropped and created) and filled in with current data. Then, before the\ndeletion the data from tables A, C, D were backed up using another tables\n(say A_old, C_old, D_old) filled in using\n.....\n1) drop, create and fill table B (aggregated data from A, C, D)\n2) copy 'old' data from A, C and D to A_old, C_old a D_old\n3) delete old data from A, C, D\n4) dump data from A_old, C_old and D_old\n5) truncate tables A, C, D\n6) vacuum full analyze tables A, C, D, A_old, C_old and D_old\n----\n\nI think you do some difficult database maintainance. Why you do that, if you\njust want to have some small piece of datas from your tables. Why don't you\ntry something like:\n1. create table A with no index (don't fill data to this table), \n2. create table A_week_year inherit table A, with index you want, and some\ncondition for insertion. (eg: table A1 you used for 1 week data of a year\nand so on..)\n3. do this step for table B, C and D\n4. if you have relation, make the relation to inherit table (optional).\n\nI think you should read the postgresql help, for more information about\ntable inheritance.\n\nThe impact is, you might have much table. But each table will only have\nsmall piece of datas, example: just for one week. And you don't have to do a\ndifficult database maintainance like you have done. You just need to create\ntables for every week of data, do vacuum/analyze and regular backup.\n\n\nBest regards,\nahmad fajar,\n\n\n",
"msg_date": "Mon, 10 Apr 2006 13:54:56 +0700",
"msg_from": "\"Ahmad Fajar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serious problems with vacuuming databases"
},
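A rough illustration of the inheritance-based partitioning being described; all table, column, and index names and the weekly date range are invented for the example, not taken from the poster's schema:

    -- parent holds the common structure but (ideally) no rows
    CREATE TABLE a (
        id      bigint,
        stamp   timestamp,
        payload text
    );

    -- one child table per week, with its own check constraint and indexes
    CREATE TABLE a_week_15 (
        CHECK (stamp >= '2006-04-10' AND stamp < '2006-04-17')
    ) INHERITS (a);

    CREATE INDEX a_week_15_stamp_idx ON a_week_15 (stamp);

    -- queries against the parent see all children; old weeks can be
    -- archived or dropped as whole tables instead of DELETEd row by row
    SELECT count(*) FROM a WHERE stamp >= '2006-04-10';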
{
"msg_contents": "> Hi Tomas,\n> \n> Tomas wrote:\n> We've decided to remove unneeded 'old' data, which means removing about\n> 99.999% of rows from tables A, C and D (about 2 GB of data). At the\n> beginning, the B table (containing aggregated from A, C and D) was emptied\n> (dropped and created) and filled in with current data. Then, before the\n> deletion the data from tables A, C, D were backed up using another tables\n> (say A_old, C_old, D_old) filled in using\n> .....\n> 1) drop, create and fill table B (aggregated data from A, C, D)\n> 2) copy 'old' data from A, C and D to A_old, C_old a D_old\n> 3) delete old data from A, C, D\n> 4) dump data from A_old, C_old and D_old\n> 5) truncate tables A, C, D\n> 6) vacuum full analyze tables A, C, D, A_old, C_old and D_old\n> ----\n> \n> I think you do some difficult database maintainance. Why you do that, if you\n> just want to have some small piece of datas from your tables. Why don't you\n> try something like:\n> 1. create table A with no index (don't fill data to this table), \n> 2. create table A_week_year inherit table A, with index you want, and some\n> condition for insertion. (eg: table A1 you used for 1 week data of a year\n> and so on..)\n> 3. do this step for table B, C and D\n> 4. if you have relation, make the relation to inherit table (optional).\n> \n> I think you should read the postgresql help, for more information about\n> table inheritance.\n> \n> The impact is, you might have much table. But each table will only have\n> small piece of datas, example: just for one week. And you don't have to do a\n> difficult database maintainance like you have done. You just need to create\n> tables for every week of data, do vacuum/analyze and regular backup.\n> \n> \n> Best regards,\n> ahmad fajar,\n\nThanks for your advice, but I've read the sections about inheritance and\nI don't see a way how to use that in my case, as I think the inheritance\ntakes care about the structure, not about the data.\n\nBut I've read a section about partitioning (using inheritance) too, and\nit seems useful. I'll try to solve the performance issues using this.\n\nThanks for your advices\nTomas\n",
"msg_date": "Mon, 24 Apr 2006 21:00:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serious problems with vacuuming databases"
}
] |
[
{
"msg_contents": "I have a slow sql:\nSELECT * FROM mytable WHERE id IN (1,3,5,7,....3k here...);\nmytable is about 10k rows.\n\nif don't use the \"IN\" clause, it will cost 0,11 second, otherwise it\nwill cost 2.x second\nI guess pg use linear search to deal with IN clause, is there any way\nto let pg use other search method with IN clause? (ex.Binary Search or\nhash Search)\n\n",
"msg_date": "9 Apr 2006 20:43:40 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "slow \"IN\" clause"
},
{
"msg_contents": "\n<[email protected]> wrote\n> I have a slow sql:\n> SELECT * FROM mytable WHERE id IN (1,3,5,7,....3k here...);\n> mytable is about 10k rows.\n>\n> if don't use the \"IN\" clause, it will cost 0,11 second, otherwise it\n> will cost 2.x second\n> I guess pg use linear search to deal with IN clause, is there any way\n> to let pg use other search method with IN clause? (ex.Binary Search or\n> hash Search)\n>\n\nIf you can put (1, 3, .., 3k) in a table, PG may choose a hash join.\n\nRegards,\nQingqing\n\n\n",
"msg_date": "Mon, 10 Apr 2006 12:44:46 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow \"IN\" clause"
},
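A sketch of the join-instead-of-IN idea, loading the id list into a temporary table first; the table and the values shown are invented for illustration:

    -- load the 3000-odd ids once
    CREATE TEMP TABLE wanted_ids (id integer);
    INSERT INTO wanted_ids VALUES (1);
    INSERT INTO wanted_ids VALUES (3);
    INSERT INTO wanted_ids VALUES (5);
    -- ... and so on for the rest of the list (or use COPY)
    ANALYZE wanted_ids;

    -- the planner can then hash-join instead of testing a long IN list per row
    SELECT m.*
      FROM mytable m
      JOIN wanted_ids w ON w.id = m.id;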
{
"msg_contents": "On lun, 2006-04-10 at 12:44 +0800, Qingqing Zhou wrote: \n> <[email protected]> wrote\n> > I have a slow sql:\n> > SELECT * FROM mytable WHERE id IN (1,3,5,7,....3k here...);\n> > mytable is about 10k rows.\n> >\n> > if don't use the \"IN\" clause, it will cost 0,11 second, otherwise it\n> > will cost 2.x second\n> > I guess pg use linear search to deal with IN clause, is there any way\n> > to let pg use other search method with IN clause? (ex.Binary Search or\n> > hash Search)\n> >\n> \n> If you can put (1, 3, .., 3k) in a table, PG may choose a hash join.\n\nAnd maybe using\n\nSELECT * FROM yourtable WHERE id < 6002 AND id % 2 = 1;\n\nturns out to be faster, if we are allowed to extrapolate from the\nexample. \n\nV.\n\n",
"msg_date": "Tue, 11 Apr 2006 01:02:28 -0400",
"msg_from": "Vinko Vrsalovic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow \"IN\" clause"
}
] |
[
{
"msg_contents": "Hello,\n\nI have difficulty in fetching the records from the database.\nDatabase table contains more than 1 GB data.\nFor fetching the records it is taking more the 1 hour and that's why it is\nslowing down the performance.\nplease provide some help regarding improving the performance and how do I\nrun query so that records will be fetched in a less time.\n\nHello,\n \nI have difficulty in fetching the records from the database.\nDatabase table contains more than 1 GB data.\nFor fetching the records it is taking more the 1 hour and that's why it is slowing down the performance.\nplease provide some help regarding improving the performance and how do I run query so that records will be fetched in a less time.",
"msg_date": "Mon, 10 Apr 2006 12:51:18 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Takes too long to fetch the data from database"
},
{
"msg_contents": "what is the query ?\nuse LIMIT or a restricting where clause.\n\n\nregds\nmallah.\n\nOn 4/10/06, soni de <[email protected]> wrote:\n>\n> Hello,\n>\n> I have difficulty in fetching the records from the database.\n> Database table contains more than 1 GB data.\n> For fetching the records it is taking more the 1 hour and that's why it is\n> slowing down the performance.\n> please provide some help regarding improving the performance and how do I\n> run query so that records will be fetched in a less time.\n>\n\nwhat is the query ?use LIMIT or a restricting where clause.regdsmallah.On 4/10/06, soni de <\[email protected]> wrote:Hello,\n \nI have difficulty in fetching the records from the database.\nDatabase table contains more than 1 GB data.\nFor fetching the records it is taking more the 1 hour and that's why it is slowing down the performance.\nplease provide some help regarding improving the performance and how do I run query so that records will be fetched in a less time.",
"msg_date": "Mon, 10 Apr 2006 21:28:30 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "Rajesh Kumar Mallah wrote:\n> \n> what is the query ?\n> use LIMIT or a restricting where clause.\n\nYou could also use a cursor.\n\nJoshua D. Drake\n> \n> \n> regds\n> mallah.\n> \n> On 4/10/06, *soni de* < [email protected] <mailto:[email protected]>> wrote:\n> \n> Hello,\n> \n> I have difficulty in fetching the records from the database.\n> Database table contains more than 1 GB data.\n> For fetching the records it is taking more the 1 hour and that's why\n> it is slowing down the performance.\n> please provide some help regarding improving the performance and how\n> do I run query so that records will be fetched in a less time.\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 10 Apr 2006 11:20:04 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
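A minimal sketch of reading a large result set in batches through a cursor, as suggested above; mytable and some_column are placeholders:

    BEGIN;

    DECLARE big_read CURSOR FOR
        SELECT * FROM mytable ORDER BY some_column;

    FETCH 1000 FROM big_read;   -- process the batch, then repeat
    FETCH 1000 FROM big_read;   -- ... until a fetch returns no rows

    CLOSE big_read;
    COMMIT;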
{
"msg_contents": "I have flushed the database, so currently records in the \"lan\" table are:\n665280\n\nbut records can be increased more than 1GB and in that case it takes more\nthan 1 hour\n\n\n\nBelow is explain analyze output taken from the table having 665280 records\n\n\n\npdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (\n\n ( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate\n\n >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime\n;\n\nNOTICE: QUERY PLAN:\n\nSort (cost=17.13..17.13 rows=1 width=16) (actual time=619140.18..619140.29rows\n\n=288 loops=1)\n\n -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16)\n(ac\n\ntual time=7564.44..619121.61 rows=288 loops=1)\n\nTotal runtime: 619140.76 msec\n\n\n\nEXPLAIN\n\n\n\nbsdb=# explain analyze SELECT DISTINCT sdate, stime, rbts from lan\n\n WHERE ( ( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate\n>= '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) )\n\n ORDER BY sdate, stime ;\n\nNOTICE: QUERY PLAN:\n\n\n\nUnique (cost=17.13..17.14 rows=1 width=16) (actual time=\n610546.66..610564.31 rows=288 loops=1)\n\n -> Sort (cost=17.13..17.13 rows=1 width=16) (actual time=\n610546.65..610546.75 rows=288 loops=1)\n\n -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1\nwidth=16) (actual time=7524.47..610533.50 rows=288 loops=1)\n\nTotal runtime: 610565.51 msec\n\n\n\nEXPLAIN\n\n\n\npdb=# explain analyze SELECT ALL sdate, stime, rbts from lan WHERE ( (\nbname = 'neptune' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate >=\n'2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime ;\n\nNOTICE: QUERY PLAN:\n\n\n\nSort (cost=17.13..17.13 rows=1 width=16) (actual time=\n1260756.66..1260756.76 rows=288 loops=1)\n\n -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16)\n(actual time=7725.97..1260752.47 rows=288 loops=1)\n\nTotal runtime: 1260757.09 msec\n\n\n\n\n\npdb=# \\d lan\n\n Table \"lan\"\n\n Column | Type | Modifiers\n\n------------------+-----------------------+-----------\n\n bname | character varying(64) | not null\n\n sdate | date | not null\n\n stime | integer | not null\n\n cno | smallint | not null\n\n pno | smallint | not null\n\n rbts | bigint |\n\n tbts | bigint |\n\n u_inpkt | bigint |\n\n u_outpkt | bigint |\n\n m_inpkt | bigint |\n\n m_outpkt | bigint |\n\n b_inpkt | bigint |\n\n b_outpkt | bigint |\n\nPrimary key: lan_pkey\n\nCheck constraints: \"lan_stime\" ((stime >= 0) AND (stime < 86400))\n\n\nOn 4/10/06, Joshua D. Drake <[email protected]> wrote:\n>\n> Rajesh Kumar Mallah wrote:\n> >\n> > what is the query ?\n> > use LIMIT or a restricting where clause.\n>\n> You could also use a cursor.\n>\n> Joshua D. Drake\n> >\n> >\n> > regds\n> > mallah.\n> >\n> > On 4/10/06, *soni de* < [email protected] <mailto:[email protected]>>\n> wrote:\n> >\n> > Hello,\n> >\n> > I have difficulty in fetching the records from the database.\n> > Database table contains more than 1 GB data.\n> > For fetching the records it is taking more the 1 hour and that's why\n> > it is slowing down the performance.\n> > please provide some help regarding improving the performance and how\n> > do I run query so that records will be fetched in a less time.\n> >\n> >\n>\n>\n> --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. 
===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n>\n>",
"msg_date": "Tue, 11 Apr 2006 12:35:27 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "soni de wrote:\n> I have flushed the database, so currently records in the \"lan\" table are:\n> 665280\n> \n> but records can be increased more than 1GB and in that case it takes more\n> than 1 hour\n> \n> Below is explain analyze output taken from the table having 665280 records\n> \n> pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (\n> ( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate\n> >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime\n> ;\n> \n> NOTICE: QUERY PLAN:\n> Sort (cost=17.13..17.13 rows=1 width=16) (actual time=619140.18..619140.29rows\n> =288 loops=1)\n> -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16)\n> (actual time=7564.44..619121.61 rows=288 loops=1)\n> \n> Total runtime: 619140.76 msec\n\nOK - there is clearly something wrong here when you take 10 minutes to \nfetch 288 rows from an index.\n\n1. VACUUM FULL VERBOSE lan;\n2. test again, and if that doesn't work...\n3. REINDEX TABLE lan;\n4. test again\n\nI'm guessing you have a *lot* of dead rows in there.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 11 Apr 2006 11:17:01 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "> pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (\n>\n> ( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate\n>\n> >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime\n> ;\n\nthis query would benefit from an index on\npluto, cno, pno, sdate\n\ncreate index Ian_idx on Ian(bname, cno, pno, sdate);\n\n\n> pdb=# explain analyze SELECT ALL sdate, stime, rbts from lan WHERE ( (\n> bname = 'neptune' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate >=\n> '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime ;\n\nditto above. Generally, the closer the fields in the where clause are\nmatched by the index, the it will speed up your query.\n\nMerlin\n",
"msg_date": "Tue, 11 Apr 2006 09:13:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> soni de wrote:\n>> NOTICE: QUERY PLAN:\n>> Sort (cost=17.13..17.13 rows=1 width=16) (actual time=619140.18..619140.29rows\n>> =288 loops=1)\n>> -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16)\n>> (actual time=7564.44..619121.61 rows=288 loops=1)\n>> \n>> Total runtime: 619140.76 msec\n\n> OK - there is clearly something wrong here when you take 10 minutes to \n> fetch 288 rows from an index.\n\n> I'm guessing you have a *lot* of dead rows in there.\n\nYeah. The other small problem here is that EXPLAIN output hasn't looked\nlike that since PG 7.2 (unless Soni has just omitted the index-condition\nlines). I'd recommend updating to something modern.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Apr 2006 09:45:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database "
},
{
"msg_contents": "Please provide me some help regarding how could I use cursor in following\ncases? :\n\n\n\nI want to fetch 50 records at a time starting from largest stime.\n\n\n\nTotal no. of records in the \"wan\" table: 82019\n\n\n\npdb=# \\d wan\n\n Table \"wan\"\n\n Column | Type | Modifiers\n\n-------------+--------------------------+-----------\n\n stime | bigint | not null\n\n kname | character varying(64) |\n\n eid | smallint |\n\n rtpe | smallint |\n\n taddr | character varying(16) |\n\n ntime | bigint |\n\n Primary key: wan_pkey\n\n\n\nstime is the primary key.\n\n\n\npdb=#\n\n\n\nSELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n\n\n\npdb=# explain analyze SELECT * FROM wan ORDER BY stime LIMIT 50\n\n OFFSET 81900;\n\nNOTICE: QUERY PLAN:\n\n\n\nLimit (cost=17995.15..17995.15 rows=50 width=95) (actual time=\n9842.92..9843.20\n\nrows=50 loops=1)\n\n -> Sort (cost=17995.15..17995.15 rows=82016 width=95) (actual time=\n9364.56..\n\n9793.00 rows=81951 loops=1)\n\n -> Seq Scan on wan (cost=0.00..3281.16 rows=82016 width=95) (actu\n\nal time=0.11..3906.29 rows=82019 loops=1)\n\nTotal runtime: 10010.76 msec\n\n\n\nEXPLAIN\n\npdb=#\n\n\n\n\n\nSELECT * FROM wan where kname='pluto' ORDER BY stime LIMIT 50 OFFSET 81900;\n\n\n\npdb=# explain analyze SELECT * from wan where kname='pluto' order by stime\nlimit 50 offset 81900;\n\nNOTICE: QUERY PLAN:\n\n\n\nLimit (cost=3494.13..3494.13 rows=1 width=95) (actual\ntime=9512.85..9512.85rows=0 loops=1)\n\n -> Sort (cost=3494.13..3494.13 rows=206 width=95) (actual time=\n9330.74..9494.90 rows=27485 loops=1)\n\n -> Seq Scan on wan (cost=0.00..3486.20 rows=206 width=95) (actual\ntime=0.28..4951.76 rows=27485 loops=1)\n\nTotal runtime: 9636.96 msec\n\n\n\nEXPLAIN\n\n\n\nSELECT * FROM wan where kname='pluto' and rtpe=20 ORDER BY stime LIMIT 50\nOFFSET 81900;\n\n\n\npdb=# explain analyze SELECT * from wan where kname='pluto' and rtpe = 20\norder by stime limit 50 offset 81900;\n\nNOTICE: QUERY PLAN:\n\n\n\nLimit (cost=3691.25..3691.25 rows=1 width=95) (actual\ntime=7361.50..7361.50rows=0 loops=1)\n\n -> Sort (cost=3691.25..3691.25 rows=1 width=95) (actual time=\n7361.50..7361.50 rows=0 loops=1)\n\n -> Seq Scan on wan (cost=0.00..3691.24 rows=1 width=95) (actual\ntime=7361.30..7361.30 rows=0 loops=1)\n\nTotal runtime: 7361.71 msec\n\n\n\nEXPLAIN\n\npdb=#\n\n\n\nall the above queries taking around 7~10 sec. to fetch the last 50 records.\nI want to reduce this time because table is growing and table can contain\nmore than 1 GB data then for 1 GB data above queries will take too much\ntime.\n\n\n\nI am not getting how to use cursor to fetch records starting from last\nrecords in the above case offset can be any number (less than total no. 
of\nrecords).\n\n\n\nI have used the following cursor, but it takes the same time as the query:\n\n\n\nBEGIN;\n\nDECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET\n81900;\n\nFETCH ALL in crs;\n\nCLOSE crs;\n\nCOMMIT;\n\n\n\n\nOn 4/11/06, Merlin Moncure <[email protected]> wrote:\n>\n> > pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (\n> >\n> > ( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate\n> >\n> > >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate,\n> stime\n> > ;\n>\n> this query would benefit from an index on\n> bname, cno, pno, sdate\n>\n> create index lan_idx on lan(bname, cno, pno, sdate);\n>\n>\n> > pdb=# explain analyze SELECT ALL sdate, stime, rbts from lan WHERE ( (\n> > bname = 'neptune' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate >=\n> > '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime\n> ;\n>\n> ditto above. Generally, the closer the fields in the where clause are\n> matched by the index, the more it will speed up your query.\n>\n> Merlin\n>",
"msg_date": "Thu, 20 Apr 2006 11:07:31 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "On Thu, Apr 20, 2006 at 11:07:31 +0530,\n soni de <[email protected]> wrote:\n> Please provide me some help regarding how could I use cursor in following\n> cases? :\n> \n> I want to fetch 50 records at a time starting from largest stime.\n> \n> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n\nSomething like the following may be faster:\nSELECT * FROM wan ORDER BY stime DESC LIMIT 50;\n",
"msg_date": "Thu, 20 Apr 2006 01:30:14 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n\nyou need to try and solve the problem without using 'offset'. you could do:\nBEGIN;\nDECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;\nFETCH ABSOLUTE 81900 in crs;\nFETCH 49 in crs;\nCLOSE crs;\nCOMMIT;\n\nthis may be a bit faster but will not solve the fundamental problem.\n\nthe more interesting question is why you want to query exactly 81900\nrows into a set. This type of thinking will always get you into\ntrouble, absolute positioning will not really work in a true sql\nsense. if you are browsing a table sequentially, there are much\nbetter methods.\n\nmerlin\n",
"msg_date": "Thu, 20 Apr 2006 12:05:02 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "I don't want to query exactly 81900 rows into set. I just want to fetch 50\nor 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows\nstarting from last to end).\n\nif we fetched sequentially, there is also problem in fetching all the\nrecords (select * from wan where kname='pluto' order by stime) it is taking\nmore than 4~5 minutes. tried it on same table having more than 326054\nrecords.\n\n\nOn 4/20/06, Merlin Moncure <[email protected]> wrote:\n>\n> > SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n>\n> you need to try and solve the problem without using 'offset'. you could\n> do:\n> BEGIN;\n> DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;\n> FETCH ABSOLUTE 81900 in crs;\n> FETCH 49 in crs;\n> CLOSE crs;\n> COMMIT;\n>\n> this may be a bit faster but will not solve the fundamental problem.\n>\n> the more interesting question is why you want to query exactly 81900\n> rows into a set. This type of thinking will always get you into\n> trouble, absolute positioning will not really work in a true sql\n> sense. if you are browsing a table sequentially, there are much\n> better methods.\n>\n> merlin\n>\n\nI don't want to query exactly 81900 rows into set. I just want to fetch 50 or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows starting from last to end).\n \nif we fetched sequentially, there is also problem in fetching all the records (select * from wan where kname='pluto' order by stime) it is taking more than 4~5 minutes. tried it on same table having more than 326054 records.\n\n \nOn 4/20/06, Merlin Moncure <[email protected]> wrote:\n> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;you need to try and solve the problem without using 'offset'. you could do:\nBEGIN;DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;FETCH ABSOLUTE 81900 in crs;FETCH 49 in crs;CLOSE crs;COMMIT;this may be a bit faster but will not solve the fundamental problem.\nthe more interesting question is why you want to query exactly 81900rows into a set. This type of thinking will always get you intotrouble, absolute positioning will not really work in a true sqlsense. if you are browsing a table sequentially, there are much\nbetter methods.merlin",
"msg_date": "Fri, 21 Apr 2006 10:12:24 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "I've never used a cursor in Postgres, but I don't think it will help you\na lot. In theory cursors make it easier to do paging, but your main\nproblem is that getting the first page is slow. A cursor isn't going to\nbe any faster at getting the first page than OFFSET/LIMIT is.\n \nDid you try Bruno's suggestion of:\n \nSELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50;\n \nYou should run an EXPLAIN ANALYZE on that query to see if it is using an\nindex scan. Also what version of Postgres are you using? You can run\nselect version(); to check.\n \n \n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of soni de\nSent: Thursday, April 20, 2006 11:42 PM\nTo: Merlin Moncure\nCc: [email protected]\nSubject: Re: [PERFORM] Takes too long to fetch the data from database\n \nI don't want to query exactly 81900 rows into set. I just want to fetch\n50 or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100\nrows starting from last to end).\n \nif we fetched sequentially, there is also problem in fetching all the\nrecords (select * from wan where kname='pluto' order by stime) it is\ntaking more than 4~5 minutes. tried it on same table having more than\n326054 records. \n\n \nOn 4/20/06, Merlin Moncure <[email protected]> wrote: \n> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n\nyou need to try and solve the problem without using 'offset'. you could\ndo: \nBEGIN;\nDECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;\nFETCH ABSOLUTE 81900 in crs;\nFETCH 49 in crs;\nCLOSE crs;\nCOMMIT;\n\nthis may be a bit faster but will not solve the fundamental problem. \n\nthe more interesting question is why you want to query exactly 81900\nrows into a set. This type of thinking will always get you into\ntrouble, absolute positioning will not really work in a true sql\nsense. if you are browsing a table sequentially, there are much \nbetter methods.\n\nmerlin\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI’ve never used a cursor in Postgres, but I don’t think it will help you a\nlot. In theory cursors make it\neasier to do paging, but your main problem is that getting the first page is\nslow. A cursor isn’t going to\nbe any faster at getting the first page than OFFSET/LIMIT is.\n \nDid you try Bruno’s suggestion of:\n \nSELECT * FROM wan ORDER BY stime\nDESC OFFSET 0 LIMIT 50;\n \nYou should run an EXPLAIN ANALYZE on that\nquery to see if it is using an index scan. \nAlso what version of Postgres are you\nusing? You can run select version(); to check.\n \n \n \n\n-----Original Message-----\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of soni de\nSent: Thursday, April 20, 2006 11:42 PM\nTo: Merlin Moncure\nCc:\[email protected]\nSubject: Re: [PERFORM] Takes too\nlong to fetch the data from database\n \n\nI don't want to query exactly 81900 rows into set. I just want to fetch\n50 or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows\nstarting from last to end).\n\n\n \n\n\nif we fetched sequentially, there is also problem in fetching all the\nrecords (select * from wan where kname='pluto' order by stime) it is\ntaking more than 4~5 minutes. tried it on same table having more than 326054\nrecords. \n\n\n\n \n\n\nOn 4/20/06, Merlin Moncure\n<[email protected]> wrote:\n\n> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n\nyou need to try and solve the problem without using 'offset'. 
you\ncould do: \nBEGIN;\nDECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;\nFETCH ABSOLUTE 81900 in crs;\nFETCH 49 in crs;\nCLOSE crs;\nCOMMIT;\n\nthis may be a bit faster but will not solve the fundamental problem. \n\nthe more interesting question is why you want to query exactly 81900\nrows into a set. This type of thinking will always get you into\ntrouble, absolute positioning will not really work in a true sql\nsense. if you are browsing a table sequentially, there are much \nbetter methods.\n\nmerlin",
"msg_date": "Fri, 21 Apr 2006 08:26:33 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "On 4/21/06, soni de <[email protected]> wrote:\n>\n> I don't want to query exactly 81900 rows into set. I just want to fetch 50\n> or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows\n> starting from last to end).\n\naha! you need to implement a 'sliding window' query. simplest is\nwhen you are ordering one field that is unique:\n\n1st 50:\nselect * from t order by k limit 50;\n2nd 50:\nselect * from t where k > k1 order by k limit 50:\n\nif you are ordering on two fields or on a field that is not unique, you must do:\n\n1st 50:\nselect * from t order by j, k limit 50;\n2nd 50:\nselect * from t where j >= j1 and (j > j1 or k > k1) order by j, k limit 50;\n3 fields:\nselect * from t where i >= i1 and (i > i1 or j >= j1) and (i > i1 or j\n> k1 or k > k1) order by i,j,k limit 50;\n\ni1,j1,k1 are the values of the 50th record you pulled out of the last query.\n\nif this seems a little complicated, either wait for pg 8.2 or get cvs\ntip and rewrite as:\nselect * from t where (i,j,k) > (i1,j1,k1) order by i,j,k limit 50;\n",
"msg_date": "Fri, 21 Apr 2006 09:44:25 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
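Applied to the wan table from earlier in the thread (stime is the primary key, so it is unique), the sliding-window idea might look like the following; the literal 1145600000 is only a stand-in for the smallest stime seen on the previous page:

    -- first page, newest rows first
    SELECT * FROM wan ORDER BY stime DESC LIMIT 50;

    -- next page: continue from where the previous page left off
    SELECT * FROM wan
     WHERE stime < 1145600000
     ORDER BY stime DESC
     LIMIT 50;

    -- both queries can walk the primary-key index backwards instead of
    -- sorting and skipping 80000+ rows the way OFFSET does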
{
"msg_contents": "On Fri, Apr 21, 2006 at 10:12:24 +0530,\n soni de <[email protected]> wrote:\n> I don't want to query exactly 81900 rows into set. I just want to fetch 50\n> or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows\n> starting from last to end).\n\nYou can do this efficiently, if stime has an index and you can deal with using\nstime from the previous query instead of the record count. The idea is to\nselect up 50 or 100 records in descending order where the stime is <=\nthe previous stime. This can give you some overlapping records, so you need\nsome way to deal with this.\n",
"msg_date": "Fri, 21 Apr 2006 12:41:49 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "On Fri, Apr 21, 2006 at 09:44:25AM -0400, Merlin Moncure wrote:\n> 2nd 50:\n> select * from t where j >= j1 and (j > j1 or k > k1) order by j, k limit 50;\n> 3 fields:\n> select * from t where i >= i1 and (i > i1 or j >= j1) and (i > i1 or j\n> > k1 or k > k1) order by i,j,k limit 50;\n\nNote that in 8.2 you'll be able to do:\n\nWHERE (i, j, k) >= (i1, j1, k1)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 21 Apr 2006 13:19:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "Hello,\n\nI have tried the query SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT\n50; and it is working great.\nEXPLAIN ANALYSE of the above query is:\npdb=# EXPLAIN ANALYZE select * from wan order by stime desc limit 50 ;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..12.10 rows=50 width=95) (actual time=24.29..50.24 rows=50\nloops=1)\n -> Index Scan Backward using wan_pkey on wan \n(cost=0.00..19983.31rows=82586 width=95) (actual time=\n24.28..50.14 rows=51 loops=1)\nTotal runtime: 50.55 msec\n\nEXPLAIN\n\nNow I am facing another problem, If I use where clause is select query it is\ntaking too much time. Can you please help me on this.\n\nExplain analyze are follows:\npdb=# EXPLAIN ANALYZE select count(1) from wan where kname = 'pluto';\nNOTICE: QUERY PLAN:\n\nAggregate (cost=3507.84..3507.84 rows=1 width=0) (actual time=\n214647.53..214647.54 rows=1 loops=1)\n -> Seq Scan on wan (cost=0.00..3507.32 rows=208 width=0) (actual time=\n13.65..214599.43 rows=18306 loops=1)\nTotal runtime: 214647.87 msec\n\nEXPLAIN\npdb=# EXPLAIN ANALYZE select * from wan where kname = 'pluto' order by stime\nlimit 50;\nNOTICE: QUERY PLAN:\n\nLimit (cost=3515.32..3515.32 rows=50 width=95) (actual time=\n230492.69..230493.07 rows=50 loops=1)\n -> Sort (cost=3515.32..3515.32 rows=208 width=95) (actual time=\n230492.68..230493.00 rows=51 loops=1)\n -> Seq Scan on wan (cost=0.00..3507.32 rows=208 width=95) (actual\ntime=0.44..229217.38 rows=18306 loops=1)\nTotal runtime: 230631.62 msec\n\nEXPLAIN\npdb=# EXPLAIN ANALYZE SELECT * FROM wan WHERE stime >= 20123 AND stime <=\n24000 ORDER BY stime limit 50;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..2519.70 rows=50 width=95) (actual\ntime=7346.74..7351.42rows=50 loops=1)\n -> Index Scan using wan_pkey on wan (cost=0.00..20809.17 rows=413\nwidth=95) (actual time=7346.73..7351.32 rows=51 loops=1)\nTotal runtime: 7351.71 msec\n\nEXPLAIN\n\nfor above queries if I use desc order then the queries takes too much time.\nI am not getting for the above queries how do I increase the speed.\n\nPostgresql version is 7.2.3\ntotal no. of records: 5700300\n\nOn 4/21/06, Dave Dutcher <[email protected]> wrote:\n>\n> I've never used a cursor in Postgres, but I don't think it will help you\n> a lot. In theory cursors make it easier to do paging, but your main\n> problem is that getting the first page is slow. A cursor isn't going to\n> be any faster at getting the first page than OFFSET/LIMIT is.\n>\n>\n>\n> Did you try Bruno's suggestion of:\n>\n>\n>\n> SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50;\n>\n>\n>\n> You should run an EXPLAIN ANALYZE on that query to see if it is using an\n> index scan. Also what version of Postgres are you using? You can run\n> select version(); to check.\n>\n>\n>\n>\n>\n>\n>\n> -----Original Message-----\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *soni de\n> *Sent:* Thursday, April 20, 2006 11:42 PM\n> *To:* Merlin Moncure\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Takes too long to fetch the data from database\n>\n>\n>\n> I don't want to query exactly 81900 rows into set. I just want to fetch 50\n> or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows\n> starting from last to end).\n>\n>\n>\n> if we fetched sequentially, there is also problem in fetching all the\n> records (select * from wan where kname='pluto' order by stime) it is taking\n> more than 4~5 minutes. 
tried it on same table having more than 326054\n> records.\n>\n>\n>\n>\n> On 4/20/06, *Merlin Moncure* <[email protected]> wrote:\n>\n> > SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;\n>\n> you need to try and solve the problem without using 'offset'. you could\n> do:\n> BEGIN;\n> DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;\n> FETCH ABSOLUTE 81900 in crs;\n> FETCH 49 in crs;\n> CLOSE crs;\n> COMMIT;\n>\n> this may be a bit faster but will not solve the fundamental problem.\n>\n> the more interesting question is why you want to query exactly 81900\n> rows into a set. This type of thinking will always get you into\n> trouble, absolute positioning will not really work in a true sql\n> sense. if you are browsing a table sequentially, there are much\n> better methods.\n>\n> merlin\n>\n>\n>\n\nHello,\n \nI have tried the query SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50; and it is working great.\nEXPLAIN ANALYSE of the above query is:\npdb=# EXPLAIN ANALYZE select * from wan order by stime desc limit 50 ;NOTICE: QUERY PLAN:\n\nLimit (cost=0.00..12.10 rows=50 width=95) (actual time=24.29..50.24 rows=50 loops=1) -> Index Scan Backward using wan_pkey on wan (cost=0.00..19983.31 rows=82586 width=95) (actual time=24.28..50.14 rows=51 loops=1)\nTotal runtime: 50.55 msec\nEXPLAIN\n \nNow I am facing another problem, If I use where clause is select query it is taking too much time. Can you please help me on this.\n \nExplain analyze are follows:\npdb=# EXPLAIN ANALYZE select count(1) from wan where kname = 'pluto';NOTICE: QUERY PLAN:\n\nAggregate (cost=3507.84..3507.84 rows=1 width=0) (actual time=214647.53..214647.54 rows=1 loops=1) -> Seq Scan on wan (cost=0.00..3507.32 rows=208 width=0) (actual time=13.65..214599.43 rows=18306 loops=1)\nTotal runtime: 214647.87 msec\nEXPLAINpdb=# EXPLAIN ANALYZE select * from wan where kname = 'pluto' order by stime limit 50;NOTICE: QUERY PLAN:\nLimit (cost=3515.32..3515.32 rows=50 width=95) (actual time=230492.69..230493.07 rows=50 loops=1) -> Sort (cost=3515.32..3515.32 rows=208 width=95) (actual time=230492.68..230493.00 rows=51 loops=1) -> Seq Scan on wan (cost=\n0.00..3507.32 rows=208 width=95) (actual time=0.44..229217.38 rows=18306 loops=1)Total runtime: 230631.62 msec\nEXPLAINpdb=# EXPLAIN ANALYZE SELECT * FROM wan WHERE stime >= 20123 AND stime <= 24000 ORDER BY stime limit 50;NOTICE: QUERY PLAN:\nLimit (cost=0.00..2519.70 rows=50 width=95) (actual time=7346.74..7351.42 rows=50 loops=1) -> Index Scan using wan_pkey on wan (cost=0.00..20809.17 rows=413 width=95) (actual time=7346.73..7351.32 rows=51 loops=1)\nTotal runtime: 7351.71 msec\nEXPLAIN\nfor above queries if I use desc order then the queries takes too much time.\nI am not getting for the above queries how do I increase the speed.\n \nPostgresql version is 7.2.3\ntotal no. of records: 5700300 \nOn 4/21/06, Dave Dutcher <[email protected]> wrote:\n\n\n\nI've never used a cursor in Postgres, but I don't think it will help you a lot. In theory cursors make it easier to do paging, but your main problem is that getting the first page is slow.\n A cursor isn't going to be any faster at getting the first page than OFFSET/LIMIT is.\n \nDid you try Bruno's suggestion of:\n \nSELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50;\n \nYou should run an EXPLAIN ANALYZE on that query to see if it is using an index scan. Also what version of \nPostgres are you using? 
You can run select version(); to check.\n \n \n \n\n\n-----Original Message-----From: \[email protected] [mailto:[email protected]] \nOn Behalf Of soni deSent: Thursday, April 20, 2006\n 11:42 PM\nTo: Merlin MoncureCc: \[email protected]: Re: [PERFORM] Takes too long to fetch the data from database\n \n\n\nI don't want to query exactly 81900 rows into set. I just want to fetch 50 or 100 rows at a time in a decreasing order of stime.(i.e 50 or 100 rows starting from last to end).\n\n\n \n\nif we fetched sequentially, there is also problem in fetching all the records (select * from wan where kname='pluto' order by stime) it is taking more than 4~5 minutes. tried it on same table having more than 326054 records. \n\n\n \n\nOn 4/20/06, Merlin Moncure <\[email protected]> wrote: \n> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;you need to try and solve the problem without using 'offset'. you could do: \nBEGIN;DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;FETCH ABSOLUTE 81900 in crs;FETCH 49 in crs;CLOSE crs;COMMIT;this may be a bit faster but will not solve the fundamental problem. \nthe more interesting question is why you want to query exactly 81900rows into a set. This type of thinking will always get you intotrouble, absolute positioning will not really work in a true sqlsense. if you are browsing a table sequentially, there are much \nbetter methods.merlin",
"msg_date": "Tue, 9 May 2006 09:24:15 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Takes too long to fetch the data from database"
},
{
"msg_contents": "On Tue, May 09, 2006 at 09:24:15 +0530,\n soni de <[email protected]> wrote:\n> \n> EXPLAIN\n> pdb=# EXPLAIN ANALYZE select * from wan where kname = 'pluto' order by stime\n> limit 50;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=3515.32..3515.32 rows=50 width=95) (actual time=\n> 230492.69..230493.07 rows=50 loops=1)\n> -> Sort (cost=3515.32..3515.32 rows=208 width=95) (actual time=\n> 230492.68..230493.00 rows=51 loops=1)\n> -> Seq Scan on wan (cost=0.00..3507.32 rows=208 width=95) (actual\n> time=0.44..229217.38 rows=18306 loops=1)\n> Total runtime: 230631.62 msec\n\nUnless you have an index on (kname, stime) the query is going to need to\nfind the records with a value for kname of 'pluto' and then get the most\nrecent 50 of them. It looks like there are enough estimated records\nwith kname = 'pluto', that a sequential scan is being prefered.\nCreating an extra index will slow down inserts somewhat, but will speed\nup queries like the above significantly, so may be worthwhile for you.\nI think later versions of Postgres are smarter, but for sure in 7.2\nyou will need to write the query like:\nSELECT *\n FROM wan\n WHERE kname = 'pluto'\n ORDER BY kname DESC, stime DESC\n LIMIT 50\n;\n",
"msg_date": "Tue, 9 May 2006 02:38:55 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Takes too long to fetch the data from database"
}
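Bruno's rewrite boils down to giving the planner a composite index it can walk backwards and naming both sort keys explicitly. A minimal sketch of what that would look like, using the column names from the thread (the index name itself is made up here):

    CREATE INDEX wan_kname_stime_idx ON wan (kname, stime);
    ANALYZE wan;

    SELECT *
    FROM wan
    WHERE kname = 'pluto'
    ORDER BY kname DESC, stime DESC
    LIMIT 50;

With the two-column index in place, the DESC on both keys lets a backward index scan satisfy the LIMIT directly instead of sorting all 18306 matching rows first.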
] |
[
{
"msg_contents": "Hi \n\nI'm currently upgrading a Posgresql 7.3.2 database to a\n8.1.<something-good>\n\nI'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n30 hours and gave me an 90GB zipped file. Running \ncat sqldump.gz | gunzip | psql \ninto the 8.1 database seems to take about the same time. Are there \nany tricks I can use to speed this dump+restore process up? \n\nThe database contains quite alot of BLOB, thus the size. \n\nJesper\n-- \n./Jesper Krogh, [email protected], Jabber ID: [email protected]\n\n\n",
"msg_date": "Mon, 10 Apr 2006 07:55:33 +0000 (UTC)",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Restore performance?"
},
{
"msg_contents": "Jesper,\n \nIf they both took the same amount of time, then you are almost certainly bottlenecked on gzip.\n \nTry a faster CPU or use \"gzip -fast\".\n \n- Luke\n\n________________________________\n\nFrom: [email protected] on behalf of Jesper Krogh\nSent: Mon 4/10/2006 12:55 AM\nTo: [email protected]\nSubject: [PERFORM] Restore performance?\n\n\n\nHi\n\nI'm currently upgrading a Posgresql 7.3.2 database to a\n8.1.<something-good>\n\nI'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n30 hours and gave me an 90GB zipped file. Running\ncat sqldump.gz | gunzip | psql\ninto the 8.1 database seems to take about the same time. Are there\nany tricks I can use to speed this dump+restore process up?\n\nThe database contains quite alot of BLOB, thus the size.\n\nJesper\n--\n./Jesper Krogh, [email protected], Jabber ID: [email protected]\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n\n\n",
"msg_date": "Mon, 10 Apr 2006 04:04:40 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "Jesper Krogh wrote:\n> Hi \n> \n> I'm currently upgrading a Posgresql 7.3.2 database to a\n> 8.1.<something-good>\n> \n> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n> 30 hours and gave me an 90GB zipped file. Running \n> cat sqldump.gz | gunzip | psql \n> into the 8.1 database seems to take about the same time. Are there \n> any tricks I can use to speed this dump+restore process up? \n\nIf you can have both database systems up at the same time, you could \npg_dump | psql.\n\nRegards,\nAndreas\n",
"msg_date": "Mon, 10 Apr 2006 10:43:31 +0200",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "> If they both took the same amount of time, then you are almost certainly\n> bottlenecked on gzip.\n>\n> Try a faster CPU or use \"gzip -fast\".\n\ngzip does not seem to be the bottleneck, on restore is psql the nr. 1\nconsumer on cpu-time.\n\nJesper\nSorry for the double post.\n-- \nJesper Krogh\n\n",
"msg_date": "Mon, 10 Apr 2006 10:44:59 +0200 (CEST)",
"msg_from": "\"Jesper Krogh\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n> 30 hours and gave me an 90GB zipped file. Running \n> cat sqldump.gz | gunzip | psql \n> into the 8.1 database seems to take about the same time. Are there \n> any tricks I can use to speed this dump+restore process up? \n> \n> The database contains quite alot of BLOB, thus the size. \n\nYou could try slony - it can do almost-zero-downtime upgrades.\n\nGreetings\nMarcin Mank\n",
"msg_date": "Mon, 10 Apr 2006 11:02:40 +0200",
"msg_from": "=?iso-8859-2?Q?Marcin_Ma=F1k?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "\"Jesper Krogh\" <[email protected]> writes:\n> gzip does not seem to be the bottleneck, on restore is psql the nr. 1\n> consumer on cpu-time.\n\nHm. We've seen some situations where readline mistakenly decides that\nthe input is interactive and wastes lots of cycles doing useless\nprocessing (like keeping history). Try \"psql -n\" and see if that helps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Apr 2006 11:24:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance? "
},
{
"msg_contents": "On 4/10/06, Jesper Krogh <[email protected]> wrote:\n>\n> Hi\n>\n> I'm currently upgrading a Posgresql 7.3.2 database to a\n> 8.1.<something-good>\n>\n> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n> 30 hours and gave me an 90GB zipped file. Running\n> cat sqldump.gz | gunzip | psql\n> into the 8.1 database seems to take about the same time. Are there\n> any tricks I can use to speed this dump+restore process up?\n\n\nwas the last restore successfull ?\nif so why do you want to repeat ?\n\nsome tips\n\n1. run new version of postgres in a different port and pipe pg_dump to psql\nthis may save the CPU time of compression , there is no need for a temporary\ndump file.\n\npg_dump | /path/to/psql813 -p 54XX newdb\n\n2. use new version of pg_dump to dump the old database as new version\n is supposed to be wiser.\n\n3. make sure you are trapping the restore errors properly\npsql newdb 2>&1 | cat | tee err works for me.\n\n\n\n\nThe database contains quite alot of BLOB, thus the size.\n>\n> Jesper\n> --\n> ./Jesper Krogh, [email protected], Jabber ID: [email protected]\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nOn 4/10/06, Jesper Krogh <[email protected]> wrote:\nHiI'm currently upgrading a Posgresql 7.3.2 database to a8.1.<something-good>I'd run pg_dump | gzip > sqldump.gz on the old system. That took about30 hours and gave me an 90GB zipped file. Running\ncat sqldump.gz | gunzip | psqlinto the 8.1 database seems to take about the same time. Are thereany tricks I can use to speed this dump+restore process up?was the last restore successfull ? \nif so why do you want to repeat ?some tips1. run new version of postgres in a different port and pipe pg_dump to psqlthis may save the CPU time of compression , there is no need for a temporary\ndump file.pg_dump | /path/to/psql813 -p 54XX newdb2. use new version of pg_dump to dump the old database as new version is supposed to be wiser.3. make sure you are trapping the restore errors properly\npsql newdb 2>&1 | cat | tee err works for me. The database contains quite alot of BLOB, thus the size.\nJesper--./Jesper Krogh, [email protected], Jabber ID: [email protected](end of broadcast)---------------------------\nTIP 6: explain analyze is your friend",
"msg_date": "Mon, 10 Apr 2006 21:09:53 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "sorry for the post , i didn' saw the other replies only after posting.\n\nOn 4/10/06, Rajesh Kumar Mallah <[email protected]> wrote:\n>\n>\n>\n> On 4/10/06, Jesper Krogh <[email protected]> wrote:\n> >\n> > Hi\n> >\n> > I'm currently upgrading a Posgresql 7.3.2 database to a\n> > 8.1.<something-good>\n> >\n> > I'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n> > 30 hours and gave me an 90GB zipped file. Running\n> > cat sqldump.gz | gunzip | psql\n> > into the 8.1 database seems to take about the same time. Are there\n> > any tricks I can use to speed this dump+restore process up?\n>\n>\n> was the last restore successfull ?\n> if so why do you want to repeat ?\n>\n> some tips\n>\n> 1. run new version of postgres in a different port and pipe pg_dump to\n> psql\n> this may save the CPU time of compression , there is no need for a\n> temporary\n> dump file.\n>\n> pg_dump | /path/to/psql813 -p 54XX newdb\n>\n> 2. use new version of pg_dump to dump the old database as new version\n> is supposed to be wiser.\n>\n> 3. make sure you are trapping the restore errors properly\n> psql newdb 2>&1 | cat | tee err works for me.\n>\n>\n>\n>\n> The database contains quite alot of BLOB, thus the size.\n> >\n> > Jesper\n> > --\n> > ./Jesper Krogh, [email protected], Jabber ID: [email protected]\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> >\n> > TIP 6: explain analyze is your friend\n> >\n>\n>\n\nsorry for the post , i didn' saw the other replies only after posting.On 4/10/06, Rajesh Kumar Mallah <[email protected]\n> wrote:\nOn 4/10/06, Jesper Krogh <[email protected]\n> wrote:\nHiI'm currently upgrading a Posgresql 7.3.2 database to a8.1.<something-good>I'd run pg_dump | gzip > sqldump.gz on the old system. That took about30 hours and gave me an 90GB zipped file. Running\ncat sqldump.gz | gunzip | psqlinto the 8.1 database seems to take about the same time. Are thereany tricks I can use to speed this dump+restore process up?\nwas the last restore successfull ? \nif so why do you want to repeat ?some tips1. run new version of postgres in a different port and pipe pg_dump to psqlthis may save the CPU time of compression , there is no need for a temporary\n\ndump file.pg_dump | /path/to/psql813 -p 54XX newdb2. use new version of pg_dump to dump the old database as new version is supposed to be wiser.3. make sure you are trapping the restore errors properly\npsql newdb 2>&1 | cat | tee err works for me. \nThe database contains quite alot of BLOB, thus the size.\nJesper--./Jesper Krogh, [email protected], Jabber ID: \[email protected](end of broadcast)---------------------------\nTIP 6: explain analyze is your friend",
"msg_date": "Mon, 10 Apr 2006 21:12:02 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "Rajesh Kumar Mallah wrote:\n>> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n>> 30 hours and gave me an 90GB zipped file. Running\n>> cat sqldump.gz | gunzip | psql\n>> into the 8.1 database seems to take about the same time. Are there\n>> any tricks I can use to speed this dump+restore process up?\n> \n> \n> was the last restore successfull ?\n> if so why do you want to repeat ?\n\n\"about the same time\" == Estimated guess from restoring a few tables\nI was running a testrun, without disabling updates to the production\ndatabase, the real run is scheduled for easter where there hopefully is\nno users on the system. So I need to repeat, I'm just trying to get a\nfeelingabout how long time I need to allocate for the operation.\n\n> 1. run new version of postgres in a different port and pipe pg_dump to psql\n> this may save the CPU time of compression , there is no need for a temporary\n> dump file.\n> \n> pg_dump | /path/to/psql813 -p 54XX newdb\n\nI'll do that. It is a completely different machine anyway.\n\n> 2. use new version of pg_dump to dump the old database as new version\n> is supposed to be wiser.\n\nCheck.\n\n> 3. make sure you are trapping the restore errors properly\n> psql newdb 2>&1 | cat | tee err works for me.\n\nThats noted.\n\n-- \nJesper Krogh, [email protected]\n\n",
"msg_date": "Mon, 10 Apr 2006 17:54:47 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "4. fsync can also be turned off while loading huge dataset ,\n but seek others comments too (as study docs) as i am not sure about the\n reliability. i think it can make a lot of difference.\n\n\n\nOn 4/10/06, Jesper Krogh <[email protected]> wrote:\n>\n> Rajesh Kumar Mallah wrote:\n> >> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n> >> 30 hours and gave me an 90GB zipped file. Running\n> >> cat sqldump.gz | gunzip | psql\n> >> into the 8.1 database seems to take about the same time. Are there\n> >> any tricks I can use to speed this dump+restore process up?\n> >\n> >\n> > was the last restore successfull ?\n> > if so why do you want to repeat ?\n>\n> \"about the same time\" == Estimated guess from restoring a few tables\n> I was running a testrun, without disabling updates to the production\n> database, the real run is scheduled for easter where there hopefully is\n> no users on the system. So I need to repeat, I'm just trying to get a\n> feelingabout how long time I need to allocate for the operation.\n>\n> > 1. run new version of postgres in a different port and pipe pg_dump to\n> psql\n> > this may save the CPU time of compression , there is no need for a\n> temporary\n> > dump file.\n> >\n> > pg_dump | /path/to/psql813 -p 54XX newdb\n>\n> I'll do that. It is a completely different machine anyway.\n>\n> > 2. use new version of pg_dump to dump the old database as new version\n> > is supposed to be wiser.\n>\n> Check.\n>\n> > 3. make sure you are trapping the restore errors properly\n> > psql newdb 2>&1 | cat | tee err works for me.\n>\n> Thats noted.\n>\n> --\n> Jesper Krogh, [email protected]\n>\n>\n\n4. fsync can also be turned off while loading huge dataset , but seek others comments too (as study docs) as i am not sure about the reliability. i think it can make a lot of difference.\nOn 4/10/06, Jesper Krogh <[email protected]> wrote:\nRajesh Kumar Mallah wrote:>> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about>> 30 hours and gave me an 90GB zipped file. Running>> cat sqldump.gz | gunzip | psql\n>> into the 8.1 database seems to take about the same time. Are there>> any tricks I can use to speed this dump+restore process up?>>> was the last restore successfull ?> if so why do you want to repeat ?\n\"about the same time\" == Estimated guess from restoring a few tablesI was running a testrun, without disabling updates to the productiondatabase, the real run is scheduled for easter where there hopefully is\nno users on the system. So I need to repeat, I'm just trying to get afeelingabout how long time I need to allocate for the operation.> 1. run new version of postgres in a different port and pipe pg_dump to psql\n> this may save the CPU time of compression , there is no need for a temporary> dump file.>> pg_dump | /path/to/psql813 -p 54XX newdbI'll do that. It is a completely different machine anyway.\n> 2. use new version of pg_dump to dump the old database as new version> is supposed to be wiser.Check.> 3. make sure you are trapping the restore errors properly> psql newdb 2>&1 | cat | tee err works for me.\nThats noted.--Jesper Krogh, [email protected]",
"msg_date": "Mon, 10 Apr 2006 21:38:48 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "Rajesh Kumar Mallah wrote:\n> 4. fsync can also be turned off while loading huge dataset ,\n> but seek others comments too (as study docs) as i am not sure about the\n> reliability. i think it can make a lot of difference.\n\nAlso be sure to increase maintenance_work_mem so that index creation\ngoes faster.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 10 Apr 2006 12:23:45 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
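For the restore session itself, maintenance_work_mem can be raised on the connection that replays the dump. A hedged sketch with purely illustrative values (8.1 takes these settings in kilobytes rather than with a unit suffix, and fsync itself is a server-wide postgresql.conf setting, not something that can be SET per session):

    SET maintenance_work_mem = 524288;  -- roughly 512 MB for CREATE INDEX during the restore
    SET work_mem = 65536;               -- roughly 64 MB for the joins behind foreign key validation

Both settings revert when the session ends, so they do not affect normal operation afterwards.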
{
"msg_contents": "\n> I'd run pg_dump | gzip > sqldump.gz on the old system.\n\n\tIf the source and destination databases are on different machines, you \ncan pipe pg_dump on the source machine to pg_restore on the destination \nmachine by using netcat.\n\tIf you only have 100 Mbps ethernet, compressing the data will be faster. \nIf you have Gb Ethernet, maybe you don't need to compress, but it doesn't \nhurt to test.\n\n\tuse pg_restore instead of psql, and use a recent version of pg_dump which \ncan generate dumps in the latest format.\n\n\tIf you need fast compression, use gzip -1 or even lzop, which is \nincredibly fast.\n\n\tTurn off fsync during the restore and set maintenance_work_mem to use \nmost of your available RAM for index creation.\n\n\tI think that creating foreign key constraints uses large joins ; it might \nbe good to up work_mem also.\n\n\tCheck the speed of your disks with dd beforehand. You might get a \nsurprise.\n\n\tMaybe you can also play with the bgwriter and checkpoint parameters.\n\n",
"msg_date": "Mon, 10 Apr 2006 19:20:33 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "\nOn Apr 10, 2006, at 3:55 AM, Jesper Krogh wrote:\n\n> I'd run pg_dump | gzip > sqldump.gz on the old system. That took \n> about\n> 30 hours and gave me an 90GB zipped file. Running\n> cat sqldump.gz | gunzip | psql\n> into the 8.1 database seems to take about the same time. Are there\n> any tricks I can use to speed this dump+restore process up?\n>\n> The database contains quite alot of BLOB, thus the size.\n\nWell, your pg_dump command lost your BLOBs since the plain text \nformat doesn't support them.\n\nBut once you use the -Fc format on your dump and enable blob backups, \nyou can speed up reloads by increasing your checkpoint segments to a \nbig number like 256 and the checkpoint timeout to something like 10 \nminutes. All other normal tuning parameters should be what you plan \nto use for your normal operations, too.\n\n",
"msg_date": "Mon, 10 Apr 2006 14:48:13 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": "> Well, your pg_dump command lost your BLOBs since the plain text\n> format doesn't support them.\n\nWell, no.. they are stored as BYTEA not Large Objects.. They are encoded\nin ASCII in the pg_dump output.\n\n> But once you use the -Fc format on your dump and enable blob backups,\n> you can speed up reloads by increasing your checkpoint segments to a big\n> number like 256 and the checkpoint timeout to something like 10 minutes.\n> All other normal tuning parameters should be what you plan\n> to use for your normal operations, too.\n\nThanks.\n\nJesper\n-- \nJesper Krogh\n\n",
"msg_date": "Tue, 11 Apr 2006 14:04:57 +0200 (CEST)",
"msg_from": "\"Jesper Krogh\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
},
{
"msg_contents": ">> Well, your pg_dump command lost your BLOBs since the plain text\n>> format doesn't support them.\n> \n> Well, no.. they are stored as BYTEA not Large Objects.. They are encoded\n> in ASCII in the pg_dump output.\n\nAs a side note: plain text dump format in 8.1 supprts LOBs\n\n",
"msg_date": "Wed, 12 Apr 2006 10:06:49 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restore performance?"
}
] |
[
{
"msg_contents": "\nHi\n\nI'm currently upgrading a Posgresql 7.3.2 database to a\n8.1.<something-good>\n\nI'd run pg_dump | gzip > sqldump.gz on the old system. That took about\n30 hours and gave me an 90GB zipped file. Running\ncat sqldump.gz | gunzip | psql\ninto the 8.1 database seems to take about the same time. Are there\nany tricks I can use to speed this dump+restore process up?\n\nNeither disk-io (viewed using vmstat 1) or cpu (viewed using top) seems to\nbe the bottleneck.\n\nThe database contains quite alot of BLOB's, thus the size.\n\nJesper\n-- \nJesper Krogh\n\n",
"msg_date": "Mon, 10 Apr 2006 10:22:31 +0200 (CEST)",
"msg_from": "\"Jesper Krogh\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dump restore performance 7.3 -> 8.1 "
}
] |
[
{
"msg_contents": "Hi,\n\nI Attached here a file with details about the tables, the queries and\nthe \nExplain analyze plans.\nHope this can be helpful to analyze my problem\n\n10x\nDoron\n\n-----Original Message-----\nFrom: Ragnar [mailto:[email protected]] \nSent: Sunday, April 09, 2006 2:37 PM\nTo: Doron Baranes\nSubject: RE: [PERFORM]\n\nOn sun, 2006-04-09 at 14:11 +0200, Doron Baranes wrote:\n\nPlease reply to the list, not to me directly. this way\nothers can help you too.\n\n> I did vacuum database analyze a few days ago.\n\nyes, I saw that in your original post. I mentioned\nVACUUM FULL ANALYZE , not just VACUUM ANALYZE\n\n> I'll attached a few explain plans.\n\n[explain plans deleted]\n\nThese are useless. you must show us the output of \nEXPLAIN ANALYZE. these are output of EXPLAIN.\nA plan is not much use without seeing the query itself.\n\nyou still have not answered the question about\nwhat indexes you have.\n\ngnari",
"msg_date": "Mon, 10 Apr 2006 10:30:37 +0200",
"msg_from": "\"Doron Baranes\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "On m�n, 2006-04-10 at 10:30 +0200, Doron Baranes wrote:\n\n> I Attached here a file with details about the tables, the queries and\n> the \n> Explain analyze plans.\n> Hope this can be helpful to analyze my problem\n\nfirst query:\n\n> explain analyze SELECT date_trunc('hour'::text, \n> i.entry_time) AS datetime,\n> COUNT(fr.grp_fate_id) ,\n> SUM(i.size)\n> FROM log.msg_info as i,log.msg_fate as f, \n> log.msg_fate_recipients as fr\n> WHERE i.origin = 1\n> AND i.msgid=f.msgid\n> AND i.entry_time > '2006-01-25'\n> AND f.grp_fate_id=fr.grp_fate_id\n> GROUP BY datetime\n> order by datetime;\n\nif i.origin has high selectivity (if very\nfew rows in msg_info have origin=1 in this\ncase), an index on msg_info(orgin) can help.\nunfortunately, as you are using 7.4 and this\nis a smallint column, you would have to change \nthe query slightly to make use of that:\n WHERE i.origin = 1::smallint\nif more than a few % or the rows have this value,\nthen this will not help \n\nthe index on msg_info(entry_time) is unlikely\nto be used, because a simple '>' comparison\nhas little selectivity. try to add an upper limit\nto the query to make it easier for the planner\nso see that few rows would be returned (if that is \nthe case)\nfor example:\n AND i.entry_time BETWEEN '2006-01-25'\n AND '2006-05-01'\nthis might also improve the estimated number\nof groups on datetime (notice: estimated rows=1485233,\nreal=623), although I am not sure if that will help you\n\nI do now know how good the planner is with dealing\nwith the date_trunc('hour'::text, i.entry_time),\nso possibly you could get some improvement with\nan indexed entry_hour column populated with trigger\nor by your application, and change your query to:\n\nexplain analyze SELECT i.entry_hour,\nCOUNT(fr.grp_fate_id) ,\nSUM(i.size)\nFROM log.msg_info as i,log.msg_fate as f, log.msg_fate_recipients as fr\nWHERE i.origin = 1\nAND i.msgid=f.msgid\nAND i.entry_hour BETWEEN '2006-01-25:00:00'\n AND '2006-05-01:00:00'\nAND f.grp_fate_id=fr.grp_fate_id\nGROUP BY entry_hour\norder by entry_hour;\n\n(adjust the upper limit to your reality)\n\ndo these suggestions help at all?\n\ngnari\n\n\n",
"msg_date": "Mon, 10 Apr 2006 11:59:29 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
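Taken together, gnari's suggestions amount to something like the sketch below. The index name and the upper bound of the date range are illustrative, and whether the index on origin pays off depends on how selective origin = 1 really is:

    CREATE INDEX msg_info_origin_idx ON log.msg_info (origin);

    SELECT date_trunc('hour'::text, i.entry_time) AS datetime,
           COUNT(fr.grp_fate_id),
           SUM(i.size)
    FROM log.msg_info AS i
    JOIN log.msg_fate AS f ON i.msgid = f.msgid
    JOIN log.msg_fate_recipients AS fr ON f.grp_fate_id = fr.grp_fate_id
    WHERE i.origin = 1::smallint
      AND i.entry_time BETWEEN '2006-01-25' AND '2006-05-01'
    GROUP BY datetime
    ORDER BY datetime;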
] |
[
{
"msg_contents": "Hi,\n\nI want to optimize something like this.\n\n- My items table:\ncode int -- can take one of 100 values\nproperty varchar(250) -- can take one of 5000 values\nparam01 char(10) -- can take one of 10 values\nparam02 char(10) -- can take one of 10 values\n...\n[ 20 similar columns }\n...\nparama20 char(10) -- can take one of 10 values\n\n- The kind of query I want to optimize:\nselect * from items\nwhere code betwwen 5 and 22\nand param01 = 'P'\nand param02 = 'G'\n...\n[ all the 20 paramXX columns are used in the query}\n...\nand param20 = 'C';\n\n\nHow can I optimize this kind of query? \n\nI was thinking about using a multicolumns index, but I have read that we should limit multicolumns indice to at most 2 or 3 columns. \n\nIf that's true then 22 columns for a multicolumn incdex seems way too much. Or maybe it is workable as every column uses only a very limited set of values?\n\nI was also thinking about about using a functional index. \n\nWhat do you think would be the best solution in such a case?\n\nThanks.\n\nOscar\n\n\n\t\t\n---------------------------------\nBlab-away for as little as 1�/min. Make PC-to-Phone Calls using Yahoo! Messenger with Voice.\nHi,I want to optimize something like this.- My items table:code int -- can take one of 100 valuesproperty varchar(250) -- can take one of 5000 valuesparam01 char(10) -- can take one of 10 valuesparam02 char(10) -- can take one of 10 values...[ 20 similar columns }...parama20 char(10) -- can take one of 10 values- The kind of query I want to optimize:select * from itemswhere code betwwen 5 and 22and param01 = 'P'and param02 = 'G'...[ all the 20 paramXX columns are used in the query}...and param20 = 'C';How can I optimize this kind of query? I was thinking about using a multicolumns index, but I have read that we should limit multicolumns indice to at most 2 or 3 columns. If that's true then 22 columns\n for a multicolumn incdex seems way too much. Or maybe it is workable as every column uses only a very limited set of values?I was also thinking about about using a functional index. What do you think would be the best solution in such a case?Thanks.Oscar\nBlab-away for as little as 1�/min. Make PC-to-Phone Calls using Yahoo! Messenger with Voice.",
"msg_date": "Mon, 10 Apr 2006 09:58:57 -0700 (PDT)",
"msg_from": "Oscar Picasso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Better index stategy for many fields with few values"
},
{
"msg_contents": "\n> - My items table:\n> code int -- can take one of 100 values\n> property varchar(250) -- can take one of 5000 values\n> param01 char(10) -- can take one of 10 values\n> param02 char(10) -- can take one of 10 values\n> ...\n> [ 20 similar columns }\n> ...\n> parama20 char(10) -- can take one of 10 values\n\n\tInstead of 20 columns, you could instead use a \"param\" field containing \nan array of 20 TEXT fields.\n\tThen create a simple index on (code, param) and SELECT WHERE code BETWEEN \n... AND param = '{P,G,....,C}'\n\n\tIf you don't want to modify your structure, you can create a functional \nindex on an array {param1...param20}, but your queries will be a bit \nuglier.\n",
"msg_date": "Mon, 10 Apr 2006 19:26:13 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better index stategy for many fields with few values"
},
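A rough version of PFC's array idea, with made-up table and index names. It assumes the twenty param columns are collapsed into one text[] column kept in a fixed order by the application, and whether a plain btree index can be built directly on an array column depends on the server version's array operator support, so treat this strictly as a sketch:

    CREATE TABLE items (
        code     int,
        property varchar(250),
        params   text[]   -- param01 .. param20, always in the same order
    );
    CREATE INDEX items_code_params_idx ON items (code, params);

    SELECT *
    FROM items
    WHERE code BETWEEN 5 AND 22
      AND params = '{P,G,A,B,C,D,E,F,H,I,J,K,L,M,N,O,Q,R,S,C}';  -- placeholder values for all 20 params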
{
"msg_contents": "Hi, Oscar,\n\nOscar Picasso wrote:\n\n> [ all the 20 paramXX columns are used in the query}\n\n> How can I optimize this kind of query?\n\nPostgreSQL 8.1 has so-called bitmap index scans, which can combine\nseveral index scans before actually accessing the data.\n\nSo I think it's best to create an index on each of the paramXX columns,\nand see with EXPLAIN ANALYZE what it is doing.\n\n> I was thinking about using a multicolumns index, but I have read that\n> we should limit multicolumns indice to at most 2 or 3 columns.\n\nYes, that's true, the index overhead gets too high.\n\n> If that's true then 22 columns for a multicolumn incdex seems way too\n> much. Or maybe it is workable as every column uses only a very limited\n> set of values?\n\nYes, I think that a 22 column index is way too much, especially with the\nnew bitmap index scans available.\n\n> I was also thinking about about using a functional index.\n\nIf there's a logical relation between those values that they can easily\ncombined, that may be a good alternative.\n\n\nI just had another weird idea:\n\nAs your paramXX values can have only 10 parameters, it also might be\nfeasible to use a bunch of 10 conditional indices, like:\n\nCREATE INDEX foo1 ON table (param1, param2 WHERE param0='1st value';\nCREATE INDEX foo2 ON table (param1, param2 WHERE param0='2nd value';\nCREATE INDEX foo3 ON table (param1, param2 WHERE param0='3rd value';\n[...]\n\nThis way, you don't have the index bloat of a 3-column index, but 10\n2-column indices that cover 1/10th of the table each.\n\nFor 22 columns, you'd need a bunch of seven such indices plus a\nsingle-column one, or can use some 3+1 and some 2+1 column index.\n\nI'd like to see the query plans from explain analyze.\n\nBtw, I expect query planning time to get rather significant for so much\ncolumns, so gequo tuning, tuning work_mem (for the bitmap scans) and\nprepared statements will pay off.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 12 Apr 2006 14:59:32 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better index stategy for many fields with few values"
},
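A sketch of the per-column approach, with made-up index names. Under 8.1 the planner can AND several of these bitmap index scans together before touching the heap, which shows up in EXPLAIN ANALYZE as BitmapAnd and Bitmap Index Scan nodes:

    CREATE INDEX items_code_idx    ON items (code);
    CREATE INDEX items_param01_idx ON items (param01);
    CREATE INDEX items_param02_idx ON items (param02);
    -- ... one single-column index per param column that is commonly filtered on

    EXPLAIN ANALYZE
    SELECT *
    FROM items
    WHERE code BETWEEN 5 AND 22
      AND param01 = 'P'
      AND param02 = 'G';

The partial-index variant Markus mentions would instead look like CREATE INDEX foo1 ON items (param02, param03) WHERE param01 = 'P', repeated once per value of the partitioning column.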
{
"msg_contents": "On Wed, Apr 12, 2006 at 02:59:32PM +0200, Markus Schaber wrote:\n> > I was thinking about using a multicolumns index, but I have read that\n> > we should limit multicolumns indice to at most 2 or 3 columns.\n> \n> Yes, that's true, the index overhead gets too high.\n> \n> > I was also thinking about about using a functional index.\n> \n> If there's a logical relation between those values that they can easily\n> combined, that may be a good alternative.\n \nHow would that be any better than just doing a multi-column index?\n \n> I just had another weird idea:\n> \n> As your paramXX values can have only 10 parameters, it also might be\n> feasible to use a bunch of 10 conditional indices, like:\n> \n> CREATE INDEX foo1 ON table (param1, param2 WHERE param0='1st value';\n> CREATE INDEX foo2 ON table (param1, param2 WHERE param0='2nd value';\n> CREATE INDEX foo3 ON table (param1, param2 WHERE param0='3rd value';\n> [...]\n\nNot all that weird; it's known as index partitioning.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 12 Apr 2006 15:44:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better index stategy for many fields with few values"
},
{
"msg_contents": "Oscar,\n\nOn 4/10/06 9:58 AM, \"Oscar Picasso\" <[email protected]> wrote:\n\n> - My items table:\n> code int -- can take one of 100 values\n> property varchar(250) -- can take one of 5000 values\n> param01 char(10) -- can take one of 10 values\n> param02 char(10) -- can take one of 10 values\n> ...\n> [ 20 similar columns }\n> ...\n> parama20 char(10) -- can take one of 10 values\n> \n> - The kind of query I want to optimize:\n> select * from items\n> where code betwwen 5 and 22\n> and param01 = 'P'\n> and param02 = 'G'\n> ...\n> [ all the 20 paramXX columns are used in the query}\n> ...\n> and param20 = 'C';\n\nBizgres 0.9 has an on-disk bitmap index which will likely improve this query\nspeed by a very large amount over normal Postgres 8.1.\n\n- Luke\n\n\n",
"msg_date": "Wed, 12 Apr 2006 15:38:14 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better index stategy for many fields with few"
},
{
"msg_contents": "Hi, Jim,\n\nJim C. Nasby wrote:\n\n>>>I was also thinking about about using a functional index.\n>>If there's a logical relation between those values that they can easily\n>>combined, that may be a good alternative.\n> How would that be any better than just doing a multi-column index?\n\n10 different values per column, and 20 columns are 10^20 value combinations.\n\nPartitioning it for the first column gives 10^19 combinations which is\nsmaller than 2^64, and thus fits into a long value.\n\nAnd I just guess that a 10-partition functional index on a long value\ncould perform better than a multi-column index on 20 columns of\ncharacter(10), if only because it is approx. 1/25th in size.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 13 Apr 2006 11:33:53 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better index stategy for many fields with few values"
}
] |
[
{
"msg_contents": "I have a query that is intended to select from multiple \"small tables\" \nto get a limited subset of \"incidentid\" and then join with a \"very \nlarge\" table. One of the operations will require a sequential scan, but \nthe planner is doing the scan on the very large table before joining the \nsmall ones, resulting in a huge amount of disk I/O. How would I make \nthis query join the large table only after narrowing down the possible \nselections from the smaller tables? This is running on version 8.0.3.\n\nThanks for any ideas.\n\n-Dan\n\n\nQUERY\n########################################\n explain analyze\n select distinct\n eventmain.incidentid,\n eventmain.entrydate,\n eventgeo.eventlocation,\n recordtext as retdata\n from\n eventactivity\n join (\n select\n incidentid\n from\n k_h\n where\n id = 33396 and\n k_h.entrydate >= '2006-1-1 00:00' and\n k_h.entrydate < '2006-4-8 00:00'\n ) id_keywords using ( incidentid ) ,\n \n eventmain,\n eventgeo\n where\n eventmain.incidentid = eventactivity.incidentid and\n eventmain.incidentid = eventgeo.incidentid and\n ( ' ' || recordtext || ' ' like '%HAL%' ) and\n eventactivity.entrydate >= '2006-1-1 00:00' and\n eventactivity.entrydate < '2006-4-8 00:00'\n order by\n eventmain.entrydate limit 10000;\n\n\n\nEXPLAIN ANALYZE OUTPUT\n########################################\n Limit (cost=2521191.65..2521191.90 rows=6 width=187) (actual \ntime=1360935.787..1361072.277 rows=1400 loops=1)\n -> Unique (cost=2521191.65..2521191.90 rows=6 width=187) (actual \ntime=1360935.779..1361067.853 rows=1400 loops=1)\n -> Sort (cost=2521191.65..2521191.66 rows=6 width=187) \n(actual time=1360935.765..1360958.258 rows=16211 loops=1)\n Sort Key: eventmain.entrydate, eventmain.incidentid, \neventactivity.recordtext, eventgeo.eventlocation\n -> Nested Loop (cost=219.39..2521191.57 rows=6 \nwidth=187) (actual time=1123.115..1360579.798 rows=16211 loops=1)\n -> Nested Loop (cost=219.39..2521173.23 rows=6 \nwidth=154) (actual time=1105.773..1325907.716 rows=16211 loops=1)\n -> Hash Join (cost=219.39..2521153.37 \nrows=6 width=66) (actual time=1069.476..1289608.261 rows=16211 loops=1)\n Hash Cond: ((\"outer\".incidentid)::text \n= (\"inner\".incidentid)::text)\n -> Seq Scan on eventactivity \n(cost=0.00..2518092.06 rows=1532 width=52) (actual \ntime=57.205..1288514.530 rows=2621 loops=1)\n Filter: ((((' '::text || \n(recordtext)::text) || ' '::text) ~~ '%HAL%'::text) AND (entrydate >= \n'2006-01-01 00:00:00'::timestamp without time zone) AND (entrydate < \n'2006-04-08 00:00:00'::timestamp without time zone))\n -> Hash (cost=217.53..217.53 rows=741 \nwidth=14) (actual time=899.128..899.128 rows=0 loops=1)\n -> Index Scan using k_h_id_idx \non k_h (cost=0.00..217.53 rows=741 width=14) (actual \ntime=55.097..893.883 rows=1162 loops=1)\n Index Cond: (id = 33396)\n Filter: ((entrydate >= \n'2006-01-01 00:00:00'::timestamp without time zone) AND (entrydate < \n'2006-04-08 00:00:00'::timestamp without time zone))\n -> Index Scan using eventmain_incidentid_idx \non eventmain (cost=0.00..3.30 rows=1 width=88) (actual \ntime=1.866..2.227 rows=1 loops=16211)\n Index Cond: \n((eventmain.incidentid)::text = (\"outer\".incidentid)::text)\n -> Index Scan using eventgeo_incidentid_idx on \neventgeo (cost=0.00..3.04 rows=1 width=75) (actual time=1.770..2.126 \nrows=1 loops=16211)\n Index Cond: ((eventgeo.incidentid)::text = \n(\"outer\".incidentid)::text)\n Total runtime: 1361080.787 ms\n(19 rows)\n\n",
"msg_date": "Mon, 10 Apr 2006 13:04:06 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Encouraging multi-table join order"
},
{
"msg_contents": "Dan Harris <[email protected]> writes:\n> I have a query that is intended to select from multiple \"small tables\" \n> to get a limited subset of \"incidentid\" and then join with a \"very \n> large\" table. One of the operations will require a sequential scan, but \n> the planner is doing the scan on the very large table before joining the \n> small ones, resulting in a huge amount of disk I/O. How would I make \n> this query join the large table only after narrowing down the possible \n> selections from the smaller tables? This is running on version 8.0.3.\n\nThat's very strange --- the estimated cost of the seqscan is high enough\nthat the planner should have chosen a nestloop with inner indexscan on\nthe big table. I'm not sure about the join-order point, but the hash\nplan for the first join seems wrong in any case.\n\nUm, you do have an index on eventactivity.incidentid, right? What's the\ndatatype(s) of the incidentid columns? What happens to the plan if you\nturn off enable_hashjoin and enable_mergejoin?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Apr 2006 19:12:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Encouraging multi-table join order "
},
{
"msg_contents": "Tom Lane wrote:\n> That's very strange --- the estimated cost of the seqscan is high enough\n> that the planner should have chosen a nestloop with inner indexscan on\n> the big table. I'm not sure about the join-order point, but the hash\n> plan for the first join seems wrong in any case.\n>\n> Um, you do have an index on eventactivity.incidentid, right? What's the\n> datatype(s) of the incidentid columns? What happens to the plan if you\n> turn off enable_hashjoin and enable_mergejoin?\n>\n> \t\t\tregards, tom lane\n> \nYes, eventactivity.incidentid is indexed. The datatype is varchar(40). \nAlthough, by checking this, I noticed that k_h.incidentid was \nvarchar(100). Perhaps the difference in length between the keys caused \nthe planner to not use the fastest method? I have no defense as to why \nthose aren't the same.. I will make them so and check.\n\nHere's the EXPLAIN analyze with enable_hashjoin = off and \nenable_mergejoin = off :\n\nLimit (cost=4226535.73..4226544.46 rows=698 width=82) (actual \ntime=74339.016..74356.521 rows=888 loops=1)\n -> Unique (cost=4226535.73..4226544.46 rows=698 width=82) (actual \ntime=74339.011..74354.073 rows=888 loops=1)\n -> Sort (cost=4226535.73..4226537.48 rows=698 width=82) \n(actual time=74339.003..74344.031 rows=3599 loops=1)\n Sort Key: eventmain.entrydate, eventmain.incidentid, \neventgeo.eventlocation, eventactivity.recordtext\n -> Nested Loop (cost=0.00..4226502.76 rows=698 \nwidth=82) (actual time=921.325..74314.959 rows=3599 loops=1)\n -> Nested Loop (cost=0.00..4935.61 rows=731 \nwidth=72) (actual time=166.354..14638.308 rows=1162 loops=1)\n -> Nested Loop (cost=0.00..2482.47 rows=741 \nwidth=50) (actual time=150.396..7348.013 rows=1162 loops=1)\n -> Index Scan using k_h_id_idx on k_h \n(cost=0.00..217.55 rows=741 width=14) (actual time=129.540..1022.243 \nrows=1162 loops=1)\n Index Cond: (id = 33396)\n Filter: ((entrydate >= \n'2006-01-01 00:00:00'::timestamp without time zone) AND (entrydate < \n'2006-04-08 00:00:00'::timestamp without time zone))\n -> Index Scan using \neventgeo_incidentid_idx on eventgeo (cost=0.00..3.04 rows=1 width=36) \n(actual time=5.260..5.429 rows=1 loops=1162)\n Index Cond: \n((eventgeo.incidentid)::text = (\"outer\".incidentid)::text)\n -> Index Scan using eventmain_incidentid_idx \non eventmain (cost=0.00..3.30 rows=1 width=22) (actual \ntime=5.976..6.259 rows=1 loops=1162)\n Index Cond: \n((eventmain.incidentid)::text = (\"outer\".incidentid)::text)\n -> Index Scan using eventactivity1 on \neventactivity (cost=0.00..5774.81 rows=20 width=52) (actual \ntime=29.768..51.334 rows=3 loops=1162)\n Index Cond: ((\"outer\".incidentid)::text = \n(eventactivity.incidentid)::text)\n Filter: ((((' '::text || (recordtext)::text) \n|| ' '::text) ~~ '%HAL%'::text) AND (entrydate >= '2006-01-01 \n00:00:00'::timestamp without time zone) AND (entrydate < '2006-04-08 \n00:00:00'::timestamp without time zone))\n\n\n",
"msg_date": "Mon, 10 Apr 2006 17:51:55 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Encouraging multi-table join order"
},
{
"msg_contents": "Dan Harris <[email protected]> writes:\n> Yes, eventactivity.incidentid is indexed. The datatype is varchar(40). \n> Although, by checking this, I noticed that k_h.incidentid was \n> varchar(100). Perhaps the difference in length between the keys caused \n> the planner to not use the fastest method?\n\nNo, the planner wouldn't care about that.\n\n> Here's the EXPLAIN analyze with enable_hashjoin = off and \n> enable_mergejoin = off :\n\nOK, so it does consider the \"right\" plan, but it's estimating it'll take\nlonger than the other one. One thing that's very strange is that the\nestimated number of rows out has changed ... did you re-ANALYZE since\nthe previous message?\n\n> -> Index Scan using eventactivity1 on \n> eventactivity (cost=0.00..5774.81 rows=20 width=52) (actual \n> time=29.768..51.334 rows=3 loops=1162)\n> Index Cond: ((\"outer\".incidentid)::text = \n> (eventactivity.incidentid)::text)\n> Filter: ((((' '::text || (recordtext)::text) \n> || ' '::text) ~~ '%HAL%'::text) AND (entrydate >= '2006-01-01 \n> 00:00:00'::timestamp without time zone) AND (entrydate < '2006-04-08 \n> 00:00:00'::timestamp without time zone))\n\nSo it's estimating 5775 cost units per probe into eventactivity, which\nis pretty high --- it must think that a lot of rows will be retrieved by\nthe index (way more than the 20 or so it thinks will get past the filter\ncondition). What does the pg_stats entry for eventactivity.incidentid\ncontain? It might be worth increasing the statistics target for that\ncolumn to try to get a better estimate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Apr 2006 21:01:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Encouraging multi-table join order "
},
{
"msg_contents": "Tom Lane wrote:\n> <SNIP>\n> So it's estimating 5775 cost units per probe into eventactivity, which\n> is pretty high --- it must think that a lot of rows will be retrieved by\n> the index (way more than the 20 or so it thinks will get past the filter\n> condition). \n\n> What does the pg_stats entry for eventactivity.incidentid\n> contain?\nselect * from pg_stats where tablename = 'eventactivity' and \nattname='incidentid';\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | \nmost_common_vals \n| \nmost_common_freqs \n| \nhistogram_bounds | \ncorrelation\n------------+---------------+------------+-----------+-----------+------------+-----------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------------\n public | eventactivity | incidentid | 0 | 14 | \n8157 | \n{P043190299,P051560740,P052581036,P052830218,P053100679,P053190889,P060370845,P042070391,P042690319,P043290117} \n| \n{0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00133333,0.00133333,0.00133333} \n| \n{P022140319,P030471058,P033090308,P041961082,P042910689,P050311006,P051350254,P052261148,P053270945,P060240316,P061000287} \n| 0.241737\n\n> It might be worth increasing the statistics target for that\n> column to try to get a better estimate.\n> \nHow high should I set this? I read the default is 10, but I'm not sure \nif doubling this would make a difference or if I should be doing a much \nlarger number. There's approx 45 million rows in the table, if that matters.\n\n\nThanks again,\nDan\n",
"msg_date": "Tue, 11 Apr 2006 14:59:22 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Encouraging multi-table join order"
},
{
"msg_contents": "Dan Harris <[email protected]> writes:\n> Tom Lane wrote:\n>> What does the pg_stats entry for eventactivity.incidentid\n>> contain?\n\n> {P043190299,P051560740,P052581036,P052830218,P053100679,P053190889,P060370845,P042070391,P042690319,P043290117} \n> | \n> {0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00133333,0.00133333,0.00133333} \n\n> How high should I set this? I read the default is 10, but I'm not sure \n> if doubling this would make a difference or if I should be doing a much \n> larger number. There's approx 45 million rows in the table, if that matters.\n\nWhat the stats entry is saying is that the most common entries occur\nabout 75000 times apiece (0.00166667 * 45e6), which is what's scaring\nthe planner here ;-). I think those frequencies are artificially high\nthough. The default statistics sample size is 3000 rows (300 *\nstatistics target, actually), so those numbers correspond to 5 or 4\nrows in the sample, which is probably just random chance.\n\nTry increasing the stats targets for this table to 100, then re-ANALYZE\nand see what you get. The most_common_freqs entries might drop as much\nas a factor of 10.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Apr 2006 17:08:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Encouraging multi-table join order "
},
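The commands behind Tom's suggestion are simply the per-column statistics target plus a fresh ANALYZE; the table and column names come from the thread, and 100 is just the value under discussion:

    ALTER TABLE eventactivity ALTER COLUMN incidentid SET STATISTICS 100;
    ANALYZE eventactivity;

    -- then re-check what the planner will see:
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'eventactivity' AND attname = 'incidentid';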
{
"msg_contents": "Tom Lane wrote:\n> What the stats entry is saying is that the most common entries occur\n> about 75000 times apiece (0.00166667 * 45e6), which is what's scaring\n> the planner here ;-). I think those frequencies are artificially high\n> though. The default statistics sample size is 3000 rows (300 *\n> statistics target, actually), so those numbers correspond to 5 or 4\n> rows in the sample, which is probably just random chance.\n>\n> Try increasing the stats targets for this table to 100, then re-ANALYZE\n> and see what you get. The most_common_freqs entries might drop as much\n> as a factor of 10.\n>\n> \t\t\tregards, tom lane\n> \n\nTom:\n\nI believe this was the problem. I upped the statistics to 100, for a \nsample size of 30k and now the planner does the correct nested \nloop/index scan and takes only 30 seconds! This is a HUGE performance \nincrease.\n\nI wonder why the estimates were so far off the first time? This table \nhas been ANALYZED regularly ever since creation.\n\nOnce again, thank you and all of the developers for your hard work on \nPostgreSQL. This is by far the most pleasant management experience of \nany database I've worked on.\n\n-Dan\n\n",
"msg_date": "Tue, 11 Apr 2006 15:21:48 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Encouraging multi-table join order"
},
{
"msg_contents": "Dan Harris <[email protected]> writes:\n> I wonder why the estimates were so far off the first time? This table \n> has been ANALYZED regularly ever since creation.\n\nProbably just that you need a bigger sample size for such a large table.\nWe've been arguing ever since 7.2 about what the default statistics\ntarget ought to be --- a lot of people think 10 is too small. (It could\nalso be that the fixed 300X multiplier ought to depend on table size\ninstead. The math that told us 300X was OK was really about getting the\nhistogram right, not about whether the most-common-values stats would be\nany good.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Apr 2006 17:29:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Encouraging multi-table join order "
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nI'm trying to evaluate PostgreSQL as a database that will have to store\na high volume of data and access that data frequently. One of the\nfeatures on our wish list is to be able to use stored procedures to\naccess the data and I was wondering if it is usual for stored procedures\nto perform slower on PostgreSQL than raw SQL?\n\n \n\nA simple example of this can be shown with the following commands:\n\n \n\nFirst I created a test table:\n\n \n\nCREATE TABLE test (\n\nid int8,\n\nname varchar(128),\n\ndescription varchar(500),\n\nconstraint \"pk_test\" primary key (id)\n\n);\n\n \n\nThen the function I want to test:\n\n \n\nCREATE OR REPLACE FUNCTION readTest() RETURNS SETOF test AS\n\n$$\n\nDECLARE\n\n row test%ROWTYPE;\n\nBEGIN\n\n FOR row IN SELECT * FROM test LOOP\n\n RETURN NEXT row;\n\n END LOOP;\n\n \n\n RETURN;\n\nEND;\n\n$$ LANGUAGE plpgsql;\n\n \n\nFirstly, I ran EXPLAIN on the raw SQL to see how long that takes to\naccess the database the results are as follows:\n\n \n\nEXPLAIN ANALYZE SELECT * FROM test;\n\nSeq Scan on test (cost=0.00..10.90 rows=90 width=798) (actual\ntime=0.003..0.003 rows=0 loops=1)\n\nTotal runtime: 0.074 ms\n\n(2 rows)\n\n \n\nSecondly, I ran EXPLAIN on the function created above and the results\nare as follows:\n\n \n\nEXPLAIN ANALYZE SELECT * FROM readTest();\n\nFunction Scan on readtest (cost=0.00..12.50 rows=1000 width=798)\n(actual time=0.870..0.870 rows=0 loops=1)\n\nTotal runtime: 0.910 ms\n\n(2 rows)\n\n \n\nI know that the function is planned the first time it is executed so I\nran the same command again to remove that processing from the timings\nand the results are as follows:\n\n \n\nEXPLAIN ANALYZE SELECT * FROM readTest();\n\nFunction Scan on readtest (cost=0.00..12.50 rows=1000 width=798)\n(actual time=0.166..0.166 rows=0 loops=1)\n\nTotal runtime: 0.217 ms\n\n(2 rows)\n\n \n\nEvent with the planning removed, the function still performs\nsignificantly slower than the raw SQL. Is that normal or am I doing\nsomething wrong with the creation or calling of the function?\n\n \n\nThanks for your help,\n\n \n\nSimon\n\n \nVisit our Website at http://www.rm.com\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RM does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RM. If this email has come to you in error, please delete it, along with any attachments. Please note that RM may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\nThis email and any attachments may contain confidential information belonging to RM. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RM and the disclosure of which would be prejudicial to RM's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI’m trying to evaluate PostgreSQL as a database that\nwill have to store a high volume of data and access that data frequently. 
One\nof the features on our wish list is to be able to use stored procedures to\naccess the data and I was wondering if it is usual for stored procedures to\nperform slower on PostgreSQL than raw SQL?\n \nA simple example of this can be shown with the following\ncommands:\n \nFirst I created a test table:\n \nCREATE TABLE test (\nid int8,\nname varchar(128),\ndescription varchar(500),\nconstraint “pk_test”\nprimary key (id)\n);\n \nThen the function I want to test:\n \nCREATE OR REPLACE FUNCTION readTest() RETURNS SETOF test AS\n$$\nDECLARE\n row\ntest%ROWTYPE;\nBEGIN\n FOR\nrow IN SELECT * FROM test LOOP\n RETURN\nNEXT row;\n END\nLOOP;\n \n RETURN;\nEND;\n$$ LANGUAGE plpgsql;\n \nFirstly, I ran EXPLAIN on the raw SQL to see how long that\ntakes to access the database the results are as follows:\n \nEXPLAIN ANALYZE SELECT * FROM test;\nSeq Scan on test (cost=0.00..10.90 rows=90 width=798)\n(actual time=0.003..0.003 rows=0 loops=1)\nTotal runtime: 0.074 ms\n(2 rows)\n \nSecondly, I ran EXPLAIN on the function created above and\nthe results are as follows:\n \nEXPLAIN ANALYZE SELECT * FROM readTest();\nFunction Scan on readtest (cost=0.00..12.50 rows=1000 width=798)\n(actual time=0.870..0.870 rows=0 loops=1)\nTotal runtime: 0.910 ms\n(2 rows)\n \nI know that the function is planned the first time it is\nexecuted so I ran the same command again to remove that processing from the\ntimings and the results are as follows:\n \nEXPLAIN ANALYZE SELECT * FROM readTest();\nFunction Scan on readtest (cost=0.00..12.50 rows=1000 width=798)\n(actual time=0.166..0.166 rows=0 loops=1)\nTotal runtime: 0.217 ms\n(2 rows)\n \nEvent with the planning removed, the function still performs\nsignificantly slower than the raw SQL. Is that normal or am I doing something wrong\nwith the creation or calling of the function?\n \nThanks for your help,\n \nSimon\n \n\n\n\nVisit our Website at www.rm.com\n\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RM does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RM. If this email has come to you in error, please delete it, along with any attachments. Please note that RM may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\n\nThis email and any attachments may contain confidential information belonging to RM. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RM and the disclosure of which would be prejudicial to RM's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.",
"msg_date": "Tue, 11 Apr 2006 08:19:39 +0100",
"msg_from": "\"Simon Dale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stored Procedure Performance"
},
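A minimal sketch of the simplest alternative to the plpgsql loop above: when the body is nothing more than a single SELECT, the function can be written in the SQL language, which skips the plpgsql interpreter and the explicit RETURN NEXT accumulation. It reuses the test table from the post above; the STABLE marking is an assumption about how the function will be used.

CREATE OR REPLACE FUNCTION readtest_sql() RETURNS SETOF test AS
$$
    SELECT * FROM test;
$$ LANGUAGE sql STABLE;

-- Called exactly like the plpgsql version:
EXPLAIN ANALYZE SELECT * FROM readtest_sql();

Whether this closes the gap completely depends on the version and the query, but it removes the per-row RETURN NEXT overhead that the replies below discuss.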
{
"msg_contents": "On 4/11/06, Simon Dale <[email protected]> wrote:\n>\n> I'm trying to evaluate PostgreSQL as a database that will have to store a\n> high volume of data and access that data frequently. One of the features on\n> our wish list is to be able to use stored procedures to access the data and\n> I was wondering if it is usual for stored procedures to perform slower on\n> PostgreSQL than raw SQL?\n>\n\nworry but your benchmark is completelly flawed.\n1st. the tables are empty. will you ever run the real code on empty tables?\n2nd. do you really need a stored procedure for such a simple query?\n\ntesting something that's far from real usage will not give you any good.\nreturn next will of course show up as slower than standard select. the thing\nis - will the relative slowness of return next matter to you when you will\nput more logic in the procedure?\n\ndepesz\n\nOn 4/11/06, Simon Dale <[email protected]> wrote:\n\nI'm trying to evaluate PostgreSQL as a database that\nwill have to store a high volume of data and access that data frequently. One\nof the features on our wish list is to be able to use stored procedures to\naccess the data and I was wondering if it is usual for stored procedures to\nperform slower on PostgreSQL than raw SQL?worry but your benchmark is completelly flawed.1st. the tables are empty. will you ever run the real code on empty tables?\n2nd. do you really need a stored procedure for such a simple query?testing something that's far from real usage will not give you any good.return next will of course show up as slower than standard select. the thing is - will the relative slowness of return next matter to you when you will put more logic in the procedure?\ndepesz",
"msg_date": "Tue, 11 Apr 2006 09:58:33 +0200",
"msg_from": "\"hubert depesz lubaczewski\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "On 4/11/06, Simon Dale <[email protected]> wrote:\n>\n>\n>\n> Hi,\n>\n>\n>\n> I'm trying to evaluate PostgreSQL as a database that will have to store a\n> high volume of data and access that data frequently. One of the features on\n> our wish list is to be able to use stored procedures to access the data and\n> I was wondering if it is usual for stored procedures to perform slower on\n> PostgreSQL than raw SQL?\n\n\nNo.\n\nRETURN NEXT keeps accumulating the data before returning.\nI am not sure if any optimisations have been done to that effect.\n\nIn general functions are *NOT* slower than RAW SQL.\n\nRegds\nmallah.\n",
"msg_date": "Tue, 11 Apr 2006 13:52:48 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Rajesh Kumar Mallah wrote:\n> On 4/11/06, Simon Dale <[email protected]> wrote:\n>>\n>> I'm trying to evaluate PostgreSQL as a database that will have to store a\n>> high volume of data and access that data frequently. One of the features on\n>> our wish list is to be able to use stored procedures to access the data and\n>> I was wondering if it is usual for stored procedures to perform slower on\n>> PostgreSQL than raw SQL?\n> \n> No.\n> \n> RETURN NEXT keeps accumulating the data before returning.\n> I am not sure if any optimisations have been done to that effect.\n> \n> In general functions are *NOT* slower than RAW SQL.\n\nActually, in cases where there is a simple way to state the query in raw \nSQL then I'd expect that a procedural solution IS slower. After all, \nyou're adding another layer of processing.\n\nOf course, you normally wouldn't write a procedural solution to a simple \nquery.\n\nAdded to this is the difference that plpgsql is planned once whereas raw \nsql will be planned on each query. This means you save planning costs \nwith the plpgsql but have the chance to get better plans with the raw sql.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 11 Apr 2006 11:04:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "On 4/11/06, Simon Dale <[email protected]> wrote:\n> I'm trying to evaluate PostgreSQL as a database that will have to store a\n> high volume of data and access that data frequently. One of the features on\n> our wish list is to be able to use stored procedures to access the data and\n> I was wondering if it is usual for stored procedures to perform slower on\n> PostgreSQL than raw SQL?\n\npl/pgsql procedures are a very thin layer over the query engine. \nGenerally, they run about the same speed as SQL but you are not making\napples to apples comparison. One of the few but annoying limitations\nof pl/pgsql procedures is that you can't return a select directly from\nthe query engine but have to go through the return/return next\nparadigm which will be slower than raw query for obvious reasons.\n\nYou can however return a refcursor and you may want to look at them in\nsituations where you want to return arbitrary sets outside the query\nengine or between pl/pgsql functions. An example of using refcurors\nin that way is on my blog at\nhttp://people.planetpostgresql.org/merlin/index.php?/archives/2-Dealing-With-Recursive-Sets-With-PLPGSQL.html\n\nGenerally, in my opinion if you want to really unlock the power of\npostgresql you have to master pl/pgsql. Go for it...it will work and\nwork well.\n\nmerlin\n\nmerlin\n",
"msg_date": "Tue, 11 Apr 2006 09:49:48 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
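A minimal sketch of the refcursor approach described above, with hypothetical names; the function only opens the cursor and the caller fetches the rows within the same transaction, so nothing is accumulated inside plpgsql.

CREATE OR REPLACE FUNCTION open_test_cursor() RETURNS refcursor AS
$$
DECLARE
    c refcursor;
BEGIN
    OPEN c FOR SELECT * FROM test;  -- rows are streamed, not materialized here
    RETURN c;                       -- caller fetches from the returned cursor
END;
$$ LANGUAGE plpgsql;

-- Usage: must run inside a transaction so the cursor stays open.
BEGIN;
SELECT open_test_cursor();            -- returns the cursor (portal) name
FETCH ALL FROM "<unnamed portal 1>";  -- substitute the name returned above
COMMIT;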
{
"msg_contents": "[email protected] (\"Simon Dale\") wrote:\n> <p class=MsoNormal><font size=2 face=Arial><span style='font-size:10.0pt;\n> font-family:Arial'>Event with the planning removed, the function still performs\n> significantly slower than the raw SQL. Is that normal or am I doing something wrong\n> with the creation or calling of the\n> function?<o:p></o:p></span></font></p>\n\nI'd expect this, yes.\n\nYou're doing something via \"stored procedure logic\" that would be done\nmore directly via straight SQL; of course it won't be faster.\n\nIn effect, pl/pgsql involves (planning once) then running each line of\nlogic. In effect, you replaced one query (select * from some table)\ninto 90 queries. Yup, there's extra cost there.\n\nThere's not some \"magic\" by which stored procedures provide results\nfaster as a natural \"matter of course;\" the performance benefits\ngenerally fall out of two improvements:\n\n 1. You eliminate client-to-server round trips.\n\n A stored proc that runs 8 queries saves you 8 round trips over\n submitting the 8 queries directly. Saving you latency time.\n\n 2. You can eliminate the marshalling and transmission of unnecessary\n data.\n\n A stored proc that runs 8 queries, and only returns summarized\n results that all come from the last table queried will eliminate\n the need to marshall and transmit (possibly over a slow link) the\n data for the 7 preceding queries.\n\nThe case that you tried can benefit from neither of those effects;\nyour stored procedure eliminates NO round trips, and NO\nmarshalling/transmission.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"gmail.com\")\nhttp://linuxdatabases.info/info/rdbms.html\nRules of the Evil Overlord #228. \"If the hero claims he wishes to\nconfess in public or to me personally, I will remind him that a\nnotarized deposition will serve just as well.\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 11 Apr 2006 09:56:27 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "\nHello\n\nAt my little machine (pentium 4, 2.8 Ghz, 256 Mb RAM, Suse linux 9)\nI can process about 100000 records a minute using the next setup:\n\nbegin work\n\nbegin for\n\tprocessing\n\tif 10.000 records processed:\n\t\tcommit work\n\t\tbegin work\n\tend if\nend for\n\ncommit work (!)\n\nRegards\n\nHenk Sanders\n\n\n \n\n> -----Oorspronkelijk bericht-----\n> Van: [email protected]\n> [mailto:[email protected]]Namens Merlin Moncure\n> Verzonden: dinsdag 11 april 2006 15:50\n> Aan: Simon Dale\n> CC: [email protected]\n> Onderwerp: Re: [PERFORM] Stored Procedure Performance\n> \n> \n> On 4/11/06, Simon Dale <[email protected]> wrote:\n> > I'm trying to evaluate PostgreSQL as a database that will have to store a\n> > high volume of data and access that data frequently. One of the features on\n> > our wish list is to be able to use stored procedures to access the data and\n> > I was wondering if it is usual for stored procedures to perform slower on\n> > PostgreSQL than raw SQL?\n> \n> pl/pgsql procedures are a very thin layer over the query engine. \n> Generally, they run about the same speed as SQL but you are not making\n> apples to apples comparison. One of the few but annoying limitations\n> of pl/pgsql procedures is that you can't return a select directly from\n> the query engine but have to go through the return/return next\n> paradigm which will be slower than raw query for obvious reasons.\n> \n> You can however return a refcursor and you may want to look at them in\n> situations where you want to return arbitrary sets outside the query\n> engine or between pl/pgsql functions. An example of using refcurors\n> in that way is on my blog at\n> http://people.planetpostgresql.org/merlin/index.php?/archives/2-Dealing-With-Recursive-Sets-With-PLPGSQL.html\n> \n> Generally, in my opinion if you want to really unlock the power of\n> postgresql you have to master pl/pgsql. Go for it...it will work and\n> work well.\n> \n> merlin\n> \n> merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n",
"msg_date": "Tue, 11 Apr 2006 16:02:51 +0200",
"msg_from": "\"H.J. Sanders\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 4/11/06, Simon Dale <[email protected]> wrote:\n> > I'm trying to evaluate PostgreSQL as a database that will have to store a\n> > high volume of data and access that data frequently. One of the features on\n> > our wish list is to be able to use stored procedures to access the data and\n> > I was wondering if it is usual for stored procedures to perform slower on\n> > PostgreSQL than raw SQL?\n> \n> pl/pgsql procedures are a very thin layer over the query engine. \n> Generally, they run about the same speed as SQL but you are not making\n> apples to apples comparison. One of the few but annoying limitations\n> of pl/pgsql procedures is that you can't return a select directly from\n> the query engine but have to go through the return/return next\n> paradigm which will be slower than raw query for obvious reasons.\n\nThere's one problem that hasn't been mentioned. For the optimizer a\nPL/pgSQL function (really, a function in any language except SQL) is a\nblack box. If you have a complex join of two or three functions, and\nthey don't return 1000 rows, it's very likely that the optimizer is\ngoing to get it wrong.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 11 Apr 2006 10:08:57 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "On 4/11/06, Alvaro Herrera <[email protected]> wrote:\n> Merlin Moncure wrote:\n> > pl/pgsql procedures are a very thin layer over the query engine.\n> > Generally, they run about the same speed as SQL but you are not making\n> > apples to apples comparison. One of the few but annoying limitations\n> > of pl/pgsql procedures is that you can't return a select directly from\n> > the query engine but have to go through the return/return next\n> > paradigm which will be slower than raw query for obvious reasons.\n>\n> There's one problem that hasn't been mentioned. For the optimizer a\n> PL/pgSQL function (really, a function in any language except SQL) is a\n> black box. If you have a complex join of two or three functions, and\n> they don't return 1000 rows, it's very likely that the optimizer is\n> going to get it wrong.\n\nThis doesn't bother me that much. Those cases usually have a high\noverlap with views.You just have to plan on the function being fully\nmaterialized before it is inovled further. What drives me crazy is I\nhave to do 'select * from plpgsql_srf()' but I am allowed to do the\nmuch friendlier and more versatile 'select sql_srf()', even if they do\nmore or less the same thing.\n\nOn the flip side, what drives me crazy about sql functions is that all\ntables have to be in the search path for the validator. Since I\nfrequently use the trick of having multiple schemas with one set of\nfunctions this is annoying.\n\nMerlin\n",
"msg_date": "Tue, 11 Apr 2006 17:02:00 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
}
] |
[
{
"msg_contents": "Hi,\n\n I think this is an old question, but I want to know if it really is well worth to not create some foreign keys an deal with the referential integrity at application-level?????\n Specifically, the system we are developing is a server/cliente architecture that the server is the database and the fat client is an application developed in DELPHI!!!\n\n Thanks in advance!!\n\n\n\n\n\n\n Hi,\n \n I think this is an old question, but I want \nto know if it really is well worth to not create some foreign \nkeys an deal with the referential integrity at \napplication-level?????\n Specifically, the system we are developing \nis a server/cliente architecture that the server is the database and the fat \nclient is an application developed in DELPHI!!!\n \n Thanks in \nadvance!!",
"msg_date": "Tue, 11 Apr 2006 16:13:34 -0300",
"msg_from": "\"Rodrigo Sakai\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "On Tue, 2006-04-11 at 14:13, Rodrigo Sakai wrote:\n> Hi,\n> \n> I think this is an old question, but I want to know if it really is\n> well worth to not create some foreign keys an deal with the\n> referential integrity at application-level?????\n> Specifically, the system we are developing is a server/cliente\n> architecture that the server is the database and the fat client is an\n> application developed in DELPHI!!!\n> \n\nIf ref integrity is important, you'll have to do it either in the app or\nthe database.\n\nAlmost always, it's faster to let the database do it, as there's less\ntraffic across the wire required to maintain ref integrity, plus, the\nguys who wrote the database have spent years making sure race conditions\nwon't scram your data.\n\nFor simple, straight forward FK->PK relationships, you will likely NOT\nbe able to beat the database in terms of either reliability or\nperformance with your own code.\n",
"msg_date": "Tue, 11 Apr 2006 14:48:46 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
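For readers following along, a minimal sketch of the declarative version being discussed, with made-up table and column names; the single REFERENCES clause replaces all of the application-side checking:

CREATE TABLE customer (
    customer_id integer PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE orders (
    order_id    integer PRIMARY KEY,
    customer_id integer NOT NULL
        REFERENCES customer (customer_id)
        ON UPDATE CASCADE
        ON DELETE CASCADE
);

-- Every INSERT/UPDATE/DELETE is now checked by the database itself,
-- including the concurrent cases that are hard to get right in client code.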
{
"msg_contents": "\nOn Apr 12, 2006, at 4:13 , Rodrigo Sakai wrote:\n\n> I think this is an old question, but I want to know if it really \n> is well worth to not create some foreign keys an deal with the \n> referential integrity at application-level?????\n\nIf I had to choose between one or the other, I'd leave all \nreferential integrity in the database and deal with the errors thrown \nwhen referential integrity is violated in the application. PostgreSQL \nis designed to handle these kinds of issues. Anything you code in \nyour application is more likely to contain bugs or miss corner cases \nthat would allow referential integrity to be violated. PostgreSQL has \nbeen pounded on for years by a great many users and developers, \nmaking the likelihood of bugs still remaining much smaller.\n\nOf course, you can add some referential integrity checks in your \napplication code, but those should be in addition to your database- \nlevel checks.\n\nMichael Glaesemann\ngrzm myrealbox com\n\n\n\n",
"msg_date": "Wed, 12 Apr 2006 08:06:17 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "On Wed, Apr 12, 2006 at 08:06:17AM +0900, Michael Glaesemann wrote:\n> \n> On Apr 12, 2006, at 4:13 , Rodrigo Sakai wrote:\n> \n> > I think this is an old question, but I want to know if it really \n> >is well worth to not create some foreign keys an deal with the \n> >referential integrity at application-level?????\n> \n> If I had to choose between one or the other, I'd leave all \n> referential integrity in the database and deal with the errors thrown \n> when referential integrity is violated in the application. PostgreSQL \n> is designed to handle these kinds of issues. Anything you code in \n> your application is more likely to contain bugs or miss corner cases \n> that would allow referential integrity to be violated. PostgreSQL has \n> been pounded on for years by a great many users and developers, \n> making the likelihood of bugs still remaining much smaller.\n\nIt's also pretty unlikely that you can make RI in the application\nperform better than in the database.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 11 Apr 2006 18:38:20 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Wed, Apr 12, 2006 at 08:06:17AM +0900, Michael Glaesemann wrote:\n>> ... Anything you code in \n>> your application is more likely to contain bugs or miss corner cases \n>> that would allow referential integrity to be violated. PostgreSQL has \n>> been pounded on for years by a great many users and developers, \n>> making the likelihood of bugs still remaining much smaller.\n\n> It's also pretty unlikely that you can make RI in the application\n> perform better than in the database.\n\nI think the traditional assumption among the \"you should do RI in the\napplication\" crowd is that the application has higher-level knowledge\nthat lets it understand when it can skip doing an RI check entirely.\nSkipping an RI check is always faster than doing it --- so that's right,\nit's faster. As long as you don't make any mistakes.\n\nThe question you have to ask yourself is whether you are really that\nsmart ... not just today, but every single time. To quote Clint\nEastwood: \"Do you feel lucky punk? Well, do you?\"\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Apr 2006 22:56:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE "
},
{
"msg_contents": "\n>> I think this is an old question, but I want to know if it really is \n>> well worth to not create some foreign keys an deal with the referential \n>> integrity at application-level?????\n\n\tTrust me : do it in the application and you'll enter a world of hurt. I'm \ndoing it with some mysql apps, and it's a nightmare ; doing cascaded \ndelete's by hand, etc, you always forget something, you have to modify a \nmillion places in your code everytime you add a new table, your ORM \nbloats, you get to write cleanup cron scripts which take forever to run, \nyour website crashes etc.\n\n",
"msg_date": "Wed, 12 Apr 2006 09:22:52 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "Hi, Michael,\nHi, Rodrigo,\n\nMichael Glaesemann wrote:\n\n> If I had to choose between one or the other, I'd leave all referential\n> integrity in the database and deal with the errors thrown when\n> referential integrity is violated in the application. PostgreSQL is\n> designed to handle these kinds of issues. Anything you code in your\n> application is more likely to contain bugs or miss corner cases that\n> would allow referential integrity to be violated. PostgreSQL has been\n> pounded on for years by a great many users and developers, making the\n> likelihood of bugs still remaining much smaller.\n\nI strictly agree with Michael here.\n\n> Of course, you can add some referential integrity checks in your\n> application code, but those should be in addition to your database-\n> level checks.\n\nAgree. It does make sense to have reference checks in the UI or\napplication level for the sake of better error handling, but the\ndatabase should be the mandatory judge.\n\nThere's another advantage of database based checking: Should there ever\nbe the need of a different application working on the same database (e.\nG. an \"expert level UI\", or some connector that connects / synchronizes\nto another software, or a data import tool), database based constraints\ncannot be broken opposed to application based ones.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 12 Apr 2006 15:18:55 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": ">>> I think this is an old question, but I want to know if it really \n>>> is well worth to not create some foreign keys an deal with the \n>>> referential integrity at application-level?????\n> \n> \n> Trust me : do it in the application and you'll enter a world of \n> hurt. I'm doing it with some mysql apps, and it's a nightmare ; doing \n> cascaded delete's by hand, etc, you always forget something, you have \n> to modify a million places in your code everytime you add a new table, \n> your ORM bloats, you get to write cleanup cron scripts which take \n> forever to run, your website crashes etc.\n\nAll good advice, but... there are no absolutes in this world. Application-enforced referential integrity makes sense if (and probably ONLY if):\n\n1. You have only one application that modifies the data. (Otherwise, you have to duplicate the rules across many applications, leading to a code-maintenance nightmare).\n\n2. If your application crashes and leaves a mess, it's not a catastrophe, and you have a good way to clean it up. For example, a bank shouldn't do this, but it might be OK for a computer-aided-design application, or the backend of a news web site.\n\n3. You have application-specific knowledge about when you can skip referential integrity and thereby greatly improve performance. For example, you may have batch operations where large numbers of rows are temporarily inconsistent.\n\nIf your application doesn't meet ALL of these criteria, you probably should use the database for referential integrity.\n\nCraig\n",
"msg_date": "Wed, 12 Apr 2006 07:45:17 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": " Thanks for all help!! But my problem is with performance, I agree with all\nof you, the RI must be maintained by the database, because a bunch of\nreasons that everyone knows!\n But, I'm dealing with a very huge database that servers more than 200\nclientes at the same time, and because of it, each manipulation (delete,\ninsert, update, select) on the database have a poor performance. So, if we\ndeal with RI in each client station, we take this work off the database!\n The application is an ERP developed with DELPHI + (postgresql or oracle or\nsql server)!!\n\n Thanks again!!\n\n----- Original Message ----- \nFrom: \"Markus Schaber\" <[email protected]>\nTo: <[email protected]>\nCc: \"Rodrigo Sakai\" <[email protected]>\nSent: Wednesday, April 12, 2006 10:18 AM\nSubject: Re: [PERFORM] FOREIGN KEYS vs PERFORMANCE\n\n\n> Hi, Michael,\n> Hi, Rodrigo,\n>\n> Michael Glaesemann wrote:\n>\n> > If I had to choose between one or the other, I'd leave all referential\n> > integrity in the database and deal with the errors thrown when\n> > referential integrity is violated in the application. PostgreSQL is\n> > designed to handle these kinds of issues. Anything you code in your\n> > application is more likely to contain bugs or miss corner cases that\n> > would allow referential integrity to be violated. PostgreSQL has been\n> > pounded on for years by a great many users and developers, making the\n> > likelihood of bugs still remaining much smaller.\n>\n> I strictly agree with Michael here.\n>\n> > Of course, you can add some referential integrity checks in your\n> > application code, but those should be in addition to your database-\n> > level checks.\n>\n> Agree. It does make sense to have reference checks in the UI or\n> application level for the sake of better error handling, but the\n> database should be the mandatory judge.\n>\n> There's another advantage of database based checking: Should there ever\n> be the need of a different application working on the same database (e.\n> G. an \"expert level UI\", or some connector that connects / synchronizes\n> to another software, or a data import tool), database based constraints\n> cannot be broken opposed to application based ones.\n>\n> HTH,\n> Markus\n> -- \n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in EU! www.ffii.org\nwww.nosoftwarepatents.org\n>\n\n",
"msg_date": "Wed, 12 Apr 2006 11:49:49 -0300",
"msg_from": "\"Rodrigo Sakai\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "On 4/11/06, Rodrigo Sakai <[email protected]> wrote:\n>\n> Hi,\n>\n> I think this is an old question, but I want to know if it really is well\n> worth to not create some foreign keys an deal with the referential integrity\n> at application-level?????\n> Specifically, the system we are developing is a server/cliente\n> architecture that the server is the database and the fat client is an\n> application developed in DELPHI!!!\n>\n> Thanks in advance!!\n\nDelphi IMO is the best RAD win32 IDE ever invented (especially when\npaired the very excellent Zeos connection objects). However, for\npurposes of data management ,imperative languages, Delphi included,\nsimply suck. Great form editor though.\n\nmerlin\n",
"msg_date": "Wed, 12 Apr 2006 11:03:26 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "\nOn Apr 12, 2006, at 23:49 , Rodrigo Sakai wrote:\n\n> Thanks for all help!! But my problem is with performance, I agree \n> with all\n> of you, the RI must be maintained by the database, because a bunch of\n> reasons that everyone knows!\n\nYou've gotten a variety of good advice from a number of people. For \nmore specific advice (e.g., for your particular situation), it would \nbe very helpful if you could provide examples of queries that aren't \nperforming well for you (including table schema and explain analyze \noutput).\n\nMichael Glaesemann\ngrzm myrealbox com\n\n\n\n",
"msg_date": "Thu, 13 Apr 2006 00:06:27 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "\n\tWhat kinds of operations are made slow by foreign key checks ? Is it :\n\t- Simple checks on INSERT ?\n\t- Simple checks on UPDATE ?\n\t- Cascaded deletes ?\n\t- Cascaded updates ?\n\t- Locks ?\n\t- Something else ?\n\n\tForeign keys are to ensure that the value in a column is always part of a \nspecific set (the referenced table). If this set changes very rarely, like \nthe list of countries in the world, or the list of states in a country, or \nthe various possible states an order can be in (received, processed, \nshipped...) then it's alright to skip the check if you're sure your \napplication will insert a valid value. For some other situations, doing \nthe check in the application will need some queries and could be slower \n(and certainly will be more complicated...)\n\n\tAre you sure you're not missing a few indexes, which would then force \nfkey checks to use sequential scans ?\n\n> Thanks for all help!! But my problem is with performance, I agree with \n> all\n> of you, the RI must be maintained by the database, because a bunch of\n> reasons that everyone knows!\n> But, I'm dealing with a very huge database that servers more than 200\n> clientes at the same time, and because of it, each manipulation (delete,\n> insert, update, select) on the database have a poor performance. So, if \n> we\n> deal with RI in each client station, we take this work off the database!\n> The application is an ERP developed with DELPHI + (postgresql or \n> oracle or\n> sql server)!!\n",
"msg_date": "Wed, 12 Apr 2006 17:09:37 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
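One concrete thing to check for the question above: PostgreSQL requires an index on the referenced side (via the primary key or a unique constraint), but not on the referencing column, so cascaded deletes and updates can end up seq-scanning the child table. A sketch, reusing the hypothetical orders table from the earlier example:

-- Without this index, every DELETE FROM customer must scan orders
-- to honour ON DELETE CASCADE.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- A hand-written equivalent of the check, to confirm the index is used:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;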
{
"msg_contents": "On Wed, 2006-04-12 at 09:49, Rodrigo Sakai wrote:\n> Thanks for all help!! But my problem is with performance, I agree with all\n> of you, the RI must be maintained by the database, because a bunch of\n> reasons that everyone knows!\n> But, I'm dealing with a very huge database that servers more than 200\n> clientes at the same time, and because of it, each manipulation (delete,\n> insert, update, select) on the database have a poor performance. So, if we\n> deal with RI in each client station, we take this work off the database!\n> The application is an ERP developed with DELPHI + (postgresql or oracle or\n> sql server)!!\n\nThese are separate issues.\n\nOne is performance of PostgreSQL handling FK->PK relationships.\nPostgreSQL, in my experience, is quite fast at this. However, there are\nways you can set up FK->PK relationships that are non-optimal and will\nresult in poor performance. FK->PK relationships are generally fastest\nwhen they are 1-1 and based on integer types. If there's a type\nmismatch, or you use slower types, like large text fields, or numerics,\nyou may have poor performance. Give us a sample of your schema where\nyou're having problems, let us help you troubleshoot your performance.\n\nHigh parallel load is another issue. No matter where you put your\nFK->PK relationship handling, having 200+ users connected at the same\ntime and manipulating your database is a heavy load.\n\nHandling FK->PK relationships in software often is vulnerable to race\nconditions. Like so: (T1 and T2 are different \"threads)\n\nT1: select id from mastertable where id=99; -- check for row\nT2: delete from mastertable where id=99; -- delete a row\nT1: insert into slavetable values (....); -- whoops! No master\n\nIf we change the T1 to select for update, we now have the overhead that\nmost FK->PK relationships have.\n\nWhat version of PostgreSQL are you running. Older versions had much\npoorer performance than newer versions when updating FK->PK\nrelationships.\n\nDon't assume that application level FK->PK relationships will be faster\nAND as good as the ones at database level. It's quite possible that\nthey're faster for you because you're cutting corners, referentially\nspeaking, and your data will wind up incoherent over time.\n\nAlso, you may be dealing with a database that is IO bound, and moving\nthe FK checks to software is only a short stop gap, and as the machine\nhits the IO performance ceiling, you'll have the same problem again,\nneed a bigger machine, and have incoherent data. I.e. the same problem,\nplus a few more, and have spent a lot of time spinning your wheels going\na short distance.\n",
"msg_date": "Wed, 12 Apr 2006 10:13:45 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
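For completeness, a sketch of what the application would have to do on every insert to close the race described above, using the same hypothetical tables; this is roughly the row-level locking that the built-in foreign key triggers already perform for you:

BEGIN;
-- Lock the parent row so a concurrent DELETE blocks until we commit.
SELECT customer_id FROM customer WHERE customer_id = 99 FOR UPDATE;
-- The application checks that a row came back, then inserts the child:
INSERT INTO orders (order_id, customer_id) VALUES (1001, 99);
COMMIT;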
{
"msg_contents": "On Wed, Apr 12, 2006 at 09:22:52AM +0200, PFC wrote:\n> \n> >> I think this is an old question, but I want to know if it really is \n> >>well worth to not create some foreign keys an deal with the referential \n> >>integrity at application-level?????\n> \n> \tTrust me : do it in the application and you'll enter a world of \n> \thurt. I'm doing it with some mysql apps, and it's a nightmare ; doing \n> cascaded delete's by hand, etc, you always forget something, you have to \n> modify a million places in your code everytime you add a new table, your \n> ORM bloats, you get to write cleanup cron scripts which take forever to \n> run, your website crashes etc.\n\nWell, yeah, thats typical for MySQL sites, but what's that have to do\nwith RI?\n\nSorry, couldn't resist. :P\n",
"msg_date": "Wed, 12 Apr 2006 10:36:20 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "On Wed, Apr 12, 2006 at 07:45:17AM -0700, Craig A. James wrote:\n> All good advice, but... there are no absolutes in this world. \n> Application-enforced referential integrity makes sense if (and probably \n> ONLY if):\n> \n> 1. You have only one application that modifies the data. (Otherwise, you \n> have to duplicate the rules across many applications, leading to a \n> code-maintenance nightmare).\n\nYou forgot something:\n\n1a: You know that there will never, ever, ever, ever, be any other\napplication that wants to talk to the database.\n\nI know tons of people that get burned because they go with something\nthat's \"good enough for now\", and then regret that decision for years to\ncome.\n\n> 2. If your application crashes and leaves a mess, it's not a catastrophe, \n> and you have a good way to clean it up. For example, a bank shouldn't do \n> this, but it might be OK for a computer-aided-design application, or the \n> backend of a news web site.\n> \n> 3. You have application-specific knowledge about when you can skip \n> referential integrity and thereby greatly improve performance. For \n> example, you may have batch operations where large numbers of rows are \n> temporarily inconsistent.\n> \n> If your application doesn't meet ALL of these criteria, you probably should \n> use the database for referential integrity.\n> \n> Craig\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 12 Apr 2006 10:39:51 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "Jim C. Nasby wrote:\n>>1. You have only one application that modifies the data. (Otherwise, you \n>>have to duplicate the rules across many applications, leading to a \n>>code-maintenance nightmare).\n> \n> You forgot something:\n> \n> 1a: You know that there will never, ever, ever, ever, be any other\n> application that wants to talk to the database.\n> \n> I know tons of people that get burned because they go with something\n> that's \"good enough for now\", and then regret that decision for years to\n> come.\n\nNo, I don't agree with this. Too many people waste time designing for \"what if...\" scenarios that never happen. You don't want to be dumb and design something that locks out a foreseeable and likely future need, but referential integrity doesn't meet this criterion. There's nothing to keep you from changing from app-managed to database-managed referential integrity if your needs change.\n\nDesign for your current requirements.\n\n\nLet us be of good cheer, remembering that the misfortunes hardest to bear are \nthose which never happen.\t\t- James Russell Lowell (1819-1891)\n\nTherefore do not be anxious about tomorrow, for tomorrow will be anxious for \nitself. Let the day's own trouble be sufficient for the day.\n\t\t\t\t\t- Matthew 6:34\n\nCraig\n",
"msg_date": "Wed, 12 Apr 2006 10:36:28 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "On Wed, Apr 12, 2006 at 10:36:28AM -0700, Craig A. James wrote:\n> Jim C. Nasby wrote:\n> >>1. You have only one application that modifies the data. (Otherwise, you \n> >>have to duplicate the rules across many applications, leading to a \n> >>code-maintenance nightmare).\n> >\n> >You forgot something:\n> >\n> >1a: You know that there will never, ever, ever, ever, be any other\n> >application that wants to talk to the database.\n> >\n> >I know tons of people that get burned because they go with something\n> >that's \"good enough for now\", and then regret that decision for years to\n> >come.\n> \n> No, I don't agree with this. Too many people waste time designing for \n> \"what if...\" scenarios that never happen. You don't want to be dumb and \n> design something that locks out a foreseeable and likely future need, but \n> referential integrity doesn't meet this criterion. There's nothing to keep \n> you from changing from app-managed to database-managed referential \n> integrity if your needs change.\n\nIn this case your argument makes no sense, because you will spend far\nmore time re-creating RI capability inside an application than if you\njust use what the database offers natively.\n\nIt's certainly true that you don't want to over-engineer for no reason,\nbut many times choices are made to save a very small amount of time or\nhassle up-front, and those choices become extremely painful later.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 12 Apr 2006 15:59:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": "Jim C. Nasby wrote:\n>>No, I don't agree with this. Too many people waste time designing for \n>>\"what if...\" scenarios that never happen. You don't want to be dumb and \n>>design something that locks out a foreseeable and likely future need, but \n>>referential integrity doesn't meet this criterion. There's nothing to keep \n>>you from changing from app-managed to database-managed referential \n>>integrity if your needs change.\n> \n> In this case your argument makes no sense, because you will spend far\n> more time re-creating RI capability inside an application than if you\n> just use what the database offers natively.\n\nBut one of the specific conditions in my original response was, \"You have application-specific knowledge about when you can skip referential integrity and thereby greatly improve performance.\" If you can't do that, I agree with you.\n\nAnyway, this discussion is probably going on too long, and I'm partly to blame. I think we all agree that in almost all situations, using the database to do referential integrity is the right choice, and that you should only violate this rule if you have a really, really good reason, and you've thought out the implications carefully, and you know you may have to pay a steep price later if your requirements change.\n\nCraig\n",
"msg_date": "Wed, 12 Apr 2006 18:32:39 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
},
{
"msg_contents": " Thanks for all responses! I agree with most of you, and say that the RI is\nbest maintened by Database ! Performance must be improved in other ways\n(indexes, hardware, etc)!\n\n\n----- Original Message ----- \nFrom: \"Jim C. Nasby\" <[email protected]>\nTo: \"Craig A. James\" <[email protected]>\nCc: \"PFC\" <[email protected]>; \"Michael Glaesemann\" <[email protected]>;\n\"Rodrigo Sakai\" <[email protected]>;\n<[email protected]>\nSent: Wednesday, April 12, 2006 5:59 PM\nSubject: Re: [PERFORM] FOREIGN KEYS vs PERFORMANCE\n\n\n> On Wed, Apr 12, 2006 at 10:36:28AM -0700, Craig A. James wrote:\n> > Jim C. Nasby wrote:\n> > >>1. You have only one application that modifies the data. (Otherwise,\nyou\n> > >>have to duplicate the rules across many applications, leading to a\n> > >>code-maintenance nightmare).\n> > >\n> > >You forgot something:\n> > >\n> > >1a: You know that there will never, ever, ever, ever, be any other\n> > >application that wants to talk to the database.\n> > >\n> > >I know tons of people that get burned because they go with something\n> > >that's \"good enough for now\", and then regret that decision for years\nto\n> > >come.\n> >\n> > No, I don't agree with this. Too many people waste time designing for\n> > \"what if...\" scenarios that never happen. You don't want to be dumb and\n> > design something that locks out a foreseeable and likely future need,\nbut\n> > referential integrity doesn't meet this criterion. There's nothing to\nkeep\n> > you from changing from app-managed to database-managed referential\n> > integrity if your needs change.\n>\n> In this case your argument makes no sense, because you will spend far\n> more time re-creating RI capability inside an application than if you\n> just use what the database offers natively.\n>\n> It's certainly true that you don't want to over-engineer for no reason,\n> but many times choices are made to save a very small amount of time or\n> hassle up-front, and those choices become extremely painful later.\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Tue, 18 Apr 2006 14:19:29 -0300",
"msg_from": "\"Rodrigo Sakai\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FOREIGN KEYS vs PERFORMANCE"
}
] |
[
{
"msg_contents": "Mark, \n\n>If you can upgrade to 8.1.(3), then the planner can consider paths that\n\n>use *both* the indexes on srcobj and dstobj (which would probably be\nthe \n>business!).\n\nYes, 8.1.3 resolved this issue. Thanks.\n\nHowever I am still getting seq scans on indexes for other queries\n\nFor example:\n\nselect * from omfile where ( objectid in ( select distinct(ref_oid) from\nts ) ); \nobjectid & ref_oid are non-unique indexes \nomimagefile & omclipfile inherit from omfile\n\n------------------------------------------------------------------------\n--------\n\n Nested Loop IN Join (cost=21432.32..951981.42 rows=204910 width=217)\n Join Filter: (\"outer\".objectid = \"inner\".ref_oid)\n -> Append (cost=0.00..8454.10 rows=204910 width=217)\n -> Seq Scan on omfile (cost=0.00..8428.20 rows=204320\nwidth=217)\n -> Seq Scan on omimagefile omfile (cost=0.00..12.70 rows=270\nwidth=217)\n -> Seq Scan on omclipfile omfile (cost=0.00..13.20 rows=320\nwidth=217)\n -> Materialize (cost=21432.32..21434.32 rows=200 width=16)\n -> Unique (cost=20614.91..21430.12 rows=200 width=16)\n -> Sort (cost=20614.91..21022.52 rows=163041 width=16)\n Sort Key: ts.ref_oid\n -> Seq Scan on ts (cost=0.00..3739.41 rows=163041\nwidth=16)\n\n(11 rows) \nTime: 164.232 ms \n\nBTW set enable_seqscan=off has no affect i.e still uses seq scans.\n\nIf I do a simple query, it is very quick, no sequencial scans. \nSo how can I get index scans to work consistently with joins?\n\nexplain select * from omfile where\nobjectid='65ef0be3-bf02-46b6-bae9-5bd015ffdb79'; \n\n------------------------------------------------------------------------\n--------\n\n Result (cost=2.00..7723.30 rows=102903 width=217)\n -> Append (cost=2.00..7723.30 rows=102903 width=217)\n -> Bitmap Heap Scan on omfile (cost=2.00..7697.60 rows=102608\nwidth=217)\n Recheck Cond: (objectid =\n'65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n -> Bitmap Index Scan on omfile_objectid_idx\n(cost=0.00..2.00 rows=102608 width=0)\n Index Cond: (objectid =\n'65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n -> Bitmap Heap Scan on omimagefile omfile (cost=1.00..12.69\nrows=135 width=217)\n Recheck Cond: (objectid =\n'65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n -> Bitmap Index Scan on omimagefile_objectid_idx\n(cost=0.00..1.00 rows=135 width=0)\n Index Cond: (objectid =\n'65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n -> Bitmap Heap Scan on omclipfile omfile (cost=1.00..13.00\nrows=160 width=217)\n Recheck Cond: (objectid =\n'65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n -> Bitmap Index Scan on omclipfile_objectid_idx\n(cost=0.00..1.00 rows=160 width=0)\n Index Cond: (objectid =\n'65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n\n(14 rows) \nTime: 5.164\n\n\n\n-----Original Message-----\nFrom: Mark Kirkwood [mailto:[email protected]] \nSent: Tuesday, March 07, 2006 12:04 AM\nTo: Harry Hehl\nCc: [email protected]\nSubject: Re: [PERFORM] Sequencial scan instead of using index\n\nHarry Hehl wrote:\n> There seems to be many posts on this issue but I not yet found an\nanswer to the seq scan issue.\n> \n> I am having an issue with a joins. 
I am using 8.0.3 on FC4\n> \n> Query: select * from ommemberrelation where srcobj='somevalue' and \n> dstobj in (select objectid from omfilesysentry where \n> name='dir15_file80');\n> \n> Columns srcobj, dstobj & name are all indexed.\n> \n> \n\nThe planner is over-estimating the number of rows here (33989 vs 100):\n\n-> Seq Scan on ommemberrelation (cost=0.00..2394.72 rows=33989\nwidth=177) (actual time=0.078..70.887 rows=100 loops=1)\n\nThe usual way to attack this is to up the sample size for ANALYZE:\n\nALTER TABLE ommemberrelation ALTER COLUMN srcobj SET STATISTICS 100;\nALTER TABLE ommemberrelation ALTER COLUMN dstobj SET STATISTICS 100;\n-- or even 1000.\nANALYZE ommemberrelation;\n\nThen try EXPLAIN ANALYZE again.\n\n\nIf you can upgrade to 8.1.(3), then the planner can consider paths that \nuse *both* the indexes on srcobj and dstobj (which would probably be the\n\nbusiness!).\n\nCheers\n\nMark\n",
"msg_date": "Tue, 11 Apr 2006 17:56:38 -0400",
"msg_from": "\"Harry Hehl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequencial scan instead of using index"
},
{
"msg_contents": "Quoting Harry Hehl <[email protected]>:\n\n> Mark, \n> \n> (snippage)However I am still getting seq scans on indexes for other queries\n> \n> For example:\n> \n> select * from omfile where ( objectid in ( select distinct(ref_oid)\n> from\n> ts ) ); \n> objectid & ref_oid are non-unique indexes \n> omimagefile & omclipfile inherit from omfile\n> \n> --------------------------------------------------------------\n> ----------\n> --------\n> \n> Nested Loop IN Join (cost=21432.32..951981.42 rows=204910 width=217)\n> Join Filter: (\"outer\".objectid = \"inner\".ref_oid)\n> -> Append (cost=0.00..8454.10 rows=204910 width=217)\n> -> Seq Scan on omfile (cost=0.00..8428.20 rows=204320\n> width=217)\n> -> Seq Scan on omimagefile omfile (cost=0.00..12.70 rows=270\n> width=217)\n> -> Seq Scan on omclipfile omfile (cost=0.00..13.20 rows=320\n> width=217)\n> -> Materialize (cost=21432.32..21434.32 rows=200 width=16)\n> -> Unique (cost=20614.91..21430.12 rows=200 width=16)\n> -> Sort (cost=20614.91..21022.52 rows=163041 width=16)\n> Sort Key: ts.ref_oid\n> -> Seq Scan on ts (cost=0.00..3739.41 rows=163041\n> width=16)\n> \n> (11 rows) \n> Time: 164.232 ms \n> \n> BTW set enable_seqscan=off has no affect i.e still uses seq scans.\n> \n> If I do a simple query, it is very quick, no sequencial scans. \n> So how can I get index scans to work consistently with joins?\n> \n> explain select * from omfile where\n> objectid='65ef0be3-bf02-46b6-bae9-5bd015ffdb79'; \n> \n> --------------------------------------------------------------------\n> ----\n> --------\n> \n> Result (cost=2.00..7723.30 rows=102903 width=217)\n> -> Append (cost=2.00..7723.30 rows=102903 width=217)\n> -> Bitmap Heap Scan on omfile (cost=2.00..7697.60 rows=102608\n> width=217)\n> Recheck Cond: (objectid =\n> '65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n> -> Bitmap Index Scan on omfile_objectid_idx\n> (cost=0.00..2.00 rows=102608 width=0)\n> Index Cond: (objectid =\n> '65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n> -> Bitmap Heap Scan on omimagefile omfile (cost=1.00..12.69\n> rows=135 width=217)\n> Recheck Cond: (objectid =\n> '65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n> -> Bitmap Index Scan on omimagefile_objectid_idx\n> (cost=0.00..1.00 rows=135 width=0)\n> Index Cond: (objectid =\n> '65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n> -> Bitmap Heap Scan on omclipfile omfile (cost=1.00..13.00\n> rows=160 width=217)\n> Recheck Cond: (objectid =\n> '65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n> -> Bitmap Index Scan on omclipfile_objectid_idx\n> (cost=0.00..1.00 rows=160 width=0)\n> Index Cond: (objectid =\n> '65ef0be3-bf02-46b6-bae9-5bd015ffdb79'::capsa_sys.uuid)\n> \n> (14 rows) \n> Time: 5.164\n> \n>\n\nHmm - that first query needs to do a sort, so you might want to experiment with\nthe sort_mem parameter. Could you show us output from explain analyze for both\nthe above queries?\n\nAt face value, selecting 200000 rows (assuming the estimates are accurate) may\nmean that a seqscan is the best plan! But we'll know more after seeing the\nexplain analyze...\n\nCheers\n\n\nMark\n",
"msg_date": "Wed, 12 Apr 2006 10:17:54 +1200 (NZST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan instead of using index"
},
{
"msg_contents": "Quoting \"[email protected]\" <[email protected]>:\n\n\n> Hmm - that first query needs to do a sort, so you might want to\n> experiment with\n> the sort_mem parameter\n\nOops - I mean work_mem...\n",
"msg_date": "Wed, 12 Apr 2006 10:20:55 +1200 (NZST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan instead of using index"
},
{
"msg_contents": "\"Harry Hehl\" <[email protected]> writes:\n> Nested Loop IN Join (cost=21432.32..951981.42 rows=204910 width=217)\n> Join Filter: (\"outer\".objectid = \"inner\".ref_oid)\n> -> Append (cost=0.00..8454.10 rows=204910 width=217)\n> -> Seq Scan on omfile (cost=0.00..8428.20 rows=204320\n> width=217)\n> -> Seq Scan on omimagefile omfile (cost=0.00..12.70 rows=270\n> width=217)\n> -> Seq Scan on omclipfile omfile (cost=0.00..13.20 rows=320\n> width=217)\n> -> Materialize (cost=21432.32..21434.32 rows=200 width=16)\n> -> Unique (cost=20614.91..21430.12 rows=200 width=16)\n> -> Sort (cost=20614.91..21022.52 rows=163041 width=16)\n> Sort Key: ts.ref_oid\n> -> Seq Scan on ts (cost=0.00..3739.41 rows=163041\n> width=16)\n\n> (11 rows) \n> Time: 164.232 ms \n\n> So how can I get index scans to work consistently with joins?\n\nIt's not the join that's the problem, it's the inheritance. I recently\nimproved the planner so that it can consider appended indexscans for an\ninheritance tree on the inside of a join, but no pre-8.2 release can do\nit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Apr 2006 18:49:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan instead of using index "
}
] |
[
{
"msg_contents": "Thanks Mark, \n\n>Hmm - that first query needs to do a sort, so you might want to\nexperiment with the sort_mem parameter. Could you show us output from\nexplain analyze for >both the above queries?\n\nNot too concerned about the sort, more about the query performance with\nseq scan as the tables size increases.\n\n>At face value, selecting 200000 rows (assuming the estimates are\naccurate) may mean that a seqscan is the best plan! But we'll know more\nafter seeing the >explain analyze...\n\n200000 rows is about right. \n\nI saw Tom's response on the planner improvement in 8.2 but I was still\ngoing to send the explain analyze output.\nHowever I can't show you explain analyze. The postmaster goes to 99% cpu\nand stays there. The explain analyze command hangs...\n\nIt is starting to look like inheritance does help in modeling the data,\nbut for searches parallel flat tables that don't use inheritance is\nrequired to get optimum query performance. \n\nHas anyone else come to this conclusion?\n\nThanks\n\n\n\n",
"msg_date": "Wed, 12 Apr 2006 09:04:28 -0400",
"msg_from": "\"Harry Hehl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequencial scan instead of using index"
}
] |
[
{
"msg_contents": "Adding -performance back in\n\n-----Original Message-----\nFrom: Oscar Picasso [mailto:[email protected]]\nSent: Wednesday, April 12, 2006 5:51 PM\nTo: Jim Nasby\nSubject: Re: [PERFORM] Better index stategy for many fields with few values\n\n\nI would like to try it.\n\nHowever in an other post I added that contrary to what I stated initially all the paramXX columns are not mandatory in the query. So it seems that requirement make the problem more complexe.\n\nDoesn't this new requirement rule out this solution? \n\nNo, just group the columns logically.\n\n By the way I have test to index each column individually and check what happens in relation to bitscan map. My test table is 1 million rows. The explain analyze command shows that a bit scan is sometimes used but I still end up with queries that can take up to 10s which is way to much.\n\n\n\"Jim C. Nasby\" <[email protected]> wrote:\n\nOn Wed, Apr 12, 2006 at 02:59:32PM +0200, Markus Schaber wrote:\n> > I was thinking about using a multicolumns index, but I have read that\n> > we should limit multicolumns indice to at most 2 or 3 columns.\n> \n> Yes, that's true, the index overhead gets too high.\n> \n> > I was also thinking about about using a functional index.\n> \n> If there's a logical relation between those values that they can easily\n> combined, that may be a good alternative.\n\nHow would that be any better than just doing a multi-column index?\n\n> I just had another weird idea:\n> \n> As your paramXX values can have only 10 parameters, it also might be\n> feasible to use a bunch of 10 conditional indices, like:\n> \n> CREATE INDEX foo1 ON table (param1, param2 WHERE param0='1st value';\n> CREATE INDEX foo2 ON table (param1, param2 WHERE param0='2nd value';\n> CREATE INDEX foo3 ON table (param1, param2 WHERE param0='3rd value';\n> [...]\n\nNot all that weird; it's known as index partitioning.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n\n\n\n _____ \n\nYahoo! <http://us.rd.yahoo.com/mail_us/taglines/postman3/*http://us.rd.yahoo.com/evt=39666/*http://beta.messenger.yahoo.com> Messenger with Voice. PC-to-Phone calls for ridiculously low rates.\n\n\n\n\n\n\nAdding \n-performance back in\n\n-----Original Message-----From: Oscar Picasso \n [mailto:[email protected]]Sent: Wednesday, April 12, 2006 5:51 \n PMTo: Jim NasbySubject: Re: [PERFORM] Better index \n stategy for many fields with few values\nI would like to try it.However in an other post I added that \n contrary to what I stated initially all the paramXX columns are not mandatory \n in the query. So it seems that requirement make the problem more \n complexe.Doesn't this new requirement rule out this solution? \nNo, just \ngroup the columns logically.\n\n By the way I have test to \n index each column individually and check what happens in relation to bitscan \n map. My test table is 1 million rows. The explain analyze command \n shows that a bit scan is sometimes used but I still end up with queries that \n can take up to 10s which is way to much.\"Jim C. 
Nasby\" \n <[email protected]> wrote:\nOn \n Wed, Apr 12, 2006 at 02:59:32PM +0200, Markus Schaber wrote:> > I \n was thinking about using a multicolumns index, but I have read that> \n > we should limit multicolumns indice to at most 2 or 3 columns.> \n > Yes, that's true, the index overhead gets too high.> \n > > I was also thinking about about using a functional \n index.> > If there's a logical relation between those values \n that they can easily> combined, that may be a good \n alternative.How would that be any better than just doing a \n multi-column index?> I just had another weird idea:> \n > As your paramXX values can have only 10 parameters, it also might \n be> feasible to use a bunch of 10 conditional indices, like:> \n > CREATE INDEX foo1 ON table (param1, param2 WHERE param0='1st \n value';> CREATE INDEX foo2 ON table (param1, param2 WHERE param0='2nd \n value';> CREATE INDEX foo3 ON table (param1, param2 WHERE param0='3rd \n value';> [...]Not all that weird; it's known as index \n partitioning.-- Jim C. Nasby, Sr. Engineering Consultant \n [email protected] Software http://pervasive.com work: \n 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf cell: \n 512-569-9461---------------------------(end of \n broadcast)---------------------------TIP 4: Have you searched our list \n archives?http://archives.postgresql.org\n\n\nYahoo! \n Messenger with Voice. PC-to-Phone calls for ridiculously low \nrates.",
"msg_date": "Wed, 12 Apr 2006 18:03:45 -0500",
"msg_from": "\"Jim Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better index stategy for many fields with few values"
},
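The conditional-index syntax quoted above is missing a closing parenthesis as written. A runnable sketch of the same partial-index technique, with hypothetical table and column names since the original poster's schema isn't shown here:

-- One partial index per value of the leading parameter (names are placeholders)
CREATE INDEX paramtab_p1 ON paramtab (param1, param2) WHERE param0 = '1st value';
CREATE INDEX paramtab_p2 ON paramtab (param1, param2) WHERE param0 = '2nd value';
CREATE INDEX paramtab_p3 ON paramtab (param1, param2) WHERE param0 = '3rd value';

-- A query that pins param0 to one of those values can use the matching index
EXPLAIN ANALYZE
SELECT * FROM paramtab
 WHERE param0 = '1st value' AND param1 = 'a' AND param2 = 'b';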
{
"msg_contents": "Hi, Jim,\n\nJim Nasby wrote:\n> Adding -performance back in\n> I would like to try it.\n> \n> However in an other post I added that contrary to what I stated\n> initially all the paramXX columns are not mandatory in the query. So\n> it seems that requirement make the problem more complexe.\n\nOkay, this rules out my functional index over 19 columns.\n\n> Doesn't this new requirement rule out this solution? \n> \n> No, just group the columns logically.\n\nYes, that's the solution.\n\nIf you have common groups of columns that appear and disappear\nsynchroneously, pack those together in an (possibly partitioned and/or\nfunctional) index.\n\nThen rely on the query planner that the combines the appropriate indices\nvia index bitmap scan.\n\n> By the way I have test to index each column individually and check\n> what happens in relation to bitscan map. My test table is 1\n> million rows. The explain analyze command shows that a bit scan is\n> sometimes used but I still end up with queries that can take up to\n> 10s which is way to much.\n\nIs it on the first query, or on repeated queries?\n\nIt might be that you're I/O bound, and the backend has to fetch indices\nand rows from Disk into RAM.\n\nI currently don't know whether the order of indices in a multi-index\nbitmap scan is relevant, but I could imagine that it may be useful to\nhave the most selective index scanned first.\n\nAnd keep in mind that, assuming an equal distribution of your\nparameters, every index bitmap hits 1/10th of the whole table on\naverage, so the selectivity generally is low.\n\nThe selectivity of a partitioned 3-column index will be much better\n(about 1/10000th of the whole table), and less index scans and bitmaps\nhave to be generated.\n\nA functional index may also make sense to CLUSTER the table to optimize\nthe locality of search results (and so reducing disk I/O). In case your\ntable has low write activity, but high read-only activity, the overhead\nthat comes with the additional index is neglible compared to the\nperformance improvement proper CLUSTERing can generate.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 13 Apr 2006 11:51:22 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better index stategy for many fields with few values"
}
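A sketch of the grouped-index and CLUSTER suggestion above, again with placeholder names; CLUSTER rewrites the table in index order under an exclusive lock, so it fits the low-write, high-read case Markus describes. The CLUSTER syntax shown is the form used by the 8.1-era releases discussed in this thread:

-- Group parameters that usually appear together into one index
CREATE INDEX paramtab_grp ON paramtab (param0, param1, param2);

-- Physically reorder the table by that index to improve locality of matching rows
CLUSTER paramtab_grp ON paramtab;
ANALYZE paramtab;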
] |
[
{
"msg_contents": "Hi\n\nWhen I update a table that has 20 columns and the where clause includes\n16 of the columns (this is a data warehousing type update on aggregate\nfields),\n\nThe bitmap scan is not used by the optimizer. The table is indexed on 3\nof the 20 fields. The update takes really long to finish (on a 6 million\nrow table)\n\n \n\nDo I need to do some \"magic\" with configuration to turn on bitmap scans.\n\n\n\n\n\n\n\n\n\n\nHi\nWhen I update a table that has 20 columns and the where\nclause includes 16 of the columns (this is a data warehousing type update on\naggregate fields),\nThe bitmap scan is not used by the optimizer. The table is\nindexed on 3 of the 20 fields. The update takes really long to finish (on a 6\nmillion row table)\n \nDo I need to do some “magic” with configuration\nto turn on bitmap scans.",
"msg_date": "Wed, 12 Apr 2006 17:32:32 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "multi column query"
},
{
"msg_contents": "On Wed, Apr 12, 2006 at 05:32:32PM -0700, Sriram Dandapani wrote:\n> Hi\n> \n> When I update a table that has 20 columns and the where clause includes\n> 16 of the columns (this is a data warehousing type update on aggregate\n> fields),\n> \n> The bitmap scan is not used by the optimizer. The table is indexed on 3\n> of the 20 fields. The update takes really long to finish (on a 6 million\n> row table)\n> \n> Do I need to do some \"magic\" with configuration to turn on bitmap scans.\n\nNo. What's explain analyze of the query show? What's it doing now?\nSeqscan? You might try set enable_seqscan=off and see what that does.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 12 Apr 2006 19:43:41 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: multi column query"
}
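The enable_seqscan test Jim suggests, sketched with placeholder names since the poster's table isn't shown. EXPLAIN ANALYZE really executes the UPDATE, so it is wrapped in a transaction and rolled back, and SET LOCAL keeps the setting from leaking past it:

BEGIN;
SET LOCAL enable_seqscan = off;
EXPLAIN ANALYZE
UPDATE agg_table
   SET total = total + 1
 WHERE col01 = 1 AND col02 = 2;  -- remaining WHERE columns omitted
ROLLBACK;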
] |
[
{
"msg_contents": "Doing my first write heavy database.\nWhat settings will help improve inserts?\nOnly a handfull of connections, but each doing up to 30 inserts/second.\nPlan to have 2 to 3 clients which most of the time will not run at the \nsame time, but ocasionaly it's possible two of them may bump into each \nother.\n\n\nIf anyone recalls a previous thread like this please suggest keywords to \nsearch on. My search on this topic came back pretty empty.\n",
"msg_date": "Wed, 12 Apr 2006 20:38:58 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inserts optimization?"
},
{
"msg_contents": "Francisco Reyes wrote:\n> Doing my first write heavy database.\n> What settings will help improve inserts?\n> Only a handfull of connections, but each doing up to 30 inserts/second.\n> Plan to have 2 to 3 clients which most of the time will not run at the \n> same time, but ocasionaly it's possible two of them may bump into each \n> other.\n\nIf you can, use copy instead:\n\nhttp://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n\nMUCH quicker (and don't worry about using multiple clients).\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Thu, 13 Apr 2006 11:19:17 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
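For reference, the COPY approach looks like this; a minimal sketch with a hypothetical table and file path, usable when the client can write its rows to a file or stream them itself:

-- Server-side COPY: the file must be readable by the PostgreSQL server process
COPY backup_files (job_id, path, size) FROM '/tmp/batch.tsv';

-- Client-side alternative from psql, streaming over the connection:
-- \copy backup_files (job_id, path, size) from 'batch.tsv'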
{
"msg_contents": "Chris <[email protected]> writes:\n> Francisco Reyes wrote:\n>> Doing my first write heavy database.\n>> What settings will help improve inserts?\n>> Only a handfull of connections, but each doing up to 30 inserts/second.\n\n> If you can, use copy instead:\n> http://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n\nOr at least try to do multiple inserts per transaction.\n\nAlso, increasing checkpoint_segments and possibly wal_buffers helps a\nlot for write-intensive loads. Try to get the WAL onto a separate disk\nspindle if you can. (These things don't matter for SELECTs, but they\ndo matter for writes.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 00:42:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization? "
},
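Tom's first point in sketch form: the saving comes from paying one commit, and therefore one WAL flush, for a whole batch of rows rather than one per row. The table and values here are placeholders:

BEGIN;
INSERT INTO backup_files (job_id, path) VALUES (1, '/etc/hosts');
INSERT INTO backup_files (job_id, path) VALUES (1, '/etc/passwd');
-- ...many more single-row inserts...
COMMIT;  -- one fsync covers the whole batch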
{
"msg_contents": "Chris writes:\n\n> If you can, use copy instead:\n> http://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n\nI am familiar with copy.\nCan't use it in this scenario.\n\nThe data is coming from a program called Bacula (Backup server).\nIt is not static data.\n",
"msg_date": "Thu, 13 Apr 2006 14:45:39 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Tom Lane writes:\n\n> Or at least try to do multiple inserts per transaction.\n\nWill see if the program has an option like that.\n\n \n> Also, increasing checkpoint_segments and possibly wal_buffers helps a\n\nWill try those.\n\n>Try to get the WAL onto a separate disk\n>spindle if you can. (These things don't matter for SELECTs, but they\n>do matter for writes.)\n\nThis particular server is pretty much what I inherited for now for this \nproject.and its Raid 5. There is a new server I am setting up \nsoon... 8 disks which we are planning to setup\n6 disks in RAID 10\n2 Hot spares\n\nIn RAID 10 would it matter that WALL is in the same RAID set?\nWould it be better:\n4 disks in RAID10 Data\n2 disks RAID 1 WALL\n2 hot spares\n\nAll in the same RAID controller\n",
"msg_date": "Thu, 13 Apr 2006 14:59:23 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:\n> In RAID 10 would it matter that WALL is in the same RAID set?\n> Would it be better:\n> 4 disks in RAID10 Data\n> 2 disks RAID 1 WALL\n> 2 hot spares\n\nWell, benchmark it with your app and find out, but generally speaking\nunless your database is mostly read you'll see a pretty big benefit to\nseperating WAL from table/index data.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 14 Apr 2006 00:59:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Jim C. Nasby writes:\n\n> On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:\n>> In RAID 10 would it matter that WALL is in the same RAID set?\n>> Would it be better:\n>> 4 disks in RAID10 Data\n>> 2 disks RAID 1 WALL\n>> 2 hot spares\n> \n> Well, benchmark it with your app and find out, but generally speaking\n> unless your database is mostly read you'll see a pretty big benefit to\n> seperating WAL from table/index data.\n\nThat will not be easy to compare.. it would mean setting up the machine.. \ntrashing it.. then redoing the whole setup..\n\nI am leaning towards using pgbench against the current machine to see what \nparameters affect inserts.. perhaps also doing dome tests with just inserts \nfrom a file. Then using the same setups on the next machine and just go with \nRAID10 on the 6 disks. Split the raid into 10 may give me space issues to \ndeal with.\n\nWill also find out if the app, Bacula, batches transactions or not.\n",
"msg_date": "Fri, 14 Apr 2006 07:30:25 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
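A sketch of the kind of pgbench comparison being described; pgbench's built-in TPC-B-like script mixes updates and inserts, so it gives at least a rough read on write-oriented settings. The database name, scale and client counts below are arbitrary:

$ createdb benchdb
$ pgbench -i -s 10 benchdb       # populate the standard test tables
$ pgbench -c 4 -t 1000 benchdb   # 4 clients, 1000 transactions each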
{
"msg_contents": ">On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:\n>>In RAID 10 would it matter that WALL is in the same RAID set?\n>>Would it be better:\n>>4 disks in RAID10 Data\n>>2 disks RAID 1 WALL\n>>2 hot spares\n\nI guess the first question is why 2 hot spares? You don't have many \nspindles, so you don't want to waste them. It might turn out that a \nlarger array with more spindles with outperform a smaller one with \nfewer, regardless of RAID level (assuming a decent battery-backed \ncache). You might try \n5 RAID5\n2 RAID1\n1 spare\n\nMike Stone\n",
"msg_date": "Fri, 14 Apr 2006 07:49:14 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "I hope I'm not going to say stupid things, but here's what i know (or i think \ni know :) ) about bacula + postgresql\n\nIf I remember correctly (I allready discussed this with Kern Sibbald a while \nago), bacula does each insert in its own transaction : that's how the program \nis done, and of course it works ok with mysql and MyIsam tables, as mysql \ndoesn't have transactions with myisam...\n\nSo, you'll probably end up being slowed down by WAL fsyncs ... and you won't \nhave a lot of solutions. Maybe you should start with trying to set fsync=no \nas a test to confirm that (you should have a lot of iowaits right now if you \nhaven't disabled fsync).\n\nFor now, I only could get good performance with bacula and postgresql when \ndisabling fsync...\n\n\nOn Thursday 13 April 2006 20:45, Francisco Reyes wrote:\n> Chris writes:\n> > If you can, use copy instead:\n> > http://www.postgresql.org/docs/8.1/interactive/sql-copy.html\n>\n> I am familiar with copy.\n> Can't use it in this scenario.\n>\n> The data is coming from a program called Bacula (Backup server).\n> It is not static data.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Fri, 14 Apr 2006 14:00:38 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Marc Cousin writes:\n\n> If I remember correctly (I allready discussed this with Kern Sibbald a while \n> ago), bacula does each insert in its own transaction : that's how the program \n> is done\n\nThanks for the info.\n\n> For now, I only could get good performance with bacula and postgresql when \n> disabling fsync...\n\n\nIsn't that less safe?\nI think I am going to try increasing wal_buffers\nSpecially was reading http://www.powerpostgresql.com. Towards the middle it \nmentions improvements when increasing wal_buffers. So far performance, \nagainst a single client was fair.\n\nHow did you test your bacula setup with postgresql for time?\nDoing full backups with lots of files? \n\n\nAlso planning to check commit_delay and see if that helps.\nI will try to avoid 2 or more machines backing up at the same time.. plus in \na couple of weeks I should have a better machine for the DB anyways..\n\n\nThe description of commit_delay sure sounds very promissing:\n\n---\ncommit_delay (integer)\n\nTime delay between writing a commit record to the WAL buffer and flushing \nthe buffer out to disk, in microseconds. A nonzero delay can allow multiple \ntransactions to be committed with only one fsync() system call, if system \nload is high enough that additional transactions become ready to commit \nwithin the given interval. But the delay is just wasted if no other \ntransactions become ready to commit. Therefore, the delay is only performed \nif at least commit_siblings other transactions are active at the instant \nthat a server process has written its commit record. The default is zero (no \ndelay). \n---\n\nI only wonder what is safer.. using a second or two in commit_delay or using \nfsync = off.. Anyone cares to comment?\n\nI plan to re-read carefully the WALL docs on the site.. so I can better \ndecide.. however any field expierences would be much welcome.\n\nMarc are you on the Bacula list? I plan to join it later today and will \nbring up the PostgreSQL issue there.\n",
"msg_date": "Fri, 14 Apr 2006 12:10:12 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Francisco Reyes <[email protected]> writes:\n> I think I am going to try increasing wal_buffers\n\nThat will help not at all, if the problem is too-short transactions\nas it sounds to be. You really need to pester the authors of bacula\nto try to wrap multiple inserts per transaction. Or maybe find some\nother software that can do that, if they are uninterested in supporting\nanything but mysql.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Apr 2006 12:30:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization? "
},
{
"msg_contents": "Tom Lane writes:\n\n> That will help not at all, if the problem is too-short transactions\n> as it sounds to be.\n\n\nHow about commit_delay?\n\n> You really need to pester the authors of bacula\n> to try to wrap multiple inserts per transaction.\n\nLike any volunteer project I am sure it's more an issue of resources than an \nissue of interest.\n\n",
"msg_date": "Fri, 14 Apr 2006 13:50:30 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Michael Stone writes:\n\n> I guess the first question is why 2 hot spares?\n\n\nBecause we are using RAID 10\n\n> larger array with more spindles with outperform a smaller one with \n> fewer, regardless of RAID level (assuming a decent battery-backed \n> cache).\n\nBased on what I have read RAID 10 is supposed to be better with lots of \nrandom access.\n\n\n> 5 RAID5\n> 2 RAID1\n> 1 spare\n\nThat is certainly something worth considering... Still I wonder if 2 more \nspindles will help enough to justify going to RAID 5. My understanding is \nthat RAID10 has simpler computations requirements which is partly what makes \nit better for lots of random read/write. \n",
"msg_date": "Fri, 14 Apr 2006 14:01:56 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "On Fri, Apr 14, 2006 at 02:01:56PM -0400, Francisco Reyes wrote:\n>Michael Stone writes:\n>>I guess the first question is why 2 hot spares?\n>\n>Because we are using RAID 10\n\nI still don't follow that. Why would the RAID level matter? IOW, are you \nactually wanting 2 spares, or are you just stick with that because you \nneed a factor of two disks for your mirrors?\n\n>>larger array with more spindles with outperform a smaller one with \n>>fewer, regardless of RAID level (assuming a decent battery-backed \n>>cache).\n>\n>Based on what I have read RAID 10 is supposed to be better with lots of \n>random access.\n\nMmm, it's a bit more complicated than that. RAID 10 can be better if you \nhave lots of random writes (though a large RAID cache can mitigate \nthat). For small random reads the limiting factor is how fast you can \nseek, and that number is based more on the number of disks than the RAID \nlevel. \n\n>>5 RAID5\n>>2 RAID1\n>>1 spare\n>\n>That is certainly something worth considering... Still I wonder if 2 more \n>spindles will help enough to justify going to RAID 5. My understanding is \n>that RAID10 has simpler computations requirements which is partly what \n>makes it better for lots of random read/write. \n\nIf your RAID hardware notices a difference between the parity \ncalculations for RAID 5 and the mirroring of RAID 1 it's a fairly lousy \nunit for 2006--those calculations are really trivial for modern \nhardware. The reason that RAID 10 can give better random small block \nwrite performance is that fewer disks need to be involved per write. \nThat's something that can be mitigated with a large cache to aggregate \nthe writes, but some controllers are much better than others in that \nregard. This is really a case where you have to test with your \nparticular hardware & data, because the data access patterns are \ncritical in determining what kind of performance is required.\n\nMike Stone\n",
"msg_date": "Fri, 14 Apr 2006 15:06:25 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Michael Stone writes:\n\n> I still don't follow that. Why would the RAID level matter? IOW, are you \n> actually wanting 2 spares, or are you just stick with that because you \n> need a factor of two disks for your mirrors?\n\nRAID 10 needs pairs.. so we can either have no spares or 2 spares.\n\n> Mmm, it's a bit more complicated than that. RAID 10 can be better if you \n> have lots of random writes (though a large RAID cache can mitigate \n> that).\n\nWe are using a 3ware 9550SX with 128MB RAM (at least I believe that is what \nthat card has installed).\n\n>For small random reads the limiting factor is how \n>fast you can seek, and that number is based more on the number of disks than the RAID \n> level.\n\nI don't have any solid stats, but I would guess the machines will fairly \nclose split between reads and writes.\n\n\n> hardware. The reason that RAID 10 can give better random small block \n> write performance is that fewer disks need to be involved per write. \n\n\nThat makes sense.\n\n> That's something that can be mitigated with a large cache\n\n128MB enough in your opinion?\n\n\n> the writes, but some controllers are much better than others in that \n> regard.\n\nThe controller we are using is 3Ware 9550SX.\n\n> This is really a case where you have to test with your \n> particular hardware & data\n\n\nThat is obviously the ideal way to go, but it is very time consuming. :-(\nTo setup a machine with one set of raid setup.. test, then re-do with \ndifferent set of raid.. re test.. that's anywhere from 1 to 2 days worth of \ntesting. Unlikely I will be given that time to test. \n",
"msg_date": "Fri, 14 Apr 2006 16:09:18 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "On Fri, 2006-04-14 at 15:09, Francisco Reyes wrote:\n> Michael Stone writes:\n> \n> > I still don't follow that. Why would the RAID level matter? IOW, are you \n> > actually wanting 2 spares, or are you just stick with that because you \n> > need a factor of two disks for your mirrors?\n> \n> RAID 10 needs pairs.. so we can either have no spares or 2 spares.\n\nSpares are placed in service one at a time. You don't need 2 spares for\nRAID 10, trust me.\n\n",
"msg_date": "Fri, 14 Apr 2006 15:15:33 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Scott Marlowe writes:\n\n> Spares are placed in service one at a time.\n\nAh.. that's your point. I know that. :-)\n\n> You don't need 2 spares for\n> RAID 10, trust me.\n\nWe bought the machine with 8 drives. At one point we were considering RAID \n5, then we decided to give RAID 10 a try. We have a simmilar machine with \nraid 5 and less memory (2GB, vs 4GB on the new machine) and the old machine \nwas having some very serious issues handling the load.\n\nSo far the RAID10 machine (twice the memory, newer disks, faster CPUs,.. \nthe previous machine was about 1 year old) has been performing very well.\n\nWe are now looking to put our 3rd NFS machine into production with \nidentical specs as the machine we currently use with RAID10. This 3rd \nmachine with do NFS work (mail store) and also will be our database server \n(until we can afford to buy a 4th.. dedicated DB machine).\n\nThe whole reason I started the thread was because most PostgreSQL setups I \nhave done in the past were mostly read.. whereas now the Bacula backups plus \nanother app we will be developing.. will be doing considerable writes to the \nDB.\n",
"msg_date": "Fri, 14 Apr 2006 16:27:59 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Francisco Reyes wrote:\n\n> That is certainly something worth considering... Still I wonder if 2 \n> more spindles will help enough to justify going to RAID 5. My \n> understanding is that RAID10 has simpler computations requirements which \n> is partly what makes it better for lots of random read/write. \n\nyou are right. raid5 is definitely not suitable for database activities.\nit is good for file servers, though.\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n",
"msg_date": "Sat, 15 Apr 2006 10:32:56 +0200",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Francisco Reyes wrote:\n> Michael Stone writes:\n> \n>> I still don't follow that. Why would the RAID level matter? IOW, are \n>> you actually wanting 2 spares, or are you just stick with that because \n>> you need a factor of two disks for your mirrors?\n> \n> RAID 10 needs pairs.. so we can either have no spares or 2 spares.\n\nhm, interesting. I have recently set up a HP machine with smartarray 6i \ncontroller, and it is able to handle 4 disks in raid10 plus 1 as spare.\nwith some scripting you can even use linux software raid in the same setup.\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n",
"msg_date": "Sat, 15 Apr 2006 10:35:41 +0200",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "G�briel �kos writes:\n\n>> RAID 10 needs pairs.. so we can either have no spares or 2 spares.\n> \n> hm, interesting. I have recently set up a HP machine with smartarray 6i \n> controller, and it is able to handle 4 disks in raid10 plus 1 as spare.\n\n:-)\nOk so let me be a bit more clear...\n\nWe have 6 disks in RAID10.\nWe can have the following with the remaining 2 disks\nRAID1\n1 Spare, 1 for storage\n2 spares\n\nHaving at least 1 spare is a must.. which means that we can use the \nremaining disk as spare or as data... using it to store ANY data means the \ndata will not be protected by the RAID. I can not think of almost any data \nwe would care so little that we would put it in the single disk.\n\nFor a second I thought logs.. but even that is not good, because the \nprograms that write the logs may fail.. so even if we don't care about \nloosing the logs, in theory it may cause problems to the operation of the \nsystem.\n\nThat's how we ended up with 2 hot spares. \n",
"msg_date": "Sat, 15 Apr 2006 09:16:15 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "G�briel �kos writes:\n\n> you are right. raid5 is definitely not suitable for database activities.\n\nThat is not entirely true. :-)\nRight now the new server is not ready and the ONLY place I could put the DB \nfor Bacula was a machine with RAID 5. So far it is holding fine. HOWEVER... \nonly one bacula job at a time so far and the machine doesn't do anything \nelse. :-)\n\n\n> it is good for file servers, though.\n\nEven for a DB server, from what I have read and what I have experienced so \nfar for mostly read DBs RAID5 should be ok. Specially if you have enough \nmemory.\n\nAs I add more clients to Bacula and the jobs bump into each other I will \nbetter know how the DB holds up. I am doing each FULL backup at a time so \nhopefully by the time I do multiple backups from multiple machines less \nrecords will need to be inserted to the DB and there will be a more balanced \noperation between reads and writes. Right now there are lots of inserts \ngoing on.\n",
"msg_date": "Sat, 15 Apr 2006 09:26:46 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Tom Lane writes:\n\n> Also, increasing checkpoint_segments and possibly wal_buffers helps a\n> lot for write-intensive loads.\n\nFollowing up on those two recomendations from Tom.\nTom mentioned in a different message that if the inserst are small that \nincreasing wal_buffers would not help.\n\nHow about checkpoint_segments?\nAlso commit_delays seems like may be helpfull. Is it too much a risk to set \ncommit_delays to 1 second? If the machine has any problems while having a \nhigher commit_delay will the end result simply be that a larger amount of \ntransactions will be rolled back?\n\np.s. did not CC Tom because he uses an RBL which is rather \"selective\" about \nwhat it lets through (one of the worst in my opinion). \n",
"msg_date": "Sat, 15 Apr 2006 09:34:45 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Hi, Francisco,\n\nFrancisco Reyes wrote:\n\n> I only wonder what is safer.. using a second or two in commit_delay or\n> using fsync = off.. Anyone cares to comment?\n\nIt might be that you misunderstood commit_delay. It will not only delay\nthe disk write, but also block your connnection until the write actually\nis performed.\n\nIt will rise the throughput in multi-client scenarios, but will also\nrise the latency, and it will absolutely bring no speedup in\nsingle-client scenarios.\n\nIt does not decrease safety (in opposite to fsync=off), data will be\nconsistent, and any application that has successfully finished a commit\ncan be shure their data is on the platters.[1]\n\nHTH,\nMarkus\n\n[1] As long as the platters don't lie, but that's another subject.\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Tue, 18 Apr 2006 11:02:34 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
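For reference, both settings live in postgresql.conf; the values below are purely illustrative. Note also that commit_delay is capped at 100000 microseconds (0.1 s) in the releases of this era, so the "second or two" asked about earlier is not actually settable, and as Markus explains the delay only trades a little latency for grouped fsyncs rather than giving up durability:

# postgresql.conf (illustrative values, not tuned recommendations)
commit_delay = 10000     # wait up to 10 ms for other commits to share an fsync
commit_siblings = 5      # only wait if at least 5 other transactions are active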
{
"msg_contents": "On Fri, Apr 14, 2006 at 03:15:33PM -0500, Scott Marlowe wrote:\n> On Fri, 2006-04-14 at 15:09, Francisco Reyes wrote:\n> > Michael Stone writes:\n> > \n> > > I still don't follow that. Why would the RAID level matter? IOW, are you \n> > > actually wanting 2 spares, or are you just stick with that because you \n> > > need a factor of two disks for your mirrors?\n> > \n> > RAID 10 needs pairs.. so we can either have no spares or 2 spares.\n> \n> Spares are placed in service one at a time. You don't need 2 spares for\n> RAID 10, trust me.\n\nSadly, 3ware doesn't produce any controllers with the ability to do an\nodd number of channels, so you end up burning through 2 slots to get a\nhot spare (unless you spend substantially more money and go with the\nnext model up).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 16:03:56 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "\nOn Apr 13, 2006, at 2:59 PM, Francisco Reyes wrote:\n\n> This particular server is pretty much what I inherited for now for \n> this project.and its Raid 5. There is a new server I am setting up \n> soon... 8 disks which we are planning to setup\n> 6 disks in RAID 10\n> 2 Hot spares\n>\n> In RAID 10 would it matter that WALL is in the same RAID set?\n> Would it be better:\n> 4 disks in RAID10 Data\n> 2 disks RAID 1 WALL\n> 2 hot spares\n\nwhy do you need two hot spares?\n\nI'd go with 6 disk RAID10 for data\n2 disk RAID1 for WAL (and OS if you don't have other disks from which \nto boot)\n\nand run nothing else but Postgres on that box.\n\nbump up checkpoint_segments to some huge number like 256 and use the \nbg writer process.\n\nif a disk fails, just replace it quickly with a cold spare.\n\nand if your RAID controller has two channels, pair the mirrors across \nchannels.\n\n",
"msg_date": "Thu, 20 Apr 2006 15:43:00 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "\nOn Apr 14, 2006, at 8:00 AM, Marc Cousin wrote:\n\n> So, you'll probably end up being slowed down by WAL fsyncs ... and \n> you won't\n> have a lot of solutions. Maybe you should start with trying to set \n> fsync=no\n> as a test to confirm that (you should have a lot of iowaits right \n> now if you\n> haven't disabled fsync).\n\nInstead of doing that, why not use commit_delay to some nominal value \nto try and group the fsyncs. If they're coming in at 30 per second, \nthis should help a bit, I suspect.\n\n",
"msg_date": "Thu, 20 Apr 2006 15:45:14 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
}
] |
[
{
"msg_contents": "Greetings,\n\nI have 395M pg_dump from a PostgreSQL 7.4.2 database.\nThis dump is from one of our customer's servers. There is\na web-based administration UI which has been reported to\nbe extremely slow and unusable.\n\nTo see what's going on with their data I have grabbed a\ncopy of their nightly pg_dump output and attempting to\nrestore it on my development box, running PostgreSQL\n7.4.12.\n\nMy dev box is much slower hardware than the customer's\nserver. Even with that difference I expected to be able to\npg_restore the database within one day. But no. After\nleaving pg_restore running for about 2 days, I ctrl-C'ed\nout of it (see copy/paste below along with other info).\n\nI must say, that data was being restored, as I could do\nselect count(*) on tables which had their data restored and\nI would get valid counts back.\n\nThe database contains 34 tables. The pg_restore seems to\nrestore the first 13 tables pretty quickly, but they do not have\nmany records. The largest amongst them with ~ 17,000 rows.\n\nThen restore gets stuck on a table with 2,175,050 rows.\nFollowing this table another table exists with 2,160,616\nrows.\n\nOne thing worth mentioning is that the PostgreSQL package\nthat got deployed lacked compression, as in:\n\n$ pg_dump -Fc dbname > dbname.DUMP\npg_dump: [archiver] WARNING: requested compression not available in\nthis installation -- archive will be uncompressed\n\n\nAny suggestions as to what may be the problem here?\nI doubt that the minor version mis-match is what's causing\nthis problem. (I am try this test on another machine with the\nsame version of PostgreSQL installed on it, and right now,\nit is stuck on the first of the two huge tables, and it has\nalready been going for more than 2 hrs).\n\nI'm open to any ideas and/or suggestions (within reason) :)\n\nBest regards,\n--patrick\n\n\nme@devbox:/tmp$ date\nMon Apr 10 15:13:19 PDT 2006\nme@devbox:/tmp$ pg_restore -ad dbname customer_db.DUMP ; date\n^C\nme@devbox:/tmp$ date\nWed Apr 12 10:40:19 PDT 2006\n\nme@devbox:/tmp$ uname -a\nLinux devbox 2.4.31 #6 Sun Jun 5 19:04:47 PDT 2005 i686 unknown\nunknown GNU/Linux\nme@devbox:/tmp$ cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 8\nmodel name : Pentium III (Coppermine)\nstepping : 6\ncpu MHz : 731.477\ncache size : 256 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 mmx fxsr sse\nbogomips : 1461.45\n\nme@devbox:/tmp/$ cat /proc/meminfo\n total: used: free: shared: buffers: cached:\nMem: 527499264 523030528 4468736 0 10301440 384454656\nSwap: 1579204608 552960 1578651648\nMemTotal: 515136 kB\nMemFree: 4364 kB\nMemShared: 0 kB\nBuffers: 10060 kB\nCached: 374984 kB\nSwapCached: 460 kB\nActive: 79004 kB\nInactive: 306560 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 515136 kB\nLowFree: 4364 kB\nSwapTotal: 1542192 kB\nSwapFree: 1541652 kB\n\n\npostgresql.conf changes on devbox:\ncheckpoint_segments = 10\nlog_pid = true\nlog_timestamp = true\n\nThe checkpoint_segments was changed to 10 after\nseeing many \"HINT\"s in PostgreSQL log file about it.\nDoesn't seem to have affected pg_restore performance.\n",
"msg_date": "Wed, 12 Apr 2006 18:26:10 -0700",
"msg_from": "\"patrick keshishian\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg 7.4.x - pg_restore impossibly slow"
},
{
"msg_contents": "\"patrick keshishian\" <[email protected]> writes:\n> My dev box is much slower hardware than the customer's\n> server. Even with that difference I expected to be able to\n> pg_restore the database within one day. But no.\n\nSeems a bit odd. Can you narrow down more closely which step of the\nrestore is taking the time? (Try enabling log_statements.)\n\nOne thought is that kicking up work_mem and vacuum_mem is likely to\nhelp for some steps (esp. CREATE INDEX and foreign-key checking).\nAnd be sure you've done the usual tuning for write-intensive activity,\nsuch as bumping up checkpoint_segments. Turning off fsync wouldn't\nbe a bad idea either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 00:46:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 7.4.x - pg_restore impossibly slow "
},
{
"msg_contents": "Hi Tom, et.al.,\n\nSo I changed the following settings in postgresql.conf file and\nrestarted PostgreSQL and then proceeded with pg_restore:\n\n# new changes for this test-run\nlog_statement = true\nsort_mem = 10240 # default 1024\nvacuum_mem = 20480 # default 8192\n# from before\ncheckpoint_segments = 10\nlog_pid = true\nlog_timestamp = true\n\nWith these settings and running:\n\npg_restore -vaOd dbname dbname.DUMP\n\n\nThings seem to progress better. The first of the large\ntables got COPY'ed within 1 hr 40 min:\n\nstart: 2006-04-13 11:44:19\nfinish: 2006-04-13 13:25:36\n\nI ended up ctrl-C'ing out of the pg_restore as the second\nlarge table was taking over 3 hours and the last PostgreSQL\nlog entry was from over 2.5hrs ago, with message:\n\n2006-04-13 14:09:29 [3049] LOG: recycled transaction log file\n\"000000060000006B\"\n\nTime for something different. Before attempting the same\nprocedure with fsync off, I ran the following sequence of\ncommands:\n\n$ dropdb dbname\n$ createdb dbname\n$ pg_restore -vsOd dbname dbname.DUMP\n$ date > db.restore ; pg_restore -vcOd dbname \\\n dbname.DUMP ; date >> db.restore\n$ cat db.restore\nThu Apr 13 18:02:51 PDT 2006\nThu Apr 13 18:17:16 PDT 2006\n\nThat's just over 14 minutes!\n\nIdeas?\n\nIs this because the -c option drops all foreign keys and\nso the restore goes faster? Should this be the preferred,\nrecommended and documented method to run pg_restore?\nAny drawbacks to this method?\n\nThanks,\n--patrick\n\n\n\n\nOn 4/12/06, Tom Lane <[email protected]> wrote:\n> \"patrick keshishian\" <[email protected]> writes:\n> > My dev box is much slower hardware than the customer's\n> > server. Even with that difference I expected to be able to\n> > pg_restore the database within one day. But no.\n>\n> Seems a bit odd. Can you narrow down more closely which step of the\n> restore is taking the time? (Try enabling log_statements.)\n>\n> One thought is that kicking up work_mem and vacuum_mem is likely to\n> help for some steps (esp. CREATE INDEX and foreign-key checking).\n> And be sure you've done the usual tuning for write-intensive activity,\n> such as bumping up checkpoint_segments. Turning off fsync wouldn't\n> be a bad idea either.\n>\n> regards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 18:26:00 -0700",
"msg_from": "\"patrick keshishian\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 7.4.x - pg_restore impossibly slow"
},
{
"msg_contents": "\"patrick keshishian\" <[email protected]> writes:\n> With these settings and running:\n> pg_restore -vaOd dbname dbname.DUMP\n\nIf you had mentioned you were using random nondefault switches, we'd\nhave told you not to. -a in particular is a horrid idea performancewise\n--- a standard schema-plus-data restore goes way faster because it's\ndoing index builds and foreign key checks wholesale instead of\nincrementally.\n\n> Is this because the -c option drops all foreign keys and\n> so the restore goes faster? Should this be the preferred,\n> recommended and documented method to run pg_restore?\n\nIt is documented in recent versions of the documentation: see\nhttp://www.postgresql.org/docs/8.1/static/populate.html\nparticularly the last section.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 23:15:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 7.4.x - pg_restore impossibly slow "
},
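The restore pattern Tom is pointing at, as a sketch: let pg_restore create the schema and load the data in one pass, so indexes and foreign keys are built in bulk after the rows are in place, and avoid the row-at-a-time -a mode entirely:

$ createdb dbname
$ pg_restore -v -O -d dbname dbname.DUMP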
{
"msg_contents": "On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote:\n> $ dropdb dbname\n> $ createdb dbname\n> $ pg_restore -vsOd dbname dbname.DUMP\n\nThat step is pointless, because the next pg_restore will create the\nschema for you anyway.\n\n> $ date > db.restore ; pg_restore -vcOd dbname \\\n> dbname.DUMP ; date >> db.restore\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 14 Apr 2006 01:20:11 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 7.4.x - pg_restore impossibly slow"
},
{
"msg_contents": "On 4/13/06, Tom Lane <[email protected]> wrote:\n> \"patrick keshishian\" <[email protected]> writes:\n> > With these settings and running:\n> > pg_restore -vaOd dbname dbname.DUMP\n>\n> If you had mentioned you were using random nondefault switches, we'd\n\nRandom?\n\nWith all due respect, I did.\n\nI specified the PostgreSQL version of the pg_dump source\nserver. I specified the version of my dev PostgreSQL server.\n\nI provided specific information about which postgresql.conf\nentries I had changed and to what specific values they were\nchanged to.\n\nI pasted the _exact_ command used (including so called\n\"random nondefault switches\") to do the dump and the\nexact command used (again, with said \"random nondefault\nswitches\") to restore from the dump'ed data.\n\nI believe I tried my best to be as thorough as possible with\nmy post(s).\n\narchived at: http://archives.postgresql.org/pgsql-performance/2006-04/msg00287.php\n\n\n\n> have told you not to. -a in particular is a horrid idea performancewise\n> --- a standard schema-plus-data restore goes way faster because it's\n> doing index builds and foreign key checks wholesale instead of\n> incrementally.\n\nDuly noted. Option \"-a\" bad.\n\n\n> > Is this because the -c option drops all foreign keys and\n> > so the restore goes faster? Should this be the preferred,\n> > recommended and documented method to run pg_restore?\n>\n> It is documented in recent versions of the documentation: see\n> http://www.postgresql.org/docs/8.1/static/populate.html\n> particularly the last section.\n\nAs a general rule of thumb, I have always assumed,\ndocumentation for any software, from one major version\nto another, would not necessarily apply cross revisions\n(e.g., version 7.4 vs 8.1).\n\n\nBut thanks for your time and help,\n--patrick\n",
"msg_date": "Thu, 13 Apr 2006 23:59:51 -0700",
"msg_from": "\"patrick keshishian\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 7.4.x - pg_restore impossibly slow"
},
{
"msg_contents": "On 4/13/06, Jim C. Nasby <[email protected]> wrote:\n> On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote:\n> > $ dropdb dbname\n> > $ createdb dbname\n> > $ pg_restore -vsOd dbname dbname.DUMP\n>\n> That step is pointless, because the next pg_restore will create the\n> schema for you anyway.\n\nYes, I noticed this with the verbose output (random\nnon-standard option \"-v\").\n\nI was providing all information (read: exact steps taken)\nwhich may have been relevant to my post/question, so\nthat, I would avoid being guilty of omitting any possibly\nsignificant, yet random information.\n\nThanks for the insight,\n--patrick\n",
"msg_date": "Fri, 14 Apr 2006 00:10:57 -0700",
"msg_from": "\"patrick keshishian\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 7.4.x - pg_restore impossibly slow"
}
] |
[
{
"msg_contents": "Hello, postgresql 7.4.8 on SuSE Linux here.\n\nI have a table called DMO with a column called ORA_RIF defined as \n\"timestamp without time zone\" ;\n\nI created an index on this table based on this column only.\n\nIf I run a query against a text literal the index is used:\n\n > explain select * from dmo where ora_rif>'2006-01-01';\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Index Scan using dmo_ndx02 on dmo (cost=0.00..1183.23 rows=736 width=156)\n Index Cond: (ora_rif > '2006-01-01 00:00:00'::timestamp without time \nzone)\n\nIf I try to use a function that returns the current time instead, a \nsequential scan is always performed:\n\n > explain select * from dmo where ora_rif>localtimestamp;\n QUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on dmo (cost=0.00..1008253.22 rows=2703928 width=156)\n Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)\n\n > explain select * from dmo where ora_rif>localtimestamp::timestamp \nwithout time zone;\n QUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on dmo (cost=0.00..1008253.22 rows=2703928 width=156)\n Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)\n\n... etc. ...\n\n(tried with all datetime functions with and without cast)\n\nI even tried to write a function that explicitly returns a \"timestamp \nwithout time zone\" value:\n\ncreate or replace function f () returns timestamp without time zone\nas '\ndeclare\n x timestamp without time zone ;\nbegin\n x := ''2006-01-01 00:00:00'';\n return x ;\nend ;\n' language plpgsql ;\n\nBut the result is the same:\n\n > explain select * from dmo ora_rif>f();\n QUERY PLAN\n-----------------------------------------------------------------------------\n Seq Scan on dmo (cost=0.00..987973.76 rows=2703928 width=156)\n Filter: (ora_rif > f())\n\nAny suggestion?\n\nKind regards,\n\n-- \nCris Carampa (spamto:[email protected])\n\npotevo chiedere come si chiama il vostro cane\nil mio è un po' di tempo che si chiama Libero\n\n\n",
"msg_date": "Thu, 13 Apr 2006 12:25:02 +0200",
"msg_from": "Cris Carampa <[email protected]>",
"msg_from_op": true,
"msg_subject": "index is not used if I include a function that returns current time\n\tin my query"
},
{
"msg_contents": "Interesting.... what's EXPLAIN ANALYZE show if you SET\nenable_seqscan=off; ?\n\nYou should also consider upgrading to 8.1...\n\nOn Thu, Apr 13, 2006 at 12:25:02PM +0200, Cris Carampa wrote:\n> Hello, postgresql 7.4.8 on SuSE Linux here.\n> \n> I have a table called DMO with a column called ORA_RIF defined as \n> \"timestamp without time zone\" ;\n> \n> I created an index on this table based on this column only.\n> \n> If I run a query against a text literal the index is used:\n> \n> > explain select * from dmo where ora_rif>'2006-01-01';\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> Index Scan using dmo_ndx02 on dmo (cost=0.00..1183.23 rows=736 width=156)\n> Index Cond: (ora_rif > '2006-01-01 00:00:00'::timestamp without time \n> zone)\n> \n> If I try to use a function that returns the current time instead, a \n> sequential scan is always performed:\n> \n> > explain select * from dmo where ora_rif>localtimestamp;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Seq Scan on dmo (cost=0.00..1008253.22 rows=2703928 width=156)\n> Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)\n> \n> > explain select * from dmo where ora_rif>localtimestamp::timestamp \n> without time zone;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Seq Scan on dmo (cost=0.00..1008253.22 rows=2703928 width=156)\n> Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)\n> \n> ... etc. ...\n> \n> (tried with all datetime functions with and without cast)\n> \n> I even tried to write a function that explicitly returns a \"timestamp \n> without time zone\" value:\n> \n> create or replace function f () returns timestamp without time zone\n> as '\n> declare\n> x timestamp without time zone ;\n> begin\n> x := ''2006-01-01 00:00:00'';\n> return x ;\n> end ;\n> ' language plpgsql ;\n> \n> But the result is the same:\n> \n> > explain select * from dmo ora_rif>f();\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Seq Scan on dmo (cost=0.00..987973.76 rows=2703928 width=156)\n> Filter: (ora_rif > f())\n> \n> Any suggestion?\n> \n> Kind regards,\n> \n> -- \n> Cris Carampa (spamto:[email protected])\n> \n> potevo chiedere come si chiama il vostro cane\n> il mio ? un po' di tempo che si chiama Libero\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 14:56:01 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index is not used if I include a function that returns current\n\ttime in my query"
}
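The usual reading of this behaviour on 7.4 is not that the index cannot be used with a function, but that the planner does not know the function's value at plan time, so it falls back on a default selectivity guess of roughly a third of the table, which makes the sequential scan look cheaper than an index scan. One workaround that was common in that era, shown here as an assumption rather than something stated in the thread, is a wrapper function deliberately (and incorrectly) declared IMMUTABLE so the value gets folded to a constant during planning; it is only reasonable when statements are planned at execution time rather than cached and reused over long periods:

CREATE OR REPLACE FUNCTION immutable_now() RETURNS timestamp AS '
  SELECT localtimestamp;
' LANGUAGE sql IMMUTABLE;

EXPLAIN SELECT * FROM dmo WHERE ora_rif > immutable_now();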
] |
[
{
"msg_contents": "laterooms=# explain analyze select allocation0_.\"ID\" as y1_, \nallocation0_.\"RoomID\" as y2_, allocation0_.\"StatusID\" as y4_, \nallocation0_.\"Price\" as y3_, allocation0_.\"Number\" as y5_, \nallocation0_.\"Date\" as y6_ from \"Allocation\" allocation0_ where \n(allocation0_.\"Date\" between '2006-06-09 00:00:00.000000' and \n'2006-06-09 00:00:00.000000')and(allocation0_.\"RoomID\" in(4300591));\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_date on \"Allocation\" allocation0_ (cost=0.00..4.77 \nrows=1 width=34) (actual time=1411.325..1689.860 rows=1 loops=1)\n Index Cond: ((\"Date\" >= '2006-06-09'::date) AND (\"Date\" <= \n'2006-06-09'::date))\n Filter: (\"RoomID\" = 4300591)\n Total runtime: 1689.917 ms\n(4 rows)\n\nYep, the two dates are identical - yep I would change the client \nsoftware to do where \"Date\" = '2006-06-09 00:00:00.000000' if I could...\n\nHowever, it's clear to see why this simple query is taking so long - the \nplan is selecting /all/ dates after 2006-06-09 and /all/ dates before \nthen, and only returning the union of the two - a large waste of effort, \nsurely?\n\nVACUUM ANALYZE hasn't improved matters... the schema for the table is\n\n \"ID\" int8 NOT NULL DEFAULT \nnextval(('public.\"allocation_id_seq\"'::text)::regclass),\n \"RoomID\" int4,\n \"Price\" numeric(10,2),\n \"StatusID\" int4,\n \"Number\" int4,\n \"Date\" date,\n\nand there are indexes kept for 'RoomID' and 'Date' in this 4.3-million \nrow table.\n\nIs this a bug or a hidden feature in pg 8.1.3 ? :)\n\nCheers,\nGavin.\n\n",
"msg_date": "Thu, 13 Apr 2006 13:03:01 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query - possible bug?"
},
{
"msg_contents": "On 4/13/06, Gavin Hamill <[email protected]> wrote:\n> laterooms=# explain analyze select allocation0_.\"ID\" as y1_,\n> allocation0_.\"RoomID\" as y2_, allocation0_.\"StatusID\" as y4_,\n> allocation0_.\"Price\" as y3_, allocation0_.\"Number\" as y5_,\n> allocation0_.\"Date\" as y6_ from \"Allocation\" allocation0_ where\n> (allocation0_.\"Date\" between '2006-06-09 00:00:00.000000' and\n> '2006-06-09 00:00:00.000000')and(allocation0_.\"RoomID\" in(4300591));\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using ix_date on \"Allocation\" allocation0_ (cost=0.00..4.77\n> rows=1 width=34) (actual time=1411.325..1689.860 rows=1 loops=1)\n> Index Cond: ((\"Date\" >= '2006-06-09'::date) AND (\"Date\" <=\n> '2006-06-09'::date))\n> Filter: (\"RoomID\" = 4300591)\n> Total runtime: 1689.917 ms\n> (4 rows)\n\n1.6secs isn't too bad on 4.3mill rows...\n\nHow many entries are there for that date range?\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Thu, 13 Apr 2006 22:46:06 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - possible bug?"
},
{
"msg_contents": "chris smith wrote:\n\n>1.6secs isn't too bad on 4.3mill rows...\n>\n>How many entries are there for that date range?\n> \n>\n1.7 secs /is/ good - it typically takes 5 or 6 seconds, which isn't so \ngood. My question is 'why does the planner choose such a bizarre range \nrequest when both elements of the 'between' are identical? :)'\n\nIf I replace the\n(allocation0_.\"Date\" between '2006-06-09 00:00:00.000000' and \n'2006-06-09 00:00:00.000000')\n\nwith\n\nallocation0_.\"Date\" ='2006-04-09 00:00:00.000000'\n\nthen the query comes back in a few milliseconds (as I'd expect :) - and \nyup I've been using different dates for each test to avoid the query \nbeing cached.\n\nFor ref, there are typically 35000 rows per date :)\n\nCheers,\nGavin.\n\n",
"msg_date": "Thu, 13 Apr 2006 14:05:33 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query - possible bug?"
},
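For comparison, the equality form Gavin mentions lets the planner treat both restrictions as exact matches, which a composite index over ("RoomID", "Date"), such as the ix_dateroom index that appears in the plans later in this thread, can satisfy with a single probe. A sketch using the thread's own table:

EXPLAIN ANALYZE
SELECT "ID", "RoomID", "StatusID", "Price", "Number", "Date"
  FROM "Allocation"
 WHERE "RoomID" = 4300591
   AND "Date" = '2006-06-09';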
{
"msg_contents": "Gavin Hamill wrote:\n> chris smith wrote:\n> \n>> 1.6secs isn't too bad on 4.3mill rows...\n>>\n>> How many entries are there for that date range?\n>> \n>>\n> 1.7 secs /is/ good - it typically takes 5 or 6 seconds, which isn't so \n> good. My question is 'why does the planner choose such a bizarre range \n> request when both elements of the 'between' are identical? :)'\n\nWhat's bizarre about the range request, and are you sure it's searching \ndoing the union of both conditions separately? It looks to me like it's \ndoing a standard range-search. If it was trying to fetch 4.3 million \nrows via that index, I'd expect it to use a different index instead.\n\nIf you've got stats turned on, look in pg_stat_user_indexes/tables \nbefore and after the query to see. Here's an example of a similar query \nagainst one of my log tables. It's small, but the clause is the same, \nand I don't see any evidence of the whole table being selected.\n\nlamp=> SELECT * FROM pg_stat_user_indexes WHERE relname LIKE 'act%';\n relid | indexrelid | schemaname | relname | indexrelname | \nidx_scan | idx_tup_read | idx_tup_fetch\n---------+------------+------------+---------+----------------+----------+--------------+---------------\n 6124993 | 7519044 | public | act_log | act_log_ts_idx | \n23 | 18 | 18\n 6124993 | 7371115 | public | act_log | act_log_pkey | \n0 | 0 | 0\n(2 rows)\n\nlamp=> EXPLAIN ANALYSE SELECT * FROM act_log WHERE al_ts BETWEEN \n'2006-04-05 14:10:23+00'::timestamptz AND '2006-04-05 \n14:10:23+00'::timestamptz;\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using act_log_ts_idx on act_log (cost=0.00..3.02 rows=1 \nwidth=102) (actual time=0.116..0.131 rows=1 loops=1)\n Index Cond: ((al_ts >= '2006-04-05 15:10:23+01'::timestamp with time \nzone) AND (al_ts <= '2006-04-05 15:10:23+01'::timestamp with time zone))\n Total runtime: 0.443 ms\n(3 rows)\n\nlamp=> SELECT * FROM pg_stat_user_indexes WHERE relname LIKE 'act%';\n relid | indexrelid | schemaname | relname | indexrelname | \nidx_scan | idx_tup_read | idx_tup_fetch\n---------+------------+------------+---------+----------------+----------+--------------+---------------\n 6124993 | 7519044 | public | act_log | act_log_ts_idx | \n24 | 19 | 19\n 6124993 | 7371115 | public | act_log | act_log_pkey | \n0 | 0 | 0\n(2 rows)\n\n\n1. vacuum full verbose your table (and post the output please)\n2. perhaps reindex?\n3. Try the explain analyse again and see what happens.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 13 Apr 2006 14:59:36 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - possible bug?"
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> If I replace the\n> (allocation0_.\"Date\" between '2006-06-09 00:00:00.000000' and \n> '2006-06-09 00:00:00.000000')\n> with\n> allocation0_.\"Date\" ='2006-04-09 00:00:00.000000'\n> then the query comes back in a few milliseconds (as I'd expect :)\n\nCould we see EXPLAIN ANALYZE for\n* both forms of the date condition, with the roomid condition;\n* both forms of the date condition, WITHOUT the roomid condition;\n* just the roomid condition\n\nI'm thinking the planner is misestimating something, but it's hard\nto tell what without breaking it down.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 11:26:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - possible bug? "
},
{
"msg_contents": "Tom Lane wrote:\n\n>Gavin Hamill <[email protected]> writes:\n> \n>\n>>If I replace the\n>>(allocation0_.\"Date\" between '2006-06-09 00:00:00.000000' and \n>>'2006-06-09 00:00:00.000000')\n>>with\n>>allocation0_.\"Date\" ='2006-04-09 00:00:00.000000'\n>>then the query comes back in a few milliseconds (as I'd expect :)\n>> \n>>\n>\n>Could we see EXPLAIN ANALYZE for\n>* both forms of the date condition, with the roomid condition;\n>* both forms of the date condition, WITHOUT the roomid condition;\n>* just the roomid condition\n>\n>I'm thinking the planner is misestimating something, but it's hard\n>to tell what without breaking it down.\n> \n>\n\nOf course. In each case, I have changed the date by two weeks to try and \nminimise the effect of any query caching.\n\nThe base query is \"explain analyse select allocation0_.\"ID\" as y1_, \nallocation0_.\"RoomID\" as y2_, allocation0_.\"StatusID\" as y4_, \nallocation0_.\"Price\" as y3_, allocation0_.\"Number\" as y5_, \nallocation0_.\"Date\" as y6_ from \"Allocation\" allocation0_ where\"\n\nnow both forms of the Date condition\n\na)\n\n(allocation0_.\"Date\" between '2006-04-25 00:00:00.000000' and \n'2006-04-25 00:00:00.000000')and(allocation0_.\"RoomID\" in(211800));\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_date on \"Allocation\" allocation0_ (cost=0.00..4.77 \nrows=1 width=34) (actual time=3253.340..48040.396 rows=1 loops=1)\n Index Cond: ((\"Date\" >= '2006-04-25'::date) AND (\"Date\" <= \n'2006-04-25'::date))\n Filter: (\"RoomID\" = 211800)\n Total runtime: 48040.451 ms (ouch!)\n\n\nb)\n\n(allocation0_.\"Date\"= '2006-05-10 \n00:00:00.000000'::date)and(allocation0_.\"RoomID\" in(211800));\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_dateroom on \"Allocation\" allocation0_ \n(cost=0.00..5.01 rows=1 width=34) (actual time=0.033..0.035 rows=1 loops=1)\n Index Cond: ((\"RoomID\" = 211800) AND (\"Date\" = '2006-05-10'::date))\n Total runtime: 0.075 ms (whoosh!)\n\nAnd now without the RoomID condition:\n\na)\n(allocation0_.\"Date\" between '2006-06-10 00:00:00.000000' and \n'2006-06-10 00:00:00.000000');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_date on \"Allocation\" allocation0_ (cost=0.00..4.77 \nrows=1 width=34) (actual time=0.035..6706.467 rows=34220 loops=1)\n Index Cond: ((\"Date\" >= '2006-06-10'::date) AND (\"Date\" <= \n'2006-06-10'::date))\n Total runtime: 6728.743 ms\n\nb)\n(allocation0_.\"Date\"= '2006-05-25 00:00:00.000000'::date);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on \"Allocation\" allocation0_ (cost=87.46..25017.67 \nrows=13845 width=34) (actual time=207.674..9702.656 rows=34241 loops=1)\n Recheck Cond: (\"Date\" = '2006-05-25'::date)\n -> Bitmap Index Scan on ix_date (cost=0.00..87.46 rows=13845 \nwidth=0) (actual time=185.086..185.086 rows=42705 loops=1)\n Index Cond: (\"Date\" = '2006-05-25'::date)\n Total runtime: 9725.470 ms\n\n\nWow, I'm not really sure what that tells me...\n\nCheers,\nGavin.\n\n",
"msg_date": "Tue, 18 Apr 2006 09:07:40 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query - possible bug?"
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm thinking the planner is misestimating something, but it's hard\n>> to tell what without breaking it down.\n\n> (allocation0_.\"Date\" between '2006-06-10 00:00:00.000000' and \n> '2006-06-10 00:00:00.000000');\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using ix_date on \"Allocation\" allocation0_ (cost=0.00..4.77 \n> rows=1 width=34) (actual time=0.035..6706.467 rows=34220 loops=1)\n> Index Cond: ((\"Date\" >= '2006-06-10'::date) AND (\"Date\" <= \n> '2006-06-10'::date))\n> Total runtime: 6728.743 ms\n\nBingo, there's our misestimation: estimated 1 row, actual 34220 :-(\n\nThat's why it's choosing the wrong index: it thinks the condition on\nRoomID isn't going to reduce the number of rows fetched any further,\nand so the smaller index ought to be marginally cheaper to use.\nIn reality, it works way better when using the two-column index.\n\nI think this is the same problem recently discussed about how the\ndegenerate case for a range comparison is making an unreasonably small\nestimate, where it probably ought to fall back to some equality estimate\ninstead. With the simple-equality form of the date condition, it does\nget a reasonable estimate, and so it picks the right index.\n\nThere should be a fix for this by the time PG 8.2 comes out, but in the\nmeantime you might find that it helps to write the range check in a way\nthat doesn't have identical bounds, eg\n\tdate >= '2006-06-10'::date AND date < '2006-06-11'::date\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 13:31:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - possible bug? "
},
{
"msg_contents": "On Tue, 18 Apr 2006 13:31:48 -0400\nTom Lane <[email protected]> wrote:\n\n> There should be a fix for this by the time PG 8.2 comes out, but in\n> the meantime you might find that it helps to write the range check in\n> a way that doesn't have identical bounds, eg\n> \tdate >= '2006-06-10'::date AND date < '2006-06-11'::date\n\nOK coolies - we've already had a code release for this (and other\nstuff) planned for tomorrow morning checking on the client side\nif a single date has been chosen, then do an equality test on that...\notherwise leave the between in place - seems to work like a charm, and\nhopefully it'll mean we don't have a loadavg of 15 on our main pg\nserver tomorrow (!) :))\n\nBasically, as long as I know it's a pg issue rather than something daft\nI've done (or not done) then I'm happy enough. \n\nCheers,\nGavin.\n",
"msg_date": "Tue, 18 Apr 2006 20:08:39 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query - possible bug?"
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> On Tue, 18 Apr 2006 13:31:48 -0400\n> Tom Lane <[email protected]> wrote:\n>> There should be a fix for this by the time PG 8.2 comes out, but in\n>> the meantime you might find that it helps to write the range check in\n>> a way that doesn't have identical bounds, eg\n>> date >= '2006-06-10'::date AND date < '2006-06-11'::date\n\n> OK coolies - we've already had a code release for this (and other\n> stuff) planned for tomorrow morning checking on the client side\n> if a single date has been chosen, then do an equality test on that...\n\nFair enough, no reason to replace one workaround with another. But \nwould you try it on your test case, just to verify the diagnosis?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 15:51:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query - possible bug? "
},
{
"msg_contents": "On Tue, 18 Apr 2006 15:51:44 -0400\nTom Lane <[email protected]> wrote:\n\n> Fair enough, no reason to replace one workaround with another. But \n> would you try it on your test case, just to verify the diagnosis?\n\nYup I can confirm it from testing earlier today - as soon as\nthe two dates are non-equal, an index scan is correctly selected and\nreturns results in just a few milliseconds:\n\nlaterooms=# explain analyse select allocation0_.\"ID\" as y1_,\nallocation0_.\"RoomID\" as y2_, allocation0_.\"StatusID\" as y4_,\nallocation0_.\"Price\" as y3_, allocation0_.\"Number\" as y5_,\nallocation0_.\"Date\" as y6_ from \"Allocation\" allocation0_ where\n(allocation0_.\"Date\" between '2006-04-25 00:00:00.000000' and\n'2006-04-26 00:00:00.000000')and(allocation0_.\"RoomID\" in(211800));\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using ix_dateroom on \"Allocation\" allocation0_\n(cost=0.00..14.02 rows=4 width=34) (actual time=16.799..21.804 rows=2\nloops=1) Index Cond: ((\"RoomID\" = 211800) AND (\"Date\" >=\n'2006-04-25'::date) AND (\"Date\" <= '2006-04-26'::date)) \nTotal runtime: 21.910 ms\n\nwhich I ran first, versus the identical-date equivalent which turned\nin a whopping...\n\n Index Scan using ix_date on \"Allocation\" allocation0_\n(cost=0.00..4.77 rows=1 width=34) (actual time=6874.272..69541.064\nrows=1 loops=1) Index Cond: ((\"Date\" >= '2006-04-25'::date) AND (\"Date\"\n<= '2006-04-25'::date)) Filter: (\"RoomID\" = 211800) Total runtime:\n69541.113 ms (4 rows)\n\nCheers,\nGavin.\n",
"msg_date": "Tue, 18 Apr 2006 21:26:28 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query - possible bug?"
}
] |
[
{
"msg_contents": "Hi, Oscar,\n\nPlease reply to the list and not privately, so others can learn from\nyour replies, and possibly have better Ideas than me.\n\nOscar Picasso wrote:\n\n> I cannot group the columns logically. Any column may or may not appear\n> in a query.\n\nThat's suboptimal.\n\n> Summrarizing what I have learned:\n> - I cannot use multicolumn indexes because I cannot group the column\n> logically.\n> - I cannot use funtional indexes\n> - I cannot use clustering.\n\nYou still can have a set of partitioned multi-column indices,\noverlapping enough that every combination of columns is covered (or risk\na sequential sub scan for the last two or three columns, this should not\nhurt too much if the first 17 columns were selective enough).\n\nThe main problem with indices is that they also decrease write performance.\n\nIf disk costs are not limited, it will make sense to have WAL, table and\nindices on different disks / raid arrays, to parallelize writes.\n\nBtw, I guess you have multiple, concurrent users?\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 13 Apr 2006 16:00:59 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better index stategy for many fields with few values"
}
] |
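A rough sketch of the partitioned multi-column indices Markus describes, assuming a hypothetical table t with columns c1 .. c20 (the thread gives no real names); in practice each group would lead with the most selective, most frequently queried columns, and the planner can still combine the indexes with a bitmap AND when a query touches several groups:

    CREATE INDEX t_idx_a ON t (c1,  c2,  c3,  c4,  c5);
    CREATE INDEX t_idx_b ON t (c6,  c7,  c8,  c9,  c10);
    CREATE INDEX t_idx_c ON t (c11, c12, c13, c14, c15);
    CREATE INDEX t_idx_d ON t (c16, c17, c18, c19, c20);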
[
{
"msg_contents": "Hello, postgresql 7.4.8 on SuSE Linux here.\n\nI have a table called DMO with a column called ORA_RIF defined as\n\"timestamp without time zone\" ;\n\nI created an index on this table based on this column only.\n\nIf I run a query against a text literal the index is used:\n\n> explain select * from dmo where ora_rif>'2006-01-01';\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Index Scan using dmo_ndx02 on dmo (cost=0.00..1183.23 rows=736 width=156)\n Index Cond: (ora_rif > '2006-01-01 00:00:00'::timestamp without time\nzone)\n\nIf I try to use a function that returns the current time instead, a\nsequential scan is always performed:\n\n> explain select * from dmo where ora_rif>localtimestamp;\n QUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on dmo (cost=0.00..1008253.22 rows=2703928 width=156)\n Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)\n\n> explain select * from dmo where ora_rif>localtimestamp::timestamp \nwithout time zone;\n QUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on dmo (cost=0.00..1008253.22 rows=2703928 width=156)\n Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)\n\n... etc. ...\n\n(tried with all datetime functions with and without cast)\n\nI even tried to write a function that explicitly returns a \"timestamp\nwithout time zone\" value:\n\ncreate or replace function f () returns timestamp without time zone\nas '\ndeclare\n x timestamp without time zone ;\nbegin\n x := ''2006-01-01 00:00:00'';\n return x ;\nend ;\n' language plpgsql ;\n\nBut the result is the same:\n\n> explain select * from dmo ora_rif>f();\n QUERY PLAN\n-----------------------------------------------------------------------------\n Seq Scan on dmo (cost=0.00..987973.76 rows=2703928 width=156)\n Filter: (ora_rif > f())\n\nAny suggestion?\n\nKind regards,\n\n-- \nCristian Veronesi - C.R.P.A. S.p.A. - Reggio Emilia, Italy\n\nThe first thing you need to learn about databases is that\nthey are not just a fancy file system for storing data.\n\n\n\n",
"msg_date": "Thu, 13 Apr 2006 16:57:57 +0200",
"msg_from": "Cristian Veronesi <[email protected]>",
"msg_from_op": true,
"msg_subject": "index is not used if I include a function that returns current time\n\tin my query"
},
{
"msg_contents": "Cristian Veronesi <[email protected]> writes:\n> If I try to use a function that returns the current time instead, a\n> sequential scan is always performed:\n> ...\n> Any suggestion?\n\n1. Use something newer than 7.4 ;-)\n\n2. Set up a dummy range constraint, ie\n\n\tselect ... where ora_rif > localtimestamp and ora_rif < 'infinity';\n\nThe problem you have is that the planner doesn't know the value of the\nfunction and falls back to a default assumption about the selectivity of\nthe '>' condition --- and that default discourages indexscans. (Note\nthe very large estimate of number of rows returned.) In the\nrange-constraint situation, the planner still doesn't know the value of\nthe function, but its default assumption for a range constraint is\ntighter and it (probably) will choose an indexscan.\n\nSince PG 8.0, the planner understands that it's reasonable to\npre-evaluate certain functions like localtimestamp to obtain\nbetter-than-guess values about selectivity, so updating would\nbe a better fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 11:50:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index is not used if I include a function that returns current\n\ttime in my query"
}
] |
[
{
"msg_contents": "Hi Markus,\n\nMarkus Schaber <[email protected]> wrote:\n\n>Hi, Oscar,\n>\n>Please reply to the list and not privately, so others can learn from\n>your replies, and possibly have better Ideas than me.\n\nThat was my intention. I made a mistake.\n\n>Oscar Picasso wrote:\n>\n>> I cannot group the columns logically. Any column may or may not appear\n>> in a query.\n>\n>That's suboptimal.\n>\n>> Summrarizing what I have learned:\n>> - I cannot use multicolumn indexes because I cannot group the column\n>> logically.\n>> - I cannot use funtional indexes\n>> - I cannot use clustering.\n>\n>You still can have a set of partitioned multi-column indices,\n>overlapping enough that every combination of columns is covered (or risk\n>a sequential sub scan for the last two or three columns, this should not\n>hurt too much if the first 17 columns were selective enough).\n>\n>The main problem with indices is that they also decrease write performance.\n>\n>If disk costs are not limited, it will make sense to have WAL, table and\n>indices on different disks / raid arrays, to parallelize writes.\n>\n>Btw, I guess you have multiple, concurrent users?\n\nYes I do.\n\nI have just made other tests with only the individual indexes and performance is much better than previously. Obviously there was an I/O problem during my initial test.\n\nSomething interesting though. If I use few columns in the query the results come very quickly and pg does a sequential scan. \n\nWhen it reachs some threshold (4 or 5 columns) pg switches to bitmap scans. It then takes an almost constant time (~ 2500 ms) not matter how many more columns I add to the where clause.\n\nInterestingly enough, queries with many columns are less common. They also return less results and even many times no result at all. \n\n From the user point of view it would be nice to have a waiting time lower than 2500ms for these queries. Maybe I could achieve that goal simply by tuning postgresql. In a such case where should I look first in order to increase bitmap scanning? \n\nMaybe I could, that way, avoid the use of partitioned multi-column indexes.\n\nOscar\n\n\n\n\t\t\n---------------------------------\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1¢/min.\nHi Markus,Markus Schaber <[email protected]> wrote:>Hi, Oscar,>>Please reply to the list and not privately, so others can learn from>your replies, and possibly have better Ideas than me.That was my intention. I made a mistake.>Oscar Picasso wrote:>>> I cannot group the columns logically. Any column may or may not appear>> in a query.>>That's suboptimal.>>> Summrarizing what I have learned:>> - I cannot use multicolumn indexes because I cannot group the column>> logically.>> - I cannot use funtional indexes>> - I cannot use clustering.>>You still can have a set of partitioned multi-column indices,>overlapping enough that every combination of columns is covered (or risk>a sequential sub scan for the last two or three columns, this should not>hurt too much if the first 17 columns\n were selective enough).>>The main problem with indices is that they also decrease write performance.>>If disk costs are not limited, it will make sense to have WAL, table and>indices on different disks / raid arrays, to parallelize writes.>>Btw, I guess you have multiple, concurrent users?Yes I do.I have just made other tests with only the individual indexes and performance is much better than previously. 
Obviously there was an I/O problem during my initial test.Something interesting though. If I use few columns in the query the results come very quickly and pg does a sequential scan. When it reachs some threshold (4 or 5 columns) pg switches to bitmap scans. It then takes an almost constant time (~ 2500 ms) not matter how many more columns I add to the where clause.Interestingly enough, queries with many columns are less common. They also return less results and even many times no\n result at all. From the user point of view it would be nice to have a waiting time lower than 2500ms for these queries. Maybe I could achieve that goal simply by tuning postgresql. In a such case where should I look first in order to increase bitmap scanning? Maybe I could, that way, avoid the use of partitioned multi-column indexes.Oscar\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1¢/min.",
"msg_date": "Thu, 13 Apr 2006 08:40:06 -0700 (PDT)",
"msg_from": "Oscar Picasso <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better index stategy for many fields with few values"
}
] |
[
{
"msg_contents": "You need to run EXPLAIN ANALYZE. Also, what's random_page_cost set to? And the output of \\d chkpfw_tr_dy_dimension. The cost for that index scan looks way too high.\n\nAnd please reply-all so that the list is included.\n\n> -----Original Message-----\n> From: Sriram Dandapani [mailto:[email protected]]\n> Sent: Wednesday, April 12, 2006 7:48 PM\n> To: Jim Nasby\n> Subject: RE: [PERFORM] multi column query\n> \n> \n> I executed enable_seqscan=off and then ran an explain plan on \n> the query\n> \n> UPDATE chkpfw_tr_dy_dimension\n> \t\t\t SET summcount = a.summcount + b.summcount,\n> \t\t\t bytes = a.bytes + b.bytes,\n> \t\t\t duration = a.duration + b.duration\n> \t\t\t from chkpfw_tr_dy_dimension a,\n> c_chkpfw_dy_tr_updates b\n> \t\t\t WHERE a.firstoccurrence = b.firstoccurrence\n> \t\t\tAND a.customerid_id = b.customerid_id\n> \t\t\t AND a.sentryid_id = b.sentryid_id\n> \t\t\tAND a.node_id = b.node_id\n> \t\t\t AND a.interface_id = b.interface_id\n> \t\t\t AND a.source_id = b.source_id\n> \t\t\t AND a.destination_id = b.destination_id\n> \t\t\t AND a.sourceport_id = b.sourceport_id\n> \t\t\t AND a.destinationport_id = b.destinationport_id\n> \t\t\t AND a.inoutbound_id = b.inoutbound_id\n> \t\t\t AND a.action_id = b.action_id\n> \t\t\t AND a.protocol_id = b.protocol_id\n> \t\t\t AND a.service_id = b.service_id\n> \t\t\t AND a.sourcezone_id = b.sourcezone_id\n> \t\t\t AND a.destinationzone_id =\n> b.destinationzone_id;\n> \n> \n> \n> Here is the query plan\n> \n> \n> \"Nested Loop (cost=200000036.18..221851442.39 rows=1 width=166)\"\n> \" -> Merge Join (cost=100000036.18..121620543.75 rows=1 width=96)\"\n> \" Merge Cond: ((\"outer\".firstoccurrence =\n> \"inner\".firstoccurrence) AND (\"outer\".sentryid_id = \n> \"inner\".sentryid_id)\n> AND (\"outer\".node_id = \"inner\".node_id))\"\n> \" Join Filter: ((\"outer\".customerid_id = \"inner\".customerid_id)\n> AND (\"outer\".interface_id = \"inner\".interface_id) AND \n> (\"outer\".source_id\n> = \"inner\".source_id) AND (\"outer\".destination_id =\n> \"inner\".destination_id) AND (\"outer\".sourceport_id = \"inner\".s (..)\"\n> \" -> Index Scan using chkpfw_tr_dy_idx1 on\n> chkpfw_tr_dy_dimension a (cost=0.00..21573372.84 rows=6281981\n> width=88)\"\n> \" -> Sort (cost=100000036.18..100000037.38 rows=480 \n> width=136)\"\n> \" Sort Key: b.firstoccurrence, b.sentryid_id, b.node_id\"\n> \" -> Seq Scan on c_chkpfw_dy_tr_updates b\n> (cost=100000000.00..100000014.80 rows=480 width=136)\"\n> \" -> Seq Scan on chkpfw_tr_dy_dimension\n> (cost=100000000.00..100168078.81 rows=6281981 width=70)\"\n> \n> -----Original Message-----\n> From: Jim C. Nasby [mailto:[email protected]] \n> Sent: Wednesday, April 12, 2006 5:44 PM\n> To: Sriram Dandapani\n> Cc: [email protected]\n> Subject: Re: [PERFORM] multi column query\n> \n> On Wed, Apr 12, 2006 at 05:32:32PM -0700, Sriram Dandapani wrote:\n> > Hi\n> > \n> > When I update a table that has 20 columns and the where clause\n> includes\n> > 16 of the columns (this is a data warehousing type update \n> on aggregate\n> > fields),\n> > \n> > The bitmap scan is not used by the optimizer. The table is \n> indexed on\n> 3\n> > of the 20 fields. The update takes really long to finish (on a 6\n> million\n> > row table)\n> > \n> > Do I need to do some \"magic\" with configuration to turn on bitmap\n> scans.\n> \n> No. What's explain analyze of the query show? What's it doing now?\n> Seqscan? You might try set enable_seqscan=off and see what that does.\n> -- \n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> \n",
"msg_date": "Thu, 13 Apr 2006 11:42:25 -0500",
"msg_from": "\"Jim Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: multi column query"
}
] |
[
{
"msg_contents": "While working on determining a good stripe size for a database, I \nrealized it would be handy to know what the average request size is. \nGetting this info is a simple matter of joining pg_stat_all_tables \nand pg_statio_all_tables and doing some math, but there's one issue \nI've found; it appears that there's no information on how many heap \nblocks were read in by an index scan. Is there any way to get that info?\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Thu, 13 Apr 2006 13:00:07 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Blocks read for index scans"
},
{
"msg_contents": "Jim Nasby <[email protected]> writes:\n> While working on determining a good stripe size for a database, I \n> realized it would be handy to know what the average request size is. \n> Getting this info is a simple matter of joining pg_stat_all_tables \n> and pg_statio_all_tables and doing some math, but there's one issue \n> I've found; it appears that there's no information on how many heap \n> blocks were read in by an index scan. Is there any way to get that info?\n\nIf the table is otherwise idle, the change in the table's entry in\npgstatio_all_tables should do, no?\n\n(This is as of 8.1 ... older versions acted a bit differently IIRC.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Apr 2006 20:36:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans "
},
{
"msg_contents": "Jim Nasby wrote:\n> While working on determining a good stripe size for a database, I \n> realized it would be handy to know what the average request size is. \n> Getting this info is a simple matter of joining pg_stat_all_tables and \n> pg_statio_all_tables and doing some math, but there's one issue I've \n> found; it appears that there's no information on how many heap blocks \n> were read in by an index scan. Is there any way to get that info?\n\nRAID usually doesn't work the way most people think. ;)\n\nNot sure how well you know RAID, so I'm just mentioning some points just \nin case, and for the archives.\n\nIf your average request is for 16K, and you choose a 16K stripe size, \nthen that means half your request (assuming normal bell curve) would be \nlarger than a single stripe, and you've just succeeded in having half \nyour requests have to have two spindles seek instead of one. If that's \ndone sequentially, you're set for less than half the performance of a \nflat disk.\n\nKnowing what the average stripe size is can be a good place to start, \nbut the real question is; which stripe size will allow the majority of \nyour transactions to be possible to satisfy without having to go to two \nspindles?\n\nI've actually had good success with 2MB stripe sizes using software \nraid. If the reads are fairly well distributed, all the drives are hit \nequally, and very few small requests have to go to two spindles.\n\nRead speeds from modern drives are fast. It's usually the seeks that \nkill performance, so making sure you reduce the number of seeks should \nalmost always be the priority.\n\nThat said, it's the transactions against disk that typically matter. On \nFreeBSD, you can get an impression of this using 'systat -vmstat', and \nwatch the KB/t column for your drives.\n\nA seek will take some time, the head has to settle down, find the right \nplace to start reading etc, so a seek will always take time. A seek \nover a longer distance takes more time though, so even if your \ntransactions are pretty small, using a large stripe size can be a good \nthing if your have lots of small transactions that are close by. The \nhead will be in the area, reducing seek time.\n\nThis all depends on what types of load you have, and it's hard to \ngeneralize too much on what makes things fast. As always, it pretty \nmuch boils down to trying things while running as close to production \nload as you can.\n\nTerje\n\n\n",
"msg_date": "Fri, 14 Apr 2006 08:05:39 +0200",
"msg_from": "Terje Elde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
},
{
"msg_contents": "On Thu, Apr 13, 2006 at 08:36:09PM -0400, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n> > While working on determining a good stripe size for a database, I \n> > realized it would be handy to know what the average request size is. \n> > Getting this info is a simple matter of joining pg_stat_all_tables \n> > and pg_statio_all_tables and doing some math, but there's one issue \n> > I've found; it appears that there's no information on how many heap \n> > blocks were read in by an index scan. Is there any way to get that info?\n> \n> If the table is otherwise idle, the change in the table's entry in\n> pgstatio_all_tables should do, no?\n\nAhh, ok, I see the heap blocks are counted. So I guess if you wanted to\nknow what the average number of blocks read from the heap per request\nwas you'd have to do heap_blks_read / ( seq_scan + idx_scan ), with the\nlast two comming from pg_stat_all_tables.\n\nIn my case it would be helpful to break the heap access numbers out\nbetween seqscans and index scans, since each of those represents very\ndifferent access patterns. Would adding that be a mess?\n\n> (This is as of 8.1 ... older versions acted a bit differently IIRC.)\n\nYeah; I recall that it was pretty confusing exactly how things were\nbroken out and that you changed it as part of the bitmap scan work.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 14 Apr 2006 01:14:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
},
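A sketch of the arithmetic Jim describes, written as one query over the 8.1 statistics views (stats collection must be enabled; COALESCE guards tables that have no indexes):

    SELECT s.schemaname, s.relname,
           io.heap_blks_read,
           s.seq_scan + COALESCE(s.idx_scan, 0) AS total_scans,
           io.heap_blks_read::numeric
               / NULLIF(s.seq_scan + COALESCE(s.idx_scan, 0), 0) AS heap_blks_per_scan
      FROM pg_stat_all_tables s
      JOIN pg_statio_all_tables io USING (relid);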
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> In my case it would be helpful to break the heap access numbers out\n> between seqscans and index scans, since each of those represents very\n> different access patterns. Would adding that be a mess?\n\nYes; it'd require more counters-per-table than we now keep, thus\nnontrivial bloat in the stats collector's tables. Not to mention\nincompatible changes in the pgstats views and the underlying functions\n(which some apps probably use directly).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Apr 2006 11:12:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans "
},
{
"msg_contents": "On Fri, Apr 14, 2006 at 11:12:55AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > In my case it would be helpful to break the heap access numbers out\n> > between seqscans and index scans, since each of those represents very\n> > different access patterns. Would adding that be a mess?\n> \n> Yes; it'd require more counters-per-table than we now keep, thus\n> nontrivial bloat in the stats collector's tables. Not to mention\n\nISTM it would only require two additional columns, which doesn't seem\nunreasonable, especially considering the value of the information\ncollected.\n\n> incompatible changes in the pgstats views and the underlying functions\n> (which some apps probably use directly).\n\nThere's certainly ways around that issue, especially since this would\nonly be adding new information (though we would probably want to\nconsider the old info as depricated and eventually remove it).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 15:01:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
},
{
"msg_contents": "On Fri, Apr 14, 2006 at 08:05:39AM +0200, Terje Elde wrote:\n> Jim Nasby wrote:\n> >While working on determining a good stripe size for a database, I \n> >realized it would be handy to know what the average request size is. \n> >Getting this info is a simple matter of joining pg_stat_all_tables and \n> >pg_statio_all_tables and doing some math, but there's one issue I've \n> >found; it appears that there's no information on how many heap blocks \n> >were read in by an index scan. Is there any way to get that info?\n<snip> \n> Knowing what the average stripe size is can be a good place to start, \n> but the real question is; which stripe size will allow the majority of \n> your transactions to be possible to satisfy without having to go to two \n> spindles?\n\nAnd of course right now there's not a very good way to know that...\ngranted, I can look at the average request size on the machine, but that\nwill include any seqscans that are happening, and for stripe sizing I\nthink it's better to leave that out of the picture unless your workload\nis heavily based on seqscans.\n\n> That said, it's the transactions against disk that typically matter. On \n> FreeBSD, you can get an impression of this using 'systat -vmstat', and \n> watch the KB/t column for your drives.\n\nOn a related note, you know of any way to determine the breakdown\nbetween read activity and write activity on FreeBSD? vmstat, systat,\niostat all only return aggregate info. :(\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 15:06:41 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
},
{
"msg_contents": "Jim C. Nasby wrote:\n>> That said, it's the transactions against disk that typically matter. On \n>> FreeBSD, you can get an impression of this using 'systat -vmstat', and \n>> watch the KB/t column for your drives.\n>> \n>\n> On a related note, you know of any way to determine the breakdown\n> between read activity and write activity on FreeBSD? vmstat, systat,\n> iostat all only return aggregate info. :(\n> \n\n\nCan't think of a right way to do this ATM, but for a lab-type setup to \nget an idea, you could set up a gmirror volume, then choose a balancing \nalgorithm to only read from one of the disks. The effect should be that \nwrites go to both, while reads only go to one. Activity on the \nwrite-only disk would give you an idea of the write activity, and \n(read/write disk - write-only disk) would give you an idea of the \nreads. I have to admit though, seems like quite a bit of hassle, and \nI'm not sure how good the numbers would be, given that at least some of \nthe info (KB/transaction) are totals, it'd require a bit of math to get \ndecent numbers. But at least it's something.\n\nTerje\n\n\n",
"msg_date": "Wed, 19 Apr 2006 04:35:11 +0200",
"msg_from": "Terje Elde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
},
{
"msg_contents": "On Wed, Apr 19, 2006 at 04:35:11AM +0200, Terje Elde wrote:\n> Jim C. Nasby wrote:\n> >>That said, it's the transactions against disk that typically matter. On \n> >>FreeBSD, you can get an impression of this using 'systat -vmstat', and \n> >>watch the KB/t column for your drives.\n> >> \n> >\n> >On a related note, you know of any way to determine the breakdown\n> >between read activity and write activity on FreeBSD? vmstat, systat,\n> >iostat all only return aggregate info. :(\n> > \n> \n> \n> Can't think of a right way to do this ATM, but for a lab-type setup to \n> get an idea, you could set up a gmirror volume, then choose a balancing \n> algorithm to only read from one of the disks. The effect should be that \n> writes go to both, while reads only go to one. Activity on the \n> write-only disk would give you an idea of the write activity, and \n> (read/write disk - write-only disk) would give you an idea of the \n> reads. I have to admit though, seems like quite a bit of hassle, and \n> I'm not sure how good the numbers would be, given that at least some of \n> the info (KB/transaction) are totals, it'd require a bit of math to get \n> decent numbers. But at least it's something.\n\nYeah... not gonna happen...\n\nIt's completely mind-boggling that FBSD doesn't track writes and reads\nseperately.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 19 Apr 2006 00:13:47 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> \n> Yeah... not gonna happen...\n> \n> It's completely mind-boggling that FBSD doesn't track writes and reads\n> seperately.\n\n'iostat' does not tell you this, but 'gstat' does - its the \"geom\" \nsystem monitor (a bit annoying that the standard tool is lacking in this \nregard...).\n\nCheers\n\nMark\n",
"msg_date": "Wed, 19 Apr 2006 17:31:21 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocks read for index scans"
}
] |
[
{
"msg_contents": "Adding -performance back in...\n\n> From: Steve Poe [mailto:[email protected]]\n> Jim,\n> \n> I could be way off, but doesn't from pg_statio_user_tables \n> contain this\n> information?\n\nhttp://www.postgresql.org/docs/8.1/interactive/monitoring-stats.html#MONITORING-STATS-VIEWS states:\n\n\"numbers of disk blocks read and buffer hits in all indexes of that table\"\n\nThat leads me to believe that it's only tracking index blocks read, and not heap blocks read. One could presume that each index row read as reported by pg_stat_all_tables would represent a heap block read, but a large number of those would (hopefully) have already been in shared_buffers.\n\n> On Thu, 2006-04-13 at 13:00 -0500, Jim Nasby wrote:\n> > While working on determining a good stripe size for a database, I \n> > realized it would be handy to know what the average request \n> size is. \n> > Getting this info is a simple matter of joining pg_stat_all_tables \n> > and pg_statio_all_tables and doing some math, but there's \n> one issue \n> > I've found; it appears that there's no information on how \n> many heap \n> > blocks were read in by an index scan. Is there any way to \n> get that info?\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 13 Apr 2006 13:48:22 -0500",
"msg_from": "\"Jim Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocks read for index scans"
}
] |
[
{
"msg_contents": "Hi Jim\n\nThe problem is fixed. The destination table that was being updated had 3\nseparate indexes. I combined them to a multi-column index and the effect\nwas amazing.\nThanks for your input\n\nSriram\n\n-----Original Message-----\nFrom: Jim Nasby [mailto:[email protected]] \nSent: Thursday, April 13, 2006 9:42 AM\nTo: Sriram Dandapani\nCc: Pgsql-Performance (E-mail)\nSubject: RE: [PERFORM] multi column query\n\nYou need to run EXPLAIN ANALYZE. Also, what's random_page_cost set to?\nAnd the output of \\d chkpfw_tr_dy_dimension. The cost for that index\nscan looks way too high.\n\nAnd please reply-all so that the list is included.\n\n> -----Original Message-----\n> From: Sriram Dandapani [mailto:[email protected]]\n> Sent: Wednesday, April 12, 2006 7:48 PM\n> To: Jim Nasby\n> Subject: RE: [PERFORM] multi column query\n> \n> \n> I executed enable_seqscan=off and then ran an explain plan on \n> the query\n> \n> UPDATE chkpfw_tr_dy_dimension\n> \t\t\t SET summcount = a.summcount + b.summcount,\n> \t\t\t bytes = a.bytes + b.bytes,\n> \t\t\t duration = a.duration + b.duration\n> \t\t\t from chkpfw_tr_dy_dimension a,\n> c_chkpfw_dy_tr_updates b\n> \t\t\t WHERE a.firstoccurrence = b.firstoccurrence\n> \t\t\tAND a.customerid_id = b.customerid_id\n> \t\t\t AND a.sentryid_id = b.sentryid_id\n> \t\t\tAND a.node_id = b.node_id\n> \t\t\t AND a.interface_id = b.interface_id\n> \t\t\t AND a.source_id = b.source_id\n> \t\t\t AND a.destination_id = b.destination_id\n> \t\t\t AND a.sourceport_id = b.sourceport_id\n> \t\t\t AND a.destinationport_id = b.destinationport_id\n> \t\t\t AND a.inoutbound_id = b.inoutbound_id\n> \t\t\t AND a.action_id = b.action_id\n> \t\t\t AND a.protocol_id = b.protocol_id\n> \t\t\t AND a.service_id = b.service_id\n> \t\t\t AND a.sourcezone_id = b.sourcezone_id\n> \t\t\t AND a.destinationzone_id =\n> b.destinationzone_id;\n> \n> \n> \n> Here is the query plan\n> \n> \n> \"Nested Loop (cost=200000036.18..221851442.39 rows=1 width=166)\"\n> \" -> Merge Join (cost=100000036.18..121620543.75 rows=1 width=96)\"\n> \" Merge Cond: ((\"outer\".firstoccurrence =\n> \"inner\".firstoccurrence) AND (\"outer\".sentryid_id = \n> \"inner\".sentryid_id)\n> AND (\"outer\".node_id = \"inner\".node_id))\"\n> \" Join Filter: ((\"outer\".customerid_id = \"inner\".customerid_id)\n> AND (\"outer\".interface_id = \"inner\".interface_id) AND \n> (\"outer\".source_id\n> = \"inner\".source_id) AND (\"outer\".destination_id =\n> \"inner\".destination_id) AND (\"outer\".sourceport_id = \"inner\".s (..)\"\n> \" -> Index Scan using chkpfw_tr_dy_idx1 on\n> chkpfw_tr_dy_dimension a (cost=0.00..21573372.84 rows=6281981\n> width=88)\"\n> \" -> Sort (cost=100000036.18..100000037.38 rows=480 \n> width=136)\"\n> \" Sort Key: b.firstoccurrence, b.sentryid_id, b.node_id\"\n> \" -> Seq Scan on c_chkpfw_dy_tr_updates b\n> (cost=100000000.00..100000014.80 rows=480 width=136)\"\n> \" -> Seq Scan on chkpfw_tr_dy_dimension\n> (cost=100000000.00..100168078.81 rows=6281981 width=70)\"\n> \n> -----Original Message-----\n> From: Jim C. 
Nasby [mailto:[email protected]] \n> Sent: Wednesday, April 12, 2006 5:44 PM\n> To: Sriram Dandapani\n> Cc: [email protected]\n> Subject: Re: [PERFORM] multi column query\n> \n> On Wed, Apr 12, 2006 at 05:32:32PM -0700, Sriram Dandapani wrote:\n> > Hi\n> > \n> > When I update a table that has 20 columns and the where clause\n> includes\n> > 16 of the columns (this is a data warehousing type update \n> on aggregate\n> > fields),\n> > \n> > The bitmap scan is not used by the optimizer. The table is \n> indexed on\n> 3\n> > of the 20 fields. The update takes really long to finish (on a 6\n> million\n> > row table)\n> > \n> > Do I need to do some \"magic\" with configuration to turn on bitmap\n> scans.\n> \n> No. What's explain analyze of the query show? What's it doing now?\n> Seqscan? You might try set enable_seqscan=off and see what that does.\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> \n",
"msg_date": "Thu, 13 Apr 2006 12:51:19 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: multi column query"
}
] |
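A sketch of the kind of change Sriram describes; the thread does not say which three columns were indexed separately, so the column list below is hypothetical, taken from the leading merge columns in the earlier plan:

    -- replace the three single-column indexes with one composite index,
    -- then drop the old ones once the new index is in place
    CREATE INDEX chkpfw_tr_dy_dim_combo_idx
        ON chkpfw_tr_dy_dimension (firstoccurrence, sentryid_id, node_id);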
[
{
"msg_contents": "Hi everyone,\n\nI've seen my pg_toast tables are becoming bigger and bigger. After googling I would like to modify my max_fsm_pages parameter to prevent that kind of problem. So I'm wondering if changing this parameter is enough and after that how can I reduce the size of these tables? By doing a full vacuum?\n\nThanks in advance,\n\nJulien\n\n\n\n\n\n\nHi everyone,\n \nI've seen my pg_toast tables are becoming bigger \nand bigger. After googling I would like to modify my max_fsm_pages parameter to \nprevent that kind of problem. So I'm wondering if changing this parameter is \nenough and after that how can I reduce the size of these tables? By doing a full \nvacuum?\n \nThanks in advance,\n \nJulien",
"msg_date": "Fri, 14 Apr 2006 15:13:43 +0200",
"msg_from": "\"Julien Drouard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_toast size"
},
{
"msg_contents": "On Fri, Apr 14, 2006 at 03:13:43PM +0200, Julien Drouard wrote:\n> Hi everyone,\n> \n> I've seen my pg_toast tables are becoming bigger and bigger. After googling I would like to modify my max_fsm_pages parameter to prevent that kind of problem. So I'm wondering if changing this parameter is enough and after that how can I reduce the size of these tables? By doing a full vacuum?\n\nA full vacuum would do it. CLUSTERing the table might rewrite the toast\ntables as well.\n\nAs for toast, if you do a vacuum verbose over the entire cluster, it\nwill tell you at the end how much space you need in the FSM. See also\nhttp://www.pervasivepostgres.com/instantkb13/article.aspx?id=10087\nand http://www.pervasivepostgres.com/instantkb13/article.aspx?id=10116\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 16:34:44 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_toast size"
}
] |
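A sketch of the sequence Jim outlines; some_table/some_index are placeholders, and raising max_fsm_pages needs a postmaster restart:

    VACUUM VERBOSE;                  -- the last lines report how many FSM pages the cluster needs
    -- raise max_fsm_pages in postgresql.conf accordingly and restart, then reclaim existing bloat:
    VACUUM FULL VERBOSE some_table;  -- also rewrites the table's pg_toast relation
    -- or, on 8.1: CLUSTER some_index ON some_table;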
[
{
"msg_contents": "I have this query, where PG (8.1.2) prefers Merge Join over Hash Join\nover Nested Loop. However, this order turns out to increase in\nperformance. I was hoping someone might be able to shed some light on\nwhy PG chooses the plans in this order, and what I might do to\ninfluence it otherwise. Thanks,\n\nitvtrackdata=> explain analyze SELECT mva,blob.* FROM\nblobs00000000000033c3_c16010 AS blob NATURAL JOIN\nobjects00000000000033c3 WHERE\nAREA(BOX(POINT(bbox_x1,bbox_y0),POINT(bbox_x0,bbox_y1))#BOX('(50,10),(10,50)'))/AREA(BOX(POINT(bbox_x1,bbox_y0),POINT(bbox_x0,bbox_y1)))>0 AND time>=1263627787-32768 AND time<1273458187 AND finish-start>=8738 ORDER BY time ASC;\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=47170.44..47184.46 rows=5607 width=28) (actual\ntime=2661.980..2663.642 rows=1687 loops=1)\n Sort Key: blob.\"time\"\n -> Merge Join (cost=46547.88..46821.32 rows=5607 width=28) (actual\ntime=2645.685..2657.621 rows=1687 loops=1)\n Merge Cond: (\"outer\".sva = \"inner\".sva)\n -> Sort (cost=18003.31..18045.36 rows=16821 width=20) (actual\ntime=181.303..183.092 rows=1741 loops=1)\n Sort Key: blob.sva\n -> Bitmap Heap Scan on blobs00000000000033c3_c16010 blob\n(cost=535.77..16822.65 rows=16821 width=20) (actual time=10.827..177.671\nrows=1741 loops=1)\n Recheck Cond: ((\"time\" >= 1263595019) AND (\"time\" <\n1273458187))\n Filter: ((area((box(point((bbox_x1)::double\nprecision, (bbox_y0)::double precision), point((bbox_x0)::double\nprecision, (bbox_y1)::double precision)) # '(50,50),(10,10)'::box)) /\narea(box(point((bbox_x1)::double precision, (bbox_y0)::double\nprecision), point((bbox_x0)::double precision, (bbox_y1)::double\nprecision)))) > 0::double precision)\n -> Bitmap Index Scan on\nidx_blobs00000000000033c3_c16010_time (cost=0.00..535.77 rows=50462\nwidth=0) (actual time=8.565..8.565 rows=50673 loops=1)\n Index Cond: ((\"time\" >= 1263595019) AND\n(\"time\" < 1273458187))\n -> Sort (cost=28544.56..28950.88 rows=162526 width=16)\n(actual time=2387.726..2437.429 rows=29969 loops=1)\n Sort Key: objects00000000000033c3.sva\n -> Seq Scan on objects00000000000033c3\n(cost=0.00..14477.68 rows=162526 width=16) (actual time=0.085..826.125\nrows=207755 loops=1)\n Filter: ((finish - \"start\") >= 8738)\n Total runtime: 2675.037 ms\n(16 rows)\n\nitvtrackdata=> set enable_mergejoin to false;\nSET\nitvtrackdata=> explain analyze SELECT mva,blob.* FROM\nblobs00000000000033c3_c16010 AS blob NATURAL JOIN\nobjects00000000000033c3 WHERE\nAREA(BOX(POINT(bbox_x1,bbox_y0),POINT(bbox_x0,bbox_y1))#BOX('(50,10),(10,50)'))/AREA(BOX(POINT(bbox_x1,bbox_y0),POINT(bbox_x0,bbox_y1)))>0 AND time>=1263627787-32768 AND time<1273458187 AND finish-start>=8738 ORDER BY time ASC;\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=65783.09..65797.10 rows=5607 width=28) (actual\ntime=1211.228..1212.671 rows=1687 loops=1)\n Sort 
Key: blob.\"time\"\n -> Hash Join (cost=15419.77..65433.97 rows=5607 width=28) (actual\ntime=1067.514..1207.727 rows=1687 loops=1)\n Hash Cond: (\"outer\".sva = \"inner\".sva)\n -> Bitmap Heap Scan on blobs00000000000033c3_c16010 blob\n(cost=535.77..16822.65 rows=16821 width=20) (actual time=14.720..149.179\nrows=1741 loops=1)\n Recheck Cond: ((\"time\" >= 1263595019) AND (\"time\" <\n1273458187))\n Filter: ((area((box(point((bbox_x1)::double precision,\n(bbox_y0)::double precision), point((bbox_x0)::double precision,\n(bbox_y1)::double precision)) # '(50,50),(10,10)'::box)) /\narea(box(point((bbox_x1)::double precision, (bbox_y0)::double\nprecision), point((bbox_x0)::double precision, (bbox_y1)::double\nprecision)))) > 0::double precision)\n -> Bitmap Index Scan on\nidx_blobs00000000000033c3_c16010_time (cost=0.00..535.77 rows=50462\nwidth=0) (actual time=12.880..12.880 rows=50673 loops=1)\n Index Cond: ((\"time\" >= 1263595019) AND (\"time\" <\n1273458187))\n -> Hash (cost=14477.68..14477.68 rows=162526 width=16)\n(actual time=1052.729..1052.729 rows=207755 loops=1)\n -> Seq Scan on objects00000000000033c3\n(cost=0.00..14477.68 rows=162526 width=16) (actual time=0.028..684.047\nrows=207755 loops=1)\n Filter: ((finish - \"start\") >= 8738)\n Total runtime: 1217.938 ms\n(13 rows)\n\nitvtrackdata=> set enable_hashjoin to false;\nSET\nitvtrackdata=> explain analyze SELECT mva,blob.* FROM\nblobs00000000000033c3_c16010 AS blob NATURAL JOIN\nobjects00000000000033c3 WHERE\nAREA(BOX(POINT(bbox_x1,bbox_y0),POINT(bbox_x0,bbox_y1))#BOX('(50,10),(10,50)'))/AREA(BOX(POINT(bbox_x1,bbox_y0),POINT(bbox_x0,bbox_y1)))>0 AND time>=1263627787-32768 AND time<1273458187 AND finish-start>=8738 ORDER BY time ASC;\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=118258.49..118272.51 rows=5607 width=28) (actual\ntime=197.204..198.871 rows=1687 loops=1)\n Sort Key: blob.\"time\"\n -> Nested Loop (cost=535.77..117909.37 rows=5607 width=28) (actual\ntime=27.560..192.526 rows=1687 loops=1)\n -> Bitmap Heap Scan on blobs00000000000033c3_c16010 blob\n(cost=535.77..16822.65 rows=16821 width=20) (actual time=27.450..157.266\nrows=1741 loops=1)\n Recheck Cond: ((\"time\" >= 1263595019) AND (\"time\" <\n1273458187))\n Filter: ((area((box(point((bbox_x1)::double precision,\n(bbox_y0)::double precision), point((bbox_x0)::double precision,\n(bbox_y1)::double precision)) # '(50,50),(10,10)'::box)) /\narea(box(point((bbox_x1)::double precision, (bbox_y0)::double\nprecision), point((bbox_x0)::double precision, (bbox_y1)::double\nprecision)))) > 0::double precision)\n -> Bitmap Index Scan on\nidx_blobs00000000000033c3_c16010_time (cost=0.00..535.77 rows=50462\nwidth=0) (actual time=24.445..24.445 rows=50673 loops=1)\n Index Cond: ((\"time\" >= 1263595019) AND (\"time\" <\n1273458187))\n -> Index Scan using idx_objects00000000000033c3_sva on\nobjects00000000000033c3 (cost=0.00..6.00 rows=1 width=16) (actual\ntime=0.013..0.015 rows=1 loops=1741)\n Index Cond: (\"outer\".sva = objects00000000000033c3.sva)\n Filter: ((finish - \"start\") >= 8738)\n Total runtime: 200.719 ms\n(12 rows)\n\n-- \nIan Westmacott <[email protected]>\nIntelliVid Corp.\n\n",
"msg_date": "Fri, 14 Apr 2006 11:36:48 -0400",
"msg_from": "Ian Westmacott <[email protected]>",
"msg_from_op": true,
"msg_subject": "merge>hash>loop"
},
{
"msg_contents": "Ian Westmacott <[email protected]> writes:\n> I have this query, where PG (8.1.2) prefers Merge Join over Hash Join\n> over Nested Loop. However, this order turns out to increase in\n> performance. I was hoping someone might be able to shed some light on\n> why PG chooses the plans in this order, and what I might do to\n> influence it otherwise. Thanks,\n\nReducing random_page_cost would influence it to prefer the nestloop.\nHowever, I doubt you're ever going to get really ideal results for\nthis query --- the estimated row counts are too far off, and the\nWHERE conditions are sufficiently bizarre that there's not much hope\nthat they'd ever be much better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Apr 2006 12:13:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop "
},
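A sketch of testing Tom's suggestion in a single session before changing anything globally; 2 is only an example value, the default being 4:

    SET random_page_cost = 2;      -- lower values make repeated index probes look cheaper
    EXPLAIN ANALYZE SELECT ... ;   -- re-run the query from the thread here
    RESET random_page_cost;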
{
"msg_contents": "On Fri, 2006-04-14 at 12:13 -0400, Tom Lane wrote:\n> Ian Westmacott <[email protected]> writes:\n> > I have this query, where PG (8.1.2) prefers Merge Join over Hash Join\n> > over Nested Loop. However, this order turns out to increase in\n> > performance. I was hoping someone might be able to shed some light on\n> > why PG chooses the plans in this order, and what I might do to\n> > influence it otherwise. Thanks,\n> \n> Reducing random_page_cost would influence it to prefer the nestloop.\n> However, I doubt you're ever going to get really ideal results for\n> this query --- the estimated row counts are too far off, and the\n> WHERE conditions are sufficiently bizarre that there's not much hope\n> that they'd ever be much better.\n\nThat's what I feared, thanks. But then if I simplify things a bit,\nsuch that the row counts are quite good, and PG still chooses a\nworse plan, can I conclude anything about my configuration settings\n(like random_page_cost)?\n\nitvtrackdata=> explain analyze SELECT mva,blob.* FROM\nblobs00000000000033c3_c16010 AS blob NATURAL JOIN\nobjects00000000000033c3 WHERE time>=1263627787 AND time<1273458187 ORDER\nBY time ASC;\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=75426.47..75552.21 rows=50295 width=28) (actual\ntime=2083.624..2146.654 rows=50472 loops=1)\n Sort Key: blob.\"time\"\n -> Hash Join (cost=16411.51..70749.84 rows=50295 width=28) (actual\ntime=1790.746..1910.204 rows=50472 loops=1)\n Hash Cond: (\"outer\".sva = \"inner\".sva)\n -> Bitmap Heap Scan on blobs00000000000033c3_c16010 blob\n(cost=533.77..14421.19 rows=50295 width=20) (actual time=12.533..78.284\nrows=50472 loops=1)\n Recheck Cond: ((\"time\" >= 1263627787) AND (\"time\" <\n1273458187))\n -> Bitmap Index Scan on\nidx_blobs00000000000033c3_c16010_time (cost=0.00..533.77 rows=50295\nwidth=0) (actual time=12.447..12.447 rows=50472 loops=1)\n Index Cond: ((\"time\" >= 1263627787) AND (\"time\" <\n1273458187))\n -> Hash (cost=12039.79..12039.79 rows=487579 width=16)\n(actual time=1618.128..1618.128 rows=487579 loops=1)\n -> Seq Scan on objects00000000000033c3\n(cost=0.00..12039.79 rows=487579 width=16) (actual time=0.019..833.931\nrows=487579 loops=1)\n Total runtime: 2194.492 ms\n(11 rows)\n\nitvtrackdata=> set enable_hashjoin to false;\nSET\nitvtrackdata=> explain analyze SELECT mva,blob.* FROM\nblobs00000000000033c3_c16010 AS blob NATURAL JOIN\nobjects00000000000033c3 WHERE time>=1263627787 AND time<1273458187 ORDER\nBY time ASC;\n\nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=321096.91..321222.64 rows=50295 width=28) (actual\ntime=929.893..992.316 rows=50472 loops=1)\n Sort Key: blob.\"time\"\n -> Nested Loop (cost=533.77..316420.28 rows=50295 width=28) (actual\ntime=16.780..719.140 rows=50472 loops=1)\n -> Bitmap Heap Scan on blobs00000000000033c3_c16010 blob\n(cost=533.77..14421.19 rows=50295 width=20) (actual time=16.693..84.299\nrows=50472 loops=1)\n Recheck Cond: ((\"time\" >= 1263627787) AND (\"time\" <\n1273458187))\n -> Bitmap Index Scan on\nidx_blobs00000000000033c3_c16010_time (cost=0.00..533.77 rows=50295\nwidth=0) (actual time=16.546..16.546 rows=50472 loops=1)\n Index Cond: ((\"time\" >= 1263627787) AND (\"time\" <\n1273458187))\n -> Index Scan using 
idx_objects00000000000033c3_sva on\nobjects00000000000033c3 (cost=0.00..5.99 rows=1 width=16) (actual\ntime=0.006..0.008 rows=1 loops=50472)\n Index Cond: (\"outer\".sva = objects00000000000033c3.sva)\n Total runtime: 1039.725 ms\n(10 rows)\n\n-- \nIan Westmacott <[email protected]>\nIntelliVid Corp.\n\n",
"msg_date": "Fri, 14 Apr 2006 12:45:48 -0400",
"msg_from": "Ian Westmacott <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: merge>hash>loop"
},
{
"msg_contents": "Ian Westmacott <[email protected]> writes:\n> That's what I feared, thanks. But then if I simplify things a bit,\n> such that the row counts are quite good, and PG still chooses a\n> worse plan, can I conclude anything about my configuration settings\n> (like random_page_cost)?\n\nWell, the other thing that's going on here is that we know we are\noverestimating the cost of nestloop-with-inner-indexscan plans.\nThe current estimation for that is basically \"outer scan cost plus N\ntimes inner scan cost\" where N is the estimated number of outer tuples;\nin other words the repeated indexscan probes are each assumed to happen\nfrom a cold start. In reality, caching of the upper levels of the index\nmeans that the later index probes are much cheaper than this model\nthinks. We've known about this for some time but no one's yet proposed\na more reasonable cost model.\n\nIn my mind this is tied into another issue, which is that the planner\nalways costs on the basis of each query starting from zero. In a real\nenvironment it's much cheaper to use heavily-used indexes than this cost\nmodel suggests, because they'll already be swapped in due to use by\nprevious queries. But we haven't got any infrastructure to keep track\nof what's been heavily used, let alone a cost model that could make use\nof the info.\n\nI think part of the reason that people commonly reduce random_page_cost\nto values much lower than physical reality would suggest is that it\nprovides a crude way of partially compensating for this basic problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Apr 2006 12:58:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop "
},
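Plugging Ian's numbers into that model shows the size of the overestimate: the planner priced each inner index probe at about 5.99 cost units and expected roughly 50,295 probes, i.e. about 301,000 of the nestloop's 316,420 total units, versus about 70,750 estimated for the hash join. In reality each probe took around 0.008 ms because the upper index levels stayed cached, so the 50,472 probes cost roughly 0.4 s and the nestloop finished in about 1.0 s against 2.2 s for the hash join. The misestimate comes from the cold-start assumption, not from the row counts.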
{
"msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n\n> Well, the other thing that's going on here is that we know we are\n> overestimating the cost of nestloop-with-inner-indexscan plans.\n> The current estimation for that is basically \"outer scan cost plus N\n> times inner scan cost\" where N is the estimated number of outer tuples;\n> in other words the repeated indexscan probes are each assumed to happen\n> from a cold start. In reality, caching of the upper levels of the index\n> means that the later index probes are much cheaper than this model\n> thinks. We've known about this for some time but no one's yet proposed\n> a more reasonable cost model.\n\nMy spontaneus guess would be to use log(N)*inner instead of N*inner. I\ndon't have any backings for that, it's just what my intuition tells me\nas a first shot.\n\n> In my mind this is tied into another issue, which is that the planner\n> always costs on the basis of each query starting from zero. In a real\n> environment it's much cheaper to use heavily-used indexes than this cost\n> model suggests, because they'll already be swapped in due to use by\n> previous queries. But we haven't got any infrastructure to keep track\n> of what's been heavily used, let alone a cost model that could make use\n> of the info.\n\nAn easy first approach would be to add a user tunable cache probability\nvalue to each index (and possibly table) between 0 and 1. Then simply\nmultiply random_page_cost with (1-that value) for each scan.\n\nLater, this value could be automatically tuned by stats analysis or\nother means.\n\n> I think part of the reason that people commonly reduce random_page_cost\n> to values much lower than physical reality would suggest is that it\n> provides a crude way of partially compensating for this basic problem.\n\nI totall agree with this, it's just what we did here from time to time. :-)\n\nHmm, how does effective_cach_size correspond with it? Shouldn't a high\neffective_cache_size have a similar effect?\n\nThanks,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Tue, 18 Apr 2006 12:51:59 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
{
"msg_contents": "On Tue, Apr 18, 2006 at 12:51:59PM +0200, Markus Schaber wrote:\n> > In my mind this is tied into another issue, which is that the planner\n> > always costs on the basis of each query starting from zero. In a real\n> > environment it's much cheaper to use heavily-used indexes than this cost\n> > model suggests, because they'll already be swapped in due to use by\n> > previous queries. But we haven't got any infrastructure to keep track\n> > of what's been heavily used, let alone a cost model that could make use\n> > of the info.\n> \n> An easy first approach would be to add a user tunable cache probability\n> value to each index (and possibly table) between 0 and 1. Then simply\n> multiply random_page_cost with (1-that value) for each scan.\n> \n> Later, this value could be automatically tuned by stats analysis or\n> other means.\n\nActually, if you run with stats_block_level turned on you have a\nfirst-order approximation of what is and isn't cached. Perhaps the\nplanner could make use of this information if it's available.\n\n> > I think part of the reason that people commonly reduce random_page_cost\n> > to values much lower than physical reality would suggest is that it\n> > provides a crude way of partially compensating for this basic problem.\n> \n> I totall agree with this, it's just what we did here from time to time. :-)\n> \n> Hmm, how does effective_cach_size correspond with it? Shouldn't a high\n> effective_cache_size have a similar effect?\n\nGenerally, effective_cache_size is used to determine the likelyhood that\nsomething will be in-cache. random_page_cost tells us how expensive it\nwill be to get that information if it isn't in cache.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 16:52:38 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Actually, if you run with stats_block_level turned on you have a\n> first-order approximation of what is and isn't cached.\n\nOnly if those stats decayed (pretty fast) with time; which they don't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 18:22:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop "
},
{
"msg_contents": "Markus Schaber <[email protected]> writes:\n> Hmm, how does effective_cach_size correspond with it? Shouldn't a high\n> effective_cache_size have a similar effect?\n\nIt seems reasonable to suppose that effective_cache_size ought to be\nused as a number indicating how much \"stuff\" would hang around from\nquery to query. Right now it's not used that way...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 18:26:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop "
},
{
"msg_contents": "On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Actually, if you run with stats_block_level turned on you have a\n> > first-order approximation of what is and isn't cached.\n> \n> Only if those stats decayed (pretty fast) with time; which they don't.\n\nGood point. :/ I'm guessing there's no easy way to see how many blocks\nfor a given relation are in shared memory, either...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 18:15:52 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
{
"msg_contents": "On Tue, Apr 18, 2006 at 06:26:48PM -0400, Tom Lane wrote:\n> Markus Schaber <[email protected]> writes:\n> > Hmm, how does effective_cach_size correspond with it? Shouldn't a high\n> > effective_cache_size have a similar effect?\n> \n> It seems reasonable to suppose that effective_cache_size ought to be\n> used as a number indicating how much \"stuff\" would hang around from\n> query to query. Right now it's not used that way...\n\nMaybe it would be a reasonable first pass to have estimators calculate\nthe cost if a node found everything it wanted in cache and then do a\nlinear interpolation between that and the costs we currently come up\nwith? Something like pg_class.relpages / sum(pg_class.relpages) would\ngive an idea of how much of a relation is likely to be cached, which\ncould be used for the linear interpolation.\n\nOf course having *any* idea as to how much of a relation was actually in\nshared_buffers (or better yet, the OS cache) would be a lot more\naccurate, but this simple method might be a good enough first-pass.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 18:22:26 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
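Written out, the interpolation Jim sketches would be roughly

    cost ~ f * cost_if_fully_cached + (1 - f) * cost_as_currently_estimated

with f taken as something like min(1, effective_cache_size / relation_size_in_pages) for the crude first pass he describes. This is only a restatement of the idea, not anything the current planner implements.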
{
"msg_contents": "Markus Schaber <[email protected]> writes:\n> An easy first approach would be to add a user tunable cache probability\n> value to each index (and possibly table) between 0 and 1. Then simply\n> multiply random_page_cost with (1-that value) for each scan.\n\nThat's not the way you'd need to use it. But on reflection I do think\nthere's some merit in a \"cache probability\" parameter, ranging from zero\n(giving current planner behavior) to one (causing the planner to assume\neverything is already in cache from prior queries). We'd have to look\nat exactly how such an assumption should affect the cost equations ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 19:38:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop "
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:\n>> \"Jim C. Nasby\" <[email protected]> writes:\n>>> Actually, if you run with stats_block_level turned on you have a\n>>> first-order approximation of what is and isn't cached.\n>> Only if those stats decayed (pretty fast) with time; which they don't.\n> \n> Good point. :/ I'm guessing there's no easy way to see how many blocks\n> for a given relation are in shared memory, either...\n\ncontrib/pg_buffercache will tell you this - what buffers from what \nrelation are in shared_buffers (if you want to interrogate the os file \nbuffer cache, that's a different story - tho I've been toying with doing \na utility for Freebsd that would do this).\n\nCheers\n\nMark\n",
"msg_date": "Wed, 19 Apr 2006 16:47:40 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
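For anyone wanting to try what Mark describes, the usual query against the pg_buffercache view (essentially the example shipped with the contrib module, assuming its install script has been run in the database being inspected) counts cached buffers per relation:

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
         JOIN pg_class c ON b.relfilenode = c.relfilenode
         JOIN pg_database d ON b.reldatabase = d.oid
                           AND d.datname = current_database()
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

It is a diagnostic to run by hand now and then rather than something cheap enough for the planner's hot path, which is the point Tom makes a couple of messages below.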
{
"msg_contents": "On Wed, Apr 19, 2006 at 04:47:40PM +1200, Mark Kirkwood wrote:\n> Jim C. Nasby wrote:\n> >On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:\n> >>\"Jim C. Nasby\" <[email protected]> writes:\n> >>>Actually, if you run with stats_block_level turned on you have a\n> >>>first-order approximation of what is and isn't cached.\n> >>Only if those stats decayed (pretty fast) with time; which they don't.\n> >\n> >Good point. :/ I'm guessing there's no easy way to see how many blocks\n> >for a given relation are in shared memory, either...\n> \n> contrib/pg_buffercache will tell you this - what buffers from what \n> relation are in shared_buffers (if you want to interrogate the os file \n\nSo theoretically with that code we could make the cost estimator\nfunctions more intelligent about actual query costs. Now, how you'd\nactually see how those estimates improved...\n\n> buffer cache, that's a different story - tho I've been toying with doing \n> a utility for Freebsd that would do this).\n\nWell, the problem is that I doubt anything that OS-specific would be\naccepted into core. What we really need is some method that's\nOS-agnostic...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 19 Apr 2006 00:18:35 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> Jim C. Nasby wrote:\n>> Good point. :/ I'm guessing there's no easy way to see how many blocks\n>> for a given relation are in shared memory, either...\n\n> contrib/pg_buffercache will tell you this -\n\nI think the key word in Jim's comment was \"easy\", ie, cheap. Grovelling\nthrough many thousands of buffers to count the matches to a given\nrelation doesn't sound appetizing, especially not if it gets done over\nagain several times during each query-planning cycle. Trying to keep\ncentralized counts somewhere would be even worse (because of locking/\ncontention issues).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Apr 2006 01:25:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop "
},
{
"msg_contents": "Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n>> Jim C. Nasby wrote:\n>>> Good point. :/ I'm guessing there's no easy way to see how many blocks\n>>> for a given relation are in shared memory, either...\n> \n>> contrib/pg_buffercache will tell you this -\n> \n> I think the key word in Jim's comment was \"easy\", ie, cheap. Grovelling\n> through many thousands of buffers to count the matches to a given\n> relation doesn't sound appetizing, especially not if it gets done over\n> again several times during each query-planning cycle. Trying to keep\n> centralized counts somewhere would be even worse (because of locking/\n> contention issues).\n> \n\nYeah - not sensible for a transaction oriented system - might be ok for \nDSS tho.\n\nCheers\n\nmark\n",
"msg_date": "Wed, 19 Apr 2006 17:33:02 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
},
{
"msg_contents": "On Wed, Apr 19, 2006 at 01:25:28AM -0400, Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n> > Jim C. Nasby wrote:\n> >> Good point. :/ I'm guessing there's no easy way to see how many blocks\n> >> for a given relation are in shared memory, either...\n> \n> > contrib/pg_buffercache will tell you this -\n> \n> I think the key word in Jim's comment was \"easy\", ie, cheap. Grovelling\n> through many thousands of buffers to count the matches to a given\n> relation doesn't sound appetizing, especially not if it gets done over\n> again several times during each query-planning cycle. Trying to keep\n> centralized counts somewhere would be even worse (because of locking/\n> contention issues).\n\nVery true. OTOH, it might not be unreasonable to periodically slog\nthrough the buffers and store that information, perhaps once a minute,\nor every X number of transactions.\n\nI think a bigger issue is that we currently have no way to really\nmeasure the effictiveness of the planner. Without that it's impossible\nto come up with any real data on whether cost formula A is better or\nworse than cost formula B. The only means I can think of for doing this\nwould be to measure estimated cost vs actual cost, but with the overhead\nof EXPLAIN ANALYZE and other variables that might not prove terribly\npractical. Maybe someone else has some ideas...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 20 Apr 2006 11:27:32 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: merge>hash>loop"
}
] |
[
{
"msg_contents": "Since my first post in this series, data has been successfully exported\nfrom the proprietary db (in CSV format) and imported into postgres\n(PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.4\n20050721 (Red Hat 3.4.4-2)) using \nCOPY. The tablespace holding the tables+indexes is now 60Gb, total time\nfor exporting+transfering over network+importing+indexing+vacuuming is\nabout 7 hours.\n\nI now have a query (originally using an analytical function) that\nbehaves strangely as soon as I go from 2 person_information__id:s to 3\nperson_information__id:s (see below):\n\n//\n// 2 PID:s \n//\nmica@TEST=> explain analyze select distinct on\n(driver_activity.person_information__id)\nTEST-> driver_activity.person_information__id,\nTEST-> driver_activity.end_time as prev_timestamp\nTEST-> from\nTEST-> driver_activity \nTEST-> where\nTEST-> driver_activity.person_information__id in (5398,5399)\nTEST-> and driver_activity.driver_activity_type__id = 5\nTEST-> and driver_activity.start_time < '2006-04-01'\nTEST-> order by\nTEST-> driver_activity.person_information__id,\ndriver_activity.start_time desc;\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------\n Unique (cost=1909.90..1912.32 rows=1 width=20) (actual\ntime=1.153..1.623 rows=2 loops=1)\n -> Sort (cost=1909.90..1911.11 rows=485 width=20) (actual\ntime=1.151..1.389 rows=213 loops=1)\n Sort Key: person_information__id, start_time\n -> Bitmap Heap Scan on driver_activity (cost=6.91..1888.26\nrows=485 width=20) (actual time=0.141..0.677 rows=213 loops=1)\n Recheck Cond: (((person_information__id = 5398) AND\n(driver_activity_type__id = 5)) OR ((person_information__id = 5399) AND\n(driver_activity_type__id = 5)))\n Filter: (start_time < '2006-04-01 00:00:00'::timestamp\nwithout time zone)\n -> BitmapOr (cost=6.91..6.91 rows=485 width=0) (actual\ntime=0.102..0.102 rows=0 loops=1)\n -> Bitmap Index Scan on xda_pi_dat\n(cost=0.00..3.46 rows=243 width=0) (actual time=0.042..0.042 rows=56\nloops=1)\n Index Cond: ((person_information__id = 5398)\nAND (driver_activity_type__id = 5))\n -> Bitmap Index Scan on xda_pi_dat\n(cost=0.00..3.46 rows=243 width=0) (actual time=0.055..0.055 rows=157\nloops=1)\n Index Cond: ((person_information__id = 5399)\nAND (driver_activity_type__id = 5))\n Total runtime: 1.693 ms\n(12 rows)\n\n//\n// 3 PID:s \n//\nmica@TEST=> explain analyze select distinct on\n(driver_activity.person_information__id)\nTEST-> driver_activity.person_information__id,\nTEST-> driver_activity.end_time as prev_timestamp\nTEST-> from\nTEST-> driver_activity \nTEST-> where\nTEST-> driver_activity.person_information__id in (5398,5399,5400)\nTEST-> and driver_activity.driver_activity_type__id = 5\nTEST-> and driver_activity.start_time < '2006-04-01'\nTEST-> order by\nTEST-> driver_activity.person_information__id,\ndriver_activity.start_time desc;\n\t\nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------------------------------------------------------\n Unique (cost=2808.35..2811.98 rows=1 width=20) (actual\ntime=5450.281..5450.948 rows=3 loops=1)\n -> Sort (cost=2808.35..2810.17 rows=727 width=20) (actual\ntime=5450.278..5450.607 rows=305 loops=1)\n Sort Key: 
person_information__id, start_time\n -> Bitmap Heap Scan on driver_activity (cost=2713.82..2773.80\nrows=727 width=20) (actual time=5436.259..5449.043 rows=305 loops=1)\n Recheck Cond: ((((person_information__id = 5398) AND\n(driver_activity_type__id = 5)) OR ((person_information__id = 5399) AND\n(driver_activity_type__id = 5)) OR ((person_information__id = 5400) AND\n(driver_activity_type__id = 5))) AND (driver_activity_type__id = 5))\n Filter: (start_time < '2006-04-01 00:00:00'::timestamp\nwithout time zone)\n -> BitmapAnd (cost=2713.82..2713.82 rows=15 width=0)\n(actual time=5436.148..5436.148 rows=0 loops=1)\n -> BitmapOr (cost=10.37..10.37 rows=728 width=0)\n(actual time=0.384..0.384 rows=0 loops=1)\n -> Bitmap Index Scan on xda_pi_dat\n(cost=0.00..3.46 rows=243 width=0) (actual time=0.135..0.135 rows=56\nloops=1)\n Index Cond: ((person_information__id =\n5398) AND (driver_activity_type__id = 5))\n -> Bitmap Index Scan on xda_pi_dat\n(cost=0.00..3.46 rows=243 width=0) (actual time=0.115..0.115 rows=157\nloops=1)\n Index Cond: ((person_information__id =\n5399) AND (driver_activity_type__id = 5))\n -> Bitmap Index Scan on xda_pi_dat\n(cost=0.00..3.46 rows=243 width=0) (actual time=0.126..0.126 rows=93\nloops=1)\n Index Cond: ((person_information__id =\n5400) AND (driver_activity_type__id = 5))\n -> Bitmap Index Scan on xda_dat\n(cost=0.00..2703.21 rows=474916 width=0) (actual time=5435.431..5435.431\nrows=451541 loops=1)\n Index Cond: (driver_activity_type__id = 5)\n Total runtime: 5451.094 ms\n(17 rows)\n\nQuestion: why is the extra step (Bitmap Index Scan on xda_dat)\nintroduced in the latter case? I can see it is originating from \n\n((((person_information__id = 5398) AND (driver_activity_type__id = 5))\nOR ((person_information__id = 5399) AND (driver_activity_type__id = 5))\nOR ((person_information__id = 5400) AND (driver_activity_type__id = 5)))\nAND (driver_activity_type__id = 5))\n\n..which, indented for readability, looks like this:\n\n(\n\t(\n\t\t(\n\t\t\t(person_information__id = 5398) AND\n(driver_activity_type__id = 5)\n\t\t) \n\t\tOR \n\t\t(\n\t\t\t(person_information__id = 5399) AND\n(driver_activity_type__id = 5)\n\t\t) \n\t\tOR \n\t\t(\n\t\t\t(person_information__id = 5400) AND\n(driver_activity_type__id = 5)\n\t\t)\n\t) \n\tAND \n\t(driver_activity_type__id = 5)\n)\n\n..and this last AND seems unnessecary, since the predicate on\n(driver_activity_type__id = 5) is included in each of the above\nconditions.\n\nCan this be a bug in the planner?\n\n/Mikael\n",
"msg_date": "Mon, 17 Apr 2006 01:24:20 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migration study, step 2: rewriting queries"
},
{
"msg_contents": "\"Mikael Carneholm\" <[email protected]> writes:\n> ..and this last AND seems unnessecary, since the predicate on\n> (driver_activity_type__id = 5) is included in each of the above\n> conditions.\n\nThis should be fixed by the changes I made recently in choose_bitmap_and\n--- it wasn't being aggressive about pruning overlapping AND conditions\nwhen a sub-OR was involved. It's possible the new coding is *too*\naggressive, and will reject indexes that it'd be profitable to include;\nbut at least it won't make this particular mistake.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Apr 2006 21:35:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 2: rewriting queries "
}
] |
[
{
"msg_contents": "Hi\n\n \n\nI have a cursor that fetches 150K rows and updates or inserts a table\nwith 150K rows.\n\n \n\nIt takes several minutes for the process to complete (about 15 minutes).\nThe select by itself (without cursor) gets all rows in 15 seconds.\n\n \n\nIs there a way to optimize the cursor to fetch all records and speed up\nthe process. I still need to do the record by record processing\n\n \n\n\n\n\n\n\n\n\n\n\nHi\n \nI have a cursor that fetches 150K rows and updates or\ninserts a table with 150K rows.\n \nIt takes several minutes for the process to complete (about\n15 minutes). The select by itself (without cursor) gets all rows in 15 seconds.\n \nIs there a way to optimize the cursor to fetch all records\nand speed up the process. I still need to do the record by record processing",
"msg_date": "Mon, 17 Apr 2006 07:34:38 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow cursor"
},
{
"msg_contents": "This is one thing that I always try to avoid, a single INSERT INTO...SELECT\n...FROM or single UPDATE is always faster compared to looping the same\nwithin a cursor, unless its inevitable.\n\nregards,\nLuckys.\n\n\nOn 4/17/06, Sriram Dandapani <[email protected]> wrote:\n>\n> Hi\n>\n>\n>\n> I have a cursor that fetches 150K rows and updates or inserts a table with\n> 150K rows.\n>\n>\n>\n> It takes several minutes for the process to complete (about 15 minutes).\n> The select by itself (without cursor) gets all rows in 15 seconds.\n>\n>\n>\n> Is there a way to optimize the cursor to fetch all records and speed up\n> the process. I still need to do the record by record processing\n>\n\nThis is one thing that I always try to avoid, a single INSERT INTO...SELECT ...FROM or single UPDATE is always faster compared to looping the same within a cursor, unless its inevitable.\n \nregards,\nLuckys. \nOn 4/17/06, Sriram Dandapani <[email protected]> wrote:\n\n\n\nHi\n \nI have a cursor that fetches 150K rows and updates or inserts a table with 150K rows.\n \nIt takes several minutes for the process to complete (about 15 minutes). The select by itself (without cursor) gets all rows in 15 seconds.\n\n \nIs there a way to optimize the cursor to fetch all records and speed up the process. I still need to do the record by record processing",
"msg_date": "Tue, 18 Apr 2006 08:55:08 +0400",
"msg_from": "Luckys <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow cursor"
}
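A sketch of the set-based rewrite Luckys is recommending, in place of fetching 150K rows through a cursor. All of the names here (target, staging, id, payload) are invented for illustration; the point is one UPDATE for the rows that already exist plus one INSERT ... SELECT for the rest:

    BEGIN;

    -- rows that already exist: update them in a single statement
    UPDATE target
    SET    payload = s.payload
    FROM   staging s
    WHERE  target.id = s.id;

    -- rows that don't exist yet: insert them in a single statement
    INSERT INTO target (id, payload)
    SELECT s.id, s.payload
    FROM   staging s
    WHERE  NOT EXISTS (SELECT 1 FROM target WHERE target.id = s.id);

    COMMIT;

If some per-row logic really cannot be expressed in SQL, it can often be pushed into an expression or a small function called from the UPDATE, which still avoids looping one row at a time.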
] |
[
{
"msg_contents": "create temporary table c_chkpfw_hr_tr_updates as\n\n select * from c_chkpfw_hr_tr a\n\n where exists(select 1 from\nchkpfw_tr_hr_dimension b\n\n WHERE a.firstoccurrence =\nb.firstoccurrence\n\n AND a.sentryid_id = b.sentryid_id\n\n AND a.node_id = b.node_id\n\n\n AND a.customerid_id =\nb.customerid_id\n\n AND coalesce(a.interface_id,0) =\ncoalesce(b.interface_id,0)\n\n AND coalesce(a.source_id,0) =\ncoalesce(b.source_id,0)\n\n AND coalesce(a.destination_id,0) =\ncoalesce(b.destination_id,0)\n\n AND coalesce(a.sourceport_id,0) =\ncoalesce(b.sourceport_id,0)\n\n AND\ncoalesce(a.destinationport_id,0) = coalesce(b.destinationport_id,0)\n\n AND coalesce(a.inoutbound_id,0) =\ncoalesce(b.inoutbound_id,0)\n\n AND coalesce(a.action_id,0) =\ncoalesce(b.action_id,0)\n\n AND coalesce(a.protocol_id,0) =\ncoalesce(b.protocol_id,0)\n\n AND coalesce(a.service_id,0) =\ncoalesce(b.service_id,0)\n\n AND coalesce(a.sourcezone_id,0) =\ncoalesce(b.sourcezone_id,0)\n\n AND\ncoalesce(a.destinationzone_id,0) = coalesce(b.destinationzone_id,0));\n\n \n\nThis takes forever (I have to cancel the statement each time)\n\n \n\nc_chkpfw_hr_tr has about 20000 rows\n\nchkpfw_tr_hr_dimension has 150K rows\n\n \n\nc_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n\n \n\nFor such a small data set, this seems like a mystery. The only other\nalternative I have is to use cursors which are also very slow for row\nsets of 10- 15K or more.\n\n\n\n\n\n\n\n\n\n\ncreate temporary table c_chkpfw_hr_tr_updates as\n select * from c_chkpfw_hr_tr a\n where exists(select 1 from\nchkpfw_tr_hr_dimension b\n WHERE a.firstoccurrence\n= b.firstoccurrence\n AND a.sentryid_id =\nb.sentryid_id\n AND a.node_id =\nb.node_id \n AND a.customerid_id =\nb.customerid_id\n AND\ncoalesce(a.interface_id,0) = coalesce(b.interface_id,0)\n AND\ncoalesce(a.source_id,0) = coalesce(b.source_id,0)\n AND\ncoalesce(a.destination_id,0) = coalesce(b.destination_id,0)\n AND coalesce(a.sourceport_id,0)\n= coalesce(b.sourceport_id,0)\n AND\ncoalesce(a.destinationport_id,0) = coalesce(b.destinationport_id,0)\n AND\ncoalesce(a.inoutbound_id,0) = coalesce(b.inoutbound_id,0)\n AND\ncoalesce(a.action_id,0) = coalesce(b.action_id,0)\n AND coalesce(a.protocol_id,0)\n= coalesce(b.protocol_id,0)\n AND\ncoalesce(a.service_id,0) = coalesce(b.service_id,0)\n AND\ncoalesce(a.sourcezone_id,0) = coalesce(b.sourcezone_id,0)\n AND\ncoalesce(a.destinationzone_id,0) = coalesce(b.destinationzone_id,0));\n \nThis takes forever (I have to cancel the statement each\ntime)\n \nc_chkpfw_hr_tr has about 20000 rows\nchkpfw_tr_hr_dimension has 150K rows\n \nc_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n \nFor such a small data set, this seems like a mystery. The\nonly other alternative I have is to use cursors which are also very slow for\nrow sets of 10- 15K or more.",
"msg_date": "Mon, 17 Apr 2006 11:36:32 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "creating of temporary table takes very long"
},
{
"msg_contents": "\"Sriram Dandapani\" <[email protected]> writes:\n> [ query snipped ]\n> This takes forever (I have to cancel the statement each time)\n\nHow long did you wait?\n\n> c_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n\nWhich would be what exactly? What does EXPLAIN show for that SELECT?\n(I won't make you post EXPLAIN ANALYZE, if you haven't got the patience\nto let it finish, but you should at least provide EXPLAIN results.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Apr 2006 15:28:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: creating of temporary table takes very long "
}
] |
[
{
"msg_contents": "Explain analyze on the select statement that is the basis for temp table\ndata takes forever. I turned off enable_seqscan but it did not have an\neffect\n\n \n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Sriram\nDandapani\nSent: Monday, April 17, 2006 11:37 AM\nTo: Pgsql-Performance (E-mail)\nSubject: [PERFORM] creating of temporary table takes very long\n\n \n\ncreate temporary table c_chkpfw_hr_tr_updates as\n\n select * from c_chkpfw_hr_tr a\n\n where exists(select 1 from\nchkpfw_tr_hr_dimension b\n\n WHERE a.firstoccurrence =\nb.firstoccurrence\n\n AND a.sentryid_id = b.sentryid_id\n\n AND a.node_id = b.node_id\n\n\n AND a.customerid_id =\nb.customerid_id\n\n AND coalesce(a.interface_id,0) =\ncoalesce(b.interface_id,0)\n\n AND coalesce(a.source_id,0) =\ncoalesce(b.source_id,0)\n\n AND coalesce(a.destination_id,0) =\ncoalesce(b.destination_id,0)\n\n AND coalesce(a.sourceport_id,0) =\ncoalesce(b.sourceport_id,0)\n\n AND\ncoalesce(a.destinationport_id,0) = coalesce(b.destinationport_id,0)\n\n AND coalesce(a.inoutbound_id,0) =\ncoalesce(b.inoutbound_id,0)\n\n AND coalesce(a.action_id,0) =\ncoalesce(b.action_id,0)\n\n AND coalesce(a.protocol_id,0) =\ncoalesce(b.protocol_id,0)\n\n AND coalesce(a.service_id,0) =\ncoalesce(b.service_id,0)\n\n AND coalesce(a.sourcezone_id,0) =\ncoalesce(b.sourcezone_id,0)\n\n AND\ncoalesce(a.destinationzone_id,0) = coalesce(b.destinationzone_id,0));\n\n \n\nThis takes forever (I have to cancel the statement each time)\n\n \n\nc_chkpfw_hr_tr has about 20000 rows\n\nchkpfw_tr_hr_dimension has 150K rows\n\n \n\nc_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n\n \n\nFor such a small data set, this seems like a mystery. The only other\nalternative I have is to use cursors which are also very slow for row\nsets of 10- 15K or more.\n\n\n\n\n\n\n\n\n\n\n\nExplain analyze on the select statement\nthat is the basis for temp table data takes forever. 
I turned off enable_seqscan\nbut it did not have an effect\n \n\n\n\n\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of Sriram Dandapani\nSent: Monday, April 17, 2006 11:37\nAM\nTo: Pgsql-Performance (E-mail)\nSubject: [PERFORM] creating of\ntemporary table takes very long\n\n \ncreate temporary table c_chkpfw_hr_tr_updates as\n \nselect * from c_chkpfw_hr_tr a\n \nwhere exists(select 1 from chkpfw_tr_hr_dimension b\n \nWHERE a.firstoccurrence = b.firstoccurrence\n \n AND a.sentryid_id = b.sentryid_id\n \n AND a.node_id = b.node_id \n \n \n AND a.customerid_id = b.customerid_id\n \n AND coalesce(a.interface_id,0) = coalesce(b.interface_id,0)\n \n AND coalesce(a.source_id,0) = coalesce(b.source_id,0)\n \n AND coalesce(a.destination_id,0) = coalesce(b.destination_id,0)\n \n AND coalesce(a.sourceport_id,0) = coalesce(b.sourceport_id,0)\n \n AND coalesce(a.destinationport_id,0) = coalesce(b.destinationport_id,0)\n \n AND coalesce(a.inoutbound_id,0) = coalesce(b.inoutbound_id,0)\n \n AND coalesce(a.action_id,0) = coalesce(b.action_id,0)\n \n AND coalesce(a.protocol_id,0) = coalesce(b.protocol_id,0)\n \n AND coalesce(a.service_id,0) = coalesce(b.service_id,0)\n \n AND coalesce(a.sourcezone_id,0) = coalesce(b.sourcezone_id,0)\n \n AND coalesce(a.destinationzone_id,0) = coalesce(b.destinationzone_id,0));\n \nThis takes forever (I have to cancel the statement each\ntime)\n \nc_chkpfw_hr_tr has about 20000 rows\nchkpfw_tr_hr_dimension has 150K rows\n \nc_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n \nFor such a small data set, this seems like a mystery. The\nonly other alternative I have is to use cursors which are also very slow for\nrow sets of 10- 15K or more.",
"msg_date": "Mon, 17 Apr 2006 11:52:15 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: creating of temporary table takes very long"
}
] |
[
{
"msg_contents": "Explain output. I tried explain analyze but pgadmin froze after 10\nminutes.\n\n\nQUERY PLAN\n\"Seq Scan on c_chkpfw_hr_tr a (cost=0.00..225975659.89 rows=11000\nwidth=136)\"\n\" Filter: (subplan)\"\n\" SubPlan\"\n\" -> Bitmap Heap Scan on chkpfw_tr_hr_dimension b\n(cost=1474.64..10271.13 rows=1 width=0)\"\n\" Recheck Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id)\nAND ($2 = node_id))\"\n\" Filter: (($3 = customerid_id) AND (COALESCE($4, 0) =\nCOALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id,\n0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND\n(COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8, 0) =\nCOALESCE(destinationport_id, 0)) AND (COALESCE($9, 0) =\nCOALESCE(inoutbound_id, 0)) AND (COALESCE($10, 0) = COALESCE(action_id,\n0)) AND (COALESCE($11, 0) = COALESCE(protocol_id, 0)) AND (COALESCE($12,\n0) = COALESCE(service_id, 0)) AND (COALESCE($13, 0) =\nCOALESCE(sourcezone_id, 0)) AND (COALESCE($14, 0) =\nCOALESCE(destinationzone_id, 0)))\"\n\" -> Bitmap Index Scan on chkpfw_tr_hr_idx1\n(cost=0.00..1474.64 rows=38663 width=0)\"\n\" Index Cond: (($0 = firstoccurrence) AND ($1 =\nsentryid_id) AND ($2 = node_id))\"\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, April 17, 2006 12:29 PM\nTo: Sriram Dandapani\nCc: Pgsql-Performance (E-mail)\nSubject: Re: [PERFORM] creating of temporary table takes very long \n\n\"Sriram Dandapani\" <[email protected]> writes:\n> [ query snipped ]\n> This takes forever (I have to cancel the statement each time)\n\nHow long did you wait?\n\n> c_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n\nWhich would be what exactly? What does EXPLAIN show for that SELECT?\n(I won't make you post EXPLAIN ANALYZE, if you haven't got the patience\nto let it finish, but you should at least provide EXPLAIN results.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Apr 2006 12:48:57 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: creating of temporary table takes very long "
}
] |
[
{
"msg_contents": "Got an explain analyze output..Here it is\n\n\n\"Seq Scan on c_chkpfw_hr_tr a (cost=0.00..225975659.89 rows=11000\nwidth=136) (actual time=2.345..648070.474 rows=22001 loops=1)\"\n\" Filter: (subplan)\"\n\" SubPlan\"\n\" -> Bitmap Heap Scan on chkpfw_tr_hr_dimension b\n(cost=1474.64..10271.13 rows=1 width=0) (actual time=29.439..29.439\nrows=1 loops=22001)\"\n\" Recheck Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id)\nAND ($2 = node_id))\"\n\" Filter: (($3 = customerid_id) AND (COALESCE($4, 0) =\nCOALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id,\n0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND\n(COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8 (..)\"\n\" -> Bitmap Index Scan on chkpfw_tr_hr_idx1\n(cost=0.00..1474.64 rows=38663 width=0) (actual time=12.144..12.144\nrows=33026 loops=22001)\"\n\" Index Cond: (($0 = firstoccurrence) AND ($1 =\nsentryid_id) AND ($2 = node_id))\"\n\"Total runtime: 648097.800 ms\"\n\nRegards\n\nSriram\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, April 17, 2006 12:29 PM\nTo: Sriram Dandapani\nCc: Pgsql-Performance (E-mail)\nSubject: Re: [PERFORM] creating of temporary table takes very long \n\n\"Sriram Dandapani\" <[email protected]> writes:\n> [ query snipped ]\n> This takes forever (I have to cancel the statement each time)\n\nHow long did you wait?\n\n> c_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension\n\nWhich would be what exactly? What does EXPLAIN show for that SELECT?\n(I won't make you post EXPLAIN ANALYZE, if you haven't got the patience\nto let it finish, but you should at least provide EXPLAIN results.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Apr 2006 13:05:17 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: creating of temporary table takes very long "
},
{
"msg_contents": "\"Sriram Dandapani\" <[email protected]> writes:\n> Got an explain analyze output..Here it is\n> \"Seq Scan on c_chkpfw_hr_tr a (cost=0.00..225975659.89 rows=11000\n> width=136) (actual time=2.345..648070.474 rows=22001 loops=1)\"\n> \" Filter: (subplan)\"\n> \" SubPlan\"\n> \" -> Bitmap Heap Scan on chkpfw_tr_hr_dimension b\n> (cost=1474.64..10271.13 rows=1 width=0) (actual time=29.439..29.439\n> rows=1 loops=22001)\"\n> \" Recheck Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id)\n> AND ($2 = node_id))\"\n> \" Filter: (($3 = customerid_id) AND (COALESCE($4, 0) =\n> COALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id,\n> 0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND\n> (COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8 (..)\"\n> \" -> Bitmap Index Scan on chkpfw_tr_hr_idx1\n> (cost=0.00..1474.64 rows=38663 width=0) (actual time=12.144..12.144\n> rows=33026 loops=22001)\"\n> \" Index Cond: (($0 = firstoccurrence) AND ($1 =\n> sentryid_id) AND ($2 = node_id))\"\n> \"Total runtime: 648097.800 ms\"\n\nThat's probably about as good a query plan as you can hope for given\nthe way the query is written. Those COALESCE comparisons are all\nunindexable (unless you make functional indexes on the COALESCE\nexpressions). You might get somewhere by converting the EXISTS\nto an IN, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 12:09:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: creating of temporary table takes very long "
}
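Sketches of the two options Tom mentions, using the column names from the query above (the index name is made up, and only the first few columns are shown; the remaining coalesce()d columns would follow the same pattern):

    -- option 1: an expression index so one of the COALESCE comparisons is indexable
    CREATE INDEX chkpfw_tr_hr_dim_iface_idx
        ON chkpfw_tr_hr_dimension ((coalesce(interface_id, 0)));

    -- option 2: rewrite the EXISTS as an IN over a row constructor
    SELECT a.*
    FROM c_chkpfw_hr_tr a
    WHERE (a.firstoccurrence, a.sentryid_id, a.node_id, a.customerid_id,
           coalesce(a.interface_id, 0), coalesce(a.source_id, 0))
       IN (SELECT b.firstoccurrence, b.sentryid_id, b.node_id, b.customerid_id,
                  coalesce(b.interface_id, 0), coalesce(b.source_id, 0)
           FROM chkpfw_tr_hr_dimension b);

Whether either form actually helps would have to be confirmed with EXPLAIN ANALYZE on the real data; this is only the shape of the rewrite.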
] |
[
{
"msg_contents": ">This should be fixed by the changes I made recently in\nchoose_bitmap_and\n>--- it wasn't being aggressive about pruning overlapping AND conditions\nwhen a sub-OR was involved. It's possible the new coding is >*too*\naggressive, and will reject indexes that it'd be profitable to include;\nbut at least it won't make this particular mistake.\n\nOk, cool. I don't have time to test this right now as the project has to\nmove on (and I guess testing the fix would require a dump+build CVS\nversion+restore), but as a temporary workaround I simly dropped the\nxda_dat index (all queries on that table include the\nperson_information__id column anyway).\n\n- Mikael\n\n\n",
"msg_date": "Tue, 18 Apr 2006 10:51:26 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migration study, step 2: rewriting queries "
},
{
"msg_contents": "\"Mikael Carneholm\" <[email protected]> writes:\n> Ok, cool. I don't have time to test this right now as the project has to\n> move on (and I guess testing the fix would require a dump+build CVS\n> version+restore), but as a temporary workaround I simly dropped the\n> xda_dat index (all queries on that table include the\n> person_information__id column anyway).\n\nThe patch is in the 8.1 branch so you don't need dump/restore anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 10:32:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 2: rewriting queries "
}
] |
[
{
"msg_contents": "> > For now, I only could get good performance with bacula and \n> postgresql \n> > when disabling fsync...\n> \n> \n> Isn't that less safe?\n\nMost definitly.\n\nFWIW, I'm getting pretty good speeds with Bacula and PostgreSQL on a\nreasonably small db (file table about 40 million rows, filename about\n5.2 million and path 1.5 million). \n\nConfig changes are increasing shared mem and work mems, fsm pages,\nwal_sync_method=fdatasync, wal_buffers=16, checkpoint_segments=8,\ndefault_with_oids=off (before creating the bacula tables, so they don't\nuse oids).\n\nUsed to run with full_pages_writes=off, but not anymore since it's not\nsafe.\n\n\n> Also planning to check commit_delay and see if that helps.\n> I will try to avoid 2 or more machines backing up at the same \n> time.. plus in a couple of weeks I should have a better \n> machine for the DB anyways..\n\nBacula already serializes access to the database (they have to support\nmysql/myisam), so this shouldn't help. Actually, it might well hurt by\nintroducing extra delays.\n\n> I only wonder what is safer.. using a second or two in \n> commit_delay or using \n> fsync = off.. Anyone cares to comment?\n\nAbsolutely a commit_delay.\n\n\n//Magnus\n",
"msg_date": "Tue, 18 Apr 2006 13:56:44 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
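As a rough postgresql.conf sketch of the settings Magnus lists (he gives no numbers for the memory and FSM settings, so those values are placeholders only; the WAL, checkpoint and oids lines are the ones he states explicitly):

    shared_buffers = 50000          # placeholder -- "increasing shared mem"
    work_mem = 32768                # placeholder, in kB on 8.1
    max_fsm_pages = 1000000         # placeholder -- "fsm pages"
    wal_sync_method = fdatasync
    wal_buffers = 16
    checkpoint_segments = 8
    default_with_oids = off         # set before creating the bacula tables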
{
"msg_contents": "On Tue, Apr 18, 2006 at 01:56:44PM +0200, Magnus Hagander wrote:\n> Bacula already serializes access to the database (they have to support\n> mysql/myisam), so this shouldn't help. Actually, it might well hurt by\n> introducing extra delays.\n\nYou have any contact with the developers? Maybe they're a possibility\nfor our summer of code...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 16:10:05 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Hi, Magnus,\n\nMagnus Hagander wrote:\n\n> Bacula already serializes access to the database (they have to support\n> mysql/myisam), so this shouldn't help.\n\nOuch, that hurts.\n\nTo support mysql, they break performance for _every other_ database system?\n\n<cynism>\nNow, I understand how the mysql people manage to spread the legend of\nmysql being fast. They convince software developers to thwart all others.\n</>\n\nSeriously: How can we convince developers to either fix MySQL or abandon\nand replace it with a database, instead of crippling client software?\n\n> Actually, it might well hurt by introducing extra delays.\n\nWell, if you read the documentation, you will see that it will only wait\nif there are at least commit_siblings other transactions active. So when\nBacula serializes access, there will be no delays, as there is only a\nsingle transaction alive.\n\n\nHTH\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 19 Apr 2006 14:08:41 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "On Wed, 2006-04-19 at 07:08, Markus Schaber wrote:\n> Hi, Magnus,\n> \n> Magnus Hagander wrote:\n> \n> > Bacula already serializes access to the database (they have to support\n> > mysql/myisam), so this shouldn't help.\n> \n> Ouch, that hurts.\n> \n> To support mysql, they break performance for _every other_ database system?\n\nNote that should be \"to support MySQL with MyISAM tables\".\n\nIf they had written it for MySQL with innodb tables they would likely be\nable to use the same basic methods for performance tuning MySQL as or\nOracle or PostgreSQL.\n\nIt's the refusal of people to stop using MyISAM table types that's the\nreal issue.\n\nOf course, given the shakey ground MySQL is now on with Oracle owning\ninnodb...\n",
"msg_date": "Wed, 19 Apr 2006 10:22:10 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> It's the refusal of people to stop using MyISAM table types that's the\n> real issue.\n\nIsn't MyISAM still the default over there? It's hardly likely that the\naverage MySQL user would use anything but the default table type ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Apr 2006 11:31:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization? "
},
{
"msg_contents": "\n> Isn't MyISAM still the default over there?\n\n\tYes, it's the default.\n\tPersonnally I compile MySQL without InnoDB... and for any new development \nI use postgres.\n\n> It's hardly likely that the average MySQL user would use anything but \n> the default table type ...\n\n\tDouble yes ; also many hosts provide MySQL 4.0 or even 3.x, both of which \nhave no subquery support and are really brain-dead ; and most OSS PHP apps \nhave to be compatible... argh.\n",
"msg_date": "Wed, 19 Apr 2006 17:42:17 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization? "
},
{
"msg_contents": "On Wed, 2006-04-19 at 10:31, Tom Lane wrote:\n> Scott Marlowe <[email protected]> writes:\n> > It's the refusal of people to stop using MyISAM table types that's the\n> > real issue.\n> \n> Isn't MyISAM still the default over there? It's hardly likely that the\n> average MySQL user would use anything but the default table type ...\n\nSure. But bacula supplies its own setup scripts. and it's not that\nhard to make them a requirement for the appication. Most versions of\nMySQL that come with fedora core and other distros nowadays support\ninnodb.\n\n\n",
"msg_date": "Wed, 19 Apr 2006 11:52:46 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "> Scott Marlowe <[email protected]> writes:\n>> It's the refusal of people to stop using MyISAM table types that's the\n>> real issue.\n> \n> Isn't MyISAM still the default over there? It's hardly likely that the\n> average MySQL user would use anything but the default table type ...\n\nSince MySQL 5, InnoDB tables are default I recall.\n\n",
"msg_date": "Thu, 20 Apr 2006 09:07:30 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>> Scott Marlowe <[email protected]> writes:\n>>> It's the refusal of people to stop using MyISAM table types that's the\n>>> real issue.\n>>\n>> Isn't MyISAM still the default over there? It's hardly likely that the\n>> average MySQL user would use anything but the default table type ...\n> \n> Since MySQL 5, InnoDB tables are default I recall.\n\nMight be for the binaries from www.mysql.com - but if you build from \nsource, it still seems to be MYISAM (checked with 5.0.18 and 5.1.7 here).\n\nCheers\n\nMark\n",
"msg_date": "Thu, 20 Apr 2006 13:32:53 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "On Wed, 2006-04-19 at 20:07, Christopher Kings-Lynne wrote:\n> > Scott Marlowe <[email protected]> writes:\n> >> It's the refusal of people to stop using MyISAM table types that's the\n> >> real issue.\n> > \n> > Isn't MyISAM still the default over there? It's hardly likely that the\n> > average MySQL user would use anything but the default table type ...\n> \n> Since MySQL 5, InnoDB tables are default I recall.\n\nIt gets built by default, but when you do a plain create table, it will\nstill default to myisam tables.\n\nNote that there is a setting somewhere in my.cnf that will make the\ndefault table type anything you want.\n\nFor Bacula though, what I was suggesting was that they simply declare\nthat you need innodb table type support if you want decent performance,\nthen coding to that, and if someone doesn't have innodb table support,\nthen they have no right to complain about poor performance. Seems a\nfair compromise to me. The Bacula folks would get to program to a real\ndatabase model with proper serlialization and all that, and the people\nwho refuse to move up to a later model MySQL get crappy performance.\n",
"msg_date": "Thu, 20 Apr 2006 09:16:27 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
}
] |
[
{
"msg_contents": "For the purpose of the application I need to establish some form of\nserialization, therefore I use FOR UPDATE. The query, inside the\nfunction, is like this:\n\npulitzer2=# explain analyze select id FROM messages JOIN\nticketing_codes_played ON id = message_id WHERE service_id = 1102 AND\nreceiving_time BETWEEN '2006-03-01' AND '2006-06-30' FOR UPDATE;\n\nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=32131.04..34281.86 rows=627 width=16) (actual\ntime=742.806..1491.864 rows=58005 loops=1)\n Hash Cond: (\"outer\".message_id = \"inner\".id)\n -> Seq Scan on ticketing_codes_played (cost=0.00..857.17 rows=57217\nwidth=10) (actual time=0.024..209.331 rows=58005 loops=1)\n -> Hash (cost=32090.60..32090.60 rows=16177 width=10) (actual\ntime=742.601..742.601 rows=65596 loops=1)\n -> Bitmap Heap Scan on messages (cost=4153.51..32090.60\nrows=16177 width=10) (actual time=160.555..489.459 rows=65596 loops=1)\n Recheck Cond: ((service_id = 1102) AND (receiving_time >=\n'2006-03-01 00:00:00+01'::timestamp with time zone) AND (receiving_time\n<= '2006-06-30 00:00:00+02'::timestamp with time zone))\n -> BitmapAnd (cost=4153.51..4153.51 rows=16177 width=0)\n(actual time=156.900..156.900 rows=0 loops=1)\n -> Bitmap Index Scan on idx_service_id\n(cost=0.00..469.31 rows=68945 width=0) (actual time=16.661..16.661\nrows=66492 loops=1)\n Index Cond: (service_id = 1102)\n -> Bitmap Index Scan on\nidx_messages_receiving_time (cost=0.00..3683.95 rows=346659 width=0)\n(actual time=137.526..137.526 rows=360754 loops=1)\n Index Cond: ((receiving_time >= '2006-03-01\n00:00:00+01'::timestamp with time zone) AND (receiving_time <=\n'2006-06-30 00:00:00+02'::timestamp with time zone))\n Total runtime: 6401.954 ms\n(12 rows)\n\n\n\nNow, this query takes between 8 and 30 seconds, wich is a lot, since\nduring the day we have almost 20 requests per minute. I notice that\nduring the execution of the above mentioned query i/o goes bezerk,\niostat tells me that load is around 60%. I tried playing with WAL\nconfiguration parametars, even put the log on separate disk spindles, it\ndid nothing.\n\nShall I reconsider the need for the exact lock I developed, or there is\nsomething more I could do to speed the things up?\n\n\tMario\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n",
"msg_date": "Tue, 18 Apr 2006 15:08:39 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT FOR UPDATE performance is bad"
},
{
"msg_contents": "Mario Splivalo <[email protected]> writes:\n> For the purpose of the application I need to establish some form of\n> serialization, therefore I use FOR UPDATE. The query, inside the\n> function, is like this:\n\n> pulitzer2=# explain analyze select id FROM messages JOIN\n> ticketing_codes_played ON id = message_id WHERE service_id = 1102 AND\n> receiving_time BETWEEN '2006-03-01' AND '2006-06-30' FOR UPDATE;\n\n> Hash Join (cost=32131.04..34281.86 rows=627 width=16) (actual\n> time=742.806..1491.864 rows=58005 loops=1)\n ^^^^^\n\n> Now, this query takes between 8 and 30 seconds, wich is a lot, since\n> during the day we have almost 20 requests per minute.\n\nAcquiring a row lock separately for each of 58000 rows is not going to\nbe a cheap operation. Especially not if anyone else is locking any of\nthe same rows and thereby blocking you. If there is concurrent locking,\nyou're also running a big risk of deadlock because two processes might\ntry to lock the same rows in different orders.\n\nAre you really intending to update all 58000 rows? If not, what is\nthe serialization requirement exactly (ie, what are you trying to\naccomplish)? Seems like something about this app needs to be\nredesigned.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 10:30:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad "
},
{
"msg_contents": "Mario Splivalo <[email protected]> writes:\n>> If there is concurrent locking,\n>> you're also running a big risk of deadlock because two processes might\n>> try to lock the same rows in different orders.\n\n> I think there is no risk of a deadlock, since that particular function\n> is called from the middleware (functions are used as interface to the\n> database), and the lock order is always the same.\n\nNo, you don't even know what the order is, let alone that it's always\nthe same.\n\n> Now, I just need to have serialization, I need to have clients 'line up'\n> in order to perform something in the database. Actually, users are\n> sending codes from the newspaper, beer-cans, Cola-cans, and stuff, and\n> database needs to check has the code allready been played. Since the\n> system is designed so that it could run multiple code-games (and then\n> there similair code could exists for coke-game and beer-game), I'm using\n> messages table to see what code-game (i.e. service) that particular code\n> belongs.\n\nI'd suggest using a table that has exactly one row per \"code-game\", and\ndoing a SELECT FOR UPDATE on that row to establish the lock you need.\nThis need not have anything to do with the tables/rows you are actually\nintending to update --- although obviously such a convention is pretty\nfragile if you have updates coming from a variety of code. I think it's\nreasonably safe when you're funneling all the operations through a bit\nof middleware.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 11:33:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad "
},
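A minimal sketch of the one-row-per-game lock table Tom describes (the table and column names here are invented; 1102 is the service id from the earlier query):

    CREATE TABLE code_game_lock (
        game_id integer PRIMARY KEY
    );
    INSERT INTO code_game_lock VALUES (1102);   -- one row per code-game

    -- at the top of the middleware function, before checking the played codes:
    SELECT game_id FROM code_game_lock WHERE game_id = 1102 FOR UPDATE;
    -- ... verify and record the code here; the single-row lock is held until
    -- commit, so concurrent calls for the same game serialize one at a time
    -- without locking 58000 message rows.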
{
"msg_contents": "\nSuppose you have a table codes :\n(\n\tgame_id\tINT,\n\tcode\t\tTEXT,\n\tused\t\tBOOL NOT NULL DEFAULT 'f',\n\tprize\t\t...\n\t...\n\tPRIMARY KEY (game_id, code)\n)\n\n\tJust UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND code=...\n\n\tThen check the rowcount : if one row was updated, the code was not used \nyet. If no row was updated, the code either did not exist, or was already \nused.\n\nAnother option : create a table used_codes like this :\n\n(\n\tgame_id\tINT,\n\tcode\t\tTEXT,\n\t...\n\tPRIMARY KEY (game_id, code)\n)\n\n\tThen, when trying to use a code, INSERT into this table. If you get a \nconstraint violation on the uniqueness of the primary key, your code has \nalready been used.\n\n\tBoth solutions have a big advantage : they don't require messing with \nlocks and are extremely simple. The one with UPDATE is IMHO better, \nbecause it doesn't abort the current transaction (although you could use a \nsavepoint in the INSERT case to intercept the error).\n\n\n\n\n\n\n\n\nOn Tue, 18 Apr 2006 17:33:06 +0200, Tom Lane <[email protected]> wrote:\n\n> Mario Splivalo <[email protected]> writes:\n>>> If there is concurrent locking,\n>>> you're also running a big risk of deadlock because two processes might\n>>> try to lock the same rows in different orders.\n>\n>> I think there is no risk of a deadlock, since that particular function\n>> is called from the middleware (functions are used as interface to the\n>> database), and the lock order is always the same.\n>\n> No, you don't even know what the order is, let alone that it's always\n> the same.\n>\n>> Now, I just need to have serialization, I need to have clients 'line up'\n>> in order to perform something in the database. Actually, users are\n>> sending codes from the newspaper, beer-cans, Cola-cans, and stuff, and\n>> database needs to check has the code allready been played. Since the\n>> system is designed so that it could run multiple code-games (and then\n>> there similair code could exists for coke-game and beer-game), I'm using\n>> messages table to see what code-game (i.e. service) that particular code\n>> belongs.\n>\n> I'd suggest using a table that has exactly one row per \"code-game\", and\n> doing a SELECT FOR UPDATE on that row to establish the lock you need.\n> This need not have anything to do with the tables/rows you are actually\n> intending to update --- although obviously such a convention is pretty\n> fragile if you have updates coming from a variety of code. I think it's\n> reasonably safe when you're funneling all the operations through a bit\n> of middleware.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n\n",
"msg_date": "Tue, 18 Apr 2006 19:00:40 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad "
},
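Inside a plpgsql function, the rowcount check PFC describes can use the FOUND flag. A sketch, assuming his codes table, an invented function name, and that plpgsql is installed in the database:

    CREATE OR REPLACE FUNCTION play_code(in_game_id integer, in_code text)
    RETURNS boolean AS $$
    BEGIN
        UPDATE codes SET used = true
        WHERE  used = false AND game_id = in_game_id AND code = in_code;
        RETURN FOUND;   -- true: code accepted; false: unknown or already used
    END;
    $$ LANGUAGE plpgsql;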
{
"msg_contents": "> Suppose you have a table codes :\n> (\n> game_id INT,\n> code TEXT,\n> used BOOL NOT NULL DEFAULT 'f',\n> prize ...\n> ...\n> PRIMARY KEY (game_id, code)\n> )\n> \n> Just UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND \n> code=...\n> \n> Then check the rowcount : if one row was updated, the code was not \n> used yet. If no row was updated, the code either did not exist, or was \n> already used.\n\nYou can use a stored procedure with exceptions no?\n\nTry this:\n\nhttp://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE\n\nChris\n\n\n",
"msg_date": "Wed, 19 Apr 2006 10:21:35 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad"
},
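A rough sketch of the exception-based variant hinted at above, assuming the used_codes table from the earlier mail; the function name and return value are made up for illustration:

CREATE OR REPLACE FUNCTION claim_code_insert(p_game_id int, p_code text) RETURNS boolean AS $$
BEGIN
    -- The primary key on (game_id, code) rejects a second use of the same code.
    INSERT INTO used_codes (game_id, code) VALUES (p_game_id, p_code);
    RETURN true;
EXCEPTION WHEN unique_violation THEN
    -- Already played; the EXCEPTION block's implicit savepoint keeps the
    -- surrounding transaction usable.
    RETURN false;
END;
$$ LANGUAGE plpgsql;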
{
"msg_contents": "On Tue, 2006-04-18 at 11:33 -0400, Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n> >> If there is concurrent locking,\n> >> you're also running a big risk of deadlock because two processes might\n> >> try to lock the same rows in different orders.\n> \n> > I think there is no risk of a deadlock, since that particular function\n> > is called from the middleware (functions are used as interface to the\n> > database), and the lock order is always the same.\n> \n> No, you don't even know what the order is, let alone that it's always\n> the same.\n\nYou got me confused here! :) If I have just only one function that acts\nas a interface to the middleware, and all the operations on the database\nare done trough that one function, and I carefuly design that function\nso that I first grab the lock, and then do the stuff, aint I pretty sure\nthat I won't be having any deadlocks? \n\n> \n> > Now, I just need to have serialization, I need to have clients 'line up'\n> > in order to perform something in the database. Actually, users are\n> > sending codes from the newspaper, beer-cans, Cola-cans, and stuff, and\n> > database needs to check has the code allready been played. Since the\n> > system is designed so that it could run multiple code-games (and then\n> > there similair code could exists for coke-game and beer-game), I'm using\n> > messages table to see what code-game (i.e. service) that particular code\n> > belongs.\n> \n> I'd suggest using a table that has exactly one row per \"code-game\", and\n> doing a SELECT FOR UPDATE on that row to establish the lock you need.\n> This need not have anything to do with the tables/rows you are actually\n> intending to update --- although obviously such a convention is pretty\n> fragile if you have updates coming from a variety of code. I think it's\n> reasonably safe when you're funneling all the operations through a bit\n> of middleware.\n\nI tend to design my applications so I don't have \"flying SQL\" in my\njava/python/c#/php/whereever code, all the database stuff is done trough\nthe functions which are designed as interfaces. Those functions are also\ndesigned so they don't stop each other. So, since I need the\nserialization, I'll do as you suggested, using a lock-table with\nexactley one row per \"code-game\".\n\nJust one more question here, it has to do with postgres internals, but\nstill I'd like to know why is postgres doing such huge i/o (in my log\nfile I see a lot of messages that say \"LOG: archived transaction log\nfile\" when performing that big FOR UPDATE.\n\n\tMario\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n",
"msg_date": "Wed, 19 Apr 2006 10:20:43 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad"
},
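A minimal sketch of the lock-table convention suggested above, assuming one row per code-game; the table and column names here are invented for illustration:

CREATE TABLE code_game_lock (game_id int PRIMARY KEY);

-- inside the plpgsql interface function, before checking/recording a code:
PERFORM 1 FROM code_game_lock WHERE game_id = p_game_id FOR UPDATE;
-- blocks here until any other transaction working on the same game commits,
-- so "who came first" is decided by the order in which callers get this row lock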
{
"msg_contents": "On Tue, 2006-04-18 at 19:00 +0200, PFC wrote:\n> Suppose you have a table codes :\n> (\n> \tgame_id\tINT,\n> \tcode\t\tTEXT,\n> \tused\t\tBOOL NOT NULL DEFAULT 'f',\n> \tprize\t\t...\n> \t...\n> \tPRIMARY KEY (game_id, code)\n> )\n> \n> \tJust UPDATE codes SET used='t' WHERE used='f' AND game_id=... AND code=...\n> \n> \tThen check the rowcount : if one row was updated, the code was not used \n> yet. If no row was updated, the code either did not exist, or was already \n> used.\n> \n> Another option : create a table used_codes like this :\n> \n> (\n> \tgame_id\tINT,\n> \tcode\t\tTEXT,\n> \t...\n> \tPRIMARY KEY (game_id, code)\n> )\n> \n> \tThen, when trying to use a code, INSERT into this table. If you get a \n> constraint violation on the uniqueness of the primary key, your code has \n> already been used.\n> \n> \tBoth solutions have a big advantage : they don't require messing with \n> locks and are extremely simple. The one with UPDATE is IMHO better, \n> because it doesn't abort the current transaction (although you could use a \n> savepoint in the INSERT case to intercept the error).\n> \n> \n\nThis works perfectly, but sometimes the game has no codes, and I still\nneed to know exactley who came first, who was second, and so on... So a\nlocking table as Tom suggested is, I guess, a perfect solution for my\nsituation...\n\n\tMario\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n",
"msg_date": "Wed, 19 Apr 2006 10:20:54 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad"
},
{
"msg_contents": "On Wed, Apr 19, 2006 at 10:20:54AM +0200, Mario Splivalo wrote:\n> This works perfectly, but sometimes the game has no codes, and I still\n> need to know exactley who came first, who was second, and so on... So a\n> locking table as Tom suggested is, I guess, a perfect solution for my\n> situation...\n\nDepending on your performance requirements, you should look at\ncontrib/userlock as well, since it will probably be much more performant\nthan locking a row in a table.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 20 Apr 2006 11:50:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT FOR UPDATE performance is bad"
}
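contrib/userlock is the 8.1-era module; from PostgreSQL 8.2 on the same idea ships built in as advisory locks. A sketch of that flavour, keyed on the game id (the key values are illustrative):

-- take an exclusive user-level lock; no table rows are touched, so there is
-- no extra row-level I/O or table bloat
SELECT pg_advisory_lock(0, 42);      -- 42 = game_id
-- ... check whether the code has been played and record it ...
SELECT pg_advisory_unlock(0, 42);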
] |
[
{
"msg_contents": "Hi!\n\n I am having trouble with like statements on one of my tables.\n\n I already tried a vacuum and analyze but with no success.\n\n The database is PostgreSQL Database Server 8.1.3 on i686-pc-mingw32\n\nI get the following explain and I am troubled by the very high\n\"startup_cost\" ... does anyone have any idea why that value is so\nhigh?\n\n{SEQSCAN\n :startup_cost 100000000.00 \n :total_cost 100021432.33 \n :plan_rows 1 \n :plan_width 1311 \n :targetlist (\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resno 1 \n :resname image_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 1 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 2 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n :resno 2 \n :resname customer_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 2 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 3 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n :resno 3 \n :resname theme_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 3 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 4 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 4\n }\n :resno 4 \n :resname gallery_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 4 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 5 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 5\n }\n :resno 5 \n :resname event_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 5 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 6 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 6\n }\n :resno 6 \n :resname width \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 6 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 7 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 7\n }\n :resno 7 \n :resname height \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 7 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 8 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 8\n }\n :resno 8 \n :resname filesize \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 8 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 9 \n :vartype 1114 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 9\n }\n :resno 9 \n :resname uploadtime \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 9 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 10 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 10\n }\n :resno 10 \n :resname filename \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 10 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 11 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 11\n }\n :resno 11 \n :resname originalfilename \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 11 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 12 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 12\n }\n :resno 12 \n :resname thumbname \n :ressortgroupref 0 
\n :resorigtbl 29524 \n :resorigcol 12 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 13 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 13\n }\n :resno 13 \n :resname previewname \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 13 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 14 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 14\n }\n :resno 14 \n :resname title \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 14 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 15 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 15\n }\n :resno 15 \n :resname flags \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 15 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 16 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 16\n }\n :resno 16 \n :resname photographername \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 16 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 17 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 17\n }\n :resno 17 \n :resname colors \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 17 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 18 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 18\n }\n :resno 18 \n :resname compression \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 18 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 19 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 19\n }\n :resno 19 \n :resname resolution \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 19 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 20 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 20\n }\n :resno 20 \n :resname colortype \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 20 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 21 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 21\n }\n :resno 21 \n :resname colordepth \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 21 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 22 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 22\n }\n :resno 22 \n :resname sort \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 22 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 23 \n :vartype 1114 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 23\n }\n :resno 23 \n :resname creationtime \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 23 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 24 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 24\n }\n :resno 24 \n :resname creationlocation \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 24 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 25 \n :vartype 25 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 25\n }\n :resno 25 \n :resname description \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 25 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 
\n :varattno 26 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 26\n }\n :resno 26 \n :resname cameravendor_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 26 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 27 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 27\n }\n :resno 27 \n :resname cameramodel_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 27 \n :resjunk false\n }\n )\n :qual (\n {OPEXPR \n :opno 1209 \n :opfuncid 850 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 14 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 14\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 0\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 12 [ 12 0 0 0 68 97 118 111 114 107 97 37 ]\n }\n )\n }\n )\n :lefttree <> \n :righttree <> \n :initPlan <> \n :extParam (b)\n :allParam (b)\n :nParamExec 0 \n :scanrelid 1\n }\n\nSeq Scan on image image0_ (cost=100000000.00..100021432.33 rows=1 width=1311) (actual time=11438.273..13668.300 rows=33 loops=1)\n Filter: ((title)::text ~~ 'Davorka%'::text)\nTotal runtime: 13669.134 ms\n\n \n here's my explain:\n\n {SEQSCAN\n :startup_cost 100000000.00 \n :total_cost 100021432.33 \n :plan_rows 1 \n :plan_width 1311 \n :targetlist (\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resno 1 \n :resname image_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 1 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 2 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n :resno 2 \n :resname customer_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 2 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 3 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n :resno 3 \n :resname theme_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 3 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 4 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 4\n }\n :resno 4 \n :resname gallery_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 4 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 5 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 5\n }\n :resno 5 \n :resname event_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 5 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 6 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 6\n }\n :resno 6 \n :resname width \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 6 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 7 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 7\n }\n :resno 7 \n :resname height \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 7 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 8 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 8\n }\n :resno 8 \n :resname filesize \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 8 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n 
:varattno 9 \n :vartype 1114 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 9\n }\n :resno 9 \n :resname uploadtime \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 9 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 10 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 10\n }\n :resno 10 \n :resname filename \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 10 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 11 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 11\n }\n :resno 11 \n :resname originalfilename \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 11 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 12 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 12\n }\n :resno 12 \n :resname thumbname \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 12 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 13 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 13\n }\n :resno 13 \n :resname previewname \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 13 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 14 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 14\n }\n :resno 14 \n :resname title \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 14 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 15 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 15\n }\n :resno 15 \n :resname flags \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 15 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 16 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 16\n }\n :resno 16 \n :resname photographername \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 16 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 17 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 17\n }\n :resno 17 \n :resname colors \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 17 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 18 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 18\n }\n :resno 18 \n :resname compression \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 18 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 19 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 19\n }\n :resno 19 \n :resname resolution \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 19 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 20 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 20\n }\n :resno 20 \n :resname colortype \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 20 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 21 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 21\n }\n :resno 21 \n :resname colordepth \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 21 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 22 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 22\n }\n 
:resno 22 \n :resname sort \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 22 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 23 \n :vartype 1114 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 23\n }\n :resno 23 \n :resname creationtime \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 23 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 24 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 24\n }\n :resno 24 \n :resname creationlocation \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 24 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 25 \n :vartype 25 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 25\n }\n :resno 25 \n :resname description \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 25 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 26 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 26\n }\n :resno 26 \n :resname cameravendor_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 26 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 27 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 27\n }\n :resno 27 \n :resname cameramodel_id \n :ressortgroupref 0 \n :resorigtbl 29524 \n :resorigcol 27 \n :resjunk false\n }\n )\n :qual (\n {OPEXPR \n :opno 1209 \n :opfuncid 850 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 14 \n :vartype 1043 \n :vartypmod 259 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 14\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 0\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 12 [ 12 0 0 0 68 97 118 111 114 107 97 37 ]\n }\n )\n }\n )\n :lefttree <> \n :righttree <> \n :initPlan <> \n :extParam (b)\n :allParam (b)\n :nParamExec 0 \n :scanrelid 1\n }\n\nSeq Scan on image image0_ (cost=100000000.00..100021432.33 rows=1 width=1311) (actual time=11438.273..13668.300 rows=33 loops=1)\n Filter: ((title)::text ~~ 'Davorka%'::text)\nTotal runtime: 13669.134 ms\n\nThe table looks like the following:\n\nCREATE TABLE image\n(\n image_id int4 NOT NULL,\n customer_id int4 NOT NULL,\n theme_id int4,\n gallery_id int4,\n event_id int4,\n width int4 NOT NULL,\n height int4 NOT NULL,\n filesize int4 NOT NULL,\n uploadtime timestamp NOT NULL,\n filename varchar(255) NOT NULL,\n originalfilename varchar(255),\n thumbname varchar(255) NOT NULL,\n previewname varchar(255) NOT NULL,\n title varchar(255),\n flags int4 NOT NULL,\n photographername varchar(255),\n colors int4,\n compression varchar(64),\n resolution varchar(64),\n colortype varchar(64),\n colordepth varchar(64),\n sort int4,\n creationtime timestamp,\n creationlocation varchar(255),\n description text,\n cameravendor_id int4,\n cameramodel_id int4,\n CONSTRAINT image_pkey PRIMARY KEY (image_id),\n CONSTRAINT rel_121 FOREIGN KEY (cameravendor_id)\n REFERENCES cameravendor (cameravendor_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rel_122 FOREIGN KEY (cameramodel_id)\n REFERENCES cameramodel (cameramodel_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rel_21 FOREIGN KEY (customer_id)\n REFERENCES customer (customer_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rel_23 FOREIGN KEY (theme_id)\n REFERENCES theme 
(theme_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rel_26 FOREIGN KEY (gallery_id)\n REFERENCES gallery (gallery_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT rel_63 FOREIGN KEY (event_id)\n REFERENCES event (event_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n) \nWITHOUT OIDS;\n\nThese are the indexes on the table:\n\nCREATE INDEX idx_image_customer\n ON image\n USING btree\n (customer_id);\n\nCREATE INDEX idx_image_event\n ON image\n USING btree\n (event_id);\n\nCREATE INDEX idx_image_flags\n ON image\n USING btree\n (flags);\n\nCREATE INDEX idx_image_gallery\n ON image\n USING btree\n (gallery_id);\n\nCREATE INDEX idx_image_id\n ON image\n USING btree\n (image_id);\n\nCREATE INDEX idx_image_id_title\n ON image\n USING btree\n (image_id, title);\n\nCREATE INDEX idx_image_theme\n ON image\n USING btree\n (theme_id);\n\nCREATE INDEX idx_image_title\n ON image\n USING btree\n (title);\n\n\n\nI would appreciate any hint what could be the problem here.\n\nBest regards\nManuel Rorarius\n\n",
"msg_date": "Tue, 18 Apr 2006 16:35:13 +0200",
"msg_from": "\"Tarabas (Manuel Rorarius)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with LIKE-Performance"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Tarabas (Manuel Rorarius)\n> Subject: [PERFORM] Problem with LIKE-Performance\n> \n> Hi!\n> \n> I am having trouble with like statements on one of my tables.\n\n\nIt looks like you are getting a sequential scan instead of an index\nscan. What is your locale setting? As far as I know Postgres doesn't\nsupport using indexes with LIKE unless you are using the C locale. \n\nAlso, in the future you only need to post EXPLAIN ANALYZE not EXPLAIN\nANALYZE VERBOSE.\n\nDave\n\n\n",
"msg_date": "Tue, 18 Apr 2006 10:08:27 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with LIKE-Performance"
},
{
"msg_contents": "Hi Dave,\n\nDD> It looks like you are getting a sequential scan instead of an index\nDD> scan. What is your locale setting? As far as I know Postgres doesn't\nDD> support using indexes with LIKE unless you are using the C locale.\n\nActually no, I am using de_DE as locale because I need the german\norder-by support. But even for a seq-scan it seems pretty slow, but that's\njust a feeling. The table currently has ~172.000 rows and is suposed to\nrise to about 1 mio or more.\n\nIs there any way to speed the like's up with a different locale than C\nor to get an order by in a different Locale although using the\ndefault C locale?\n\nDD> Also, in the future you only need to post EXPLAIN ANALYZE not EXPLAIN\nDD> ANALYZE VERBOSE.\n\nok, i will keep that in mind :-) didn't know how verbose you would need\nit *smile*\n\nBest regards\nManuel\n\n",
"msg_date": "Tue, 18 Apr 2006 17:16:18 +0200",
"msg_from": "\"Tarabas (Manuel Rorarius)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with LIKE-Performance"
},
{
"msg_contents": "\n\nDave Dutcher a �crit :\n> It looks like you are getting a sequential scan instead of an index\n> scan. What is your locale setting? As far as I know Postgres doesn't\n> support using indexes with LIKE unless you are using the C locale.\n> \nIt does if you create your index this way :\n\nCREATE INDEX idx_image_title\n ON image\n USING btree\n (title varchar_pattern_ops);\n\nPlease see http://www.postgresql.org/docs/8.1/interactive/indexes-opclass.html\n\n\nThomas\n\n",
"msg_date": "Tue, 18 Apr 2006 17:17:14 +0200",
"msg_from": "REISS Thomas DSIC DESP <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with LIKE-Performance"
},
{
"msg_contents": "\"Tarabas (Manuel Rorarius)\" <[email protected]> writes:\n> I get the following explain and I am troubled by the very high\n> \"startup_cost\" ... does anyone have any idea why that value is so\n> high?\n\n> {SEQSCAN\n> :startup_cost 100000000.00 \n\nYou have enable_seqscan = off, no?\n\nPlease refrain from posting EXPLAIN VERBOSE unless it's specifically\nrequested ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 11:20:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with LIKE-Performance "
},
{
"msg_contents": "Hi Tom,\n\nTL> \"Tarabas (Manuel Rorarius)\" <[email protected]> writes:\n>> I get the following explain and I am troubled by the very high\n>> \"startup_cost\" ... does anyone have any idea why that value is so\n>> high?\n\n>> {SEQSCAN\n>> :startup_cost 100000000.00 \n\nTL> You have enable_seqscan = off, no?\n\nYou were right, I was testing this and had it removed, but somehow I\nmust have hit the wrong button in pgadmin and it was not successfully\nremoved from the database.\n\nAfter removing the enable_seqscan = off and making sure it was gone,\nit is a lot faster again.\n\nNow it takes about 469.841 ms for the select.\n\nTL> Please refrain from posting EXPLAIN VERBOSE unless it's specifically\nTL> requested ...\n\nmea culpa, i will not do that again :-)\n\nBest regards\nManuel\n\n",
"msg_date": "Tue, 18 Apr 2006 17:34:59 +0200",
"msg_from": "\"Tarabas (Manuel Rorarius)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with LIKE-Performance"
},
{
"msg_contents": "On 18.04.2006, at 17:16 Uhr, Tarabas (Manuel Rorarius) wrote:\n\n> Is there any way to speed the like's up with a different locale than C\n> or to get an order by in a different Locale although using the\n> default C locale?\n\nSure. Just create the index with\n\ncreate index <tabname>_<column>_index on <tabname> (<column> \nvarchar_pattern_ops);\n\nThan you can use something like\n\nselect * from <table> where <column> like 'Something%';\n\nRemember that an index can't be used for queries with '%pattern%'.\n\ncug",
"msg_date": "Tue, 18 Apr 2006 17:43:51 +0200",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with LIKE-Performance"
},
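Putting the two answers together for this table, a sketch assuming the de_DE database from the original post: the plain btree keeps serving ORDER BY title under the German locale, and a second index with the pattern opclass serves left-anchored LIKE (the second index name is invented here):

-- existing index, usable for ORDER BY title with the de_DE collation
CREATE INDEX idx_image_title ON image USING btree (title);

-- extra index for left-anchored patterns such as 'Davorka%';
-- a leading '%' can use neither index
CREATE INDEX idx_image_title_pattern ON image USING btree (title varchar_pattern_ops);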
{
"msg_contents": "\"Tarabas (Manuel Rorarius)\" <[email protected]> writes:\n> After removing the enable_seqscan = off and making sure it was gone,\n> it is a lot faster again.\n> Now it takes about 469.841 ms for the select.\n\nUm, no, enable_seqscan would certainly not have had any effect on the\n*actual* runtime of this query. All that enable_seqscan = off really\ndoes is to add a large constant to the estimated cost of any seqscan,\nso as to prevent the planner from selecting it unless there is no other\nalternative plan available. But that has nothing to do with how long\nthe seqscan will really run.\n\nIf you are seeing a speedup in repeated executions of the same seqscan\nplan, it's probably just a caching effect.\n\nAs already noted, it might be worth your while to add an index using the\npattern-ops opclass to help with queries like this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 11:45:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with LIKE-Performance "
},
{
"msg_contents": "Hi Tom,\n\nTL> As already noted, it might be worth your while to add an index using the\nTL> pattern-ops opclass to help with queries like this.\n\nI have done that now and it works very fine as supposed.\n\nThe problem with the high startup_costs disappeared somehow after the\nchange of the enable_seqscan = off and a restart of pg-admin.\n\nfirst Time I ran the statement it showed 13 sec execution time.\n\nSeq Scan on image image0_ (cost=0.00..21414.21 rows=11 width=1311)\n(actual time=10504.138..12857.127 rows=119 loops=1)\n Filter: ((title)::text ~~ '%Davorka%'::text)\nTotal runtime: 12857.372 ms\n\nsecond time I ran the statement it dropped to ~500 msec , which is\npretty ok. :-)\n\nSeq Scan on image image0_ (cost=0.00..21414.21 rows=11 width=1311)\n(actual time=270.289..552.144 rows=119 loops=1)\n Filter: ((title)::text ~~ '%Davorka%'::text)\nTotal runtime: 552.708 ms\n\nBest regards\nManuel Rorarius\n\n",
"msg_date": "Tue, 18 Apr 2006 18:04:34 +0200",
"msg_from": "\"Tarabas (Manuel Rorarius)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [bulk] Re: Problem with LIKE-Performance"
},
{
"msg_contents": "Tarabas (Manuel Rorarius) wrote:\n> Hi Tom,\n> \n> TL> As already noted, it might be worth your while to add an index using the\n> TL> pattern-ops opclass to help with queries like this.\n> \n> I have done that now and it works very fine as supposed.\n> \n> The problem with the high startup_costs disappeared somehow after the\n> change of the enable_seqscan = off and a restart of pg-admin.\n\nI'm not sure restarting pgAdmin would have had any effect.\n\n> first Time I ran the statement it showed 13 sec execution time.\n> \n> Seq Scan on image image0_ (cost=0.00..21414.21 rows=11 width=1311)\n> (actual time=10504.138..12857.127 rows=119 loops=1)\n> Filter: ((title)::text ~~ '%Davorka%'::text)\n> Total runtime: 12857.372 ms\n> \n> second time I ran the statement it dropped to ~500 msec , which is\n> pretty ok. :-)\n\nThis will be because all the data is cached in the server's memory.\n\n> Seq Scan on image image0_ (cost=0.00..21414.21 rows=11 width=1311)\n> (actual time=270.289..552.144 rows=119 loops=1)\n> Filter: ((title)::text ~~ '%Davorka%'::text)\n> Total runtime: 552.708 ms\n\nAs you can see, the plan is still scanning all the rows. In any case, \nyou've changed the query - this has % at the beginning and end, which no \nindex will help you with.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 18 Apr 2006 17:25:43 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [bulk] Re: Problem with LIKE-Performance"
},
{
"msg_contents": "Hi Richard,\n\nRH> As you can see, the plan is still scanning all the rows. In any case, \nRH> you've changed the query - this has % at the beginning and end, which no\nRH> index will help you with.\n\nI realize that, the index definately helped a lot with the query where\nthe % is just at the end. The time went down to 0.203 ms after I\nchanged the index to varchar_pattern_ops.\n\nIndex Scan using idx_image_title on image (cost=0.00..6.01 rows=1 width=1311) (actual time=0.027..0.108 rows=33 loops=1)\nIndex Cond: (((title)::text ~>=~ 'Davorka'::character varying) AND ((title)::text ~<~ 'Davorkb'::character varying))\nFilter: ((title)::text ~~ 'Davorka%'::text)\nTotal runtime: 0.203 ms\n\nAlthough 13 sec. for the first select seems a bit odd, I think after\nthe Database-Cache on the Table kicks in, it should be fine with ~500 ms\n\nBest regards\nManuel\n\n",
"msg_date": "Tue, 18 Apr 2006 18:39:42 +0200",
"msg_from": "\"Tarabas (Manuel Rorarius)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [bulk] Re: [bulk] Re: Problem with LIKE-Performance"
}
] |
[
{
"msg_contents": "Hi,\n\ni remember something that you need a special index with locales<>\"C\".\n\nYou nned a different operator class for this index smth. like:\nCREATE INDEX idx_image_title\n ON image\n USING btree\n (title varchar_pattern_ops);\n\nYou can find the details here:\nhttp://www.postgresql.org/docs/8.1/interactive/indexes-opclass.html\n\nBest regards\n\nHakan Kocaman\nSoftware-Development\n\ndigame.de GmbH\nRichard-Byrd-Str. 4-8\n50829 Köln\n\nTel.: +49 (0) 221 59 68 88 31\nFax: +49 (0) 221 59 68 88 98\nEmail: [email protected]\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Tarabas (Manuel Rorarius)\n> Sent: Tuesday, April 18, 2006 4:35 PM\n> To: [email protected]\n> Subject: [PERFORM] Problem with LIKE-Performance\n> \n> \n> Hi!\n> \n> I am having trouble with like statements on one of my tables.\n> \n> I already tried a vacuum and analyze but with no success.\n> \n> The database is PostgreSQL Database Server 8.1.3 on i686-pc-mingw32\n> \n> I get the following explain and I am troubled by the very high\n> \"startup_cost\" ... does anyone have any idea why that value is so\n> high?\n> \n> {SEQSCAN\n> :startup_cost 100000000.00 \n> :total_cost 100021432.33 \n> :plan_rows 1 \n> :plan_width 1311 \n> :targetlist (\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 1 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 1\n> }\n> :resno 1 \n> :resname image_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 1 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 2 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 2\n> }\n> :resno 2 \n> :resname customer_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 2 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 3 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 3\n> }\n> :resno 3 \n> :resname theme_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 3 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 4 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 4\n> }\n> :resno 4 \n> :resname gallery_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 4 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 5 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 5\n> }\n> :resno 5 \n> :resname event_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 5 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 6 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 6\n> }\n> :resno 6 \n> :resname width \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 6 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 7 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 7\n> }\n> :resno 7 \n> :resname height \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 7 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 8 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 8\n> }\n> :resno 8 \n> :resname filesize \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 8 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> 
:varno 1 \n> :varattno 9 \n> :vartype 1114 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 9\n> }\n> :resno 9 \n> :resname uploadtime \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 9 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 10 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 10\n> }\n> :resno 10 \n> :resname filename \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 10 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 11 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 11\n> }\n> :resno 11 \n> :resname originalfilename \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 11 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 12 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 12\n> }\n> :resno 12 \n> :resname thumbname \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 12 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 13 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 13\n> }\n> :resno 13 \n> :resname previewname \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 13 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 14 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 14\n> }\n> :resno 14 \n> :resname title \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 14 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 15 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 15\n> }\n> :resno 15 \n> :resname flags \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 15 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 16 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 16\n> }\n> :resno 16 \n> :resname photographername \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 16 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 17 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 17\n> }\n> :resno 17 \n> :resname colors \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 17 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 18 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 18\n> }\n> :resno 18 \n> :resname compression \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 18 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 19 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 19\n> }\n> :resno 19 \n> :resname resolution \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 19 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 20 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 20\n> }\n> :resno 20 \n> :resname colortype \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 20 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 21 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 21\n> }\n> :resno 21 \n> :resname 
colordepth \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 21 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 22 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 22\n> }\n> :resno 22 \n> :resname sort \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 22 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 23 \n> :vartype 1114 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 23\n> }\n> :resno 23 \n> :resname creationtime \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 23 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 24 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 24\n> }\n> :resno 24 \n> :resname creationlocation \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 24 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 25 \n> :vartype 25 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 25\n> }\n> :resno 25 \n> :resname description \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 25 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 26 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 26\n> }\n> :resno 26 \n> :resname cameravendor_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 26 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 27 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 27\n> }\n> :resno 27 \n> :resname cameramodel_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 27 \n> :resjunk false\n> }\n> )\n> :qual (\n> {OPEXPR \n> :opno 1209 \n> :opfuncid 850 \n> :opresulttype 16 \n> :opretset false \n> :args (\n> {RELABELTYPE \n> :arg \n> {VAR \n> :varno 1 \n> :varattno 14 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 14\n> }\n> :resulttype 25 \n> :resulttypmod -1 \n> :relabelformat 0\n> }\n> {CONST \n> :consttype 25 \n> :constlen -1 \n> :constbyval false \n> :constisnull false \n> :constvalue 12 [ 12 0 0 0 68 97 118 111 114 107 97 37 ]\n> }\n> )\n> }\n> )\n> :lefttree <> \n> :righttree <> \n> :initPlan <> \n> :extParam (b)\n> :allParam (b)\n> :nParamExec 0 \n> :scanrelid 1\n> }\n> \n> Seq Scan on image image0_ (cost=100000000.00..100021432.33 \n> rows=1 width=1311) (actual time=11438.273..13668.300 rows=33 loops=1)\n> Filter: ((title)::text ~~ 'Davorka%'::text)\n> Total runtime: 13669.134 ms\n> \n> \n> here's my explain:\n> \n> {SEQSCAN\n> :startup_cost 100000000.00 \n> :total_cost 100021432.33 \n> :plan_rows 1 \n> :plan_width 1311 \n> :targetlist (\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 1 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 1\n> }\n> :resno 1 \n> :resname image_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 1 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 2 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 2\n> }\n> :resno 2 \n> :resname customer_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 2 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 3 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 3\n> }\n> :resno 3 \n> :resname 
theme_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 3 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 4 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 4\n> }\n> :resno 4 \n> :resname gallery_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 4 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 5 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 5\n> }\n> :resno 5 \n> :resname event_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 5 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 6 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 6\n> }\n> :resno 6 \n> :resname width \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 6 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 7 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 7\n> }\n> :resno 7 \n> :resname height \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 7 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 8 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 8\n> }\n> :resno 8 \n> :resname filesize \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 8 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 9 \n> :vartype 1114 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 9\n> }\n> :resno 9 \n> :resname uploadtime \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 9 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 10 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 10\n> }\n> :resno 10 \n> :resname filename \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 10 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 11 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 11\n> }\n> :resno 11 \n> :resname originalfilename \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 11 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 12 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 12\n> }\n> :resno 12 \n> :resname thumbname \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 12 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 13 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 13\n> }\n> :resno 13 \n> :resname previewname \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 13 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 14 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 14\n> }\n> :resno 14 \n> :resname title \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 14 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 15 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 15\n> }\n> :resno 15 \n> :resname flags \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 15 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 16 \n> :vartype 1043 \n> :vartypmod 
259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 16\n> }\n> :resno 16 \n> :resname photographername \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 16 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 17 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 17\n> }\n> :resno 17 \n> :resname colors \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 17 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 18 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 18\n> }\n> :resno 18 \n> :resname compression \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 18 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 19 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 19\n> }\n> :resno 19 \n> :resname resolution \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 19 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 20 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 20\n> }\n> :resno 20 \n> :resname colortype \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 20 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 21 \n> :vartype 1043 \n> :vartypmod 68 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 21\n> }\n> :resno 21 \n> :resname colordepth \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 21 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 22 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 22\n> }\n> :resno 22 \n> :resname sort \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 22 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 23 \n> :vartype 1114 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 23\n> }\n> :resno 23 \n> :resname creationtime \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 23 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 24 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 24\n> }\n> :resno 24 \n> :resname creationlocation \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 24 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 25 \n> :vartype 25 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 25\n> }\n> :resno 25 \n> :resname description \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 25 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 26 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 26\n> }\n> :resno 26 \n> :resname cameravendor_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 26 \n> :resjunk false\n> }\n> {TARGETENTRY \n> :expr \n> {VAR \n> :varno 1 \n> :varattno 27 \n> :vartype 23 \n> :vartypmod -1 \n> :varlevelsup 0 \n> :varnoold 1 \n> :varoattno 27\n> }\n> :resno 27 \n> :resname cameramodel_id \n> :ressortgroupref 0 \n> :resorigtbl 29524 \n> :resorigcol 27 \n> :resjunk false\n> }\n> )\n> :qual (\n> {OPEXPR \n> :opno 1209 \n> :opfuncid 850 \n> :opresulttype 16 \n> :opretset false \n> :args (\n> {RELABELTYPE \n> :arg \n> {VAR \n> :varno 1 \n> :varattno 14 \n> :vartype 1043 \n> :vartypmod 259 \n> :varlevelsup 0 \n> 
:varnoold 1 \n> :varoattno 14\n> }\n> :resulttype 25 \n> :resulttypmod -1 \n> :relabelformat 0\n> }\n> {CONST \n> :consttype 25 \n> :constlen -1 \n> :constbyval false \n> :constisnull false \n> :constvalue 12 [ 12 0 0 0 68 97 118 111 114 107 97 37 ]\n> }\n> )\n> }\n> )\n> :lefttree <> \n> :righttree <> \n> :initPlan <> \n> :extParam (b)\n> :allParam (b)\n> :nParamExec 0 \n> :scanrelid 1\n> }\n> \n> Seq Scan on image image0_ (cost=100000000.00..100021432.33 \n> rows=1 width=1311) (actual time=11438.273..13668.300 rows=33 loops=1)\n> Filter: ((title)::text ~~ 'Davorka%'::text)\n> Total runtime: 13669.134 ms\n> \n> The table looks like the following:\n> \n> CREATE TABLE image\n> (\n> image_id int4 NOT NULL,\n> customer_id int4 NOT NULL,\n> theme_id int4,\n> gallery_id int4,\n> event_id int4,\n> width int4 NOT NULL,\n> height int4 NOT NULL,\n> filesize int4 NOT NULL,\n> uploadtime timestamp NOT NULL,\n> filename varchar(255) NOT NULL,\n> originalfilename varchar(255),\n> thumbname varchar(255) NOT NULL,\n> previewname varchar(255) NOT NULL,\n> title varchar(255),\n> flags int4 NOT NULL,\n> photographername varchar(255),\n> colors int4,\n> compression varchar(64),\n> resolution varchar(64),\n> colortype varchar(64),\n> colordepth varchar(64),\n> sort int4,\n> creationtime timestamp,\n> creationlocation varchar(255),\n> description text,\n> cameravendor_id int4,\n> cameramodel_id int4,\n> CONSTRAINT image_pkey PRIMARY KEY (image_id),\n> CONSTRAINT rel_121 FOREIGN KEY (cameravendor_id)\n> REFERENCES cameravendor (cameravendor_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT rel_122 FOREIGN KEY (cameramodel_id)\n> REFERENCES cameramodel (cameramodel_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT rel_21 FOREIGN KEY (customer_id)\n> REFERENCES customer (customer_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT rel_23 FOREIGN KEY (theme_id)\n> REFERENCES theme (theme_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT rel_26 FOREIGN KEY (gallery_id)\n> REFERENCES gallery (gallery_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT rel_63 FOREIGN KEY (event_id)\n> REFERENCES event (event_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> ) \n> WITHOUT OIDS;\n> \n> These are the indexes on the table:\n> \n> CREATE INDEX idx_image_customer\n> ON image\n> USING btree\n> (customer_id);\n> \n> CREATE INDEX idx_image_event\n> ON image\n> USING btree\n> (event_id);\n> \n> CREATE INDEX idx_image_flags\n> ON image\n> USING btree\n> (flags);\n> \n> CREATE INDEX idx_image_gallery\n> ON image\n> USING btree\n> (gallery_id);\n> \n> CREATE INDEX idx_image_id\n> ON image\n> USING btree\n> (image_id);\n> \n> CREATE INDEX idx_image_id_title\n> ON image\n> USING btree\n> (image_id, title);\n> \n> CREATE INDEX idx_image_theme\n> ON image\n> USING btree\n> (theme_id);\n> \n> CREATE INDEX idx_image_title\n> ON image\n> USING btree\n> (title);\n> \n> \n> \n> I would appreciate any hint what could be the problem here.\n> \n> Best regards\n> Manuel Rorarius\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n",
"msg_date": "Tue, 18 Apr 2006 17:35:30 +0200",
"msg_from": "\"Hakan Kocaman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with LIKE-Performance"
},
{
"msg_contents": "Hi Hakan,\n\nHK> i remember something that you need a special index with locales<>\"C\".\nHK> You nned a different operator class for this index smth. like:\nHK> CREATE INDEX idx_image_title\nHK> ON image\nHK> USING btree\nHK> (title varchar_pattern_ops);\n\nI also forgot that, thanks a lot for the hint. that speeded up my\nsearches a lot!\n\nBest regards\nManuel\n\n",
"msg_date": "Tue, 18 Apr 2006 17:43:53 +0200",
"msg_from": "\"Tarabas (Manuel Rorarius)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [bulk] RE: Problem with LIKE-Performance"
}
] |
[
{
"msg_contents": "Thx Tom\n\nI guess I have to abandon the bulk update. The columns in the where\nclause comprise 80% of the table columns..So indexing all may not help.\nThe target table will have on average 60-180 million rows.\n\nI will attempt the in instead of exist and let you know the result\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, April 18, 2006 9:10 AM\nTo: Sriram Dandapani\nCc: Pgsql-Performance (E-mail)\nSubject: Re: [PERFORM] creating of temporary table takes very long \n\n\"Sriram Dandapani\" <[email protected]> writes:\n> Got an explain analyze output..Here it is\n> \"Seq Scan on c_chkpfw_hr_tr a (cost=0.00..225975659.89 rows=11000\n> width=136) (actual time=2.345..648070.474 rows=22001 loops=1)\"\n> \" Filter: (subplan)\"\n> \" SubPlan\"\n> \" -> Bitmap Heap Scan on chkpfw_tr_hr_dimension b\n> (cost=1474.64..10271.13 rows=1 width=0) (actual time=29.439..29.439\n> rows=1 loops=22001)\"\n> \" Recheck Cond: (($0 = firstoccurrence) AND ($1 =\nsentryid_id)\n> AND ($2 = node_id))\"\n> \" Filter: (($3 = customerid_id) AND (COALESCE($4, 0) =\n> COALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id,\n> 0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND\n> (COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8 (..)\"\n> \" -> Bitmap Index Scan on chkpfw_tr_hr_idx1\n> (cost=0.00..1474.64 rows=38663 width=0) (actual time=12.144..12.144\n> rows=33026 loops=22001)\"\n> \" Index Cond: (($0 = firstoccurrence) AND ($1 =\n> sentryid_id) AND ($2 = node_id))\"\n> \"Total runtime: 648097.800 ms\"\n\nThat's probably about as good a query plan as you can hope for given\nthe way the query is written. Those COALESCE comparisons are all\nunindexable (unless you make functional indexes on the COALESCE\nexpressions). You might get somewhere by converting the EXISTS\nto an IN, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 09:13:04 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: creating of temporary table takes very long "
},
{
"msg_contents": "You might try rewriting the coalesces into a row comparison...\n\nWHERE row($4, $5, ...) IS NOT DISTINCT FROM row(interface_id, source_id, ...)\n\nSee\nhttp://www.postgresql.org/docs/8.1/interactive/functions-comparisons.html#AEN13408\n\nNote that the docs only show IS DISTINCT FROM, so you might have to do\n\nWHERE NOT row(...) IS DISTINCT FROM row(...)\n\nOn Tue, Apr 18, 2006 at 09:13:04AM -0700, Sriram Dandapani wrote:\n> Thx Tom\n> \n> I guess I have to abandon the bulk update. The columns in the where\n> clause comprise 80% of the table columns..So indexing all may not help.\n> The target table will have on average 60-180 million rows.\n> \n> I will attempt the in instead of exist and let you know the result\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Tuesday, April 18, 2006 9:10 AM\n> To: Sriram Dandapani\n> Cc: Pgsql-Performance (E-mail)\n> Subject: Re: [PERFORM] creating of temporary table takes very long \n> \n> \"Sriram Dandapani\" <[email protected]> writes:\n> > Got an explain analyze output..Here it is\n> > \"Seq Scan on c_chkpfw_hr_tr a (cost=0.00..225975659.89 rows=11000\n> > width=136) (actual time=2.345..648070.474 rows=22001 loops=1)\"\n> > \" Filter: (subplan)\"\n> > \" SubPlan\"\n> > \" -> Bitmap Heap Scan on chkpfw_tr_hr_dimension b\n> > (cost=1474.64..10271.13 rows=1 width=0) (actual time=29.439..29.439\n> > rows=1 loops=22001)\"\n> > \" Recheck Cond: (($0 = firstoccurrence) AND ($1 =\n> sentryid_id)\n> > AND ($2 = node_id))\"\n> > \" Filter: (($3 = customerid_id) AND (COALESCE($4, 0) =\n> > COALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id,\n> > 0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND\n> > (COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8 (..)\"\n> > \" -> Bitmap Index Scan on chkpfw_tr_hr_idx1\n> > (cost=0.00..1474.64 rows=38663 width=0) (actual time=12.144..12.144\n> > rows=33026 loops=22001)\"\n> > \" Index Cond: (($0 = firstoccurrence) AND ($1 =\n> > sentryid_id) AND ($2 = node_id))\"\n> > \"Total runtime: 648097.800 ms\"\n> \n> That's probably about as good a query plan as you can hope for given\n> the way the query is written. Those COALESCE comparisons are all\n> unindexable (unless you make functional indexes on the COALESCE\n> expressions). You might get somewhere by converting the EXISTS\n> to an IN, though.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 17:34:11 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: creating of temporary table takes very long"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> You might try rewriting the coalesces into a row comparison...\n> WHERE row($4, $5, ...) IS NOT DISTINCT FROM row(interface_id, source_id, ...)\n\nThat would be notationally nicer, but no help performance-wise; I'm\nfairly sure that IS DISTINCT doesn't get optimized in any fashion\nwhatsoever :-(\n\nWhat might be worth trying is functional indexes on the COALESCE(foo,0)\nexpressions. Or if possible, consider revising your data schema to\navoid using NULLs in a way that requires assuming that NULL = NULL.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 18:48:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: creating of temporary table takes very long "
}
] |
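A sketch of the functional-index idea from the reply above. The underlying table definition is not shown in the thread, so the column list is taken from the plan output and should be treated as illustrative only.

-- Indexing the COALESCE(...) expressions themselves lets them be used as
-- index conditions instead of per-row filter conditions:
CREATE INDEX chkpfw_tr_hr_dim_coalesce_idx
    ON chkpfw_tr_hr_dimension (
        firstoccurrence,
        sentryid_id,
        node_id,
        (COALESCE(interface_id, 0)),
        (COALESCE(source_id, 0)),
        (COALESCE(destination_id, 0))
    );
ANALYZE chkpfw_tr_hr_dimension;

Whether this pays off depends on how selective the leading columns already are; rewriting the EXISTS into an IN, as suggested above, is the other avenue worth trying.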
[
{
"msg_contents": "Hi\n\nApologies if this has already been raised...\n\nPostgreSQL 8.1.3 and prior versions. Vacuum done.\n\nAssuming a single table with columns named c1 to cn and a requirement to\nselect from a particular position in multiple column order. \n\nThe column values in my simple example below denoted by 'cnv' a typical\nquery would look as follows\n\nselect * from mytable where\n (c1 = 'c1v' and c2 = 'c2v' and c3 >= 'c3v') or\n (c1 = 'c1v' and c2 > 'c2v') or\n (c1 > 'c1v')\n order by c1, c2, c3;\n\nIn real life with the table containing many rows (>9 Million) and\na single multicolumn index on the required columns existing I get the\nfollowing\n\nexplain analyse\n SELECT\n tran_subledger,\n tran_subaccount,\n tran_mtch,\n tran_self,\n tran_Rflg FROM tran\nWHERE ((tran_subledger = 2 AND tran_subaccount = 'ARM '\nAND tran_mtch = 0 AND tran_self >= 0 )\nOR (tran_subledger = 2 AND tran_subaccount = 'ARM ' AND\ntran_mtch > 0 )\nOR (tran_subledger = 2 AND tran_subaccount > 'ARM ' )\nOR (tran_subledger > 2 ))\nORDER BY tran_subledger,\n tran_subaccount,\n tran_mtch,\n tran_self\nlimit 10;\n \n Limit (cost=0.00..25.21 rows=10 width=36) (actual\ntime=2390271.832..2390290.305 rows=10 loops=1)\n -> Index Scan using tran_mtc_idx on tran (cost=0.00..13777295.04\nrows=5465198 width=36) (actual time=2390271.823..2390290.252 rows=10\nloops=1)\n Filter: (((tran_subledger = 2) AND (tran_subaccount = 'ARM \n'::bpchar) AND (tran_mtch = 0) AND (tran_self >= 0)) OR ((tran_subledger\n= 2) AND (tran_subaccount = 'ARM '::bpchar) AND\n(tran_mtch > 0)) OR ((tran_subledger = 2) AND (tran_subaccount >\n'ARM '::bpchar)) OR (tran_subledger > 2))\n Total runtime: 2390290.417 ms\n\nAny suggestions/comments/ideas appreciated.\n-- \nRegards\nTheo\n\n",
"msg_date": "Wed, 19 Apr 2006 00:07:55 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multicolumn order by"
},
{
"msg_contents": "Theo Kramer <[email protected]> writes:\n> select * from mytable where\n> (c1 = 'c1v' and c2 = 'c2v' and c3 >= 'c3v') or\n> (c1 = 'c1v' and c2 > 'c2v') or\n> (c1 > 'c1v')\n> order by c1, c2, c3;\n\nYeah ... what you really want is the SQL-spec row comparison operator\n\nselect ... where (c1,c2,c3) >= ('c1v','c2v','c3v') order by c1,c2,c3;\n\nThis does not work properly in any current PG release :-( but it does\nwork and is optimized well in CVS HEAD. See eg this thread\nhttp://archives.postgresql.org/pgsql-hackers/2006-02/msg00209.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 19:08:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn order by "
},
{
"msg_contents": "Assuming stats are accurate, you're reading through 5.5M index rows in\norder to run that limit query. You didn't say what the index was\nactually on, but you might want to try giving each column it's own\nindex. That might make a bitmap scan feasable.\n\nI know this doesn't help right now, but 8.2 will also allow you to do\nthis using a row comparitor. You might want to compile cvs HEAD and see\nhow that does with this query (specifically if using a row comparitor\nperforms better than the query below).\n\nOn Wed, Apr 19, 2006 at 12:07:55AM +0200, Theo Kramer wrote:\n> Hi\n> \n> Apologies if this has already been raised...\n> \n> PostgreSQL 8.1.3 and prior versions. Vacuum done.\n> \n> Assuming a single table with columns named c1 to cn and a requirement to\n> select from a particular position in multiple column order. \n> \n> The column values in my simple example below denoted by 'cnv' a typical\n> query would look as follows\n> \n> select * from mytable where\n> (c1 = 'c1v' and c2 = 'c2v' and c3 >= 'c3v') or\n> (c1 = 'c1v' and c2 > 'c2v') or\n> (c1 > 'c1v')\n> order by c1, c2, c3;\n> \n> In real life with the table containing many rows (>9 Million) and\n> a single multicolumn index on the required columns existing I get the\n> following\n> \n> explain analyse\n> SELECT\n> tran_subledger,\n> tran_subaccount,\n> tran_mtch,\n> tran_self,\n> tran_Rflg FROM tran\n> WHERE ((tran_subledger = 2 AND tran_subaccount = 'ARM '\n> AND tran_mtch = 0 AND tran_self >= 0 )\n> OR (tran_subledger = 2 AND tran_subaccount = 'ARM ' AND\n> tran_mtch > 0 )\n> OR (tran_subledger = 2 AND tran_subaccount > 'ARM ' )\n> OR (tran_subledger > 2 ))\n> ORDER BY tran_subledger,\n> tran_subaccount,\n> tran_mtch,\n> tran_self\n> limit 10;\n> \n> Limit (cost=0.00..25.21 rows=10 width=36) (actual\n> time=2390271.832..2390290.305 rows=10 loops=1)\n> -> Index Scan using tran_mtc_idx on tran (cost=0.00..13777295.04\n> rows=5465198 width=36) (actual time=2390271.823..2390290.252 rows=10\n> loops=1)\n> Filter: (((tran_subledger = 2) AND (tran_subaccount = 'ARM \n> '::bpchar) AND (tran_mtch = 0) AND (tran_self >= 0)) OR ((tran_subledger\n> = 2) AND (tran_subaccount = 'ARM '::bpchar) AND\n> (tran_mtch > 0)) OR ((tran_subledger = 2) AND (tran_subaccount >\n> 'ARM '::bpchar)) OR (tran_subledger > 2))\n> Total runtime: 2390290.417 ms\n> \n> Any suggestions/comments/ideas appreciated.\n> -- \n> Regards\n> Theo\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 18 Apr 2006 18:13:53 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multicolumn order by"
},
{
"msg_contents": "On Wed, 2006-04-19 at 01:08, Tom Lane wrote:\n> Theo Kramer <[email protected]> writes:\n> > select * from mytable where\n> > (c1 = 'c1v' and c2 = 'c2v' and c3 >= 'c3v') or\n> > (c1 = 'c1v' and c2 > 'c2v') or\n> > (c1 > 'c1v')\n> > order by c1, c2, c3;\n> \n> Yeah ... what you really want is the SQL-spec row comparison operator\n> \n> select ... where (c1,c2,c3) >= ('c1v','c2v','c3v') order by c1,c2,c3;\n> \n> This does not work properly in any current PG release :-( but it does\n> work and is optimized well in CVS HEAD. See eg this thread\n> http://archives.postgresql.org/pgsql-hackers/2006-02/msg00209.php\n\nThat is awesome - been fighting with porting my isam based stuff onto\nsql for a long time and the row comparison operator is exactly what I\nhave been looking for.\n\nI tried this on my test system running 8.1.3 and appears to work fine.\nAppreciate it if you could let me know in what cases it does not work\nproperly.\n\n-- \nRegards\nTheo\n\n",
"msg_date": "Wed, 19 Apr 2006 08:00:39 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multicolumn order by"
},
{
"msg_contents": "On Wed, 2006-04-19 at 08:00, Theo Kramer wrote:\n\n> I tried this on my test system running 8.1.3 and appears to work fine.\n> Appreciate it if you could let me know in what cases it does not work\n> properly.\n\nPlease ignore - 'Explain is your friend' - got to look at the tips :)\n-- \nRegards\nTheo\n\n",
"msg_date": "Wed, 19 Apr 2006 11:26:00 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multicolumn order by"
}
] |
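A sketch of the row-comparison form discussed above, using the tran table and columns from the thread; the literal values are illustrative. Per the discussion, only the then-upcoming 8.2 release evaluates this per the SQL spec and turns it into a single index range scan, so on 8.1 both the results and the plan should be checked with EXPLAIN ANALYZE before relying on it.

EXPLAIN ANALYZE
SELECT tran_subledger, tran_subaccount, tran_mtch, tran_self, tran_Rflg
  FROM tran
 WHERE (tran_subledger, tran_subaccount, tran_mtch, tran_self)
       >= (2, 'ARM', 0, 0)
 ORDER BY tran_subledger, tran_subaccount, tran_mtch, tran_self
 LIMIT 10;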
[
{
"msg_contents": "Hi all,\n\nI've been struggling with some performance issues with certain\nSQL queries. I was prepping a long-ish overview of my problem\nto submit, but I think I'll start out with a simple case of the\nproblem first, hopefully answers I receive will help me solve\nmy initial issue.\n\nConsider the following two queries which yield drastically different\nrun-time:\n\ndb=# select count(*) from pk_c2 b0 where b0.offer_id=7141;\n count\n-------\n 1\n(1 row)\nTime: 5139.004 ms\n\ndb=# select count(*) from pk_c2 b0 where b0.pending=true and b0.offer_id=7141;\n count\n-------\n 1\n(1 row)\nTime: 1.828 ms\n\n\nThat's 2811 times faster!\n\nJust to give you an idea of size of pk_c2 table:\n\ndb=# select count(*) from pk_c2 ;\n count\n---------\n 2158094\n(1 row)\nTime: 5275.782 ms\n\ndb=# select count(*) from pk_c2 where pending=true;\n count\n-------\n 51\n(1 row)\nTime: 5073.699 ms\n\n\n\ndb=# explain select count(*) from pk_c2 b0 where b0.offer_id=7141;\nQUERY PLAN\n---------------------------------------------------------------------------\n Aggregate (cost=44992.78..44992.78 rows=1 width=0)\n -> Seq Scan on pk_c2 b0 (cost=0.00..44962.50 rows=12109 width=0)\n Filter: (offer_id = 7141)\n(3 rows)\nTime: 1.350 ms\n\ndb=# explain select count(*) from pk_c2 b0 where b0.pending=true and\nb0.offer_id=7141;\nQUERY PLAN\n----------------------------------------------------------------------------------------\n Aggregate (cost=45973.10..45973.10 rows=1 width=0)\n -> Index Scan using pk_boidx on pk_c2 b0 (cost=0.00..45973.09\nrows=1 width=0)\n Index Cond: (offer_id = 7141)\n Filter: (pending = true)\n(4 rows)\nTime: 1.784 ms\n\n\n\nThe table has indexes for both 'offer_id' and '(pending=true)':\n\nIndexes:\n \"pk_boidx\" btree (offer_id)\n \"pk_bpidx\" btree (((pending = true)))\n\nSo, why would the planner chose to use the index on the second query\nand not on the first?\n\n\nNote that I am able to fool the planner into using an Index scan\non offer_id by adding a silly new condition in the where clause of\nthe first form of the query:\n\n\ndb=# explain select count(*) from pk_c2 b0 where b0.offer_id=7141 and oid > 1;\nQUERY PLAN\n-------------------------------------------------------------------------------------------\n Aggregate (cost=45983.19..45983.19 rows=1 width=0)\n -> Index Scan using pk_boidx on pk_c2 b0 (cost=0.00..45973.09\nrows=4037 width=0)\n Index Cond: (offer_id = 7141)\n Filter: (oid > 1::oid)\n(4 rows)\nTime: 27.301 ms\n\ndb=# select count(*) from pk_c2 b0 where b0.offer_id=7141 and oid > 1;\n count\n-------\n 1\n(1 row)\nTime: 1.900 ms\n\nWhat gives?\n\nThis seems just too hokey for my taste.\n\n--patrick\n\n\n\ndb=# select version();\n version\n-------------------------------------------------------------------------\n PostgreSQL 7.4.12 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.6\n",
"msg_date": "Tue, 18 Apr 2006 18:02:27 -0700",
"msg_from": "\"patrick keshishian\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner doesn't chose Index - (slow select)"
},
{
"msg_contents": "\"patrick keshishian\" <[email protected]> writes:\n> I've been struggling with some performance issues with certain\n> SQL queries. I was prepping a long-ish overview of my problem\n> to submit, but I think I'll start out with a simple case of the\n> problem first, hopefully answers I receive will help me solve\n> my initial issue.\n\nHave you ANALYZEd this table lately?\n\n> db=# select count(*) from pk_c2 b0 where b0.offer_id=7141;\n> count\n> -------\n> 1\n> (1 row)\n\nThe planner is evidently estimating that there are 12109 such rows,\nnot 1, which is the reason for its reluctance to use an indexscan.\nGenerally the only reason for it to be off that far on such a simple\nstatistical issue is if you haven't updated the stats in a long time.\n(If you've got a really skewed data distribution for offer_id, you\nmight need to raise the statistics target for it.)\n\n> The table has indexes for both 'offer_id' and '(pending=true)':\n\n> Indexes:\n> \"pk_boidx\" btree (offer_id)\n> \"pk_bpidx\" btree (((pending = true)))\n\nThe expression index on (pending = true) won't do you any good,\nunless you spell your query in a weird way like\n\t... WHERE (pending = true) = true\nI'd suggest a plain index on \"pending\" instead.\n\n> db=# select version();\n> PostgreSQL 7.4.12 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.6\n\nYou might want to think about an update, too. 7.4 is pretty long in the\ntooth.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Apr 2006 22:19:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner doesn't chose Index - (slow select) "
},
{
"msg_contents": "Tom,\n\nYou are absolutely correct about not having run ANALYZE\non the particular table.\n\nIn my attempt to create a simple \"test case\" I created that\ntable (pk_c2) from the original and had not run ANALYZE\non it, even though, ANALYZE had been run prior to building\nthat table.\n\nThe problem on the test table and the simple select count(*)\nis no longer there (after ANALYZE).\n\nThe original issue, however, is still there. I'm stumped as\nhow to formulate my question without having to write a\nlengthy essay.\n\nAs to upgrading from 7.4, I hear you, but I'm trying to support\na deployed product.\n\nThanks again for your input,\n--patrick\n\n\n\n\nOn 4/18/06, Tom Lane <[email protected]> wrote:\n> \"patrick keshishian\" <[email protected]> writes:\n> > I've been struggling with some performance issues with certain\n> > SQL queries. I was prepping a long-ish overview of my problem\n> > to submit, but I think I'll start out with a simple case of the\n> > problem first, hopefully answers I receive will help me solve\n> > my initial issue.\n>\n> Have you ANALYZEd this table lately?\n>\n> > db=# select count(*) from pk_c2 b0 where b0.offer_id=7141;\n> > count\n> > -------\n> > 1\n> > (1 row)\n>\n> The planner is evidently estimating that there are 12109 such rows,\n> not 1, which is the reason for its reluctance to use an indexscan.\n> Generally the only reason for it to be off that far on such a simple\n> statistical issue is if you haven't updated the stats in a long time.\n> (If you've got a really skewed data distribution for offer_id, you\n> might need to raise the statistics target for it.)\n>\n> > The table has indexes for both 'offer_id' and '(pending=true)':\n>\n> > Indexes:\n> > \"pk_boidx\" btree (offer_id)\n> > \"pk_bpidx\" btree (((pending = true)))\n>\n> The expression index on (pending = true) won't do you any good,\n> unless you spell your query in a weird way like\n> ... WHERE (pending = true) = true\n> I'd suggest a plain index on \"pending\" instead.\n>\n> > db=# select version();\n> > PostgreSQL 7.4.12 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.6\n>\n> You might want to think about an update, too. 7.4 is pretty long in the\n> tooth.\n>\n> regards, tom lane\n",
"msg_date": "Wed, 19 Apr 2006 12:28:26 -0700",
"msg_from": "\"patrick keshishian\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner doesn't chose Index - (slow select)"
}
] |
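A sketch of the steps suggested above, using the table and index names from the thread; the statistics target value is only an example.

-- Refresh the planner's statistics first:
ANALYZE pk_c2;

-- If offer_id is heavily skewed, raise its statistics target (7.4's
-- default is 10) and re-analyze:
ALTER TABLE pk_c2 ALTER COLUMN offer_id SET STATISTICS 100;
ANALYZE pk_c2;

-- Replace the expression index on (pending = true) with a plain index
-- on the boolean column, as suggested:
DROP INDEX pk_bpidx;
CREATE INDEX pk_bpidx ON pk_c2 (pending);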
[
{
"msg_contents": "> > Bacula already serializes access to the database (they have \n> to support \n> > mysql/myisam), so this shouldn't help.\n> \n> Ouch, that hurts.\n> \n> To support mysql, they break performance for _every other_ \n> database system?\n\nActually, it probably helps on SQLite as well. And considering they only\nsupport postgresql, mysql and sqlite, there is some merit to it from\ntheir perspective.\n\nYou can find a thread about it in the bacula archives from a month or\ntwo back.\n\n> <cynism>\n> Now, I understand how the mysql people manage to spread the \n> legend of mysql being fast. They convince software developers \n> to thwart all others.\n> </>\n\nYes, same as the fact that most (at least FOSS) web project-du-jour are\n\"dumbed down\" to the mysql featureset. (And not just mysql, but\nmysql-lowest-common-factors, which means myisam etc)\n\n\n> > Actually, it might well hurt by introducing extra delays.\n> \n> Well, if you read the documentation, you will see that it \n> will only wait if there are at least commit_siblings other \n> transactions active. So when Bacula serializes access, there \n> will be no delays, as there is only a single transaction alive.\n\nHm. Right. Well, it still won't help :-)\n\n//Magnus\n",
"msg_date": "Wed, 19 Apr 2006 14:11:06 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "Hi, Magnus,\n\nMagnus Hagander wrote:\n\n>>To support mysql, they break performance for _every other_ \n>>database system?\n> Actually, it probably helps on SQLite as well.\n\nAFAICS from the FAQ http://www.sqlite.org/faq.html#q7 and #q8, SQLite\ndoes serialize itsself.\n\n> And considering they only\n> support postgresql, mysql and sqlite, there is some merit to it from\n> their perspective.\n\nOkay, I understand, but I hesitate to endorse it.\n\nIMHO, they should write their application in a \"normal\" way, and then\nhave the serialization etc. encapsulated in the database driver\ninterface (possibly a wrapper class or so).\n\n>><cynism>\n>>Now, I understand how the mysql people manage to spread the \n>>legend of mysql being fast. They convince software developers \n>>to thwart all others.\n>></>\n> Yes, same as the fact that most (at least FOSS) web project-du-jour are\n> \"dumbed down\" to the mysql featureset. (And not just mysql, but\n> mysql-lowest-common-factors, which means myisam etc)\n\nWell, most of those projects don't need a database, they need a bunch of\ntables and a lock.\n\nHeck, they even use client-side SELECT-loops in PHP instead of a JOIN\nbecause \"I always confuse left and right\".\n\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 19 Apr 2006 14:50:26 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization?"
},
{
"msg_contents": "\"Magnus Hagander\" <[email protected]> writes:\n>>> Actually, [commit_delay] might well hurt by introducing extra delays.\n>> \n>> Well, if you read the documentation, you will see that it \n>> will only wait if there are at least commit_siblings other \n>> transactions active. So when Bacula serializes access, there \n>> will be no delays, as there is only a single transaction alive.\n\n> Hm. Right. Well, it still won't help :-)\n\nIt could actually hurt, because nonzero time is required to go look\nwhether there are any other active transactions. I'm not sure whether\nthis overhead is enough to be measurable when there's only one backend\nrunning, but it might be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Apr 2006 11:14:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts optimization? "
}
] |
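For reference, a sketch of how the settings under discussion could be experimented with; the values are illustrative, and in the 8.x releases discussed here both parameters can be changed per session. As noted above, with a single serialized writer this is not expected to help and may even add a small overhead.

-- commit_delay is in microseconds; commit_siblings is the minimum number
-- of other open transactions required before the delay is applied.
SET commit_delay = 10000;
SET commit_siblings = 5;

-- Back to the server defaults for this session:
RESET commit_delay;
RESET commit_siblings;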
[
{
"msg_contents": "Greetings,\n\nI'd like to introduce a new readahead framework of the linux kernel:\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html\n\nHOW IT WORKS\n\nIn adaptive readahead, the context based method may be of particular\ninterest to postgresql users. It works by peeking into the file cache\nand check if there are any history pages present or accessed. In this\nway it can detect almost all forms of sequential / semi-sequential read\npatterns, e.g.\n\t- parallel / interleaved sequential scans on one file\n\t- sequential reads across file open/close\n\t- mixed sequential / random accesses\n\t- sparse / skimming sequential read\n\nIt also have methods to detect some less common cases:\n\t- reading backward\n\t- seeking all over reading N pages\n\nWAYS TO BENEFIT FROM IT\n\nAs we know, postgresql relies on the kernel to do proper readahead.\nThe adaptive readahead might help performance in the following cases:\n\t- concurrent sequential scans\n\t- sequential scan on a fragmented table\n\t (some DBs suffer from this problem, not sure for pgsql)\n\t- index scan with clustered matches\n\t- index scan on majority rows (in case the planner goes wrong)\n\nTUNABLE PARAMETERS\n\nThere are two parameters which are described in this email:\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1024.html\n\nHere are the more oriented guidelines for postgresql users:\n\n- /proc/sys/vm/readahead_ratio\nSince most DB servers are bounty of memory, the danger of readahead\nthrashing is near to zero. In this case, you can set readahead_ratio to\n100(or even 200:), which helps the readahead window to scale up rapidly.\n\n- /proc/sys/vm/readahead_hit_rate\nSparse sequential reads are read patterns like {0, 2, 4, 5, 8, 11, ...}.\nIn this case we might prefer to do readahead to get good I/O performance\nwith the overhead of some useless pages. But if you prefer not to do so,\nset readahead_hit_rate to 1 will disable this feature.\n\n- /sys/block/sd<X>/queue/read_ahead_kb\nSet it to a large value(e.g. 4096) as you used to do.\nRAID users might want to use a bigger number.\n\nTRYING IT OUT\n\nThe latest patch for stable kernels can be downloaded here:\nhttp://www.vanheusden.com/ara/\n\nBefore compiling, make sure that the following options are enabled:\nProcessor type and features -> Adaptive file readahead\nProcessor type and features -> Readahead debug and accounting\n\n\nThe patch is open to fine tuning advices.\nComments and benchmarking results are highly appreciated.\n\nThanks,\nWu\n",
"msg_date": "Thu, 20 Apr 2006 10:08:53 +0800",
"msg_from": "Wu Fengguang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Introducing a new linux readahead framework"
}
] |
[
{
"msg_contents": "Hi,\n\nI am running on postgres 7.4.6.\nI did a vacuum analyze on the database but there was no change.\nI Attached here a file with details about the tables, the queries and\nthe Explain analyze plans.\nHope this can be helpful to analyze my problem\n\n10x\nDoron",
"msg_date": "Thu, 20 Apr 2006 10:23:57 +0200",
"msg_from": "\"Doron Baranes\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perfrmance Problems (7.4.6)"
},
{
"msg_contents": "I think that the problem is the GROUP BY (datetime) that is \ndate_trunc('hour'::text, i.entry_time)\nYou should create an indexe with this expression (if its possible).\n\nhttp://www.postgresql.org/docs/7.4/interactive/indexes-expressional.html\n\nIf is not possible, I would create a column with value \ndate_trunc('hour'::text, i.entry_time) of each row and then index it.\n\nHope this helps :)\n\nDoron Baranes wrote:\n\n>Hi,\n>\n>I am running on postgres 7.4.6.\n>I did a vacuum analyze on the database but there was no change.\n>I Attached here a file with details about the tables, the queries and\n>the Explain analyze plans.\n>Hope this can be helpful to analyze my problem\n>\n>10x\n>Doron\n> \n>\n>------------------------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n>\n\n",
"msg_date": "Thu, 20 Apr 2006 12:48:38 +0200",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perfrmance Problems (7.4.6)"
}
] |
[
{
"msg_contents": "Ok. But that means I need a trigger on the original column to update the\nnew column on each insert/update and that overhead.\n\n-----Original Message-----\nFrom: Ruben Rubio Rey [mailto:[email protected]] \nSent: Thursday, April 20, 2006 12:49 PM\nTo: Doron Baranes; [email protected]\nSubject: Re: [PERFORM] Perfrmance Problems (7.4.6)\n\nI think that the problem is the GROUP BY (datetime) that is \ndate_trunc('hour'::text, i.entry_time)\nYou should create an indexe with this expression (if its possible).\n\nhttp://www.postgresql.org/docs/7.4/interactive/indexes-expressional.html\n\nIf is not possible, I would create a column with value \ndate_trunc('hour'::text, i.entry_time) of each row and then index it.\n\nHope this helps :)\n\nDoron Baranes wrote:\n\n>Hi,\n>\n>I am running on postgres 7.4.6.\n>I did a vacuum analyze on the database but there was no change.\n>I Attached here a file with details about the tables, the queries and\n>the Explain analyze plans.\n>Hope this can be helpful to analyze my problem\n>\n>10x\n>Doron\n> \n>\n>-----------------------------------------------------------------------\n-\n>\n>\n>---------------------------(end of\nbroadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n>\n\n",
"msg_date": "Thu, 20 Apr 2006 14:57:50 +0200",
"msg_from": "\"Doron Baranes\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perfrmance Problems (7.4.6)"
},
{
"msg_contents": "Did you tried to index the expression?\nDid it work?\n\nDoron Baranes wrote:\n\n>Ok. But that means I need a trigger on the original column to update the\n>new column on each insert/update and that overhead.\n>\n>-----Original Message-----\n>From: Ruben Rubio Rey [mailto:[email protected]] \n>Sent: Thursday, April 20, 2006 12:49 PM\n>To: Doron Baranes; [email protected]\n>Subject: Re: [PERFORM] Perfrmance Problems (7.4.6)\n>\n>I think that the problem is the GROUP BY (datetime) that is \n>date_trunc('hour'::text, i.entry_time)\n>You should create an indexe with this expression (if its possible).\n>\n>http://www.postgresql.org/docs/7.4/interactive/indexes-expressional.html\n>\n>If is not possible, I would create a column with value \n>date_trunc('hour'::text, i.entry_time) of each row and then index it.\n>\n>Hope this helps :)\n>\n>Doron Baranes wrote:\n>\n> \n>\n>>Hi,\n>>\n>>I am running on postgres 7.4.6.\n>>I did a vacuum analyze on the database but there was no change.\n>>I Attached here a file with details about the tables, the queries and\n>>the Explain analyze plans.\n>>Hope this can be helpful to analyze my problem\n>>\n>>10x\n>>Doron\n>> \n>>\n>>-----------------------------------------------------------------------\n>> \n>>\n>-\n> \n>\n>>---------------------------(end of\n>> \n>>\n>broadcast)---------------------------\n> \n>\n>>TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>> \n>>\n>> \n>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n>\n>\n> \n>\n\n",
"msg_date": "Thu, 20 Apr 2006 16:54:21 +0200",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perfrmance Problems (7.4.6)"
}
] |
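A sketch of the expression index being suggested above, which avoids the extra column and the trigger entirely. The real table name is not shown in the thread (only the alias i), and this assumes entry_time is timestamp without time zone; date_trunc() on timestamp with time zone is not immutable and cannot be indexed directly.

CREATE INDEX i_entry_time_hour_idx
    ON i ( date_trunc('hour'::text, entry_time) );
ANALYZE i;

-- Queries that filter or sort on exactly the same expression can then use
-- the index, e.g.
--   ... WHERE date_trunc('hour'::text, entry_time) = '2006-01-01 10:00:00'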
[
{
"msg_contents": "Hi,\n\n \n\nI was just wondering whether anyone has had success with storing more\nthan 1TB of data with PostgreSQL and how they have found the\nperformance.\n\n \n\nWe need a database that can store in excess of this amount and still\nshow good performance. We will probably be implementing several tables\nwith foreign keys and also indexes which will obviously impact on both\ndata size and performance too.\n\n \n\nMany thanks in advance,\n\n \n\nSimon\nVisit our Website at http://www.rm.com\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RM does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RM. If this email has come to you in error, please delete it, along with any attachments. Please note that RM may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\nThis email and any attachments may contain confidential information belonging to RM. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RM and the disclosure of which would be prejudicial to RM's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.\n\n\n\n\n\n\n\n\n\nHi,\n \nI was just wondering whether anyone has had success with\nstoring more than 1TB of data with PostgreSQL and how they have found the\nperformance.\n \nWe need a database that can store in excess of this amount\nand still show good performance. We will probably be implementing several\ntables with foreign keys and also indexes which will obviously impact on both\ndata size and performance too.\n \nMany thanks in advance,\n \nSimon\n\n\n\nVisit our Website at www.rm.com\n\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RM does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RM. If this email has come to you in error, please delete it, along with any attachments. Please note that RM may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\n\nThis email and any attachments may contain confidential information belonging to RM. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RM and the disclosure of which would be prejudicial to RM's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.",
"msg_date": "Thu, 20 Apr 2006 14:18:58 +0100",
"msg_from": "\"Simon Dale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Quick Performance Poll"
},
{
"msg_contents": "\nSimon,\n\nI have many databases over 1T with the largest being ~6T. All of my databases store telecom data, such as call detail\nrecords. The access is very fast when looking for a small subset of the data. For servers, I am using white box intel\nXEON and P4 systems with SATA disks, 4G of memory. SCSI is out of our price range, but if I had unlimited $ I would go\nwith SCSI /SCSI raid instead.\n\nJim\n\n---------- Original Message -----------\nFrom: \"Simon Dale\" <[email protected]>\nTo: <[email protected]>\nSent: Thu, 20 Apr 2006 14:18:58 +0100\nSubject: [PERFORM] Quick Performance Poll\n\n> Hi,\n> \n> I was just wondering whether anyone has had success with storing more\n> than 1TB of data with PostgreSQL and how they have found the\n> performance.\n> \n> We need a database that can store in excess of this amount and still\n> show good performance. We will probably be implementing several tables\n> with foreign keys and also indexes which will obviously impact on both\n> data size and performance too.\n> \n> Many thanks in advance,\n> \n> Simon\n> Visit our Website at http://www.rm.com\n> \n> This message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply \n> the information for the intended purpose only. Internet communications are not secure; therefore, RM does not \n> accept legal responsibility for the contents of this message. Any views or opinions presented are those of the \n> author only and not of RM. If this email has come to you in error, please delete it, along with any \n> attachments. Please note that RM may intercept incoming and outgoing email communications.\n> \n> Freedom of Information Act 2000\n> This email and any attachments may contain confidential information belonging to RM. Where the email and any \n> attachments do contain information of a confidential nature, including without limitation information relating \n> to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information \n> Act 2000 as information provided in confidence by RM and the disclosure of which would be prejudicial to RM's \n> commercial interests.\n> \n> This email has been scanned for viruses by Trend ScanMail.\n------- End of Original Message -------\n\n",
"msg_date": "Thu, 20 Apr 2006 09:36:25 -0400",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Jim,\n\nOn 4/20/06 6:36 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n\n> The access is very fast when looking for a small subset of the data.\n\nI guess you are not using indexes because building a (non bitmap) index on\n6TB on a single machine would take days if not weeks.\n\nSo if you are using table partitioning, do you have to refer to each child\ntable separately in your queries?\n\n- Luke\n\n\n",
"msg_date": "Thu, 20 Apr 2006 07:31:33 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "First of all this is NOT a single table and yes I am using partitioning and the constaint exclusion stuff. the largest\nset of tables is over 2T. I have not had to rebuild the biggest database yet, but for a smaller one ~1T the restore\ntakes about 12 hours including many indexes on both large and small tables\n\nJim\n\n\n\n---------- Original Message -----------\nFrom: \"Luke Lonergan\" <[email protected]>\nTo: [email protected], \"Simon Dale\" <[email protected]>, [email protected]\nSent: Thu, 20 Apr 2006 07:31:33 -0700\nSubject: Re: [PERFORM] Quick Performance Poll\n\n> Jim,\n> \n> On 4/20/06 6:36 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n> \n> > The access is very fast when looking for a small subset of the data.\n> \n> I guess you are not using indexes because building a (non bitmap) index on\n> 6TB on a single machine would take days if not weeks.\n> \n> So if you are using table partitioning, do you have to refer to each child\n> table separately in your queries?\n> \n> - Luke\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n------- End of Original Message -------\n\n",
"msg_date": "Thu, 20 Apr 2006 10:40:50 -0400",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Jim,\n\nOn 4/20/06 7:40 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n\n> First of all this is NOT a single table and yes I am using partitioning and\n> the constaint exclusion stuff. the largest\n> set of tables is over 2T. I have not had to rebuild the biggest database yet,\n> but for a smaller one ~1T the restore\n> takes about 12 hours including many indexes on both large and small tables\n\nYou would probably benefit greatly from the new on-disk bitmap index feature\nin Bizgres Open Source. It's 8.1 plus the sort speed improvement and\non-disk bitmap index.\n\nIndex creation and sizes for the binary version are in the table below (from\na performance report on bizgres network. The version in CVS tip on\npgfoundry is much faster on index creation as well.\n\nThe current drawback to bitmap index is that it isn't very maintainable\nunder insert/update, although it is safe for those operations. For now, you\nhave to drop index, do inserts/updates, rebuild index.\n\nWe'll have a version that is maintained for insert/update next.\n\n- Luke\n\n # Indexed Columns Create Time (seconds) Space Used (MBs)\n BITMAP BTREE BITMAP BTREE\n 1 L_SHIPMODE 454.8 2217.1 58 1804\n 2 L_QUANTITY 547.2 937.8 117 1804\n 3 L_LINENUMBER 374.5 412.4 59 1285\n 4 L_SHIPMODE, L_QUANTITY 948.7 2933.4 176 2845\n 5 O_ORDERSTATUS 83.5 241.3 5 321\n 6 O_ORDERPRIORITY 108.5 679.1 11 580\n 7 C_MKTSEGMENT 10.9 51.3 1 45\n 8 C_NATIONKEY 8.3 9.3 2 32 \n\n\n",
"msg_date": "Thu, 20 Apr 2006 08:03:10 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "\nI have been following your work with great interest. I believe I spoke to someone from Greenplum at linux world in\nBoston a couple of weeks ago.\n\n---------- Original Message -----------\nFrom: \"Luke Lonergan\" <[email protected]>\nTo: [email protected], \"Simon Dale\" <[email protected]>, [email protected]\nSent: Thu, 20 Apr 2006 08:03:10 -0700\nSubject: Re: [PERFORM] Quick Performance Poll\n\n> Jim,\n> \n> On 4/20/06 7:40 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n> \n> > First of all this is NOT a single table and yes I am using partitioning and\n> > the constaint exclusion stuff. the largest\n> > set of tables is over 2T. I have not had to rebuild the biggest database yet,\n> > but for a smaller one ~1T the restore\n> > takes about 12 hours including many indexes on both large and small tables\n> \n> You would probably benefit greatly from the new on-disk bitmap index feature\n> in Bizgres Open Source. It's 8.1 plus the sort speed improvement and\n> on-disk bitmap index.\n> \n> Index creation and sizes for the binary version are in the table below (from\n> a performance report on bizgres network. The version in CVS tip on\n> pgfoundry is much faster on index creation as well.\n> \n> The current drawback to bitmap index is that it isn't very maintainable\n> under insert/update, although it is safe for those operations. For now, you\n> have to drop index, do inserts/updates, rebuild index.\n> \n> We'll have a version that is maintained for insert/update next.\n> \n> - Luke\n> \n> # Indexed Columns Create Time (seconds) Space Used (MBs)\n> BITMAP BTREE BITMAP BTREE\n> 1 L_SHIPMODE 454.8 2217.1 58 1804\n> 2 L_QUANTITY 547.2 937.8 117 1804\n> 3 L_LINENUMBER 374.5 412.4 59 1285\n> 4 L_SHIPMODE, L_QUANTITY 948.7 2933.4 176 2845\n> 5 O_ORDERSTATUS 83.5 241.3 5 321\n> 6 O_ORDERPRIORITY 108.5 679.1 11 580\n> 7 C_MKTSEGMENT 10.9 51.3 1 45\n> 8 C_NATIONKEY 8.3 9.3 2 32\n------- End of Original Message -------\n\n",
"msg_date": "Thu, 20 Apr 2006 11:06:14 -0400",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Hi, Luke,\n\nLuke Lonergan wrote:\n\n> The current drawback to bitmap index is that it isn't very maintainable\n> under insert/update, although it is safe for those operations. For now, you\n> have to drop index, do inserts/updates, rebuild index.\n\nSo they effectively turn the table into a read-only table for now.\n\nAre they capable to index custom datatypes like the PostGIS geometries\nthat use the GIST mechanism? This could probably speed up our Geo\nDatabases for Map rendering, containing static data that is updated\napprox. 2 times per year.\n\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 20 Apr 2006 17:11:25 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Markus,\n\nOn 4/20/06 8:11 AM, \"Markus Schaber\" <[email protected]> wrote:\n\n> Are they capable to index custom datatypes like the PostGIS geometries\n> that use the GIST mechanism? This could probably speed up our Geo\n> Databases for Map rendering, containing static data that is updated\n> approx. 2 times per year.\n\nShould work fine - the other limitation is cardinality, or number of unique\nvalues in the column being indexed. A reasonable limit is about 10,000\nunique values in the column.\n\nWe're also going to improve this aspect of the implementation, but the\nprogress might take the useful limit to 300,000 or so.\n\n- Luke\n\n\n",
"msg_date": "Thu, 20 Apr 2006 08:16:34 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Interested in doing a case study for the website?\n\nOn Thu, Apr 20, 2006 at 09:36:25AM -0400, Jim Buttafuoco wrote:\n> \n> Simon,\n> \n> I have many databases over 1T with the largest being ~6T. All of my databases store telecom data, such as call detail\n> records. The access is very fast when looking for a small subset of the data. For servers, I am using white box intel\n> XEON and P4 systems with SATA disks, 4G of memory. SCSI is out of our price range, but if I had unlimited $ I would go\n> with SCSI /SCSI raid instead.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 20 Apr 2006 11:55:32 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Hi Luke, \nI (still) haven't tried Bizgres, but what do you mean with \"The current drawback to bitmap index is that it isn't very\nmaintainable under insert/update, although it is safe for those operations\"?\n\nDo you mean that INSERT/UPDATE operations against bitmap indexes are imperformant ?\nIf yes, to what extend ?\n\nOr you mean that bitmap index corruption is possible when issueing DML againts BMP indexes?\nOr BMP indexes are growing too fast as a result of DML ?\n\nI am asking this question because Oracle needed 3 years to solve its BMP index problems (BMP index corruption/ space\nusage explosion when several processes are performing DML operations ).\n\nIs Bizgres implementation suffering from this kind child deseases ?\n\nRegards . Milen \n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Luke Lonergan\nSent: Thursday, April 20, 2006 5:03 PM\nTo: [email protected]; Simon Dale; [email protected]\nSubject: Re: [PERFORM] Quick Performance Poll\n\n\nJim,\n\nOn 4/20/06 7:40 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n\n> First of all this is NOT a single table and yes I am using \n> partitioning and the constaint exclusion stuff. the largest set of \n> tables is over 2T. I have not had to rebuild the biggest database \n> yet, but for a smaller one ~1T the restore takes about 12 hours \n> including many indexes on both large and small tables\n\nYou would probably benefit greatly from the new on-disk bitmap index feature in Bizgres Open Source. It's 8.1 plus the\nsort speed improvement and on-disk bitmap index.\n\nIndex creation and sizes for the binary version are in the table below (from a performance report on bizgres network.\nThe version in CVS tip on pgfoundry is much faster on index creation as well.\n\nThe current drawback to bitmap index is that it isn't very maintainable under insert/update, although it is safe for\nthose operations. For now, you have to drop index, do inserts/updates, rebuild index.\n\nWe'll have a version that is maintained for insert/update next.\n\n- Luke\n\n # Indexed Columns Create Time (seconds) Space Used (MBs)\n BITMAP BTREE BITMAP BTREE\n 1 L_SHIPMODE 454.8 2217.1 58 1804\n 2 L_QUANTITY 547.2 937.8 117 1804\n 3 L_LINENUMBER 374.5 412.4 59 1285\n 4 L_SHIPMODE, L_QUANTITY 948.7 2933.4 176 2845\n 5 O_ORDERSTATUS 83.5 241.3 5 321\n 6 O_ORDERPRIORITY 108.5 679.1 11 580\n 7 C_MKTSEGMENT 10.9 51.3 1 45\n 8 C_NATIONKEY 8.3 9.3 2 32 \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n",
"msg_date": "Thu, 20 Apr 2006 21:45:09 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Milen,\n\nOn 4/20/06 12:45 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> I (still) haven't tried Bizgres, but what do you mean with \"The current\n> drawback to bitmap index is that it isn't very\n> maintainable under insert/update, although it is safe for those operations\"?\n\nYes.\n \n> Do you mean that INSERT/UPDATE operations against bitmap indexes are\n> imperformant ?\n> If yes, to what extend ?\n\nInsert/Update (but not delete) operations will often invalidate a bitmap\nindex in our current implementation because we have not implemented a\nmaintenance method for them when insertions re-use TIDs. We are in the\nplanning stages for an update that will fix this.\n \n> Or you mean that bitmap index corruption is possible when issueing DML\n> againts BMP indexes?\n\nWe check for the case of an insertion that causes a re-used TID and issue an\nerror that indicates the index should be removed before the operation is\nretried. This isn't particularly useful for cases where inserts occur\nfrequently, so the current use-case if for tables where DML should be done\nin batches after removing the index, then the index re-applied.\n \n> I am asking this question because Oracle needed 3 years to solve its BMP index\n> problems (BMP index corruption/ space\n> usage explosion when several processes are performing DML operations ).\n\nWe will be much faster than that! Concurrency will be less than ideal with\nour maintenance approach initially, but there shouldn't be a corruption\nproblem.\n \n> Is Bizgres implementation suffering from this kind child deseases ?\n\nSneeze, cough.\n\n- Luke\n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Luke Lonergan\n> Sent: Thursday, April 20, 2006 5:03 PM\n> To: [email protected]; Simon Dale; [email protected]\n> Subject: Re: [PERFORM] Quick Performance Poll\n> \n> \n> Jim,\n> \n> On 4/20/06 7:40 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n> \n>> First of all this is NOT a single table and yes I am using\n>> partitioning and the constaint exclusion stuff. the largest set of\n>> tables is over 2T. I have not had to rebuild the biggest database\n>> yet, but for a smaller one ~1T the restore takes about 12 hours\n>> including many indexes on both large and small tables\n> \n> You would probably benefit greatly from the new on-disk bitmap index feature\n> in Bizgres Open Source. It's 8.1 plus the\n> sort speed improvement and on-disk bitmap index.\n> \n> Index creation and sizes for the binary version are in the table below (from a\n> performance report on bizgres network.\n> The version in CVS tip on pgfoundry is much faster on index creation as well.\n> \n> The current drawback to bitmap index is that it isn't very maintainable under\n> insert/update, although it is safe for\n> those operations. 
For now, you have to drop index, do inserts/updates,\n> rebuild index.\n> \n> We'll have a version that is maintained for insert/update next.\n> \n> - Luke\n> \n> # Indexed Columns Create Time (seconds) Space Used (MBs)\n> BITMAP BTREE BITMAP BTREE\n> 1 L_SHIPMODE 454.8 2217.1 58 1804\n> 2 L_QUANTITY 547.2 937.8 117 1804\n> 3 L_LINENUMBER 374.5 412.4 59 1285\n> 4 L_SHIPMODE, L_QUANTITY 948.7 2933.4 176 2845\n> 5 O_ORDERSTATUS 83.5 241.3 5 321\n> 6 O_ORDERPRIORITY 108.5 679.1 11 580\n> 7 C_MKTSEGMENT 10.9 51.3 1 45\n> 8 C_NATIONKEY 8.3 9.3 2 32\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n> \n\n\n\n",
"msg_date": "Thu, 20 Apr 2006 14:27:59 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
},
{
"msg_contents": "Hi Luke, \nThank you very much for your prompt reply.\nI have got ***much*** more information than expected ;)\n\nObviously there are thing to be improved in the current implementation of BMP indexes,\nBut anyway they are worth usung (I wa pretty impressed from the BMP index performance, after\nReading a PDF document on Bizgres site).\n\nThanks ahain for the information.\n\nRegards. Milen \n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Thursday, April 20, 2006 11:28 PM\nTo: Milen Kulev\nCc: [email protected]; bizgres-general\nSubject: Re: [PERFORM] Quick Performance Poll\n\n\nMilen,\n\nOn 4/20/06 12:45 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> I (still) haven't tried Bizgres, but what do you mean with \"The \n> current drawback to bitmap index is that it isn't very maintainable \n> under insert/update, although it is safe for those operations\"?\n\nYes.\n \n> Do you mean that INSERT/UPDATE operations against bitmap indexes are \n> imperformant ? If yes, to what extend ?\n\nInsert/Update (but not delete) operations will often invalidate a bitmap index in our current implementation because we\nhave not implemented a maintenance method for them when insertions re-use TIDs. We are in the planning stages for an\nupdate that will fix this.\n \n> Or you mean that bitmap index corruption is possible when issueing DML \n> againts BMP indexes?\n\nWe check for the case of an insertion that causes a re-used TID and issue an error that indicates the index should be\nremoved before the operation is retried. This isn't particularly useful for cases where inserts occur frequently, so\nthe current use-case if for tables where DML should be done in batches after removing the index, then the index\nre-applied.\n \n> I am asking this question because Oracle needed 3 years to solve its \n> BMP index problems (BMP index corruption/ space usage explosion when \n> several processes are performing DML operations ).\n\nWe will be much faster than that! Concurrency will be less than ideal with our maintenance approach initially, but\nthere shouldn't be a corruption problem.\n \n> Is Bizgres implementation suffering from this kind child deseases ?\n\nSneeze, cough.\n\n- Luke\n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Luke \n> Lonergan\n> Sent: Thursday, April 20, 2006 5:03 PM\n> To: [email protected]; Simon Dale; [email protected]\n> Subject: Re: [PERFORM] Quick Performance Poll\n> \n> \n> Jim,\n> \n> On 4/20/06 7:40 AM, \"Jim Buttafuoco\" <[email protected]> wrote:\n> \n>> First of all this is NOT a single table and yes I am using \n>> partitioning and the constaint exclusion stuff. the largest set of \n>> tables is over 2T. I have not had to rebuild the biggest database \n>> yet, but for a smaller one ~1T the restore takes about 12 hours \n>> including many indexes on both large and small tables\n> \n> You would probably benefit greatly from the new on-disk bitmap index \n> feature in Bizgres Open Source. It's 8.1 plus the sort speed \n> improvement and on-disk bitmap index.\n> \n> Index creation and sizes for the binary version are in the table below \n> (from a performance report on bizgres network. The version in CVS tip \n> on pgfoundry is much faster on index creation as well.\n> \n> The current drawback to bitmap index is that it isn't very \n> maintainable under insert/update, although it is safe for those \n> operations. 
For now, you have to drop index, do inserts/updates, \n> rebuild index.\n> \n> We'll have a version that is maintained for insert/update next.\n> \n> - Luke\n> \n> # Indexed Columns Create Time (seconds) Space Used (MBs)\n> BITMAP BTREE BITMAP BTREE\n> 1 L_SHIPMODE 454.8 2217.1 58 1804\n> 2 L_QUANTITY 547.2 937.8 117 1804\n> 3 L_LINENUMBER 374.5 412.4 59 1285\n> 4 L_SHIPMODE, L_QUANTITY 948.7 2933.4 176 2845\n> 5 O_ORDERSTATUS 83.5 241.3 5 321\n> 6 O_ORDERPRIORITY 108.5 679.1 11 580\n> 7 C_MKTSEGMENT 10.9 51.3 1 45\n> 8 C_NATIONKEY 8.3 9.3 2 32\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n> \n\n\n\n",
"msg_date": "Fri, 21 Apr 2006 07:22:24 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quick Performance Poll"
}
] |
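For readers unfamiliar with the partitioning plus constraint-exclusion setup mentioned above, here is a minimal 8.1-style sketch; the table and column names are made up and are not the poster's actual schema.

CREATE TABLE cdr (
    call_id    bigint    NOT NULL,
    call_start timestamp NOT NULL,
    duration   integer   NOT NULL
);

-- One child table per month, each with a CHECK constraint describing its range:
CREATE TABLE cdr_2006_04 (
    CHECK (call_start >= '2006-04-01' AND call_start < '2006-05-01')
) INHERITS (cdr);
CREATE TABLE cdr_2006_05 (
    CHECK (call_start >= '2006-05-01' AND call_start < '2006-06-01')
) INHERITS (cdr);

CREATE INDEX cdr_2006_04_start_idx ON cdr_2006_04 (call_start);
CREATE INDEX cdr_2006_05_start_idx ON cdr_2006_05 (call_start);

-- With constraint exclusion enabled, a query against the parent only scans
-- the children whose CHECK constraints can possibly match:
SET constraint_exclusion = on;
SELECT count(*)
  FROM cdr
 WHERE call_start >= '2006-04-10' AND call_start < '2006-04-11';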
[
{
"msg_contents": "I have copied the database from production server to my laptop (pg_dump,\netc...) to do some testing.\n\nWhile testing I have found out that one particular query is beeing much\nslower on my machine than on the server (it's not just because my laptop\nis much slower than the server), and found out that postgres is using\ndifferent plan on server than on my laptop. Both on server and on my\nlaptop is postgres-8.1.2, running on Debian (sarge on server, Ubuntu on\nmy laptop), with 2.6 kernel, I compiled postgres with gcc4 on both\nmachines.\n\nThe query is like this:\n\non the server:\n\npulitzer2=# explain analyze select code_id from ticketing_codes where\ncode_group_id = 1000 and code_value = UPPER('C7ZP2U');\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ticketing_codes_uq_value_group_id on ticketing_codes\n(cost=0.00..6.02 rows=1 width=4) (actual time=0.104..0.107 rows=1\nloops=1)\n Index Cond: (((code_value)::text = 'C7ZP2U'::text) AND (code_group_id\n= 1000))\n Total runtime: 0.148 ms\n(3 rows)\n\n\nAnd, on my laptop:\n\nsom_pulitzer2=# explain analyze select code_id from ticketing_codes\nwhere code_group_id = 1000 and code_value = UPPER('C7ZP2U');\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ticketing_codes (cost=2.01..1102.05 rows=288\nwidth=4) (actual time=88.164..88.170 rows=1 loops=1)\n Recheck Cond: (((code_value)::text = 'C7ZP2U'::text) AND\n(code_group_id = 1000))\n -> Bitmap Index Scan on ticketing_codes_uq_value_group_id\n(cost=0.00..2.01 rows=288 width=0) (actual time=54.397..54.397 rows=1\nloops=1)\n Index Cond: (((code_value)::text = 'C7ZP2U'::text) AND\n(code_group_id = 1000))\n Total runtime: 88.256 ms\n(5 rows)\n\n\n\nThis is the table ticketing_codes:\nsom_pulitzer2=# \\d ticketing_codes;\n Table \"public.ticketing_codes\"\n Column | Type |\nModifiers\n---------------+-----------------------+-------------------------------------------------------------------\n code_id | integer | not null default\nnextval('ticketing_codes_code_id_seq'::regclass)\n code_value | character varying(10) | not null\n code_group_id | integer | not null\nIndexes:\n \"ticketing_codes_pk\" PRIMARY KEY, btree (code_id)\n \"ticketing_codes_uq_value_group_id\" UNIQUE, btree (code_value,\ncode_group_id)\nForeign-key constraints:\n \"ticketing_codes_fk__ticketing_code_groups\" FOREIGN KEY\n(code_group_id) REFERENCES ticketing_code_groups(group_id)\n\n\nAnd the \\d command produces the same result on both my server and\nlaptop. \n\nThat query is beeing called from within function, the code is like this:\n\ncodeId := code_id from ticketing_codes where code_group_id = 1000 and\ncode_value = UPPER('C7ZP2U');\n\ncodeId has been declared as int4. When that query is run inside the\nfunction, it takes around 20 seconds (compared to 88 miliseconds when I\ncall it from psql). The query is that very same query, just the values\n1000 and 'C7ZP2U' are parametars for the function.\n\nSo, the second question would be why is that query much much slower when\nrun from within function? Is there a way to see an execution plan for\nthe query inside the function?\n\n\tMike\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n",
"msg_date": "Thu, 20 Apr 2006 15:51:53 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Identical query on two machines, different plans...."
},
{
"msg_contents": "You very likely forgot to run ANALYZE on your laptop after copying the\ndata. Observe the different row count estimates in the 2 plans...\n\nHTH,\nCsaba.\n\n\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using ticketing_codes_uq_value_group_id on ticketing_codes\n> (cost=0.00..6.02 rows=1 width=4) (actual time=0.104..0.107 rows=1\n ^^^^^^ \n> loops=1)\n> Index Cond: (((code_value)::text = 'C7ZP2U'::text) AND (code_group_id\n> = 1000))\n> Total runtime: 0.148 ms\n> (3 rows)\n> \n> \n> PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on ticketing_codes (cost=2.01..1102.05 rows=288\n ^^^^^^^^\n> width=4) (actual time=88.164..88.170 rows=1 loops=1)\n> Recheck Cond: (((code_value)::text = 'C7ZP2U'::text) AND\n> (code_group_id = 1000))\n> -> Bitmap Index Scan on ticketing_codes_uq_value_group_id\n> (cost=0.00..2.01 rows=288 width=0) (actual time=54.397..54.397 rows=1\n> loops=1)\n> Index Cond: (((code_value)::text = 'C7ZP2U'::text) AND\n> (code_group_id = 1000))\n> Total runtime: 88.256 ms\n> (5 rows)\n> \n\n\n",
"msg_date": "Thu, 20 Apr 2006 15:59:48 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Identical query on two machines, different plans...."
},
{
"msg_contents": "On Thu, 2006-04-20 at 15:59 +0200, Csaba Nagy wrote:\n> You very likely forgot to run ANALYZE on your laptop after copying the\n> data. Observe the different row count estimates in the 2 plans...\n> \n> HTH,\n> Csaba.\n\nSometimes I wish I am Dumbo the Elephant, so I could cover myself with\nme ears...\n\nThnx :)\n\n\tMike\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n",
"msg_date": "Thu, 20 Apr 2006 16:03:58 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Identical query on two machines, different plans...."
},
{
"msg_contents": "OK, I marked the wrong row counts, but the conclusion is the same.\n\nCheers,\nCsaba.\n\n\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using ticketing_codes_uq_value_group_id on ticketing_codes\n> > (cost=0.00..6.02 rows=1 width=4) (actual time=0.104..0.107 rows=1\n> ^^^^^^ \n> > loops=1)\n> > Index Cond: (((code_value)::text = 'C7ZP2U'::text) AND (code_group_id\n> > = 1000))\n> > Total runtime: 0.148 ms\n> > (3 rows)\n> > \n> > \n> > PLAN \n> > ----------------------------------------------------------------------------------------------------------------------------------------------\n> > Bitmap Heap Scan on ticketing_codes (cost=2.01..1102.05 rows=288\n> ^^^^^^^^\n> > width=4) (actual time=88.164..88.170 rows=1 loops=1)\n> > Recheck Cond: (((code_value)::text = 'C7ZP2U'::text) AND\n> > (code_group_id = 1000))\n> > -> Bitmap Index Scan on ticketing_codes_uq_value_group_id\n> > (cost=0.00..2.01 rows=288 width=0) (actual time=54.397..54.397 rows=1\n ^^^^^^^^\n> > loops=1)\n> > Index Cond: (((code_value)::text = 'C7ZP2U'::text) AND\n> > (code_group_id = 1000))\n> > Total runtime: 88.256 ms\n> > (5 rows)\n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Thu, 20 Apr 2006 16:06:11 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Identical query on two machines, different plans...."
}
] |
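A minimal sketch of the fix Csaba points to, run in psql against the laptop copy; the table and query come from Mario's post, while the prepared-statement name is made up here. A freshly restored dump has no planner statistics until ANALYZE (or VACUUM ANALYZE) is run, and the plan a PL/pgSQL function will get for the parameterised form of the query can be previewed with PREPARE / EXPLAIN EXECUTE.

    -- Rebuild planner statistics after restoring the dump; VACUUM ANALYZE
    -- would also reclaim dead space at the same time.
    ANALYZE ticketing_codes;

    -- Re-check the plan: the row estimate should drop from 288 to ~1 and the
    -- planner should go back to the plain index scan seen on the server.
    EXPLAIN ANALYZE
    SELECT code_id FROM ticketing_codes
    WHERE code_group_id = 1000 AND code_value = UPPER('C7ZP2U');

    -- Approximate the plan used inside the function, where 1000 and 'C7ZP2U'
    -- arrive as parameters rather than constants.
    PREPARE code_lookup(integer, text) AS
        SELECT code_id FROM ticketing_codes
        WHERE code_group_id = $1 AND code_value = $2;
    EXPLAIN EXECUTE code_lookup(1000, 'C7ZP2U');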
[
{
"msg_contents": "Hi again :)\n\nThis is a follow-up to the mega thread which made a Friday night more \ninteresting [1] - the summary is various people thought there was some \nissue with shared memory access on AIX.\n\nI then installed Debian (kernel 2.6.11) on the 8-CPU p650 (native - no \nLPAR) and saw just as woeful performance.\n\nNow I've had a chance to try a 2-CPU dualcore Opteron box, and it \n*FLIES* - the 4-way machine sits churning through our heavy \n'hotelsearch' function at ~400ms per call.\n\nBasically, this pSeries box is available until Monday lunchtime if any \npg devel wants to pop in, run tests, mess around since I am convinced \nthat the hardware itself cannot be this poor - it has to be some failing \nof pg when mixed with our dataset / load pattern.\n\ne.g. If I run 'ab -n 200 -c 4 -k http://localhost/test.php [2] with \npg_connect pointed at the pSeries, it turns in search times of ~3500ms \nwith loadavg of 4.\n\nThe same test with pg_connect pointed at the dual-Opteron turns in \n~300ms searches, with loadavg of 3.5 .. something is very very wrong \nwith the pSeries setup :)\n\nIf I crank up the heat and run apachebench with 10 hammering clients \ninstead of 4, the differences become even more stark.. pSeries: \n5000-15000ms, loadavg 9.. Opteron ~3000ms, loadavg 8. 90% of queries on \nthe Opteron conclude in under 4000ms, which maxes out at 6.5 searches \nper second. The pSeries manages 0.9 searches per second. (!)\n\nDatabases on both machines have seen a VACUUM FULL and VACUUM ANALYZE \nbefore testing, and have near-identical postgresql.conf's. (the pSeries \nhas twice the RAM)\n\nThis post is not intended to be whining that 'pg is crap on pSeries!' - \nI'm trying to make a resource available (albeit for a short time) to \nhelp fix a problem that will doubtless affect others in future - for \ncertain we're never going midrange again! :O\n\nCheers,\nGavin.\n\n[1] http://archives.postgresql.org/pgsql-performance/2006-04/msg00143.php\n[2] Trivial script which does a pg_connect, runs a random hotelsearch \nand exits.\n",
"msg_date": "Thu, 20 Apr 2006 16:13:49 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "IBM pSeries - overrated bucket of crud?"
}
] |
[
{
"msg_contents": "I'm new to PG and I'm testing default PG settings \nfor now.\n\nI have PG 8.1.3. installed with autovacuum=on.\n\nMy test table has 15830 records with 190 fields.\nI have different fields types (date, numeric, varchar,\ninteger, smallint,...).\n\nI decided to evaluate PG because I need to use schemas.\n\nFirst test I did is not very promising.\n\nI tried to update one fields in test table several times\nto see how PG react on this.\n\nI do like this:\n\nupdate table\nset field = null\n\nAfter first execute I get time 3 seconds. Then I repeat\nthis update. After each update time increase. I get\n4 sec, 7 sec, 10 sec, 12 sec, 15 sec, 18 sec, 21 sec.\n\nIs this normal (default) behaviour or I must do something\nto prevent this.\n\nRegards,\nRadovan Antloga\n\n",
"msg_date": "Thu, 20 Apr 2006 17:20:56 +0200",
"msg_from": "\"Radovan Antloga\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance decrease"
},
{
"msg_contents": "\"Radovan Antloga\" <[email protected]> writes:\n> My test table has 15830 records with 190 fields.\n\n190 fields in a table seems like rather a lot ... is that actually\nrepresentative of your intended applications?\n\n> I do like this:\n\n> update table\n> set field = null\n\nAgain, is that representative of something you'll be doing a lot in\npractice? Most apps don't often update every row of a table, in my\nexperience.\n\n> After first execute I get time 3 seconds. Then I repeat\n> this update. After each update time increase. I get\n> 4 sec, 7 sec, 10 sec, 12 sec, 15 sec, 18 sec, 21 sec.\n\nThere should be some increase because of the addition of dead rows,\nbut both the original 3 seconds and the rate of increase seem awfully\nhigh for such a small table. What are you running this on?\n\nFor comparison purposes, here's what I see on a full-table UPDATE\nof a 10000-row table on a rather slow HP box:\n\nregression=# \\timing\nTiming is on.\nregression=# create table t1 as select * from tenk1;\nSELECT\nTime: 1274.213 ms\nregression=# update t1 set unique2 = null;\nUPDATE 10000\nTime: 565.664 ms\nregression=# update t1 set unique2 = null;\nUPDATE 10000\nTime: 589.839 ms\nregression=# update t1 set unique2 = null;\nUPDATE 10000\nTime: 593.735 ms\nregression=# update t1 set unique2 = null;\nUPDATE 10000\nTime: 615.575 ms\nregression=# update t1 set unique2 = null;\nUPDATE 10000\nTime: 755.456 ms\nregression=#\n\nVacuuming brings the time back down:\n\nregression=# vacuum t1;\nVACUUM\nTime: 242.406 ms\nregression=# update t1 set unique2 = null;\nUPDATE 10000\nTime: 458.028 ms\nregression=#\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Apr 2006 11:41:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance decrease "
},
{
"msg_contents": ">190 fields in a table seems like rather a lot ... is that actually\n>representative of your intended applications?\n\nTest table is like table I use in production\nwith Firebird and Oracle db. Table has a lot of smallint\nand integer fields. As you can see I have Firebird for\nlow cost projects (small companies) and Oracle medium\nor large project.\n\n>Again, is that representative of something you'll be doing a lot in\n>practice? Most apps don't often update every row of a table, in my\n>experience.\n\nI agree with you !\nI have once or twice a month update on many records (~6000) but\nnot so many. I did not expect PG would have problems with\nupdating 15800 records.\n\nMy test was on Windows XP SP2.\nI have AMD 64 2.1 GHz cpu with\n1GB ram.\n\nRegards,\nRadovan Antloga\n\n",
"msg_date": "Thu, 20 Apr 2006 18:10:21 +0200",
"msg_from": "\"Radovan Antloga\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance decrease "
},
{
"msg_contents": "On Thu, Apr 20, 2006 at 06:10:21PM +0200, Radovan Antloga wrote:\n> I have once or twice a month update on many records (~6000) but\n> not so many. I did not expect PG would have problems with\n> updating 15800 records.\n\nAnd generally speaking, it doesn't. But you do need to ensure that\nyou're vacuuming the database frequently enough. Autovacuum is a good\nway to do that.\n\n> My test was on Windows XP SP2.\n> I have AMD 64 2.1 GHz cpu with\n> 1GB ram.\n\nOne think to keep in mind is that the windows code is rather new, so it\nis possible to find some performance issues there.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 20 Apr 2006 12:00:07 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance decrease"
},
{
"msg_contents": "On 20.04.2006, at 18:10 Uhr, Radovan Antloga wrote:\n\n> I have once or twice a month update on many records (~6000) but\n> not so many. I did not expect PG would have problems with\n> updating 15800 records.\n\nIt has no problems with that. We have a database where we often \nupdate/insert rows with about one hundred columns. No problem so far. \nPerformance is in the sub 10ms range. The whole table has about \n100000 records.\n\nDo you wrap every update in a separate transaction? I do commits \nevery 200 updates for bulk updates.\n\ncug\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development",
"msg_date": "Thu, 20 Apr 2006 23:19:44 +0200",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance decrease "
},
{
"msg_contents": "\"Radovan Antloga\" <[email protected]> writes:\n>> 190 fields in a table seems like rather a lot ... is that actually\n>> representative of your intended applications?\n\n> Test table is like table I use in production\n> with Firebird and Oracle db. Table has a lot of smallint\n> and integer fields.\n\nI did some experiments with CVS tip on updating all rows of a table with\nlots of columns --- to be specific,\n\n\tcreate table widetable(\n\t\tint1 int, text1 text, num1 numeric,\n\t\tint2 int, text2 text, num2 numeric,\n\t\tint3 int, text3 text, num3 numeric,\n\t\t...\n\t\tint59 int, text59 text, num59 numeric,\n\t\tint60 int, text60 text, num60 numeric\n\t);\n\nfor 180 columns altogether, with 16k rows of data and the test query\n\n\tupdate widetable set int30 = null;\n\nThe gprof profile looks like this:\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 19.77 1.22 1.22 _mcount\n 14.91 2.14 0.92 16385 0.06 0.06 XLogInsert\n 9.08 2.70 0.56 2932736 0.00 0.00 slot_deform_tuple\n 7.94 3.19 0.49 2965504 0.00 0.00 slot_getattr\n 6.48 3.59 0.40 2949120 0.00 0.00 ExecEvalVar\n 5.83 3.95 0.36 16384 0.02 0.02 ExecTargetList\n 4.70 4.24 0.29 16384 0.02 0.02 heap_fill_tuple\n 3.57 4.46 0.22 ExecEvalVar\n 2.43 4.61 0.15 _write_sys\n 2.27 4.75 0.14 16384 0.01 0.01 heap_compute_data_size\n 1.62 4.85 0.10 noshlibs\n 1.46 4.94 0.09 16384 0.01 0.03 heap_form_tuple\n 1.30 5.02 0.08 16384 0.00 0.01 ExecGetJunkAttribute\n 1.30 5.10 0.08 encore\n 1.13 5.17 0.07 16384 0.00 0.00 ExecFilterJunk\n 1.13 5.24 0.07 chunk2\n\nThe large number of calls to slot_deform_tuple() is annoying --- ideally\nthere'd be only one per row. But what actually happens is that the\nExecVariableList() optimization is disabled by the presence of one\nnon-user attribute in the scan's targetlist (ie, ctid, which is needed\nby the top-level executor to do the UPDATE), not to mention that the\nattribute(s) being updated will have non-Var expressions anyway. So we\nexecute the target list the naive way, and because the Vars referencing\nthe not-updated columns appear sequentially in the tlist, that means\neach ExecEvalVar/slot_getattr ends up calling slot_deform_tuple again to\ndecipher just one more column of the tuple.\n\nThis is just an O(N) penalty, not O(N^2), but still it's pretty annoying\nconsidering that all the infrastructure is there to do better. If we\nwere to determine the max attribute number to be fetched and call\nslot_getsomeattrs() up front (as happens in the ExecVariableList case)\nthen we could save a significant constant factor --- perhaps as much as\n10% of the runtime in this case.\n\nThe trick with such \"optimizations\" is to not turn them into\npessimizations --- if we decode attributes that end up not getting\nfetched then we aren't saving cycles. So I'm thinking of tying this\nspecifically to the scan node's targetlist and only doing the\nslot_getsomeattrs() call when we have decided to evaluate the\ntargetlist. Any columns referenced as part of the node's qual\nconditions wouldn't participate in the improvement. We could\nalternatively do the slot_getsomeattrs() call before evaluating the\nquals, but I'm worried that this would be a loss in the case where\nthe qual condition fails and so the targetlist is never evaluated.\n\nComments, better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Apr 2006 20:30:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "UPDATE on many-column tables (was Re: [PERFORM] Performance decrease)"
}
] |
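A short sketch of the maintenance pattern the replies converge on, using placeholder names (t1 / somecol) rather than Radovan's real 190-column table: under MVCC every full-table UPDATE leaves a dead version of each row behind, so vacuuming between bulk updates (or letting autovacuum catch up) keeps the table at a constant size and stops each pass from getting slower, as Tom's timings show.

    -- Each full-table UPDATE writes ~15830 new row versions and leaves the
    -- old ones dead, so the table grows and the next pass reads more pages.
    UPDATE t1 SET somecol = NULL;

    -- Reclaim the dead row versions before the next bulk update; VERBOSE
    -- reports how many were removed.
    VACUUM VERBOSE t1;

    -- This now runs against a table of roughly constant size.
    UPDATE t1 SET somecol = NULL;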
[
{
"msg_contents": "I'm preparing for an upgrade from PostgreSQL 7.4.5 to 8.1.3, and I \nnoticed a potential performance issue.\n\nI have two servers, a dual proc Dell with raid 5 running PostgreSQL \n7.4, and a quad proc Dell with a storage array running PostgreSQL \n8.1. Both servers have identical postgresql.conf settings and were \nrestored from the same 7.4 backup. Almost everything is faster on the \n8.1 server (mostly due to hardware), except one thing...deletes from \ntables with foreign keys.\n\nI have table A with around 100,000 rows, that has foreign keys to \naround 50 other tables. Some of these other tables (table B, for \nexample) have around 10 million rows.\n\nOn the 7.4 server, I can delete a single row from a table A in well \nunder a second (as expected). On the 8.1 server, it takes over a \nminute to delete. I tried all the usual stuff, recreating indexes, \nvacuum analyzing, explain analyze. Everything is identical between \nthe systems. If I hit ctrl-c while the delete was running on 8.1, I \nrepeatedly got the following message...\n\ndb=# delete from \"A\" where \"ID\" in ('6');\nCancel request sent\nERROR: canceling statement due to user request\nCONTEXT: SQL statement \"SELECT 1 FROM ONLY \"public\".\"B\" x WHERE \n\"A_ID\" = $1 FOR SHARE OF x\"\n\nIt looks to me like the \"SELECT ... FOR SHARE\" functionality in 8.1 \nis the culprit. Has anyone else run into this issue?\n\n\nWill Reese -- http://blog.rezra.com\n\n\n\n",
"msg_date": "Thu, 20 Apr 2006 11:48:36 -0500",
"msg_from": "Will Reese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow deletes in 8.1 when FKs are involved"
},
{
"msg_contents": "\nHey there Will,\n I would assume that, perhaps, jst perhaps, the FK doesn't have an\nindex on the field on both sides, so, your seeing a potential sequential\nscan happening. Can you fling up an explain anaylze for everyone please\n? Anything more will be merely shooting in the dark, and, tracer bullets\naside, I have heard that -that- can be dangerous ;p\n\n Regards\n Stef\n\nWill Reese wrote:\n> I'm preparing for an upgrade from PostgreSQL 7.4.5 to 8.1.3, and I\n> noticed a potential performance issue.\n>\n> I have two servers, a dual proc Dell with raid 5 running PostgreSQL\n> 7.4, and a quad proc Dell with a storage array running PostgreSQL 8.1.\n> Both servers have identical postgresql.conf settings and were restored\n> from the same 7.4 backup. Almost everything is faster on the 8.1\n> server (mostly due to hardware), except one thing...deletes from\n> tables with foreign keys.\n>\n> I have table A with around 100,000 rows, that has foreign keys to\n> around 50 other tables. Some of these other tables (table B, for\n> example) have around 10 million rows.\n>\n> On the 7.4 server, I can delete a single row from a table A in well\n> under a second (as expected). On the 8.1 server, it takes over a\n> minute to delete. I tried all the usual stuff, recreating indexes,\n> vacuum analyzing, explain analyze. Everything is identical between\n> the systems. If I hit ctrl-c while the delete was running on 8.1, I\n> repeatedly got the following message...\n>\n> db=# delete from \"A\" where \"ID\" in ('6');\n> Cancel request sent\n> ERROR: canceling statement due to user request\n> CONTEXT: SQL statement \"SELECT 1 FROM ONLY \"public\".\"B\" x WHERE\n> \"A_ID\" = $1 FOR SHARE OF x\"\n>\n> It looks to me like the \"SELECT ... FOR SHARE\" functionality in 8.1 is\n> the culprit. Has anyone else run into this issue?\n>\n>\n> Will Reese -- http://blog.rezra.com\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Wed, 26 Apr 2006 19:43:41 -0400",
"msg_from": "Stef T <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow deletes in 8.1 when FKs are involved"
},
{
"msg_contents": "Stef:\n\nThere is already a post explaining the solution. All the proper \nindexes were there, and it works great on 7.4. The problem lies with \nleftover 7.4 RI triggers being carried over to an 8.1 database. The \nsolution is to drop the triggers and add the constraint. Hopefully \nthis will not cause as many locking issues with FKs on 8.1 as it did \nin 7.4 (which is why one of the RI triggers was removed in the first \nplace).\n\nWill Reese -- http://blog.rezra.com\n\nOn Apr 26, 2006, at 6:43 PM, Stef T wrote:\n\n>\n> Hey there Will,\n> I would assume that, perhaps, jst perhaps, the FK doesn't have an\n> index on the field on both sides, so, your seeing a potential \n> sequential\n> scan happening. Can you fling up an explain anaylze for everyone \n> please\n> ? Anything more will be merely shooting in the dark, and, tracer \n> bullets\n> aside, I have heard that -that- can be dangerous ;p\n>\n> Regards\n> Stef\n>\n> Will Reese wrote:\n>> I'm preparing for an upgrade from PostgreSQL 7.4.5 to 8.1.3, and I\n>> noticed a potential performance issue.\n>>\n>> I have two servers, a dual proc Dell with raid 5 running PostgreSQL\n>> 7.4, and a quad proc Dell with a storage array running PostgreSQL \n>> 8.1.\n>> Both servers have identical postgresql.conf settings and were \n>> restored\n>> from the same 7.4 backup. Almost everything is faster on the 8.1\n>> server (mostly due to hardware), except one thing...deletes from\n>> tables with foreign keys.\n>>\n>> I have table A with around 100,000 rows, that has foreign keys to\n>> around 50 other tables. Some of these other tables (table B, for\n>> example) have around 10 million rows.\n>>\n>> On the 7.4 server, I can delete a single row from a table A in well\n>> under a second (as expected). On the 8.1 server, it takes over a\n>> minute to delete. I tried all the usual stuff, recreating indexes,\n>> vacuum analyzing, explain analyze. Everything is identical between\n>> the systems. If I hit ctrl-c while the delete was running on 8.1, I\n>> repeatedly got the following message...\n>>\n>> db=# delete from \"A\" where \"ID\" in ('6');\n>> Cancel request sent\n>> ERROR: canceling statement due to user request\n>> CONTEXT: SQL statement \"SELECT 1 FROM ONLY \"public\".\"B\" x WHERE\n>> \"A_ID\" = $1 FOR SHARE OF x\"\n>>\n>> It looks to me like the \"SELECT ... FOR SHARE\" functionality in \n>> 8.1 is\n>> the culprit. Has anyone else run into this issue?\n>>\n>>\n>> Will Reese -- http://blog.rezra.com\n>>\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that \n>> your\n>> message can get through to the mailing list cleanly\n>>\n>\n\n",
"msg_date": "Wed, 26 Apr 2006 21:13:18 -0500",
"msg_from": "Will Reese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow deletes in 8.1 when FKs are involved"
}
] |
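A rough sketch of the repair Will describes, in psql. The trigger and constraint names below are placeholders, since the real ones are not shown in the thread; the identifiers "A", "B", "A_ID" and "ID" come from his example. The idea is to drop the RI triggers that the 7.4 dump carried over and declare the relationship as an ordinary foreign key so that 8.1 enforces it with its own machinery. The index on "A_ID" that Stef asked about was already in place in Will's case, and is what keeps single-row deletes from "A" fast.

    -- List the carried-over RI triggers on the referencing table (names
    -- typically look like "RI_ConstraintTrigger_NNNNN").
    SELECT tgname FROM pg_trigger WHERE tgrelid = '"B"'::regclass;

    BEGIN;
    -- Drop the leftover trigger pair (placeholder names).
    DROP TRIGGER "RI_ConstraintTrigger_12345" ON "B";
    DROP TRIGGER "RI_ConstraintTrigger_12346" ON "A";
    -- Re-add the relationship as a real FK constraint; 8.1 validates the
    -- existing rows while adding it, so expect this to take a while on a
    -- 10-million-row table.
    ALTER TABLE "B"
        ADD CONSTRAINT b_a_id_fkey FOREIGN KEY ("A_ID") REFERENCES "A" ("ID");
    COMMIT;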
[
{
"msg_contents": "We're going to get one for evaluation next week (equipped with dual\n2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them,\nperformance wise?\n\nRegards,\nMikael\n",
"msg_date": "Thu, 20 Apr 2006 20:00:03 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "Hmmm. We use an MSA 1000 with Fibre Channel interconnects. No real\ncomplaints, although I was a little bit disappointed by the RAID\ncontroller's battery-backed write cache performance; tiny random writes\nare only about 3 times as fast with write caching enabled as with it\ndisabled, I had (perhaps naively) hoped for more. Sequential scans from\nour main DB (on a 5-pair RAID 10 set with 15k RPM drives) get roughly\n80MB/sec.\n\nGetting the redundant RAID controllers to fail over correctly on Linux\nwas a big headache and required working the tech support phone all day\nuntil we finally got to the deep guru who knew the proper undocumented\nincantations.\n\n-- Mark Lewis\n\nOn Thu, 2006-04-20 at 20:00 +0200, Mikael Carneholm wrote:\n> We're going to get one for evaluation next week (equipped with dual\n> 2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them,\n> performance wise?\n> \n> Regards,\n> Mikael\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 20 Apr 2006 11:43:20 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "On Thu, 20 Apr 2006, Mikael Carneholm wrote:\n\n> We're going to get one for evaluation next week (equipped with dual\n> 2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them,\n> performance wise?\n\nWe (Seatbooker) use one. It works well enough. Here's a sample bonnie\noutput:\n\n -------Sequential Output-------- ---Sequential Input-- --Random--\n -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---\n Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU\n\n 16384 41464 30.6 41393 10.0 16287 3.7 92433 83.2 119608 18.3 674.0 0.8\n\nwhich is hardly bad (on a four 15kRPM disk RAID 10 with 2Gbps FC).\nSequential scans on a table produce about 40MB/s of IO with the\n'disk' something like 60-70% busy according to FreeBSD's systat.\n\nHere's diskinfo -cvt output on a not quite idle system:\n\n/dev/da1\n 512 # sectorsize\n 59054899200 # mediasize in bytes (55G)\n 115341600 # mediasize in sectors\n 7179 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nI/O command overhead:\n time to read 10MB block 0.279395 sec = 0.014 msec/sector\n time to read 20480 sectors 11.864934 sec = 0.579 msec/sector\n calculated command overhead = 0.566 msec/sector\n\nSeek times:\n Full stroke: 250 iter in 0.836808 sec = 3.347 msec\n Half stroke: 250 iter in 0.861196 sec = 3.445 msec\n Quarter stroke: 500 iter in 1.415700 sec = 2.831 msec\n Short forward: 400 iter in 0.586330 sec = 1.466 msec\n Short backward: 400 iter in 1.365257 sec = 3.413 msec\n Seq outer: 2048 iter in 1.184569 sec = 0.578 msec\n Seq inner: 2048 iter in 1.184158 sec = 0.578 msec\nTransfer rates:\n outside: 102400 kbytes in 1.367903 sec = 74859 kbytes/sec\n middle: 102400 kbytes in 1.472451 sec = 69544 kbytes/sec\n inside: 102400 kbytes in 1.521503 sec = 67302 kbytes/sec\n\n\nIt (or any FC SAN, for that matter) isn't an especially cheap way to get\nstorage. You don't get much option if you have an HP blade enclosure,\nthough.\n\nHP's support was poor. Their Indian call-centre seems not to know much\nabout them and spectacularly failed to tell us if and how we could connect\nthis (with the 2/3-port FC hub option) to two of our blade servers, one of\nwhich was one of the 'half-height' ones which require an arbitrated loop.\nWe ended up buying a FC switch.\n",
"msg_date": "Fri, 21 Apr 2006 16:24:42 +0100 (BST)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
}
] |
[
{
"msg_contents": "Hi,\n\nWe had a database issue today that caused us to have to restore to \nour most recent backup. We are using PITR so we have 3120 WAL files \nthat need to be applied to the database.\n\nAfter 45 minutes, it has restored only 230 WAL files. At this rate, \nit's going to take about 10 hours to restore our database.\n\nMost of the time, the server is not using very much CPU time or I/O \ntime. So I'm wondering what can be done to speed up the process?\n\nThe database is about 20 GB. The WAL files are compressed with gzip \nto about 4 MB. Expanded, the WAL files would take 48 GB.\n\nWe are using PostgreSQL 8.1.3 on OS X Server 10.4.6 connected to an \nXServe RAID. The pg_xlog is on its own separate RAID and so are the \ntable spaces.\n\nHere's a representative sample of doing iostat:\n\nhulk1:/Library/PostgreSQL admin$ iostat 5\n disk1 disk2 disk0 cpu\n KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us sy id\n19.31 101 1.91 14.39 51 0.71 37.37 4 0.13 15 10 76\n 8.00 21 0.16 0.00 0 0.00 90.22 2 0.16 0 2 98\n 8.00 32 0.25 0.00 0 0.00 0.00 0 0.00 0 1 98\n 8.00 76 0.60 0.00 0 0.00 0.00 0 0.00 0 1 99\n 8.00 587 4.59 1024.00 4 4.00 0.00 0 0.00 4 7 88\n 8.00 675 5.27 956.27 6 5.60 0.00 0 0.00 6 6 88\n11.32 1705 18.84 5.70 1 0.01 16.36 7 0.12 1 6 93\n 8.00 79 0.62 1024.00 3 3.20 0.00 0 0.00 2 2 96\n 8.00 68 0.53 0.00 0 0.00 0.00 0 0.00 0 2 98\n 8.00 76 0.59 0.00 0 0.00 0.00 0 0.00 0 1 99\n 8.02 89 0.69 0.00 0 0.00 0.00 0 0.00 1 1 98\n 8.00 572 4.47 911.11 4 3.20 0.00 0 0.00 5 5 91\n13.53 1227 16.21 781.55 4 3.21 12.14 2 0.03 3 6 90\n 8.00 54 0.42 0.00 0 0.00 90.22 2 0.16 1 1 98\n 8.00 68 0.53 0.00 0 0.00 0.00 0 0.00 0 1 99\n 8.00 461 3.60 1024.00 3 3.20 0.00 0 0.00 3 6 91\n 8.00 671 5.24 964.24 7 6.40 0.00 0 0.00 6 8 86\n 7.99 248 1.94 0.00 0 0.00 0.00 0 0.00 1 3 96\n15.06 1050 15.44 911.11 4 3.20 12.12 3 0.03 2 5 93\n19.84 176 3.41 5.70 1 0.01 0.00 0 0.00 0 1 99\n\n\ndisk1 is the RAID volume that has the table spaces on it. disk2 is \nthe pg_xlog and disk0 is the boot disk.\n\nSo you can see the CPU is idle much of the time and the IO only \noccurs in short bursts. Each line in the iostat results is 5 seconds \napart.\n\nIf there were something we could do to speed up the process, would it \nbe possible to kill the postgres process, tweak some parameter \nsomewhere and then start it up again? Or would we have to restore our \nbase backup again and start over?\n\nHow can I make this go faster?\n\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\n\n",
"msg_date": "Thu, 20 Apr 2006 13:29:46 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recovery will take 10 hours"
},
{
"msg_contents": "Brendan Duddridge <[email protected]> writes:\n> We had a database issue today that caused us to have to restore to \n> our most recent backup. We are using PITR so we have 3120 WAL files \n> that need to be applied to the database.\n> After 45 minutes, it has restored only 230 WAL files. At this rate, \n> it's going to take about 10 hours to restore our database.\n> Most of the time, the server is not using very much CPU time or I/O \n> time. So I'm wondering what can be done to speed up the process?\n\nThat seems a bit odd --- should be eating one or the other, one would\nthink. Try strace'ing the recovery process to see what it's doing.\n\n> If there were something we could do to speed up the process, would it \n> be possible to kill the postgres process, tweak some parameter \n> somewhere and then start it up again? Or would we have to restore our \n> base backup again and start over?\n\nYou could start it up again, but it'd want to read through all the WAL\nit's already looked at, so I'd not recommend this until/unless you're\npretty sure you've fixed the performance issue. Right at the moment,\nI think this is a golden opportunity to study the performance of WAL\nrecovery --- it's not something we've tried to optimize particularly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Apr 2006 16:17:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Hi Tom,\n\nDo you mean do a kill -QUIT on the postgres process in order to \ngenerate a stack trace?\n\nWill that affect the currently running process in any bad way? And \nwhere would the output go? stdout?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 2:17 PM, Tom Lane wrote:\n\n> Brendan Duddridge <[email protected]> writes:\n>> We had a database issue today that caused us to have to restore to\n>> our most recent backup. We are using PITR so we have 3120 WAL files\n>> that need to be applied to the database.\n>> After 45 minutes, it has restored only 230 WAL files. At this rate,\n>> it's going to take about 10 hours to restore our database.\n>> Most of the time, the server is not using very much CPU time or I/O\n>> time. So I'm wondering what can be done to speed up the process?\n>\n> That seems a bit odd --- should be eating one or the other, one would\n> think. Try strace'ing the recovery process to see what it's doing.\n>\n>> If there were something we could do to speed up the process, would it\n>> be possible to kill the postgres process, tweak some parameter\n>> somewhere and then start it up again? Or would we have to restore our\n>> base backup again and start over?\n>\n> You could start it up again, but it'd want to read through all the WAL\n> it's already looked at, so I'd not recommend this until/unless you're\n> pretty sure you've fixed the performance issue. Right at the moment,\n> I think this is a golden opportunity to study the performance of WAL\n> recovery --- it's not something we've tried to optimize particularly.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 15:13:31 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "On Thu, 20 Apr 2006, Brendan Duddridge wrote:\n\n> Hi,\n>\n> We had a database issue today that caused us to have to restore to our most \n> recent backup. We are using PITR so we have 3120 WAL files that need to be \n> applied to the database.\n>\n> After 45 minutes, it has restored only 230 WAL files. At this rate, it's \n> going to take about 10 hours to restore our database.\n>\n> Most of the time, the server is not using very much CPU time or I/O time. So \n> I'm wondering what can be done to speed up the process?\n\nBrendan,\n\nWhere are the WAL files being stored and how are they being read back?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Thu, 20 Apr 2006 14:19:43 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Brendan Duddridge <[email protected]> writes:\n> Do you mean do a kill -QUIT on the postgres process in order to \n> generate a stack trace?\n\nNot at all! I'm talking about tracing the kernel calls it's making.\nDepending on your platform, the tool for this is called strace,\nktrace, truss, or maybe even just trace. With strace you'd do\nsomething like\n\n\tstrace -p PID-of-process 2>outfile\n\t... wait 30 sec or so ...\n\tcontrol-C\n\nNot sure about the APIs for the others but they're probably roughly\nsimilar ... read the man page ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Apr 2006 17:19:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Brendan,\n\n strace p <pid> -c\n\nThen do a ³CTRL-C² after a minute to get the stats of system calls.\n\n- Luke\n\nOn 4/20/06 2:13 PM, \"Brendan Duddridge\" <[email protected]> wrote:\n\n> Hi Tom,\n> \n> Do you mean do a kill -QUIT on the postgres process in order to\n> generate a stack trace?\n> \n> Will that affect the currently running process in any bad way? And\n> where would the output go? stdout?\n> \n> Thanks,\n> \n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n> \n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n> \n> http://www.clickspace.com\n> \n> On Apr 20, 2006, at 2:17 PM, Tom Lane wrote:\n> \n>> > Brendan Duddridge <[email protected]> writes:\n>>> >> We had a database issue today that caused us to have to restore to\n>>> >> our most recent backup. We are using PITR so we have 3120 WAL files\n>>> >> that need to be applied to the database.\n>>> >> After 45 minutes, it has restored only 230 WAL files. At this rate,\n>>> >> it's going to take about 10 hours to restore our database.\n>>> >> Most of the time, the server is not using very much CPU time or I/O\n>>> >> time. So I'm wondering what can be done to speed up the process?\n>> >\n>> > That seems a bit odd --- should be eating one or the other, one would\n>> > think. Try strace'ing the recovery process to see what it's doing.\n>> >\n>>> >> If there were something we could do to speed up the process, would it\n>>> >> be possible to kill the postgres process, tweak some parameter\n>>> >> somewhere and then start it up again? Or would we have to restore our\n>>> >> base backup again and start over?\n>> >\n>> > You could start it up again, but it'd want to read through all the WAL\n>> > it's already looked at, so I'd not recommend this until/unless you're\n>> > pretty sure you've fixed the performance issue. Right at the moment,\n>> > I think this is a golden opportunity to study the performance of WAL\n>> > recovery --- it's not something we've tried to optimize particularly.\n>> >\n>> > regards, tom lane\n>> >\n>> > ---------------------------(end of\n>> > broadcast)---------------------------\n>> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> > choose an index scan if your joining column's datatypes do not\n>> > match\n>> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n\n\n\nRe: [PERFORM] Recovery will take 10 hours\n\n\nBrendan,\n\n strace –p <pid> -c\n\nThen do a “CTRL-C” after a minute to get the stats of system calls.\n\n- Luke\n\nOn 4/20/06 2:13 PM, \"Brendan Duddridge\" <[email protected]> wrote:\n\nHi Tom,\n\nDo you mean do a kill -QUIT on the postgres process in order to \ngenerate a stack trace?\n\nWill that affect the currently running process in any bad way? And \nwhere would the output go? stdout?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. 
SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 2:17 PM, Tom Lane wrote:\n\n> Brendan Duddridge <[email protected]> writes:\n>> We had a database issue today that caused us to have to restore to\n>> our most recent backup. We are using PITR so we have 3120 WAL files\n>> that need to be applied to the database.\n>> After 45 minutes, it has restored only 230 WAL files. At this rate,\n>> it's going to take about 10 hours to restore our database.\n>> Most of the time, the server is not using very much CPU time or I/O\n>> time. So I'm wondering what can be done to speed up the process?\n>\n> That seems a bit odd --- should be eating one or the other, one would\n> think. Try strace'ing the recovery process to see what it's doing.\n>\n>> If there were something we could do to speed up the process, would it\n>> be possible to kill the postgres process, tweak some parameter\n>> somewhere and then start it up again? Or would we have to restore our\n>> base backup again and start over?\n>\n> You could start it up again, but it'd want to read through all the WAL\n> it's already looked at, so I'd not recommend this until/unless you're\n> pretty sure you've fixed the performance issue. Right at the moment,\n> I think this is a golden opportunity to study the performance of WAL\n> recovery --- it's not something we've tried to optimize particularly.\n>\n> regards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly",
"msg_date": "Thu, 20 Apr 2006 14:21:05 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Hi Jeff,\n\nThe WAL files are stored on a separate server and accessed through an \nNFS mount located at /wal_archive.\n\nHowever, the restore failed about 5 hours in after we got this error:\n\n[2006-04-20 16:41:28 MDT] LOG: restored log file \n\"000000010000018F00000034\" from archive\n[2006-04-20 16:41:35 MDT] LOG: restored log file \n\"000000010000018F00000035\" from archive\n[2006-04-20 16:41:38 MDT] LOG: restored log file \n\"000000010000018F00000036\" from archive\nsh: line 1: /wal_archive/000000010000018F00000037.gz: No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] LOG: could not open file \"pg_xlog/ \n000000010000018F00000037\" (log file 399, segment 55): No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] LOG: redo done at 18F/36FFF254\nsh: line 1: /wal_archive/000000010000018F00000036.gz: No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] PANIC: could not open file \"pg_xlog/ \n000000010000018F00000036\" (log file 399, segment 54): No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] LOG: startup process (PID 9190) was \nterminated by signal 6\n[2006-04-20 16:41:46 MDT] LOG: aborting startup due to startup \nprocess failure\n[2006-04-20 16:41:46 MDT] LOG: logger shutting down\n\n\n\nThe /wal_archive/000000010000018F00000037.gz is there accessible on \nthe NFS mount.\n\nIs there a way to continue the restore process from where it left off?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 3:19 PM, Jeff Frost wrote:\n\n> On Thu, 20 Apr 2006, Brendan Duddridge wrote:\n>\n>> Hi,\n>>\n>> We had a database issue today that caused us to have to restore to \n>> our most recent backup. We are using PITR so we have 3120 WAL \n>> files that need to be applied to the database.\n>>\n>> After 45 minutes, it has restored only 230 WAL files. At this \n>> rate, it's going to take about 10 hours to restore our database.\n>>\n>> Most of the time, the server is not using very much CPU time or I/ \n>> O time. So I'm wondering what can be done to speed up the process?\n>\n> Brendan,\n>\n> Where are the WAL files being stored and how are they being read back?\n>\n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 17:11:01 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Hi Tom,\n\nI found it... it's called ktrace on OS X Server.\n\nHowever, as I just finished posting to the list, the process died \nwith a PANIC error:\n\n[2006-04-20 16:41:28 MDT] LOG: restored log file \n\"000000010000018F00000034\" from archive\n[2006-04-20 16:41:35 MDT] LOG: restored log file \n\"000000010000018F00000035\" from archive\n[2006-04-20 16:41:38 MDT] LOG: restored log file \n\"000000010000018F00000036\" from archive\nsh: line 1: /wal_archive/000000010000018F00000037.gz: No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] LOG: could not open file \"pg_xlog/ \n000000010000018F00000037\" (log file 399, segment 55): No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] LOG: redo done at 18F/36FFF254\nsh: line 1: /wal_archive/000000010000018F00000036.gz: No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] PANIC: could not open file \"pg_xlog/ \n000000010000018F00000036\" (log file 399, segment 54): No such file or \ndirectory\n[2006-04-20 16:41:46 MDT] LOG: startup process (PID 9190) was \nterminated by signal 6\n[2006-04-20 16:41:46 MDT] LOG: aborting startup due to startup \nprocess failure\n[2006-04-20 16:41:46 MDT] LOG: logger shutting down\n\n\nWould turning off fsync make it go faster? Maybe it won't take 10 \nhours again if we start from scratch.\n\nAlso, what if we did just start it up again? Will postgres realize \nthat the existing wal_archive files have already been processed and \njust skip along until it finds one it hasn't processed yet?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 3:19 PM, Tom Lane wrote:\n\n> Brendan Duddridge <[email protected]> writes:\n>> Do you mean do a kill -QUIT on the postgres process in order to\n>> generate a stack trace?\n>\n> Not at all! I'm talking about tracing the kernel calls it's making.\n> Depending on your platform, the tool for this is called strace,\n> ktrace, truss, or maybe even just trace. With strace you'd do\n> something like\n>\n> \tstrace -p PID-of-process 2>outfile\n> \t... wait 30 sec or so ...\n> \tcontrol-C\n>\n> Not sure about the APIs for the others but they're probably roughly\n> similar ... read the man page ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 17:15:00 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Oops... forgot to mention that both files that postgres said were \nmissing are in fact there:\n\nA partial listing from our wal_archive directory:\n\n-rw------- 1 postgres staff 4971129 Apr 19 20:08 \n000000010000018F00000036.gz\n-rw------- 1 postgres staff 4378284 Apr 19 20:09 \n000000010000018F00000037.gz\n\nThere didn't seem to be any issues with the NFS mount. Perhaps it \nbriefly disconnected and came back right away.\n\n\nThanks!\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 5:11 PM, Brendan Duddridge wrote:\n\n> Hi Jeff,\n>\n> The WAL files are stored on a separate server and accessed through \n> an NFS mount located at /wal_archive.\n>\n> However, the restore failed about 5 hours in after we got this error:\n>\n> [2006-04-20 16:41:28 MDT] LOG: restored log file \n> \"000000010000018F00000034\" from archive\n> [2006-04-20 16:41:35 MDT] LOG: restored log file \n> \"000000010000018F00000035\" from archive\n> [2006-04-20 16:41:38 MDT] LOG: restored log file \n> \"000000010000018F00000036\" from archive\n> sh: line 1: /wal_archive/000000010000018F00000037.gz: No such file \n> or directory\n> [2006-04-20 16:41:46 MDT] LOG: could not open file \"pg_xlog/ \n> 000000010000018F00000037\" (log file 399, segment 55): No such file \n> or directory\n> [2006-04-20 16:41:46 MDT] LOG: redo done at 18F/36FFF254\n> sh: line 1: /wal_archive/000000010000018F00000036.gz: No such file \n> or directory\n> [2006-04-20 16:41:46 MDT] PANIC: could not open file \"pg_xlog/ \n> 000000010000018F00000036\" (log file 399, segment 54): No such file \n> or directory\n> [2006-04-20 16:41:46 MDT] LOG: startup process (PID 9190) was \n> terminated by signal 6\n> [2006-04-20 16:41:46 MDT] LOG: aborting startup due to startup \n> process failure\n> [2006-04-20 16:41:46 MDT] LOG: logger shutting down\n>\n>\n>\n> The /wal_archive/000000010000018F00000037.gz is there accessible on \n> the NFS mount.\n>\n> Is there a way to continue the restore process from where it left off?\n>\n> Thanks,\n>\n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>\n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>\n> On Apr 20, 2006, at 3:19 PM, Jeff Frost wrote:\n>\n>> On Thu, 20 Apr 2006, Brendan Duddridge wrote:\n>>\n>>> Hi,\n>>>\n>>> We had a database issue today that caused us to have to restore \n>>> to our most recent backup. We are using PITR so we have 3120 WAL \n>>> files that need to be applied to the database.\n>>>\n>>> After 45 minutes, it has restored only 230 WAL files. At this \n>>> rate, it's going to take about 10 hours to restore our database.\n>>>\n>>> Most of the time, the server is not using very much CPU time or I/ \n>>> O time. 
So I'm wondering what can be done to speed up the process?\n>>\n>> Brendan,\n>>\n>> Where are the WAL files being stored and how are they being read \n>> back?\n>>\n>> -- \n>> Jeff Frost, Owner \t<[email protected]>\n>> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n>> Phone: 650-780-7908\tFAX: 650-649-1954\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that \n>> your\n>> message can get through to the mailing list cleanly\n>>\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 17:20:07 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Brendan Duddridge <[email protected]> writes:\n> However, as I just finished posting to the list, the process died \n> with a PANIC error:\n\n> [2006-04-20 16:41:28 MDT] LOG: restored log file \n> \"000000010000018F00000034\" from archive\n> [2006-04-20 16:41:35 MDT] LOG: restored log file \n> \"000000010000018F00000035\" from archive\n> [2006-04-20 16:41:38 MDT] LOG: restored log file \n> \"000000010000018F00000036\" from archive\n> sh: line 1: /wal_archive/000000010000018F00000037.gz: No such file or \n> directory\n> [2006-04-20 16:41:46 MDT] LOG: could not open file \"pg_xlog/ \n> 000000010000018F00000037\" (log file 399, segment 55): No such file or \n> directory\n> [2006-04-20 16:41:46 MDT] LOG: redo done at 18F/36FFF254\n> sh: line 1: /wal_archive/000000010000018F00000036.gz: No such file or \n> directory\n> [2006-04-20 16:41:46 MDT] PANIC: could not open file \"pg_xlog/ \n> 000000010000018F00000036\" (log file 399, segment 54): No such file or \n> directory\n\nThis looks to me like a bug in your archive restore command. It had\njust finished providing 000000010000018F00000036 at 16:41:38, why was\nit not able to do so again at 16:41:46?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Apr 2006 19:20:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "\nBrendan,\n\nIs your NFS share mounted hard or soft? Do you have space to copy the files \nlocally? I suspect you're seeing NFS slowness in your restore since you \naren't using much in the way of disk IO or CPU.\n\n-Jeff\n\nOn Thu, 20 Apr 2006, Brendan Duddridge wrote:\n\n> Oops... forgot to mention that both files that postgres said were missing are \n> in fact there:\n>\n> A partial listing from our wal_archive directory:\n>\n> -rw------- 1 postgres staff 4971129 Apr 19 20:08 000000010000018F00000036.gz\n> -rw------- 1 postgres staff 4378284 Apr 19 20:09 000000010000018F00000037.gz\n>\n> There didn't seem to be any issues with the NFS mount. Perhaps it briefly \n> disconnected and came back right away.\n>\n>\n> Thanks!\n>\n>\n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>\n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>\n> On Apr 20, 2006, at 5:11 PM, Brendan Duddridge wrote:\n>\n>> Hi Jeff,\n>> \n>> The WAL files are stored on a separate server and accessed through an NFS \n>> mount located at /wal_archive.\n>> \n>> However, the restore failed about 5 hours in after we got this error:\n>> \n>> [2006-04-20 16:41:28 MDT] LOG: restored log file \"000000010000018F00000034\" \n>> from archive\n>> [2006-04-20 16:41:35 MDT] LOG: restored log file \"000000010000018F00000035\" \n>> from archive\n>> [2006-04-20 16:41:38 MDT] LOG: restored log file \"000000010000018F00000036\" \n>> from archive\n>> sh: line 1: /wal_archive/000000010000018F00000037.gz: No such file or \n>> directory\n>> [2006-04-20 16:41:46 MDT] LOG: could not open file \n>> \"pg_xlog/000000010000018F00000037\" (log file 399, segment 55): No such file \n>> or directory\n>> [2006-04-20 16:41:46 MDT] LOG: redo done at 18F/36FFF254\n>> sh: line 1: /wal_archive/000000010000018F00000036.gz: No such file or \n>> directory\n>> [2006-04-20 16:41:46 MDT] PANIC: could not open file \n>> \"pg_xlog/000000010000018F00000036\" (log file 399, segment 54): No such file \n>> or directory\n>> [2006-04-20 16:41:46 MDT] LOG: startup process (PID 9190) was terminated by \n>> signal 6\n>> [2006-04-20 16:41:46 MDT] LOG: aborting startup due to startup process \n>> failure\n>> [2006-04-20 16:41:46 MDT] LOG: logger shutting down\n>> \n>> \n>> \n>> The /wal_archive/000000010000018F00000037.gz is there accessible on the NFS \n>> mount.\n>> \n>> Is there a way to continue the restore process from where it left off?\n>> \n>> Thanks,\n>> \n>> ____________________________________________________________________\n>> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>> \n>> ClickSpace Interactive Inc.\n>> Suite L100, 239 - 10th Ave. SE\n>> Calgary, AB T2G 0V9\n>> \n>> http://www.clickspace.com\n>> \n>> On Apr 20, 2006, at 3:19 PM, Jeff Frost wrote:\n>> \n>>> On Thu, 20 Apr 2006, Brendan Duddridge wrote:\n>>> \n>>>> Hi,\n>>>> \n>>>> We had a database issue today that caused us to have to restore to our \n>>>> most recent backup. We are using PITR so we have 3120 WAL files that need \n>>>> to be applied to the database.\n>>>> \n>>>> After 45 minutes, it has restored only 230 WAL files. At this rate, it's \n>>>> going to take about 10 hours to restore our database.\n>>>> \n>>>> Most of the time, the server is not using very much CPU time or I/O time. 
\n>>>> So I'm wondering what can be done to speed up the process?\n>>> \n>>> Brendan,\n>>> \n>>> Where are the WAL files being stored and how are they being read back?\n>>> \n>>> -- \n>>> Jeff Frost, Owner \t<[email protected]>\n>>> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n>>> Phone: 650-780-7908\tFAX: 650-649-1954\n>>> \n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>>> subscribe-nomail command to [email protected] so that your\n>>> message can get through to the mailing list cleanly\n>>> \n>> \n>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>> \n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Thu, 20 Apr 2006 16:26:57 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Well our restore command is pretty basic:\n\nrestore_command = 'gunzip </wal_archive/%f.gz>%p'\n\nI'm not sure why that would succeed then fail.\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 5:20 PM, Tom Lane wrote:\n\n> Brendan Duddridge <[email protected]> writes:\n>> However, as I just finished posting to the list, the process died\n>> with a PANIC error:\n>\n>> [2006-04-20 16:41:28 MDT] LOG: restored log file\n>> \"000000010000018F00000034\" from archive\n>> [2006-04-20 16:41:35 MDT] LOG: restored log file\n>> \"000000010000018F00000035\" from archive\n>> [2006-04-20 16:41:38 MDT] LOG: restored log file\n>> \"000000010000018F00000036\" from archive\n>> sh: line 1: /wal_archive/000000010000018F00000037.gz: No such file or\n>> directory\n>> [2006-04-20 16:41:46 MDT] LOG: could not open file \"pg_xlog/\n>> 000000010000018F00000037\" (log file 399, segment 55): No such file or\n>> directory\n>> [2006-04-20 16:41:46 MDT] LOG: redo done at 18F/36FFF254\n>> sh: line 1: /wal_archive/000000010000018F00000036.gz: No such file or\n>> directory\n>> [2006-04-20 16:41:46 MDT] PANIC: could not open file \"pg_xlog/\n>> 000000010000018F00000036\" (log file 399, segment 54): No such file or\n>> directory\n>\n> This looks to me like a bug in your archive restore command. It had\n> just finished providing 000000010000018F00000036 at 16:41:38, why was\n> it not able to do so again at 16:41:46?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 17:27:53 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Brendan Duddridge <[email protected]> writes:\n> Oops... forgot to mention that both files that postgres said were \n> missing are in fact there:\n\nPlease place the blame where it should fall: it's your archive restore\ncommand that's telling postgres that.\n\n> There didn't seem to be any issues with the NFS mount. Perhaps it \n> briefly disconnected and came back right away.\n\nUnstable NFS mounts are Really Bad News. You shouldn't be expecting\nto run a stable database atop such a thing.\n\nIf it's not the database but only the WAL archive that's NFS'd, it might\nbe possible to live with it, but you'll need to put some defenses into\nyour archive restore script to cope with such events.\n\nAs far as restarting goes: I think you can restart from here without\nfirst redoing your base-backup restore, but as previously noted it'll\nstill read through the same WAL files it looked at before. You won't\nsave much except the time to redo the base restore.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Apr 2006 19:29:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Thanks Tom,\n\nWe are storing only the WAL archives on the NFS volume. It must have \nbeen a hiccup in the NFS mount. Jeff Frost asked if we were using \nhard or soft mounts. We were using soft mounts, so that may be where \nthe problem lies with the PANIC.\n\nIs it better to use the boot volume of the database machine for \narchiving our WAL files instead of over the NFS mount? I'm sure it's \nprobably not a good idea to archive to the same volume as the pg_xlog \ndirectory, so that's why I thought maybe using the boot drive would \nbe better. We'll just have to make sure we don't fill up the drive. \nAlthough I know that PostgreSQL often writes to the /data directory \nthat is located on the boot drive. It might not be good to start \narchiving there. Our table spaces are on a separate RAID.\n\nIf we need to restore in the future we'll just have to copy the WAL \nfiles from the boot drive of our database machine over the NFS to the \nrestore machine.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 5:29 PM, Tom Lane wrote:\n\n> Brendan Duddridge <[email protected]> writes:\n>> Oops... forgot to mention that both files that postgres said were\n>> missing are in fact there:\n>\n> Please place the blame where it should fall: it's your archive restore\n> command that's telling postgres that.\n>\n>> There didn't seem to be any issues with the NFS mount. Perhaps it\n>> briefly disconnected and came back right away.\n>\n> Unstable NFS mounts are Really Bad News. You shouldn't be expecting\n> to run a stable database atop such a thing.\n>\n> If it's not the database but only the WAL archive that's NFS'd, it \n> might\n> be possible to live with it, but you'll need to put some defenses into\n> your archive restore script to cope with such events.\n>\n> As far as restarting goes: I think you can restart from here without\n> first redoing your base-backup restore, but as previously noted it'll\n> still read through the same WAL files it looked at before. You won't\n> save much except the time to redo the base restore.\n>\n> \t\t\tregards, tom lane\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 17:43:41 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Hi Tom,\n\nWell, we started the restore back up with the WAL archives copied to \nour local disk.\n\nIt's going at about the same pace as with the restore over NFS.\n\nSo I tried ktrace -p PID and it created a really big file. I had to \ndo 'ktrace -p PID -c' to get it to stop.\n\nThe ktrace.out file is read using kdump, but there's a lot of binary \ndata in there intermixed with some system calls.\n\nFor example:\n\n15267 postgres RET read 8192/0x2000\n15267 postgres CALL lseek(153,0,2)\n15267 postgres RET lseek 0\n15267 postgres CALL lseek(127,0,2)\n15267 postgres RET lseek 0\n15267 postgres CALL lseek(138,0,2)\n15267 postgres RET lseek 0\n15267 postgres CALL lseek(153,0,2)\n15267 postgres RET lseek 0\n15267 postgres CALL lseek(127,0,2)\n15267 postgres RET lseek 0\n15267 postgres CALL read(5,25225728,8192)\n15267 postgres GIO fd 5 read 8192 bytes\n \"\\M-P]\\0\\^A\\0\\0\\0\\^A\\0\\0\\^A\\M^H,\\M-5`\\0\\0\\0\\^C\\M-6r fill, \npolyester has a subtle sheen, machine wash\\0\\0\\0Xreverses to\\\n solid colour, polyester fill, polyester has a subtle sheen, \nmachine wash\\^_\\^Y7\\M-3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0oG\\0\\\n \\b\\0\\^[)\\^C \\M^Or\\M-#\\^B\\0\\0\\0\\0\\0A\\M-&\\M-]\n\n... lots of data ....\n\n \\M^K$\\0\\0\\0\\fcomplete\\0\\0\\0HCustom-width Valanceless \nAluminum Mini Blinds 37 1/4-44\" w. x 48\" l.\\0\\0\\0\\M-P1\" aluminum\\\n slats, valanceless headrail and matching bottom rail, \nhidden brackets, clear acrylic tilt wand, extra slats with rou\\\n te holes in the back, can be cut down to minimum width of \n14\", hardware. . .\\0\\0\\^Aq1\" aluminum slats, valanceless he\\\n adrail and matching bottom rail, hidden brackets, clear \nacrylic tilt wand, extra slats with route holes in the back, \\\n can be cut down to minimum width of 14\", hardware and \ninstructions included, wipe with a dam\"\n15267 postgres RET read 8192/0x2000\n15267 postgres CALL lseek(138,0,2)\n15267 postgres RET lseek 0\n15267 postgres CALL lseek(158,317251584,0)\n15267 postgres RET lseek 0\n15267 postgres CALL write(158,35286464,8192)\n15267 postgres GIO fd 158 wrote 8192 bytes\n \"\\0\\0\\^A\\M^H+\\M^W@\\M-,\\0\\0\\0\\^A\\^A\\M-D\\^P\\M-@\\^_\\M-p \\^C?\\M^X \n\\M^@$?P\\M^@$?\\b\\M^@$>\\M-@\\M^@$>x\\M^@$>0\\M^@$=\\M-h\\M^@$=\\\n \\240\\M^@$#0\\M^@$\"X\\M^@$=X\\M^@$=\\^P\\M^@$<\\M-H\\M^@$<\\M^@\\M^@$<8 \n\\M^@$;\\M-p\\M^@$;\\M-(\\M^@$;`\\M^@$;\\^X\\M^@$:\\M-P\\M^@$:\\M^H\\\n\netc...\n\nI'm not sure that really tells me anything though other than the WAL \narchives don't actually archive SQL, but store only the database \nchanges.\n\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 3:19 PM, Tom Lane wrote:\n\n> Brendan Duddridge <[email protected]> writes:\n>> Do you mean do a kill -QUIT on the postgres process in order to\n>> generate a stack trace?\n>\n> Not at all! I'm talking about tracing the kernel calls it's making.\n> Depending on your platform, the tool for this is called strace,\n> ktrace, truss, or maybe even just trace. With strace you'd do\n> something like\n>\n> \tstrace -p PID-of-process 2>outfile\n> \t... wait 30 sec or so ...\n> \tcontrol-C\n>\n> Not sure about the APIs for the others but they're probably roughly\n> similar ... 
read the man page ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 18:24:36 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours "
},
{
"msg_contents": "Hi Tomas,\n\nHmm... ktrace -p PID -c returns immediately without doing anything \nunless I've previously done a ktrace -p PID.\n\nAccording to the man page for ktrace's -c flag:\n -c Clear the trace points associated with the specified file \nor processes.\n\nWhen I run ktrace on OS X Server 10.4.6 it returns to the console \nimmediately, however the ktrace.out file gets larger and larger until \nI issue another ktrace command with the -c flag. It never sits \nwaiting for keyboard input.\n\n\nI haven't been able to find any way of generating the stats yet. The \nman page for ktrace or kdump doesn't mention anything about stats.\n\n\nThanks,\n\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 20, 2006, at 6:51 PM, Tomas Vondra wrote:\n\n>> So I tried ktrace -p PID and it created a really big file. I had \n>> to do\n>> 'ktrace -p PID -c' to get it to stop.\n>>\n>> The ktrace.out file is read using kdump, but there's a lot of binary\n>> data in there intermixed with some system calls.\n>\n> Yes, that's what (s|k)trace does - it attaches to the process, and\n> prints out all the system calls, parameters, return values etc. That\n> gives you \"exact\" overview of what's going on in the program, but it's\n> a little bit confusing if you are not familiar with that and/or you're\n> in a hurry.\n>\n> But Luke Lonergan offered a '-c' switch, which gives you a statistics\n> of the used system calls. This way you can see number of calls for\n> individual syscalls and time spent in them. That could give you a hint\n> why the process is so slow (for example there can be an I/O bottleneck\n> or something like that).\n>\n> Just do 'ktrace -p PID -c' for about 30 seconds, then 'Ctrl-C' and \n> post\n> the output to this mailing list.\n>\n> t.v.\n>\n\n\n",
"msg_date": "Thu, 20 Apr 2006 19:05:57 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "On Thu, 20 Apr 2006, Brendan Duddridge wrote:\n\n> Hi Tomas,\n>\n> Hmm... ktrace -p PID -c returns immediately without doing anything\n> unless I've previously done a ktrace -p PID.\n>\n> According to the man page for ktrace's -c flag:\n> -c Clear the trace points associated with the specified file\n> or processes.\n\nOn other systems, strace/truss with -c produces a list of sys calls with\nthe number of times they've been called in the elapsed period.\n\nTo answer your other question, temporarily disabling fsync during the\nrecovery should speed it up.\n\nFor future reference, processing thousands of WAL files for recovery is\nnot ideal. You should be doing a base backup much more often.\n\nGavin\n",
"msg_date": "Fri, 21 Apr 2006 11:15:28 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "On Thu, 2006-04-20 at 13:29 -0600, Brendan Duddridge wrote:\n\n> We had a database issue today that caused us to have to restore to \n> our most recent backup. We are using PITR so we have 3120 WAL files \n> that need to be applied to the database.\n\nHow often are you taking base backups?\n\n> After 45 minutes, it has restored only 230 WAL files. At this rate, \n> it's going to take about 10 hours to restore our database.\n\n> Most of the time, the server is not using very much CPU time or I/O \n> time. So I'm wondering what can be done to speed up the process?\n\nYou can improve the performance of a recovery by making your restore\ncommand overlap retrieval of files. The recovery process waits while the\nrestore command returns a file, so by doing an asynchronous lookahead of\none file you can be gunzipping the next file while the current one is\nbeing processed.\n\nI'll either document this better, or build an overlap into the restore\ncommand processing itself, so the script doesn't need to do this.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com/\n\n",
"msg_date": "Sun, 23 Apr 2006 17:06:07 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Hi Simon,\n\nThe backup with 3120 WAL files was a 2 day old base backup. We've moved\nto a 1 day base backup now, but that'll still be 1600 WALs or so a day.\nThat will probably take 5 hours to restore I suspect. Unless we move to\n2 or more base backups per day. That seems crazy though.\n\nSo how do you overlap the restore process with the retrieving of files?\n\nOur restore command is:\n\nrestore_command = 'gunzip </wal_archive/%f.gz>%p'\n\nIf I change it to:\n\nrestore_command = 'gunzip </wal_archive/%f.gz>%p &'\n\nto execute the restore command in the background, will that do the \ntrick?\n\nBut I don't think the real problem was the retrieval of the files. It \nonly\ntook maybe 1/2 a second to retrieve the file, but often took anywhere \nfrom\n5 to 30 seconds to process the file. More so on the longer end of the \nscale.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 23, 2006, at 10:06 AM, Simon Riggs wrote:\n\n> On Thu, 2006-04-20 at 13:29 -0600, Brendan Duddridge wrote:\n>\n>> We had a database issue today that caused us to have to restore to\n>> our most recent backup. We are using PITR so we have 3120 WAL files\n>> that need to be applied to the database.\n>\n> How often are you taking base backups?\n>\n>> After 45 minutes, it has restored only 230 WAL files. At this rate,\n>> it's going to take about 10 hours to restore our database.\n>\n>> Most of the time, the server is not using very much CPU time or I/O\n>> time. So I'm wondering what can be done to speed up the process?\n>\n> You can improve the performance of a recovery by making your restore\n> command overlap retrieval of files. The recovery process waits \n> while the\n> restore command returns a file, so by doing an asynchronous \n> lookahead of\n> one file you can be gunzipping the next file while the current one is\n> being processed.\n>\n> I'll either document this better, or build an overlap into the restore\n> command processing itself, so the script doesn't need to do this.\n>\n> -- \n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com/\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Sun, 23 Apr 2006 22:46:35 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recovery will take 10 hours"
},
{
"msg_contents": "Hi, Brandan,\n\nBrendan Duddridge wrote:\n\n> So how do you overlap the restore process with the retrieving of files?\n\nYou need a shell script as restore command that does both uncompressing\nthe current file, and starting a background decompress of the next\nfile(s). It also has to check whether the current file is already in\nprogress from a last run, and wait until this is finished instead of\ndecompressing it. Seems to be a little complicated than it sounds first.\n\n> restore_command = 'gunzip </wal_archive/%f.gz>%p &'\n\nWarning: Don't do it this way!\n\nIt will break things because PostgreSQL will try to access a\nnot-completely-restored wal file.\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 24 Apr 2006 10:29:44 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
},
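A minimal sketch of the overlapping restore command Markus describes, written here in Python rather than shell for readability. The /wal_archive path is the one used in the thread; the staging directory /wal_stage, the script name restore_wal.py and the simplified next-segment arithmetic are illustrative assumptions only (real WAL segment numbering wraps per log file), and a production script would also want retry logic for the flaky-NFS case Tom mentions earlier in the thread.

#!/usr/bin/env python
# Sketch of an overlapping restore_command: hand the requested WAL segment
# to the server, then decompress the next one in the background so recovery
# never waits on gunzip. A missing archive file makes the script exit
# non-zero, which recovery treats as the end of the archive.
import gzip, os, shutil, subprocess, sys

ARCHIVE = "/wal_archive"   # gzipped WAL archive (path from the thread)
STAGE = "/wal_stage"       # local prefetch area (assumed path)

def decompress(src_gz, dst):
    # Write to a temporary name and rename, so a half-written segment can
    # never be picked up -- the failure mode Markus warns about.
    tmp = dst + ".tmp"
    with gzip.open(src_gz, "rb") as f_in, open(tmp, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.rename(tmp, dst)

def next_segment(name):
    # Simplified successor: treat the 24 hex digits as one big counter.
    # Real WAL numbering wraps per log file, so this is only a sketch.
    return "%024X" % (int(name, 16) + 1)

if not os.path.isdir(STAGE):
    os.makedirs(STAGE)

if sys.argv[1] == "--prefetch":                  # background helper mode
    seg = sys.argv[2]
    decompress(os.path.join(ARCHIVE, seg + ".gz"), os.path.join(STAGE, seg))
    sys.exit(0)

walfile, target = sys.argv[1], sys.argv[2]       # %f and %p from recovery.conf
staged = os.path.join(STAGE, walfile)
if os.path.exists(staged):                       # prefetched by the previous call
    shutil.move(staged, target)
else:                                            # fall back to a synchronous restore
    decompress(os.path.join(ARCHIVE, walfile + ".gz"), target)

if len(walfile) == 24:                           # skip .history and .backup files
    nxt = next_segment(walfile)
    if os.path.exists(os.path.join(ARCHIVE, nxt + ".gz")):
        subprocess.Popen([sys.executable, os.path.abspath(__file__),
                          "--prefetch", nxt])

It would be wired up in recovery.conf as restore_command = 'python /usr/local/bin/restore_wal.py %f %p' (path assumed); the trade-off is one extra decompression process per segment in exchange for keeping the redo loop fed.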
{
"msg_contents": "On Sun, 2006-04-23 at 22:46 -0600, Brendan Duddridge wrote:\n\n> So how do you overlap the restore process with the retrieving of files?\n\nThe restore command can be *anything*. You just write a script...\n\n> Our restore command is:\n> \n> restore_command = 'gunzip </wal_archive/%f.gz>%p'\n> \n> If I change it to:\n> \n> restore_command = 'gunzip </wal_archive/%f.gz>%p &'\n> \n> to execute the restore command in the background, will that do the \n> trick?\n\nNo, but you can execute a shell script that does use & internally.\n\n> But I don't think the real problem was the retrieval of the files. It \n> only\n> took maybe 1/2 a second to retrieve the file, but often took anywhere \n> from\n> 5 to 30 seconds to process the file. More so on the longer end of the \n> scale.\n\nSorry, thought you meant the decompression time.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com/\n\n",
"msg_date": "Mon, 24 Apr 2006 18:27:31 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recovery will take 10 hours"
}
] |
[
{
"msg_contents": "Greetings,\n\nI'd like to introduce a new readahead framework for the linux kernel:\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html\n\nHOW IT WORKS\n\nIn adaptive readahead, the context based method may be of particular\ninterest to postgresql users. It works by peeking into the file cache\nand check if there are any history pages present or accessed. In this\nway it can detect almost all forms of sequential / semi-sequential read\npatterns, e.g.\n\t- parallel / interleaved sequential scans on one file\n\t- sequential reads across file open/close\n\t- mixed sequential / random accesses\n\t- sparse / skimming sequential read\n\nIt also have methods to detect some less common cases:\n\t- reading backward\n\t- seeking all over reading N pages\n\nWAYS TO BENEFIT FROM IT\n\nAs we know, postgresql relies on the kernel to do proper readahead.\nThe adaptive readahead might help performance in the following cases:\n\t- concurrent sequential scans\n\t- sequential scan on a fragmented table\n\t (some DBs suffer from this problem, not sure for pgsql)\n\t- index scan with clustered matches\n\t- index scan on majority rows (in case the planner goes wrong)\n\nTUNABLE PARAMETERS\n\nThere are two parameters which are described in this email:\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1024.html\n\nHere are the more oriented guidelines for postgresql users:\n\n- /proc/sys/vm/readahead_ratio\nSince most DB servers are bounty of memory, the danger of readahead\nthrashing is near to zero. In this case, you can set readahead_ratio to\n100(or even 200:), which helps the readahead window to scale up rapidly.\n\n- /proc/sys/vm/readahead_hit_rate\nSparse sequential reads are read patterns like {0, 2, 4, 5, 8, 11, ...}.\nIn this case we might prefer to do readahead to get good I/O performance\nwith the overhead of some useless pages. But if you prefer not to do so,\nset readahead_hit_rate to 1 will disable this feature.\n\n- /sys/block/sd<X>/queue/read_ahead_kb\nSet it to a large value(e.g. 4096) as you used to do.\nRAID users might want to use a bigger number.\n\nTRYING IT OUT\n\nThe latest patch for stable kernels can be downloaded here:\nhttp://www.vanheusden.com/ara/\n\nBefore compiling, make sure that the following options are enabled:\nProcessor type and features -> Adaptive file readahead\nProcessor type and features -> Readahead debug and accounting\n\nHELPING AND CONTRIBUTING\n\nThe patch is open to fine-tuning advices :)\nComments and benchmarking results are highly appreciated.\n\nThanks,\nWu\n",
"msg_date": "Fri, 21 Apr 2006 09:38:26 +0800",
"msg_from": "Wu Fengguang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Introducing a new linux readahead framework"
},
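For what it is worth, the three knobs above can also be applied from a tiny script instead of echo; this sketch simply writes the values suggested in the post. readahead_ratio and readahead_hit_rate only exist on a kernel built with the adaptive readahead patch, 'sda' is a placeholder device name, and root privileges are required.

# Apply the readahead settings suggested above (run as root).
def write_knob(path, value):
    with open(path, "w") as f:
        f.write(str(value))

write_knob("/proc/sys/vm/readahead_ratio", 100)         # let the window scale up quickly
write_knob("/proc/sys/vm/readahead_hit_rate", 1)        # 1 turns off readahead for sparse reads
write_knob("/sys/block/sda/queue/read_ahead_kb", 4096)  # large maximum readahead window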
{
"msg_contents": "On Fri, Apr 21, 2006 at 09:38:26AM +0800, Wu Fengguang wrote:\n> Greetings,\n> \n> I'd like to introduce a new readahead framework for the linux kernel:\n> http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html\n> \n> HOW IT WORKS\n> \n> In adaptive readahead, the context based method may be of particular\n> interest to postgresql users. It works by peeking into the file cache\n> and check if there are any history pages present or accessed. In this\n> way it can detect almost all forms of sequential / semi-sequential read\n> patterns, e.g.\n> \t- parallel / interleaved sequential scans on one file\n> \t- sequential reads across file open/close\n> \t- mixed sequential / random accesses\n> \t- sparse / skimming sequential read\n> \n> It also have methods to detect some less common cases:\n> \t- reading backward\n> \t- seeking all over reading N pages\n\nAre there any ways to inform the kernel that you either are or aren't\ndoing a sequential read? It seems that in some cases it would be better\nto bypass a bunch of tricky logic trying to determine that it's doing a\nsequential read. A sequential scan in PostgreSQL would be such a case.\n\nThe opposite example would be an index scan of a highly uncorrelated\nindex, which would produce mostly random reads from the table. In that\ncase, reading ahead probably makes very little sense, though your logic\nmight have a better idea of the access pattern than PostgreSQL does.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 20 Apr 2006 23:31:47 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": "On Thu, Apr 20, 2006 at 11:31:47PM -0500, Jim C. Nasby wrote:\n> > In adaptive readahead, the context based method may be of particular\n> > interest to postgresql users. It works by peeking into the file cache\n> > and check if there are any history pages present or accessed. In this\n> > way it can detect almost all forms of sequential / semi-sequential read\n> > patterns, e.g.\n> > \t- parallel / interleaved sequential scans on one file\n> > \t- sequential reads across file open/close\n> > \t- mixed sequential / random accesses\n> > \t- sparse / skimming sequential read\n> > \n> > It also have methods to detect some less common cases:\n> > \t- reading backward\n> > \t- seeking all over reading N pages\n> \n> Are there any ways to inform the kernel that you either are or aren't\n> doing a sequential read? It seems that in some cases it would be better\n\nThis call will disable readahead totally for fd:\n posix_fadvise(fd, any, any, POSIX_FADV_RANDOM);\n\nThis one will reenable it:\n posix_fadvise(fd, any, any, POSIX_FADV_NORMAL);\n\nThis one will enable readahead _and_ set max readahead window to\n2*max_readahead_kb:\n posix_fadvise(fd, any, any, POSIX_FADV_SEQUENTIAL);\n\n> to bypass a bunch of tricky logic trying to determine that it's doing a\n> sequential read. A sequential scan in PostgreSQL would be such a case.\n\nYou do not need to worry about the detecting `overhead' on sequential\nscans :) The adaptive readahead framework has a fast code path(the\nstateful method) to handle normal sequential reads, the detection of\nwhich is really trivial.\n\n> The opposite example would be an index scan of a highly uncorrelated\n> index, which would produce mostly random reads from the table. In that\n> case, reading ahead probably makes very little sense, though your logic\n> might have a better idea of the access pattern than PostgreSQL does.\n\nAs for the index scans, the failsafe code path(i.e. the context based\none) will normally be used, and it does have a little overhead in\nlooking up the page cache(about 0.4% more CPU time). However, the\npenalty of random disk access is so large that if ever it helps\nreducing a small fraction of disk accesses, you wins.\n\nThanks,\nWu\n",
"msg_date": "Fri, 21 Apr 2006 15:15:11 +0800",
"msg_from": "Wu Fengguang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
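The same hints can be issued from a Python process as well: os.posix_fadvise (Unix, Python 3.3+) is a thin wrapper around the C call shown above. This is purely an illustration of the three advice flags; the file name is a placeholder, and PostgreSQL of that era did not make these calls itself.

# Demonstration of the readahead advice flags via os.posix_fadvise.
# offset=0, length=0 means the advice covers the whole file.
import os

fd = os.open("/tmp/bigfile", os.O_RDONLY)   # placeholder file

# Expecting random access: disable kernel readahead for this descriptor.
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)

# Back to the default behaviour (readahead re-enabled).
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_NORMAL)

# Expecting a sequential scan: request the enlarged readahead window.
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)

os.close(fd)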
{
"msg_contents": "Hi, Wu,\n\nWu Fengguang wrote:\n\n>>>In adaptive readahead, the context based method may be of particular\n>>>interest to postgresql users. It works by peeking into the file cache\n>>>and check if there are any history pages present or accessed. In this\n>>>way it can detect almost all forms of sequential / semi-sequential read\n>>>patterns, e.g.\n>>>\t- parallel / interleaved sequential scans on one file\n>>>\t- sequential reads across file open/close\n>>>\t- mixed sequential / random accesses\n>>>\t- sparse / skimming sequential read\n>>>\n>>>It also have methods to detect some less common cases:\n>>>\t- reading backward\n>>>\t- seeking all over reading N pages\n\nGread news, thanks!\n\n> This call will disable readahead totally for fd:\n> posix_fadvise(fd, any, any, POSIX_FADV_RANDOM);\n> \n> This one will reenable it:\n> posix_fadvise(fd, any, any, POSIX_FADV_NORMAL);\n> \n> This one will enable readahead _and_ set max readahead window to\n> 2*max_readahead_kb:\n> posix_fadvise(fd, any, any, POSIX_FADV_SEQUENTIAL);\n\nI think that this is an easy, understandable and useful interpretation\nof posix_fadvise() hints.\n\n\nAre there any rough estimates when this will get into mainline kernel\n(if you intend to submit)?\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Fri, 21 Apr 2006 09:53:34 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": "Hi Markus,\n\nOn Fri, Apr 21, 2006 at 09:53:34AM +0200, Markus Schaber wrote:\n> Are there any rough estimates when this will get into mainline kernel\n> (if you intend to submit)?\n\nI'm not quite sure :)\n\nThe patch itself has been pretty stable. To get it accepted, we must\nback it by good benchmarking results for some important applications.\nI have confirmed that file service via FTP/HTTP/NFS can more or less\nbenefit from it. However, database services have not been touched yet.\nOracle/DB2 seem to bypass the readahead code route, while postgresql\nrelies totally on kernel readahead logic. So if postgresql is proved\nto work well with this patch, it will have good opportunity to go into\nmainline :)\n\nThanks,\nWu\n",
"msg_date": "Fri, 21 Apr 2006 20:20:28 +0800",
"msg_from": "Wu Fengguang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": "On Fri, Apr 21, 2006 at 08:20:28PM +0800, Wu Fengguang wrote:\n> Hi Markus,\n> \n> On Fri, Apr 21, 2006 at 09:53:34AM +0200, Markus Schaber wrote:\n> > Are there any rough estimates when this will get into mainline kernel\n> > (if you intend to submit)?\n> \n> I'm not quite sure :)\n> \n> The patch itself has been pretty stable. To get it accepted, we must\n> back it by good benchmarking results for some important applications.\n> I have confirmed that file service via FTP/HTTP/NFS can more or less\n> benefit from it. However, database services have not been touched yet.\n> Oracle/DB2 seem to bypass the readahead code route, while postgresql\n> relies totally on kernel readahead logic. So if postgresql is proved\n> to work well with this patch, it will have good opportunity to go into\n> mainline :)\n\nIIRC Mark from OSDL said he'd try testing this when he gets a chance,\nbut you could also try running dbt2 and dbt3 against it.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 21 Apr 2006 13:34:24 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": "On Fri, Apr 21, 2006 at 01:34:24PM -0500, Jim C. Nasby wrote:\n> IIRC Mark from OSDL said he'd try testing this when he gets a chance,\n> but you could also try running dbt2 and dbt3 against it.\n\nThanks for the info, I'll look into them.\n\nRegards,\nwu\n",
"msg_date": "Sat, 22 Apr 2006 20:15:26 +0800",
"msg_from": "Wu Fengguang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": " From my initial testing this is very promising for a postgres server. \nBenchmark-wise, a simple dd with an 8k blocksize gets ~200MB/s as \ncompared to ~140MB/s on the same hardware without the patch. Also, that \n200MB/s seems to be unaffected by the dd blocksize, whereas without the \npatch a 512k blocksize would get ~100MB/s. I'm now watching to see how \nit does over a couple of days on real-world workloads. \n\nMike Stone\n",
"msg_date": "Wed, 26 Apr 2006 10:43:49 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
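For anyone who wants to repeat the comparison without dd, a rough Python equivalent of the 8k-blocksize sequential read test is below. The file path is a placeholder, and the file should be much larger than RAM (or the page cache dropped first) so the figure reflects the disk and the readahead logic rather than cached pages.

# Rough equivalent of `dd if=FILE of=/dev/null bs=8k`: sequential 8 kB
# reads, reporting throughput in MB/s.
import os, time

PATH = "/tmp/bigfile"    # placeholder test file
BLOCK = 8 * 1024         # 8 kB, matching the dd blocksize above

fd = os.open(PATH, os.O_RDONLY)
total = 0
start = time.time()
while True:
    buf = os.read(fd, BLOCK)
    if not buf:
        break
    total += len(buf)
elapsed = time.time() - start
os.close(fd)
print("%d bytes in %.1f s -> %.1f MB/s" % (total, elapsed, total / (1024.0 * 1024.0) / elapsed))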
{
"msg_contents": "I found an average 14% improvement Using Pg 7.4.11 with odbc-bench as my\ntest bed with Wu's kernel patch. I have not tried version 8.x yet.\n\nThanks Wu. \n\nSteve Poe\n\nUsing Postgresql 7.4.11, on an dual Opteron with 4GB\n\nOn Fri, 2006-04-21 at 09:38 +0800, Wu Fengguang wrote:\n> Greetings,\n> \n> I'd like to introduce a new readahead framework for the linux kernel:\n> http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html\n> \n> HOW IT WORKS\n> \n> In adaptive readahead, the context based method may be of particular\n> interest to postgresql users. It works by peeking into the file cache\n> and check if there are any history pages present or accessed. In this\n> way it can detect almost all forms of sequential / semi-sequential read\n> patterns, e.g.\n> \t- parallel / interleaved sequential scans on one file\n> \t- sequential reads across file open/close\n> \t- mixed sequential / random accesses\n> \t- sparse / skimming sequential read\n> \n> It also have methods to detect some less common cases:\n> \t- reading backward\n> \t- seeking all over reading N pages\n> \n> WAYS TO BENEFIT FROM IT\n> \n> As we know, postgresql relies on the kernel to do proper readahead.\n> The adaptive readahead might help performance in the following cases:\n> \t- concurrent sequential scans\n> \t- sequential scan on a fragmented table\n> \t (some DBs suffer from this problem, not sure for pgsql)\n> \t- index scan with clustered matches\n> \t- index scan on majority rows (in case the planner goes wrong)\n> \n> TUNABLE PARAMETERS\n> \n> There are two parameters which are described in this email:\n> http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1024.html\n> \n> Here are the more oriented guidelines for postgresql users:\n> \n> - /proc/sys/vm/readahead_ratio\n> Since most DB servers are bounty of memory, the danger of readahead\n> thrashing is near to zero. In this case, you can set readahead_ratio to\n> 100(or even 200:), which helps the readahead window to scale up rapidly.\n> \n> - /proc/sys/vm/readahead_hit_rate\n> Sparse sequential reads are read patterns like {0, 2, 4, 5, 8, 11, ...}.\n> In this case we might prefer to do readahead to get good I/O performance\n> with the overhead of some useless pages. But if you prefer not to do so,\n> set readahead_hit_rate to 1 will disable this feature.\n> \n> - /sys/block/sd<X>/queue/read_ahead_kb\n> Set it to a large value(e.g. 4096) as you used to do.\n> RAID users might want to use a bigger number.\n> \n> TRYING IT OUT\n> \n> The latest patch for stable kernels can be downloaded here:\n> http://www.vanheusden.com/ara/\n> \n> Before compiling, make sure that the following options are enabled:\n> Processor type and features -> Adaptive file readahead\n> Processor type and features -> Readahead debug and accounting\n> \n> HELPING AND CONTRIBUTING\n> \n> The patch is open to fine-tuning advices :)\n> Comments and benchmarking results are highly appreciated.\n> \n> Thanks,\n> Wu\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Wed, 26 Apr 2006 15:08:59 -0500",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": "(including bizgres-general)\n\nHas anyone done any testing on bizgres? It's got some patches that\neliminate a lot of IO bottlenecks, so it might present even larger\ngains.\n\nOn Wed, Apr 26, 2006 at 03:08:59PM -0500, Steve Poe wrote:\n> I found an average 14% improvement Using Pg 7.4.11 with odbc-bench as my\n> test bed with Wu's kernel patch. I have not tried version 8.x yet.\n> \n> Thanks Wu. \n> \n> Steve Poe\n> \n> Using Postgresql 7.4.11, on an dual Opteron with 4GB\n> \n> On Fri, 2006-04-21 at 09:38 +0800, Wu Fengguang wrote:\n> > Greetings,\n> > \n> > I'd like to introduce a new readahead framework for the linux kernel:\n> > http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html\n> > \n> > HOW IT WORKS\n> > \n> > In adaptive readahead, the context based method may be of particular\n> > interest to postgresql users. It works by peeking into the file cache\n> > and check if there are any history pages present or accessed. In this\n> > way it can detect almost all forms of sequential / semi-sequential read\n> > patterns, e.g.\n> > \t- parallel / interleaved sequential scans on one file\n> > \t- sequential reads across file open/close\n> > \t- mixed sequential / random accesses\n> > \t- sparse / skimming sequential read\n> > \n> > It also have methods to detect some less common cases:\n> > \t- reading backward\n> > \t- seeking all over reading N pages\n> > \n> > WAYS TO BENEFIT FROM IT\n> > \n> > As we know, postgresql relies on the kernel to do proper readahead.\n> > The adaptive readahead might help performance in the following cases:\n> > \t- concurrent sequential scans\n> > \t- sequential scan on a fragmented table\n> > \t (some DBs suffer from this problem, not sure for pgsql)\n> > \t- index scan with clustered matches\n> > \t- index scan on majority rows (in case the planner goes wrong)\n> > \n> > TUNABLE PARAMETERS\n> > \n> > There are two parameters which are described in this email:\n> > http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1024.html\n> > \n> > Here are the more oriented guidelines for postgresql users:\n> > \n> > - /proc/sys/vm/readahead_ratio\n> > Since most DB servers are bounty of memory, the danger of readahead\n> > thrashing is near to zero. In this case, you can set readahead_ratio to\n> > 100(or even 200:), which helps the readahead window to scale up rapidly.\n> > \n> > - /proc/sys/vm/readahead_hit_rate\n> > Sparse sequential reads are read patterns like {0, 2, 4, 5, 8, 11, ...}.\n> > In this case we might prefer to do readahead to get good I/O performance\n> > with the overhead of some useless pages. But if you prefer not to do so,\n> > set readahead_hit_rate to 1 will disable this feature.\n> > \n> > - /sys/block/sd<X>/queue/read_ahead_kb\n> > Set it to a large value(e.g. 
4096) as you used to do.\n> > RAID users might want to use a bigger number.\n> > \n> > TRYING IT OUT\n> > \n> > The latest patch for stable kernels can be downloaded here:\n> > http://www.vanheusden.com/ara/\n> > \n> > Before compiling, make sure that the following options are enabled:\n> > Processor type and features -> Adaptive file readahead\n> > Processor type and features -> Readahead debug and accounting\n> > \n> > HELPING AND CONTRIBUTING\n> > \n> > The patch is open to fine-tuning advices :)\n> > Comments and benchmarking results are highly appreciated.\n> > \n> > Thanks,\n> > Wu\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 26 Apr 2006 17:28:56 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
},
{
"msg_contents": "Jim,\n\nI¹m thinking about it, we¹re already using a fixed read-ahead of 16MB using\nblockdev on the stock Redhat 2.6.9 kernel, it would be nice to not have to\nset this so we may try it.\n\n- Luke \n\n\nOn 4/26/06 3:28 PM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> (including bizgres-general)\n> \n> Has anyone done any testing on bizgres? It's got some patches that\n> eliminate a lot of IO bottlenecks, so it might present even larger\n> gains.\n> \n> On Wed, Apr 26, 2006 at 03:08:59PM -0500, Steve Poe wrote:\n>> > I found an average 14% improvement Using Pg 7.4.11 with odbc-bench as my\n>> > test bed with Wu's kernel patch. I have not tried version 8.x yet.\n>> >\n>> > Thanks Wu. \n>> >\n>> > Steve Poe\n>> >\n>> > Using Postgresql 7.4.11, on an dual Opteron with 4GB\n>> >\n>> > On Fri, 2006-04-21 at 09:38 +0800, Wu Fengguang wrote:\n>>> > > Greetings,\n>>> > >\n>>> > > I'd like to introduce a new readahead framework for the linux kernel:\n>>> > > http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1021.html\n>>> > >\n>>> > > HOW IT WORKS\n>>> > >\n>>> > > In adaptive readahead, the context based method may be of particular\n>>> > > interest to postgresql users. It works by peeking into the file cache\n>>> > > and check if there are any history pages present or accessed. In this\n>>> > > way it can detect almost all forms of sequential / semi-sequential read\n>>> > > patterns, e.g.\n>>> > > - parallel / interleaved sequential scans on one file\n>>> > > - sequential reads across file open/close\n>>> > > - mixed sequential / random accesses\n>>> > > - sparse / skimming sequential read\n>>> > >\n>>> > > It also have methods to detect some less common cases:\n>>> > > - reading backward\n>>> > > - seeking all over reading N pages\n>>> > >\n>>> > > WAYS TO BENEFIT FROM IT\n>>> > >\n>>> > > As we know, postgresql relies on the kernel to do proper readahead.\n>>> > > The adaptive readahead might help performance in the following cases:\n>>> > > - concurrent sequential scans\n>>> > > - sequential scan on a fragmented table\n>>> > > (some DBs suffer from this problem, not sure for pgsql)\n>>> > > - index scan with clustered matches\n>>> > > - index scan on majority rows (in case the planner goes wrong)\n>>> > >\n>>> > > TUNABLE PARAMETERS\n>>> > >\n>>> > > There are two parameters which are described in this email:\n>>> > > http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/1024.html\n>>> > >\n>>> > > Here are the more oriented guidelines for postgresql users:\n>>> > >\n>>> > > - /proc/sys/vm/readahead_ratio\n>>> > > Since most DB servers are bounty of memory, the danger of readahead\n>>> > > thrashing is near to zero. In this case, you can set readahead_ratio to\n>>> > > 100(or even 200:), which helps the readahead window to scale up rapidly.\n>>> > >\n>>> > > - /proc/sys/vm/readahead_hit_rate\n>>> > > Sparse sequential reads are read patterns like {0, 2, 4, 5, 8, 11, ...}.\n>>> > > In this case we might prefer to do readahead to get good I/O performance\n>>> > > with the overhead of some useless pages. But if you prefer not to do so,\n>>> > > set readahead_hit_rate to 1 will disable this feature.\n>>> > >\n>>> > > - /sys/block/sd<X>/queue/read_ahead_kb\n>>> > > Set it to a large value(e.g. 
4096) as you used to do.\n>>> > > RAID users might want to use a bigger number.\n>>> > >\n>>> > > TRYING IT OUT\n>>> > >\n>>> > > The latest patch for stable kernels can be downloaded here:\n>>> > > http://www.vanheusden.com/ara/\n>>> > >\n>>> > > Before compiling, make sure that the following options are enabled:\n>>> > > Processor type and features -> Adaptive file readahead\n>>> > > Processor type and features -> Readahead debug and accounting\n>>> > >\n>>> > > HELPING AND CONTRIBUTING\n>>> > >\n>>> > > The patch is open to fine-tuning advices :)\n>>> > > Comments and benchmarking results are highly appreciated.\n>>> > >\n>>> > > Thanks,\n>>> > > Wu\n>>> > >\n>>> > > ---------------------------(end of broadcast)---------------------------\n>>> > > TIP 4: Have you searched our list archives?\n>>> > >\n>>> > > http://archives.postgresql.org\n>> >\n>> >\n>> > ---------------------------(end of broadcast)---------------------------\n>> > TIP 4: Have you searched our list archives?\n>> >\n>> > http://archives.postgresql.org\n>> >\n> \n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> _______________________________________________\n> Bizgres-general mailing list\n> [email protected]\n> http://pgfoundry.org/mailman/listinfo/bizgres-general\n> \n> \n",
"msg_date": "Wed, 26 Apr 2006 16:33:40 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Bizgres-general] Introducing a new linux"
},
{
"msg_contents": "On Wed, Apr 26, 2006 at 04:33:40PM -0700, Luke Lonergan wrote:\n>I�m thinking about it, we�re already using a fixed read-ahead of 16MB using\n>blockdev on the stock Redhat 2.6.9 kernel, it would be nice to not have to\n>set this so we may try it.\n\nFWIW, I never saw much performance difference from doing that. Wu's \npatch, OTOH, gave a big boost.\n\nMike Stone\n",
"msg_date": "Wed, 26 Apr 2006 19:53:54 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Bizgres-general] Introducing a new linux"
},
{
"msg_contents": "On Wed, Apr 26, 2006 at 10:43:48AM -0400, Michael Stone wrote:\n>patch a 512k blocksize would get ~100MB/s. I'm now watching to see how \n>it does over a couple of days on real-world workloads. \n\nI've got one DB where the VACUUM ANALYZE generally takes 11M-12M ms;\nwith the patch the job took 1.7M ms. Another VACUUM that normally takes \nbetween 300k-500k ms took 150k. Definately a promising addition.\n\nMike Stone\n\n",
"msg_date": "Thu, 27 Apr 2006 07:43:42 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing a new linux readahead framework"
}
] |
[
{
"msg_contents": "Hi,\n\nI more or less often come about the problem of aggregating a\nchild table counting it's different states. The cleanest solution\nI've come up with so far is:\n\nBEGIN;\nCREATE TABLE parent (\n\tid int not null,\n \tname text not null,\n\tUNIQUE(id)\n);\n\nCREATE TABLE child (\n\tname text not null,\n\tstate int not null,\n\tparent int not null references parent(id)\n);\n\nCREATE VIEW parent_childs AS\nSELECT\n\tc.parent,\n\tcount(c.state) as childtotal,\n\tcount(c.state) - count(nullif(c.state,1)) as childstate1,\n\tcount(c.state) - count(nullif(c.state,2)) as childstate2,\n\tcount(c.state) - count(nullif(c.state,3)) as childstate3\nFROM child c\nGROUP BY parent;\n\nCREATE VIEW parent_view AS\nSELECT p.*,\npc.*\nFROM parent p\nLEFT JOIN parent_childs pc ON (p.id = pc.parent);\nCOMMIT;\n\nIs this the fastest way to build these aggregates (not considering\ntricks with triggers, etc)? The count(state) - count(nullif(...)) looks\na bit clumsy.\nI also experimented with a pgsql function to sum these up, but considered\nit as not-so-nice and it also always forces a sequential scan on the\ndata.\n\nThanks for any advice,\n\nJan\n\n",
"msg_date": "Fri, 21 Apr 2006 10:37:10 +0200",
"msg_from": "Jan Dittmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Better way to write aggregates?"
},
{
"msg_contents": "\nJan,\n\nI write queries like this\n\nCREATE VIEW parent_childs AS\nSELECT\n \tc.parent,\n \tcount(c.state) as childtotal,\n \tsum(case when c.state = 1 then 1 else 0 end) as childstate1,\n \tsum(case when c.state = 2 then 1 else 0 end) as childstate2,\n \tsum(case when c.state = 3 then 1 else 0 end) as childstate3\n FROM child c\n GROUP BY parent;\n\n---------- Original Message -----------\nFrom: Jan Dittmer <[email protected]>\nTo: [email protected]\nSent: Fri, 21 Apr 2006 10:37:10 +0200\nSubject: [PERFORM] Better way to write aggregates?\n\n> Hi,\n> \n> I more or less often come about the problem of aggregating a\n> child table counting it's different states. The cleanest solution\n> I've come up with so far is:\n> \n> BEGIN;\n> CREATE TABLE parent (\n> \tid int not null,\n> \tname text not null,\n> \tUNIQUE(id)\n> );\n> \n> CREATE TABLE child (\n> \tname text not null,\n> \tstate int not null,\n> \tparent int not null references parent(id)\n> );\n> \n> CREATE VIEW parent_childs AS\n> SELECT\n> \tc.parent,\n> \tcount(c.state) as childtotal,\n> \tcount(c.state) - count(nullif(c.state,1)) as childstate1,\n> \tcount(c.state) - count(nullif(c.state,2)) as childstate2,\n> \tcount(c.state) - count(nullif(c.state,3)) as childstate3\n> FROM child c\n> GROUP BY parent;\n> \n> CREATE VIEW parent_view AS\n> SELECT p.*,\n> pc.*\n> FROM parent p\n> LEFT JOIN parent_childs pc ON (p.id = pc.parent);\n> COMMIT;\n> \n> Is this the fastest way to build these aggregates (not considering\n> tricks with triggers, etc)? The count(state) - count(nullif(...)) looks\n> a bit clumsy.\n> I also experimented with a pgsql function to sum these up, but considered\n> it as not-so-nice and it also always forces a sequential scan on the\n> data.\n> \n> Thanks for any advice,\n> \n> Jan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n------- End of Original Message -------\n\n",
"msg_date": "Fri, 21 Apr 2006 08:27:32 -0400",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better way to write aggregates?"
},
{
"msg_contents": "Jim Buttafuoco wrote:\n> Jan,\n> \n> I write queries like this\n> \n> CREATE VIEW parent_childs AS\n> SELECT\n> \tc.parent,\n> \tcount(c.state) as childtotal,\n> \tsum(case when c.state = 1 then 1 else 0 end) as childstate1,\n> \tsum(case when c.state = 2 then 1 else 0 end) as childstate2,\n> \tsum(case when c.state = 3 then 1 else 0 end) as childstate3\n> FROM child c\n> GROUP BY parent;\n\nIt would help if booleans could be casted to integer 1/0 :-) But\nperformance wise it should be about the same? I think I'll\nrun some tests later today with real data.\nWould an index on NULLIF(state,1) help count(NULLIF(state,1)) ?\nCan one build an index on (case when c.state = 3 then 1 else 0 end)?\n\nThanks,\n\nJan\n\n",
"msg_date": "Fri, 21 Apr 2006 14:35:33 +0200",
"msg_from": "Jan Dittmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better way to write aggregates?"
},
{
"msg_contents": "\nI don't think an index will help you with this query. \n\n---------- Original Message -----------\nFrom: Jan Dittmer <[email protected]>\nTo: [email protected]\nCc: [email protected]\nSent: Fri, 21 Apr 2006 14:35:33 +0200\nSubject: Re: [PERFORM] Better way to write aggregates?\n\n> Jim Buttafuoco wrote:\n> > Jan,\n> > \n> > I write queries like this\n> > \n> > CREATE VIEW parent_childs AS\n> > SELECT\n> > \tc.parent,\n> > \tcount(c.state) as childtotal,\n> > \tsum(case when c.state = 1 then 1 else 0 end) as childstate1,\n> > \tsum(case when c.state = 2 then 1 else 0 end) as childstate2,\n> > \tsum(case when c.state = 3 then 1 else 0 end) as childstate3\n> > FROM child c\n> > GROUP BY parent;\n> \n> It would help if booleans could be casted to integer 1/0 :-) But\n> performance wise it should be about the same? I think I'll\n> run some tests later today with real data.\n> Would an index on NULLIF(state,1) help count(NULLIF(state,1)) ?\n> Can one build an index on (case when c.state = 3 then 1 else 0 end)?\n> \n> Thanks,\n> \n> Jan\n------- End of Original Message -------\n\n",
"msg_date": "Fri, 21 Apr 2006 08:38:58 -0400",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better way to write aggregates?"
},
{
"msg_contents": "Jan Dittmer <[email protected]> writes:\n> It would help if booleans could be casted to integer 1/0 :-)\n\nAs of 8.1 there is such a cast in the system:\n\nregression=# select 't'::bool::int;\n int4\n------\n 1\n(1 row)\n\nIn earlier releases you can make your own.\n\nAs for the original question, though: have you looked at the crosstab\nfunctions in contrib/tablefunc?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Apr 2006 10:26:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better way to write aggregates? "
}
] |
[
{
"msg_contents": "Hello ,\n \nI have a problem of performance with a query. I use PostgreSQL 8.1.3.\n \nThe distribution of Linux is Red Hat Enterprise Linux ES release 4 (Nahant Update 2) and the server is a bi-processor Xeon 2.4GHz with 1 Go of Ram and the size of the database files is about 60 Go.\n \nThe problem is that this query uses only a few percentage of the cpu as seen with the top command :\n \nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n3342 postgres 18 0 140m 134m 132m D 5.9 13.3 17:04.06 postmaster\n \nThe vm stat command : \nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 1 184 16804 38104 933516 0 0 3092 55 667 145 12 4 71 14\n 0 1 184 16528 38140 933480 0 0 2236 0 1206 388 2 1 50 47\n 0 1 184 15008 38188 935252 0 0 2688 92 1209 396 2 0 49 48\n \n \nThe config of PostgresQL is : \n \n \nshared_buffers = 16384 (128Mo)\nwork_mem = 65536 (64 Mo)\nmaintenance_work_mem = 98304 (96 Mo)\neffective_cache_size = 84000\n \nI think that the problem is there are too much %wait that are waiting cause of the really bad rate of lecture (bi) which is only 3 Mo/s .\nIt is this value I do not understand because whit other queries this rate is about 120 Mo/s. I use SCSI DISK and a RAID 0 hardware system .\n \nThis is the query plan of the query :\n \n QUERY PLAN \n------------------------------------------------------------------------------------------------------------\n Aggregate (cost=24582205.20..24582205.22 rows=1 width=13)\n -> Nested Loop (cost=2.11..24582054.88 rows=60129 width=13)\n Join Filter: (\"inner\".l_quantity < (subplan))\n -> Seq Scan on part (cost=0.00..238744.00 rows=6013 width=4)\n Filter: ((p_brand = 'Brand#51'::bpchar) AND (p_container = 'MED JAR'::bpchar))\n -> Bitmap Heap Scan on lineitem (cost=2.11..126.18 rows=31 width=27)\n Recheck Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n -> Bitmap Index Scan on id_partkey_lineitem (cost=0.00..2.11 rows=31 width=0)\n Index Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n SubPlan\n -> Aggregate (cost=126.50..126.51 rows=1 width=10)\n -> Index Scan using id_partkey_lineitem on lineitem (cost=0.00..126.42 rows=31 width=10)\n Index Cond: (l_partkey = $0)\n(13 rows)\n \n \nThe number of tuples in Lineitem is 180 000 000.\n \nSo my question is what I have to do to increase the rate of the read which improve the execution of the query? \nI add that the server is only dedicated for PostgreSQL.\n \nRegards, \nHello ,\n \nI have a problem of performance with a query. 
I use PostgreSQL 8.1.3.\n \nThe distribution of Linux is Red Hat Enterprise Linux ES release 4 (Nahant Update 2) and the server is a bi-processor Xeon 2.4GHz with 1 Go of Ram and the size of the database files is about 60 Go.\n \nThe problem is that this query uses only a few percentage of the cpu as seen with the top command :\n \nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n3342 postgres 18 0 140m 134m 132m D 5.9 13.3 17:04.06 postmaster\n \nThe vm stat command : \nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 1 184 16804 38104 933516 0 0 3092 55 667 145 12 4 71 14\n 0 1 184 16528 38140 933480 0 0 2236 0 1206 388 2 1 50 47\n 0 1 184 15008 38188 935252 0 0 2688 92 1209 396 2 0 49 48\n \n \nThe config of PostgresQL is : \n \n \nshared_buffers = 16384 (128Mo)\nwork_mem = 65536 (64 Mo)\nmaintenance_work_mem = 98304 (96 Mo)\neffective_cache_size = 84000\n \nI think that the problem is there are too much %wait that are waiting cause of the really bad rate of lecture (bi) which is only 3 Mo/s .\nIt is this value I do not understand because whit other queries this rate is about 120 Mo/s. I use SCSI DISK and a RAID 0 hardware system .\n \nThis is the query plan of the query :\n \n QUERY PLAN \n------------------------------------------------------------------------------------------------------------\n Aggregate (cost=24582205.20..24582205.22 rows=1 width=13)\n -> Nested Loop (cost=2.11..24582054.88 rows=60129 width=13)\n Join Filter: (\"inner\".l_quantity < (subplan))\n -> Seq Scan on part (cost=0.00..238744.00 rows=6013 width=4)\n Filter: ((p_brand = 'Brand#51'::bpchar) AND (p_container = 'MED JAR'::bpchar))\n -> Bitmap Heap Scan on lineitem (cost=2.11..126.18 rows=31 width=27)\n Recheck Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n -> Bitmap Index Scan on id_partkey_lineitem (cost=0.00..2.11 rows=31 width=0)\n Index Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n SubPlan\n -> Aggregate (cost=126.50..126.51 rows=1 width=10)\n -> Index Scan using id_partkey_lineitem on lineitem (cost=0.00..126.42 rows=31 width=10)\n Index Cond: (l_partkey = $0)\n(13 rows)\n \n \nThe number of tuples in Lineitem is 180 000 000.\n \nSo my question is what I have to do to increase the rate of the read which improve the execution of the query? \nI add that the server is only dedicated for PostgreSQL.\n \nRegards,",
"msg_date": "Fri, 21 Apr 2006 11:33:15 +0200 (CEST)",
"msg_from": "luchot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Little use of CPU ( < 5%)"
},
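An aside on the plan itself: the SubPlan is re-evaluated for every (part, lineitem) pair the nested loop produces, which multiplies the random index reads on the 180-million-row lineitem table. The original query text was not posted, so the select list and the exact aggregate below are assumptions, but a rewrite along these lines computes the per-part aggregate once in a derived table instead of once per joined row:

    SELECT sum(l.l_extendedprice) AS total            -- assumed output column
    FROM part p
    JOIN lineitem l ON l.l_partkey = p.p_partkey
    JOIN (SELECT l_partkey, avg(l_quantity) AS avg_qty
            FROM lineitem
           GROUP BY l_partkey) agg ON agg.l_partkey = p.p_partkey
    WHERE p.p_brand = 'Brand#51'
      AND p.p_container = 'MED JAR'
      AND l.l_quantity < agg.avg_qty;

Whether this wins depends on how many part keys qualify; the one-pass aggregate over all of lineitem can also lose badly if it spills to disk, so it is only a sketch to try, not a guaranteed improvement.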
{
"msg_contents": "Maybe you could post the query and an EXPLAIN ANALYZE of the query. That\nwould give more information for trying to decide what is wrong.\n \nSo your question is basically why you get a slower read rate on this\nquery than on other queries? If I had to guess, maybe it could be that\nyou are scanning an index with a low correlation (The order of the\nrecords in the index is very different then the order of the records on\nthe disk.) causing your drives to do a lot of seeking. A possible fix\nfor this might be to cluster the table on the index, but I would check\nout the explain analyze first to see which step is really the slow one.\n \n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of luchot\nSent: Friday, April 21, 2006 4:33 AM\nTo: [email protected]\nSubject: [PERFORM] Little use of CPU ( < 5%)\n \nHello ,\n \nI have a problem of performance with a query. I use PostgreSQL 8.1.3.\n \nThe distribution of Linux is Red Hat Enterprise Linux ES release 4\n(Nahant Update 2) and the server is a bi-processor Xeon 2.4GHz with 1 Go\nof Ram and the size of the database files is about 60 Go.\n \nThe problem is that this query uses only a few percentage of the cpu as\nseen with the top command :\n \nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n\n3342 postgres 18 0 140m 134m 132m D 5.9 13.3 17:04.06 postmaster\n \nThe vm stat command : \nprocs -----------memory---------- ---swap-- -----io---- --system--\n----cpu----\n r b swpd free buff cache si so bi bo in\ncs us sy id wa\n 0 1 184 16804 38104 933516 0 0 3092 55 667 145 12\n4 71 14\n 0 1 184 16528 38140 933480 0 0 2236 0 1206 388 2\n1 50 47\n 0 1 184 15008 38188 935252 0 0 2688 92 1209 396 2\n0 49 48\n \n \nThe config of PostgresQL is : \n \n \nshared_buffers = 16384 (128Mo)\nwork_mem = 65536 (64 Mo)\nmaintenance_work_mem = 98304 (96 Mo)\neffective_cache_size = 84000\n \nI think that the problem is there are too much %wait that are waiting\ncause of the really bad rate of lecture (bi) which is only 3 Mo/s .\nIt is this value I do not understand because whit other queries this\nrate is about 120 Mo/s. I use SCSI DISK and a RAID 0 hardware system .\n \nThis is the query plan of the query :\n \n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------\n Aggregate (cost=24582205.20..24582205.22 rows=1 width=13)\n -> Nested Loop (cost=2.11..24582054.88 rows=60129 width=13)\n Join Filter: (\"inner\".l_quantity < (subplan))\n -> Seq Scan on part (cost=0.00..238744.00 rows=6013 width=4)\n Filter: ((p_brand = 'Brand#51'::bpchar) AND (p_container\n= 'MED JAR'::bpchar))\n -> Bitmap Heap Scan on lineitem (cost=2.11..126.18 rows=31\nwidth=27)\n Recheck Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n -> Bitmap Index Scan on id_partkey_lineitem\n(cost=0.00..2.11 rows=31 width=0)\n Index Cond: (\"outer\".p_partkey =\nlineitem.l_partkey)\n SubPlan\n -> Aggregate (cost=126.50..126.51 rows=1 width=10)\n -> Index Scan using id_partkey_lineitem on lineitem\n(cost=0.00..126.42 rows=31 width=10)\n Index Cond: (l_partkey = $0)\n(13 rows)\n \n \nThe number of tuples in Lineitem is 180 000 000.\n \nSo my question is what I have to do to increase the rate of the read\nwhich improve the execution of the query? \nI add that the server is only dedicated for PostgreSQL.\n \nRegards, \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nMaybe you could post the query and an\nEXPLAIN ANALYZE of the query. 
That would give more information for trying to\ndecide what is wrong.\n \nSo your question is basically why you get\na slower read rate on this query than on other queries? If I had to guess, maybe it could be that\nyou are scanning an index with a low correlation (The order of the records in\nthe index is very different then the order of the records on the disk.) causing\nyour drives to do a lot of seeking. \nA possible fix for this might be to cluster the table on the index, but\nI would check out the explain analyze first to see which\nstep is really the slow one.\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of luchot\nSent: Friday, April 21, 2006 4:33\nAM\nTo:\[email protected]\nSubject: [PERFORM] Little use of\nCPU ( < 5%)\n \nHello ,\n \nI have a problem of performance with a query. I use PostgreSQL 8.1.3.\n \nThe distribution of Linux is Red Hat Enterprise\nLinux ES release 4 (Nahant Update 2) and the server is a bi-processor Xeon 2.4GHz with 1 Go of Ram and the\nsize of the database files is about 60 Go.\n \nThe problem is that this query uses only a few percentage of the cpu as\nseen with the top command :\n \nPID\nUSER \nPR NI VIRT RES \nSHR S %CPU %MEM \nTIME+ COMMAND \n\n3342 postgres 18 0 140m 134m 132m D 5.9 13.3 17:04.06 postmaster\n \nThe vm stat command : \nprocs -----------memory---------- ---swap--\n-----io---- --system-- ----cpu----\n r \nb swpd free buff cache \nsi so bi bo in cs us sy \nid wa\n 0 1 184 16804 38104 933516 0 0 3092 55 667 145 12 \n4 71 14\n 0 \n1 184 16528 38140 933480 0 0 2236 0 1206 388 2 1 50 47\n 0 \n1 184 15008 38188 935252 0 0 2688 92 1209 396 2 \n0 49 48\n \n \nThe config of PostgresQL is\n: \n \n \nshared_buffers = 16384 (128Mo)\nwork_mem = 65536 \n(64 Mo)\nmaintenance_work_mem =\n98304 (96 Mo)\neffective_cache_size = 84000\n \nI think that the problem is\nthere are too much %wait that are waiting cause of the really bad\nrate of lecture (bi) which is only 3 Mo/s .\nIt is this value I do not\nunderstand because whit other queries this rate is about 120 Mo/s. I use SCSI\nDISK and a RAID 0 hardware system .\n \nThis is the query plan of the\nquery :\n \n \nQUERY PLAN \n\n------------------------------------------------------------------------------------------------------------\n Aggregate (cost=24582205.20..24582205.22 rows=1\nwidth=13)\n \n-> Nested Loop (cost=2.11..24582054.88 rows=60129\nwidth=13)\n \nJoin Filter: (\"inner\".l_quantity\n< (subplan))\n \n-> Seq Scan on part (cost=0.00..238744.00 rows=6013 width=4)\n \nFilter: ((p_brand = 'Brand#51'::bpchar) AND (p_container = 'MED\nJAR'::bpchar))\n \n-> Bitmap Heap Scan on\nlineitem (cost=2.11..126.18 rows=31\nwidth=27)\n \nRecheck Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n \n-> Bitmap Index Scan on\nid_partkey_lineitem \n(cost=0.00..2.11 rows=31 width=0)\n \nIndex Cond: (\"outer\".p_partkey = lineitem.l_partkey)\n \nSubPlan\n \n-> Aggregate (cost=126.50..126.51 rows=1 width=10)\n \n-> Index Scan using\nid_partkey_lineitem on lineitem \n(cost=0.00..126.42 rows=31 width=10)\n \nIndex Cond: (l_partkey = $0)\n(13 rows)\n \n \nThe number of tuples in\nLineitem is 180 000 000.\n \nSo my question is what I have\nto do to increase the rate of the read which improve the execution of the\nquery? \nI add that the server is only\ndedicated for PostgreSQL.\n \nRegards,",
"msg_date": "Fri, 21 Apr 2006 08:54:16 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Little use of CPU ( < 5%)"
},
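A minimal way to check the index correlation Dave is referring to, and to reorder the table if it really is the problem, using the table and index names visible in the plan; note that CLUSTER rewrites the whole table, so on a 180-million-row lineitem it needs both time and free disk space:

    SELECT attname, correlation
      FROM pg_stats
     WHERE tablename = 'lineitem' AND attname = 'l_partkey';

    -- a correlation near 0 means index order and on-disk order differ badly
    CLUSTER id_partkey_lineitem ON lineitem;   -- 8.1-era syntax: CLUSTER indexname ON tablename
    ANALYZE lineitem;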
{
"msg_contents": "Hi,\n\nI hope you can help me...there's something wrong going on my db server (OS. \nGNU/linux White box)... here's the problem...\n\nThe amount of Inactive memory Grows unlimited .... this happens only when \nPostgresql (8.1.1) is running... after a few days it \"eats\" all the RAM \nmemory ... so I've have to restart the server to free RAM....\n\nHere's the parametres actually active in then postgresql.conf file\n\nmax_connections = 100\nauthentication_timeout = 60\npassword_encryption = on\nshared_buffers = 1000\nwork_mem = 131076\nmaintenance_work_mem = 262152\nredirect_stderr = on\nlog_directory = 'pg_log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1440\nlog_rotation_size = 0\nlc_messages = 'es_ES.UTF-8'\nlc_monetary = 'es_ES.UTF-8'\nlc_numeric = 'es_ES.UTF-8'\nlc_time = 'es_ES.UTF-8'\n\nBelow you can find the content of meminfo file with 18 users connected \n(average during working days) and the server has benn running during 2 days \n2:19\n\n#cat /proc/meminfo\nMemTotal: 2074844 kB\nMemFree: 1371660 kB\nBuffers: 61748 kB\nCached: 555492 kB\nSwapCached: 0 kB\nActive: 348604 kB\nInactive: 305876 kB\nHighTotal: 1179440 kB\nHighFree: 579904 kB\nLowTotal: 895404 kB\nLowFree: 791756 kB\nSwapTotal: 2048248 kB\nSwapFree: 2048248 kB\nDirty: 260 kB\nWriteback: 0 kB\nMapped: 54824 kB\nSlab: 35756 kB\nCommitted_AS: 117624 kB\nPageTables: 3404 kB\nVmallocTotal: 106488 kB\nVmallocUsed: 3356 kB\nVmallocChunk: 102920 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugepagesize: 2048 kB\n\nthanks in advance for your help,\n\nAlvaro Arcila\n\npd: sorry for my english It's not very good, I hope I'm clear....\n\n\n",
"msg_date": "Fri, 21 Apr 2006 15:55:29 +0000",
"msg_from": "\"ALVARO ARCILA\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Inactive memory Grows unlimited"
},
{
"msg_contents": "ALVARO ARCILA wrote:\n> Hi,\n> \n> I hope you can help me...there's something wrong going on my db server (OS. \n> GNU/linux White box)... here's the problem...\n> \n> The amount of Inactive memory Grows unlimited .... this happens only when \n> Postgresql (8.1.1) is running... after a few days it \"eats\" all the RAM \n> memory ... so I've have to restart the server to free RAM....\n\nThis is normal Unix behavior. Leave it running for a few more days and\nyou'll that nothing wrong happens. Why do you think you need \"free\"\nmemory?\n\nOnly if the swap memory start getting used a lot you need to worry about\nmemory consumption.\n\nYou should upgrade to 8.1.3 BTW.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 21 Apr 2006 12:38:29 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inactive memory Grows unlimited"
},
{
"msg_contents": "Hi,\n\n>> The amount of Inactive memory Grows unlimited .... this happens only when \n>> Postgresql (8.1.1) is running... after a few days it \"eats\" all the RAM \n>> memory ... so I've have to restart the server to free RAM....\n> \n> This is normal Unix behavior. Leave it running for a few more days and\n> you'll that nothing wrong happens. Why do you think you need \"free\"\n> memory?\n> \n> Only if the swap memory start getting used a lot you need to worry about\n> memory consumption.\n> \n> You should upgrade to 8.1.3 BTW.\n\nAlso, shared_buffers seem too low and work_mem too much high for your setup:\n\n> shared_buffers = 1000\n> work_mem = 131076\n\nYou should really raise shared_buffers and decrease work_mem: 1000 \nshared_buffers is probably too conservative. On the other hand 128MB per \nsort could be eating your RAM quickly with 18 users connected.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n",
"msg_date": "Mon, 24 Apr 2006 07:36:57 +0200",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inactive memory Grows unlimited"
}
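A sketch of the direction Matteo is pointing at, as postgresql.conf entries for a 2 GB machine; the exact numbers are assumptions to be tuned against the real workload, and in 8.1 shared_buffers and effective_cache_size are counted in 8 kB pages while work_mem is in kB:

    shared_buffers = 20000         # ~160 MB rather than 1000 (8 MB)
    work_mem = 8192                # 8 MB per sort/hash; 128 MB x 18 users can exhaust RAM
    effective_cache_size = 100000  # ~800 MB, roughly what 'Cached' in /proc/meminfo shows

work_mem and effective_cache_size take effect on a reload, but changing shared_buffers requires a restart of the server.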
] |
[
{
"msg_contents": "Hi\n\nI need to set security for row level but not based on Database user's\nlogin. It should be based on the user table login. For the particular\nuser I need to allow only the particular records to access insert,\nupdate delete and select.\n\nLet me explain clearly\n\nFor example think we are using asp/asp.net website\n\nEg:\n\nwww.test.com\n\nSo take this is our website and if you try this URL then you will get a\nwindow for Login name and password.\n For example the Login name is windows user name (Here windows user\nmeans server windows user and not client) and windows password. So if\nyou have login user id you can able to login in our site and we have\nanother check. We have our own usertable this table consist all the\nuser login names and user rights. We will check the windows user who\nlogin in our site has rights in the usertable I mean he is present in\nthe usertable if he is not present then we will display a message you\nhave no rights to access this site.\n If he has login id in our usertable then he allowed viewing our\npages. Still if he has the login id we will check the user who login\nhas how much right to access to each page and the records of each table\nits all depend on the user rights.\n\n So, here I need the row level security. For each and every table we\nneed to check the corresponding user and executing the record produce\nlot of business logic problem for us.\n So after the user login we need automatically to set row level\nsecurity for all the tables. Based on the user who login.\n\nSo from there if we try select * from <tablename> then we can only able\nto get the allowed records to select, insert, update, delete.\n\nPlease can some one help how to solve this?\n\nNote:\n\nFor some help you can refer the below URL (See in that they only given\nabout the row level and column level security for each database users\nnot for our required concept)\n\nhttp://www.microsoft.com/technet/prodtechnol/sql/2005/multisec.mspx\n\n\nThanks in advance\nRams\n\n",
"msg_date": "21 Apr 2006 03:28:48 -0700",
"msg_from": "\"Friends\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "security for row level but not based on Database user's login"
},
{
"msg_contents": "Friends wrote:\n> Hi\n> \n> I need to set security for row level but not based on Database user's\n> login. It should be based on the user table login. For the particular\n> user I need to allow only the particular records to access insert,\n> update delete and select.\n\nWell, the data access stuff is all manageable via views, which is the \nstandard way to do this.\n\nYou don't say which version of PostgreSQL you are using, but I'd be \ntempted just to switch to a different user after connecting and use the \nsession_user system function to control what is visible in the view.\n\nFor example:\nCREATE VIEW my_contacts AS SELECT * FROM contacts WHERE owner = \nsession_user;\n\nIf that's not practical then you'll need to write some functions to \nsimulate your own session_user (say application_user()). This is easiest \nto write in plperl/pltcl or some other interpreted language - check the \nlist archvies for plenty of discussion.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 25 Apr 2006 10:40:08 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: security for row level but not based on Database user's"
},
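A rough sketch of the application_user() idea in plpgsql, using a per-connection temporary table instead of plperl/pltcl; all object names here are made up for illustration, and the view reuses the contacts/owner example from the message above:

    CREATE OR REPLACE FUNCTION set_application_user(p_user text) RETURNS void AS $$
    BEGIN
        BEGIN
            DELETE FROM app_session;
        EXCEPTION WHEN undefined_table THEN
            -- first call on this connection: create the one-row session table
            CREATE TEMP TABLE app_session (username text);
        END;
        INSERT INTO app_session VALUES (p_user);
    END;
    $$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION application_user() RETURNS text AS $$
        SELECT username FROM app_session;
    $$ LANGUAGE sql STABLE;

    CREATE VIEW my_contacts AS
        SELECT * FROM contacts WHERE owner = application_user();

The web application calls set_application_user('rams') right after grabbing a connection; application_user() (and therefore the view) errors out if it is queried before that call, which is arguably the safe behaviour.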
{
"msg_contents": "Ramasamy wrote:\n> Hi Richard ,\n> Very good Day. Great information that you given to me.\n\nGreat glad you think it's useful. Oh, don't forget to cc: the \nmailing-list - that's the convention around here.\n\n> I will try in your idea. Here I am using SQL server 2000(Even I can use SQL\n> Sever 2005 too if needed.) We are dealing with lot of databases with lot of\n> business logic. I think your information will great for me.\n\nAh - you're not using PostgreSQL? Well, the principle is the same but \nyou should be aware that this is a PostgreSQL list so the details will \nbe different.\n\n> Let I try and I will back with you.\n> \n> Thanks and hands of too you.\n> \n> But i don't have any idea of getting Session from databases (I think you are\n> meaning that user can be handle with session). If you have some more idea\n> then it will be great full to me.\n\nWith MS-SQL you'll probably want to look at T-SQL variables, although I \ndon't know if they last longer than a single transaction. Failing that, \nfunctions or temporary tables might be a help.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 25 Apr 2006 11:40:13 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: security for row level but not based on Database user's"
}
] |
[
{
"msg_contents": "We are experiencing gradually worsening performance in PostgreSQL 7.4.7, on \na system with the following specs:\nLinux OS (Fedora Core 1, 2.4 kernal)\nFlash file system (2 Gig, about 80% full)\n256 Meg RAM\n566 MHz Celeron CPU\n\nWe use Orbit 2.9.8 to access PostGres. The database contains 62 tables.\n\nWhen the system is running with a fresh copy of the database, performance is \nfine. At its worst, we are seeing fairly simple SELECT queries taking up to \n1 second to execute. When these queries are run in a loop, the loop can take \nup to 30 seconds to execute, instead of the 2 seconds or so that we would \nexpect.\n\nVACUUM FULL ANALYZE helps, but doesn't seem to eliminate the problem.\n\nThe following table show average execution time in \"bad\" performance mode in \nthe first column, execution time after VACUUM ANALYZE in the second column, \nand % improvement (or degradation?) in the third. The fourth column show the \nquery that was executed.\n\n741.831|582.038|-21.5| ^IDECLARE table_cursor\n170.065|73.032|-57.1| FETCH ALL in table_cursor\n41.953|45.513|8.5| CLOSE table_cursor\n61.504|47.374|-23.0| SELECT last_value FROM pm_id_seq\n39.651|46.454|17.2| select id from la_looprunner\n1202.170|265.316|-77.9| select id from rt_tran\n700.669|660.746|-5.7| ^IDECLARE my_tran_load_cursor\n1192.182|47.258|-96.0| FETCH ALL in my_tran_load_cursor\n181.934|89.752|-50.7| CLOSE my_tran_load_cursor\n487.285|873.474|79.3| ^IDECLARE my_get_router_cursor\n51.543|69.950|35.7| FETCH ALL in my_get_router_cursor\n48.312|74.061|53.3| CLOSE my_get_router_cursor\n814.051|1016.219|24.8| SELECT $1 = 'INSERT'\n57.452|78.863|37.3| select id from op_sched\n48.010|117.409|144.6| select short_name, long_name from la_loopapp\n54.425|58.352|7.2| select id from cd_range\n45.289|52.330|15.5| SELECT last_value FROM rt_tran_id_seq\n39.658|82.949|109.2| SELECT last_value FROM op_sched_id_seq\n42.158|68.189|61.7| select card_id,router_id from rt_valid\n\n\nHas anyone else seen gradual performance degradation like this? Would \nupgrading to Postgres 8 help? Any other thoughts on directions for \ntroubleshooting this?\n\nThanks... \n\n\n",
"msg_date": "Fri, 21 Apr 2006 11:24:15 -0700",
"msg_from": "\"Greg Stumph\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Worsening performance with 7.4 on flash-based system"
},
{
"msg_contents": "Well, since I got no response at all to this message, I can only assume that \nI've asked the question in an insufficient way, or else that no one has \nanything to offer on our problem.\n\nThis was my first post to the list, so if there's a better way I should be \nasking this, or different data I should provide, hopefully someone will let \nme know...\n\nThanks,\nGreg\n\n\"Greg Stumph\" <[email protected]> wrote in message \nnews:[email protected]...\n> We are experiencing gradually worsening performance in PostgreSQL 7.4.7, \n> on a system with the following specs:\n> Linux OS (Fedora Core 1, 2.4 kernal)\n> Flash file system (2 Gig, about 80% full)\n> 256 Meg RAM\n> 566 MHz Celeron CPU\n>\n> We use Orbit 2.9.8 to access PostGres. The database contains 62 tables.\n>\n> When the system is running with a fresh copy of the database, performance \n> is fine. At its worst, we are seeing fairly simple SELECT queries taking \n> up to 1 second to execute. When these queries are run in a loop, the loop \n> can take up to 30 seconds to execute, instead of the 2 seconds or so that \n> we would expect.\n>\n> VACUUM FULL ANALYZE helps, but doesn't seem to eliminate the problem.\n>\n> The following table show average execution time in \"bad\" performance mode \n> in the first column, execution time after VACUUM ANALYZE in the second \n> column, and % improvement (or degradation?) in the third. The fourth \n> column show the query that was executed.\n>\n> 741.831|582.038|-21.5| ^IDECLARE table_cursor\n> 170.065|73.032|-57.1| FETCH ALL in table_cursor\n> 41.953|45.513|8.5| CLOSE table_cursor\n> 61.504|47.374|-23.0| SELECT last_value FROM pm_id_seq\n> 39.651|46.454|17.2| select id from la_looprunner\n> 1202.170|265.316|-77.9| select id from rt_tran\n> 700.669|660.746|-5.7| ^IDECLARE my_tran_load_cursor\n> 1192.182|47.258|-96.0| FETCH ALL in my_tran_load_cursor\n> 181.934|89.752|-50.7| CLOSE my_tran_load_cursor\n> 487.285|873.474|79.3| ^IDECLARE my_get_router_cursor\n> 51.543|69.950|35.7| FETCH ALL in my_get_router_cursor\n> 48.312|74.061|53.3| CLOSE my_get_router_cursor\n> 814.051|1016.219|24.8| SELECT $1 = 'INSERT'\n> 57.452|78.863|37.3| select id from op_sched\n> 48.010|117.409|144.6| select short_name, long_name from la_loopapp\n> 54.425|58.352|7.2| select id from cd_range\n> 45.289|52.330|15.5| SELECT last_value FROM rt_tran_id_seq\n> 39.658|82.949|109.2| SELECT last_value FROM op_sched_id_seq\n> 42.158|68.189|61.7| select card_id,router_id from rt_valid\n>\n>\n> Has anyone else seen gradual performance degradation like this? Would \n> upgrading to Postgres 8 help? Any other thoughts on directions for \n> troubleshooting this?\n>\n> Thanks...\n> \n\n\n",
"msg_date": "Fri, 28 Apr 2006 11:36:21 -0700",
"msg_from": "\"Greg Stumph\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Worsening performance with 7.4 on flash-based system"
},
{
"msg_contents": "Usually when simple queries take a long time to run, it's the system \ntables (pg_*) that have become bloated and need vacuuming. But that's \njust random guess on my part w/o my detailed info.\n\n\nGreg Stumph wrote:\n> Well, since I got no response at all to this message, I can only assume that \n> I've asked the question in an insufficient way, or else that no one has \n> anything to offer on our problem.\n> \n> This was my first post to the list, so if there's a better way I should be \n> asking this, or different data I should provide, hopefully someone will let \n> me know...\n> \n> Thanks,\n> Greg\n> \n> \"Greg Stumph\" <[email protected]> wrote in message \n> news:[email protected]...\n>> We are experiencing gradually worsening performance in PostgreSQL 7.4.7, \n>> on a system with the following specs:\n>> Linux OS (Fedora Core 1, 2.4 kernal)\n>> Flash file system (2 Gig, about 80% full)\n>> 256 Meg RAM\n>> 566 MHz Celeron CPU\n>>\n>> We use Orbit 2.9.8 to access PostGres. The database contains 62 tables.\n>>\n>> When the system is running with a fresh copy of the database, performance \n>> is fine. At its worst, we are seeing fairly simple SELECT queries taking \n>> up to 1 second to execute. When these queries are run in a loop, the loop \n>> can take up to 30 seconds to execute, instead of the 2 seconds or so that \n>> we would expect.\n>>\n>> VACUUM FULL ANALYZE helps, but doesn't seem to eliminate the problem.\n>>\n>> The following table show average execution time in \"bad\" performance mode \n>> in the first column, execution time after VACUUM ANALYZE in the second \n>> column, and % improvement (or degradation?) in the third. The fourth \n>> column show the query that was executed.\n>>\n>> 741.831|582.038|-21.5| ^IDECLARE table_cursor\n>> 170.065|73.032|-57.1| FETCH ALL in table_cursor\n>> 41.953|45.513|8.5| CLOSE table_cursor\n>> 61.504|47.374|-23.0| SELECT last_value FROM pm_id_seq\n>> 39.651|46.454|17.2| select id from la_looprunner\n>> 1202.170|265.316|-77.9| select id from rt_tran\n>> 700.669|660.746|-5.7| ^IDECLARE my_tran_load_cursor\n>> 1192.182|47.258|-96.0| FETCH ALL in my_tran_load_cursor\n>> 181.934|89.752|-50.7| CLOSE my_tran_load_cursor\n>> 487.285|873.474|79.3| ^IDECLARE my_get_router_cursor\n>> 51.543|69.950|35.7| FETCH ALL in my_get_router_cursor\n>> 48.312|74.061|53.3| CLOSE my_get_router_cursor\n>> 814.051|1016.219|24.8| SELECT $1 = 'INSERT'\n>> 57.452|78.863|37.3| select id from op_sched\n>> 48.010|117.409|144.6| select short_name, long_name from la_loopapp\n>> 54.425|58.352|7.2| select id from cd_range\n>> 45.289|52.330|15.5| SELECT last_value FROM rt_tran_id_seq\n>> 39.658|82.949|109.2| SELECT last_value FROM op_sched_id_seq\n>> 42.158|68.189|61.7| select card_id,router_id from rt_valid\n>>\n>>\n>> Has anyone else seen gradual performance degradation like this? Would \n>> upgrading to Postgres 8 help? Any other thoughts on directions for \n>> troubleshooting this?\n>>\n>> Thanks...\n>>\n> \n> \n",
"msg_date": "Sat, 29 Apr 2006 13:59:27 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worsening performance with 7.4 on flash-based system"
},
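One cheap way to test that theory on the 7.4 box, since the system catalogs are vacuumed like any other table; the choice of catalogs below is just the usual suspects for temp-table-heavy or DDL-heavy workloads, and it needs to run as a superuser:

    VACUUM VERBOSE pg_catalog.pg_class;
    VACUUM VERBOSE pg_catalog.pg_attribute;
    VACUUM VERBOSE pg_catalog.pg_depend;

If the VERBOSE output shows the catalogs occupying far more pages than their live row counts justify, a one-off VACUUM FULL on them during a quiet window (it takes exclusive locks) should bring the simple queries back to their original speed.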
{
"msg_contents": "On 4/29/06, Greg Stumph <[email protected]> wrote:\n> Well, since I got no response at all to this message, I can only assume that\n> I've asked the question in an insufficient way, or else that no one has\n> anything to offer on our problem.\n>\n> This was my first post to the list, so if there's a better way I should be\n> asking this, or different data I should provide, hopefully someone will let\n> me know...\n>\n> Thanks,\n> Greg\n>\n> \"Greg Stumph\" <[email protected]> wrote in message\n> news:[email protected]...\n> > We are experiencing gradually worsening performance in PostgreSQL 7.4.7,\n> > on a system with the following specs:\n> > Linux OS (Fedora Core 1, 2.4 kernal)\n> > Flash file system (2 Gig, about 80% full)\n> > 256 Meg RAM\n> > 566 MHz Celeron CPU\n> >\n> > We use Orbit 2.9.8 to access PostGres. The database contains 62 tables.\n> >\n> > When the system is running with a fresh copy of the database, performance\n> > is fine. At its worst, we are seeing fairly simple SELECT queries taking\n> > up to 1 second to execute. When these queries are run in a loop, the loop\n> > can take up to 30 seconds to execute, instead of the 2 seconds or so that\n> > we would expect.\n\nIf you're inserting/updating/deleting a table or tables heavily, then\nyou'll need to vacuum it a lot more often than a reasonably static\ntable. Are you running contrib/autovacuum at all? PG 8.0 and above\nhave autovacuum built in but 7.4.x needs to run the contrib version.\n\nPS - the latest 7.4 version is .12 - see\nhttp://www.postgresql.org/docs/7.4/interactive/release.html for what\nhas changed (won't be much in performance terms but may fix data-loss\nbugs).\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Sun, 30 Apr 2006 09:51:44 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worsening performance with 7.4 on flash-based system"
},
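If running contrib's pg_autovacuum on the appliance is not practical, a cron job that vacuums the hot tables every few minutes is a low-tech substitute; the table names below are simply taken from Greg's timing list (which tables are actually update-heavy is a guess) and the 10-minute interval is only a starting point to tune:

    # vacuum/analyze the most frequently updated tables every 10 minutes
    */10 * * * *  vacuumdb -z -t rt_tran yourdb && vacuumdb -z -t op_sched yourdb

Plain VACUUM (not FULL) does not lock out readers or writers, but on a flash filesystem it is worth watching how much extra write traffic this generates.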
{
"msg_contents": "On Fri, 2006-04-28 at 13:36, Greg Stumph wrote:\n> Well, since I got no response at all to this message, I can only assume that \n> I've asked the question in an insufficient way, or else that no one has \n> anything to offer on our problem.\n> \n> This was my first post to the list, so if there's a better way I should be \n> asking this, or different data I should provide, hopefully someone will let \n> me know...\n\nI'd pick one particular case and do explain analyze on it both right\nafter a reload, after running for a while, and after a vacuum analyze.\n\nAlso, do a vacuum verbose on the database and post the output of that\nwhen the system's slowed down.\n\nDo you make a lot of temp tables? Run a lot of DDL? I don't think we\nhave enough information to make a real informed decision, but I'm not\nsure what questions to ask to find out where the bottleneck is...\n\nAlso, this could be the flash controller / card combo causing problems. \nDo you start with a freshly formatted card at the beginning? I know\nthat flash controllers randomize the part of the card that gets written\nto so that you don't kill one part of it early due to writing on just on\npart. Could be that as the controller maps the card behind the scenes,\nthe access gets slower on the lower level, and there's nothing\nPostgreSQL can do about it.\n\nCan you tell us what your usage patterns are in a bit more detail?\n",
"msg_date": "Mon, 01 May 2006 10:55:29 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worsening performance with 7.4 on flash-based system"
}
] |
[
{
"msg_contents": "Your numbers seem quite ok considering the number of disks. We also get\na 256Mb battery backed cache module with it, so I'm looking forward to\ntesting the write performance (first using ext3, then xfs). If I get the\nenough time to test it, I'll test both raid 0+1 and raid 5\nconfigurations although I trust raid 0+1 more.\n\nAnd no, it's not the cheapest way to get storage - but it's only half as\nexpensive as the other option: an EVA4000, which we're gonna have to go\nfor if we(they) decide to stay in bed with a proprietary database. With\npostgres we don't need replication on SAN level (using slony) so the MSA\n1500 would be sufficient, and that's a good thing (price wise) as we're\ngonna need two. OTOH, the EVA4000 will not give us mirroring so either\nway, we're gonna need two of whatever system we go for. Just hoping the\nMSA 1500 is reliable as well...\n\nSupport will hopefully not be a problem for us as we have a local\ncompany providing support, they're also the ones setting it up for us so\nat least we'll know right away if they're compentent or not :)\n\nRegards,\nMikael\n\n\n-----Original Message-----\nFrom: Alex Hayward [mailto:[email protected]] On Behalf Of\nAlex Hayward\nSent: den 21 april 2006 17:25\nTo: Mikael Carneholm\nCc: Pgsql performance\nSubject: Re: [PERFORM] Hardware: HP StorageWorks MSA 1500\n\nOn Thu, 20 Apr 2006, Mikael Carneholm wrote:\n\n> We're going to get one for evaluation next week (equipped with dual \n> 2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them, \n> performance wise?\n\nWe (Seatbooker) use one. It works well enough. Here's a sample bonnie\noutput:\n\n -------Sequential Output-------- ---Sequential Input--\n--Random--\n -Per Char- --Block--- -Rewrite-- -Per Char- --Block---\n--Seeks---\n Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU\n/sec %CPU\n\n 16384 41464 30.6 41393 10.0 16287 3.7 92433 83.2 119608 18.3\n674.0 0.8\n\nwhich is hardly bad (on a four 15kRPM disk RAID 10 with 2Gbps FC).\nSequential scans on a table produce about 40MB/s of IO with the 'disk'\nsomething like 60-70% busy according to FreeBSD's systat.\n\nHere's diskinfo -cvt output on a not quite idle system:\n\n/dev/da1\n 512 # sectorsize\n 59054899200 # mediasize in bytes (55G)\n 115341600 # mediasize in sectors\n 7179 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nI/O command overhead:\n time to read 10MB block 0.279395 sec = 0.014\nmsec/sector\n time to read 20480 sectors 11.864934 sec = 0.579\nmsec/sector\n calculated command overhead = 0.566\nmsec/sector\n\nSeek times:\n Full stroke: 250 iter in 0.836808 sec = 3.347 msec\n Half stroke: 250 iter in 0.861196 sec = 3.445 msec\n Quarter stroke: 500 iter in 1.415700 sec = 2.831 msec\n Short forward: 400 iter in 0.586330 sec = 1.466 msec\n Short backward: 400 iter in 1.365257 sec = 3.413 msec\n Seq outer: 2048 iter in 1.184569 sec = 0.578 msec\n Seq inner: 2048 iter in 1.184158 sec = 0.578 msec\nTransfer rates:\n outside: 102400 kbytes in 1.367903 sec = 74859\nkbytes/sec\n middle: 102400 kbytes in 1.472451 sec = 69544\nkbytes/sec\n inside: 102400 kbytes in 1.521503 sec = 67302\nkbytes/sec\n\n\nIt (or any FC SAN, for that matter) isn't an especially cheap way to get\nstorage. You don't get much option if you have an HP blade enclosure,\nthough.\n\nHP's support was poor. 
Their Indian call-centre seems not to know much\nabout them and spectacularly failed to tell us if and how we could\nconnect this (with the 2/3-port FC hub option) to two of our blade\nservers, one of which was one of the 'half-height' ones which require an\narbitrated loop.\nWe ended up buying a FC switch.\n\n",
"msg_date": "Fri, 21 Apr 2006 23:20:58 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
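For a quick cross-check of the raw sequential rate against bonnie's numbers once the MSA arrives, a plain dd run is often enough; the device path, mount point and sizes below are placeholders for whatever the array presents to the host:

    # sequential read straight off the block device
    dd if=/dev/sda of=/dev/null bs=1M count=4096

    # sequential write through the filesystem, on a scratch file outside the data directory
    dd if=/dev/zero of=/some/mountpoint/ddtest bs=1M count=4096 && sync

Rates well under the ~200 MB/s a single 2 Gbit FC link can carry point at the host or filesystem side; rates pinned near 200 MB/s suggest the link itself is the ceiling, which is the point raised in the follow-ups below.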
{
"msg_contents": "Mikael Carneholm wrote:\n> Your numbers seem quite ok considering the number of disks. We also get\n> a 256Mb battery backed cache module with it, so I'm looking forward to\n> testing the write performance (first using ext3, then xfs). If I get the\n> enough time to test it, I'll test both raid 0+1 and raid 5\n> configurations although I trust raid 0+1 more.\n> \n> And no, it's not the cheapest way to get storage - but it's only half as\n> expensive as the other option: an EVA4000, which we're gonna have to go\n> for if we(they) decide to stay in bed with a proprietary database. With\n> postgres we don't need replication on SAN level (using slony) so the MSA\n> 1500 would be sufficient, and that's a good thing (price wise) as we're\n> gonna need two. OTOH, the EVA4000 will not give us mirroring so either\n> way, we're gonna need two of whatever system we go for. Just hoping the\n> MSA 1500 is reliable as well...\n> \n> Support will hopefully not be a problem for us as we have a local\n> company providing support, they're also the ones setting it up for us so\n> at least we'll know right away if they're compentent or not :)\n> \n\nIf I'm reading the original post correctly, the biggest issue is likely \nto be that the 14 disks on each 2Gbit fibre channel will be throttled to \n200Mb/s by the channel , when in fact you could expect (in RAID 10 \narrangement) to get about 7 * 70 Mb/s = 490 Mb/s.\n\nCheers\n\nMark\n\n",
"msg_date": "Mon, 24 Apr 2006 13:29:17 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "On Mon, 24 Apr 2006, Mark Kirkwood wrote:\n\n> If I'm reading the original post correctly, the biggest issue is likely\n> to be that the 14 disks on each 2Gbit fibre channel will be throttled to\n> 200Mb/s by the channel , when in fact you could expect (in RAID 10\n> arrangement) to get about 7 * 70 Mb/s = 490 Mb/s.\n\nThe two controllers and two FC switches/hubs are intended for redundancy,\nrather than performance, so there's only one 2Gbit channel. I don't know\nif its possible to use both in parallel to get better performance.\n\nI believe it's possible to join two or more FC ports on the switch\ntogether, but as there's only port going to the controller internally this\npresumably wouldn't help.\n\nThere are two SCSI U320 buses, with seven bays on each. I don't know what\nthe overhead of SCSI is, but you're obviously not going to get 490MB/s\nfor each set of seven even if the FC could do it.\n\nOf course your database may not spend all day doing sequential scans one\nat a time over 14 disks, so it doesn't necessarily matter...\n\n",
"msg_date": "Mon, 24 Apr 2006 15:36:24 +0100 (BST)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "I'd be interested in those numbers once you get them, especially for\next3. We just picked up an HP MSA1500cs with the MSA50 sled, and I am\ncurious as to how best to configure it for Postgres. My server is the\nHP DL585 (quad, dual-core Opteron, 16GB RAM) with 4 HD bays run by a HP\nSmartArray 5i controller. I have 15 10K 300GB drives and 1 15K 150GB\ndrive (don't ask how that happened).\n\nThe database is going to be very datawarehouse-ish (bulk loads, lots of\nqueries) and has the potential to grow very large (1+ TB). Plus, with\nthat much data, actual backups won't be easy, so I'll be relying on\nRAID+watchfullness to keep me safe, at least through the prototype\nstages.\n\nHow would/do you guys set up your MSA1x00 with 1 drive sled? RAID10 vs\nRAID5 across 10+ disks? Here's what I was thinking (ext3 across\neverything):\n\nDirect attached:\n 2x300GB RAID10 - OS + ETL staging area\n 2x300GB RAID10 - log + indexes\nMSA1500:\n 10x300GB RAID10 + 1x300GB hot spare - tablespace\n\nI'm not quite sure what to do with the 15K/150GB drive, since it is a\nsingleton. I'm also planning on giving all the 256MB MSA1500 cache to\nreads, although I might change it for the batch loads to see if it\nspeeds things up.\n\nAlso, unfortunately, the MSA1500 only has a single SCSI bus, which\ncould significantly impact throughput, but we got a discount, so\nhopefully we can get another bus module in the near future and pop it\nin.\n\nAny comments are appreciated,\n-Mike\n\n",
"msg_date": "25 Apr 2006 18:42:04 -0700",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
}
] |
[
{
"msg_contents": "Hi List\nI have maybe an easy question but i do not find an answer, i have this\nSQL query:\n\nSELECT geom,group,production_facs FROM south_america\n\t\tWHERE municipio = ''\n\t\t\tOR municipio = 'ACRE'\n\t\t\tOR municipio = 'ADJUNTAS'\n\t\t\tOR municipio = 'AGUADA'\n\nThe performance of this query is quite worse as longer it gets, its\npossible that this query gets over 20 to 30 OR comparisons, but then\nthe performance is really worse, is it possible to speed it up?\nThanks\nClemens\n\n",
"msg_date": "22 Apr 2006 14:34:13 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Easy question"
},
{
"msg_contents": "SELECT geom, group, production_facs FROM south_america\n\nWHERE UPPER(municipio) IN ('ACRE', 'ADJUNTAS', 'AGUADA');\n\n\n<[email protected]> wrote in message \nnews:[email protected]...\n> Hi List\n> I have maybe an easy question but i do not find an answer, i have this\n> SQL query:\n>\n> SELECT geom,group,production_facs FROM south_america\n> WHERE municipio = ''\n> OR municipio = 'ACRE'\n> OR municipio = 'ADJUNTAS'\n> OR municipio = 'AGUADA'\n>\n> The performance of this query is quite worse as longer it gets, its\n> possible that this query gets over 20 to 30 OR comparisons, but then\n> the performance is really worse, is it possible to speed it up?\n> Thanks\n> Clemens\n> \n\n\n",
"msg_date": "Tue, 25 Apr 2006 08:55:29 -0700",
"msg_from": "\"codeWarrior\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
},
{
"msg_contents": "Thanks,\nBut the performance is the same just the formating is more simple.\nGreets,\nBert\n\n",
"msg_date": "26 Apr 2006 18:26:07 -0700",
"msg_from": "\"Bert\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
},
{
"msg_contents": "You have a functional index on UPPER(municipo), right? How large is the\ntable?\n\nOn 26 Apr 2006 18:26:07 -0700, Bert <[email protected]> wrote:\n>\n> Thanks,\n> But the performance is the same just the formating is more simple.\n> Greets,\n> Bert\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nYou have a functional index on UPPER(municipo), right? How large is the table?On 26 Apr 2006 18:26:07 -0700, Bert <\[email protected]> wrote:Thanks,But the performance is the same just the formating is more simple.\nGreets,Bert---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster",
"msg_date": "Sat, 29 Apr 2006 17:53:10 -0400",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
},
{
"msg_contents": "No i didn't defined any indexes for the table, I know the performance\nwill increase with an index, but this was not my question. My question\nfurthermore belongs to the access mode of the SQL statement.\nFurthermore i do not understand why the Upper function should increase\nthe performance.\nThe table has approximately 20.000 entries.\nIs it the best way to use a B-Tree index on the municipio column in\nthis case or are there better solution to do this.\n\n",
"msg_date": "29 Apr 2006 17:07:59 -0700",
"msg_from": "\"Bert\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
},
{
"msg_contents": "I can't speak to \"the access mode of the SQL statement\" but it looks\nlike the index that you are looking for is an index on an expression,\nas shown in:\n\nhttp://www.postgresql.org/docs/8.0/static/indexes-expressional.html\n\nYou probably want a btree on UPPER(municipo), if that is the primary\nquery method for the column.\n\n",
"msg_date": "1 May 2006 04:35:20 -0700",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
},
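A concrete version of the expression index being suggested, reusing the table and columns from the original query; the index name is arbitrary, and "group" has to be quoted because it is a reserved word:

    CREATE INDEX south_america_upper_municipio_idx
        ON south_america (upper(municipio));
    ANALYZE south_america;

    SELECT geom, "group", production_facs
      FROM south_america
     WHERE upper(municipio) IN ('ACRE', 'ADJUNTAS', 'AGUADA');

The query must use exactly the same expression, upper(municipio), or the planner cannot use the index. With only ~20,000 rows the absolute gain may be small, but it scales far better than a 20-30 branch OR list evaluated against a sequential scan.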
{
"msg_contents": "You didn't mention version, but 8.1.x has bitmap index scans that might\ngreatly speed this up...\n\nOn Sat, Apr 22, 2006 at 02:34:13PM -0700, [email protected] wrote:\n> Hi List\n> I have maybe an easy question but i do not find an answer, i have this\n> SQL query:\n> \n> SELECT geom,group,production_facs FROM south_america\n> \t\tWHERE municipio = ''\n> \t\t\tOR municipio = 'ACRE'\n> \t\t\tOR municipio = 'ADJUNTAS'\n> \t\t\tOR municipio = 'AGUADA'\n> \n> The performance of this query is quite worse as longer it gets, its\n> possible that this query gets over 20 to 30 OR comparisons, but then\n> the performance is really worse, is it possible to speed it up?\n> Thanks\n> Clemens\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 2 May 2006 12:14:58 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
},
{
"msg_contents": "Bert wrote:\n> No i didn't defined any indexes for the table, I know the performance\n> will increase with an index, but this was not my question. My question\n> furthermore belongs to the access mode of the SQL statement.\n> Furthermore i do not understand why the Upper function should increase\n> the performance.\n\nThe index will have entries like:\n\nCHRIS\nBERT\nJOE\n\nand so on.\n\nIf you run a query like:\n\nselect * from table where UPPER(name) = 'CHRIS';\n\nIt's an easy match.\n\nIf you don't create an UPPER index, it has to do a comparison with each \nrow - so the index can't be used because postgres has to convert the \nfield to upper and then do the comparison.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 03 May 2006 16:21:23 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy question"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm just getting familiar with EXPLAIN ANALYZE output, so I'd like to get\nsome help to identify which one of the following queries would be better:\n\nteste=# EXPLAIN ANALYZE SELECT aa.avaliacao_id, MAX(aa.avaliacao_versao) AS\navaliacao_versao, a.avaliacao_nome, aa.editar\nteste-# FROM teo01tb104_areas_avaliacoes aa, teo01tb201_avaliacoes a\nteste-# WHERE aa.avaliacao_id=10\nteste-# AND aa.avaliacao_id=a.avaliacao_id\nteste-# GROUP BY aa.avaliacao_id, a.avaliacao_nome, aa.editar;\n QUERY\nPLAN \n----------------------------------------------------------------------------\n-----------------------------------------------------------------------\n HashAggregate (cost=45.93..45.94 rows=1 width=52) (actual\ntime=0.466..0.469 rows=1 loops=1)\n -> Nested Loop (cost=4.04..45.66 rows=27 width=52) (actual\ntime=0.339..0.356 rows=1 loops=1)\n -> Bitmap Heap Scan on teo01tb201_avaliacoes a (cost=2.01..8.49\nrows=3 width=47) (actual time=0.219..0.223 rows=1 loops=1)\n Recheck Cond: (avaliacao_id = 10)\n -> Bitmap Index Scan on teo01tb201_avaliacoes_pk\n(cost=0.00..2.01 rows=3 width=0) (actual time=0.166..0.166 rows=1 loops=1)\n Index Cond: (avaliacao_id = 10)\n -> Bitmap Heap Scan on teo01tb104_areas_avaliacoes aa\n(cost=2.03..12.30 rows=9 width=9) (actual time=0.060..0.066 rows=1 loops=1)\n Recheck Cond: (avaliacao_id = 10)\n -> Bitmap Index Scan on teo01tb104_areas_avaliacoes_pk\n(cost=0.00..2.03 rows=9 width=0) (actual time=0.040..0.040 rows=1 loops=1)\n Index Cond: (avaliacao_id = 10)\n Total runtime: 1.339 ms\n(11 rows)\n\nteste=# SELECT a.avaliacao_id, a.avaliacao_versao, a.avaliacao_nome,\naa.editar\nteste-# FROM teo01tb201_avaliacoes a, teo01tb104_areas_avaliacoes aa\nteste-# WHERE a.avaliacao_id=10\nteste-# AND a.avaliacao_versao=(SELECT MAX(avaliacao_versao)\nteste(# FROM teo01tb201_avaliacoes\nteste(# WHERE avaliacao_id=10)\nteste-# AND a.avaliacao_id=aa.avaliacao_id;\n avaliacao_id | avaliacao_versao | avaliacao_nome | editar\n--------------+------------------+----------------+--------\n 10 | 1 | Teste | t\n(1 row)\n\nteste=# EXPLAIN ANALYZE SELECT a.avaliacao_id, a.avaliacao_versao,\na.avaliacao_nome, aa.editar\nteste-# FROM teo01tb201_avaliacoes a, teo01tb104_areas_avaliacoes aa\nteste-# WHERE a.avaliacao_id=10\nteste-# AND a.avaliacao_versao=(SELECT MAX(avaliacao_versao)\nteste(# FROM teo01tb201_avaliacoes\nteste(# WHERE avaliacao_id=10)\nteste-# AND a.avaliacao_id=aa.avaliacao_id;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n------------------------\n Nested Loop (cost=6.20..22.38 rows=9 width=52) (actual time=0.573..0.596\nrows=1 loops=1)\n InitPlan\n -> Result (cost=4.16..4.17 rows=1 width=0) (actual time=0.315..0.319\nrows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..4.16 rows=1 width=4) (actual\ntime=0.257..0.261 rows=1 loops=1)\n -> Index Scan Backward using teo01tb201_avaliacoes_pk on\nteo01tb201_avaliacoes (cost=0.00..12.48 rows=3 width=4) (actual\ntime=0.245..0.245 rows=1 loops=1)\n Index Cond: (avaliacao_id = 10)\n Filter: (avaliacao_versao IS NOT NULL)\n -> Index Scan using teo01tb201_avaliacoes_pk on teo01tb201_avaliacoes a\n(cost=0.00..5.83 rows=1 width=51) (actual time=0.410..0.420 rows=1 loops=1)\n Index Cond: ((avaliacao_id = 10) AND (avaliacao_versao = $1))\n -> Bitmap Heap Scan on teo01tb104_areas_avaliacoes aa (cost=2.03..12.30\nrows=9 width=5) (actual time=0.110..0.114 rows=1 loops=1)\n Recheck Cond: (avaliacao_id = 
10)\n -> Bitmap Index Scan on teo01tb104_areas_avaliacoes_pk\n(cost=0.00..2.03 rows=9 width=0) (actual time=0.074..0.074 rows=1 loops=1)\n Index Cond: (avaliacao_id = 10)\n Total runtime: 1.418 ms\n(15 rows)\n\n\nI think the 2nd would be better, since as the database grows the GROUP BY may\nbecome too costly. Is that right?\n\nRegards,\nBruno\n\n",
"msg_date": "Sun, 23 Apr 2006 17:57:40 -0300",
"msg_from": "\"Bruno Almeida do Lago\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "GROUP BY Vs. Sub SELECT"
},
{
"msg_contents": "\"Bruno Almeida do Lago\" <[email protected]> writes:\n> I'm just getting familiar with EXPLAIN ANALYZE output, so I'd like to get\n> some help to identify which one of the following queries would be better:\n\nWell, you're breaking one of the first laws of PG performance analysis,\nwhich is to not try to extrapolate the behavior on large tables from the\nbehavior on toy tables. You can't really see where the bottlenecks are\non a toy example, and what's more there's no reason to think that the\nplanner will use the same plan when presented with much larger tables.\nSo you need to load up a meaningful amount of data (don't forget to\nANALYZE afterward!) and then see what it does.\n\n> I think 2nd would be better, since when database grow up the GROUP BY may\n> become too costly. Is that right?\n\nThe two queries don't give the same answer, so asking which is faster\nis a bit irrelevant. (When there's more than one group, wouldn't the\nper-group MAXes be different?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Apr 2006 19:34:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY Vs. Sub SELECT "
},
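A sketch of the kind of bulk load Tom means, using generate_series (present in 8.0 and later) before re-running both EXPLAIN ANALYZE statements; the column list is trimmed to the ones the two queries touch, so this assumes the remaining columns are nullable or have defaults, and the row counts are arbitrary:

    INSERT INTO teo01tb201_avaliacoes (avaliacao_id, avaliacao_versao, avaliacao_nome)
    SELECT i, v, 'avaliacao ' || i
      FROM generate_series(1, 10000) AS i,
           generate_series(1, 20) AS v;

    ANALYZE teo01tb201_avaliacoes;
    -- repeat for teo01tb104_areas_avaliacoes, then compare the two EXPLAIN ANALYZE plans again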
{
"msg_contents": "OK! I totally understand what you said. I'll load this table with a\nsimulated data and see how PG deals with it.\n\nAbout the queries being different, yes, I'm sure they are :-) I did not\nmention that application is able to handle both. \n\nI'd like to get more info on EXPLAIN ANALYZE output... where can I read more\nabout it?\n\nThank you very much for your attention!!\n\nRegards,\nBruno\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Sunday, April 23, 2006 8:34 PM\nTo: Bruno Almeida do Lago\nCc: [email protected]\nSubject: Re: [PERFORM] GROUP BY Vs. Sub SELECT \n\n\"Bruno Almeida do Lago\" <[email protected]> writes:\n> I'm just getting familiar with EXPLAIN ANALYZE output, so I'd like to get\n> some help to identify which one of the following queries would be better:\n\nWell, you're breaking one of the first laws of PG performance analysis,\nwhich is to not try to extrapolate the behavior on large tables from the\nbehavior on toy tables. You can't really see where the bottlenecks are\non a toy example, and what's more there's no reason to think that the\nplanner will use the same plan when presented with much larger tables.\nSo you need to load up a meaningful amount of data (don't forget to\nANALYZE afterward!) and then see what it does.\n\n> I think 2nd would be better, since when database grow up the GROUP BY may\n> become too costly. Is that right?\n\nThe two queries don't give the same answer, so asking which is faster\nis a bit irrelevant. (When there's more than one group, wouldn't the\nper-group MAXes be different?)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 24 Apr 2006 11:36:18 -0300",
"msg_from": "\"Bruno Almeida do Lago\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GROUP BY Vs. Sub SELECT "
},
{
"msg_contents": "\n> I'd like to get more info on EXPLAIN ANALYZE output... where can I read more\n> about it?\n\nI believe this link has what you are looking for:\nhttp://www.postgresql.org/docs/8.1/interactive/performance-tips.html\n\n\nRegards,\n\nRichard Broersma Jr.\n",
"msg_date": "Mon, 24 Apr 2006 12:07:39 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY Vs. Sub SELECT "
},
{
"msg_contents": "On Mon, Apr 24, 2006 at 12:07:39PM -0700, Richard Broersma Jr wrote:\n> \n> > I'd like to get more info on EXPLAIN ANALYZE output... where can I read more\n> > about it?\n> \n> I believe this link has what you are looking for:\n> http://www.postgresql.org/docs/8.1/interactive/performance-tips.html\n\nhttp://www.pervasive-postgres.com/lp/newsletters/2006/Insights_postgres_Apr.asp#4\nmight also be worth your time to read...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 24 Apr 2006 16:00:18 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY Vs. Sub SELECT"
}
] |
[
{
"msg_contents": "I'm preparing for an upgrade from PostgreSQL 7.4.5 to 8.1.3, and I \nnoticed a potential performance issue.\n\nI have two servers, a dual proc Dell with raid 5 running PostgreSQL \n7.4, and a quad proc Dell with a storage array running PostgreSQL \n8.1. Both servers have identical postgresql.conf settings and were \nrestored from the same 7.4 backup. Almost everything is faster on the \n8.1 server (mostly due to hardware), except one thing...deletes from \ntables with many foreign keys pointing to them.\n\nI have table A with around 100,000 rows, that has foreign keys from \nabout 50 other tables pointing to it. Some of these other tables \n(table B, for example) have around 10 million rows.\n\nOn the 7.4 server, I can delete a single row from a table A in well \nunder a second (as expected). On the 8.1 server, it takes over a \nminute to delete. I tried all the usual stuff, recreating indexes, \nvacuum analyzing, explain analyze. Everything is identical between \nthe systems. If I hit ctrl-c while the slow delete was running on \n8.1, I repeatedly got the following message...\n\ndb=# delete from \"A\" where \"ID\" in ('6');\nCancel request sent\nERROR: canceling statement due to user request\nCONTEXT: SQL statement \"SELECT 1 FROM ONLY \"public\".\"B\" x WHERE \n\"A_ID\" = $1 FOR SHARE OF x\"\n\nIt looks to me like the \"SELECT ... FOR SHARE\" functionality in 8.1 \nis the culprit. Has anyone else run into this issue?\n\n\nWill Reese -- http://blog.rezra.com\n\n",
"msg_date": "Sun, 23 Apr 2006 21:41:14 -0500",
"msg_from": "Will Reese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow deletes in 8.1 when FKs are involved"
},
{
"msg_contents": "Will Reese <[email protected]> writes:\n> ... Both servers have identical postgresql.conf settings and were \n> restored from the same 7.4 backup. Almost everything is faster on the \n> 8.1 server (mostly due to hardware), except one thing...deletes from \n> tables with many foreign keys pointing to them.\n\nI think it's unquestionable that you have a bad FK plan in use on the\n8.1 server. Double check that you have suitable indexes on the\nreferencing (not referenced) columns, that you've ANALYZEd all the\ntables involved, and that you've started a fresh psql session (remember\nthe backend tends to cache FK plans for the life of the connection).\n\nIt might help to EXPLAIN ANALYZE one of the slow deletes --- 8.1 will\nbreak out the time spent in FK triggers, which would let you see which\none(s) are the culprit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 23 Apr 2006 23:32:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow deletes in 8.1 when FKs are involved "
},
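Tom's checklist in concrete form, with the object names taken from Will's description (table "A" referenced through "A_ID" in a large table "B"); the index name is arbitrary, and wrapping the EXPLAIN ANALYZE in a transaction keeps the test delete from sticking:

    -- the referencing column needs its own index; the FK constraint does not create one
    CREATE INDEX b_a_id_idx ON "B" ("A_ID");
    ANALYZE "A";
    ANALYZE "B";

    BEGIN;
    EXPLAIN ANALYZE DELETE FROM "A" WHERE "ID" = '6';
    ROLLBACK;

In 8.1 the EXPLAIN ANALYZE output ends with one "Trigger for constraint ..." line per foreign key, each with its own timing, so the referencing table whose check is eating the minute is identified immediately.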
{
"msg_contents": "On Sun, Apr 23, 2006 at 09:41:14PM -0500, Will Reese wrote:\n> I'm preparing for an upgrade from PostgreSQL 7.4.5 to 8.1.3, and I \n> noticed a potential performance issue.\n> \n> I have two servers, a dual proc Dell with raid 5 running PostgreSQL \n> 7.4, and a quad proc Dell with a storage array running PostgreSQL \n> 8.1. Both servers have identical postgresql.conf settings and were \n\nBTW, you'll want to tweak some things between the two .conf files,\nespecially if the 8.1.3 server has more memory. Faster drive array means\nyou probably want to tweak random_page_cost down.\n\nAlso, 8.1.3 has a lot of new config settings compared to 7.4.x; it'd\nprobably be best to take the default 8.1 config and tweak it, rather\nthan bringing the 7.4 config over.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 24 Apr 2006 12:09:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow deletes in 8.1 when FKs are involved"
},
{
"msg_contents": "I did double check for indexes on the referenced and referencing \ncolumns, and even though this database is restored and vacuum \nanalyzed nightly the issue remains. Using explain analyze in \npostgresql 8.1, I was able to see where the problem lies. For \nperformance reasons on our 7.4 server, we removed one of the 3 RI \ntriggers for some constraints (the RI trigger that performs the \nSELECT....FOR UPDATE to prevent modifications) and replaced it with a \ntrigger to just prevent deletes on this data indefinitely (the data \nnever gets deleted or updated in our app). This works great in \npostgresql 7.4 and nearly eliminated our performance issue, but when \nthat database is restored to postgresql 8.1 one of the remaining two \nRI triggers does not perform well at all when you try to delete from \nthat table (even though it's fine in postgresql 7.4). On the 8.1 \nserver I dropped the remaining two RI triggers, and added the \nconstraint to recreate the 3 RI triggers. After that the delete \nperformed fine. So it looks like the 7.4 RI triggers that carried \nover to the 8.1 server don't perform very well. I'm hoping that the \nSELECT...FOR SHARE functionality in 8.1 will allow us to re-add our \nconstraints and not suffer from the locking issues we had in \npostgresql 7.4.\n\nWill Reese -- http://blog.rezra.com\n\nOn Apr 23, 2006, at 10:32 PM, Tom Lane wrote:\n\n> Will Reese <[email protected]> writes:\n>> ... Both servers have identical postgresql.conf settings and were\n>> restored from the same 7.4 backup. Almost everything is faster on the\n>> 8.1 server (mostly due to hardware), except one thing...deletes from\n>> tables with many foreign keys pointing to them.\n>\n> I think it's unquestionable that you have a bad FK plan in use on the\n> 8.1 server. Double check that you have suitable indexes on the\n> referencing (not referenced) columns, that you've ANALYZEd all the\n> tables involved, and that you've started a fresh psql session \n> (remember\n> the backend tends to cache FK plans for the life of the connection).\n>\n> It might help to EXPLAIN ANALYZE one of the slow deletes --- 8.1 will\n> break out the time spent in FK triggers, which would let you see which\n> one(s) are the culprit.\n>\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 25 Apr 2006 19:02:37 -0500",
"msg_from": "Will Reese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow deletes in 8.1 when FKs are involved "
}
] |
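A note on the fix described in the thread above: dropping the 7.4-era RI triggers and re-adding the foreign key constraint, so that 8.1 builds its own triggers, amounts to something like the sketch below. The table and constraint names here are hypothetical, not taken from the thread, and the exact statements would depend on the schema.

    BEGIN;
    -- Drop the old constraint; this also removes the RI triggers that were
    -- carried over from the 7.4 dump/restore.
    ALTER TABLE child_table DROP CONSTRAINT child_parent_fkey;
    -- Re-add it so 8.1 creates fresh RI triggers of its own.
    ALTER TABLE child_table
        ADD CONSTRAINT child_parent_fkey
        FOREIGN KEY (parent_id) REFERENCES parent_table (parent_id)
        ON UPDATE CASCADE ON DELETE CASCADE;
    COMMIT;

    -- As Tom notes, 8.1's EXPLAIN ANALYZE then breaks out time per FK trigger:
    EXPLAIN ANALYZE DELETE FROM parent_table WHERE parent_id = 42;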
[
{
"msg_contents": "I have a table of ~ 41 000 rows with an index on the result of a function\napplied to a certain text column (the function basically removes \"neutral\"\nor common words like \"the\",\"on\",\"a\", etc. from the string).\n\nI then execute a query with a where clause on this function result with an\norder by on the function result also and a limit of 200. Logically the\nEXPLAIN shows that an index scan is used by the planner. A query returning\nthe maximum 200 number of records takes around 20 ms. What is surprising is\nthat the same query executed several times takes practically the same time,\nas if the result was not cached.\n\nTo test this, I then added a new text column to the same table, populating\nit with the function result mentioned above. Now, I create an index on this\nnew column and execute the same query as described above (order by on the\nnew column and limit 200). The execution plan is exactly the same, except\nthat the new index is used of course. The first execution time is similar,\ni.e. 20 ms approx., but the next executions of the same query take about 2\nms (i.e to say a 10 to 1 difference). So this time, it seems that the result\nis properly cached.\n\nCould the problem be that an index on a function result is not cached or\nless well cached ?\n\nThanks,\nPaul\n\nI have a table of ~ 41 000 rows with an index on the result of a function applied to a certain text column (the function basically removes \"neutral\" or common words like \"the\",\"on\",\"a\", etc. from the string).\nI then execute a query with a where clause on this function result with an order by on the function result also and a limit of 200. Logically the EXPLAIN shows that an index scan is used by the planner. A query returning the maximum 200 number of records takes around 20 ms. What is surprising is that the same query executed several times takes practically the same time, as if the result was not cached.\nTo test this, I then added a new text column to the same table, populating it with the function result mentioned above. Now, I create an index on this new column and execute the same query as described above (order by on the new column and limit 200). The execution plan is exactly the same, except that the new index is used of course. The first execution time is similar, \ni.e. 20 ms approx., but the next executions of the same query take about 2 ms (i.e to say a 10 to 1 difference). So this time, it seems that the result is properly cached.Could the problem be that an index on a function result is not cached or less well cached ? \nThanks,Paul",
"msg_date": "Mon, 24 Apr 2006 12:53:29 +0200",
"msg_from": "\"Paul Mackay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index on function less well cached than \"regular\" index ?"
},
{
"msg_contents": "\"Paul Mackay\" <[email protected]> writes:\n> ...\n> EXPLAIN shows that an index scan is used by the planner. A query returning\n> the maximum 200 number of records takes around 20 ms. What is surprising is\n> that the same query executed several times takes practically the same time,\n> as if the result was not cached.\n> ...\n> Could the problem be that an index on a function result is not cached or\n> less well cached ?\n\nI can't see how that would be. The index machinery has no idea what\nit's indexing, and the kernel disk cache even less.\n\nPerhaps the majority of the runtime is going somewhere else, like the\ninitial evaluation of the function value to compare against? Or maybe\nyou've found some inefficiency in the planner's handling of function\nindexes. Try comparing EXPLAIN ANALYZE output for the two cases to see\nif the discrepancy exists during query runtime, or if it's upstream at\nplan time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 Apr 2006 18:11:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on function less well cached than \"regular\" index ? "
}
] |
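Tom's suggestion, comparing EXPLAIN ANALYZE output for the functional index against the plain column index, could be tried with something like the sketch below. The function name strip_noise() and the table and column names are made up for illustration; only the pattern matters, and the function must be declared IMMUTABLE for the functional index to exist at all.

    -- Index on the function result.
    CREATE INDEX t_strip_idx ON t (strip_noise(txt));
    -- Index on a precomputed column holding the same value.
    CREATE INDEX t_stripped_idx ON t (txt_stripped);

    -- Compare where the time goes: planning vs. execution.
    EXPLAIN ANALYZE
    SELECT * FROM t
    WHERE strip_noise(txt) >= 'foo'
    ORDER BY strip_noise(txt)
    LIMIT 200;

    EXPLAIN ANALYZE
    SELECT * FROM t
    WHERE txt_stripped >= 'foo'
    ORDER BY txt_stripped
    LIMIT 200;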
[
{
"msg_contents": "> If I'm reading the original post correctly, the biggest issue is \n> likely to be that the 14 disks on each 2Gbit fibre channel will be \n> throttled to 200Mb/s by the channel , when in fact you could expect \n> (in RAID 10\n> arrangement) to get about 7 * 70 Mb/s = 490 Mb/s.\n\n> The two controllers and two FC switches/hubs are intended for\nredundancy, rather than performance, so there's only one 2Gbit channel.\nI > don't know if its possible to use both in parallel to get better\nperformance.\n\n> I believe it's possible to join two or more FC ports on the switch\ntogether, but as there's only port going to the controller internally\nthis presumably wouldn't help.\n\n> There are two SCSI U320 buses, with seven bays on each. I don't know\nwhat the overhead of SCSI is, but you're obviously not going to get >\n490MB/s for each set of seven even if the FC could do it.\n\n\nDarn. I was really looking forward to ~500Mb/s :(\n\n\n> Of course your database may not spend all day doing sequential scans\none at a time over 14 disks, so it doesn't necessarily matter...\n\n\nThat's probably true, but *knowing* that the max seq scan speed is that\nhigh gives you some confidence (true or fake) that the hardware will be\nsufficient the next 2 years or so. So, if dual 2GBit FC:s still don't\ndeliver more than 200Mb/s, what does?\n\n-Mikael\n",
"msg_date": "Mon, 24 Apr 2006 22:38:12 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "Mikael Carneholm wrote:\n\n> \n>> There are two SCSI U320 buses, with seven bays on each. I don't know\n> what the overhead of SCSI is, but you're obviously not going to get >\n> 490MB/s for each set of seven even if the FC could do it.\n> \n\nYou should be able to get close to 300Mb/s on each SCSI bus - provided \nthe PCI bus on the motherboard is 64-bit and runs at 133Mhz or better \n(64-bit and 66Mhz give you a 524Mb/s limit).\n\n> \n>> Of course your database may not spend all day doing sequential scans\n> one at a time over 14 disks, so it doesn't necessarily matter...\n> \n\nYeah, it depends on the intended workload, but at some point most \ndatabases end up IO bound... so you really want to ensure the IO system \nis as capable as possible IMHO.\n\n> \n> That's probably true, but *knowing* that the max seq scan speed is that\n> high gives you some confidence (true or fake) that the hardware will be\n> sufficient the next 2 years or so. So, if dual 2GBit FC:s still don't\n> deliver more than 200Mb/s, what does?\n> \n\nMost modern PCI-X or PCIe RAID cards will do better than 200Mb/s (e.g. \n3Ware 9550SX will do ~800Mb/s).\n\nBy way of comparison my old PIII with a Promise TX4000 plus 4 IDE drives \nwill do 215Mb/s...so being throttled to 200Mb/s on modern hardware seems \nunwise to me.\n\nCheers\n\nMark\n",
"msg_date": "Tue, 25 Apr 2006 11:57:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "On Tue, 25 Apr 2006, Mark Kirkwood wrote:\n\n> Mikael Carneholm wrote:\n>\n> >\n> >> There are two SCSI U320 buses, with seven bays on each. I don't know\n> > what the overhead of SCSI is, but you're obviously not going to get >\n> > 490MB/s for each set of seven even if the FC could do it.\n> >\n>\n> You should be able to get close to 300Mb/s on each SCSI bus - provided\n> the PCI bus on the motherboard is 64-bit and runs at 133Mhz or better\n> (64-bit and 66Mhz give you a 524Mb/s limit).\n\nI've no idea if the MSA1500's controllers use PCI internally. Obviously\nthis argument applies to the PCI bus you plug your FC adapters in to,\nthough.\n\nAIUI it's difficult to get PCI to actually give you it's theoretical\nmaximum bandwidth. Those speeds are still a lot more than 200MB/s, though.\n\n> >> Of course your database may not spend all day doing sequential scans\n> > one at a time over 14 disks, so it doesn't necessarily matter...\n> >\n>\n> Yeah, it depends on the intended workload, but at some point most\n> databases end up IO bound... so you really want to ensure the IO system\n> is as capable as possible IMHO.\n\nIO bound doesn't imply IO bandwidth bound. 14 disks doing a 1ms seek\nfollowed by an 8k read over and over again is a bit over 100MB/s. Adding\nin write activity would make a difference, too, since it'd have to go to\nat least two disks. There are presumably hot spares, too.\n\nI still wouldn't really want to be limited to 200MB/s if I expected to use\na full set of 14 disks for active database data where utmost performance\nreally matters and where there may be some sequential scans going on,\nthough.\n\n> > That's probably true, but *knowing* that the max seq scan speed is that\n> > high gives you some confidence (true or fake) that the hardware will be\n> > sufficient the next 2 years or so. So, if dual 2GBit FC:s still don't\n> > deliver more than 200Mb/s, what does?\n> >\n>\n> Most modern PCI-X or PCIe RAID cards will do better than 200Mb/s (e.g.\n> 3Ware 9550SX will do ~800Mb/s).\n>\n> By way of comparison my old PIII with a Promise TX4000 plus 4 IDE drives\n> will do 215Mb/s...so being throttled to 200Mb/s on modern hardware seems\n> unwise to me.\n\nThough, of course, these won't do many of the things you can do with a SAN\n- like connect several computers, or split a single array in to two pieces\nand have two computers access them as if they were separate drives, or\nremotely shut down one database machine and then start up another using\nthe same disks and data. The number of IO operations per second they can\ndo is likely to be important, too...possibly more important.\n\nThere's 4GB FC, and so presumably 4GB SANs, but that's still not vast\nbandwidth. Using multiple FC ports is the other obvious way to do it with\na SAN. I haven't looked, but I suspect you'll need quite a budget to get\nthat...\n",
"msg_date": "Thu, 27 Apr 2006 15:05:10 +0100 (BST)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
},
{
"msg_contents": "Alex Hayward wrote:\n\n> \n> IO bound doesn't imply IO bandwidth bound. 14 disks doing a 1ms seek\n> followed by an 8k read over and over again is a bit over 100MB/s. Adding\n> in write activity would make a difference, too, since it'd have to go to\n> at least two disks. There are presumably hot spares, too.\n> \n\nVery true - if your workload is primarily random, ~100Mb/s may be enough \nbandwidth.\n\n> I still wouldn't really want to be limited to 200MB/s if I expected to use\n> a full set of 14 disks for active database data where utmost performance\n> really matters and where there may be some sequential scans going on,\n> though.\n> \n\nYeah - thats the rub, Data mining, bulk loads, batch updates, backups \n(restores....) often use significant bandwidth.\n\n> Though, of course, these won't do many of the things you can do with a SAN\n> - like connect several computers, or split a single array in to two pieces\n> and have two computers access them as if they were separate drives, or\n> remotely shut down one database machine and then start up another using\n> the same disks and data. The number of IO operations per second they can\n> do is likely to be important, too...possibly more important.\n>\n\nSAN flexibility is nice (when it works as advertised), the cost and \nperformance however, are the main detractors. On that note I don't \nrecall IO/s being anything special on most SAN gear I've seen (this \ncould have changed for later products I guess).\n\n> There's 4GB FC, and so presumably 4GB SANs, but that's still not vast\n> bandwidth. Using multiple FC ports is the other obvious way to do it with\n> a SAN. I haven't looked, but I suspect you'll need quite a budget to get\n> that...\n> \n\nYes - the last place I worked were looking at doing this ('multiple \nattachment' was the buzz word I think) - I recall it needed special \n(read extra expensive) switches and particular cards...\n\nCheers\n\nMark\n\n",
"msg_date": "Fri, 28 Apr 2006 14:30:36 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware: HP StorageWorks MSA 1500"
}
] |
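For reference, the back-of-envelope arithmetic behind the figures quoted in this thread works out roughly as follows. All numbers are approximate and only illustrate the orders of magnitude being discussed.

    Sequential:  7 drives x ~70 MB/s              ~ 490 MB/s per RAID 10 set
    FC channel:  2 Gbit/s / 8 bits per byte       ~ 250 MB/s raw, ~200 MB/s usable
    Random I/O:  14 drives x (8 kB per ~1 ms op)  ~ 14 x 8 MB/s ~ 112 MB/s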
[
{
"msg_contents": "Hi\n\n \n\nI have queries that use like operators and regex patterns to determine\nif an ip address is internal or external (this is against a table with\nsay 100 million distinct ip addresses).\n\n \n\nDoes the inet data type offer comparison/search performance benefits\nover plain text for ip addresses..\n\n\n\n\n\n\n\n\n\n\nHi\n \nI have queries that use like operators and regex patterns to\ndetermine if an ip address is internal or external (this is against a table\nwith say 100 million distinct ip addresses).\n \nDoes the inet data type offer comparison/search performance\nbenefits over plain text for ip addresses..",
"msg_date": "Mon, 24 Apr 2006 15:45:14 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ip address data type"
},
{
"msg_contents": "On Mon, Apr 24, 2006 at 03:45:14PM -0700, Sriram Dandapani wrote:\n> Hi\n> \n> I have queries that use like operators and regex patterns to determine\n> if an ip address is internal or external (this is against a table with\n> say 100 million distinct ip addresses).\n> \n> Does the inet data type offer comparison/search performance benefits\n> over plain text for ip addresses..\n\nWell, benchmark it and find out. :) But it'd be hard to be slower than\nlike or regex...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 24 Apr 2006 17:48:37 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ip address data type"
},
{
"msg_contents": "\nOn Apr 24, 2006, at 3:45 PM, Sriram Dandapani wrote:\n\n> Hi\n>\n>\n>\n> I have queries that use like operators and regex patterns to \n> determine if an ip address is internal or external (this is against \n> a table with say 100 million distinct ip addresses).\n>\n>\n>\n> Does the inet data type offer comparison/search performance \n> benefits over plain text for ip addresses..\nIt's probably better than text-based, but it's hard to be worse than \nregex and like.\n\nDepending on your exact needs http://pgfoundry.org/projects/ip4r/ may be\ninteresting, and I've also found pretty good behavior by mapping an IP\naddress onto a 2^31 offset integer.\n\nCheers,\n Steve\n",
"msg_date": "Mon, 24 Apr 2006 16:40:16 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ip address data type"
},
{
"msg_contents": "* Sriram Dandapani:\n\n> Does the inet data type offer comparison/search performance benefits\n> over plain text for ip addresses..\n\nQueries like \"host << '192.168.17.192/28'\" use an available index on\nthe host column. In theory, you could do this with LIKE and strings,\nbut this gets pretty messy and needs a lot of client-side logic.\n",
"msg_date": "Tue, 25 Apr 2006 21:22:16 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ip address data type"
}
] |
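A minimal sketch of the inet approach Florian describes above; the table, column, and network values are hypothetical and only illustrate the containment-style predicate replacing a LIKE/regex on text.

    CREATE TABLE hosts (
        addr inet NOT NULL
    );
    CREATE INDEX hosts_addr_idx ON hosts (addr);

    -- "Is this address inside the internal /28?" expressed as a containment
    -- test on an inet column, per the preceding message, instead of a
    -- LIKE pattern or regex against a text column.
    SELECT addr
    FROM hosts
    WHERE addr << '192.168.17.192/28';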
[
{
"msg_contents": "Hi all,\n\n I have the following running on postgresql version 7.4.2:\n\nCREATE SEQUENCE agenda_user_group_id_seq\nMINVALUE 1\nMAXVALUE 9223372036854775807\nCYCLE\nINCREMENT 1\nSTART 1;\n\nCREATE TABLE AGENDA_USERS_GROUPS\n(\n AGENDA_USER_GROUP_ID INT8\n CONSTRAINT pk_agndusrgrp_usergroup PRIMARY KEY\n DEFAULT NEXTVAL('agenda_user_group_id_seq'),\n USER_ID NUMERIC(10)\n CONSTRAINT fk_agenda_uid REFERENCES \nAGENDA_USERS (USER_ID)\n ON DELETE CASCADE\n NOT NULL,\n GROUP_ID NUMERIC(10)\n CONSTRAINT fk_agenda_gid REFERENCES \nAGENDA_GROUPS (GROUP_ID)\n ON DELETE CASCADE\n NOT NULL,\n CREATION_DATE DATE\n DEFAULT CURRENT_DATE,\n CONSTRAINT un_agndusrgrp_usergroup \nUNIQUE(USER_ID, GROUP_ID)\n);\n\nCREATE INDEX i_agnusrsgrs_userid ON AGENDA_USERS_GROUPS ( USER_ID );\nCREATE INDEX i_agnusrsgrs_groupid ON AGENDA_USERS_GROUPS ( GROUP_ID );\n\n\nWhen I execute:\n\nEXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups \nWHERE group_id = 9;\n\nit does a sequential scan and doesn't use the index and I don't \nunderstand why, any idea? I have the same in postgresql 8.1 and it uses \nthe index :-|\n\nThanks\n-- \nArnau\n",
"msg_date": "Tue, 25 Apr 2006 15:28:45 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "On 4/25/06, Arnau <[email protected]> wrote:\n> Hi all,\n>\n> I have the following running on postgresql version 7.4.2:\n>\n> CREATE SEQUENCE agenda_user_group_id_seq\n> MINVALUE 1\n> MAXVALUE 9223372036854775807\n> CYCLE\n> INCREMENT 1\n> START 1;\n>\n> CREATE TABLE AGENDA_USERS_GROUPS\n> (\n> AGENDA_USER_GROUP_ID INT8\n> CONSTRAINT pk_agndusrgrp_usergroup PRIMARY KEY\n> DEFAULT NEXTVAL('agenda_user_group_id_seq'),\n> USER_ID NUMERIC(10)\n> CONSTRAINT fk_agenda_uid REFERENCES\n> AGENDA_USERS (USER_ID)\n> ON DELETE CASCADE\n> NOT NULL,\n> GROUP_ID NUMERIC(10)\n> CONSTRAINT fk_agenda_gid REFERENCES\n> AGENDA_GROUPS (GROUP_ID)\n> ON DELETE CASCADE\n> NOT NULL,\n> CREATION_DATE DATE\n> DEFAULT CURRENT_DATE,\n> CONSTRAINT un_agndusrgrp_usergroup\n> UNIQUE(USER_ID, GROUP_ID)\n> );\n>\n> CREATE INDEX i_agnusrsgrs_userid ON AGENDA_USERS_GROUPS ( USER_ID );\n> CREATE INDEX i_agnusrsgrs_groupid ON AGENDA_USERS_GROUPS ( GROUP_ID );\n>\n>\n> When I execute:\n>\n> EXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups\n> WHERE group_id = 9;\n\nTry\n\nEXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups\nWHERE group_id::int8 = 9;\n\nor\n\nEXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups\nWHERE group_id = '9';\n\nand let us know what happens.\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 25 Apr 2006 23:34:24 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "chris smith wrote:\n> On 4/25/06, Arnau <[email protected]> wrote:\n> \n>>Hi all,\n>>\n>> I have the following running on postgresql version 7.4.2:\n>>\n>>CREATE SEQUENCE agenda_user_group_id_seq\n>>MINVALUE 1\n>>MAXVALUE 9223372036854775807\n>>CYCLE\n>>INCREMENT 1\n>>START 1;\n>>\n>>CREATE TABLE AGENDA_USERS_GROUPS\n>>(\n>> AGENDA_USER_GROUP_ID INT8\n>> CONSTRAINT pk_agndusrgrp_usergroup PRIMARY KEY\n>> DEFAULT NEXTVAL('agenda_user_group_id_seq'),\n>> USER_ID NUMERIC(10)\n>> CONSTRAINT fk_agenda_uid REFERENCES\n>>AGENDA_USERS (USER_ID)\n>> ON DELETE CASCADE\n>> NOT NULL,\n>> GROUP_ID NUMERIC(10)\n>> CONSTRAINT fk_agenda_gid REFERENCES\n>>AGENDA_GROUPS (GROUP_ID)\n>> ON DELETE CASCADE\n>> NOT NULL,\n>> CREATION_DATE DATE\n>> DEFAULT CURRENT_DATE,\n>> CONSTRAINT un_agndusrgrp_usergroup\n>>UNIQUE(USER_ID, GROUP_ID)\n>>);\n>>\n>>CREATE INDEX i_agnusrsgrs_userid ON AGENDA_USERS_GROUPS ( USER_ID );\n>>CREATE INDEX i_agnusrsgrs_groupid ON AGENDA_USERS_GROUPS ( GROUP_ID );\n>>\n>>\n>>When I execute:\n>>\n>>EXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups\n>>WHERE group_id = 9;\n> \n> \n> Try\n> \n> EXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups\n> WHERE group_id::int8 = 9;\n> \n> or\n> \n> EXPLAIN ANALYZE SELECT agenda_user_group_id FROM agenda_users_groups\n> WHERE group_id = '9';\n> \n> and let us know what happens.\n> \n\n\n The same, the table has 2547556 entries:\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups\nespsm_moviltelevision-# WHERE group_id::int8 = 9;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on agenda_users_groups (cost=0.00..59477.34 rows=12738 \nwidth=8) (actual time=3409.541..11818.794 rows=367026 loops=1)\n Filter: ((group_id)::bigint = 9)\n Total runtime: 13452.114 ms\n(3 filas)\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups\nespsm_moviltelevision-# WHERE group_id = '9';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675 \nwidth=8) (actual time=916.903..5763.830 rows=367026 loops=1)\n Filter: (group_id = 9::numeric)\n Total runtime: 7259.861 ms\n(3 filas)\n\nespsm_moviltelevision=# select count(*) from agenda_users_groups ;\n count\n---------\n 2547556\n\n\nThanks\n-- \nArnau\n",
"msg_date": "Tue, 25 Apr 2006 15:49:33 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "On 4/25/06, Arnau <[email protected]> wrote:\n> espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM\n> agenda_users_groups\n> espsm_moviltelevision-# WHERE group_id = '9';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675\n> width=8) (actual time=916.903..5763.830 rows=367026 loops=1)\n> Filter: (group_id = 9::numeric)\n> Total runtime: 7259.861 ms\n> (3 filas)\n\nArnau,\n\nWhy do you use a numeric instead of an integer/bigint??\n\nIIRC, there were a few problems with index on numeric column on older\nversion of PostgreSQL.\n\nYou can't change the type of a column with 7.4, so create a new\ninteger column then copy the values in this new column, drop the old\none, rename the new one. Run vacuum analyze and recreate your index.\n\nIt should work far better with an int.\n\nNote that you will have to update all the tables referencing this key...\n\n--\nGuillaume\n",
"msg_date": "Tue, 25 Apr 2006 16:31:51 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "On Tue, 2006-04-25 at 08:49, Arnau wrote:\n> chris smith wrote:\n> > On 4/25/06, Arnau <[email protected]> wrote:\n> > \n> >>Hi all,\n> >>\n> >> I have the following running on postgresql version 7.4.2:\n> >>\n> >>CREATE SEQUENCE agenda_user_group_id_seq\n> >>MINVALUE 1\n> >>MAXVALUE 9223372036854775807\n> >>CYCLE\n> >>INCREMENT 1\n> >>START 1;\n> >>\n> >>CREATE TABLE AGENDA_USERS_GROUPS\n> >>(\n> >> AGENDA_USER_GROUP_ID INT8\n> >> CONSTRAINT pk_agndusrgrp_usergroup PRIMARY KEY\n> >> DEFAULT NEXTVAL('agenda_user_group_id_seq'),\n> >> USER_ID NUMERIC(10)\n> >> CONSTRAINT fk_agenda_uid REFERENCES\n> >>AGENDA_USERS (USER_ID)\n> >> ON DELETE CASCADE\n> >> NOT NULL,\n> >> GROUP_ID NUMERIC(10)\n> >> CONSTRAINT fk_agenda_gid REFERENCES\n> >>AGENDA_GROUPS (GROUP_ID)\n> >> ON DELETE CASCADE\n> >> NOT NULL,\n> >> CREATION_DATE DATE\n> >> DEFAULT CURRENT_DATE,\n> >> CONSTRAINT un_agndusrgrp_usergroup\n> >>UNIQUE(USER_ID, GROUP_ID)\n> >>);\n> >>\n> >>CREATE INDEX i_agnusrsgrs_userid ON AGENDA_USERS_GROUPS ( USER_ID );\n> >>CREATE INDEX i_agnusrsgrs_groupid ON AGENDA_USERS_GROUPS ( GROUP_ID );\n\nSNIP\n\n> The same, the table has 2547556 entries:\n> \n> espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \n> agenda_users_groups\n> espsm_moviltelevision-# WHERE group_id::int8 = 9;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on agenda_users_groups (cost=0.00..59477.34 rows=12738 \n> width=8) (actual time=3409.541..11818.794 rows=367026 loops=1)\n> Filter: ((group_id)::bigint = 9)\n> Total runtime: 13452.114 ms\n> (3 filas)\n> \n> espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \n> agenda_users_groups\n> espsm_moviltelevision-# WHERE group_id = '9';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675 \n> width=8) (actual time=916.903..5763.830 rows=367026 loops=1)\n> Filter: (group_id = 9::numeric)\n> Total runtime: 7259.861 ms\n> (3 filas)\n> \n> espsm_moviltelevision=# select count(*) from agenda_users_groups ;\n> count\n> ---------\n> 2547556\n\nOK, a few points.\n\n1: 7.4.2 is WAY out of date for the 7.4 series. The 7.4 series, also\nit a bit out of date, and many issues in terms of performance have been\nenhanced in the 8.x series. You absolutely should update to the latest\n7.4 series, as there are known data loss bugs and other issues in the\n7.4.2 version.\n\n2: An index scan isn't always faster. In this instance, it looks like\nthe number of rows that match in the last version of your query is well\nover 10% of the rows. Assuming your average row takes up <10% or so of\na block, which is pretty common, then you're going to have to hit almost\nevery block anyway to get your data. So, an index scan is no win.\n\n3: To test whether or not an index scan IS a win, you can use the\nenable_xxx settings to prove it to yourself:\n\nset enable_seqscan = off;\nexplain analyze <your query here>;\n\nand compare. Note that the enable_seqscan = off thing is a sledge\nhammer, not a nudge, and generally should NOT be used in production. If\nan index scan is generally a win for you, but the database isn't using\nit, you might need to tune the database for your machine. note that you\nshould NOT tune your database based on a single query. 
You'll need to\nreach a compromise on your settings that makes all your queries run\nreasonably fast without the planner making insane decisions. One of the\nbetter postgresql tuning docs out there is the one at: \nhttp://www.varlena.com/GeneralBits/Tidbits/perf.html .\n\nGood luck.\n",
"msg_date": "Tue, 25 Apr 2006 10:05:02 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n\n> Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675 \n> width=8) (actual time=916.903..5763.830 rows=367026 loops=1)\n> Filter: (group_id = 9::numeric)\n> Total runtime: 7259.861 ms\n> (3 filas)\n\n> espsm_moviltelevision=# select count(*) from agenda_users_groups ;\n> count\n> ---------\n> 2547556\n\nSo the SELECT is fetching nearly 15% of the rows in the table. The\nplanner is doing *the right thing* to use a seqscan, at least for\nthis particular group_id value.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 11:22:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index "
},
{
"msg_contents": "Tom Lane wrote:\n> Arnau <[email protected]> writes:\n> \n> \n>> Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675 \n>>width=8) (actual time=916.903..5763.830 rows=367026 loops=1)\n>> Filter: (group_id = 9::numeric)\n>> Total runtime: 7259.861 ms\n>>(3 filas)\n> \n> \n>>espsm_moviltelevision=# select count(*) from agenda_users_groups ;\n>> count\n>>---------\n>> 2547556\n> \n> \n> So the SELECT is fetching nearly 15% of the rows in the table. The\n> planner is doing *the right thing* to use a seqscan, at least for\n> this particular group_id value.\n\n\nI have done the same tests on 8.1.0.\n\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups WHERE group_id = 9;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on agenda_users_groups (cost=2722.26..30341.78 \nrows=400361 width=8) (actual time=145.533..680.839 rows=367026 loops=1)\n Recheck Cond: (group_id = 9::numeric)\n -> Bitmap Index Scan on i_agnusrsgrs_groupid (cost=0.00..2722.26 \nrows=400361 width=0) (actual time=142.958..142.958 rows=367026 loops=1)\n Index Cond: (group_id = 9::numeric)\n Total runtime: 1004.966 ms\n(5 rows)\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups WHERE group_id::int8 = 9;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on agenda_users_groups (cost=0.00..60947.43 rows=12777 \nwidth=8) (actual time=457.963..2244.928 rows=367026 loops=1)\n Filter: ((group_id)::bigint = 9)\n Total runtime: 2571.496 ms\n(3 rows)\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups WHERE group_id::int8 = '9';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on agenda_users_groups (cost=0.00..60947.43 rows=12777 \nwidth=8) (actual time=407.193..2182.880 rows=367026 loops=1)\n Filter: ((group_id)::bigint = 9::bigint)\n Total runtime: 2506.998 ms\n(3 rows)\n\nespsm_moviltelevision=# select count(*) from agenda_users_groups ;\n count\n---------\n 2555437\n(1 row)\n\n\n Postgresql then uses the index, I don't understand why? in this \nserver I tried to tune the configuration, it's because of the tuning? \nBecause it's a newer version of postgresql?\n\n\nThanks for all the replies\n-- \nArnau\n",
"msg_date": "Tue, 25 Apr 2006 17:47:46 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "On Tue, 2006-04-25 at 10:47, Arnau wrote:\n> Tom Lane wrote:\n> > Arnau <[email protected]> writes:\n> > \n> > \n> >>espsm_moviltelevision=# select count(*) from agenda_users_groups ;\n> >> count\n> >>---------\n> >> 2547556\n> > \n> > \n> > So the SELECT is fetching nearly 15% of the rows in the table. The\n> > planner is doing *the right thing* to use a seqscan, at least for\n> > this particular group_id value.\n> \n> \n> I have done the same tests on 8.1.0.\n> \n> \n> espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \n> agenda_users_groups WHERE group_id = 9;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on agenda_users_groups (cost=2722.26..30341.78 \n> rows=400361 width=8) (actual time=145.533..680.839 rows=367026 loops=1)\n> Recheck Cond: (group_id = 9::numeric)\n> -> Bitmap Index Scan on i_agnusrsgrs_groupid (cost=0.00..2722.26 \n> rows=400361 width=0) (actual time=142.958..142.958 rows=367026 loops=1)\n> Index Cond: (group_id = 9::numeric)\n> Total runtime: 1004.966 ms\n> (5 rows)\n\nHow big are these individual records? I'm guessing a fairly good size,\nsince an index scan is winning.\n\n> espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \n> agenda_users_groups WHERE group_id::int8 = 9;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on agenda_users_groups (cost=0.00..60947.43 rows=12777 \n> width=8) (actual time=457.963..2244.928 rows=367026 loops=1)\n> Filter: ((group_id)::bigint = 9)\n> Total runtime: 2571.496 ms\n> (3 rows)\n\nOK. Stop and think about what you're telling postgresql to do here.\n\nYou're telling it to cast the field group_id to int8, then compare it to\n9. How can it cast the group_id to int8 without fetching it? That's\nright, you're ensuring a seq scan. You need to put the int8 cast on the\nother side of that equality comparison, like:\n\nwhere group_id = 9::int8\n\n\n",
"msg_date": "Tue, 25 Apr 2006 10:54:23 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n> I have done the same tests on 8.1.0.\n\nBitmap scans are a totally different animal that doesn't exist in 7.4.\nA plain indexscan, such as 7.4 knows about, is generally not effective\nfor fetching more than a percent or two of the table. The crossover\npoint for a bitmap scan is much higher (don't know exactly, but probably\nsomething like 30-50%).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 11:55:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index "
},
{
"msg_contents": "\n>>I have done the same tests on 8.1.0.\n>>\n>>\n>>espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \n>>agenda_users_groups WHERE group_id = 9;\n>> QUERY PLAN\n>>----------------------------------------------------------------------------------------------------------------------------------------------\n>> Bitmap Heap Scan on agenda_users_groups (cost=2722.26..30341.78 \n>>rows=400361 width=8) (actual time=145.533..680.839 rows=367026 loops=1)\n>> Recheck Cond: (group_id = 9::numeric)\n>> -> Bitmap Index Scan on i_agnusrsgrs_groupid (cost=0.00..2722.26 \n>>rows=400361 width=0) (actual time=142.958..142.958 rows=367026 loops=1)\n>> Index Cond: (group_id = 9::numeric)\n>> Total runtime: 1004.966 ms\n>>(5 rows)\n> \n> \n> How big are these individual records? I'm guessing a fairly good size,\n> since an index scan is winning.\n\n How I could know the size on an individual record?\n\n> \n> \n>>espsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \n>>agenda_users_groups WHERE group_id::int8 = 9;\n>> QUERY PLAN\n>>-------------------------------------------------------------------------------------------------------------------------------\n>> Seq Scan on agenda_users_groups (cost=0.00..60947.43 rows=12777 \n>>width=8) (actual time=457.963..2244.928 rows=367026 loops=1)\n>> Filter: ((group_id)::bigint = 9)\n>> Total runtime: 2571.496 ms\n>>(3 rows)\n> \n> \n> OK. Stop and think about what you're telling postgresql to do here.\n> \n> You're telling it to cast the field group_id to int8, then compare it to\n> 9. How can it cast the group_id to int8 without fetching it? That's\n> right, you're ensuring a seq scan. You need to put the int8 cast on the\n> other side of that equality comparison, like:\n> \n> where group_id = 9::int8\n\n I just did what Chris Smith asked me to do :), here I paste the \nresults I get when I change the cast.\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups WHERE group_id = 9::int8;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on agenda_users_groups (cost=2722.33..30343.06 \nrows=400379 width=8) (actual time=147.723..714.473 rows=367026 loops=1)\n Recheck Cond: (group_id = 9::numeric)\n -> Bitmap Index Scan on i_agnusrsgrs_groupid (cost=0.00..2722.33 \nrows=400379 width=0) (actual time=145.015..145.015 rows=367026 loops=1)\n Index Cond: (group_id = 9::numeric)\n Total runtime: 1038.537 ms\n(5 rows)\n\nespsm_moviltelevision=# EXPLAIN ANALYZE SELECT agenda_user_group_id FROM \nagenda_users_groups WHERE group_id = '9'::int8;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on agenda_users_groups (cost=2722.33..30343.06 \nrows=400379 width=8) (actual time=153.858..1192.838 rows=367026 loops=1)\n Recheck Cond: (group_id = 9::numeric)\n -> Bitmap Index Scan on i_agnusrsgrs_groupid (cost=0.00..2722.33 \nrows=400379 width=0) (actual time=151.298..151.298 rows=367026 loops=1)\n Index Cond: (group_id = 9::numeric)\n Total runtime: 1527.039 ms\n(5 rows)\n\n\nThanks\n-- \nArnau\n",
"msg_date": "Tue, 25 Apr 2006 18:33:33 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
},
{
"msg_contents": "> OK. Stop and think about what you're telling postgresql to do here.\n>\n> You're telling it to cast the field group_id to int8, then compare it to\n> 9. How can it cast the group_id to int8 without fetching it? That's\n> right, you're ensuring a seq scan. You need to put the int8 cast on the\n> other side of that equality comparison, like:\n\nYeh that one was my fault :) I couldn't remember which way it went and\nif 7.4.x had issues with int8 indexes..\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 26 Apr 2006 11:39:41 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query on postgresql 7.4.2 not using index"
}
] |
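Two of the suggestions in this thread can be sketched concretely. First, Scott's enable_seqscan comparison (a diagnostic only, not something to leave set in production); second, Guillaume's 7.4-era recipe for converting the numeric group_id column to an integer. Column and index names follow the thread, but the steps are a sketch: dropping the column also drops constraints and indexes that use it, and as Guillaume notes the referenced and referencing tables need the same treatment.

    -- 1. Force the planner away from a seqscan, purely to compare timings.
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT agenda_user_group_id FROM agenda_users_groups WHERE group_id = 9;
    SET enable_seqscan = on;

    -- 2. Replace numeric(10) with integer on 7.4 (no ALTER TYPE there).
    ALTER TABLE agenda_users_groups ADD COLUMN group_id_int integer;
    UPDATE agenda_users_groups SET group_id_int = group_id;
    ALTER TABLE agenda_users_groups DROP COLUMN group_id;
    ALTER TABLE agenda_users_groups RENAME COLUMN group_id_int TO group_id;
    -- Re-create the index (and re-add any constraints dropped with the column).
    CREATE INDEX i_agnusrsgrs_groupid ON agenda_users_groups (group_id);
    VACUUM ANALYZE agenda_users_groups;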
[
{
"msg_contents": "Hello,\nThe performance comparison saga of the last month continues (see list archive).\nAfter some time experimenting on windows, the conclusion is clear:\n\nwindows is likely crap for databases other than MS-SQL.\n\nI guess that MS-SQL uses lot of undocumented api calls, may run in kernel\nmode, ring 0 and a lot of dirty tricks to get some reasonable performance.\n\nThen, I asked my coleague to send a new FB dump and a Pg dump to try at my\ndesktop machine.\nThis time, the database is somewhat bigger. Around 20 million records.\n\nThe timings are attached. Tried to follow the same query sequence on both files.\nBoth databases are much more faster on linux than on windows, and the desktop\nmachine is not dedicated and tuned. (no scsi, no raid, many services enabled,\next3 fs, etc).\nAt many of the queries, postgresql is faster, sometimes way MUCH faster.\nBut Firebird have very good defaults out of the box and a few of the queries\nare really a pain in Postgresql.\nPlease, see the abismal timing differences at the last 2 queries, for example.\nThey used 100% cpu, almost no disk activity, no twait cpu, for loooong time to\ncomplete.\nMaybe these queries bring into the light some instructions weaknesses, or bad\ntuning. \nDo you have some suggestions?\nRegards.\n\nAndre Felipe Machado\nhttp://www.techforce.com.br\nlinux blog",
"msg_date": "Tue, 25 Apr 2006 14:38:16 -0200",
"msg_from": "\"andremachado\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows)"
},
{
"msg_contents": "\"andremachado\" <[email protected]> writes:\n> After some time experimenting on windows, the conclusion is clear:\n> windows is likely crap for databases other than MS-SQL.\n\nMaybe. One thing that comes to mind is that you really should do some\nperformance tuning experiments. In particular it'd be a good idea to\nincrease checkpoint_segments and try other settings for wal_sync_method.\nYour fifth query,\n\nbddnf=# explain analyze update NOTA_FISCAL set VA_TOTAL_ITENSDNF = (select sum(ITEM_NOTA.VA_TOTAL) from ITEM_NOTA where ITEM_NOTA.ID_NF = NOTA_FISCAL.ID_NF) where ID_NF in (select NF2.ID_NF from DECLARACAO DE inner join CADASTRO CAD on (CAD.ID_DECLARACAO=DE.ID_DECLARACAO) inner join NOTA_FISCAL NF2 on (NF2.ID_CADASTRO=CAD.ID_CADASTRO) where DE.ID_ARQUIVO in (1) );\n\nshows runtime of the plan proper as 158 seconds but total runtime as 746\nseconds --- the discrepancy has to be associated with writing out the\nupdated rows, which there are a lot of (719746) in this query, but still\nwe should be able to do it faster than that. So I surmise a bottleneck\nin pushing WAL updates to disk.\n\nThe last two queries are interesting. Does Firebird have any equivalent\nof EXPLAIN, ie a way to see what sort of query plan they are using?\nI suspect they are being more aggressive about optimizing the max()\nfunctions in the sub-selects than we are. In particular, the 8.1 code\nfor optimizing min/max just punts if it sees any sub-selects in the\nWHERE clause, which prevents us from doing anything with these examples.\n\n /*\n * Also reject cases with subplans or volatile functions in WHERE. This\n * may be overly paranoid, but it's not entirely clear if the\n * transformation is safe then.\n */\n if (contain_subplans(parse->jointree->quals) ||\n contain_volatile_functions(parse->jointree->quals))\n return NULL;\n\nThis is something I'd wanted to go back and look at more carefully, but\nnever got around to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 22:06:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Firebird 1.5.3 X Postgresql 8.1.3 (linux and windows) "
}
] |
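Tom's tuning suggestions above amount to postgresql.conf changes along these lines. The values shown are illustrative starting points for experimentation, not recommendations taken from the thread.

    # postgresql.conf (illustrative values only)
    checkpoint_segments = 32          # 8.1 default is 3; fewer, larger checkpoints
    wal_sync_method = fdatasync       # benchmark the alternatives your platform
                                      # offers: fsync, open_sync, open_datasync, ...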
[
{
"msg_contents": "Fellow PostgreSQLers,\n\nThis post is longish and has a bit of code, but here's my question up- \nfront: Why are batch queries in my PL/pgSQL functions no faster than \niterating over a loop and executing a series of queries for each \niteration of the loop? Are batch queries or array or series \ngeneration slow in PL/pgSQL?\n\nNow, for the details of my problem. For managing an ordered many-to- \nmany relations between blog entries and tags, I created these tables:\n\n CREATE TABLE entry (\n id SERIAL PRIMARY KEY,\n title text,\n content text\n );\n\n CREATE TABLE tag (\n id SERIAL PRIMARY KEY,\n name text\n );\n\n CREATE TABLE entry_coll_tag (\n entry_id integer REFERENCES entry(id)\n ON UPDATE CASCADE\n ON DELETE CASCADE,\n tag_id integer REFERENCES tag(id)\n ON UPDATE CASCADE\n ON DELETE CASCADE,\n ord smallint,\n PRIMARY KEY (entry_id, tag_id)\n );\n\nTo make it easy to associate an entry with a bunch of tags in a \nsingle query, I wrote this PL/pgSQL function:\n\n CREATE OR REPLACE FUNCTION entry_coll_tag_set (\n obj_id integer,\n coll_ids integer[]\n ) RETURNS VOID AS $$\n DECLARE\n -- For checking to see if a tuple was updated.\n update_count smallint;\n -- For looping through the array.\n iloop integer := 1;\n BEGIN\n -- Lock the containing object tuple to prevernt inserts into the\n -- collection table.\n PERFORM true FROM entry WHERE id = obj_id FOR UPDATE;\n\n -- Update possible existing record with the current sequence so\n -- as to avoid unique constraint violations. We just set it to a\n -- negative number, since negative numbers are never used for\n -- ord.\n UPDATE entry_coll_tag\n SET ord = -ord\n WHERE entry_id = obj_id;\n\n -- Loop through the tag IDs to associate with the entry ID.\n while coll_ids[iloop] is not null loop\n -- Update possible existing collection record.\n UPDATE entry_coll_tag\n SET ord = iloop\n WHERE entry_id = obj_id\n AND tag_id = coll_ids[iloop];\n\n -- Only insert the item if nothing was updated.\n IF FOUND IS false THEN\n -- Insert a new record.\n INSERT INTO entry_coll_tag (entry_id, tag_id, ord)\n VALUES (obj_id, coll_ids[iloop], iloop);\n END IF;\n iloop := iloop + 1;\n END loop;\n\n -- Delete any remaining tuples.\n DELETE FROM entry_coll_tag\n WHERE entry_id = obj_id AND ord < 0;\n END;\n $$ LANGUAGE plpgsql SECURITY DEFINER;\n\nJosh Berkus kindly reviewed it, and suggested that I take advantage \nof generate_series() and batch updates instead of looping through the \nresults. 
Here's the revised version:\n\n CREATE OR REPLACE FUNCTION entry_coll_tag_set (\n obj_id integer,\n coll_ids integer[]\n ) RETURNS VOID AS $$\n BEGIN\n -- Lock the containing object tuple to prevernt inserts into the\n -- collection table.\n PERFORM true FROM entry WHERE id = obj_id FOR UPDATE;\n\n -- First set all ords to negative value to prevent unique index\n -- violations.\n UPDATE entry_coll_tag\n SET ord = -ord\n WHERE entry_id = obj_id;\n\n IF FOUND IS false THEN\n -- There are no existing tuples, so just insert the new ones.\n INSERT INTO entry_coll_tag (entry_id, tag_id, ord)\n SELECT obj_id, coll_ids[gs.ser], gs.ser\n FROM generate_series(1, array_upper(coll_ids, 1))\n AS gs(ser);\n ELSE\n -- First, update the existing tuples to new ord values.\n UPDATE entry_coll_tag SET ord = ser\n FROM (\n SELECT gs.ser, coll_ids[gs.ser] as move_tag\n FROM generate_series(1, array_upper(coll_ids, 1))\n AS gs(ser)\n ) AS expansion\n WHERE move_tag = entry_coll_tag.tag_id\n AND entry_id = obj_id;\n\n -- Now insert the new tuples.\n INSERT INTO entry_coll_tag (entry_id, tag_id, ord )\n SELECT obj_id, coll_ids[gs.ser], gs.ser\n FROM generate_series(1, array_upper(coll_ids, 1)) AS gs \n(ser)\n WHERE coll_ids[gs.ser] NOT IN (\n SELECT tag_id FROM entry_coll_tag ect2\n WHERE entry_id = obj_id\n );\n\n -- Delete any remaining tuples.\n DELETE FROM entry_coll_tag\n WHERE entry_id = obj_id AND ord < 0;\n END IF;\n END;\n $$ LANGUAGE plpgsql SECURITY DEFINER;\n\nJosh thought that the replacement of the loop with a couple of batch \nqueries would be an order of magnitude faster. So I happily ran my \nbenchmark comparing the two approaches. The benchmark actually tests \ntwo different functions that had a similar refactoring, as well as \ntwo other functions that are quite simple. Before the tests, I \ninserted 100,000 entry records, and a random number of tag records \n(1-10 for each entry) and related entry_coll_tag records, which came \nout to around 500,000 entries in entry_coll_tag. I ran VACUUM and \nANALYZE before each test, and got the following results:\n\n func: 13.67 wallclock seconds (0.11 usr + 1.82 sys = 1.93 CPU) @ \n21.95/s\nfunc2: 13.68 wallclock seconds (0.10 usr + 1.71 sys = 1.81 CPU) @ \n21.93/s\n perl: 41.14 wallclock seconds (0.26 usr + 6.94 sys = 7.20 CPU) @ \n7.29/s\n\n\"func\" is my original version of the function with the loop, \"func2\" \nis the refactored version with the batch queries, and \"perl\" is my \nattempt to do roughly the same thing in Perl. The benefits of using \nthe functions over doing the work in app space is obvious, but there \nseems to be little difference between the two function approaches. If \nyou want to see the complete benchmark SQL and test script, it's here:\n\n http://www.justatheory.com/code/ \nbench_plpgsql_collection_functions2.tgz\n\nSo my question is quite simple: Why was the batch query approach no \nfaster than the loop approach? I assume that if my tests managed 100 \nor 1000 rows in the collection table, the batch approach would show \nan improvement, but why would it not when I was using it to manage \n8-10 rows? Have I missed something here?\n\nThanks,\n\nDavid\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 25 Apr 2006 09:44:55 -0700",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL Loop Vs. Batch Update"
},
{
"msg_contents": "David Wheeler <[email protected]> writes:\n> This post is longish and has a bit of code, but here's my question up- \n> front: Why are batch queries in my PL/pgSQL functions no faster than \n> iterating over a loop and executing a series of queries for each \n> iteration of the loop?\n\nYou'd really have to look at the plans generated for each of the\ncommands in the functions to be sure. A knee-jerk reaction is to\nsuggest that that NOT IN might be the core of the problem, but it's\nonly a guess.\n\nIt's a bit tricky to examine the behavior of a parameterized query,\nwhich is what these will all be since they depend on local variables\nof the plpgsql function (which are passed as parameters to the main\nSQL executor). The basic idea is\n\n\tPREPARE foo(datatype, datatype, ...) AS SELECT ... $1 ... $2 ...\n\n\tEXPLAIN ANALYZE EXECUTE foo(value, value)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 21:19:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
},
{
"msg_contents": "On Apr 25, 2006, at 18:19, Tom Lane wrote:\n\n> You'd really have to look at the plans generated for each of the\n> commands in the functions to be sure. A knee-jerk reaction is to\n> suggest that that NOT IN might be the core of the problem, but it's\n> only a guess.\n\nWell, the rows are indexed (I forgot to include the indexes in my \nfirst post), and given that each entry_id has no more than ten \nassociated tag_ids, I would expect it to be quite fast, relying on \nthe primary key index to look up the entry_id first, and then the \nassociated tag_ids. But that's just a guess on my part, too. Perhaps \nI should try a left outer join with tag_id IS NULL?\n\n> It's a bit tricky to examine the behavior of a parameterized query,\n> which is what these will all be since they depend on local variables\n> of the plpgsql function (which are passed as parameters to the main\n> SQL executor).\n\nRight, that makes sense.\n\n> The basic idea is\n>\n> \tPREPARE foo(datatype, datatype, ...) AS SELECT ... $1 ... $2 ...\n>\n> \tEXPLAIN ANALYZE EXECUTE foo(value, value)\n\nJust on a lark, I tried to get this to work:\n\ntry=# explain analyze EXECUTE foo(1, ARRAY \n[600001,600002,600003,600004,600005,600006,600007]);\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=26.241..26.251 \nrows=1 loops=1)\nTotal runtime: 27.512 ms\n(2 rows)\n\nThat's not much use. Is there no way to EXPLAIN ANALYZE this stuff?\n\nThanks Tom.\n\nBest,\n\nDavid\n\n",
"msg_date": "Tue, 25 Apr 2006 19:27:48 -0700",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
},
{
"msg_contents": "David Wheeler <[email protected]> writes:\n> Just on a lark, I tried to get this to work:\n\n> try=# explain analyze EXECUTE foo(1, ARRAY \n> [600001,600002,600003,600004,600005,600006,600007]);\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> --------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=26.241..26.251 \n> rows=1 loops=1)\n> Total runtime: 27.512 ms\n> (2 rows)\n\n> That's not much use.\n\nIt looks like you had something trivial as the definition of foo().\nTry one of the actual queries from the plpgsql function.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 22:36:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
},
{
"msg_contents": "On Apr 25, 2006, at 19:36, Tom Lane wrote:\n\n> It looks like you had something trivial as the definition of foo().\n\nYeah, the function call. :-)\n\n> Try one of the actual queries from the plpgsql function.\n\nOh. Duh. Will do. Tomorrow.\n\nBest,\n\nDavid\n\n",
"msg_date": "Tue, 25 Apr 2006 20:18:05 -0700",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
},
{
"msg_contents": "On Apr 25, 2006, at 19:36, Tom Lane wrote:\n\n> Try one of the actual queries from the plpgsql function.\n\nHere we go:\n\ntry=# PREPARE foo(int, int[], int) AS\ntry-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )\ntry-# SELECT $1, $2[gs.ser], gs.ser + $3\ntry-# FROM generate_series(1, array_upper($2, 1)) AS gs(ser)\ntry-# WHERE $2[gs.ser] NOT IN (\ntry(# SELECT tag_id FROM entry_coll_tag ect2\ntry(# WHERE entry_id = $1\ntry(# );\nPREPARE\ntry=# explain analyze execute foo(100100, ARRAY \n[600001,600002,600003,600004,600005,600006,600007], 0);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-\nFunction Scan on generate_series gs (cost=7.78..25.28 rows=500 \nwidth=4) (actual time=80.982..81.265 rows=7 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Index Scan using idx_entry_tag_ord on entry_coll_tag ect2 \n(cost=0.00..7.77 rows=5 width=4) (actual time=80.620..80.620 rows=0 \nloops=1)\n Index Cond: (entry_id = $1)\nTrigger for constraint entry_coll_tag_entry_id_fkey: time=3.210 calls=7\nTrigger for constraint entry_coll_tag_tag_id_fkey: time=4.412 calls=7\nTotal runtime: 158.672 ms\n(8 rows)\n\nActually looks pretty good to me. Although is generate_series() being \nrather slow?\n\nThanks,\n\nDavid\n",
"msg_date": "Tue, 2 May 2006 16:49:31 -0700",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
},
{
"msg_contents": "On May 2, 2006, at 16:49, David Wheeler wrote:\n\n> On Apr 25, 2006, at 19:36, Tom Lane wrote:\n>\n>> Try one of the actual queries from the plpgsql function.\n>\n> Here we go:\n>\n> try=# PREPARE foo(int, int[], int) AS\n> try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )\n> try-# SELECT $1, $2[gs.ser], gs.ser + $3\n> try-# FROM generate_series(1, array_upper($2, 1)) AS gs(ser)\n> try-# WHERE $2[gs.ser] NOT IN (\n> try(# SELECT tag_id FROM entry_coll_tag ect2\n> try(# WHERE entry_id = $1\n> try(# );\n> PREPARE\n> try=# explain analyze execute foo(100100, ARRAY \n> [600001,600002,600003,600004,600005,600006,600007], 0);\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> -----\n> Function Scan on generate_series gs (cost=7.78..25.28 rows=500 \n> width=4) (actual time=80.982..81.265 rows=7 loops=1)\n> Filter: (NOT (hashed subplan))\n> SubPlan\n> -> Index Scan using idx_entry_tag_ord on entry_coll_tag ect2 \n> (cost=0.00..7.77 rows=5 width=4) (actual time=80.620..80.620 rows=0 \n> loops=1)\n> Index Cond: (entry_id = $1)\n> Trigger for constraint entry_coll_tag_entry_id_fkey: time=3.210 \n> calls=7\n> Trigger for constraint entry_coll_tag_tag_id_fkey: time=4.412 calls=7\n> Total runtime: 158.672 ms\n> (8 rows)\n>\n> Actually looks pretty good to me. Although is generate_series() \n> being rather slow?\n\nScratch that:\n\ntry=# delete from entry_coll_tag ;\nDELETE 7\ntry=# vacuum;\nanalyze;\nVACUUM\ntry=# analyze;\nANALYZE\ntry=# explain analyze execute foo(100100, ARRAY \n[600001,600002,600003,600004,600005,600006,600007], 0);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------------\nFunction Scan on generate_series gs (cost=7.78..25.28 rows=500 \nwidth=4) (actual time=0.193..0.284 rows=7 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Index Scan using idx_entry_tag_ord on entry_coll_tag ect2 \n(cost=0.00..7.77 rows=5 width=4) (actual time=0.022..0.022 rows=0 \nloops=1)\n Index Cond: (entry_id = $1)\nTrigger for constraint entry_coll_tag_entry_id_fkey: time=0.858 calls=7\nTrigger for constraint entry_coll_tag_tag_id_fkey: time=0.805 calls=7\nTotal runtime: 3.266 ms\n(8 rows)\n\ntry=# delete from entry_coll_tag ;DELETE 7\ntry=# explain analyze execute foo(100100, ARRAY \n[600001,600002,600003,600004,600005,600006,600007], 0);\n\nSo my tests are calling this query six hundred times. Could it be \nthat it just gets slower over time because the database needs to be \nvacuumed? Or perhaps pg_autovacuum is kicking in during execution and \n*that* slows things down?\n\nThanks,\n\nDavid\n\n\n",
"msg_date": "Tue, 2 May 2006 16:52:46 -0700",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
},
{
"msg_contents": "On May 2, 2006, at 16:52, David Wheeler wrote:\n\n>> Actually looks pretty good to me. Although is generate_series() \n>> being rather slow?\n>\n> Scratch that:\n\nBah, dammit, there were no rows in that relevant table. Please \ndisregard my previous EXPLAIN ANALYZE posts.\n\nI've re-run my script and populated it with 549,815 rows. *Now* let's \nsee what we've got:\n\n\ntry=# VACUUM;\nVACUUM\ntry=# ANALYZE;\nANALYZE\ntry=# PREPARE foo(int, int[], int) AS\ntry-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )\ntry-# SELECT $1, $2[gs.ser], gs.ser + $3\ntry-# FROM generate_series(1, array_upper($2, 1)) AS gs(ser)\ntry-# WHERE $2[gs.ser] NOT IN (\ntry(# SELECT tag_id FROM entry_coll_tag ect2\ntry(# WHERE entry_id = $1\ntry(# );\nPREPARE\ntry=# explain analyze execute foo(100100, ARRAY \n[600001,600002,600003,600004,600005,600006,600007], 0);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------------\nFunction Scan on generate_series gs (cost=9.68..27.18 rows=500 \nwidth=4) (actual time=0.965..1.055 rows=7 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Index Scan using idx_entry_tag_ord on entry_coll_tag ect2 \n(cost=0.00..9.66 rows=8 width=4) (actual time=0.844..0.844 rows=0 \nloops=1)\n Index Cond: (entry_id = $1)\nTrigger for constraint entry_coll_tag_entry_id_fkey: time=3.872 calls=7\nTrigger for constraint entry_coll_tag_tag_id_fkey: time=3.872 calls=7\nTotal runtime: 12.797 ms\n(8 rows)\n\ntry=# delete from entry_coll_tag where entry_id = 100100;\nDELETE 7\ntry=# explain analyze execute foo(100100, ARRAY \n[600001,600002,600003,600004,600005,600006,600007], 0);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------------\nFunction Scan on generate_series gs (cost=9.68..27.18 rows=500 \nwidth=4) (actual time=0.117..0.257 rows=7 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Index Scan using idx_entry_tag_ord on entry_coll_tag ect2 \n(cost=0.00..9.66 rows=8 width=4) (actual time=0.058..0.058 rows=0 \nloops=1)\n Index Cond: (entry_id = $1)\nTrigger for constraint entry_coll_tag_entry_id_fkey: time=0.542 calls=7\nTrigger for constraint entry_coll_tag_tag_id_fkey: time=0.590 calls=7\nTotal runtime: 2.118 ms\n(8 rows)\n\nDamn, that seems pretty efficient. I wonder if it's the other \nfunction, then. I'll have to work EXPLAIN ANALYZEing _it_.\n\nBest,\n\nDavid\n",
"msg_date": "Tue, 2 May 2006 18:16:56 -0700",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL Loop Vs. Batch Update "
}
] |
[
{
"msg_contents": "For the query\n\n \n\n \n\nSelect col1 from table1\n\nWhere col1 like '172.%'\n\n \n\nThe table has 133 million unique ip addresses. Col1 is indexed.\n\n \n\nThe optimizer is using a sequential scan\n\n \n\nThis is the explain analyze output\n\n \n\n\"Seq Scan on table1 (cost=0.00..2529284.80 rows=1 width=15) (actual\ntime=307591.339..565251.775 rows=524288 loops=1)\"\n\n\" Filter: ((col1)::text ~~ '172.%'::text)\"\n\n\"Total runtime: 565501.873 ms\"\n\n \n\n \n\nThe number of affected rows (500K) is a small fraction of the total row\ncount.\n\n\n\n\n\n\n\n\n\n\nFor the query\n \n \nSelect col1 from table1\nWhere col1 like ‘172.%’\n \nThe table has 133 million unique ip addresses. Col1 is\nindexed.\n \nThe optimizer is using a sequential scan\n \nThis is the explain analyze output\n \n\"Seq Scan on table1 (cost=0.00..2529284.80 rows=1\nwidth=15) (actual time=307591.339..565251.775 rows=524288 loops=1)\"\n\" Filter: ((col1)::text ~~ '172.%'::text)\"\n\"Total runtime: 565501.873 ms\"\n \n \nThe number of affected rows (500K) is a small fraction of\nthe total row count.",
"msg_date": "Tue, 25 Apr 2006 10:08:02 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner not using index for like operator"
},
{
"msg_contents": "If you are using a locale other than the C locale, you need to create\nthe index with an operator class to get index scans with like.\n \nSee here for details:\n \nhttp://www.postgresql.org/docs/8.1/interactive/indexes-opclass.html\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Sriram\nDandapani\nSent: Tuesday, April 25, 2006 12:08 PM\nTo: Pgsql-Performance (E-mail)\nSubject: [PERFORM] planner not using index for like operator\n\n\n\nFor the query\n\n \n\n \n\nSelect col1 from table1\n\nWhere col1 like '172.%'\n\n \n\nThe table has 133 million unique ip addresses. Col1 is indexed.\n\n \n\nThe optimizer is using a sequential scan\n\n \n\nThis is the explain analyze output\n\n \n\n\"Seq Scan on table1 (cost=0.00..2529284.80 rows=1 width=15) (actual\ntime=307591.339..565251.775 rows=524288 loops=1)\"\n\n\" Filter: ((col1)::text ~~ '172.%'::text)\"\n\n\"Total runtime: 565501.873 ms\"\n\n \n\n \n\nThe number of affected rows (500K) is a small fraction of the total row\ncount.\n\n\n\n\nMessage\n\n\n\n\nIf you \nare using a locale other than the C locale, you need to create the index with an \noperator class to get index scans with like.\n \nSee \nhere for details:\n \nhttp://www.postgresql.org/docs/8.1/interactive/indexes-opclass.html\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Sriram \n DandapaniSent: Tuesday, April 25, 2006 12:08 PMTo: \n Pgsql-Performance (E-mail)Subject: [PERFORM] planner not using \n index for like operator\n\nFor the \n query\n \n \nSelect col1 from \n table1\nWhere col1 like \n ‘172.%’\n \nThe table has 133 million unique \n ip addresses. Col1 is indexed.\n \nThe optimizer is using a \n sequential scan\n \nThis is the explain analyze \n output\n \n\"Seq Scan on table1 \n (cost=0.00..2529284.80 rows=1 width=15) (actual time=307591.339..565251.775 \n rows=524288 loops=1)\"\n\" Filter: ((col1)::text ~~ \n '172.%'::text)\"\n\"Total runtime: 565501.873 \n ms\"\n \n \nThe number of affected rows (500K) \n is a small fraction of the total row \ncount.",
"msg_date": "Tue, 25 Apr 2006 13:03:13 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner not using index for like operator"
},
{
"msg_contents": "On Tue, Apr 25, 2006 at 10:08:02AM -0700, Sriram Dandapani wrote:\nHere's the key:\n\n> \" Filter: ((col1)::text ~~ '172.%'::text)\"\n\nIn order to do a like comparison, it has to convert col1 (which I'm\nguessing is of type 'inet') to text, so there's no way it can use an\nindex on col1 (maybe a function index, but that's a different story).\n\nIs there some reason you're not doing\n\nWHERE col1 <<= '172/8'::inet\n\n?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 25 Apr 2006 13:24:45 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner not using index for like operator"
}
] |
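A minimal sketch of the two fixes suggested in the thread above, reusing the table and column names from the original post (table1, col1) with an invented index name; the pattern_ops index assumes col1 stays character varying, and the inet variant assumes every stored value parses as an IP address:

    -- btree index usable by prefix LIKE even under a non-C locale
    CREATE INDEX table1_col1_prefix_idx ON table1 (col1 varchar_pattern_ops);
    SELECT col1 FROM table1 WHERE col1 LIKE '172.%';

    -- alternative: express the filter as network containment on inet
    SELECT col1 FROM table1 WHERE col1::inet <<= '172.0.0.0/8'::inet;

The pattern_ops index only helps left-anchored patterns such as '172.%'; the inet form is clearer about intent, though the cast in the WHERE clause does not by itself make the filter indexable.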
[
{
"msg_contents": "\n\tHere is a simple test case for this strange behaviour :\n\nannonces=> CREATE TABLE test.current (id INTEGER PRIMARY KEY, description \nTEXT);\nINFO: CREATE TABLE / PRIMARY KEY creera un index implicite \n<<current_pkey>> pour la table <<current>>\nCREATE TABLE\n\nannonces=> CREATE TABLE test.archive (id INTEGER PRIMARY KEY, description \nTEXT);\nINFO: CREATE TABLE / PRIMARY KEY creera un index implicite \n<<archive_pkey>> pour la table <<archive>>\nCREATE TABLE\n\nannonces=> CREATE VIEW test.all AS SELECT * FROM test.archive UNION ALL \nSELECT * FROM test.current ;\nCREATE VIEW\n\n\tlet's populate...\n\nannonces=> INSERT INTO test.current SELECT id, description FROM annonces;\nINSERT 0 11524\nannonces=> INSERT INTO test.archive SELECT id, description FROM \narchive_ext;\nINSERT 0 437351\n\nannonces=> ANALYZE test.archive;\nANALYZE\nannonces=> ANALYZE test.current;\nANALYZE\n\n\tThis is the bookmarks table...\n\nSELECT count(*), list_id FROM bookmarks GROUP BY list_id;\n count | list_id\n-------+---------\n 15 | 7\n 5 | 5\n 150 | 4\n 3 | 3\n 15 | 2\n 2 | 1\n 6 | 8\n\n\tI want to list the stuff I bookmarked :\n\n\nannonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT \nannonce_id FROM bookmarks WHERE list_id IN ('4'));\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=6.58..532.84 rows=140 width=203) (actual \ntime=0.747..5.283 rows=150 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n -> Seq Scan on current (cost=0.00..467.24 rows=11524 width=203) \n(actual time=0.006..3.191 rows=11524 loops=1)\n -> Hash (cost=6.23..6.23 rows=140 width=4) (actual time=0.244..0.244 \nrows=150 loops=1)\n -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \ntime=0.155..0.184 rows=150 loops=1)\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \nwidth=4) (actual time=0.008..0.097 rows=150 loops=1)\n Filter: (list_id = 4)\n Total runtime: 5.343 ms\n(8 lignes)\n\nannonces=> set enable_hashjoin TO 0;\nSET\nannonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT \nannonce_id FROM bookmarks WHERE list_id IN ('4'));\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=4.83..824.22 rows=140 width=203) (actual \ntime=0.219..1.034 rows=150 loops=1)\n -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \ntime=0.158..0.199 rows=150 loops=1)\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 width=4) \n(actual time=0.011..0.096 rows=150 loops=1)\n Filter: (list_id = 4)\n -> Index Scan using current_pkey on current (cost=0.00..5.83 rows=1 \nwidth=203) (actual time=0.005..0.005 rows=1 loops=150)\n Index Cond: (current.id = \"outer\".annonce_id)\n Total runtime: 1.108 ms\n(7 lignes)\n\n\tHm, the row estimates on the \"bookmarks\" table are spot on ; why did it \nchoose the hash join ?\n\n\tNow, if I want to access the \"all\" view which contains the union of the \n\"current\" and \"archive\" table :\n\nannonces=> set enable_hashjoin TO 1;\nSET\nannonces=> EXPLAIN ANALYZE SELECT * FROM test.all WHERE id IN (SELECT \nannonce_id FROM bookmarks WHERE list_id IN ('4'));\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=6.58..33484.41 rows=314397 width=36) (actual \ntime=8300.484..8311.784 rows=150 loops=1)\n Hash Cond: 
(\"outer\".\"?column1?\" = \"inner\".annonce_id)\n -> Append (cost=0.00..23596.78 rows=449139 width=219) (actual \ntime=6.390..8230.821 rows=448875 loops=1)\n -> Seq Scan on archive (cost=0.00..18638.15 rows=437615 \nwidth=219) (actual time=6.389..8175.491 rows=437351 loops=1)\n -> Seq Scan on current (cost=0.00..467.24 rows=11524 width=203) \n(actual time=0.022..8.985 rows=11524 loops=1)\n -> Hash (cost=6.23..6.23 rows=140 width=4) (actual time=0.255..0.255 \nrows=150 loops=1)\n -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \ntime=0.168..0.197 rows=150 loops=1)\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \nwidth=4) (actual time=0.015..0.102 rows=150 loops=1)\n Filter: (list_id = 4)\n Total runtime: 8311.870 ms\n(10 lignes)\n\nannonces=> set enable_hashjoin TO 0;\nSET\nannonces=> EXPLAIN ANALYZE SELECT * FROM test.all WHERE id IN (SELECT \nannonce_id FROM bookmarks WHERE list_id IN ('4'));\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=79604.61..84994.98 rows=314397 width=36) (actual \ntime=6944.229..7109.371 rows=150 loops=1)\n Merge Cond: (\"outer\".annonce_id = \"inner\".id)\n -> Sort (cost=11.22..11.57 rows=140 width=4) (actual \ntime=0.326..0.355 rows=150 loops=1)\n Sort Key: bookmarks.annonce_id\n -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \ntime=0.187..0.218 rows=150 loops=1)\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \nwidth=4) (actual time=0.028..0.126 rows=150 loops=1)\n Filter: (list_id = 4)\n -> Sort (cost=79593.40..80716.25 rows=449139 width=36) (actual \ntime=6789.786..7014.815 rows=448625 loops=1)\n Sort Key: \"all\".id\n -> Append (cost=0.00..23596.78 rows=449139 width=219) (actual \ntime=0.013..391.447 rows=448875 loops=1)\n -> Seq Scan on archive (cost=0.00..18638.15 rows=437615 \nwidth=219) (actual time=0.013..332.353 rows=437351 loops=1)\n -> Seq Scan on current (cost=0.00..467.24 rows=11524 \nwidth=203) (actual time=0.013..8.396 rows=11524 loops=1)\n Total runtime: 37226.846 ms\n\n\tThe IN() is quite small (150 values), but the two large tables are \nseq-scanned... 
is there a way to avoid this ?\n\n\n-------------------------------------------------------------------------------------------------------------------------------------\n\n\tAnother nitpick : let's redo the first query differently.\n\nannonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT \nannonce_id FROM bookmarks WHERE list_id IN ('4'));\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=6.58..532.84 rows=140 width=203) (actual \ntime=0.794..5.791 rows=150 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n -> Seq Scan on current (cost=0.00..467.24 rows=11524 width=203) \n(actual time=0.003..3.554 rows=11524 loops=1)\n -> Hash (cost=6.23..6.23 rows=140 width=4) (actual time=0.265..0.265 \nrows=150 loops=1)\n -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \ntime=0.179..0.210 rows=150 loops=1)\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \nwidth=4) (actual time=0.021..0.102 rows=150 loops=1)\n Filter: (list_id = 4)\n Total runtime: 5.853 ms\n\nannonces=> EXPLAIN ANALYZE SELECT a.* FROM test.current a, (SELECT \nDISTINCT annonce_id FROM bookmarks WHERE list_id IN ('4')) AS b WHERE \na.id=b.annonce_id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=12.37..538.63 rows=140 width=203) (actual \ntime=0.812..5.362 rows=150 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n -> Seq Scan on current a (cost=0.00..467.24 rows=11524 width=203) \n(actual time=0.005..3.227 rows=11524 loops=1)\n -> Hash (cost=12.02..12.02 rows=140 width=4) (actual \ntime=0.296..0.296 rows=150 loops=1)\n -> Unique (cost=9.87..10.62 rows=140 width=4) (actual \ntime=0.215..0.265 rows=150 loops=1)\n -> Sort (cost=9.87..10.25 rows=150 width=4) (actual \ntime=0.215..0.226 rows=150 loops=1)\n Sort Key: bookmarks.annonce_id\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \nwidth=4) (actual time=0.007..0.104 rows=150 loops=1)\n Filter: (list_id = 4)\n Total runtime: 5.429 ms\n\n\tHm, it does Sort + Unique + Hash ; the Hash alone would have been better.\n\tReplacing DISTINCT with GROUP BY removes the sort.\n\nannonces=> EXPLAIN ANALYZE SELECT a.* FROM test.current a, (SELECT \nannonce_id FROM bookmarks WHERE list_id IN ('4') GROUP BY annonce_id) AS b \nWHERE a.id=b.annonce_id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=7.98..534.24 rows=140 width=203) (actual \ntime=0.811..5.557 rows=150 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n -> Seq Scan on current a (cost=0.00..467.24 rows=11524 width=203) \n(actual time=0.006..3.434 rows=11524 loops=1)\n -> Hash (cost=7.63..7.63 rows=140 width=4) (actual time=0.242..0.242 \nrows=150 loops=1)\n -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \ntime=0.156..0.186 rows=150 loops=1)\n -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \nwidth=4) (actual time=0.008..0.097 rows=150 loops=1)\n Filter: (list_id = 4)\n Total runtime: 5.647 ms\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 25 Apr 2006 19:53:15 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow queries salad ;)"
},
{
"msg_contents": "PFC <[email protected]> writes:\n> \tThe IN() is quite small (150 values), but the two large tables are \n> seq-scanned... is there a way to avoid this ?\n\nNot in 8.1. HEAD is a bit smarter about joins to Append relations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 14:20:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries salad ;) "
},
{
"msg_contents": "On Tue, Apr 25, 2006 at 07:53:15PM +0200, PFC wrote:\nWhat version is this??\n\n> annonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT \n> annonce_id FROM bookmarks WHERE list_id IN ('4'));\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=6.58..532.84 rows=140 width=203) (actual \n> time=0.747..5.283 rows=150 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n> -> Seq Scan on current (cost=0.00..467.24 rows=11524 width=203) \n> (actual time=0.006..3.191 rows=11524 loops=1)\n> -> Hash (cost=6.23..6.23 rows=140 width=4) (actual time=0.244..0.244 \n> rows=150 loops=1)\n> -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \n> time=0.155..0.184 rows=150 loops=1)\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \n> width=4) (actual time=0.008..0.097 rows=150 loops=1)\n> Filter: (list_id = 4)\n> Total runtime: 5.343 ms\n> (8 lignes)\n> \n> annonces=> set enable_hashjoin TO 0;\n> SET\n> annonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT \n> annonce_id FROM bookmarks WHERE list_id IN ('4'));\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=4.83..824.22 rows=140 width=203) (actual \n> time=0.219..1.034 rows=150 loops=1)\n> -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \n> time=0.158..0.199 rows=150 loops=1)\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 width=4) \n> (actual time=0.011..0.096 rows=150 loops=1)\n> Filter: (list_id = 4)\n> -> Index Scan using current_pkey on current (cost=0.00..5.83 rows=1 \n> width=203) (actual time=0.005..0.005 rows=1 loops=150)\n> Index Cond: (current.id = \"outer\".annonce_id)\n> Total runtime: 1.108 ms\n> (7 lignes)\n> \n> \tHm, the row estimates on the \"bookmarks\" table are spot on ; why did \n> \tit choose the hash join ?\n\nBecause it thought it would be cheaper; see the estimates. 
Increasing\neffective_cache_size or decreasing random_page_cost would favor the\nindex scan.\n\n> \tNow, if I want to access the \"all\" view which contains the union of \n> \tthe \"current\" and \"archive\" table :\n> \n> annonces=> set enable_hashjoin TO 1;\n> SET\n> annonces=> EXPLAIN ANALYZE SELECT * FROM test.all WHERE id IN (SELECT \n> annonce_id FROM bookmarks WHERE list_id IN ('4'));\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=6.58..33484.41 rows=314397 width=36) (actual \n> time=8300.484..8311.784 rows=150 loops=1)\n> Hash Cond: (\"outer\".\"?column1?\" = \"inner\".annonce_id)\n> -> Append (cost=0.00..23596.78 rows=449139 width=219) (actual \n> time=6.390..8230.821 rows=448875 loops=1)\n> -> Seq Scan on archive (cost=0.00..18638.15 rows=437615 \n> width=219) (actual time=6.389..8175.491 rows=437351 loops=1)\n> -> Seq Scan on current (cost=0.00..467.24 rows=11524 width=203) \n> (actual time=0.022..8.985 rows=11524 loops=1)\n> -> Hash (cost=6.23..6.23 rows=140 width=4) (actual time=0.255..0.255 \n> rows=150 loops=1)\n> -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \n> time=0.168..0.197 rows=150 loops=1)\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \n> width=4) (actual time=0.015..0.102 rows=150 loops=1)\n> Filter: (list_id = 4)\n> Total runtime: 8311.870 ms\n> (10 lignes)\n> \n> annonces=> set enable_hashjoin TO 0;\n> SET\n> annonces=> EXPLAIN ANALYZE SELECT * FROM test.all WHERE id IN (SELECT \n> annonce_id FROM bookmarks WHERE list_id IN ('4'));\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=79604.61..84994.98 rows=314397 width=36) (actual \n> time=6944.229..7109.371 rows=150 loops=1)\n> Merge Cond: (\"outer\".annonce_id = \"inner\".id)\n> -> Sort (cost=11.22..11.57 rows=140 width=4) (actual \n> time=0.326..0.355 rows=150 loops=1)\n> Sort Key: bookmarks.annonce_id\n> -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \n> time=0.187..0.218 rows=150 loops=1)\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \n> width=4) (actual time=0.028..0.126 rows=150 loops=1)\n> Filter: (list_id = 4)\n> -> Sort (cost=79593.40..80716.25 rows=449139 width=36) (actual \n> time=6789.786..7014.815 rows=448625 loops=1)\n> Sort Key: \"all\".id\n> -> Append (cost=0.00..23596.78 rows=449139 width=219) (actual \n> time=0.013..391.447 rows=448875 loops=1)\n> -> Seq Scan on archive (cost=0.00..18638.15 rows=437615 \n> width=219) (actual time=0.013..332.353 rows=437351 loops=1)\n> -> Seq Scan on current (cost=0.00..467.24 rows=11524 \n> width=203) (actual time=0.013..8.396 rows=11524 loops=1)\n> Total runtime: 37226.846 ms\n> \n> \tThe IN() is quite small (150 values), but the two large tables are \n> seq-scanned... is there a way to avoid this ?\n \nPossibly if you break the view apart...\n\nSELECT ... FROM current WHERE id IN (...)\nUNION ALL\nSELECT ... 
FROM archive WHERE id IN (...)\n \n> -------------------------------------------------------------------------------------------------------------------------------------\n> \n> \tAnother nitpick : let's redo the first query differently.\n> \n> annonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT \n> annonce_id FROM bookmarks WHERE list_id IN ('4'));\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=6.58..532.84 rows=140 width=203) (actual \n> time=0.794..5.791 rows=150 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n> -> Seq Scan on current (cost=0.00..467.24 rows=11524 width=203) \n> (actual time=0.003..3.554 rows=11524 loops=1)\n> -> Hash (cost=6.23..6.23 rows=140 width=4) (actual time=0.265..0.265 \n> rows=150 loops=1)\n> -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \n> time=0.179..0.210 rows=150 loops=1)\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \n> width=4) (actual time=0.021..0.102 rows=150 loops=1)\n> Filter: (list_id = 4)\n> Total runtime: 5.853 ms\n> \n> annonces=> EXPLAIN ANALYZE SELECT a.* FROM test.current a, (SELECT \n> DISTINCT annonce_id FROM bookmarks WHERE list_id IN ('4')) AS b WHERE \n> a.id=b.annonce_id;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=12.37..538.63 rows=140 width=203) (actual \n> time=0.812..5.362 rows=150 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n> -> Seq Scan on current a (cost=0.00..467.24 rows=11524 width=203) \n> (actual time=0.005..3.227 rows=11524 loops=1)\n> -> Hash (cost=12.02..12.02 rows=140 width=4) (actual \n> time=0.296..0.296 rows=150 loops=1)\n> -> Unique (cost=9.87..10.62 rows=140 width=4) (actual \n> time=0.215..0.265 rows=150 loops=1)\n> -> Sort (cost=9.87..10.25 rows=150 width=4) (actual \n> time=0.215..0.226 rows=150 loops=1)\n> Sort Key: bookmarks.annonce_id\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \n> width=4) (actual time=0.007..0.104 rows=150 loops=1)\n> Filter: (list_id = 4)\n> Total runtime: 5.429 ms\n> \n> \tHm, it does Sort + Unique + Hash ; the Hash alone would have been \n> \tbetter.\n\nThe hash alone wouldn't have gotten you the same output as a DISTINCT,\nthough.\n\n> \tReplacing DISTINCT with GROUP BY removes the sort.\n> \n> annonces=> EXPLAIN ANALYZE SELECT a.* FROM test.current a, (SELECT \n> annonce_id FROM bookmarks WHERE list_id IN ('4') GROUP BY annonce_id) AS b \n> WHERE a.id=b.annonce_id;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=7.98..534.24 rows=140 width=203) (actual \n> time=0.811..5.557 rows=150 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n> -> Seq Scan on current a (cost=0.00..467.24 rows=11524 width=203) \n> (actual time=0.006..3.434 rows=11524 loops=1)\n> -> Hash (cost=7.63..7.63 rows=140 width=4) (actual time=0.242..0.242 \n> rows=150 loops=1)\n> -> HashAggregate (cost=4.83..6.23 rows=140 width=4) (actual \n> time=0.156..0.186 rows=150 loops=1)\n> -> Seq Scan on bookmarks (cost=0.00..4.45 rows=150 \n> width=4) (actual time=0.008..0.097 rows=150 loops=1)\n> Filter: (list_id = 4)\n> Total runtime: 5.647 ms\n\nUhm.. that actually runs slower...\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 25 Apr 2006 13:35:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries salad ;)"
}
] |
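A sketch of the workaround hinted at in the reply above: skip the test.all view and query the two member tables directly with UNION ALL, so the planner is free to pick a per-table plan (names as in the thread; whether 8.1 actually chooses the index scans still depends on cost settings):

    SELECT * FROM test.current
     WHERE id IN (SELECT annonce_id FROM bookmarks WHERE list_id = 4)
    UNION ALL
    SELECT * FROM test.archive
     WHERE id IN (SELECT annonce_id FROM bookmarks WHERE list_id = 4);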
[
{
"msg_contents": "\nI've been given the task of making some hardware recommendations for\nthe next round of server purchases. The machines to be purchased\nwill be running FreeBSD & PostgreSQL.\n\nWhere I'm stuck is in deciding whether we want to go with dual-core\npentiums with 2M cache, or with HT pentiums with 8M cache.\n\nBoth of these are expensive bits of hardware, and I'm trying to\ngather as much evidence as possible before making a recommendation.\nThe FreeBSD community seems pretty divided over which is likely to\nbe better, and I have been unable to discover a method for estimating\nhow much of the 2M cache on our existing systems is being used.\n\nDoes anyone in the PostgreSQL community have any experience with\nlarge caches or dual-core pentiums that could make any recommendations?\nOur current Dell 2850 systems are CPU bound - i.e. they have enough\nRAM, and fast enough disks that the CPUs seem to be the limiting\nfactor. As a result, this decision on what kind of CPUs to get in\nthe next round of servers is pretty important.\n\nAny advice is much appreciated.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n",
"msg_date": "Tue, 25 Apr 2006 14:14:35 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, 2006-04-25 at 13:14, Bill Moran wrote:\n> I've been given the task of making some hardware recommendations for\n> the next round of server purchases. The machines to be purchased\n> will be running FreeBSD & PostgreSQL.\n> \n> Where I'm stuck is in deciding whether we want to go with dual-core\n> pentiums with 2M cache, or with HT pentiums with 8M cache.\n\nGiven a choice between those two processors, I'd choose the AMD 64 x 2\nCPU. It's a significantly better processor than either of the Intel\nchoices. And if you get the HT processor, you might as well turn of HT\non a PostgreSQL machine. I've yet to see it make postgresql run faster,\nbut I've certainly seen HT make it run slower.\n\nIf you can't run AMD in your shop due to bigotry (let's call a spade a\nspade) then I'd recommend the real dual core CPU with 2M cache. Most of\nwhat makes a database slow is memory and disk bandwidth. Few datasets\nare gonna fit in that 8M cache, and when they do, they'll get flushed\nright out by the next request anyway.\n\n> Does anyone in the PostgreSQL community have any experience with\n> large caches or dual-core pentiums that could make any recommendations?\n> Our current Dell 2850 systems are CPU bound - i.e. they have enough\n> RAM, and fast enough disks that the CPUs seem to be the limiting\n> factor. As a result, this decision on what kind of CPUs to get in\n> the next round of servers is pretty important.\n\nIf the CPUs are running at 100% then you're likely not memory I/O bound,\nbut processing speed bound. The dual core will definitely be the better\noption in that case. I take it you work at a \"Dell Only\" place, hence\nno AMD for you...\n\nSad, cause the AMD is, on a price / performance scale, twice the\nprocessor for the same money as the Intel.\n",
"msg_date": "Tue, 25 Apr 2006 13:33:38 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, 25 Apr 2006 14:14:35 -0400\nBill Moran <[email protected]> wrote:\n\n> Does anyone in the PostgreSQL community have any experience with\n> large caches or dual-core pentiums that could make any\n> recommendations? \n\nHeh :) You're in the position I was in about a year ago - we \"naturally\"\nreplaced our old Dell 2650 with £14k of Dell 6850 Quad Xeon with 8M\ncache, and TBH the performance is woeful :/\n\nHaving gone through Postgres consultancy, been through IBM 8-way POWER4\nhardware, discovered a bit of a shortcoming in PG on N-way hardware\n(where N is large) [1] , I have been able to try out a dual-dual-core\nOpteron machine, and it flies.\n\nIn fact, it flies so well that we ordered one that day. So, in short\n£3k's worth of dual-opteron beat the living daylights out of our Xeon\nmonster. I can't praise the Opteron enough, and I've always been a firm\nIntel pedant - the HyperTransport stuff must really be doing wonders. I\ntypically see 500ms searches on it instead of 1000-2000ms on the Xeon)\n\nAs it stands, I've had to borrow this Opteron so much (and send live\nsearches across the net to the remote box) because otherwise we simply\ndon't have enough CPU power to run the website (!)\n\nCheers,\nGavin.\n\n[1] Simon Riggs + Tom Lane are currently involved in optimisation work\nfor this - it turns out our extremely read-heavy load pattern reveals\nsome buffer locking issues in PG.\n",
"msg_date": "Tue, 25 Apr 2006 19:35:01 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, 2006-04-25 at 13:14, Bill Moran wrote:\n> I've been given the task of making some hardware recommendations for\n> the next round of server purchases. The machines to be purchased\n> will be running FreeBSD & PostgreSQL.\n> \n> Where I'm stuck is in deciding whether we want to go with dual-core\n> pentiums with 2M cache, or with HT pentiums with 8M cache.\n\nBTW: For an interesting article on why the dual core Opterons are so\nmuch better than their Intel cousins, read this article:\n\nhttp://techreport.com/reviews/2005q2/opteron-x75/index.x?pg=1\n\nEnlightening read.\n",
"msg_date": "Tue, 25 Apr 2006 13:36:56 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, Apr 25, 2006 at 01:33:38PM -0500, Scott Marlowe wrote:\n> Sad, cause the AMD is, on a price / performance scale, twice the\n> processor for the same money as the Intel.\n\nMaybe a year or two ago. Prices are all coming down. Intel more\nthan AMD.\n\nAMD still seems better - but not X2, and it depends on the workload.\n\nX2 sounds like biggotry against Intel... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 25 Apr 2006 14:38:17 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, 2006-04-25 at 13:38, [email protected] wrote:\n> On Tue, Apr 25, 2006 at 01:33:38PM -0500, Scott Marlowe wrote:\n> > Sad, cause the AMD is, on a price / performance scale, twice the\n> > processor for the same money as the Intel.\n> \n> Maybe a year or two ago. Prices are all coming down. Intel more\n> than AMD.\n> \n> AMD still seems better - but not X2, and it depends on the workload.\n> \n> X2 sounds like biggotry against Intel... :-)\n\nActually, that was from an article from this last month that compared\nthe dual core intel to the amd. for every dollar spent on the intel,\nyou got about half the performance of the amd. Not bigotry. fact.\n\nBut don't believe me or the other people who've seen the difference. Go\nbuy the Intel box. No skin off my back.\n\n\n",
"msg_date": "Tue, 25 Apr 2006 13:42:31 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "Bill Moran wrote:\n> I've been given the task of making some hardware recommendations for\n> the next round of server purchases. The machines to be purchased\n> will be running FreeBSD & PostgreSQL.\n> \n> Where I'm stuck is in deciding whether we want to go with dual-core\n> pentiums with 2M cache, or with HT pentiums with 8M cache.\n\nDual Core Opterons :)\n\nJoshua D. Drake\n\n> \n> Both of these are expensive bits of hardware, and I'm trying to\n> gather as much evidence as possible before making a recommendation.\n> The FreeBSD community seems pretty divided over which is likely to\n> be better, and I have been unable to discover a method for estimating\n> how much of the 2M cache on our existing systems is being used.\n> \n> Does anyone in the PostgreSQL community have any experience with\n> large caches or dual-core pentiums that could make any recommendations?\n> Our current Dell 2850 systems are CPU bound - i.e. they have enough\n> RAM, and fast enough disks that the CPUs seem to be the limiting\n> factor. As a result, this decision on what kind of CPUs to get in\n> the next round of servers is pretty important.\n> \n> Any advice is much appreciated.\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 25 Apr 2006 11:49:37 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "\n> But don't believe me or the other people who've seen the difference. Go\n> buy the Intel box. No skin off my back.\n\nTo be more detailed... AMD Opteron has some specific technical \nadvantages to their design over Intel when it comes to peforming for a \ndatabase. Specifically no front side bus :)\n\nAlso it is widely known and documented (just review the archives) that \nAMD performs better then the equivelant Intel CPU, dollar for dollar.\n\nLastly it is also known that Dell frankly, sucks for PostgreSQL. Again, \ncheck the archives.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 25 Apr 2006 12:12:20 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": ">Actually, that was from an article from this last month that compared\n>the dual core intel to the amd. for every dollar spent on the intel,\n>you got about half the performance of the amd. Not bigotry. fact.\n>\n>But don't believe me or the other people who've seen the difference. Go\n>buy the Intel box. No skin off my back.\n> \n>\nI've been doing plenty of performance evaluation on a parallel application\nwe're developing here : on Dual Core Opterons, P4, P4D. I can say that\nthe Opterons open up a can of wupass on the Intel processors. Almost 2x\nthe performance on our application vs. what the SpecCPU numbers would\nsuggest.\n\n\n\n\n\n\n\n\n\n\n\nActually, that was from an article from this last month that compared\nthe dual core intel to the amd. for every dollar spent on the intel,\nyou got about half the performance of the amd. Not bigotry. fact.\n\nBut don't believe me or the other people who've seen the difference. Go\nbuy the Intel box. No skin off my back.\n \n\nI've been doing plenty of performance evaluation on a parallel\napplication\nwe're developing here : on Dual Core Opterons, P4, P4D. I can say that\nthe Opterons open up a can of wupass on the Intel processors. Almost 2x\nthe performance on our application vs. what the SpecCPU numbers would \nsuggest.",
"msg_date": "Tue, 25 Apr 2006 13:57:59 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "David Boreham wrote:\n> \n>> Actually, that was from an article from this last month that compared\n>> the dual core intel to the amd. for every dollar spent on the intel,\n>> you got about half the performance of the amd. Not bigotry. fact.\n>>\n>> But don't believe me or the other people who've seen the difference. Go\n>> buy the Intel box. No skin off my back.\n>> \n> I've been doing plenty of performance evaluation on a parallel application\n> we're developing here : on Dual Core Opterons, P4, P4D. I can say that\n> the Opterons open up a can of wupass on the Intel processors. Almost 2x\n> the performance on our application vs. what the SpecCPU numbers would\n> suggest.\n\nBecause Stone Cold Said So!\n\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 25 Apr 2006 14:00:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> David Boreham wrote:\n> > \n> >> Actually, that was from an article from this last month that compared\n> >> the dual core intel to the amd. for every dollar spent on the intel,\n> >> you got about half the performance of the amd. Not bigotry. fact.\n> >>\n> >> But don't believe me or the other people who've seen the difference. Go\n> >> buy the Intel box. No skin off my back.\n> >> \n> > I've been doing plenty of performance evaluation on a parallel application\n> > we're developing here : on Dual Core Opterons, P4, P4D. I can say that\n> > the Opterons open up a can of wupass on the Intel processors. Almost 2x\n> > the performance on our application vs. what the SpecCPU numbers would\n> > suggest.\n> \n> Because Stone Cold Said So!\n\nI'll believe someone who uses 'wupass' in a sentence any day!\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Tue, 25 Apr 2006 17:03:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, Apr 25, 2006 at 01:33:38PM -0500, Scott Marlowe wrote:\n> On Tue, 2006-04-25 at 13:14, Bill Moran wrote:\n> > I've been given the task of making some hardware recommendations for\n> > the next round of server purchases. The machines to be purchased\n> > will be running FreeBSD & PostgreSQL.\n> > \n> > Where I'm stuck is in deciding whether we want to go with dual-core\n> > pentiums with 2M cache, or with HT pentiums with 8M cache.\n> \n> Given a choice between those two processors, I'd choose the AMD 64 x 2\n> CPU. It's a significantly better processor than either of the Intel\n> choices. And if you get the HT processor, you might as well turn of HT\n> on a PostgreSQL machine. I've yet to see it make postgresql run faster,\n> but I've certainly seen HT make it run slower.\n\nActually, believe it or not, a coworker just saw HT double the\nperformance of pgbench on his desktop machine. Granted, not really a\nrepresentative test case, but it still blew my mind. This was with a\ndatabase that fit in his 1G of memory, and running windows XP. Both\ncases were newly minted pgbench databases with a scale of 40. Testing\nwas 40 connections and 100 transactions. With HT he saw 47.6 TPS,\nwithout it was 21.1.\n\nI actually had IT build put w2k3 server on a HT box specifically so I\ncould do more testing.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 25 Apr 2006 18:55:04 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, Apr 25, 2006 at 01:42:31PM -0500, Scott Marlowe wrote:\n> On Tue, 2006-04-25 at 13:38, [email protected] wrote:\n> > On Tue, Apr 25, 2006 at 01:33:38PM -0500, Scott Marlowe wrote:\n> > > Sad, cause the AMD is, on a price / performance scale, twice the\n> > > processor for the same money as the Intel.\n> > Maybe a year or two ago. Prices are all coming down. Intel more\n> > than AMD.\n> > AMD still seems better - but not X2, and it depends on the workload.\n> > X2 sounds like biggotry against Intel... :-)\n> Actually, that was from an article from this last month that compared\n> the dual core intel to the amd. for every dollar spent on the intel,\n> you got about half the performance of the amd. Not bigotry. fact.\n> But don't believe me or the other people who've seen the difference. Go\n> buy the Intel box. No skin off my back.\n\nAMD Opteron vs Intel Xeon is different than AMD X2 vs Pentium D.\n\nFor AMD X2 vs Pentium D - I have both - in similar price range, and\nsimilar speed. I choose to use the AMD X2 as my server, and Pentium D\nas my Windows desktop. They're both quite fast.\n\nI made the choice I describe based on a lot of research. I was going\nto go both Intel, until I noticed that the Intel prices were dropping\nfast. 30% price cut in 2 months. AMD didn't drop at all during the\nsame time.\n\nThere are plenty of reasons to choose one over the other. Generally\nthe AMD comes out on top. It is *not* 2X though. Anybody who claims\nthis is being highly selective about which benchmarks they consider.\n\nOne article is nothing.\n\nThere is a lot of hype these days. AMD is winning the elite market,\nwhich means that they are able to continue to sell high. Intel, losing\nthis market, is cutting its prices to compete. And they do compete.\nQuite well.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 25 Apr 2006 20:54:40 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, Apr 25, 2006 at 08:54:40PM -0400, [email protected] wrote:\n> I made the choice I describe based on a lot of research. I was going\n> to go both Intel, until I noticed that the Intel prices were dropping\n> fast. 30% price cut in 2 months. AMD didn't drop at all during the\n> same time.\n\nErrr.. big mistake. That was going to be - I was going to go both AMD.\n\n> There are plenty of reasons to choose one over the other. Generally\n> the AMD comes out on top. It is *not* 2X though. Anybody who claims\n> this is being highly selective about which benchmarks they consider.\n\nI have an Intel Pentium D 920, and an AMD X2 3800+. These are very\nclose in performance. The retail price difference is:\n\n Intel Pentium D 920 is selling for $310 CDN\n AMD X2 3800+ is selling for $347 CDN\n\nAnother benefit of Pentium D over AMD X2, at least until AMD chooses\nto switch, is that Pentium D supports DDR2, whereas AMD only supports\nDDR. There are a lot of technical pros and cons to each - with claims\nfrom AMD that DDR2 can be slower than DDR - but one claim that isn't\noften made, but that helped me make my choice:\n\n 1) DDR2 supports higher transfer speeds. I'm using DDR2 5400 on\n the Intel. I think I'm at 3200 or so on the AMD X2.\n\n 2) DDR2 is cheaper. I purchased 1 Gbyte DDR2 5400 for $147 CDN.\n 1 Gbyte of DDR 3200 starts at around the same price, and\n stretches into $200 - $300 CDN.\n\nNow, granted, the Intel 920 requires more electricity to run. Running\n24/7 for a year might make the difference in cost.\n\nIt doesn't address point 1) though. I like my DDR2 5400.\n\nSo, unfortunately, I won't be able to do a good test for you to prove\nthat my Windows Pentium D box is not only cheaper to buy, but faster,\nbecause the specs aren't exactly equivalent. In the mean time, I'm\nquite enjoying my 3d games while doing other things at the same time.\nI imagine my desktop load approaches that of a CPU-bound database\nload. 3d games require significant I/O and CPU.\n\nAnybody who claims that Intel is 2X more expensive for the same\nperformance, isn't considering all factors. No question at all - the\nOpteron is good, and the Xeon isn't - but the original poster didn't\nask about Opeteron or Xeon, did he? For the desktop lines - X2 is not\ndouble Pentium D. Maybe 10%. Maybe not at all. Especially now that\nIntel is dropping it's prices due to overstock.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 25 Apr 2006 21:17:01 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "[email protected] wrote:\n> Another benefit of Pentium D over AMD X2, at least until AMD chooses\n> to switch, is that Pentium D supports DDR2, whereas AMD only supports\n> DDR. There are a lot of technical pros and cons to each - with claims\n> from AMD that DDR2 can be slower than DDR - but one claim that isn't\n> often made, but that helped me make my choice:\n> \nThey're switching quite soon though -- within the next month now it \nseems, after moving up their earlier plans to launch in June:\n\nhttp://www.dailytech.com/article.aspx?newsid=1854\n\nThis Anandtech article shows the kind of performance increase we can \nexpect with DDR2 on AMD's new socket:\n\nhttp://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2741\n\nThe short version is that it's an improvement, but not an enormous one, \nand you need to spend quite a bit of cash on 800Mhz (PC6400) DDR2 sticks \nto see the most benefit. Some brief local (Australian) price comparisons \nshow 1GB PC-3200 DDR sticks starting at just over AU$100, with 1GB \nPC2-4200 DDR2 sticks around the same price, though Anandtech's tests \nshowed PC2-4200 DDR2 benching generally slower than PC-3200 DDR, \nprobably due to the increased latency in DDR2.\n\nComparing reasonable quality matched pairs of 1GB sticks, PC-3200 DDR \nstill seems generally cheaper than PC2-5300 DDR2, though not by a lot, \nand I'm sure the DDR2 will start dropping even further as AMD systems \nstart using it in the next month or so.\n\nOne thing's for sure though -- Intel's Pentium D prices are remarkably \nlow, and at the lower end of the price range AMD has nothing that's even \nremotely competitive in terms of price/performance. The Pentium D 805, \nfor instance, with its dual 2.67Ghz cores, costs just AU$180. The X2 \n3800+ is a far better chip, but it's also two-and-a-half times the price.\n\nNone of this really matters much in the server space though, where \nOpteron's real advantage over Xeon is not its greater raw CPU power, or \nits better dual-core implementation (though both would be hard to \ndispute), but the improved system bandwidth provided by Hypertransport. \nEven with Intel's next-gen CPUs, which look set to address the first two \npoints quite well, they still won't have an interconnect technology that \ncan really compete with AMD's.\n\nThanks\nLeigh\n\n",
"msg_date": "Wed, 26 Apr 2006 11:53:06 +1000",
"msg_from": "Leigh Dyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "[email protected] wrote:\n> \n> I have an Intel Pentium D 920, and an AMD X2 3800+. These are very\n> close in performance. The retail price difference is:\n> \n> Intel Pentium D 920 is selling for $310 CDN\n> AMD X2 3800+ is selling for $347 CDN\n> \n> Anybody who claims that Intel is 2X more expensive for the same\n> performance, isn't considering all factors. No question at all - the\n> Opteron is good, and the Xeon isn't - but the original poster didn't\n> ask about Opeteron or Xeon, did he? For the desktop lines - X2 is not\n> double Pentium D. Maybe 10%. Maybe not at all. Especially now that\n> Intel is dropping it's prices due to overstock.\n\nThere's part of the equation you are missing here. This is a PostgreSQL \nmailing list which means we're usually talking about performance of just \nthis specific server app. While in general there may not be that much of \na % difference between the 2 chips, there's a huge gap in Postgres. For \nwhatever reason, Postgres likes Opterons. Way more than Intel \nP4-architecture chips. (And it appears way more than IBM Power4 chips \nand a host of other chips also.)\n\nHere's one of the many discussions we had about this issue last year:\n\nhttp://qaix.com/postgresql-database-development/337-670-re-opteron-vs-xeon-was-what-to-do-with-6-disks-read.shtml\n\nThe exact reasons why Opteron runs PostgreSQL so much better than P4s, \nwe're not 100% sure of. We have guesses -- lower memory latency, lack of \nshared FSB, better 64-bit, 64-bit IOMMU, context-switch storms on P4, \nbetter dualcore implementation and so on. Perhaps it's a combination of \nall the above factors but somehow, the general experience people have \nhad is that equivalently priced Opterons servers run PostgreSQL 2X \nfaster than P4 servers as the baseline and the gap increases as you add \nmore sockets and more cores.\n",
"msg_date": "Wed, 26 Apr 2006 07:19:38 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "\n >While in general there may not be that much of a % difference between \nthe 2 chips,\n >there's a huge gap in Postgres. For whatever reason, Postgres likes \nOpterons.\n >Way more than Intel P4-architecture chips.\n\nIt isn't only Postgres. I work on a number of other server applications\nthat also run much faster on Opterons than the published benchmark\nfigures would suggest they should. They're all compiled with gcc4,\nso possibly there's a compiler issue. I don't run Windows on any\nof our Opteron boxes so I can't easily compare using the MS compiler.\n\n\n\n\n",
"msg_date": "Wed, 26 Apr 2006 08:43:25 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "\n\tHave a look at this Wikipedia page which outlines some differences \nbetween the AMD and Intel versions of 64-bit :\n\n\thttp://en.wikipedia.org/wiki/EM64T\n\n> It isn't only Postgres. I work on a number of other server applications\n> that also run much faster on Opterons than the published benchmark\n> figures would suggest they should. They're all compiled with gcc4,\n> so possibly there's a compiler issue. I don't run Windows on any\n> of our Opteron boxes so I can't easily compare using the MS compiler.\n\n",
"msg_date": "Wed, 26 Apr 2006 17:13:24 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, 2006-04-25 at 18:55, Jim C. Nasby wrote:\n> On Tue, Apr 25, 2006 at 01:33:38PM -0500, Scott Marlowe wrote:\n> > On Tue, 2006-04-25 at 13:14, Bill Moran wrote:\n> > > I've been given the task of making some hardware recommendations for\n> > > the next round of server purchases. The machines to be purchased\n> > > will be running FreeBSD & PostgreSQL.\n> > > \n> > > Where I'm stuck is in deciding whether we want to go with dual-core\n> > > pentiums with 2M cache, or with HT pentiums with 8M cache.\n> > \n> > Given a choice between those two processors, I'd choose the AMD 64 x 2\n> > CPU. It's a significantly better processor than either of the Intel\n> > choices. And if you get the HT processor, you might as well turn of HT\n> > on a PostgreSQL machine. I've yet to see it make postgresql run faster,\n> > but I've certainly seen HT make it run slower.\n> \n> Actually, believe it or not, a coworker just saw HT double the\n> performance of pgbench on his desktop machine. Granted, not really a\n> representative test case, but it still blew my mind. This was with a\n> database that fit in his 1G of memory, and running windows XP. Both\n> cases were newly minted pgbench databases with a scale of 40. Testing\n> was 40 connections and 100 transactions. With HT he saw 47.6 TPS,\n> without it was 21.1.\n> \n> I actually had IT build put w2k3 server on a HT box specifically so I\n> could do more testing.\n\nJust to clarify, this is PostgreSQL on Windows, right?\n\nI wonder if the latest Linux kernel can do that well... I'm guessing\nthat the kernel scheduler in Windows has had a lot of work to make it\ngood at scheduling on a HT architecture than the linux kernel has.\n",
"msg_date": "Wed, 26 Apr 2006 10:17:58 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "David Boreham wrote:\n> It isn't only Postgres. I work on a number of other server applications\n> that also run much faster on Opterons than the published benchmark\n> figures would suggest they should. They're all compiled with gcc4,\n> so possibly there's a compiler issue. I don't run Windows on any\n> of our Opteron boxes so I can't easily compare using the MS compiler.\n\n\nMaybe it's just a fact that the majority of x86 64-bit development for \nopen source software happens on Opteron/A64 machines. 64-bit AMD \nmachines were selling a good year before 64-bit Intel machines were \navailable. And even after Intel EMT64 were available, anybody in their \nright mind would have picked AMD machines over Intel due to \ncost/heat/performance. So you end up with 64-bit OSS being \ndeveloped/optimized for Opterons and the 10% running Intel EMT64 handle \ncompatibility issues.\n\nWould be interesting to see a survey of what machines OSS developers use \nto write/test/optimize their code.\n",
"msg_date": "Wed, 26 Apr 2006 08:24:35 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Tue, 2006-04-25 at 20:17, [email protected] wrote:\n> On Tue, Apr 25, 2006 at 08:54:40PM -0400, [email protected] wrote:\n> > I made the choice I describe based on a lot of research. I was going\n> > to go both Intel, until I noticed that the Intel prices were dropping\n> > fast. 30% price cut in 2 months. AMD didn't drop at all during the\n> > same time.\n> \n> Errr.. big mistake. That was going to be - I was going to go both AMD.\n> \n> > There are plenty of reasons to choose one over the other. Generally\n> > the AMD comes out on top. It is *not* 2X though. Anybody who claims\n> > this is being highly selective about which benchmarks they consider.\n> \n> I have an Intel Pentium D 920, and an AMD X2 3800+. These are very\n> close in performance. The retail price difference is:\n> \n> Intel Pentium D 920 is selling for $310 CDN\n> AMD X2 3800+ is selling for $347 CDN\n\nLet me be clear. The performance difference between those boxes running\nthe latest first person shooter is not what I was alluding to in my\nfirst post. While the price of the Intel's may have dropped, there's a\nhuge difference (often 2x or more) in performance when running\nPostgreSQL on otherwise similar chips from Intel and AMD.\n\nNote that my workstation at work, my workstation at home, and my laptop\nare all intel based machines. They work fine for that. But if I needed\nto build a big fast oracle or postgresql server, I'd almost certainly go\nwith the AMD, especially so if I needed >2 cores, where the performance\ndifference becomes greater and greater.\n\nYou'd likely find that for PostgreSQL, the slowest dual core AMDs out\nwould still beat the fasted Intel Dual cores, because of the issue we've\nseen on the list with context switching storms.\n\nIf you haven't actually run a heavy benchmark of postgresql on the two\narchitectures, please don't make your decision based on other\nbenchmarks. Since you've got both a D920 and an X2 3800, that'd be a\ngreat place to start. Mock up some benchmark with a couple dozen\nthreads hitting the server at once and see if the Intel can keep up. It\nshould do OK, but not great. If you can get your hands on a dual\ndual-core setup for either, you should really start to see the advantage\ngoing to AMD, and by the time you get to a quad dual core setup, it\nwon't even be a contest.\n",
"msg_date": "Wed, 26 Apr 2006 10:27:18 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Wed, Apr 26, 2006 at 10:27:18AM -0500, Scott Marlowe wrote:\n> If you haven't actually run a heavy benchmark of postgresql on the two\n> architectures, please don't make your decision based on other\n> benchmarks. Since you've got both a D920 and an X2 3800, that'd be a\n> great place to start. Mock up some benchmark with a couple dozen\n> threads hitting the server at once and see if the Intel can keep up. It\n\nOr better yet, use dbt* or even pgbench so others can reproduce...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 26 Apr 2006 17:09:29 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Wed, Apr 26, 2006 at 10:17:58AM -0500, Scott Marlowe wrote:\n> On Tue, 2006-04-25 at 18:55, Jim C. Nasby wrote:\n> > On Tue, Apr 25, 2006 at 01:33:38PM -0500, Scott Marlowe wrote:\n> > > On Tue, 2006-04-25 at 13:14, Bill Moran wrote:\n> > > > I've been given the task of making some hardware recommendations for\n> > > > the next round of server purchases. The machines to be purchased\n> > > > will be running FreeBSD & PostgreSQL.\n> > > > \n> > > > Where I'm stuck is in deciding whether we want to go with dual-core\n> > > > pentiums with 2M cache, or with HT pentiums with 8M cache.\n> > > \n> > > Given a choice between those two processors, I'd choose the AMD 64 x 2\n> > > CPU. It's a significantly better processor than either of the Intel\n> > > choices. And if you get the HT processor, you might as well turn of HT\n> > > on a PostgreSQL machine. I've yet to see it make postgresql run faster,\n> > > but I've certainly seen HT make it run slower.\n> > \n> > Actually, believe it or not, a coworker just saw HT double the\n> > performance of pgbench on his desktop machine. Granted, not really a\n> > representative test case, but it still blew my mind. This was with a\n> > database that fit in his 1G of memory, and running windows XP. Both\n> > cases were newly minted pgbench databases with a scale of 40. Testing\n> > was 40 connections and 100 transactions. With HT he saw 47.6 TPS,\n> > without it was 21.1.\n> > \n> > I actually had IT build put w2k3 server on a HT box specifically so I\n> > could do more testing.\n> \n> Just to clarify, this is PostgreSQL on Windows, right?\n> \n> I wonder if the latest Linux kernel can do that well... I'm guessing\n> that the kernel scheduler in Windows has had a lot of work to make it\n> good at scheduling on a HT architecture than the linux kernel has.\n\nYes, this is on Windows XP. Larry might also have a HT box with some\nother OS on it we can check with (though I suspect that maybe that's\nbeen beaten to death...)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 26 Apr 2006 17:14:47 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Wed, Apr 26, 2006 at 10:27:18AM -0500, Scott Marlowe wrote:\n> > If you haven't actually run a heavy benchmark of postgresql on the two\n> > architectures, please don't make your decision based on other\n> > benchmarks. Since you've got both a D920 and an X2 3800, that'd be a\n> > great place to start. Mock up some benchmark with a couple dozen\n> > threads hitting the server at once and see if the Intel can keep up. It\n> \n> Or better yet, use dbt* or even pgbench so others can reproduce...\n\nFor why Opterons are superior to Intel for PostgreSQL, see:\n\n\thttp://techreport.com/reviews/2005q2/opteron-x75/index.x?pg=2\n\nSection \"MESI-MESI-MOESI Banana-fana...\". Specifically, this part about\nthe Intel implementation:\n\n\tThe processor with the Invalid data in its cache (CPU 0, let's say)\n\tmight then wish to modify that chunk of data, but it could not do so\n\twhile the only valid copy of the data is in the cache of the other\n\tprocessor (CPU 1). Instead, CPU 0 would have to wait until CPU 1 wrote\n\tthe modified data back to main memory before proceeding.and that takes\n\ttime, bus bandwidth, and memory bandwidth. This is the great drawback of\n\tMESI.\n\nAMD transfers the dirty cache line directly from cpu to cpu. I can\nimaging that helping our test-and-set shared memory usage quite a bit.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Wed, 26 Apr 2006 18:16:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Wed, Apr 26, 2006 at 06:16:46PM -0400, Bruce Momjian wrote:\n> Jim C. Nasby wrote:\n> > On Wed, Apr 26, 2006 at 10:27:18AM -0500, Scott Marlowe wrote:\n> > > If you haven't actually run a heavy benchmark of postgresql on the two\n> > > architectures, please don't make your decision based on other\n> > > benchmarks. Since you've got both a D920 and an X2 3800, that'd be a\n> > > great place to start. Mock up some benchmark with a couple dozen\n> > > threads hitting the server at once and see if the Intel can keep up. It\n> > \n> > Or better yet, use dbt* or even pgbench so others can reproduce...\n> \n> For why Opterons are superior to Intel for PostgreSQL, see:\n> \n> \thttp://techreport.com/reviews/2005q2/opteron-x75/index.x?pg=2\n> \n> Section \"MESI-MESI-MOESI Banana-fana...\". Specifically, this part about\n> the Intel implementation:\n> \n> \tThe processor with the Invalid data in its cache (CPU 0, let's say)\n> \tmight then wish to modify that chunk of data, but it could not do so\n> \twhile the only valid copy of the data is in the cache of the other\n> \tprocessor (CPU 1). Instead, CPU 0 would have to wait until CPU 1 wrote\n> \tthe modified data back to main memory before proceeding.and that takes\n> \ttime, bus bandwidth, and memory bandwidth. This is the great drawback of\n> \tMESI.\n> \n> AMD transfers the dirty cache line directly from cpu to cpu. I can\n> imaging that helping our test-and-set shared memory usage quite a bit.\n\nWasn't the whole point of test-and-set that it's the recommended way to\ndo lightweight spinlocks according to AMD/Intel? You'd think they'd have\na way to make that performant on multiple CPUs (though if it's relying\non possibly modifying an underlying data page I can't really think of\nhow to do that without snaking through the cache...)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 26 Apr 2006 17:37:31 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "On Wed, Apr 26, 2006 at 05:37:31PM -0500, Jim C. Nasby wrote:\n> On Wed, Apr 26, 2006 at 06:16:46PM -0400, Bruce Momjian wrote:\n> > AMD transfers the dirty cache line directly from cpu to cpu. I can\n> > imaging that helping our test-and-set shared memory usage quite a bit.\n> Wasn't the whole point of test-and-set that it's the recommended way to\n> do lightweight spinlocks according to AMD/Intel? You'd think they'd have\n> a way to make that performant on multiple CPUs (though if it's relying\n> on possibly modifying an underlying data page I can't really think of\n> how to do that without snaking through the cache...)\n\nIt's expensive no matter what. One method might be less expensive than\nanother. :-)\n\nAMD definately seems to have things right for lowest absolute latency.\n2X still sounds like an extreme case - but until I've actually tried a\nvery large, or thread intensive PostgreSQL db on both, I probably\nshouldn't doubt the work of others too much. :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Wed, 26 Apr 2006 22:56:02 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "\nOn Apr 25, 2006, at 2:14 PM, Bill Moran wrote:\n\n> Where I'm stuck is in deciding whether we want to go with dual-core\n> pentiums with 2M cache, or with HT pentiums with 8M cache.\n\nIn order of preference:\n\nOpterons (dual core or single core)\nXeon with HT *disabled* at the BIOS level (dual or single core)\n\n\nNotice Xeon with HT is not on my list :-)\n\n",
"msg_date": "Thu, 27 Apr 2006 11:11:42 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "Hi all,\n\nVivek Khera schrieb:\n > On Apr 25, 2006, at 2:14 PM, Bill Moran wrote:\n >> Where I'm stuck is in deciding whether we want to go with dual-core\n >> pentiums with 2M cache, or with HT pentiums with 8M cache.\n >\n > In order of preference:\n >\n > Opterons (dual core or single core)\n > Xeon with HT *disabled* at the BIOS level (dual or single core)\n >\n >\n > Notice Xeon with HT is not on my list :-)\n >\n\nI support Vivek's order of preference. I have been going through a \nnightmare of performance issues with different x86 hardware.\nAt the end of the day I can say the Opterons are faster because of their \nmemory bandwidth. I also had to disable HT on all our customers servers \n which were still using XEON's with HT.\n\nThere is a paper from HP which describes the advantage of the memory \narchitecture of the Opterons. This is the best explanation to me why \nOpteron 875 is faster than a XEON MP 3 GHz, which I did compare last year.\n\nI remember a thread in the postgresql devel list around HT in 2004, \nwhere you can find the reason why you should disable HT.\nThis thread refers to Intel Developer Manual Volume 4 (Architecture \nOptimisation) where there is some advice regarding spin-wait loop.\nThis is related to the code of src/include/storage/s_lock.h.\n\nCheers Sven.\n\n======\n From Intel Developer Manual Volume 4\n\nSynchronization for Short Periods\n\nThe frequency and duration that a thread needs to synchronize with\nother threads depends application characteristics. When a\nsynchronization loop needs very fast response, applications may use a\nspin-wait loop.\n\nA spin-wait loop is typically used when one thread needs to wait a short\namount of time for another thread to reach a point of synchronization. A\nspin-wait loop consists of a loop that compares a synchronization\nvariable with some pre-defined value [see Example 7-1(a)].\n\nOn a modern microprocessor with a superscalar speculative execution\nengine, a loop like this results in the issue of multiple simultaneous read\nrequests from the spinning thread. These requests usually execute\nout-of-order with each read request being allocated a buffer resource.\nOn detection of a write by a worker thread to a load that is in progress,\nthe processor must guarantee no violations of memory order occur. The\nnecessity of maintaining the order of outstanding memory operations\ninevitably costs the processor a severe penalty that impacts all threads.\n\nThis penalty occurs on the Pentium Pro processor, the Pentium II\nprocessor and the Pentium III processor. However, the penalty on these\nprocessors is small compared with penalties suffered on the Pentium 4\nand Intel Xeon processors. There the performance penalty for exiting\nthe loop is about 25 times more severe.\n\nOn a processor supporting Hyper-Threading Technology, spin-wait\nloops can consume a significant portion of the execution bandwidth of\nthe processor. One logical processor executing a spin-wait loop can\nseverely impact the performance of the other logical processor.\n\n====\n",
"msg_date": "Fri, 28 Apr 2006 10:32:16 +0200",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
}
] |
[
{
"msg_contents": "The col is a varchar. I am currently testing with the inet data type(and\nalso the ipv4 pgfoundry data type).\n\nDue to time constraints, I am trying to minimize code changes.\n\nWhat kind of index do I need to create to enable efficient range scans\n(e.g anything between 172.16.x.x thru 172.31.x.x) on the inet data type?\n\nThanks\n\nSriram\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Tuesday, April 25, 2006 11:25 AM\nTo: Sriram Dandapani\nCc: Pgsql-Performance (E-mail)\nSubject: Re: [PERFORM] planner not using index for like operator\n\nOn Tue, Apr 25, 2006 at 10:08:02AM -0700, Sriram Dandapani wrote:\nHere's the key:\n\n> \" Filter: ((col1)::text ~~ '172.%'::text)\"\n\nIn order to do a like comparison, it has to convert col1 (which I'm\nguessing is of type 'inet') to text, so there's no way it can use an\nindex on col1 (maybe a function index, but that's a different story).\n\nIs there some reason you're not doing\n\nWHERE col1 <<= '172/8'::inet\n\n?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 25 Apr 2006 11:31:12 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner not using index for like operator"
}
] |
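A minimal sketch of the range-scan alternative discussed in the thread above, using a hypothetical table name conn_log and assuming col1 has been converted to the inet type (the 172.16.x.x through 172.31.x.x block is exactly 172.16.0.0/12). A plain btree index on an inet column supports ordinary >= / <= comparisons, so the range form below is the conservative choice; whether the <<= operator itself can use an index depends on the PostgreSQL version in play.

-- hypothetical names; col1 is assumed to be of type inet
CREATE INDEX conn_log_col1_idx ON conn_log (col1);

SELECT *
FROM conn_log
WHERE col1 >= '172.16.0.0'::inet
  AND col1 <= '172.31.255.255'::inet;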
[
{
"msg_contents": "Using an index on col1 with the operator class varchar_pattern_ops , I\nwas able to get a 3 second response time. That will work for me.\nI used a like '172.%' and an extra pattern matching condition to\nrestrict\nBetween 172.16.x.x and 172.31.x.x\n\nThanks for the input..I will also test the inet data type to see if\nthere are differences.\n\nSriram\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Tuesday, April 25, 2006 11:25 AM\nTo: Sriram Dandapani\nCc: Pgsql-Performance (E-mail)\nSubject: Re: [PERFORM] planner not using index for like operator\n\nOn Tue, Apr 25, 2006 at 10:08:02AM -0700, Sriram Dandapani wrote:\nHere's the key:\n\n> \" Filter: ((col1)::text ~~ '172.%'::text)\"\n\nIn order to do a like comparison, it has to convert col1 (which I'm\nguessing is of type 'inet') to text, so there's no way it can use an\nindex on col1 (maybe a function index, but that's a different story).\n\nIs there some reason you're not doing\n\nWHERE col1 <<= '172/8'::inet\n\n?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 25 Apr 2006 11:52:19 -0700",
"msg_from": "\"Sriram Dandapani\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner not using index for like operator"
}
] |
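A hedged sketch of the approach Sriram describes above, with hypothetical table and index names: the varchar_pattern_ops operator class lets the planner turn the anchored LIKE into an index range scan even in a non-C locale, and the regular expression stands in for the "extra pattern matching condition" that narrows the match to 172.16.x.x through 172.31.x.x.

-- hypothetical names; col1 is assumed to be a varchar holding dotted-quad addresses
CREATE INDEX conn_log_col1_pattern_idx ON conn_log (col1 varchar_pattern_ops);

SELECT *
FROM conn_log
WHERE col1 LIKE '172.%'
  AND col1 ~ '^172\.(1[6-9]|2[0-9]|3[01])\.';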
[
{
"msg_contents": "As others have noted, the current price/performance \"sweet spot\" for DB servers is 2S 2C AMD CPUs. These CPUs are also the highest performing x86 compatible solution for pg.\n\nIf you must go Intel for some reason, then wait until the new NGMA CPU's (Conroe, Merom, Woodcrest) come out and see how they bench on DB workloads. Preliminary benches on these chips look good, but I would not recommend making a purchase decision based on just preliminary benches of unreleased products.\n\nIf you must buy soon, then the decision is clear cut from anything except possinly a political/religious standpoint.\nThe NetBurst based Pentium and Xeon solutions are simply not worth the money spent or the PITA they will put you through compared to the AMD dual cores. The new Intel NGMA CPUs may be different, but all the pertinent evidence is not yet available.\n\nMy personal favorite pg platform at this time is one based on a 2 socket, dual core ready mainboard with 16 DIMM slots combined with dual core AMD Kx's.\n\nLess money than the \"comparable\" Intel solution and _far_ more performance.\n\n...and even if you do buy Intel, =DON\"T= buy Dell unless you like causing trouble for yourself.\nBad experiences with Dell in general and their poor PERC RAID controllers in specific are all over this and other DB forums.\n\nRon\n\n\n-----Original Message-----\n>From: Bill Moran <[email protected]>\n>Sent: Apr 25, 2006 2:14 PM\n>To: [email protected]\n>Subject: [PERFORM] Large (8M) cache vs. dual-core CPUs\n>\n>\n>I've been given the task of making some hardware recommendations for\n>the next round of server purchases. The machines to be purchased\n>will be running FreeBSD & PostgreSQL.\n>\n>Where I'm stuck is in deciding whether we want to go with dual-core\n>pentiums with 2M cache, or with HT pentiums with 8M cache.\n>\n>Both of these are expensive bits of hardware, and I'm trying to\n>gather as much evidence as possible before making a recommendation.\n>The FreeBSD community seems pretty divided over which is likely to\n>be better, and I have been unable to discover a method for estimating\n>how much of the 2M cache on our existing systems is being used.\n>\n>Does anyone in the PostgreSQL community have any experience with\n>large caches or dual-core pentiums that could make any recommendations?\n>Our current Dell 2850 systems are CPU bound - i.e. they have enough\n>RAM, and fast enough disks that the CPUs seem to be the limiting\n>factor. As a result, this decision on what kind of CPUs to get in\n>the next round of servers is pretty important.\n>\n>Any advice is much appreciated.\n>\n",
"msg_date": "Tue, 25 Apr 2006 17:09:53 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "\n>My personal favorite pg platform at this time is one based on a 2 socket, dual core ready mainboard with 16 DIMM slots combined with dual core AMD Kx's.\n> \n>\nRight. We've been buying Tyan bare-bones boxes like this.\nIt's better to go with bare-bones than building boxes from bare metal\nbecause the cooling issues are addressed correctly.\n\nNote that if you need a large number of machines, then Intel\nCore Duo may give the best overall price/performance because\nthey're cheaper to run and cool.\n\n\n\n",
"msg_date": "Tue, 25 Apr 2006 15:15:06 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "Ron Peacetree wrote:\n> As others have noted, the current price/performance \"sweet spot\" for DB servers is 2S 2C AMD CPUs. These CPUs are also the highest performing x86 compatible solution for pg.\n> \n> If you must go Intel for some reason, then wait until the new NGMA CPU's (Conroe, Merom, Woodcrest) come out and see how they bench on DB workloads. Preliminary benches on these chips look good, but I would not recommend making a purchase decision based on just preliminary benches of unreleased products.\n> \n> If you must buy soon, then the decision is clear cut from anything except possinly a political/religious standpoint.\n> The NetBurst based Pentium and Xeon solutions are simply not worth the money spent or the PITA they will put you through compared to the AMD dual cores. The new Intel NGMA CPUs may be different, but all the pertinent evidence is not yet available.\n> \n> My personal favorite pg platform at this time is one based on a 2 socket, dual core ready mainboard with 16 DIMM slots combined with dual core AMD Kx's.\n> \n> Less money than the \"comparable\" Intel solution and _far_ more performance.\n> \n> ...and even if you do buy Intel, =DON\"T= buy Dell unless you like causing trouble for yourself.\n> Bad experiences with Dell in general and their poor PERC RAID controllers in specific are all over this and other DB forums.\n> \n> Ron\n> \n\nTo add to this... the HP DL 385 is a pretty nice dual core capable \nopteron box. Just don't buy the extra ram from HP (they like to charge \nentirely too much).\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 25 Apr 2006 15:03:35 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
},
{
"msg_contents": "\nOn Apr 25, 2006, at 5:09 PM, Ron Peacetree wrote:\n\n> ...and even if you do buy Intel, =DON\"T= buy Dell unless you like \n> causing trouble for yourself.\n> Bad experiences with Dell in general and their poor PERC RAID \n> controllers in specific are all over this and other DB forums.\n\nI don't think that their current controllers suck like their older \nones did. That's what you'll read about in the archives -- the old \nstuff. Eg, the 1850's embedded RAID controller really flies, but it \nonly works with the internal disks. I can't comment on the external \narray controller for the 1850, but I cannot imagine it being any slower.\n\nAnd personally, I've not experienced any major problems aside from \ntwo bad PE1550's 4 years ago. And I have currently about 15 Dell \nservers running 24x7x365 doing various tasks, including postgres.\n\nHowever, my *big* databases always go on dual opteron boxes. my \ncurrent favorite is the SunFire X4100 with an external RAID.\n\n",
"msg_date": "Thu, 27 Apr 2006 11:19:36 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
}
] |
[
{
"msg_contents": "Hi all,\nwe encounter issues when deleting from a table based on id (primary key). On\ncertain 'id', it took forever to delete and the i/o is 100% busy.\nTable scenario has around 1400 entries. It is the parent of 3 other table.\n Table \"public.scenario\"\n Column | Type | Modifiers\n-----------------+-----------------------+------------------------------------------------\n id | bigint | not null default\nnextval('scenario_seq'::text)\n name | character varying(50) |\n description | text |\n subscriber_id | bigint |\n organization_id | bigint |\n schedule_id | bigint |\nIndexes:\n \"scenario_pkey\" primary key, btree (id)\n \"org_ind_scenario_index\" btree (organization_id)\n \"sch_ind_scenario_index\" btree (schedule_id)\n \"sub_ind_scenario_index\" btree (subscriber_id)\nCheck constraints:\n \"$3\" CHECK (schedule_id >= 0)\n \"$2\" CHECK (organization_id >= 0)\n \"$1\" CHECK (subscriber_id >= 0)\nForeign-key constraints:\n \"0_4774\" FOREIGN KEY (schedule_id) REFERENCES schedule(id) ON DELETE\nCASCADE\n \"0_4773\" FOREIGN KEY (organization_id) REFERENCES organization(id) ON\nDELETE CASCADE\n \"0_4772\" FOREIGN KEY (subscriber_id) REFERENCES subscriber(id) ON DELETE\nCASCADE\n\nIn all the child tables, the foreign key has the same data type and are\nindexed.\nWhen I do \"delete from scenario where id='1023', it takes less than 200 ms.\nBut when i do \"delete from scenario where id='1099', it took forever (more\nthan 10 minutes that i decided to cancel it.\nI can't do explain analyze, but here is the explain:\nMONSOON=# begin;\nBEGIN\nMONSOON=# explain delete from scenario where id='1099';\n QUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\nwidth=6)\n Index Cond: (id = 1099::bigint)\n(2 rows)\n\nMONSOON=# explain delete from scenario where id='1023';\n QUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\nwidth=6)\n Index Cond: (id = 1023::bigint)\n(2 rows)\n\nMONSOON=# explain analyze delete from scenario where id='1023';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\nwidth=6) (actual time=0.028..0.030 rows=1 loops=1)\n Index Cond: (id = 1023::bigint)\n Total runtime: 0.174 ms\n(3 rows)\n\nI have also tried increasing statistics on both parent and child tables to\n100, vacuum analyze parent and all child tables. But still the same\nslowness.\nThe o/s is Solaris 10, with fsync = true.\nAny ideas what's going on?\nThanks in advance,\n\nJ\n\nHi all,\nwe encounter issues when deleting from a table based on id (primary\nkey). On certain 'id', it took forever to delete and the i/o is 100%\nbusy.\nTable scenario has around 1400 entries. 
It is the parent of 3 other table.\n \nTable \"public.scenario\"\n Column \n| \nType \n| \nModifiers\n-----------------+-----------------------+------------------------------------------------\n id \n|\nbigint \n| not null default nextval('scenario_seq'::text)\n name | character varying(50) |\n description |\ntext \n|\n subscriber_id |\nbigint \n|\n organization_id | bigint |\n schedule_id |\nbigint \n|\nIndexes:\n \"scenario_pkey\" primary key, btree (id)\n \"org_ind_scenario_index\" btree (organization_id)\n \"sch_ind_scenario_index\" btree (schedule_id)\n \"sub_ind_scenario_index\" btree (subscriber_id)\nCheck constraints:\n \"$3\" CHECK (schedule_id >= 0)\n \"$2\" CHECK (organization_id >= 0)\n \"$1\" CHECK (subscriber_id >= 0)\nForeign-key constraints:\n \"0_4774\" FOREIGN KEY (schedule_id) REFERENCES schedule(id) ON DELETE CASCADE\n \"0_4773\" FOREIGN KEY (organization_id) REFERENCES organization(id) ON DELETE CASCADE\n \"0_4772\" FOREIGN KEY (subscriber_id) REFERENCES subscriber(id) ON DELETE CASCADE\n\nIn all the child tables, the foreign key has the same data type and are indexed.\nWhen I do \"delete from scenario where id='1023', it takes less than 200 ms.\nBut when i do \"delete from scenario where id='1099', it took forever (more than 10 minutes that i decided to cancel it.\nI can't do explain analyze, but here is the explain:\nMONSOON=# begin;\nBEGIN\nMONSOON=# explain delete from scenario where id='1099';\n \nQUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1 width=6)\n Index Cond: (id = 1099::bigint)\n(2 rows)\n\nMONSOON=# explain delete from scenario where id='1023';\n \nQUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1 width=6)\n Index Cond: (id = 1023::bigint)\n(2 rows)\n\nMONSOON=# explain analyze delete from scenario where id='1023';\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14\nrows=1 width=6) (actual time=0.028..0.030 rows=1 loops=1)\n Index Cond: (id = 1023::bigint)\n Total runtime: 0.174 ms\n(3 rows)\n\nI have also tried increasing statistics on both parent and child tables\nto 100, vacuum analyze parent and all child tables. But still the same\nslowness.\nThe o/s is Solaris 10, with fsync = true.\nAny ideas what's going on?\nThanks in advance,\n\nJ",
"msg_date": "Tue, 25 Apr 2006 14:41:03 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow deletes on pgsql 7.4"
},
{
"msg_contents": "I should also mention that select ... for update is fast:\nMONSOON=# begin;explain analyze select * from SCENARIO WHERE id = '1099' FOR\nUPDATE;\nBEGIN\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.17 rows=1\nwidth=64) (actual time=0.016..0.017 rows=1 loops=1)\n Index Cond: (id = 1099::bigint)\n Total runtime: 0.072 ms\n(3 rows)\n\n\n\n\nOn 4/25/06, Junaili Lie <[email protected]> wrote:\n>\n> Hi all,\n> we encounter issues when deleting from a table based on id (primary key).\n> On certain 'id', it took forever to delete and the i/o is 100% busy.\n> Table scenario has around 1400 entries. It is the parent of 3 other table.\n> Table \"public.scenario\"\n> Column | Type | Modifiers\n>\n> -----------------+-----------------------+------------------------------------------------\n> id | bigint | not null default\n> nextval('scenario_seq'::text)\n> name | character varying(50) |\n> description | text |\n> subscriber_id | bigint |\n> organization_id | bigint |\n> schedule_id | bigint |\n> Indexes:\n> \"scenario_pkey\" primary key, btree (id)\n> \"org_ind_scenario_index\" btree (organization_id)\n> \"sch_ind_scenario_index\" btree (schedule_id)\n> \"sub_ind_scenario_index\" btree (subscriber_id)\n> Check constraints:\n> \"$3\" CHECK (schedule_id >= 0)\n> \"$2\" CHECK (organization_id >= 0)\n> \"$1\" CHECK (subscriber_id >= 0)\n> Foreign-key constraints:\n> \"0_4774\" FOREIGN KEY (schedule_id) REFERENCES schedule(id) ON DELETE\n> CASCADE\n> \"0_4773\" FOREIGN KEY (organization_id) REFERENCES organization(id) ON\n> DELETE CASCADE\n> \"0_4772\" FOREIGN KEY (subscriber_id) REFERENCES subscriber(id) ON\n> DELETE CASCADE\n>\n> In all the child tables, the foreign key has the same data type and are\n> indexed.\n> When I do \"delete from scenario where id='1023', it takes less than 200\n> ms.\n> But when i do \"delete from scenario where id='1099', it took forever (more\n> than 10 minutes that i decided to cancel it.\n> I can't do explain analyze, but here is the explain:\n> MONSOON=# begin;\n> BEGIN\n> MONSOON=# explain delete from scenario where id='1099';\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------\n> Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\n> width=6)\n> Index Cond: (id = 1099::bigint)\n> (2 rows)\n>\n> MONSOON=# explain delete from scenario where id='1023';\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------\n> Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\n> width=6)\n> Index Cond: (id = 1023::bigint)\n> (2 rows)\n>\n> MONSOON=# explain analyze delete from scenario where id='1023';\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------\n> Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\n> width=6) (actual time=0.028..0.030 rows=1 loops=1)\n> Index Cond: (id = 1023::bigint)\n> Total runtime: 0.174 ms\n> (3 rows)\n>\n> I have also tried increasing statistics on both parent and child tables to\n> 100, vacuum analyze parent and all child tables. But still the same\n> slowness.\n> The o/s is Solaris 10, with fsync = true.\n> Any ideas what's going on?\n> Thanks in advance,\n>\n> J\n>\n\nI should also mention that select ... 
for update is fast:\nMONSOON=# begin;explain analyze select * from SCENARIO WHERE id = '1099' FOR UPDATE;\nBEGIN\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.17\nrows=1 width=64) (actual time=0.016..0.017 rows=1 loops=1)\n Index Cond: (id = 1099::bigint)\n Total runtime: 0.072 ms\n(3 rows)\n\n\nOn 4/25/06, Junaili Lie <[email protected]> wrote:\nHi all,\nwe encounter issues when deleting from a table based on id (primary\nkey). On certain 'id', it took forever to delete and the i/o is 100%\nbusy.\nTable scenario has around 1400 entries. It is the parent of 3 other table. \nTable \"public.scenario\"\n Column \n| \nType \n| \nModifiers\n-----------------+-----------------------+------------------------------------------------\n id \n|\nbigint \n| not null default nextval('scenario_seq'::text)\n name | character varying(50) |\n description |\ntext \n|\n subscriber_id |\nbigint \n|\n organization_id | bigint |\n schedule_id |\nbigint \n|\nIndexes:\n \"scenario_pkey\" primary key, btree (id)\n \"org_ind_scenario_index\" btree (organization_id)\n \"sch_ind_scenario_index\" btree (schedule_id)\n \"sub_ind_scenario_index\" btree (subscriber_id)\nCheck constraints:\n \"$3\" CHECK (schedule_id >= 0)\n \"$2\" CHECK (organization_id >= 0)\n \"$1\" CHECK (subscriber_id >= 0)\nForeign-key constraints:\n \"0_4774\" FOREIGN KEY (schedule_id) REFERENCES schedule(id) ON DELETE CASCADE\n \"0_4773\" FOREIGN KEY (organization_id) REFERENCES organization(id) ON DELETE CASCADE\n \"0_4772\" FOREIGN KEY (subscriber_id) REFERENCES subscriber(id) ON DELETE CASCADE\n\nIn all the child tables, the foreign key has the same data type and are indexed.\nWhen I do \"delete from scenario where id='1023', it takes less than 200 ms.\nBut when i do \"delete from scenario where id='1099', it took forever (more than 10 minutes that i decided to cancel it.\nI can't do explain analyze, but here is the explain:\nMONSOON=# begin;\nBEGIN\nMONSOON=# explain delete from scenario where id='1099';\n \nQUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1 width=6)\n Index Cond: (id = 1099::bigint)\n(2 rows)\n\nMONSOON=# explain delete from scenario where id='1023';\n \nQUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1 width=6)\n Index Cond: (id = 1023::bigint)\n(2 rows)\n\nMONSOON=# explain analyze delete from scenario where id='1023';\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14\nrows=1 width=6) (actual time=0.028..0.030 rows=1 loops=1)\n Index Cond: (id = 1023::bigint)\n Total runtime: 0.174 ms\n(3 rows)\n\nI have also tried increasing statistics on both parent and child tables\nto 100, vacuum analyze parent and all child tables. But still the same\nslowness.\nThe o/s is Solaris 10, with fsync = true.\nAny ideas what's going on?\nThanks in advance,\n\nJ",
"msg_date": "Tue, 25 Apr 2006 14:46:30 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow deletes on pgsql 7.4"
},
{
"msg_contents": "\"Junaili Lie\" <[email protected]> writes:\n> we encounter issues when deleting from a table based on id (primary key). O=\n> n\n> certain 'id', it took forever to delete and the i/o is 100% busy.\n\nAlmost always, if delete is slow when selecting the same rows is fast,\nit's because you've got a trigger performance problem --- most commonly,\nthere are foreign keys referencing this table from other tables and you\ndon't have the referencing columns indexed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 19:09:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow deletes on pgsql 7.4 "
},
{
"msg_contents": "hi,\nThanks for the answer.\nI have double checked that all the foreign key that are referencing \"id\" on\nscenario are indexed.\nI have even vacuum analyze scenario table and all the tables that referenced\nthis table.\nSomething that is interesting is that: it only happens for a certain values.\n\nie. delete from scenario where id='1023' is very fast, but delete from\nscenario where id='1099' is running forever.\n\nAny ideas?\n\nJ\n\n\n\n\nOn 4/25/06, Tom Lane <[email protected]> wrote:\n>\n> \"Junaili Lie\" <[email protected]> writes:\n> > we encounter issues when deleting from a table based on id (primary\n> key). O=\n> > n\n> > certain 'id', it took forever to delete and the i/o is 100% busy.\n>\n> Almost always, if delete is slow when selecting the same rows is fast,\n> it's because you've got a trigger performance problem --- most commonly,\n> there are foreign keys referencing this table from other tables and you\n> don't have the referencing columns indexed.\n>\n> regards, tom lane\n>\n\nhi,\nThanks for the answer.\nI have double checked that all the foreign key that are referencing \"id\" on scenario are indexed.\nI have even vacuum analyze scenario table and all the tables that referenced this table.\nSomething that is interesting is that: it only happens for a certain values. \nie. delete from scenario where id='1023' is very fast, but delete from scenario where id='1099' is running forever.\n\nAny ideas?\n\nJ\n\n\nOn 4/25/06, Tom Lane <[email protected]> wrote:\n\"Junaili Lie\" <[email protected]> writes:> we encounter issues when deleting from a table based on id (primary key). O=> n> certain 'id', it took forever to delete and the i/o is 100% busy.\nAlmost always, if delete is slow when selecting the same rows is fast,it's because you've got a trigger performance problem --- most commonly,there are foreign keys referencing this table from other tables and you\ndon't have the referencing columns indexed. regards,\ntom lane",
"msg_date": "Tue, 25 Apr 2006 17:14:56 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow deletes on pgsql 7.4"
},
{
"msg_contents": "\"Junaili Lie\" <[email protected]> writes:\n> ie. delete from scenario where id=3D'1023' is very fast, but delete from\n> scenario where id=3D'1099' is running forever.\n\nWhat does EXPLAIN show for each of those cases?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 25 Apr 2006 21:04:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow deletes on pgsql 7.4 "
},
{
"msg_contents": "It was on my first email.\nHere it is again:\nMONSOON=# explain delete from scenario where id='1099';\n QUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\nwidth=6)\n Index Cond: (id = 1099::bigint)\n(2 rows)\n\nMONSOON=# explain delete from scenario where id='1023';\n QUERY PLAN\n------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1\nwidth=6)\n Index Cond: (id = 1023::bigint)\n(2 rows)\n\nThanks,\nJ\n\n\nOn 4/25/06, Tom Lane <[email protected]> wrote:\n>\n> \"Junaili Lie\" <[email protected]> writes:\n> > ie. delete from scenario where id=3D'1023' is very fast, but delete from\n> > scenario where id=3D'1099' is running forever.\n>\n> What does EXPLAIN show for each of those cases?\n>\n> regards, tom lane\n>\n\nIt was on my first email.\nHere it is again:\nMONSOON=# explain delete from scenario where id='1099'; QUERY PLAN------------------------------------------------------------------------------ Index Scan using scenario_pkey on scenario (cost=\n0.00..3.14 rows=1 width=6) Index Cond: (id = 1099::bigint)(2 rows)MONSOON=# explain delete from scenario where id='1023'; QUERY PLAN------------------------------------------------------------------------------\n Index Scan using scenario_pkey on scenario (cost=0.00..3.14 rows=1 width=6) Index Cond: (id = 1023::bigint)(2 rows) \nThanks,\nJ \nOn 4/25/06, Tom Lane <[email protected]> wrote:\n\"Junaili Lie\" <[email protected]> writes:> ie. delete from scenario where id=3D'1023' is very fast, but delete from\n> scenario where id=3D'1099' is running forever.What does EXPLAIN show for each of those cases? regards, tom lane",
"msg_date": "Wed, 26 Apr 2006 07:27:13 -0700",
"msg_from": "\"Junaili Lie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow deletes on pgsql 7.4"
}
] |
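The thread ends without a resolution, so the sketch below only illustrates the generic remedy for Tom Lane's diagnosis, using a hypothetical child table named scenario_event (the poster reports his referencing columns are already indexed, so this is illustrative rather than a confirmed fix). Each column that references scenario(id) with ON DELETE CASCADE should carry an index, and the per-id cost of the cascade can be gauged by running the query the delete trigger effectively issues for the fast and slow ids.

-- hypothetical child table; scenario_id is assumed to reference scenario(id) ON DELETE CASCADE
CREATE INDEX scenario_event_scenario_id_idx ON scenario_event (scenario_id);

-- roughly the work the cascade performs for one parent row; compare id 1023 vs id 1099
EXPLAIN ANALYZE
SELECT count(*) FROM scenario_event WHERE scenario_id = 1099;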
[
{
"msg_contents": "I've had intermittent \"freeze and reboot\" and, worse, just plain freeze problems with the Core Duo's I've been testing. I have not been able to narrow it down so I do not know if it is a platform issue or a CPU issue. It appears to be HW, not SW, related since I have experienced the problem both under M$ and Linux 2.6 based OS's. I have not tested the Core Duo's under *BSD.\n\nAlso, being that they are only 32b Core Duo's have limited utility for a present day DB server.\n\nPower and space critical applications where 64b is not required may be a reasonable place for them... ...if the \npresent reliability problems I'm seeing go away.\n\nRon\n\n\n-----Original Message-----\n>From: David Boreham <[email protected]>\n>Sent: Apr 25, 2006 5:15 PM\n>To: [email protected]\n>Subject: Re: [PERFORM] Large (8M) cache vs. dual-core CPUs\n>\n>\n>>My personal favorite pg platform at this time is one based on a 2 socket, dual core ready mainboard with 16 DIMM slots combined with dual core AMD Kx's.\n>> \n>>\n>Right. We've been buying Tyan bare-bones boxes like this.\n>It's better to go with bare-bones than building boxes from bare metal\n>because the cooling issues are addressed correctly.\n>\n>Note that if you need a large number of machines, then Intel\n>Core Duo may give the best overall price/performance because\n>they're cheaper to run and cool.\n>\n",
"msg_date": "Tue, 25 Apr 2006 17:43:14 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large (8M) cache vs. dual-core CPUs"
}
] |