[ { "msg_contents": "All good ideas, unfortunately, we can't change the inserting applicatin\ncode easily. \n\n> -----Original Message-----\n> From: Simon Riggs [mailto:[email protected]] \n> Sent: Tuesday, February 07, 2006 5:09 PM\n> To: Marc Morin\n> Cc: Markus Schaber; [email protected]\n> Subject: Re: [PERFORM] partitioning and locking problems\n> \n> On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:\n> \n> > > > \t1- long running report is running on view\n> > > > \t2- continuous inserters into view into a table \n> via a rule\n> > > > \t3- truncate or rule change occurs, taking an \n> exclusive lock.\n> > > > Must wait for #1 to finish.\n> > > > \t4- new reports and inserters must now wait for #3.\n> > > > \t5- now everyone is waiting for a single query \n> in #1. Results\n> > > > in loss of insert data granularity (important for our \n> application).\n> \n> > Using a separate lock table is what we've decided to do in this \n> > particular case to serialize #1 and #3. Inserters don't take this \n> > lock and as such will not be stalled.\n> \n> Would it not be simpler to have the Inserters change from one \n> table to another either upon command, on a fixed timing cycle \n> or even better based upon one of the inserted values \n> (Logdate?) (or all 3?). (Requires changes in the application \n> layer: 3GL or db functions).\n> \n> The truncates can wait until the data has stopped being used.\n> \n> I'd be disinclined to using the locking system as a scheduling tool.\n> \n> Best Regards, Simon Riggs\n> \n> \n> \n", "msg_date": "Tue, 7 Feb 2006 17:57:03 -0500", "msg_from": "\"Marc Morin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and locking problems" } ]
[ { "msg_contents": "Postgres doesn't seem to optimize away unnecessary joins in a view\ndefinition when the view is queried in such a way that the join need not\nbe executed. In the example below, I define two tables, foo and bar,\nwith a foreign key on bar referencing foo, and a view on the natural\njoin of the tables. The tables are defined so that the relationship\nfrom bar to foo is allowed to be many to one, with the column of bar\nreferencing foo (column a) set NOT NULL, so that there must be exactly\none foo record for every bar record. I then EXPLAIN selecting the \"b\"\ncolumn from bar, through the view and from bar directly. The tables\nhave been ANALYZEd but have no data. EXPLAIN shows the join actually\noccurring when selecting b from the view quux. If I understand\ncorrectly (maybe I don't), this is guaranteed to be exactly the same as\nthe selecting b directly from the bar table. The practical import of\nthis comes into play when views are provided to simplify queries for end\nusers, and those views use joins to include related data. If the user\nenters a query that is equivalent to a query on a base table, why should\nthe query pay a performance penalty ? \n\ntable foo:\n\nColumn | Type | Modifiers\n--------+---------+-----------\na | integer | not null\nIndexes:\n \"foo_pkey\" PRIMARY KEY, btree (a)\n\n\ntable bar:\n\nColumn | Type | Modifiers\n--------+---------+-----------\na | integer | not null\nb | integer |\nForeign-key constraints:\n \"bar_a_fkey\" FOREIGN KEY (a) REFERENCES foo(a)\n\n\nview quux:\n\nColumn | Type | Modifiers\n--------+---------+-----------\na | integer |\nb | integer |\nView definition:\nSELECT bar.a, bar.b\n FROM bar\nNATURAL JOIN foo\n\n\nEXPLAINed Queries:\n\nexplain select b from bar;\n\n QUERY PLAN\n---------------------------------------------------\nSeq Scan on bar (cost=0.00..1.00 rows=1 width=4)\n(1 row)\n\nexplain select b from quux;\n\n QUERY PLAN\n--------------------------------------------------------------------------\nNested Loop (cost=0.00..5.84 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..1.00 rows=1 width=8)\n -> Index Scan using foo_pkey on foo (cost=0.00..4.82 rows=1\nwidth=4)\n Index Cond: (\"outer\".a = foo.a)\n(4 rows)\n\n-- \nJacob Costello <[email protected]>\nSun Trading, LLC\n\n\n", "msg_date": "Wed, 08 Feb 2006 07:54:51 -0600", "msg_from": "Jacob Costello <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing away join when querying view" }, { "msg_contents": "Jacob Costello <[email protected]> writes:\n> Postgres doesn't seem to optimize away unnecessary joins\n\nThere is no such thing as an unnecessary join, unless you are willing to\nstake the correctness of the query on constraints that could be dropped\nafter the query is planned. Until we have some infrastructure to deal\nwith that situation, nothing like this is going to happen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Feb 2006 10:37:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing away join when querying view " }, { "msg_contents": "On Wed, 8 Feb 2006, Jacob Costello wrote:\n\n> Postgres doesn't seem to optimize away unnecessary joins in a view\n> definition when the view is queried in such a way that the join need not\n> be executed. In the example below, I define two tables, foo and bar,\n> with a foreign key on bar referencing foo, and a view on the natural\n> join of the tables. 
The tables are defined so that the relationship\n> from bar to foo is allowed to be many to one, with the column of bar\n> referencing foo (column a) set NOT NULL, so that there must be exactly\n> one foo record for every bar record. I then EXPLAIN selecting the \"b\"\n> column from bar, through the view and from bar directly. The tables\n> have been ANALYZEd but have no data. EXPLAIN shows the join actually\n> occurring when selecting b from the view quux. If I understand\n> correctly (maybe I don't), this is guaranteed to be exactly the same as\n> the selecting b directly from the bar table.\n\nAFAIK there are periods in which a foreign key does not guarantee that\nthere's one foo record for every bar record between an action and the\nconstraint check for that action at statement end so you'd probably have\nto be careful in any case.\n", "msg_date": "Wed, 8 Feb 2006 07:46:39 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing away join when querying view" } ]
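For anyone wanting to reproduce Jacob's test case, the DDL implied by the \d output above is roughly the following (reconstructed from the psql output, so treat it as an approximation of the original schema):

    CREATE TABLE foo (a integer PRIMARY KEY);
    CREATE TABLE bar (a integer NOT NULL REFERENCES foo(a),
                      b integer);
    CREATE VIEW quux AS
        SELECT bar.a, bar.b FROM bar NATURAL JOIN foo;

    ANALYZE foo;
    ANALYZE bar;

    EXPLAIN SELECT b FROM bar;   -- plain seq scan on bar
    EXPLAIN SELECT b FROM quux;  -- the join to foo is still planned, as discussed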
[ { "msg_contents": "I'm specifically interested in the default C Locale; but if there's a \ndifference in the answer for other locales, I'd like to hear about \nthat as well.\n\nThanks in Advance,\nRon\n\n\n", "msg_date": "Wed, 08 Feb 2006 09:11:11 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Size and performance hit from using UTF8 vs. ASCII?" }, { "msg_contents": "On Wed, 2006-02-08 at 09:11 -0500, Ron wrote:\n> I'm specifically interested in the default C Locale; but if there's a \n> difference in the answer for other locales, I'd like to hear about \n> that as well.\n\nThe size hit will be effectively zero if your data is mainly of the\nASCII variety, since ASCII printable characters to UTF-8 is an identity\ntransform. However anything involving string comparisons, including\nequality, similarity (LIKE, regular expressions), or any other kind of\ncomparison (ORDER BY, GROUP BY) will be slower. In my experience the\nperformance hit varies from zero to 100% in CPU time. UTF-8 is never\nfaster that ASCII, as far as I know.\n\nHowever, if you need UTF-8 then you need it, and there's no point in\nworrying about the performance hit.\n\nYou may as well just do two benchmark runs with your database\ninitialized in either character set to see for yourself.\n\n-jwb\n\n\n", "msg_date": "Wed, 08 Feb 2006 07:54:01 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Size and performance hit from using UTF8 vs. ASCII?" } ]
[ { "msg_contents": "In an attempt to save myself some time, I thought I ask The Community \nif anyone has guidance here.\n\nHW: Intel PM (very likely to be upgraded to an AMD Turion when the \nproper HW becomes available) w/ 2GB of RAM (shortly to be 4GB) and a \n5400rpm 100GB HD (will be dual 7200rpm 160GB HD's as soon as they \nbecome available)\n\nOS: The laptop in question is running the latest WinXP service pack \nand patches. When the CPU and HD upgrades mentioned above happen, I \nwill probably start running dual boot FC5 + 64b Windows.\n\nPossible optional HW: external \"box of HDs\" for doing and/or \nmodelling stuff that can't be using only the internal ones.\n\nI want to get as much performance as I can out of the HW + OS.\n\nAnyone want to take a stab at what the config files and options \nshould be for best performance under most circumstances?\n\nThis is intended to be a portable development platform, so opinions \nas to which contrib modules are worth/not worth installing is also appreciated.\n\nTiA,\nRon\n\n\n", "msg_date": "Wed, 08 Feb 2006 17:05:02 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Sane configuration options for a WinXP laptop 8.1 install?" }, { "msg_contents": "On Wed, Feb 08, 2006 at 05:05:02PM -0500, Ron wrote:\n> In an attempt to save myself some time, I thought I ask The Community \n> if anyone has guidance here.\n> \n> HW: Intel PM (very likely to be upgraded to an AMD Turion when the \n> proper HW becomes available) w/ 2GB of RAM (shortly to be 4GB) and a \n> 5400rpm 100GB HD (will be dual 7200rpm 160GB HD's as soon as they \n> become available)\n> \n> OS: The laptop in question is running the latest WinXP service pack \n> and patches. When the CPU and HD upgrades mentioned above happen, I \n> will probably start running dual boot FC5 + 64b Windows.\n> \n> Possible optional HW: external \"box of HDs\" for doing and/or \n> modelling stuff that can't be using only the internal ones.\n> \n> I want to get as much performance as I can out of the HW + OS.\n> \n> Anyone want to take a stab at what the config files and options \n> should be for best performance under most circumstances?\n> \n> This is intended to be a portable development platform, so opinions \n> as to which contrib modules are worth/not worth installing is also \n> appreciated.\n\nOff the top of my head...\n\nshared_buffers=30000\ndrop max_connections to what you'll actually be using\nmaintenance_work_mem=100000\nwork_mem=2000000/max_connections (maybe * 0.9 for some added margin)\nautovacuum=on\nautovacuum_vacuum_cost_delay=20\nautovacuum_vacuum_scale_factor=0.2\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 9 Feb 2006 00:32:35 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sane configuration options for a WinXP laptop 8.1 install?" } ]
[ { "msg_contents": "Hello\n\nWe are running an application via web that use a lot of time to perform\nsome operations. We are trying to find out if some of the sql statements\nused are the reason of the slow speed.\n\nWe have identified a sql that takes like 4-5000ms more than the second\nslowest sql in out test server. I hope that we will get some help to try\nto optimize it.\n\nThanks in advance for any help.\n\nSome information:\n********************************************************************************\nrttest=# EXPLAIN ANALYZE SELECT DISTINCT main.* \n\nFROM Users main , \n Principals Principals_1, \n ACL ACL_2, \n Groups Groups_3, \n CachedGroupMembers CachedGroupMembers_4 \n\nWHERE ((ACL_2.RightName = 'OwnTicket')) \nAND ((CachedGroupMembers_4.MemberId = Principals_1.id)) \nAND ((Groups_3.id = CachedGroupMembers_4.GroupId)) \nAND ((Principals_1.Disabled = '0') or (Principals_1.Disabled = '0')) \nAND ((Principals_1.id != '1')) \nAND ((main.id = Principals_1.id)) \nAND ( ( ACL_2.PrincipalId = Groups_3.id AND ACL_2.PrincipalType =\n'Group' AND ( Groups_3.Domain = 'SystemInternal' OR Groups_3.Domain =\n'UserDefined' OR Groups_3.Domain = 'ACLEquivalence')) OR ( (\n(Groups_3.Domain = 'RT::Queue-Role' ) ) AND Groups_3.Type\n=ACL_2.PrincipalType) )\nAND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue')\n) \n\t\t\t \nORDER BY main.Name ASC\n\nQUERY PLAN\n-----------------------------------------------------------------\n Unique (cost=28394.99..28395.16 rows=2 width=706) (actual\ntime=15574.272..15787.681 rows=254 loops=1)\n -> Sort (cost=28394.99..28394.99 rows=2 width=706) (actual\ntime=15574.267..15607.310 rows=22739 loops=1)\n Sort Key: main.name, main.id, main.\"password\", main.comments,\nmain.signature, main.emailaddress, main.freeformcontactinfo,\nmain.organization, main.realname, main.nickname, main.lang,\nmain.emailencoding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.pgpkey, main.creator,\nmain.created, main.lastupdatedby, main.lastupdated\n -> Nested Loop (cost=20825.91..28394.98 rows=2 width=706)\n(actual time=1882.608..14589.596 rows=22739 loops=1)\n Join Filter: ((((\"inner\".\"domain\")::text =\n'RT::Queue-Role'::text) OR (\"outer\".principalid = \"inner\".id)) AND\n(((\"inner\".\"type\")::text = (\"outer\".principaltype)::text) OR\n(\"outer\".principalid = \"inner\".id)) AND (((\"inner\".\"domain\")::text =\n'RT::Queue-Role'::text) OR ((\"outer\".principaltype)::text =\n'Group'::text)) AND (((\"inner\".\"type\")::text =\n(\"outer\".principaltype)::text) OR ((\"outer\".principaltype)::text =\n'Group'::text)) AND (((\"inner\".\"type\")::text =\n(\"outer\".principaltype)::text) OR ((\"inner\".\"domain\")::text =\n'SystemInternal'::text) OR ((\"inner\".\"domain\")::text =\n'UserDefined'::text) OR ((\"inner\".\"domain\")::text =\n'ACLEquivalence'::text)))\n -> Seq Scan on acl acl_2 (cost=0.00..40.57 rows=45\nwidth=13) (actual time=0.020..1.730 rows=51 loops=1)\n Filter: (((rightname)::text = 'OwnTicket'::text)\nAND (((objecttype)::text = 'RT::System'::text) OR ((objecttype)::text =\n'RT::Queue'::text)))\n -> Materialize (cost=20825.91..20859.37 rows=3346\nwidth=738) (actual time=36.925..166.374 rows=66823 loops=51)\n -> Merge Join (cost=15259.56..20825.91 rows=3346\nwidth=738) (actual time=1882.539..3538.258 
rows=66823 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".memberid)\n -> Merge Join (cost=0.00..5320.37\nrows=13182 width=710) (actual time=0.116..874.960 rows=13167 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using users_pkey on\nusers main (cost=0.00..1063.60 rows=13181 width=706) (actual\ntime=0.032..52.355 rows=13181 loops=1)\n -> Index Scan using principals_pkey on\nprincipals principals_1 (cost=0.00..3737.49 rows=141801 width=4)\n(actual time=0.020..463.043 rows=141778 loops=1)\n Filter: ((disabled = 0::smallint)\nAND (id <> 1))\n -> Sort (cost=15259.56..15349.54 rows=35994\nwidth=36) (actual time=1882.343..1988.353 rows=80357 loops=1)\n Sort Key: cachedgroupmembers_4.memberid\n -> Hash Join (cost=3568.51..12535.63\nrows=35994 width=36) (actual time=96.151..1401.537 rows=80357 loops=1)\n Hash Cond: (\"outer\".groupid =\n\"inner\".id)\n -> Seq Scan on\ncachedgroupmembers cachedgroupmembers_4 (cost=0.00..5961.53 rows=352753\nwidth=8) (actual time=0.011..500.508 rows=352753 loops=1)\n -> Hash (cost=3535.70..3535.70\nrows=13124 width=32) (actual time=95.966..95.966 rows=0 loops=1)\n -> Index Scan using\ngroups1, groups1, groups1, groups1 on groups groups_3 \n(cost=0.00..3535.70 rows=13124 width=32) (actual time=0.045..76.506\nrows=13440 loops=1)\n Index Cond:\n(((\"domain\")::text = 'RT::Queue-Role'::text) OR ((\"domain\")::text =\n'SystemInternal'::text) OR ((\"domain\")::text = 'UserDefined'::text) OR\n((\"domain\")::text = 'ACLEquivalence'::text))\n\n Total runtime: 15825.022 ms\n\n********************************************************************************\nrttest=# \\d users\n Table \"public.users\"\n Column | Type | \nModifiers \n-----------------------+-----------------------------+------------------------------------------------\n id | integer | not null default\nnextval('users_id_seq'::text)\n name | character varying(200) | not null\n password | character varying(40) | \n comments | text | \n signature | text | \n emailaddress | character varying(120) | \n freeformcontactinfo | text | \n organization | character varying(200) | \n realname | character varying(120) | \n nickname | character varying(16) | \n lang | character varying(16) | \n emailencoding | character varying(16) | \n webencoding | character varying(16) | \n externalcontactinfoid | character varying(100) | \n contactinfosystem | character varying(30) | \n externalauthid | character varying(100) | \n authsystem | character varying(30) | \n gecos | character varying(16) | \n homephone | character varying(30) | \n workphone | character varying(30) | \n mobilephone | character varying(30) | \n pagerphone | character varying(30) | \n address1 | character varying(200) | \n address2 | character varying(200) | \n city | character varying(100) | \n state | character varying(100) | \n zip | character varying(16) | \n country | character varying(50) | \n timezone | character varying(50) | \n pgpkey | text | \n creator | integer | not null default\n0\n created | timestamp without time zone | \n lastupdatedby | integer | not null default\n0\n lastupdated | timestamp without time zone | \nIndexes:\n \"users_pkey\" primary key, btree (id)\n \"users1\" unique, btree (name)\n \"users2\" btree (name)\n \"users3\" btree (id, emailaddress)\n \"users4\" btree (emailaddress)\n********************************************************************************\nrttest=# \\d principals\n\n Table \"public.principals\"\n Column | Type | \nModifiers 
\n---------------+-----------------------+----------------------------------------\n id | integer | not null default\nnextval('principals_id_seq'::text)\n principaltype | character varying(16) | not null\n objectid | integer | \n disabled | smallint | not null default 0\nIndexes:\n \"principals_pkey\" primary key, btree (id)\n \"principals2\" btree (objectid)\n\n********************************************************************************\nrttest=# \\d acl\n\n Table \"public.acl\"\n Column | Type | \nModifiers \n---------------+-----------------------+----------------------------------------------\n id | integer | not null default\nnextval('acl_id_seq'::text)\n principaltype | character varying(25) | not null\n principalid | integer | not null\n rightname | character varying(25) | not null\n objecttype | character varying(25) | not null\n objectid | integer | not null default 0\n delegatedby | integer | not null default 0\n delegatedfrom | integer | not null default 0\nIndexes:\n \"acl_pkey\" primary key, btree (id)\n \"acl1\" btree (rightname, objecttype, objectid, principaltype,\nprincipalid)\n\n\n********************************************************************************\nrttest=# \\d groups\n\n Table \"public.groups\"\n Column | Type | \nModifiers \n-------------+------------------------+-------------------------------------------------\n id | integer | not null default\nnextval('groups_id_seq'::text)\n name | character varying(200) | \n description | character varying(255) | \n domain | character varying(64) | \n type | character varying(64) | \n instance | integer | \nIndexes:\n \"groups_pkey\" primary key, btree (id)\n \"groups1\" unique, btree (\"domain\", instance, \"type\", id, name)\n \"groups2\" btree (\"type\", instance, \"domain\")\n\n\n********************************************************************************\n rttest=# \\d cachedgroupmembers\"\n\n Table \"public.cachedgroupmembers\"\n Column | Type | \nModifiers \n-------------------+----------+-------------------------------------------------------------\n id | integer | not null default\nnextval('cachedgroupmembers_id_seq'::text)\n groupid | integer | \n memberid | integer | \n via | integer | \n immediateparentid | integer | \n disabled | smallint | not null default 0\nIndexes:\n \"cachedgroupmembers_pkey\" primary key, btree (id)\n \"cachedgroupmembers2\" btree (memberid)\n \"cachedgroupmembers3\" btree (groupid)\n \"disgroumem\" btree (groupid, memberid, disabled)\n\n\n********************************************************************************\n\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n", "msg_date": "Thu, 09 Feb 2006 16:10:27 +0100", "msg_from": "Rafael Martinez Guerrero <[email protected]>", "msg_from_op": true, "msg_subject": "Help with optimizing a sql statement" }, { "msg_contents": "At least part of the problem is that it's way off on some of the row\nestimates. I'd suggest upping the statisticss target on at least all of\nthe join columns to at least 100. (Note that it's doing a nested loop\nthinking it will have only 2 rows but it actually has 22000 rows).\n\nOn Thu, Feb 09, 2006 at 04:10:27PM +0100, Rafael Martinez Guerrero wrote:\n> Hello\n> \n> We are running an application via web that use a lot of time to perform\n> some operations. 
We are trying to find out if some of the sql statements\n> used are the reason of the slow speed.\n> \n> We have identified a sql that takes like 4-5000ms more than the second\n> slowest sql in out test server. I hope that we will get some help to try\n> to optimize it.\n> \n> Thanks in advance for any help.\n> \n> Some information:\n> ********************************************************************************\n> rttest=# EXPLAIN ANALYZE SELECT DISTINCT main.* \n> \n> FROM Users main , \n> Principals Principals_1, \n> ACL ACL_2, \n> Groups Groups_3, \n> CachedGroupMembers CachedGroupMembers_4 \n> \n> WHERE ((ACL_2.RightName = 'OwnTicket')) \n> AND ((CachedGroupMembers_4.MemberId = Principals_1.id)) \n> AND ((Groups_3.id = CachedGroupMembers_4.GroupId)) \n> AND ((Principals_1.Disabled = '0') or (Principals_1.Disabled = '0')) \n> AND ((Principals_1.id != '1')) \n> AND ((main.id = Principals_1.id)) \n> AND ( ( ACL_2.PrincipalId = Groups_3.id AND ACL_2.PrincipalType =\n> 'Group' AND ( Groups_3.Domain = 'SystemInternal' OR Groups_3.Domain =\n> 'UserDefined' OR Groups_3.Domain = 'ACLEquivalence')) OR ( (\n> (Groups_3.Domain = 'RT::Queue-Role' ) ) AND Groups_3.Type\n> =ACL_2.PrincipalType) )\n> AND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue')\n> ) \n> \t\t\t \n> ORDER BY main.Name ASC\n> \n> QUERY PLAN\n> -----------------------------------------------------------------\n> Unique (cost=28394.99..28395.16 rows=2 width=706) (actual\n> time=15574.272..15787.681 rows=254 loops=1)\n> -> Sort (cost=28394.99..28394.99 rows=2 width=706) (actual\n> time=15574.267..15607.310 rows=22739 loops=1)\n> Sort Key: main.name, main.id, main.\"password\", main.comments,\n> main.signature, main.emailaddress, main.freeformcontactinfo,\n> main.organization, main.realname, main.nickname, main.lang,\n> main.emailencoding, main.webencoding, main.externalcontactinfoid,\n> main.contactinfosystem, main.externalauthid, main.authsystem,\n> main.gecos, main.homephone, main.workphone, main.mobilephone,\n> main.pagerphone, main.address1, main.address2, main.city, main.state,\n> main.zip, main.country, main.timezone, main.pgpkey, main.creator,\n> main.created, main.lastupdatedby, main.lastupdated\n> -> Nested Loop (cost=20825.91..28394.98 rows=2 width=706)\n> (actual time=1882.608..14589.596 rows=22739 loops=1)\n> Join Filter: ((((\"inner\".\"domain\")::text =\n> 'RT::Queue-Role'::text) OR (\"outer\".principalid = \"inner\".id)) AND\n> (((\"inner\".\"type\")::text = (\"outer\".principaltype)::text) OR\n> (\"outer\".principalid = \"inner\".id)) AND (((\"inner\".\"domain\")::text =\n> 'RT::Queue-Role'::text) OR ((\"outer\".principaltype)::text =\n> 'Group'::text)) AND (((\"inner\".\"type\")::text =\n> (\"outer\".principaltype)::text) OR ((\"outer\".principaltype)::text =\n> 'Group'::text)) AND (((\"inner\".\"type\")::text =\n> (\"outer\".principaltype)::text) OR ((\"inner\".\"domain\")::text =\n> 'SystemInternal'::text) OR ((\"inner\".\"domain\")::text =\n> 'UserDefined'::text) OR ((\"inner\".\"domain\")::text =\n> 'ACLEquivalence'::text)))\n> -> Seq Scan on acl acl_2 (cost=0.00..40.57 rows=45\n> width=13) (actual time=0.020..1.730 rows=51 loops=1)\n> Filter: (((rightname)::text = 'OwnTicket'::text)\n> AND (((objecttype)::text = 'RT::System'::text) OR ((objecttype)::text =\n> 'RT::Queue'::text)))\n> -> Materialize (cost=20825.91..20859.37 rows=3346\n> width=738) (actual time=36.925..166.374 rows=66823 loops=51)\n> -> Merge Join (cost=15259.56..20825.91 rows=3346\n> width=738) (actual 
time=1882.539..3538.258 rows=66823 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".memberid)\n> -> Merge Join (cost=0.00..5320.37\n> rows=13182 width=710) (actual time=0.116..874.960 rows=13167 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".id)\n> -> Index Scan using users_pkey on\n> users main (cost=0.00..1063.60 rows=13181 width=706) (actual\n> time=0.032..52.355 rows=13181 loops=1)\n> -> Index Scan using principals_pkey on\n> principals principals_1 (cost=0.00..3737.49 rows=141801 width=4)\n> (actual time=0.020..463.043 rows=141778 loops=1)\n> Filter: ((disabled = 0::smallint)\n> AND (id <> 1))\n> -> Sort (cost=15259.56..15349.54 rows=35994\n> width=36) (actual time=1882.343..1988.353 rows=80357 loops=1)\n> Sort Key: cachedgroupmembers_4.memberid\n> -> Hash Join (cost=3568.51..12535.63\n> rows=35994 width=36) (actual time=96.151..1401.537 rows=80357 loops=1)\n> Hash Cond: (\"outer\".groupid =\n> \"inner\".id)\n> -> Seq Scan on\n> cachedgroupmembers cachedgroupmembers_4 (cost=0.00..5961.53 rows=352753\n> width=8) (actual time=0.011..500.508 rows=352753 loops=1)\n> -> Hash (cost=3535.70..3535.70\n> rows=13124 width=32) (actual time=95.966..95.966 rows=0 loops=1)\n> -> Index Scan using\n> groups1, groups1, groups1, groups1 on groups groups_3 \n> (cost=0.00..3535.70 rows=13124 width=32) (actual time=0.045..76.506\n> rows=13440 loops=1)\n> Index Cond:\n> (((\"domain\")::text = 'RT::Queue-Role'::text) OR ((\"domain\")::text =\n> 'SystemInternal'::text) OR ((\"domain\")::text = 'UserDefined'::text) OR\n> ((\"domain\")::text = 'ACLEquivalence'::text))\n> \n> Total runtime: 15825.022 ms\n> \n> ********************************************************************************\n> rttest=# \\d users\n> Table \"public.users\"\n> Column | Type | \n> Modifiers \n> -----------------------+-----------------------------+------------------------------------------------\n> id | integer | not null default\n> nextval('users_id_seq'::text)\n> name | character varying(200) | not null\n> password | character varying(40) | \n> comments | text | \n> signature | text | \n> emailaddress | character varying(120) | \n> freeformcontactinfo | text | \n> organization | character varying(200) | \n> realname | character varying(120) | \n> nickname | character varying(16) | \n> lang | character varying(16) | \n> emailencoding | character varying(16) | \n> webencoding | character varying(16) | \n> externalcontactinfoid | character varying(100) | \n> contactinfosystem | character varying(30) | \n> externalauthid | character varying(100) | \n> authsystem | character varying(30) | \n> gecos | character varying(16) | \n> homephone | character varying(30) | \n> workphone | character varying(30) | \n> mobilephone | character varying(30) | \n> pagerphone | character varying(30) | \n> address1 | character varying(200) | \n> address2 | character varying(200) | \n> city | character varying(100) | \n> state | character varying(100) | \n> zip | character varying(16) | \n> country | character varying(50) | \n> timezone | character varying(50) | \n> pgpkey | text | \n> creator | integer | not null default\n> 0\n> created | timestamp without time zone | \n> lastupdatedby | integer | not null default\n> 0\n> lastupdated | timestamp without time zone | \n> Indexes:\n> \"users_pkey\" primary key, btree (id)\n> \"users1\" unique, btree (name)\n> \"users2\" btree (name)\n> \"users3\" btree (id, emailaddress)\n> \"users4\" btree (emailaddress)\n> ********************************************************************************\n> rttest=# \\d 
principals\n> \n> Table \"public.principals\"\n> Column | Type | \n> Modifiers \n> ---------------+-----------------------+----------------------------------------\n> id | integer | not null default\n> nextval('principals_id_seq'::text)\n> principaltype | character varying(16) | not null\n> objectid | integer | \n> disabled | smallint | not null default 0\n> Indexes:\n> \"principals_pkey\" primary key, btree (id)\n> \"principals2\" btree (objectid)\n> \n> ********************************************************************************\n> rttest=# \\d acl\n> \n> Table \"public.acl\"\n> Column | Type | \n> Modifiers \n> ---------------+-----------------------+----------------------------------------------\n> id | integer | not null default\n> nextval('acl_id_seq'::text)\n> principaltype | character varying(25) | not null\n> principalid | integer | not null\n> rightname | character varying(25) | not null\n> objecttype | character varying(25) | not null\n> objectid | integer | not null default 0\n> delegatedby | integer | not null default 0\n> delegatedfrom | integer | not null default 0\n> Indexes:\n> \"acl_pkey\" primary key, btree (id)\n> \"acl1\" btree (rightname, objecttype, objectid, principaltype,\n> principalid)\n> \n> \n> ********************************************************************************\n> rttest=# \\d groups\n> \n> Table \"public.groups\"\n> Column | Type | \n> Modifiers \n> -------------+------------------------+-------------------------------------------------\n> id | integer | not null default\n> nextval('groups_id_seq'::text)\n> name | character varying(200) | \n> description | character varying(255) | \n> domain | character varying(64) | \n> type | character varying(64) | \n> instance | integer | \n> Indexes:\n> \"groups_pkey\" primary key, btree (id)\n> \"groups1\" unique, btree (\"domain\", instance, \"type\", id, name)\n> \"groups2\" btree (\"type\", instance, \"domain\")\n> \n> \n> ********************************************************************************\n> rttest=# \\d cachedgroupmembers\"\n> \n> Table \"public.cachedgroupmembers\"\n> Column | Type | \n> Modifiers \n> -------------------+----------+-------------------------------------------------------------\n> id | integer | not null default\n> nextval('cachedgroupmembers_id_seq'::text)\n> groupid | integer | \n> memberid | integer | \n> via | integer | \n> immediateparentid | integer | \n> disabled | smallint | not null default 0\n> Indexes:\n> \"cachedgroupmembers_pkey\" primary key, btree (id)\n> \"cachedgroupmembers2\" btree (memberid)\n> \"cachedgroupmembers3\" btree (groupid)\n> \"disgroumem\" btree (groupid, memberid, disabled)\n> \n> \n> ********************************************************************************\n> \n> -- \n> Rafael Martinez, <[email protected]>\n> Center for Information Technology Services\n> University of Oslo, Norway\n> \n> PGP Public Key: http://folk.uio.no/rafael/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 9 Feb 2006 13:33:45 -0600", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" }, { "msg_contents": "First I'm wondering if the tables have been recently analyzed. If an\nanalyze has been run recently, then it is probably a good idea to look\nat the statistics target.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jim C.\nNasby\nSent: Thursday, February 09, 2006 1:34 PM\nTo: Rafael Martinez Guerrero\nCc: [email protected]\nSubject: Re: [PERFORM] Help with optimizing a sql statement\n\nAt least part of the problem is that it's way off on some of the row\nestimates. I'd suggest upping the statisticss target on at least all of\nthe join columns to at least 100. (Note that it's doing a nested loop\nthinking it will have only 2 rows but it actually has 22000 rows).\n\nOn Thu, Feb 09, 2006 at 04:10:27PM +0100, Rafael Martinez Guerrero\nwrote:\n> Hello\n> \n> We are running an application via web that use a lot of time to\nperform\n> some operations. We are trying to find out if some of the sql\nstatements\n> used are the reason of the slow speed.\n> \n> We have identified a sql that takes like 4-5000ms more than the second\n> slowest sql in out test server. I hope that we will get some help to\ntry\n> to optimize it.\n> \n> Thanks in advance for any help.\n> \n[Snip]\n\n", "msg_date": "Thu, 9 Feb 2006 13:44:22 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" }, { "msg_contents": "I looked at the estimates for the table access methods and they all\nlooked ok, so I think the statistics are pretty up-to-date; there just\naren't enough of them for the planner to do a good job.\n\nOn Thu, Feb 09, 2006 at 01:44:22PM -0600, Dave Dutcher wrote:\n> First I'm wondering if the tables have been recently analyzed. If an\n> analyze has been run recently, then it is probably a good idea to look\n> at the statistics target.\n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Jim C.\n> Nasby\n> Sent: Thursday, February 09, 2006 1:34 PM\n> To: Rafael Martinez Guerrero\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Help with optimizing a sql statement\n> \n> At least part of the problem is that it's way off on some of the row\n> estimates. I'd suggest upping the statisticss target on at least all of\n> the join columns to at least 100. (Note that it's doing a nested loop\n> thinking it will have only 2 rows but it actually has 22000 rows).\n> \n> On Thu, Feb 09, 2006 at 04:10:27PM +0100, Rafael Martinez Guerrero\n> wrote:\n> > Hello\n> > \n> > We are running an application via web that use a lot of time to\n> perform\n> > some operations. We are trying to find out if some of the sql\n> statements\n> > used are the reason of the slow speed.\n> > \n> > We have identified a sql that takes like 4-5000ms more than the second\n> > slowest sql in out test server. I hope that we will get some help to\n> try\n> > to optimize it.\n> > \n> > Thanks in advance for any help.\n> > \n> [Snip]\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 9 Feb 2006 13:46:03 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" }, { "msg_contents": "On Thu, 2006-02-09 at 13:46 -0600, Jim C. Nasby wrote:\n> I looked at the estimates for the table access methods and they all\n> looked ok, so I think the statistics are pretty up-to-date; there just\n> aren't enough of them for the planner to do a good job.\n> \n\nVACUUM ANALYZE runs 4 times every hour, so yes, statistics are\nup-to-date. I will increase default_statistics_target tomorrow at work\nand see what happens.\n\nThanks for your help.\n\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n", "msg_date": "Thu, 09 Feb 2006 23:44:42 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" }, { "msg_contents": "Rafael Martinez Guerrero <[email protected]> writes:\n> WHERE ((ACL_2.RightName = 'OwnTicket')) \n> AND ((CachedGroupMembers_4.MemberId = Principals_1.id)) \n> AND ((Groups_3.id = CachedGroupMembers_4.GroupId)) \n> AND ((Principals_1.Disabled = '0') or (Principals_1.Disabled = '0')) \n> AND ((Principals_1.id != '1')) \n> AND ((main.id = Principals_1.id)) \n> AND ( ( ACL_2.PrincipalId = Groups_3.id AND ACL_2.PrincipalType =\n> 'Group' AND ( Groups_3.Domain = 'SystemInternal' OR Groups_3.Domain =\n> 'UserDefined' OR Groups_3.Domain = 'ACLEquivalence')) OR ( (\n> (Groups_3.Domain = 'RT::Queue-Role' ) ) AND Groups_3.Type\n> =ACL_2.PrincipalType) )\n> AND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue')\n> ) \n\nAre you sure this WHERE clause really expresses your intent? It seems\nawfully oddly constructed. Removing the redundant parens and clarifying\nthe layout, I get\n\nWHERE ACL_2.RightName = 'OwnTicket'\nAND CachedGroupMembers_4.MemberId = Principals_1.id\nAND Groups_3.id = CachedGroupMembers_4.GroupId\nAND (Principals_1.Disabled = '0' or Principals_1.Disabled = '0') \nAND Principals_1.id != '1'\nAND main.id = Principals_1.id\nAND ( ( ACL_2.PrincipalId = Groups_3.id AND ACL_2.PrincipalType = 'Group' AND\n (Groups_3.Domain = 'SystemInternal' OR Groups_3.Domain = 'UserDefined' OR Groups_3.Domain = 'ACLEquivalence') )\n OR\n ( Groups_3.Domain = 'RT::Queue-Role' AND Groups_3.Type = ACL_2.PrincipalType )\n )\nAND (ACL_2.ObjectType = 'RT::System' OR ACL_2.ObjectType = 'RT::Queue') \n\nThat next-to-last major AND clause seems a rather unholy mix of join and\nrestriction clauses; I wonder if it's not buggy in itself. If it is\ncorrect, I think most of the performance problem comes from the fact\nthat the planner can't break it down into independent clauses. You\nmight try getting rid of the central OR in favor of doing a UNION of\ntwo queries that comprise all the other terms. More repetitious, but\nwould likely perform better.\n\nBTW, what PG version is this? 
It looks to me like it's doing some\nmanipulations of the WHERE clause that we got rid of a couple years ago.\nIf this is 7.4 or older then you really ought to be thinking about an\nupdate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Feb 2006 18:22:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement " }, { "msg_contents": "On Thu, 2006-02-09 at 18:22 -0500, Tom Lane wrote:\n> Rafael Martinez Guerrero <[email protected]> writes:\n> > WHERE ((ACL_2.RightName = 'OwnTicket')) \n> > AND ((CachedGroupMembers_4.MemberId = Principals_1.id)) \n> > AND ((Groups_3.id = CachedGroupMembers_4.GroupId)) \n> > AND ((Principals_1.Disabled = '0') or (Principals_1.Disabled = '0')) \n> > AND ((Principals_1.id != '1')) \n> > AND ((main.id = Principals_1.id)) \n> > AND ( ( ACL_2.PrincipalId = Groups_3.id AND ACL_2.PrincipalType =\n> > 'Group' AND ( Groups_3.Domain = 'SystemInternal' OR Groups_3.Domain =\n> > 'UserDefined' OR Groups_3.Domain = 'ACLEquivalence')) OR ( (\n> > (Groups_3.Domain = 'RT::Queue-Role' ) ) AND Groups_3.Type\n> > =ACL_2.PrincipalType) )\n> > AND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue')\n> > ) \n> \n> Are you sure this WHERE clause really expresses your intent? It seems\n> awfully oddly constructed. Removing the redundant parens and clarifying\n> the layout, I get\n> \n[............]\n\nThis is an application that we have not programmed, so I am not sure\nwhat they are trying to do here. I will contact the developers. Tomorrow\nI will try to test some of your suggestions.\n\n> BTW, what PG version is this? It looks to me like it's doing some\n> manipulations of the WHERE clause that we got rid of a couple years ago.\n> If this is 7.4 or older then you really ought to be thinking about an\n> update.\n> \n\nWe are running 7.4.8 in this server and will upgrade to 8.0.6 in a few\nweeks.\n\nThanks.\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n", "msg_date": "Fri, 10 Feb 2006 00:36:34 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" }, { "msg_contents": "\nOn Feb 9, 2006, at 6:36 PM, Rafael Martinez wrote:\n\n> This is an application that we have not programmed, so I am not sure\n> what they are trying to do here. I will contact the developers. \n> Tomorrow\n> I will try to test some of your suggestions.\n\nwell, obviously you're running RT... what you want to do is update \nall your software to the latest versions. in particular update RT to \n3.4.5 and all the dependent modules to their latest. We run with Pg \n8.0 which is plenty fast. one of these days I'll update to 8.1 but \nneed to test it out first. i'm not sure how much RT has been tested \nagainst 8.1\n\n", "msg_date": "Thu, 9 Feb 2006 21:28:53 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Are you sure this WHERE clause really expresses your intent? It seems\n> awfully oddly constructed. Removing the redundant parens and clarifying\n> the layout, I get\n...\n> That next-to-last major AND clause seems a rather unholy mix of join and\n> restriction clauses; I wonder if it's not buggy in itself. 
\n\nFYI RT uses a perl module called SearchBuilder which constructs these queries\ndynamically. So he's probably not really free to fiddle with the query all he\nwants.\n\nAt the very least I would suggest checking the changelog for SearchBuilder for\nmore recent versions. There have been a lot of tweaks for working with\nPostgres. In the past it really only worked properly with MySQL.\n\n-- \ngreg\n\n", "msg_date": "10 Feb 2006 01:05:42 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a sql statement" } ]
[ { "msg_contents": "\nHello All,\n\nI've inherited a postgresql database that I would like to refactor. It \nwas origionally designed for Postgres 7.0 on a PIII 500Mhz and some \ndesign decisions were made that don't make sense any more. Here's the \nproblem:\n\n1) The database is very large, the largest table has 40 million tuples.\n\n2) The database needs to import 10's of thousands of tuples each night \nquickly. The current method is VERY slow.\n\n3) I can't import new records with a COPY or drop my indexes b/c some of \nthem are new records (INSERTS) and some are altered records (UPDATES) \nand the only way I can think of to identify these records is to perform \na select for each record.\n\nHere is how the database is currently laid out and you'll see why I have \na problem with it\n\n1) The data is easily partitionable by client ID. In an attempt to keep \nthe indexes small and the inserts fast one table was made per client \nID. Thus the primary table in the database (the one with 40 million \ntuples) is really 133 tables each ending with a three digit suffix. \nThe largest of these client tables has 8 million of the 40 million \ntuples. The system started with around a half dozen clients and is now \na huge pain to manage with so many tables. I was hoping new hardware \nand new postgres features would allow for this data to be merged safely \ninto a single table.\n\n2) The imports are not done inside of transactions. I'm assuming the \nsystem designers excluded this for a reason. Will I run into problems \nperforming tens of thousands of inserts and updates inside a single \ntransaction?\n\n3) The current code that bulk loads data into the database is a loop \nthat looks like this:\n\n $result = exe(\"INSERT INTO $table ($name_str) SELECT \n$val_str WHERE NOT EXISTS (SELECT 1 FROM $table WHERE $keys)\");\n if ($result == 0)\n {\n $result = exe(\"UPDATE $table SET $non_keys WHERE \n$keys\");\n } \n\nIs there a faster way to bulk load data when it's not known ahead of \ntime if it's a new record or an updated record?\n\nWhat I would LIKE to do but am afraid I will hit a serious performance \nwall (or am missing an obvious / better way to do it)\n\n1) Merge all 133 client tables into a single new table, add a client_id \ncolumn, do the data partitioning on the indexes not the tables as seen here:\n\n CREATE INDEX actioninfo_order_number_XXX_idx ON actioninfo ( \norder_number ) WHERE client_id = XXX;\n CREATE INDEX actioninfo_trans_date_XXX_idx ON actioninfo ( \ntransaction_date ) WHERE client_id = XXX;\n\n (Aside question: if I were to find a way to use COPY and I were \nloading data on a single client_id, would dropping just the indexes for \nthat client_id accelerate the load?)\n\n2) Find some way to make the bulk loads faster or more efficent (help!)\n\n3) Wrap each load into a transaction ( tens of thousands of records per \nload )\n\nIs this a good plan? Is there a better way? Am I walking into a trap? 
\nShould I leave well enough alone and not try and fix something that's \nnot broken?\n\nFWIW here's the hardware and the profile of the current uber table:\n\n Column | Type | Modifiers\n-------------------+---------+-----------\n order_number | integer | not null\n order_line_number | integer | not null\n action_number | integer | not null\n transaction_date | date |\n code | text |\n trans_group_code | text |\n quantity | integer |\n extension | money |\n sales_tax | money |\n shipping | money |\n discount | money |\n\nDual Opteron 246, 4 disk SCSI RAID5, 4GB of RAM\n\n# du -sh /var/lib/postgres/data/\n16G /var/lib/postgres/data/ \n\n( the current database is PG 7.4 - I intend to upgrade it to 8.1 if and \nwhen I do this refactoring )\n( the current OS is Debian Unstable but I intend to be running RHEL 4.0 \nif and when I do this refactoring )\n\n", "msg_date": "Thu, 09 Feb 2006 11:07:06 -0800", "msg_from": "Orion Henry <[email protected]>", "msg_from_op": true, "msg_subject": "Large Database Design Help" }, { "msg_contents": "\nOrion Henry <[email protected]> writes:\n\n> What I would LIKE to do but am afraid I will hit a serious performance wall\n> (or am missing an obvious / better way to do it)\n> \n> 1) Merge all 133 client tables into a single new table, add a client_id column,\n> do the data partitioning on the indexes not the tables as seen here:\n> \n> CREATE INDEX actioninfo_order_number_XXX_idx ON actioninfo ( order_number )\n> WHERE client_id = XXX;\n> CREATE INDEX actioninfo_trans_date_XXX_idx ON actioninfo ( transaction_date )\n> WHERE client_id = XXX;\n\nThe advantages to the partitioned scheme are a) you can drop a client quickly\nin a single operation b) the indexes are only half as wide since they don't\ninclude client_id and c) you can do a sequential scan of an entire client\nwithout using the index at all.\n\nUnless any of these are overwhelming I would say to go ahead and merge them.\nIf you frequently scan all the records of a single client or frequently drop\nentire clients then the current scheme may be helpful.\n\n> (Aside question: if I were to find a way to use COPY and I were loading\n> data on a single client_id, would dropping just the indexes for that client_id\n> accelerate the load?)\n\nDropping indexes would accelerate the load but unless you're loading a large\nnumber of records relative the current size I'm not sure it would be a win\nsince you would then have to rebuild the index for the entire segment.\n\n> 2) Find some way to make the bulk loads faster or more efficent (help!)\n\nIf your existing data isn't changing while you're doing the load (and if it is\nthen your existing load process has a race condition btw) then you could do it\nin a couple big queries:\n\nCOPY ${table}_new FROM '...';\nCREATE TABLE ${table}_exists as SELECT * FROM ${table}_new WHERE EXISTS (select 1 from $table where ${table}_new.key = $table.key);\nCREATE TABLE ${table}_insert as SELECT * FROM ${table}_new WHERE NOT EXISTS (select 1 from $table where ${table}_new.key = $table.key);\n\nUPDATE $table set ... FROM ${table}_exists WHERE ${table}_exists.key = ${table}.key\nINSERT INTO $table (select * from ${table}_insert)\n\nactually you could skip the whole ${table_insert} step there and just do the\ninsert I guess. 
There are also other approaches you could use like adding a\nnew column to ${table}_new instead of creating new tables, etc.\n\n> 3) Wrap each load into a transaction ( tens of thousands of records per load )\n\nYes, Postgres is faster if you do more operations in a single transaction.\nEvery COMMIT means waiting for an fsync. The only disadvantage to batching\nthem into a large transaction is if it lasts a *long* time then it could\ncreate problems with your vacuum strategy. Any vacuum that runs while the\ntransaction is still running won't be able to vacuum anything.\n\nYou might consider running VACUUM FULL or CLUSTER on the table when you're\ndone with the loading process. It will lock the table while it runs though. \n\n-- \ngreg\n\n", "msg_date": "09 Feb 2006 14:45:02 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "On Thu, Feb 09, 2006 at 11:07:06AM -0800, Orion Henry wrote:\n> \n> Hello All,\n> \n> I've inherited a postgresql database that I would like to refactor. It \n> was origionally designed for Postgres 7.0 on a PIII 500Mhz and some \n> design decisions were made that don't make sense any more. Here's the \n> problem:\n> \n> 1) The database is very large, the largest table has 40 million tuples.\n> \n> 2) The database needs to import 10's of thousands of tuples each night \n> quickly. The current method is VERY slow.\n> \n> 3) I can't import new records with a COPY or drop my indexes b/c some of \n> them are new records (INSERTS) and some are altered records (UPDATES) \n> and the only way I can think of to identify these records is to perform \n> a select for each record.\n> \n> Here is how the database is currently laid out and you'll see why I have \n> a problem with it\n> \n> 1) The data is easily partitionable by client ID. In an attempt to keep \n> the indexes small and the inserts fast one table was made per client \n> ID. Thus the primary table in the database (the one with 40 million \n> tuples) is really 133 tables each ending with a three digit suffix. \n> The largest of these client tables has 8 million of the 40 million \n> tuples. The system started with around a half dozen clients and is now \n> a huge pain to manage with so many tables. I was hoping new hardware \n> and new postgres features would allow for this data to be merged safely \n> into a single table.\n\nIf all the clients are equally active then partitioning by client\nprobably makes little sense. If some clients are much more active than\nothers then leaving this as-is could be a pretty big win. If the\npartitioning is done with either a view and rules or inherited tables\nand rules it shouldn't be too hard to manage.\n\n> 2) The imports are not done inside of transactions. I'm assuming the \n> system designers excluded this for a reason. Will I run into problems \n> performing tens of thousands of inserts and updates inside a single \n> transaction?\n\nNever attribute to thoughtful design that which can be fully explained\nby ignorance. 
:) I'd bet they just didn't know any better.\n\n> 3) The current code that bulk loads data into the database is a loop \n> that looks like this:\n> \n> $result = exe(\"INSERT INTO $table ($name_str) SELECT \n> $val_str WHERE NOT EXISTS (SELECT 1 FROM $table WHERE $keys)\");\n> if ($result == 0)\n> {\n> $result = exe(\"UPDATE $table SET $non_keys WHERE \n> $keys\");\n> } \n> \n> Is there a faster way to bulk load data when it's not known ahead of \n> time if it's a new record or an updated record?\n\nUuuugly. :) Instead, load everything into a temp table using COPY and\nthen UPDATE real_table ... FROM temp_table t WHERE real_table.key =\nt.key and INSERT SELECT ... WHERE NOT EXISTS. But take note that this is\na race condition so you can only do it if you know nothing else will be\ninserting into the real table at the same time.\n\nYou might want to look at the stats-proc code at\nhttp://cvs.distributed.net; it does exactly this type of thing.\n\n> What I would LIKE to do but am afraid I will hit a serious performance \n> wall (or am missing an obvious / better way to do it)\n> \n> 1) Merge all 133 client tables into a single new table, add a client_id \n> column, do the data partitioning on the indexes not the tables as seen here:\n> \n> CREATE INDEX actioninfo_order_number_XXX_idx ON actioninfo ( \n> order_number ) WHERE client_id = XXX;\n> CREATE INDEX actioninfo_trans_date_XXX_idx ON actioninfo ( \n> transaction_date ) WHERE client_id = XXX;\n> \n> (Aside question: if I were to find a way to use COPY and I were \n> loading data on a single client_id, would dropping just the indexes for \n> that client_id accelerate the load?)\n\nHrm, I believe it would...\n\n> 2) Find some way to make the bulk loads faster or more efficent (help!)\nDon't do things row-by-row. If you can't ensure that there will be only\none process inserting to eliminate the race condition I mentioned above\nthen reply back and I'll point you at code that should still be much\nfaster than what you're doing now.\n\n> 3) Wrap each load into a transaction ( tens of thousands of records per \n> load )\n\nGetting rid of row-by-row will be your biggest win. If you do have to do\nrow-by-row, at least wrap it in a transaction. As long as the\ntransaction doesn't take *too* long it won't be an issue.\n\n> Is this a good plan? Is there a better way? Am I walking into a trap? \n> Should I leave well enough alone and not try and fix something that's \n> not broken?\n> \n> FWIW here's the hardware and the profile of the current uber table:\n> \n> Column | Type | Modifiers\n> -------------------+---------+-----------\n> order_number | integer | not null\n> order_line_number | integer | not null\n> action_number | integer | not null\n> transaction_date | date |\n> code | text |\n> trans_group_code | text |\n> quantity | integer |\n> extension | money |\n> sales_tax | money |\n> shipping | money |\n> discount | money |\n> \n> Dual Opteron 246, 4 disk SCSI RAID5, 4GB of RAM\n\nRemember that the write performance of raid5 normally stinks.\n\n> # du -sh /var/lib/postgres/data/\n> 16G /var/lib/postgres/data/ \n> \n> ( the current database is PG 7.4 - I intend to upgrade it to 8.1 if and \n> when I do this refactoring )\n\nGoing to 8.1 would help in a large number of ways even if you don't\nrefactor. The stats-proc code I mentioned runs 2x faster under 8.1 than\nit does under 7.4.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 9 Feb 2006 13:45:07 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "On 2/9/06, Orion Henry <[email protected]> wrote:\n>\n> Hello All,\n>\n> I've inherited a postgresql database that I would like to refactor. It\n> was origionally designed for Postgres 7.0 on a PIII 500Mhz and some\n> design decisions were made that don't make sense any more. Here's the\n> problem:\n>\n> 1) The database is very large, the largest table has 40 million tuples.\n>\n> 2) The database needs to import 10's of thousands of tuples each night\n> quickly. The current method is VERY slow.\n>\n> 3) I can't import new records with a COPY or drop my indexes b/c some of\n> them are new records (INSERTS) and some are altered records (UPDATES)\n> and the only way I can think of to identify these records is to perform\n> a select for each record.\n\n [snip]\n>\n> 3) The current code that bulk loads data into the database is a loop\n> that looks like this:\n>\n> $result = exe(\"INSERT INTO $table ($name_str) SELECT\n> $val_str WHERE NOT EXISTS (SELECT 1 FROM $table WHERE $keys)\");\n> if ($result == 0)\n> {\n> $result = exe(\"UPDATE $table SET $non_keys WHERE\n> $keys\");\n> }\n>\n> Is there a faster way to bulk load data when it's not known ahead of\n> time if it's a new record or an updated record?\n\nI experimented with something like this and I was able to successively\ndecrease the amount of time needed with an import. The final solution\nthat took my import down from aproximately 24 hours to about 30 min\nwas to use a C#/Java hashtable or a python dictionary. For example,\nthe unique data in one particular table was \"User_Agent\" so I made it\nthe key in my hashtable. I actually added a method to the hashtable so\nthat when I added a new record to the hashtable it would do the insert\ninto the db.\n\nThe downside to this is that it used *GOBS* of RAM.\n\nUsing Python, I was able to dramatically decrease the ram usage by\nswitching to a GDB based dictionary instead of the standard\ndictionary. 
It only increased the time by about 50% so the total\nprocessing time was about 45 min vs the previous 30 min.\n\nI only had about 35 million records and my technique was getting to\nthe point where it was unweldy, so with your 40 million and counting\nrecords you would probably want to start with the GDB technique unless\nyou have a ton of available ram.\n\nYou might interpret this as being a knock against PostgreSQL since I\npulled the data out of the db, but it's not; You'd be hard pressed to\nfind anything as fast as the in-memory hashtable or the on disk GDB;\nhowever it's usefullness is very limited and for anything more complex\nthan just key=>value lookups moving to PostgreSQL is likely a big win.\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Thu, 9 Feb 2006 15:49:56 -0600", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "Hi, Greg,\n\nGreg Stark wrote:\n\n>> (Aside question: if I were to find a way to use COPY and I were loading\n>>data on a single client_id, would dropping just the indexes for that client_id\n>>accelerate the load?)\n> Dropping indexes would accelerate the load but unless you're loading a large\n> number of records relative the current size I'm not sure it would be a win\n> since you would then have to rebuild the index for the entire segment.\n\nAnd, additionally, rebuilding a partial index with \"WHERE client_id=42\"\nneeds a full table scan, which is very slow, so temporarily dropping the\nindices will not be useful if you merge the tables.\n\nBtw, I don't know whether PostgreSQL can make use of partial indices\nwhen building other partial indices. If yes, you could temporarily drop\nall but one of the partial indices for a specific client.\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 10 Feb 2006 11:04:33 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "Hi, Henry,\n\nOrion Henry wrote:\n\n> 1) The database is very large, the largest table has 40 million tuples.\n\nI'm afraid this doesn't qualify as '_very_ large' yet, but it\ndefinitively is large enough to have some deep thoughts about it. :-)\n\n> 1) The data is easily partitionable by client ID. In an attempt to keep\n> the indexes small and the inserts fast one table was made per client\n> ID. Thus the primary table in the database (the one with 40 million\n> tuples) is really 133 tables each ending with a three digit suffix. \n> The largest of these client tables has 8 million of the 40 million\n> tuples. The system started with around a half dozen clients and is now\n> a huge pain to manage with so many tables. I was hoping new hardware\n> and new postgres features would allow for this data to be merged safely\n> into a single table.\n\nIt possibly is a good idea to merge them.\n\nIf you decide to keep them separated for whatever reason, you might want\nto use schemas instead of three digit suffixes. Together with\nappropriate named users or 'set search_path', this may help you to\nsimplify your software.\n\nIn case you want to keep separate tables, but need some reports touching\nall tables from time to time, table inheritance may help you. 
Just\ncreate a base table, and then inherit all user specific tables from that\nbase table. Of course, this can be combined with the schema approach by\nhaving the child tables in their appropriate schemas.\n\n> 2) The imports are not done inside of transactions. I'm assuming the\n> system designers excluded this for a reason. Will I run into problems\n> performing tens of thousands of inserts and updates inside a single\n> transaction?\n\nYes, it should give you a huge boost. Every commit has to flush the WAL\nout to disk, which takes at least one disk spin. So on a simple 7200 RPM\ndisk, you cannot have more than 120 transactions/second.\n\nIt may make sense to split such a bulk load into transactions of some\ntens of thousands of rows, but that depends on how easy it is for your\napplication to resume in the middle of the bulk if the connection\naborts, and how much concurrent access you have on the backend.\n\n> 3) The current code that bulk loads data into the database is a loop\n> that looks like this:\n> \n> $result = exe(\"INSERT INTO $table ($name_str) SELECT\n> $val_str WHERE NOT EXISTS (SELECT 1 FROM $table WHERE $keys)\");\n> if ($result == 0)\n> {\n> $result = exe(\"UPDATE $table SET $non_keys WHERE\n> $keys\");\n> } \n> Is there a faster way to bulk load data when it's not known ahead of\n> time if it's a new record or an updated record?\n\nPerhaps the easiest way might be to issue the update first. Update\nreturns a row count of the updated rows. If it is 0, you have to insert\nthe row.\n\nThis can even be encapsulated into a \"before insert\" trigger on the\ntable, which tries the update and ignores the insert if the update\nsucceeded. This way, you can even use COPY on the client side.\n\nWe're using this approach for one of our databases, where a client side\ncrash can result in occasional duplicates being COPYed to the table.\n\n> Dual Opteron 246, 4 disk SCSI RAID5, 4GB of RAM\n\nFor lots non-read-only database workloads, RAID5 is a performance\nkiller. Raid 1/0 might be better, or having two mirrors of two disks\neach, the first mirror holding system, swap, and the PostgreSQL WAL\nfiles, the second one holding the data. Don't forget to tune the\npostgresql settings appropriately. :-)\n\n> # du -sh /var/lib/postgres/data/\n> 16G /var/lib/postgres/data/ \n\nYour database seems to be small enough to fit on a single disk, so the\ntwo mirrors approach I described above will be the best IMHO.\n\n> ( the current database is PG 7.4 - I intend to upgrade it to 8.1 if and\n> when I do this refactoring )\n\nThis is a very good idea, 8.1 is miles ahead of 7.4 in many aspects.\n\n> ( the current OS is Debian Unstable but I intend to be running RHEL 4.0\n> if and when I do this refactoring )\n\nThis should not make too much difference.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 10 Feb 2006 11:24:42 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "\n> was origionally designed for Postgres 7.0 on a PIII 500Mhz and some\n\n\tArgh.\n\n> 1) The database is very large, the largest table has 40 million tuples.\n\n\tIs this simple types (like a few ints, text...) ?\n\tHow much space does it use on disk ? can it fit in RAM ?\n\n> 2) The database needs to import 10's of thousands of tuples each night \n> quickly. 
The current method is VERY slow.\n\n\tYou bet, COMMIT'ing after each insert or update is about the worst that \ncan be done. It works fine on MySQL/MyISAM (which doesn't know about \ncommit...) so I'd guess the system designer had a previous experience with \nMySQL.\n\n\tMy advice woule be :\n\n\t- get a decent machine with some RAM (I guess you already knew this)...\n\n\tNow, the update.\n\n\tI would tend to do this :\n\n- Generate a text file with your update data, using whatever tool you like \nbest (perl, php, python, java...)\n- CREATE TEMPORARY TABLE blah ...\n- COPY blah FROM your update file.\n\n\tCOPY is super fast. I think temporary tables don't write to the xlog, so \nthey are also very fast. This should not take more than a few seconds for \na few 10 K's of simple rows on modern hardware. It actually takes a \nfraction of a second on my PC for about 9K rows with 5 INTEGERs on them.\n\n\tYou can also add constraints on your temporary table, to sanitize your \ndata, in order to be reasonably sure that the following updates will work.\n\n\tThe data you feed to copy should be correct, or it will rollback. This is \nyour script's job to escape everything.\n\n\tNow you got your data in the database. You have several options :\n\n\t- You are confident that the UPDATE will work without being rolled back \nby some constraint violation. Therefore, you issue a big joined UPDATE to \nupdate all the rows in your main table which are also in your temp table. \nThen you issue an INSERT INTO ... SELECT ... to insert the ones which were \nnot already in the big table.\n\n\tJoined updates can be slow if your RAM is too small and it has to thrash \nthe disk looking for every tuple around.\n\tYou can cheat and CLUSTER your main table (say, once a week), so it is \nall in index order. Then you arrange your update data so it is in the same \norder (for instance, you SELECT INTO another temp table, with an ORDER BY \ncorresponding to the CLUSTER on the main table). Having both in the same \norder will help reducing random disk accesses.\n\n\t- If you don't like this method, then you might want to use the same \nstrategy as before (ie. a zillion queries), but write it in PSQL instead. \nPSQL is a lot faster, because everything is already parsed and planned \nbeforehand. So you could do the following :\n\n- for each row in the temporary update table :\n\t- UPDATE the corresponding row in the main table\n\t\t- IF FOUND, then cool, it was updated, nothing more to do.\n\t\t You don't need to SELECT in order to know if the row is there.\n\t\t UPDATE does it for you, without the race condition.\n\t\t- IF NOT FOUND, then insert.\n\t\tThis has a race condition.\n\t\tYou know your application, so you'll know if it matters or not.\n\n\tWhat do you think ?\n\n> 3) I can't import new records with a COPY or drop my indexes b/c some of \n> them are new records (INSERTS) and some are altered records (UPDATES) \n> and the only way I can think of to identify these records is to perform \n> a select for each record.\n\n\tYes and no ; if you must do this, then use PSQL, it's a lot faster. And \nskip the SELECT.\n\tAlso, use the latest version. It really rocks.\n\tLike many said on the list, put pg_xlog on its own physical disk, with \next2fs.\n\n> 3) Wrap each load into a transaction ( tens of thousands of records per \n> load )\n\n\tThat's the idea. The first strategy here (big update) uses one \ntransaction anyway. For the other one, your choice. 
You can either do it \nall in 1 transaction, or in bunches of 1000 rows... but 1 row at a time \nwould be horrendously slow.\n\n\tRegards,\n\n\tP.F.C\n", "msg_date": "Fri, 10 Feb 2006 20:48:53 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:\n\n> For lots non-read-only database workloads, RAID5 is a performance\n> killer. Raid 1/0 might be better, or having two mirrors of two disks\n> each, the first mirror holding system, swap, and the PostgreSQL WAL\n> files, the second one holding the data.\n\nI was under the impression that it is preferable to keep the WAL on \nits own spindles with no other activity there, to take full advantage\nof the sequential nature of the WAL writes.\n\nThat would mean one mirror for the WAL, and one for the rest.\nThis, of course, may sometimes be too much wasted disk space, as the WAL\ntypically will not use a whole disk, so you might partition this mirror\ninto a small ext2 filesystem for WAL, and use the rest for files seldom \naccessed, such as backups. \n\ngnari\n\n\n", "msg_date": "Fri, 10 Feb 2006 22:39:54 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" }, { "msg_contents": "On Fri, 2006-02-10 at 16:39, Ragnar wrote:\n> On Fri, 2006-02-10 at 11:24 +0100, Markus Schaber wrote:\n> \n> > For lots non-read-only database workloads, RAID5 is a performance\n> > killer. Raid 1/0 might be better, or having two mirrors of two disks\n> > each, the first mirror holding system, swap, and the PostgreSQL WAL\n> > files, the second one holding the data.\n> \n> I was under the impression that it is preferable to keep the WAL on \n> its own spindles with no other activity there, to take full advantage\n> of the sequential nature of the WAL writes.\n> \n> That would mean one mirror for the WAL, and one for the rest.\n> This, of course, may sometimes be too much wasted disk space, as the WAL\n> typically will not use a whole disk, so you might partition this mirror\n> into a small ext2 filesystem for WAL, and use the rest for files seldom \n> accessed, such as backups. \n\nWell, on most database servers, the actual access to the OS and swap\ndrives should drop to about zero over time, so this is a workable\nsolution if you've only got enough drives / drive slots for two mirrors.\n", "msg_date": "Fri, 10 Feb 2006 16:42:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Design Help" } ]
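A minimal sketch of the staging-table bulk upsert described in the thread above. The table and column names (orders, stage_orders, order_id, amount) and the file path are invented for illustration, and, as Jim Nasby notes above, the INSERT ... WHERE NOT EXISTS step is only safe when nothing else inserts into the real table at the same time:

BEGIN;
CREATE TEMP TABLE stage_orders (order_id integer, amount numeric);
-- COPY is far faster than row-by-row INSERTs; use COPY ... FROM STDIN when loading from the client side
COPY stage_orders FROM '/tmp/nightly_feed.txt' WITH DELIMITER ',';
-- update the rows that already exist
UPDATE orders SET amount = s.amount FROM stage_orders s WHERE orders.order_id = s.order_id;
-- insert the rest
INSERT INTO orders (order_id, amount)
SELECT s.order_id, s.amount FROM stage_orders s
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.order_id = s.order_id);
COMMIT;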
[ { "msg_contents": "So I'm trying to figure out how to optimize my PG install (8.0.3) to\nget better performance without dropping one of my indexes.\n\nBasically, I have a table of 5M records with 3 columns:\n\npri_key (SERIAL)\ndata char(48)\ngroupid integer\n\nthere is an additional unique index on the data column.\n\nThe problem is that when I update the groupid column for all the\nrecords, the query takes over 10hrs (after that I just canceled the\nupdate). Looking at iostat, top, vmstat shows I'm horribly disk IO\nbound (for data not WAL, CPU 85-90% iowait) and not swapping.\n\nDropping the unique index on data (which isn't used in the query),\nrunning the update and recreating the index runs in under 15 min. \nHence it's pretty clear to me that the index is the problem and\nthere's really nothing worth optimizing in my query.\n\nAs I understand from #postgresql, doing an UPDATE on one column causes\nall indexes for the effected row to have to be updated due to the way\nPG replaces the old row with a new one for updates. This seems to\nexplain why dropping the unique index on data solves the performance\nproblem.\n\ninteresting settings:\nshared_buffers = 32768\nmaintenance_work_mem = 262144\nfsync = true\nwal_sync_method = open_sync\nwal_buffers = 512\ncheckpoint_segments = 30\neffective_cache_size = 10000\nwork_mem = <default> (1024 i think?)\n\nbox:\nLinux 2.6.9-11EL (CentOS 4.1)\n2x Xeon 3.4 HT\n2GB of RAM (but Apache and other services are running)\n4 disk raid 10 (74G Raptor) for data\n4 disk raid 10 (7200rpm) for WAL\n\nother then throwing more spindles at the problem, any suggestions?\n\nThanks,\nAaron\n\n--\nAaron Turner\nhttp://synfin.net/\n", "msg_date": "Fri, 10 Feb 2006 00:16:49 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "10+hrs vs 15min because of just one index" }, { "msg_contents": "On 2/10/06, Aaron Turner <[email protected]> wrote:\n> So I'm trying to figure out how to optimize my PG install (8.0.3) to\n> get better performance without dropping one of my indexes.\n> Basically, I have a table of 5M records with 3 columns:\n> pri_key (SERIAL)\n> data char(48)\n> groupid integer\n> there is an additional unique index on the data column.\n> The problem is that when I update the groupid column for all the\n> records, the query takes over 10hrs (after that I just canceled the\n> update). Looking at iostat, top, vmstat shows I'm horribly disk IO\n> bound (for data not WAL, CPU 85-90% iowait) and not swapping.\n> Dropping the unique index on data (which isn't used in the query),\n\nfor such a large update i would suggest to go with different scenario:\nsplit update into packets (10000, or 50000 rows at the time)\nand do:\nupdate packet\nvacuum table\nfor all packets. and then reindex the table. 
should work much nicer.\n\ndepesz\n", "msg_date": "Fri, 10 Feb 2006 10:00:34 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "On 2/10/06, hubert depesz lubaczewski <[email protected]> wrote:\n> On 2/10/06, Aaron Turner <[email protected]> wrote:\n> > So I'm trying to figure out how to optimize my PG install (8.0.3) to\n> > get better performance without dropping one of my indexes.\n> > Basically, I have a table of 5M records with 3 columns:\n> > pri_key (SERIAL)\n> > data char(48)\n> > groupid integer\n> > there is an additional unique index on the data column.\n> > The problem is that when I update the groupid column for all the\n> > records, the query takes over 10hrs (after that I just canceled the\n> > update). Looking at iostat, top, vmstat shows I'm horribly disk IO\n> > bound (for data not WAL, CPU 85-90% iowait) and not swapping.\n> > Dropping the unique index on data (which isn't used in the query),\n>\n> for such a large update i would suggest to go with different scenario:\n> split update into packets (10000, or 50000 rows at the time)\n> and do:\n> update packet\n> vacuum table\n> for all packets. and then reindex the table. should work much nicer.\n\nThe problem is that all 5M records are being updated by a single\nUPDATE statement, not 5M individual statements. Also, vacuum can't\nrun inside of a transaction.\n\nOn a side note, is there any performance information on updating\nindexes (via insert/update) over the size of the column? Obviously,\nchar(48) is larger then most for indexing purposes, but I wonder if\nperformance drops linerally or exponentially as the column width\nincreases. Right now my column is hexidecimal... if I stored it as a\nbinary representation it would be smaller.\n\nThanks,\nAaron\n", "msg_date": "Fri, 10 Feb 2006 08:35:49 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "Aaron Turner wrote:\n> So I'm trying to figure out how to optimize my PG install (8.0.3) to\n> get better performance without dropping one of my indexes.\n\nWhat about something like this:\n\nbegin;\ndrop slow_index_name;\nupdate;\ncreate index slow_index_name;\ncommit;\nvacuum;\n\nMatt\n", "msg_date": "Fri, 10 Feb 2006 12:13:35 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "On 2/10/06, Matthew T. O'Connor <[email protected]> wrote:\n> Aaron Turner wrote:\n> > So I'm trying to figure out how to optimize my PG install (8.0.3) to\n> > get better performance without dropping one of my indexes.\n>\n> What about something like this:\n>\n> begin;\n> drop slow_index_name;\n> update;\n> create index slow_index_name;\n> commit;\n> vacuum;\n\nRight. That's exactly what I'm doing to get the update to occur in 15\nminutes. 
Unfortunately though, I'm basically at the point of every\ntime I insert/update into that table I have to drop the index which is\nmaking my life very painful (having to de-dupe records in RAM in my\napplication is a lot faster but also more complicated/error prone).\n\nBasically, I need some way to optimize PG so that I don't have to drop\nthat index every time.\n\nSuggestions?\n\n--\nAaron Turner\nhttp://synfin.net/\n", "msg_date": "Fri, 10 Feb 2006 09:24:39 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:\n> On 2/10/06, Matthew T. O'Connor <[email protected]> wrote:\n> > Aaron Turner wrote:\n> > > So I'm trying to figure out how to optimize my PG install (8.0.3) to\n> > > get better performance without dropping one of my indexes.\n> >\n> > What about something like this:\n> >\n> > begin;\n> > drop slow_index_name;\n> > update;\n> > create index slow_index_name;\n> > commit;\n> > vacuum;\n> \n> Right. That's exactly what I'm doing to get the update to occur in 15\n> minutes. Unfortunately though, I'm basically at the point of every\n> time I insert/update into that table I have to drop the index which is\n> making my life very painful (having to de-dupe records in RAM in my\n> application is a lot faster but also more complicated/error prone).\n> \n> Basically, I need some way to optimize PG so that I don't have to drop\n> that index every time.\n> \n> Suggestions?\n\nI think you'll have a tough time making this faster; or I'm just not\nunderstanding the problem well enough. It's probably time to start\nthinking about re-architecting some things in the application so that\nyou don't have to do this.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Sat, 11 Feb 2006 15:24:53 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "On 2/11/06, Jim C. Nasby <[email protected]> wrote:\n> On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:\n> > On 2/10/06, Matthew T. O'Connor <[email protected]> wrote:\n> > > Aaron Turner wrote:\n> >\n> > Basically, I need some way to optimize PG so that I don't have to drop\n> > that index every time.\n> >\n> > Suggestions?\n>\n> I think you'll have a tough time making this faster; or I'm just not\n> understanding the problem well enough. It's probably time to start\n> thinking about re-architecting some things in the application so that\n> you don't have to do this.\n\nWell before I go about re-architecting things, it would be good to\nhave a strong understanding of just what is going on. Obviously, the\nunique index on the char(48) is the killer. What I don't know is:\n\n1) Is this because the column is so long?\n2) Is this because PG is not optimized for char(48) (maybe it wants\npowers of 2? or doesn't like even numbers... I don't know, just\nthrowing it out there)\n3) Is there some algorithm I can use to estimate relative UPDATE\nspeed? 
Ie, if I cut the column length in 1/2 does that make it 50%\nfaster?\n4) Does decoding the data (currently base64) and storing the binary\ndata improve the distribution of the index, thereby masking it more\nefficent?\n\nObviously, one solution would be to store the column to be UPDATED in\na seperate joined table. That would cost more disk space, and be more\ncomplex, but it would be more efficient for updates (inserts would of\ncourse be more expensive since now I have to do two).\n\n--\nAaron Turner\nhttp://synfin.net/\n", "msg_date": "Sat, 11 Feb 2006 23:58:48 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "Aaron Turner <[email protected]> writes:\n> Well before I go about re-architecting things, it would be good to\n> have a strong understanding of just what is going on. Obviously, the\n> unique index on the char(48) is the killer. What I don't know is:\n\nYou have another unique index on the integer primary key, so it's not\nthe mere fact of a unique index that's hurting you.\n\n> 1) Is this because the column is so long?\n\nPossibly. Allowing for 12 bytes index-entry overhead, the char keys\nwould be 60 bytes vs 16 for the integer column, so this index is\nphysically almost 4x larger than the other. You might say \"but that\nshould only cause 4x more I/O\" but it's not necessarily so. What's\nhard to tell is whether you are running out of RAM disk cache space,\nresulting in re-reads of pages that could have stayed in memory when\ndealing with one-fifth as much index data. You did not show us the\niostat numbers for the two cases, but it'd be interesting to look at\nthe proportion of writes to reads on the data drive in both cases.\n\n> 2) Is this because PG is not optimized for char(48) (maybe it wants\n> powers of 2? or doesn't like even numbers... I don't know, just\n> throwing it out there)\n\nAre the key values really all 48 chars long? If not, you made a\nbad datatype choice: varchar(n) (or even text) would be a lot\nsmarter. char(n) wastes space on blank-padding.\n\nAnother thing to think about is whether this is C locale or not.\nString comparisons in non-C locales can be horrendously expensive\n... though I'd expect that to cost CPU not I/O. (Hmm ... is it\npossible your libc is hitting locale config files constantly?\nMight be worth strace'ing to confirm exactly where the I/O is\ngoing.)\n\n> 4) Does decoding the data (currently base64) and storing the binary\n> data improve the distribution of the index, thereby masking it more\n> efficent?\n\nNo, but it'd reduce the size of the index, which you certainly want.\nStoring as bytea would also eliminate any questions about wasteful\nlocale-dependent comparisons.\n\nThe only one of these effects that looks to me like it could result in\nworse-than-linear degradation of I/O demand is maxing out the available\nRAM for disk cache. 
So while improving the datatype choice would\nprobably be worth your while, you should first see if fooling with\nshared_buffers helps, and if not it's time to buy RAM not disk.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 12 Feb 2006 10:54:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index " }, { "msg_contents": "On 2/12/06, Tom Lane <[email protected]> wrote:\n> Aaron Turner <[email protected]> writes:\n> > Well before I go about re-architecting things, it would be good to\n> > have a strong understanding of just what is going on. Obviously, the\n> > unique index on the char(48) is the killer. What I don't know is:\n>\n> You have another unique index on the integer primary key, so it's not\n> the mere fact of a unique index that's hurting you.\n\nUnderstood. I just wasn't sure if in general unique indexes are some\nhow more expensive then non-unique indexes.\n\n> > 1) Is this because the column is so long?\n>\n> Possibly. Allowing for 12 bytes index-entry overhead, the char keys\n> would be 60 bytes vs 16 for the integer column, so this index is\n> physically almost 4x larger than the other. You might say \"but that\n> should only cause 4x more I/O\" but it's not necessarily so. What's\n> hard to tell is whether you are running out of RAM disk cache space,\n> resulting in re-reads of pages that could have stayed in memory when\n> dealing with one-fifth as much index data. You did not show us the\n> iostat numbers for the two cases, but it'd be interesting to look at\n> the proportion of writes to reads on the data drive in both cases.\n\nSounds a lot like what Marc mentioned.\n\n> > 2) Is this because PG is not optimized for char(48) (maybe it wants\n> > powers of 2? or doesn't like even numbers... I don't know, just\n> > throwing it out there)\n>\n> Are the key values really all 48 chars long? If not, you made a\n> bad datatype choice: varchar(n) (or even text) would be a lot\n> smarter. char(n) wastes space on blank-padding.\n\nYep, everything exactly 48. Looks like I'll be storing it as a bytea\nin the near future though.\n\n> The only one of these effects that looks to me like it could result in\n> worse-than-linear degradation of I/O demand is maxing out the available\n> RAM for disk cache. So while improving the datatype choice would\n> probably be worth your while, you should first see if fooling with\n> shared_buffers helps, and if not it's time to buy RAM not disk.\n\nYeah, that's what it's beginning to sound like. Thanks Tom.\n\n--\nAaron Turner\nhttp://synfin.net/\n", "msg_date": "Sun, 12 Feb 2006 11:33:57 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "Hi, Aaron,\n\nAaron Turner wrote:\n\n> 4) Does decoding the data (currently base64) and storing the binary\n> data improve the distribution of the index, thereby masking it more\n> efficent?\n\nYes, but then you should not use varchar, but a bytea.\n\nIf your data is some numer internally, numeric or decimal may be even\nbetter.\n\nIf most of your data is different in the first 8 bytes, it may also make\nsense to duplicate them into a bigint, and create the bigint on them.\nThen you can use AND in your query to test for the 8 bytes (uses index)\nand the bytea. Ugly, but may work.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. 
| Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Sun, 12 Feb 2006 22:04:18 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "\n>> Are the key values really all 48 chars long? If not, you made a\n>> bad datatype choice: varchar(n) (or even text) would be a lot\n>> smarter. char(n) wastes space on blank-padding.\n>\n> Yep, everything exactly 48. Looks like I'll be storing it as a bytea\n> in the near future though.\n\n\tIt's a good idea not to bloat a column by base64 encoding it if you want \nto index it. BYTEA should be your friend.\n\tIf your values are not random, you might want to exploit the correlation. \nBut if they are already quite uncorrelated, and you don't need the index \nfor < >, just for =, you can create an index on the md5 of your column and \nuse it to search. It will use a lot less data but the data will be more \nrandom. With a functional index, you don't need to modify your application \ntoo much.\n", "msg_date": "Mon, 13 Feb 2006 09:55:20 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "Well just a little update:\n\n1) Looks like I'm definately RAM constrained. Just placed an order\nfor another 4GB.\n2) I ended up dropping the primary key too which helped with disk\nthrashing a lot (average disk queue wait was between 500ms and 8500ms\nbefore and 250-500ms after)\n3) Playing with most of the settings in the postgresql.conf actually\ndropped performance significantly. Looks like I'm starving the disk\ncache.\n4) I'm going to assume going to a bytea helped some (width is 54 vs\n66) but nothing really measurable\n\nThanks everyone for your help!\n\n--\nAaron Turner\nhttp://synfin.net/\n", "msg_date": "Tue, 14 Feb 2006 15:14:03 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 10+hrs vs 15min because of just one index" } ]
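For reference, a sketch of the functional-index idea PFC raises above; the table and column names (flows, sig) are invented, and this only helps equality lookups, not range scans. Querying through the same md5() expression lets the planner use the much smaller index, and the extra plain equality test guards against hash collisions:

CREATE INDEX flows_sig_md5_idx ON flows (md5(sig));
SELECT * FROM flows
WHERE md5(sig) = md5('0123456789abcdef0123456789abcdef0123456789abcdef')
  AND sig = '0123456789abcdef0123456789abcdef0123456789abcdef';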
[ { "msg_contents": "Hi Guys,\n\n \n\nApologies if this is a novice queston, but I think it is a performance one\nnevertheless. We are running a prototype of a system running on\nPHP/Postgresql on an Intel Xeon 2ghz server, 1GB RAM, 40GB hard drive, as a\ntest bench. The system will be used for tens of thousands of users, and at\nthe moment we are testing on a base of around 400 users concurrently during\nthe day.\n\n \n\nDuring the day, the system is incredibly slow to a point where it is\nunusable. The reason we are testing on such as small server is to test\nperformance under pressure, and my estimation is that spec should handle\nthousands of users.\n\n \n\nThe server spikes from 5% usage to 95% up and down. The system is a very\nsimple e-learning and management system and has not given us any issues to\ndate, only since we've been testing with more users has it done so. The fact\nthat 400 users doing inserts and queries every few minutes is very\nconcerning, I would like to know if I could be tweaking some config\nsettings.\n\n\nWe are running PG 7.4 on a Debian Sarge server, and will be upgrading to\npg8.0 on a new server, but have some migration issues (that's for another\nlist!)\n\n\nAny help would be greatly appreciated!\n\n\nAll the very best,\n\n \n\nJames Dey\n\n \n\ntel +27 11 704-1945\n\ncell +27 82 785-5102\n\nfax +27 11 388-8907\n\nmail <mailto:[email protected]> [email protected]\n\n \n\nmyGUS / SLT retains all its intellectual property rights in any information\ncontained in e-mail messages (or any attachments thereto) which relates to\nthe official business of myGUS / SLT or of any of its associates. Such\ninformation may be legally privileged, is to be treated as confidential and\nmyGUS / SLT will take legal steps against any unauthorised use. myGUS / SLT\ndoes not take any responsibility for, or endorses any information which does\nnot relate to its official business, including personal mail and/or opinions\nby senders who may or may not be employed by myGUS / SLT. In the event that\nyou receive a message not intended for you, we request that you notify the\nsender immediately, do not read, disclose or use the content in any way\nwhatsoever and destroy/delete the message immediately. While myGUS / SLT\nwill take reasonable precautions, it cannot ensure that this e-mail will be\nfree of errors, viruses, interception or interference therewith. myGUS / SLT\ndoes not, therefore, issue any guarantees or warranties in this regard and\ncannot be held liable for any loss or damages incurred by the recipient\nwhich have been caused by any of the above-mentioned factors.\n\n \n\n\n\n\n\n\n\n\n\n\nHi Guys,\n \nApologies if this is a novice queston, but I think it is a\nperformance one nevertheless. We are running a prototype of a system running on\nPHP/Postgresql on an Intel Xeon 2ghz server, 1GB RAM, 40GB hard drive, as a\ntest bench. The system will be used for tens of thousands of users, and at the\nmoment we are testing on a base of around 400 users concurrently during the\nday.\n \nDuring the day, the system is incredibly slow to a point\nwhere it is unusable. The reason we are testing on such as small server is to\ntest performance under pressure, and my estimation is that spec should handle\nthousands of users.\n \nThe server spikes from 5% usage to 95% up and down. The system\nis a very simple e-learning and management system and has not given us any\nissues to date, only since we’ve been testing with more users has it done\nso. 
The fact that 400 users doing inserts and queries every few minutes is very\nconcerning, I would like to know if I could be tweaking some config settings.\n\nWe are running PG 7.4 on a Debian Sarge server, and will be upgrading to pg8.0\non a new server, but have some migration issues (that’s for another\nlist!)\n\nAny help would be greatly appreciated!\n\nAll the very best,\n \nJames Dey\n \ntel           +27 11 704-1945\ncell          +27\n82 785-5102\nfax           +27\n11 388-8907\nmail        [email protected]\n \nmyGUS / SLT retains all its intellectual\nproperty rights in any information contained in e-mail messages (or any\nattachments thereto) which relates to the official business of myGUS / SLT or\nof any of its associates. Such information may be legally privileged, is to be\ntreated as confidential and myGUS / SLT will take legal steps against any\nunauthorised use. myGUS / SLT does not take any responsibility for, or endorses\nany information which does not relate to its official business, including\npersonal mail and/or opinions by senders who may or may not be employed by\nmyGUS / SLT. In the event that you receive a message not intended for you, we\nrequest that you notify the sender immediately, do not read, disclose or use\nthe content in any way whatsoever and destroy/delete the message immediately.\nWhile myGUS / SLT will take reasonable precautions, it cannot ensure that this\ne-mail will be free of errors, viruses, interception or interference therewith.\nmyGUS / SLT does not, therefore, issue any guarantees or warranties in this\nregard and cannot be held liable for any loss or damages incurred by the\nrecipient which have been caused by any of the above-mentioned factors.", "msg_date": "Fri, 10 Feb 2006 10:22:35 +0200", "msg_from": "\"James Dey\" <[email protected]>", "msg_from_op": true, "msg_subject": "Basic Database Performance" }, { "msg_contents": "James Dey wrote:\n> \n> Apologies if this is a novice queston, but I think it is a performance one\n> nevertheless. We are running a prototype of a system running on\n> PHP/Postgresql on an Intel Xeon 2ghz server, 1GB RAM, 40GB hard drive, as a\n> test bench. The system will be used for tens of thousands of users, and at\n> the moment we are testing on a base of around 400 users concurrently during\n> the day.\n\nOK, that's 400 web-users, so presumably a fraction of that for \nconcurrent database connections.\n\n> During the day, the system is incredibly slow to a point where it is\n> unusable. The reason we are testing on such as small server is to test\n> performance under pressure, and my estimation is that spec should handle\n> thousands of users.\n\nIt'll depend on what the users are doing\nIt'll depend on what your code is doing\nIt'll depend on how you've configured PostgreSQL.\n\n> The server spikes from 5% usage to 95% up and down.\n\nUsage? Do you mean CPU?\n\n > The system is a very\n> simple e-learning and management system and has not given us any issues to\n> date, only since we've been testing with more users has it done so. The fact\n> that 400 users doing inserts and queries every few minutes is very\n> concerning, I would like to know if I could be tweaking some config\n> settings.\n\nYou haven't said what config settings you're working with.\n\nOK - the main questions have to be:\n1. Are you limited by CPU, memory or disk i/o?\n2. Are you happy your config settings are good?\n How do you know?\n3. 
Are there particular queries that are causing the problem, or lock \ncontention?\n\n> We are running PG 7.4 on a Debian Sarge server, and will be upgrading to\n> pg8.0 on a new server, but have some migration issues (that's for another\n> list!)\n\nGo straight to 8.1 - no point in upgrading half-way. If you don't like \ncompiling from source it's in backports.org\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 10 Feb 2006 09:36:35 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Database Performance" }, { "msg_contents": "Hi, James,\n\nJames Dey wrote:\n\n> Apologies if this is a novice queston, but I think it is a performance\n> one nevertheless. We are running a prototype of a system running on\n> PHP/Postgresql on an Intel Xeon 2ghz server, 1GB RAM, 40GB hard drive,\n> as a test bench. The system will be used for tens of thousands of users,\n> and at the moment we are testing on a base of around 400 users\n> concurrently during the day.\n\nThe first thing that comes into my mind here is \"connection pooling /\nrecycling\".\n\nTry to make sure that connections are reused between http requests.\nReopening the connection on every http request will break your system,\nas the backend startup time is rather high.\n\n> During the day, the system is incredibly slow to a point where it is\n> unusable. The reason we are testing on such as small server is to test\n> performance under pressure, and my estimation is that spec should handle\n> thousands of users.\n\nNote that amount of data, concurrent users, hardware and speed don't\nalways scale linearly.\n\n> The server spikes from 5% usage to 95% up and down. The system is a very\n> simple e-learning and management system and has not given us any issues\n> to date, only since we've been testing with more users has it done so.\n> The fact that 400 users doing inserts and queries every few minutes is\n> very concerning, I would like to know if I could be tweaking some config\n> settings.\n\nYou should make sure that you run vacuum / analyze regularly (either\nautovacuum, or vacuum full at night when you have no users on the system).\n\nUse statement logging or other profiling means to isolate the slow\nqueries, and EXPLAIN ANALYZE them to see what goes wrong. Create the\nneeded indices, and drop unneeded ones. (insert usual performance tuning\ntips here...)\n\n> We are running PG 7.4 on a Debian Sarge server, and will be upgrading to\n> pg8.0 on a new server, but have some migration issues (that's for\n> another list!)\n\nIgnore 8.0 and go to 8.1 directly.\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! 
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Fri, 10 Feb 2006 11:37:07 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Database Performance" }, { "msg_contents": "> We are running a prototype of a system running on\n> PHP/Postgresql on an Intel Xeon 2ghz server, 1GB RAM, 40GB hard drive,\n\n\tI think this is a decent server...\n\n\tNow, I guess you are using Apache and PHP like everyone.\n\n\tKnow these facts :\n\n\t- A client connection means an apache process (think HTTP 1.1 \nKeep-Alives...)\n\t- The PHP interpreter in mod_php will be active during all the time it \ntakes to receive the request, parse it, generate the dynamic page, and \nsend it to the client to the last byte (because it is sent streaming). So, \na php page that might take 10 ms to generate will actually hog an \ninterpreter for between 200 ms and 1 second, depending on client ping time \nand other network latency figures.\n\t- This is actually on-topic for this list, because it will also hog a \npostgres connection and server process during all that time. Thus, it will \nmost probably be slow and unscalable.\n\n\tThe solutions I use are simple :\n\n\tFirst, use lighttpd instead of apache. Not only is it simpler to use and \nconfigure, it uses a lot less RAM and resources, is faster, lighter, etc. \nIt uses an asynchronous model. It's there on my server, a crap Celeron, \npushing about 100 hits/s, and it sits at 4% CPU and 18 megabytes of RAM in \nthe top. It's impossible to overload this thing unless you benchmark it on \ngigabit lan, with 100 bytes files.\n\n\tThen, plug php in, using the fast-cgi protocol. Basically php spawns a \nprocess pool, and you chose the size of this pool. Say you spawn 20 PHP \ninterpreters for instance.\n\n\tWhen a PHP page is requested, lighttpd asks the process pool to generate \nit. Then, a PHP interpreter from the pool does the job, and hands the page \nover to lighttpd. This is very fast. lighttpd handles the slow \ntransmission of the data to the client, while the PHP interpreter goes \nback to the pool to service another request.\n\n\tThis gives you database connection pooling for free, actually. The \nconnections are limited to the number of processes in the pool, so you \nwon't get hundreds of them all over the place. You can use php's \npersistent connections without worries. You don't need to configure a \nconnection pool. It just works (TM).\n\n\tAlso you might want to use eaccelerator on your PHP. It precompiles your \nPHP pages, so you don't lose time on parsing. Page time on my site went \n from 50-200 ms to 5-20 ms just by installing this. It's free.\n\n\tTry this and you might realize that after all, postgres was fast enough !\n\n\n\n", "msg_date": "Fri, 10 Feb 2006 21:14:06 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Database Performance" } ]
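A few concrete statements for the routine maintenance and diagnosis steps Markus suggests above; the table name enrolments is made up for the example, and on 7.4 pg_stat_activity only shows query text if stats_command_string is enabled:

VACUUM ANALYZE;                                   -- keep free-space info and planner statistics current
SELECT count(*) FROM pg_stat_activity;            -- how many backend connections are actually open
EXPLAIN ANALYZE SELECT * FROM enrolments WHERE student_id = 42;   -- run for each slow query found in the logs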
[ { "msg_contents": "\nDon't forget to cc: the list.\n\nJames Dey wrote:\n> Hi Richard,\n> \n> Firstly, thanks a million for the reply.\n> \n> To answer your questions:\n> 1. Are you limited by CPU, memory or disk i/o?\n> I am not limited, but would like to get the most out of the config I have in\n> order to be able to know what I'll get, when I scale up.\n\nBut you said: \"During the day, the system is incredibly slow to a point \nwhere it is unusable\". So presumably one or more of cpu,memory or disk \ni/o is the problem.\n\n> 2. Are you happy your config settings are good? How do you know?\n> I'm not, and would appreciate any help with these.\n\nIf you have a look here, there is an introduction for 7.4\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nFor 8.x you might find the following more useful.\n http://www.powerpostgresql.com/PerfList\n\n> 3. Are there particular queries that are causing the problem, or lock \n> contention?\n> Not that I can see\n\nWhat is the balance between activity on Apache/PHP/PostgreSQL?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 10 Feb 2006 09:50:21 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Basic Database Performance" }, { "msg_contents": "Sorry about that\n\nJames Dey\n\ntel\t+27 11 704-1945\ncell\t+27 82 785-5102\nfax\t+27 11 388-8907\nmail\[email protected]\n\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: 10 February 2006 11:50 AM\nTo: James Dey\nCc: 'Postgresql Performance'\nSubject: Re: [PERFORM] Basic Database Performance\n\n\nDon't forget to cc: the list.\n\nJames Dey wrote:\n> Hi Richard,\n> \n> Firstly, thanks a million for the reply.\n> \n> To answer your questions:\n> 1. Are you limited by CPU, memory or disk i/o?\n> I am not limited, but would like to get the most out of the config I have\nin\n> order to be able to know what I'll get, when I scale up.\n\nBut you said: \"During the day, the system is incredibly slow to a point \nwhere it is unusable\". So presumably one or more of cpu,memory or disk \ni/o is the problem.\n\n> 2. Are you happy your config settings are good? How do you know?\n> I'm not, and would appreciate any help with these.\n\nIf you have a look here, there is an introduction for 7.4\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nFor 8.x you might find the following more useful.\n http://www.powerpostgresql.com/PerfList\n\n> 3. Are there particular queries that are causing the problem, or lock \n> contention?\n> Not that I can see\n\nWhat is the balance between activity on Apache/PHP/PostgreSQL?\n\n-- \n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Fri, 10 Feb 2006 11:55:49 +0200", "msg_from": "\"James Dey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Database Performance" } ]
[ { "msg_contents": "Subject line says it all. I'm going to be testing changes under both \nLinux and WinXP, so I'm hoping those of you that do M$ hacking will \npass along your list of suggestions and/or favorite (and hated so I \nknow what to avoid) tools.\n\nTiA,\nRon\n\n\n", "msg_date": "Fri, 10 Feb 2006 14:22:31 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "What do the Windows pg hackers out there like for dev tools?" }, { "msg_contents": "On 2/10/06, Ron <[email protected]> wrote:\n> Subject line says it all. I'm going to be testing changes under both\n> Linux and WinXP, so I'm hoping those of you that do M$ hacking will\n> pass along your list of suggestions and/or favorite (and hated so I\n> know what to avoid) tools.\n\nIf you mean hacking postgresql source code, you pretty much have to\nuse the built in make/build system...this more or less rules out IDEs\nand such.\n\nI like UltraEdit for a text editor. Another good choice for editor is\nsource insight. Winmerge is a fantastic tool and you may want to\ncheck out wincvs/tortoisesvn if you want to do checkouts from the gui.\n\nOf course, to make/build postgresql in windows, you can go with cygwin\nor mingw. cygwin is a bit easier to set up and has a more of a unix\nflavor but mignw allows you to compile native executables.\n\nThe upcoming windows vista will most likely be able to compile\npostgresql without an external build system.\n\nMerlin\n", "msg_date": "Fri, 10 Feb 2006 19:40:10 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What do the Windows pg hackers out there like for dev tools?" }, { "msg_contents": "Ron wrote:\n> Subject line says it all. I'm going to be testing changes under both \n> Linux and WinXP, so I'm hoping those of you that do M$ hacking will pass \n> along your list of suggestions and/or favorite (and hated so I know what \n> to avoid) tools.\n> \n\nTesting only? So you really only need to build and run on Windows...\n\nI was doing exactly this about a year ago and used Mingw. The only \nannoyance was that I could compile everything on Linux in about 3 \nminutes (P4 2.8Ghz), but had to wait about 60-90 minutes for the same \nthing on Windows 2003 Server! (also a P4 2.8Ghz...). So I used to build \na 'go for coffee' task into the build and test cycle.\n\nCheers\n\nMark\n", "msg_date": "Sat, 11 Feb 2006 14:57:28 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What do the Windows pg hackers out there like for dev" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> I was doing exactly this about a year ago and used Mingw. The only \n> annoyance was that I could compile everything on Linux in about 3 \n> minutes (P4 2.8Ghz), but had to wait about 60-90 minutes for the same \n> thing on Windows 2003 Server! (also a P4 2.8Ghz...). So I used to build \n> a 'go for coffee' task into the build and test cycle.\n\nYouch! That seems unbelievably bad, even for Microsloth. Did you ever\nidentify what was the bottleneck?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 11 Feb 2006 11:09:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] What do the Windows pg hackers out there like for dev " }, { "msg_contents": "Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n> \n>>I was doing exactly this about a year ago and used Mingw. 
The only \n>>annoyance was that I could compile everything on Linux in about 3 \n>>minutes (P4 2.8Ghz), but had to wait about 60-90 minutes for the same \n>>thing on Windows 2003 Server! (also a P4 2.8Ghz...). So I used to build \n>>a 'go for coffee' task into the build and test cycle.\n> \n> \n> Youch! That seems unbelievably bad, even for Microsloth. Did you ever\n> identify what was the bottleneck?\n> \n\nNo - I was connecting using an RDB client from a Linux box (over a LAN), \nso was never sure how much that was hurting things... but (as noted by \nMagnus) the compiler itself is noticeablely slower (easily observed \nduring the 'configure' step).\n\ncheers\n\nMark\n", "msg_date": "Sun, 12 Feb 2006 17:06:22 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] What do the Windows pg hackers out there like" } ]
[ { "msg_contents": "Hi,\n\nI have an unique requirement. I have a feed of 2.5 - 3 million rows of data\nwhich arrives every 1/2 an hour. Each row has 2 small string values (about\n50 chars each) and 10 int values. I need searcheability and running\narbitrary queries on any of these values. This means i have to create an\nindex on every column. The feed comes in as a text file comma separated.\nHere is what i am planning to do\n\n1) create a new table every time a new feed file comes in. Create table with\nindexes. Use the copy command to dump the data into the table.\n2) rename the current table to some old table name and rename the new table\nto current table name so that applications can access them directly.\n\nNote that these are read only tables and it is fine if the step 2 takes a\nsmall amount of time (it is not a mission critical table hence, a small\ndowntime of some secs is fine).\n\nMy question is what is the best way to do step (1) so that after the copy is\ndone, the table is fully indexed and properly balanced and optimized for\nquery.\nShould i create indexes before or after import ? I need to do this in\nshortest period of time so that the data is always uptodate. Note that\nincremental updates are not possible since almost every row will be changed\nin the new file.\n\nmy table creation script looks like this\n\ncreate table datatablenew(fe varchar(40), va varchar(60), a int, b int, c\nint, d int, e int, f int, g int, h int, i int, j int, k int, l int, m int, n\nint, o int, p int, q real);\ncreate index fe_idx on datatablenew using hash (fe);\ncreate index va_idx on datatablenew using hash(va);\ncreate index a_idx on datatablenew (a);\n......\ncreate index q_idx on datatablenew(q);\n\n\nplease advice.\n\nthanks\nvijay\n\nHi,\n\nI have an unique requirement. I have a feed of 2.5 - 3 million rows of\ndata which arrives every 1/2 an hour. Each row has 2 small string\nvalues  (about 50 chars each) and 10 int values. I need\nsearcheability and running arbitrary queries on any of these values.\nThis means i have to create an index on every column. The feed comes in\nas a text file comma separated. Here is what i am planning to do\n\n1) create a new table every time a new feed file comes in. Create table\nwith indexes. Use the copy command to dump the data into the table. \n2) rename the current table to some old table name and rename the new\ntable to current table name so that applications can access them\ndirectly. \n\nNote that these are read only tables and it is fine if the step 2 takes\na small amount of time (it is not a mission critical table hence, a\nsmall downtime of some secs is fine).\n\nMy question is what is the best way to do step (1) so that after the\ncopy is done, the table is fully indexed  and properly balanced\nand optimized for query.\nShould i create indexes before or after import ? I need to do this in\nshortest period of time so that the data is always uptodate. 
Note that\nincremental updates are not possible since almost every row will be\nchanged in the new file.\n\nmy table creation script looks like this\n\ncreate table datatablenew(fe varchar(40), va varchar(60), a int, b int,\nc int, d int, e int, f int, g int, h int, i int, j int, k int, l int, m\nint, n int, o int, p int, q real);\ncreate index fe_idx on datatablenew using hash (fe);\ncreate index va_idx on datatablenew using hash(va);\ncreate index a_idx on datatablenew (a);\n......\ncreate index q_idx on datatablenew(q);\n\n\nplease advice.\n\nthanks\nvijay", "msg_date": "Fri, 10 Feb 2006 12:20:34 -0800", "msg_from": "david drummard <[email protected]>", "msg_from_op": true, "msg_subject": "help required in design of database" }, { "msg_contents": "On Fri, Feb 10, 2006 at 12:20:34PM -0800, david drummard wrote:\n> 1) create a new table every time a new feed file comes in. Create table with\n> indexes. Use the copy command to dump the data into the table.\n> 2) rename the current table to some old table name and rename the new table\n> to current table name so that applications can access them directly.\n\nThat sounds like a working plan.\n\n> Should i create indexes before or after import ? I need to do this in\n> shortest period of time so that the data is always uptodate. Note that\n> incremental updates are not possible since almost every row will be changed\n> in the new file.\n\nYou should create indexes after the import. Remember to pump up your memory\nsettings (maintenance_work_mem) if you want this to be quick.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 10 Feb 2006 22:03:19 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help required in design of database" }, { "msg_contents": "Hi, David,\n\ndavid drummard wrote:\n\n> 1) create a new table every time a new feed file comes in. Create table\n> with indexes. Use the copy command to dump the data into the table.\n\nIts faster to obey the following order:\n\n- Create the table\n- COPY the data into the table\n- Create the indices\n- ANALYZE the table.\n\nand probably CLUSTER the table on the most-used index, between index\ncreation and ANALYZE. You also might want to increase the statistics\ntarget on some columns before ANALYZE, depending on your data.\n\n> 2) rename the current table to some old table name and rename the new\n> table to current table name so that applications can access them directly.\n\nYou can also use a view, and then use CREATE OR REPLACE VIEW to switch\nbetween the tables.\n\nBut two table renames inside a transaction should do as well, and\nshorten the outage time, as with the transaction encapsulation, no\nexternal app should see the change inside their transaction.\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Mon, 13 Feb 2006 12:49:15 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help required in design of database" } ]
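A condensed sketch of the load order Markus describes above (create, COPY, build indexes, ANALYZE, then swap), reusing the poster's column list but with an invented staging table name, file path, and per-load index names (suffixed with the load date so they do not collide with the live table's indexes):

CREATE TABLE datatable_incoming (fe varchar(40), va varchar(60), a int, b int, c int, d int, e int,
    f int, g int, h int, i int, j int, k int, l int, m int, n int, o int, p int, q real);
COPY datatable_incoming FROM '/data/feed_20060210.txt' WITH DELIMITER ',';
CREATE INDEX fe_idx_20060210 ON datatable_incoming (fe);   -- build indexes after the load, not before
-- ... the remaining indexes, then:
ANALYZE datatable_incoming;
BEGIN;                                -- the swap itself is quick and atomic
ALTER TABLE datatable RENAME TO datatable_old;
ALTER TABLE datatable_incoming RENAME TO datatable;
COMMIT;
DROP TABLE datatable_old;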
[ { "msg_contents": "I am trying to join two tables and keep getting a sequential scan in the\nplan even though there is an index on the columns I am joining on.\nBasically this the deal ... I have two tables with docid in them which\nis what I am using for the join. \n \nClinicalDocs ... (no primary key) though it does not help if I make\ndocid primary key\ndocid integer (index)\npatientid integer (index)\nvisitid integer (index)\n ...\n \nDocumentversions\ndocid integer (index)\ndocversionnumber (index)\ndocversionidentifier (primary key)\n \nIt seems to do an index scan if I put the primary key as docid. This is\nwhat occurs when I link on the patid from ClinicalDocs to patient table.\nHowever I can not make the docid primary key because it gets repeated\ndepending on how may versions of a document I have. I have tried using\na foreign key on documentversions with no sucess. \n \nIn addition this query\n \nselect * from documentversions join clinicaldocuments on\ndocumentversions.documentidentifier\n= clinicaldocuments.dssdocumentidentifier where\ndocumentversions.documentstatus = 'AC'; \n \ndoes index scan \nbut if I change the order e.g\n \nselect * from clinicaldocuments join documentversions on\nclinicaldocuments.dssdocumentidentifier\n= documentversions .documentidentifier where\nclinicaldocuments.patientidentifier= 123;\n \ndoes sequential scan what I need is bottom query\nit is extremely slow ... Any ideas ?\n \nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n \n\n\n\n\n\n \nI am trying to join \ntwo tables and keep getting a sequential scan in the plan even though there is \nan index on the columns I am joining on.  Basically this the deal  ... \nI have two tables with docid in them which is what I am using for the \njoin.  \n \nClinicalDocs ... (no \nprimary key) though it does not help if I make docid primary \nkey\ndocid integer \n(index)\npatientid integer \n(index)\nvisitid integer \n(index)\n ...\n \nDocumentversions\ndocid integer \n(index)\ndocversionnumber \n(index)\ndocversionidentifier \n(primary key)\n \nIt seems to do an \nindex scan if I put the primary key as docid.  This is what occurs when I \nlink on the patid from ClinicalDocs to patient table.  However I can not \nmake the docid primary key because it gets repeated depending on how may \nversions of a document I have.  I have tried using a foreign key on \ndocumentversions with no sucess. \n \nIn addition this \nquery\n \nselect * from \ndocumentversions join clinicaldocuments on \ndocumentversions.documentidentifier= clinicaldocuments.dssdocumentidentifier \nwhere documentversions.documentstatus = 'AC'; \n \ndoes index scan \n\nbut if I change the \norder e.g\n \nselect * from clinicaldocuments \njoin documentversions on clinicaldocuments.dssdocumentidentifier= \ndocumentversions .documentidentifier where clinicaldocuments.patientidentifier= \n123;\n \ndoes sequential scan what I need is bottom \nquery\nit is extremely slow ... Any ideas \n?\n \nTim Jones\nHealthcare Project Manager\nOptio Software, \nInc.\n(770) 576-3555", "msg_date": "Fri, 10 Feb 2006 17:06:35 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "joining two tables slow due to sequential scan" }, { "msg_contents": "What version of postgres are you using? 
Can you post the output from\nEXPLAIN ANALYZE?\n \n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Jones\nSent: Friday, February 10, 2006 4:07 PM\nTo: [email protected]\nSubject: [PERFORM] joining two tables slow due to sequential scan\n \n \nI am trying to join two tables and keep getting a sequential scan in the\nplan even though there is an index on the columns I am joining on.\nBasically this the deal ... I have two tables with docid in them which\nis what I am using for the join. \n \nClinicalDocs ... (no primary key) though it does not help if I make\ndocid primary key\ndocid integer (index)\npatientid integer (index)\nvisitid integer (index)\n ...\n \nDocumentversions\ndocid integer (index)\ndocversionnumber (index)\ndocversionidentifier (primary key)\n \nIt seems to do an index scan if I put the primary key as docid. This is\nwhat occurs when I link on the patid from ClinicalDocs to patient table.\nHowever I can not make the docid primary key because it gets repeated\ndepending on how may versions of a document I have. I have tried using\na foreign key on documentversions with no sucess. \n \nIn addition this query\n \nselect * from documentversions join clinicaldocuments on\ndocumentversions.documentidentifier\n= clinicaldocuments.dssdocumentidentifier where\ndocumentversions.documentstatus = 'AC'; \n \ndoes index scan \nbut if I change the order e.g\n \nselect * from clinicaldocuments join documentversions on\nclinicaldocuments.dssdocumentidentifier\n= documentversions .documentidentifier where\nclinicaldocuments.patientidentifier= 123;\n \ndoes sequential scan what I need is bottom query\nit is extremely slow ... Any ideas ?\n \nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWhat version of postgres\nare you using?  Can you post the\noutput from EXPLAIN ANALYZE?\n \n \n-----Original Message-----\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of Tim Jones\nSent: Friday, February 10, 2006\n4:07 PM\nTo:\[email protected]\nSubject: [PERFORM] joining two\ntables slow due to sequential scan\n \n\n \n\n\nI am trying to join two tables and\nkeep getting a sequential scan in the plan even though there is an index on the\ncolumns I am joining on.  Basically this the deal  ... I have two\ntables with docid in them which is what I am using for the join.  \n\n\n \n\n\nClinicalDocs ... (no primary key)\nthough it does not help if I make docid primary key\n\n\ndocid integer (index)\n\n\npatientid integer (index)\n\n\nvisitid integer (index)\n\n\n ...\n\n\n \n\n\nDocumentversions\n\n\ndocid integer (index)\n\n\ndocversionnumber (index)\n\n\ndocversionidentifier (primary key)\n\n\n \n\n\nIt seems to do an index scan if I\nput the primary key as docid.  This is what occurs when I link on the\npatid from ClinicalDocs to patient table.  However I can not make the\ndocid primary key because it gets repeated depending on how may versions of a\ndocument I have.  I have tried using a foreign key on documentversions\nwith no sucess. 
\n\n\n \n\n\nIn addition this query\n\n\n \n\n\nselect * from documentversions join\nclinicaldocuments on documentversions.documentidentifier\n= clinicaldocuments.dssdocumentidentifier where documentversions.documentstatus\n= 'AC'; \n\n\n \n\n\ndoes index scan \n\n\nbut if I change the order e.g\n\n\n \n\n\nselect * from clinicaldocuments join\ndocumentversions on clinicaldocuments.dssdocumentidentifier\n= documentversions .documentidentifier where\nclinicaldocuments.patientidentifier= 123;\n\n\n \n\n\ndoes sequential scan what I\nneed is bottom query\n\n\nit is extremely slow ... Any ideas ?\n\n\n \n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555", "msg_date": "Fri, 10 Feb 2006 16:14:30 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan" }, { "msg_contents": "On Fri, 2006-02-10 at 16:06, Tim Jones wrote:\n> \n> I am trying to join two tables and keep getting a sequential scan in\n> the plan even though there is an index on the columns I am joining\n> on. Basically this the deal ... I have two tables with docid in them\n> which is what I am using for the join. \n> \n\nSNIP\n \n> select * from documentversions join clinicaldocuments on\n> documentversions.documentidentifier\n> = clinicaldocuments.dssdocumentidentifier where\n> documentversions.documentstatus = 'AC'; \n> \n> does index scan \n> but if I change the order e.g\n> \n> select * from clinicaldocuments join documentversions on\n> clinicaldocuments.dssdocumentidentifier\n> = documentversions .documentidentifier where\n> clinicaldocuments.patientidentifier= 123;\n\nOK. I'm gonna make a couple of guesses here:\n\n1: clinicaldocuments.patientidentifier is an int8 and you're running\n7.4 or before.\n2: There are more rows with clinicaldocuments.patientidentifier= 123\nthan with documentversions.documentstatus = 'AC'.\n3: documentversions.documentidentifier and \nclinicaldocuments.dssdocumentidentifier are not the same type.\n\nAny of those things true?\n", "msg_date": "Fri, 10 Feb 2006 16:22:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan" } ]
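A quick way to test the guesses above is to confirm that the joined columns really have identical types; a sketch using the column names from this thread:

SELECT table_name, column_name, data_type
  FROM information_schema.columns
 WHERE (table_name, column_name) IN (('documentversions', 'documentidentifier'),
                                     ('clinicaldocuments', 'dssdocumentidentifier'),
                                     ('clinicaldocuments', 'patientidentifier'));
-- On 7.4 and earlier an int8 column compared with a plain integer literal would not use its index;
-- the usual workaround was an explicit cast, e.g.  WHERE patientidentifier = 123::int8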
[ { "msg_contents": "\n\nOK. I'm gonna make a couple of guesses here:\n\n1: clinicaldocuments.patientidentifier is an int8 and you're running\n7.4 or before.\n\n-- nope int4 and 8.1\n\n2: There are more rows with clinicaldocuments.patientidentifier= 123\nthan with documentversions.documentstatus = 'AC'.\n\n-- nope generally speaking all statuses are 'AC'\n\n3: documentversions.documentidentifier and\nclinicaldocuments.dssdocumentidentifier are not the same type.\n\n-- nope both int4\n\nAny of those things true?\n", "msg_date": "Fri, 10 Feb 2006 17:35:50 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: joining two tables slow due to sequential scan" }, { "msg_contents": "On Fri, 2006-02-10 at 16:35, Tim Jones wrote:\n> OK. I'm gonna make a couple of guesses here:\n> \n> 1: clinicaldocuments.patientidentifier is an int8 and you're running\n> 7.4 or before.\n> \n> -- nope int4 and 8.1\n> \n> 2: There are more rows with clinicaldocuments.patientidentifier= 123\n> than with documentversions.documentstatus = 'AC'.\n> \n> -- nope generally speaking all statuses are 'AC'\n> \n> 3: documentversions.documentidentifier and\n> clinicaldocuments.dssdocumentidentifier are not the same type.\n> \n> -- nope both int4\n\nOK then, I guess we'll need to see the explain analyze output of both of\nthose queries.\n", "msg_date": "Fri, 10 Feb 2006 16:36:52 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan" } ]
[ { "msg_contents": "for first query\n\nQUERY PLAN\n'Limit (cost=4.69..88.47 rows=10 width=1350) (actual\ntime=32.195..32.338 rows=10 loops=1)'\n' -> Nested Loop (cost=4.69..4043.09 rows=482 width=1350) (actual\ntime=32.190..32.316 rows=10 loops=1)'\n' -> Bitmap Heap Scan on documentversions (cost=4.69..1139.40\nrows=482 width=996) (actual time=32.161..32.171 rows=10 loops=1)'\n' Recheck Cond: (documentstatus = ''AC''::bpchar)'\n' -> Bitmap Index Scan on ix_docstatus (cost=0.00..4.69\nrows=482 width=0) (actual time=31.467..31.467 rows=96368 loops=1)'\n' Index Cond: (documentstatus = ''AC''::bpchar)'\n' -> Index Scan using ix_cdocdid on clinicaldocuments\n(cost=0.00..6.01 rows=1 width=354) (actual time=0.006..0.007 rows=1\nloops=10)'\n' Index Cond: (\"outer\".documentidentifier =\nclinicaldocuments.dssdocumentidentifier)'\n \n \n for second query\n\nQUERY PLAN\n'Hash Join (cost=899.83..4384.17 rows=482 width=1350)'\n' Hash Cond: (\"outer\".documentidentifier =\n\"inner\".dssdocumentidentifier)'\n' -> Seq Scan on documentversions (cost=0.00..2997.68 rows=96368\nwidth=996)'\n' -> Hash (cost=898.62..898.62 rows=482 width=354)'\n' -> Bitmap Heap Scan on clinicaldocuments (cost=4.69..898.62\nrows=482 width=354)'\n' Recheck Cond: (patientidentifier = 123)'\n' -> Bitmap Index Scan on ix_cdocpid (cost=0.00..4.69\nrows=482 width=0)'\n' Index Cond: (patientidentifier = 123)'\n\n\nthnx\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n \n\n________________________________\n\nFrom: Dave Dutcher [mailto:[email protected]] \nSent: Friday, February 10, 2006 5:15 PM\nTo: Tim Jones; [email protected]\nSubject: RE: [PERFORM] joining two tables slow due to sequential scan\n\n\n\nWhat version of postgres are you using? Can you post the output from\nEXPLAIN ANALYZE?\n\n \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Jones\nSent: Friday, February 10, 2006 4:07 PM\nTo: [email protected]\nSubject: [PERFORM] joining two tables slow due to sequential scan\n\n \n\n \n\nI am trying to join two tables and keep getting a sequential scan in the\nplan even though there is an index on the columns I am joining on.\nBasically this the deal ... I have two tables with docid in them which\nis what I am using for the join. \n\n \n\nClinicalDocs ... (no primary key) though it does not help if I make\ndocid primary key\n\ndocid integer (index)\n\npatientid integer (index)\n\nvisitid integer (index)\n\n ...\n\n \n\nDocumentversions\n\ndocid integer (index)\n\ndocversionnumber (index)\n\ndocversionidentifier (primary key)\n\n \n\nIt seems to do an index scan if I put the primary key as docid. This is\nwhat occurs when I link on the patid from ClinicalDocs to patient table.\nHowever I can not make the docid primary key because it gets repeated\ndepending on how may versions of a document I have. I have tried using\na foreign key on documentversions with no sucess. \n\n \n\nIn addition this query\n\n \n\nselect * from documentversions join clinicaldocuments on\ndocumentversions.documentidentifier\n= clinicaldocuments.dssdocumentidentifier where\ndocumentversions.documentstatus = 'AC'; \n\n \n\ndoes index scan \n\nbut if I change the order e.g\n\n \n\nselect * from clinicaldocuments join documentversions on\nclinicaldocuments.dssdocumentidentifier\n= documentversions .documentidentifier where\nclinicaldocuments.patientidentifier= 123;\n\n \n\ndoes sequential scan what I need is bottom query\n\nit is extremely slow ... 
Any ideas ?\n\n \n\nTim Jones\n\nHealthcare Project Manager\n\nOptio Software, Inc.\n\n(770) 576-3555\n\n \n\n", "msg_date": "Fri, 10 Feb 2006 17:37:32 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: joining two tables slow due to sequential scan" }, { "msg_contents": "On Fri, 2006-02-10 at 16:37, Tim Jones wrote:\n> for first query\n> \n> QUERY PLAN\n> 'Limit (cost=4.69..88.47 rows=10 width=1350) (actual\n> time=32.195..32.338 rows=10 loops=1)'\n> ' -> Nested Loop (cost=4.69..4043.09 rows=482 width=1350) (actual\n> time=32.190..32.316 rows=10 loops=1)'\n> ' -> Bitmap Heap Scan on documentversions (cost=4.69..1139.40\n> rows=482 width=996) (actual time=32.161..32.171 rows=10 loops=1)'\n> ' Recheck Cond: (documentstatus = ''AC''::bpchar)'\n> ' -> Bitmap Index Scan on ix_docstatus (cost=0.00..4.69\n> rows=482 width=0) (actual time=31.467..31.467 rows=96368 loops=1)'\n> ' Index Cond: (documentstatus = ''AC''::bpchar)'\n> ' -> Index Scan using ix_cdocdid on clinicaldocuments\n> (cost=0.00..6.01 rows=1 width=354) (actual time=0.006..0.007 rows=1\n> loops=10)'\n> ' Index Cond: (\"outer\".documentidentifier =\n> clinicaldocuments.dssdocumentidentifier)'\n> \n> \n> for second query\n> \n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350)'\n> ' Hash Cond: (\"outer\".documentidentifier =\n> \"inner\".dssdocumentidentifier)'\n> ' -> Seq Scan on documentversions (cost=0.00..2997.68 rows=96368\n> width=996)'\n> ' -> Hash (cost=898.62..898.62 rows=482 width=354)'\n> ' -> Bitmap Heap Scan on clinicaldocuments (cost=4.69..898.62\n> rows=482 width=354)'\n> ' Recheck Cond: (patientidentifier = 123)'\n> ' -> Bitmap Index Scan on ix_cdocpid (cost=0.00..4.69\n> rows=482 width=0)'\n> ' Index Cond: (patientidentifier = 123)'\n\nOK, the first one is explain analyze, but the second one is just plain\nexplain...\n", "msg_date": "Fri, 10 Feb 2006 16:39:19 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan" }, { "msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350)'\n> ' Hash Cond: (\"outer\".documentidentifier =\n> \"inner\".dssdocumentidentifier)'\n\nThis is not EXPLAIN ANALYZE output. Also, the rowcount estimates\nseem far enough off in the other query to make me wonder how long\nit's been since you ANALYZEd the tables...\n\nMore generally, though, I don't see anything particularly wrong\nwith this query plan. You're selecting enough of the table that\nan indexscan isn't necessarily a good plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Feb 2006 17:44:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan " } ]
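Since the row estimates look stale, refreshing the statistics on both tables is the cheapest next step before anything else:

ANALYZE VERBOSE documentversions;
ANALYZE VERBOSE clinicaldocuments;
-- then re-run the EXPLAIN ANALYZE above and compare the estimated rows= against the actual rows=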
[ { "msg_contents": "oops\n\nQUERY PLAN\n'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual\ntime=0.203..0.203 rows=0 loops=1)'\n' Hash Cond: (\"outer\".documentidentifier =\n\"inner\".dssdocumentidentifier)'\n' -> Seq Scan on documentversions (cost=0.00..2997.68 rows=96368\nwidth=996) (actual time=0.007..0.007 rows=1 loops=1)'\n' -> Hash (cost=898.62..898.62 rows=482 width=354) (actual\ntime=0.161..0.161 rows=0 loops=1)'\n' -> Bitmap Heap Scan on clinicaldocuments (cost=4.69..898.62\nrows=482 width=354) (actual time=0.159..0.159 rows=0 loops=1)'\n' Recheck Cond: (patientidentifier = 123)'\n' -> Bitmap Index Scan on ix_cdocpid (cost=0.00..4.69\nrows=482 width=0) (actual time=0.153..0.153 rows=0 loops=1)'\n' Index Cond: (patientidentifier = 123)'\n'Total runtime: 0.392 ms'\n\nnote I have done these on a smaller db than what I am using but the\nplans are the same \n\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Friday, February 10, 2006 5:39 PM\nTo: Tim Jones\nCc: Dave Dutcher; [email protected]\nSubject: Re: [PERFORM] joining two tables slow due to sequential scan\n\nOn Fri, 2006-02-10 at 16:37, Tim Jones wrote:\n> for first query\n> \n> QUERY PLAN\n> 'Limit (cost=4.69..88.47 rows=10 width=1350) (actual\n> time=32.195..32.338 rows=10 loops=1)'\n> ' -> Nested Loop (cost=4.69..4043.09 rows=482 width=1350) (actual\n> time=32.190..32.316 rows=10 loops=1)'\n> ' -> Bitmap Heap Scan on documentversions (cost=4.69..1139.40\n> rows=482 width=996) (actual time=32.161..32.171 rows=10 loops=1)'\n> ' Recheck Cond: (documentstatus = ''AC''::bpchar)'\n> ' -> Bitmap Index Scan on ix_docstatus (cost=0.00..4.69\n> rows=482 width=0) (actual time=31.467..31.467 rows=96368 loops=1)'\n> ' Index Cond: (documentstatus = ''AC''::bpchar)'\n> ' -> Index Scan using ix_cdocdid on clinicaldocuments\n> (cost=0.00..6.01 rows=1 width=354) (actual time=0.006..0.007 rows=1 \n> loops=10)'\n> ' Index Cond: (\"outer\".documentidentifier =\n> clinicaldocuments.dssdocumentidentifier)'\n> \n> \n> for second query\n> \n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350)'\n> ' Hash Cond: (\"outer\".documentidentifier = \n> \"inner\".dssdocumentidentifier)'\n> ' -> Seq Scan on documentversions (cost=0.00..2997.68 rows=96368 \n> width=996)'\n> ' -> Hash (cost=898.62..898.62 rows=482 width=354)'\n> ' -> Bitmap Heap Scan on clinicaldocuments (cost=4.69..898.62\n> rows=482 width=354)'\n> ' Recheck Cond: (patientidentifier = 123)'\n> ' -> Bitmap Index Scan on ix_cdocpid (cost=0.00..4.69\n> rows=482 width=0)'\n> ' Index Cond: (patientidentifier = 123)'\n\nOK, the first one is explain analyze, but the second one is just plain\nexplain...\n", "msg_date": "Fri, 10 Feb 2006 17:43:58 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: joining two tables slow due to sequential scan" }, { "msg_contents": "On Fri, 2006-02-10 at 16:43, Tim Jones wrote:\n> oops\n> \n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual\n> time=0.203..0.203 rows=0 loops=1)'\n> ' Hash Cond: (\"outer\".documentidentifier =\n> \"inner\".dssdocumentidentifier)'\n> ' -> Seq Scan on documentversions (cost=0.00..2997.68 rows=96368\n> width=996) (actual time=0.007..0.007 rows=1 loops=1)'\n> ' -> Hash (cost=898.62..898.62 rows=482 width=354) (actual\n> time=0.161..0.161 rows=0 loops=1)'\n> ' -> Bitmap Heap Scan on clinicaldocuments (cost=4.69..898.62\n> rows=482 
width=354) (actual time=0.159..0.159 rows=0 loops=1)'\n> ' Recheck Cond: (patientidentifier = 123)'\n> ' -> Bitmap Index Scan on ix_cdocpid (cost=0.00..4.69\n> rows=482 width=0) (actual time=0.153..0.153 rows=0 loops=1)'\n> ' Index Cond: (patientidentifier = 123)'\n> 'Total runtime: 0.392 ms'\n> \n> note I have done these on a smaller db than what I am using but the\n> plans are the same \n\n\nHmmmm. We really need to see what's happening on the real database to\nsee what's going wrong. i.e. if the real database thinks it'll get 30\nrows and it gets back 5,000,000 that's a problem.\n\nThe query planner in pgsql is cost based, so until you have real data\nunderneath it, and analyze it, you can't really say how it will behave\nfor you. I.e. small test sets don't work.\n", "msg_date": "Fri, 10 Feb 2006 16:46:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan" }, { "msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual\n> time=0.203..0.203 rows=0 loops=1)'\n> ...\n> 'Total runtime: 0.392 ms'\n\nHardly seems like evidence of a performance problem ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Feb 2006 17:51:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan " } ]
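One way to see whether the planner's picture of the tables matches reality is to compare its stored size estimates against an actual count; a small sketch:

SELECT relname, reltuples, relpages
  FROM pg_class
 WHERE relname IN ('documentversions', 'clinicaldocuments');
SELECT count(*) FROM clinicaldocuments;       -- compare with reltuples above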
[ { "msg_contents": "ok here is real db\n\nthe first query I had seems to make no sense because it is only fast if\nI limit the rows since almost all rows have status = 'AC'\n\nsecond query\n tables both have about 10 million rows and it takes a long time as you\ncan see but this person only has approx 160 total documents\n\n\n QUERY PLAN \n------------------------------------------------------------------------\n-------------------------------------------------------------------\n Hash Join (cost=84813.14..1510711.97 rows=48387 width=555) (actual\ntime=83266.854..91166.315 rows=3 loops=1)\n Hash Cond: (\"outer\".documentidentifier =\n\"inner\".dssdocumentidentifier)\n -> Seq Scan on documentversions (cost=0.00..269141.98 rows=9677398\nwidth=415) (actual time=0.056..49812.459 rows=9677398 loops=1)\n -> Hash (cost=83660.05..83660.05 rows=48036 width=140) (actual\ntime=10.833..10.833 rows=3 loops=1)\n -> Bitmap Heap Scan on clinicaldocuments\n(cost=301.13..83660.05 rows=48036 width=140) (actual time=0.243..0.258\nrows=3 loops=1)\n Recheck Cond: (patientidentifier = 690193)\n -> Bitmap Index Scan on ix_cdocpid (cost=0.00..301.13\nrows=48036 width=0) (actual time=0.201..0.201 rows=3 loops=1)\n Index Cond: (patientidentifier = 690193)\n Total runtime: 91166.540 ms\n\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, February 10, 2006 5:52 PM\nTo: Tim Jones\nCc: Scott Marlowe; Dave Dutcher; [email protected]\nSubject: Re: [PERFORM] joining two tables slow due to sequential scan \n\n\"Tim Jones\" <[email protected]> writes:\n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual\n> time=0.203..0.203 rows=0 loops=1)'\n> ...\n> 'Total runtime: 0.392 ms'\n\nHardly seems like evidence of a performance problem ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Feb 2006 17:59:03 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: joining two tables slow due to sequential scan " }, { "msg_contents": "OK, if I'm reading this correctly, it looks like the planner is choosing\na sequential scan because it expects 48,000 rows for that\npatientidentifier, but its actually only getting 3. The planner has the\nnumber of rows right for the sequential scan, so it seems like the stats\nare up to date. I would try increasing the stats for the\npatientindentifier column with 'alter table set statistics...' or\nincreasing the default_statistics_target for the whole DB. 
Once you\nhave changed the stats I believe you need to run analyze again.\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Jones\nSent: Friday, February 10, 2006 4:59 PM\nTo: Tom Lane\nCc: [email protected]\nSubject: Re: [PERFORM] joining two tables slow due to sequential scan \n\nok here is real db\n\nthe first query I had seems to make no sense because it is only fast if\nI limit the rows since almost all rows have status = 'AC'\n\nsecond query\n tables both have about 10 million rows and it takes a long time as you\ncan see but this person only has approx 160 total documents\n\n\n QUERY PLAN \n------------------------------------------------------------------------\n-------------------------------------------------------------------\n Hash Join (cost=84813.14..1510711.97 rows=48387 width=555) (actual\ntime=83266.854..91166.315 rows=3 loops=1)\n Hash Cond: (\"outer\".documentidentifier =\n\"inner\".dssdocumentidentifier)\n -> Seq Scan on documentversions (cost=0.00..269141.98 rows=9677398\nwidth=415) (actual time=0.056..49812.459 rows=9677398 loops=1)\n -> Hash (cost=83660.05..83660.05 rows=48036 width=140) (actual\ntime=10.833..10.833 rows=3 loops=1)\n -> Bitmap Heap Scan on clinicaldocuments\n(cost=301.13..83660.05 rows=48036 width=140) (actual time=0.243..0.258\nrows=3 loops=1)\n Recheck Cond: (patientidentifier = 690193)\n -> Bitmap Index Scan on ix_cdocpid (cost=0.00..301.13\nrows=48036 width=0) (actual time=0.201..0.201 rows=3 loops=1)\n Index Cond: (patientidentifier = 690193)\n Total runtime: 91166.540 ms\n\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, February 10, 2006 5:52 PM\nTo: Tim Jones\nCc: Scott Marlowe; Dave Dutcher; [email protected]\nSubject: Re: [PERFORM] joining two tables slow due to sequential scan \n\n\"Tim Jones\" <[email protected]> writes:\n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual\n> time=0.203..0.203 rows=0 loops=1)'\n> ...\n> 'Total runtime: 0.392 ms'\n\nHardly seems like evidence of a performance problem ...\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Fri, 10 Feb 2006 17:25:20 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining two tables slow due to sequential scan " } ]
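A sketch of the statistics change suggested above (200 is just an example target; remember to re-ANALYZE afterwards):

ALTER TABLE clinicaldocuments ALTER COLUMN patientidentifier SET STATISTICS 200;
ANALYZE clinicaldocuments;
-- or raise default_statistics_target in postgresql.conf and ANALYZE the whole database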
[ { "msg_contents": "I have a question with regard to GEQO optimizer of Postgresql.\n\nFor complex queries with over 12 tables in a join, (12 is the\ndefault value), the Postgresql optimizer by default will not use the dynamic\nprogramming style optimizer. Instead, it uses genetic algorithm to compute a\nsub-optimal query plan. The reason is that GEQO takes sub-seconds to find a\nquery plan while the DP style optimizer will take minutes or even hours to\noptimize a complex query with large join degree.\n\nI am wondering if anyone here ever had complex queries that the GEQO fails\nto work properly, i.e., finds a terrible query plan as compared to one\nfound by DP optimizer (by forcing Postgresql always uses DP). This is\nimportant to me since I am trying to see what type of queries will be worth\nspending a lot of time doing a thorough DP optimization (if it is going to\nbe executed again and again).\n\nthanks a lot!\n\nI have a question with regard to GEQO optimizer of Postgresql.\n \nFor complex queries with over 12 tables in a join, (12 is the\ndefault value), the Postgresql optimizer by default will not use the dynamic programming style optimizer. Instead, it uses genetic algorithm to compute a sub-optimal query plan.  The reason is that GEQO takes sub-seconds to find a query plan while the DP style optimizer will take minutes or even hours to optimize a complex query with large join degree.\n\n \nI am wondering if anyone here ever had complex queries that the GEQO fails to work properly, i.e.,  finds a terrible query plan as compared to one found by DP optimizer (by forcing Postgresql always uses DP).    This is important to me since I am trying to see what type of queries will be worth spending a lot of time doing a thorough DP optimization (if it is going to be executed again and again).\n\n \nthanks a lot!", "msg_date": "Fri, 10 Feb 2006 20:46:14 -0500", "msg_from": "uwcssa <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql geqo optimization" }, { "msg_contents": "On Fri, Feb 10, 2006 at 08:46:14PM -0500, uwcssa wrote:\n> I am wondering if anyone here ever had complex queries that the GEQO fails\n> to work properly, i.e., finds a terrible query plan as compared to one\n> found by DP optimizer (by forcing Postgresql always uses DP). This is\n> important to me since I am trying to see what type of queries will be worth\n> spending a lot of time doing a thorough DP optimization (if it is going to\n> be executed again and again).\n\nThere have been a few problems earlier on this list which might have been the\ngeqo's fault; search the list archives for \"geqo\" or \"genetic\", and you\nshould be able to find them quite easily.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 11 Feb 2006 02:52:06 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql geqo optimization" } ]
[ { "msg_contents": "We've done a lot of testing on large DB's with a lot of \"inserts\" and\nhave a few comments.\n\nThe updates are \"treated\" as a large \"insert\" as we all know from pg's\npoint of view.\n\nWe've run into 2 classes of problems: excessing WAL checkpoints and\naffects of low correlation.\n\nWAL log write's full 8K block for first modification, then only changes.\nThis can be the source of \"undesireable\" behaviour during large batch\ninserts like this. \n\n From your config, a check point will be forced when\n\n(checkpoint_segments * 16 M) < rows * (8K/N*h + (1-h)*8K) * B\n\nWhere h is the \"hitrate\" or correlation between the update scan and the\nindex. Do you have a sense of what this is? In the limits, we have 100%\ncorrelation or 0% correlation. N is the lower cost of putting the\nchange in the WAL entry, not sure what this is, but small, I am\nassuming, say N=100. B is the average number of blocks changed per\nupdated row (assume B=1.1 for your case, heap,serial index have very\nhigh correlation)\n\nIn the 0% correlation case, each updated row will cause the index update\nto read/modify the block. The modified block will be entirely written to\nthe WAL log. After (30 * 16M) / (8K) / 1.1 ~ 55k rows, a checkpoint\nwill be forced and all modified blocks in shared buffers will be written\nout.\n\nIncreasing checkpoint_segments to 300 and seeing if that makes a\ndifference. If so, the excessive WAL checkpoints are your issue. If\nperformance is exactly the same, then I would assume that you have close\nto 0% correlation between the rows in the heap and index.\n\nCan you increase shared_buffers? With a low correlation index, the only\nsolution is to hold the working set of blocks in memory. Also, make\nsure that the checkpoint segments are big enough for you to modify them\nin place, don't want checkpoints occurring....\n\nNote that the more updates you do, the larger the tables/index become\nand the worse the problem becomes. Vacuuming the table is an \"answer\"\nbut unfortunately, it tends to decrease correlation from our\nobservations. :-(\n\n From our observations, dropping index and rebuilding them is not always\npractical, depends on your application; table will be exclusively locked\nduring the transaction due to drop index. \n\nI haven't looked at pg's code for creating an index, but seriously\nsuspect it's doing an extern sort then insert into the index. Such\noperations would have 100% correlation from the index insert point of\nview and the \"sort\" could be in memory or the tape variety (more\nefficient i/o pattern).\n\nSummary, # of indexes, index correlation, pg's multi versioning,\nshared_buffers and checkpoint_segments are interconnected in weird and\nwonderful ways... 
Seldom have found \"simple\" solutions to performance\nproblems.\n\nMarc\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Aaron Turner\n> Sent: Friday, February 10, 2006 3:17 AM\n> To: [email protected]\n> Subject: [PERFORM] 10+hrs vs 15min because of just one index\n> \n> So I'm trying to figure out how to optimize my PG install \n> (8.0.3) to get better performance without dropping one of my indexes.\n> \n> Basically, I have a table of 5M records with 3 columns:\n> \n> pri_key (SERIAL)\n> data char(48)\n> groupid integer\n> \n> there is an additional unique index on the data column.\n> \n> The problem is that when I update the groupid column for all \n> the records, the query takes over 10hrs (after that I just \n> canceled the update). Looking at iostat, top, vmstat shows \n> I'm horribly disk IO bound (for data not WAL, CPU 85-90% \n> iowait) and not swapping.\n> \n> Dropping the unique index on data (which isn't used in the \n> query), running the update and recreating the index runs in \n> under 15 min. \n> Hence it's pretty clear to me that the index is the problem \n> and there's really nothing worth optimizing in my query.\n> \n> As I understand from #postgresql, doing an UPDATE on one \n> column causes all indexes for the effected row to have to be \n> updated due to the way PG replaces the old row with a new one \n> for updates. This seems to explain why dropping the unique \n> index on data solves the performance problem.\n> \n> interesting settings:\n> shared_buffers = 32768\n> maintenance_work_mem = 262144\n> fsync = true\n> wal_sync_method = open_sync\n> wal_buffers = 512\n> checkpoint_segments = 30\n> effective_cache_size = 10000\n> work_mem = <default> (1024 i think?)\n> \n> box:\n> Linux 2.6.9-11EL (CentOS 4.1)\n> 2x Xeon 3.4 HT\n> 2GB of RAM (but Apache and other services are running)\n> 4 disk raid 10 (74G Raptor) for data\n> 4 disk raid 10 (7200rpm) for WAL\n> \n> other then throwing more spindles at the problem, any suggestions?\n> \n> Thanks,\n> Aaron\n> \n> --\n> Aaron Turner\n> http://synfin.net/\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Sun, 12 Feb 2006 12:37:13 -0500", "msg_from": "\"Marc Morin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 10+hrs vs 15min because of just one index" }, { "msg_contents": "On 2/12/06, Marc Morin <[email protected]> wrote:\n> From your config, a check point will be forced when\n>\n> (checkpoint_segments * 16 M) < rows * (8K/N*h + (1-h)*8K) * B\n>\n> Where h is the \"hitrate\" or correlation between the update scan and the\n> index. Do you have a sense of what this is?\n\nI know my checkpoints happen > 30 secs apart, since PG isn't\ncomplaining in my log. I have no clue what the correlation is.\n\n> In the limits, we have 100%\n> correlation or 0% correlation. N is the lower cost of putting the\n> change in the WAL entry, not sure what this is, but small, I am\n> assuming, say N=100. B is the average number of blocks changed per\n> updated row (assume B=1.1 for your case, heap,serial index have very\n> high correlation)\n>\n> In the 0% correlation case, each updated row will cause the index update\n> to read/modify the block. The modified block will be entirely written to\n> the WAL log. 
After (30 * 16M) / (8K) / 1.1 ~ 55k rows, a checkpoint\n> will be forced and all modified blocks in shared buffers will be written\n> out.\n>\n> Increasing checkpoint_segments to 300 and seeing if that makes a\n> difference. If so, the excessive WAL checkpoints are your issue. If\n> performance is exactly the same, then I would assume that you have close\n> to 0% correlation between the rows in the heap and index.\n\nOk, i'll have to give that a try.\n\n> Can you increase shared_buffers? With a low correlation index, the only\n> solution is to hold the working set of blocks in memory. Also, make\n> sure that the checkpoint segments are big enough for you to modify them\n> in place, don't want checkpoints occurring....\n\nI'll have to look at my memory usage on this server... with only 2GB\nand a bunch of other processes running around I'm not sure if I can go\nup much more without causing swapping. Of course RAM is cheap...\n\n> Note that the more updates you do, the larger the tables/index become\n> and the worse the problem becomes. Vacuuming the table is an \"answer\"\n> but unfortunately, it tends to decrease correlation from our\n> observations. :-(\n\nGood to know.\n\n> From our observations, dropping index and rebuilding them is not always\n> practical, depends on your application; table will be exclusively locked\n> during the transaction due to drop index.\n\nYep. In my case it's not a huge problem right now, but I know it will\nbecome a serious one sooner or later.\n\nThanks a lot Marc. Lots of useful info.\n\n--\nAaron Turner\nhttp://synfin.net/\n", "msg_date": "Sun, 12 Feb 2006 11:04:37 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 10+hrs vs 15min because of just one index" } ]
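Plugging the original settings into the formula above: 30 checkpoint segments * 16MB of WAL, divided by 8kB full-page images and ~1.1 blocks touched per row, gives roughly 55k updated rows between forced checkpoints, which is tiny next to 5M rows. A hedged postgresql.conf sketch for the experiment suggested above (illustrative values, not a general recommendation):

checkpoint_segments = 300         # ~4.8GB of WAL between forced checkpoints instead of ~480MB
checkpoint_warning = 30           # seconds; logs a warning when checkpoints come too close together
shared_buffers = 65536            # 8kB pages (~512MB), only if the machine has RAM to spare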
[ { "msg_contents": "Hi all,\n\n My database has an SQL function. The result comes in 30-40 seconds when i use the SQL function. On the other hand; The result comes\n 300-400 milliseconds when i run the SQL statement. Any idea ?? My database is Postgresql 8.1.2..\n \n Function is below :\n\n CREATE OR REPLACE FUNCTION fn_online_seferler_satis(\"varchar\", date, int4, \"varchar\", \"varchar\")\n RETURNS SETOF record AS\n $BODY$\n SELECT (S.KALKIS_YERI||' '||S.VARIS_YERI||' '||S.SAAT)::varchar AS SEFER_BILGI,\n sum((i.bilet_ucreti + coalesce(i.police_ucreti,0)) - coalesce(i.int_artik_ucret,0)) as top_satis,\n count(1)::int4 as top_koltuk\n FROM T_KOLTUK_ISLEM I,\n T_KOLTUK_SON_DURUM SD,\n T_LOKAL_PLAN LP,\n W_SEFERLER S\n WHERE I.FIRMA_NO = SD.FIRMA_NO\n AND I.HAT_NO = SD.HAT_NO\n AND I.SEFER_KOD = SD.SEFER_KOD\n AND I.PLAN_TARIHI = SD.PLAN_TARIHI\n AND I.BIN_YER_KOD = SD.BIN_YER_KOD\n AND I.KOLTUK_NO = SD.KOLTUK_NO\n AND I.KOD = SD.ISLEM_KOD\n AND SD.ISLEM = 'S'\n AND LP.FIRMA_NO = I.FIRMA_NO\n AND LP.HAT_NO = I.HAT_NO\n AND LP.SEFER_KOD = I.SEFER_KOD\n AND LP.PLAN_TARIHI = I.PLAN_TARIHI\n AND LP.YER_KOD = I.BIN_YER_KOD\n AND I.FIRMA_NO = $1\n AND S.FIRMA_NO = LP.FIRMA_NO \n AND S.HAT_NO = LP.HAT_NO\n AND S.KOD = LP.SEFER_KOD\n AND S.IPTAL = 'H'\n AND ((I.ISLEM_TARIHI = $2 AND $5 = 'I') OR (LP.KALKIS_TARIHI = $2 AND $5 = 'K'))\n AND (((LP.LOKAL_KOD = $3 AND $4 = 'K')) OR ((I.ypt_lcl_kod = $3 AND $4 = 'I'))) \n GROUP BY S.KALKIS_YERI,S.VARIS_YERI,S.SAAT;\n $BODY$\n LANGUAGE 'sql' VOLATILE;\n\n Adnan DURSUN\n ASRIN Bilişim Ltd.Şti\n Turkey\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n            Hi \nall,\n \n            My \n database has an SQL function. The result comes in 30-40 seconds \n when i use the SQL function. On the other hand; The result comes\n        300-400 milliseconds when \n i run the SQL statement. Any idea ?? 
My database is Postgresql \n 8.1.2..\n        \n            Function is \n below :\n \nCREATE OR REPLACE FUNCTION fn_online_seferler_satis(\"varchar\", date, \n int4, \"varchar\", \"varchar\")  RETURNS SETOF record \n AS$BODY$SELECT  (S.KALKIS_YERI||' '||S.VARIS_YERI||' \n '||S.SAAT)::varchar AS SEFER_BILGI, sum((i.bilet_ucreti + \n coalesce(i.police_ucreti,0)) - coalesce(i.int_artik_ucret,0)) as \n top_satis, count(1)::int4 as top_koltuk   FROM \n T_KOLTUK_ISLEM I, T_KOLTUK_SON_DURUM SD, T_LOKAL_PLAN \n LP, W_SEFERLER S  WHERE I.FIRMA_NO = \n SD.FIRMA_NO    AND I.HAT_NO = \n SD.HAT_NO    AND I.SEFER_KOD = \n SD.SEFER_KOD    AND I.PLAN_TARIHI = \n SD.PLAN_TARIHI    AND I.BIN_YER_KOD = \n SD.BIN_YER_KOD    AND I.KOLTUK_NO = \n SD.KOLTUK_NO    AND I.KOD = \n SD.ISLEM_KOD    AND SD.ISLEM = \n 'S'    AND LP.FIRMA_NO = \n I.FIRMA_NO    AND LP.HAT_NO = \n I.HAT_NO    AND LP.SEFER_KOD = \n I.SEFER_KOD    AND LP.PLAN_TARIHI = \n I.PLAN_TARIHI    AND LP.YER_KOD = \n I.BIN_YER_KOD    AND I.FIRMA_NO = \n $1    AND S.FIRMA_NO = LP.FIRMA_NO \n     AND S.HAT_NO = LP.HAT_NO    AND \n S.KOD = LP.SEFER_KOD    AND S.IPTAL = \n 'H'    AND ((I.ISLEM_TARIHI =  $2 AND $5 = 'I') OR \n (LP.KALKIS_TARIHI = $2 AND $5 = 'K'))    AND \n (((LP.LOKAL_KOD = $3 AND $4 = 'K')) OR  ((I.ypt_lcl_kod = $3 AND $4 = \n 'I'))) GROUP BY S.KALKIS_YERI,S.VARIS_YERI,S.SAAT;\n$BODY$  LANGUAGE 'sql' VOLATILE;\n \nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\nTurkey", "msg_date": "Sun, 12 Feb 2006 22:25:28 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL Function Performance" }, { "msg_contents": "On Sun, Feb 12, 2006 at 10:25:28PM +0200, Adnan DURSUN wrote:\n> My database has an SQL function. The result comes in 30-40 seconds\n> when i use the SQL function. On the other hand; The result comes\n> 300-400 milliseconds when i run the SQL statement. Any idea ??\n\nHave you analyzed the tables? If that's not the problem then could\nyou post the EXPLAIN ANALYZE output for the direct query and for a\nprepared query? For the prepared query do this:\n\nPREPARE stmt (varchar, date, int4, varchar, varchar) AS SELECT ... ;\n\nwhere \"...\" is the same SQL as in the function body, including the\nnumbered parameters ($1, $2, etc.). To execute the query do this:\n\nEXPLAIN ANALYZE EXECUTE stmt (...);\n\nWhere \"...\" is the same parameter list you'd pass to the function\n(the same values you used in the direct query).\n\nIf you need to re-prepare the query then run \"DEALLOCATE stmt\"\nbefore doing so.\n\n-- \nMichael Fuhr\n", "msg_date": "Sun, 12 Feb 2006 22:46:01 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "If you have only recently analyzed the tables in the query, close your psql session (if that's what you were using) and then restart it. I've gotten burned by asking a query using the function, which I believe is when PG creates the plan for the function, and then making significant changes to the tables behind it (new index, bulk insert, etc.). By starting a new session, the function will be re-planned according to up to date statistics or using newly created indices.\n", "msg_date": "Sun, 12 Feb 2006 17:05:27 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "ok I am retarded :) Apparently I thought I had done analyze on these\ntables but I actually had not and that was all that was needed. but\nthanks for the help.\n\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Dave Dutcher [mailto:[email protected]] \nSent: Friday, February 10, 2006 6:25 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: RE: [PERFORM] joining two tables slow due to sequential scan \n\nOK, if I'm reading this correctly, it looks like the planner is choosing\na sequential scan because it expects 48,000 rows for that\npatientidentifier, but its actually only getting 3. The planner has the\nnumber of rows right for the sequential scan, so it seems like the stats\nare up to date. I would try increasing the stats for the\npatientindentifier column with 'alter table set statistics...' or\nincreasing the default_statistics_target for the whole DB. Once you\nhave changed the stats I believe you need to run analyze again.\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tim Jones\nSent: Friday, February 10, 2006 4:59 PM\nTo: Tom Lane\nCc: [email protected]\nSubject: Re: [PERFORM] joining two tables slow due to sequential scan \n\nok here is real db\n\nthe first query I had seems to make no sense because it is only fast if\nI limit the rows since almost all rows have status = 'AC'\n\nsecond query\n tables both have about 10 million rows and it takes a long time as you\ncan see but this person only has approx 160 total documents\n\n\n QUERY PLAN \n------------------------------------------------------------------------\n-------------------------------------------------------------------\n Hash Join (cost=84813.14..1510711.97 rows=48387 width=555) (actual\ntime=83266.854..91166.315 rows=3 loops=1)\n Hash Cond: (\"outer\".documentidentifier =\n\"inner\".dssdocumentidentifier)\n -> Seq Scan on documentversions (cost=0.00..269141.98 rows=9677398\nwidth=415) (actual time=0.056..49812.459 rows=9677398 loops=1)\n -> Hash (cost=83660.05..83660.05 rows=48036 width=140) (actual\ntime=10.833..10.833 rows=3 loops=1)\n -> Bitmap Heap Scan on clinicaldocuments\n(cost=301.13..83660.05 rows=48036 width=140) (actual time=0.243..0.258\nrows=3 loops=1)\n Recheck Cond: (patientidentifier = 690193)\n -> Bitmap Index Scan on ix_cdocpid (cost=0.00..301.13\nrows=48036 width=0) (actual time=0.201..0.201 rows=3 loops=1)\n Index Cond: (patientidentifier = 690193) Total\nruntime: 91166.540 ms\n\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Friday, February 10, 2006 5:52 PM\nTo: Tim Jones\nCc: Scott Marlowe; Dave Dutcher; [email protected]\nSubject: Re: [PERFORM] joining two tables slow due to sequential scan \n\n\"Tim Jones\" <[email protected]> writes:\n> QUERY PLAN\n> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual\n> time=0.203..0.203 rows=0 loops=1)'\n> ...\n> 'Total runtime: 0.392 ms'\n\nHardly seems like evidence of a performance problem ...\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Mon, 13 Feb 2006 11:21:58 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: joining two tables slow due to sequential scan " } ]
[ { "msg_contents": "From: Michael Fuhr\n Date: 02/13/06 07:46:05\n To: Adnan DURSUN\n Cc: [email protected]\n Subject: Re: [PERFORM] SQL Function Performance\n\n On Sun, Feb 12, 2006 at 10:25:28PM +0200, Adnan DURSUN wrote:\n >> My database has an SQL function. The result comes in 30-40 seconds\n >> when i use the SQL function. On the other hand; The result comes\n >> 300-400 milliseconds when i run the SQL statement. Any idea ??\n\n >Have you analyzed the tables? If that's not the problem then could\n >you post the EXPLAIN ANALYZE output for the direct query and for a\n >prepared query? For the prepared query do this:\n\n EXPLAIN ANALYZE for direct query :\n\n QUERY PLAN\n \"HashAggregate (cost=29.37..29.40 rows=1 width=58) (actual time=12.114..12.114 rows=0 loops=1)\"\n \" -> Nested Loop (cost=9.55..29.36 rows=1 width=58) (actual time=12.107..12.107 rows=0 loops=1)\"\n \" Join Filter: (((\"\"inner\"\".hat_no)::text = (\"\"outer\"\".hat_no)::text) AND (\"\"inner\"\".sefer_kod = \"\"outer\"\".sefer_kod) AND (\"\"inner\"\".plan_tarihi = \"\"outer\"\".plan_tarihi) AND (\"\"inner\"\".bin_yer_kod = \"\"outer\"\".bin_yer_kod) AND (\"\"inner\"\".koltuk_no = \"\"outer\"\".koltuk_no))\"\n \" -> Nested Loop (cost=9.55..26.15 rows=1 width=93) (actual time=12.102..12.102 rows=0 loops=1)\"\n \" -> Nested Loop (cost=9.55..20.60 rows=1 width=65) (actual time=8.984..12.012 rows=1 loops=1)\"\n \" -> Nested Loop (cost=9.55..14.62 rows=1 width=48) (actual time=6.155..7.919 rows=41 loops=1)\"\n \" Join Filter: (\"\"outer\"\".sefer_tip_kod = \"\"inner\"\".kod)\"\n \" -> Hash Join (cost=9.55..13.58 rows=1 width=52) (actual time=6.129..6.846 rows=41 loops=1)\"\n \" Hash Cond: (\"\"outer\"\".kod = \"\"inner\"\".varis_yer_kod)\"\n \" -> Seq Scan on t_yer y2 (cost=0.00..3.44 rows=115 width=14) (actual time=0.018..0.374 rows=115 loops=1)\"\n \" Filter: ((iptal)::text = 'H'::text)\"\n \" -> Hash (cost=9.55..9.55 rows=1 width=46) (actual time=6.058..6.058 rows=41 loops=1)\"\n \" -> Merge Join (cost=9.45..9.55 rows=1 width=46) (actual time=4.734..5.894 rows=41 loops=1)\"\n \" Merge Cond: (\"\"outer\"\".kod = \"\"inner\"\".kalkis_yer_kod)\"\n \" -> Index Scan using t_yer_pkey on t_yer y1 (cost=0.00..9.62 rows=115 width=14) (actual time=0.021..0.183 rows=40 loops=1)\"\n \" Filter: ((iptal)::text = 'H'::text)\"\n \" -> Sort (cost=9.45..9.45 rows=1 width=40) (actual time=4.699..4.768 rows=41 loops=1)\"\n \" Sort Key: h.kalkis_yer_kod\"\n \" -> Nested Loop (cost=4.51..9.44 rows=1 width=40) (actual time=0.410..4.427 rows=41 loops=1)\"\n \" Join Filter: ((\"\"inner\"\".\"\"no\"\")::text = (\"\"outer\"\".hat_no)::text)\"\n \" -> Hash Join (cost=4.51..8.09 rows=1 width=27) (actual time=0.384..1.036 rows=41 loops=1)\"\n \" Hash Cond: ((\"\"outer\"\".durumu)::text = (\"\"inner\"\".kod)::text)\"\n \" -> Hash Join (cost=2.25..5.80 rows=3 width=32) (actual time=0.193..0.652 rows=41 loops=1)\"\n \" Hash Cond: ((\"\"outer\"\".ek_dev)::text = (\"\"inner\"\".kod)::text)\"\n \" -> Seq Scan on t_seferler s (cost=0.00..3.21 rows=41 width=37) (actual time=0.009..0.256 rows=41 loops=1)\"\n \" Filter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND ((firma_no)::text = '1'::text))\"\n \" -> Hash (cost=2.25..2.25 rows=2 width=5) (actual time=0.156..0.156 rows=2 loops=1)\"\n \" -> Seq Scan on t_domains d1 (cost=0.00..2.25 rows=2 width=5) (actual time=0.055..0.138 rows=2 loops=1)\"\n \" Filter: ((name)::text = 'EKDEV'::text)\"\n \" -> Hash (cost=2.25..2.25 rows=2 width=5) (actual time=0.164..0.164 rows=2 loops=1)\"\n \" -> Seq 
Scan on t_domains d2 (cost=0.00..2.25 rows=2 width=5) (actual time=0.057..0.142 rows=2 loops=1)\"\n \" Filter: ((name)::text = 'SFR_DURUMU'::text)\"\n \" -> Seq Scan on t_hatlar h (cost=0.00..1.23 rows=10 width=18) (actual time=0.004..0.042 rows=10 loops=41)\"\n \" Filter: ('1'::text = (firma_no)::text)\"\n \" -> Seq Scan on t_sefer_tip t (cost=0.00..1.03 rows=1 width=9) (actual time=0.005..0.009 rows=1 loops=41)\"\n \" Filter: (((iptal)::text = 'H'::text) AND ('1'::text = (firma_no)::text))\"\n \" -> Index Scan using t_lokal_plan_sefer_liste_idx on t_lokal_plan lp (cost=0.00..5.97 rows=1 width=22) (actual time=0.091..0.092 rows=0 loops=41)\"\n \" Index Cond: (((lp.firma_no)::text = '1'::text) AND ((\"\"outer\"\".hat_no)::text = (lp.hat_no)::text) AND (\"\"outer\"\".kod = lp.sefer_kod) AND (lp.kalkis_tarihi = '2006-02-13'::date))\"\n \" Filter: (lokal_kod = 62)\"\n \" -> Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum sd (cost=0.00..5.53 rows=1 width=28) (actual time=0.079..0.079 rows=0 loops=1)\"\n \" Index Cond: (('1'::text = (sd.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text = (sd.hat_no)::text) AND (\"\"outer\"\".kod = sd.sefer_kod) AND (\"\"outer\"\".plan_tarihi = sd.plan_tarihi) AND (\"\"outer\"\".yer_kod = sd.bin_yer_kod))\"\n \" Filter: ((islem)::text = 'S'::text)\"\n \" -> Index Scan using t_koltuk_islem_kod_ukey on t_koltuk_islem i (cost=0.00..3.18 rows=1 width=57) (never executed)\"\n \" Index Cond: (i.kod = \"\"outer\"\".islem_kod)\"\n \" Filter: ((firma_no)::text = '1'::text)\"\n \"Total runtime: 13.984 ms\"\n\n\n Adnan DURSUN\n ASRIN Bilişim Ltd.Şti\n Ankara / TURKEY\n\n\n\n\n\n\n\n \n\n\n\n\nFrom: Michael Fuhr\nDate: 02/13/06 \n 07:46:05\nTo: Adnan DURSUN\nCc: [email protected]\nSubject: Re: [PERFORM] \n SQL Function Performance\n \nOn Sun, Feb 12, 2006 at 10:25:28PM +0200, Adnan DURSUN wrote:\n>> My database has an SQL function. The result comes in 30-40 \n seconds\n>> when i use the SQL function. On the other hand; The result \n comes\n>> 300-400 milliseconds when i run the SQL statement. Any idea \n ??\n \n>Have you analyzed the tables?  If that's not the problem \n then could\n>you post the EXPLAIN ANALYZE output for the direct query and for \n a\n>prepared query?  
For the prepared query do this:\n \nEXPLAIN ANALYZE for direct query :\n \nQUERY PLAN\"HashAggregate  (cost=29.37..29.40 rows=1 width=58) \n (actual time=12.114..12.114 rows=0 loops=1)\"\"  ->  Nested \n Loop  (cost=9.55..29.36 rows=1 width=58) (actual time=12.107..12.107 \n rows=0 loops=1)\"\"        Join Filter: \n (((\"\"inner\"\".hat_no)::text = (\"\"outer\"\".hat_no)::text) AND \n (\"\"inner\"\".sefer_kod = \"\"outer\"\".sefer_kod) AND (\"\"inner\"\".plan_tarihi = \n \"\"outer\"\".plan_tarihi) AND (\"\"inner\"\".bin_yer_kod = \"\"outer\"\".bin_yer_kod) \n AND (\"\"inner\"\".koltuk_no = \n \"\"outer\"\".koltuk_no))\"\"        \n ->  Nested Loop  (cost=9.55..26.15 rows=1 width=93) (actual \n time=12.102..12.102 rows=0 \n loops=1)\"\"              \n ->  Nested Loop  (cost=9.55..20.60 rows=1 width=65) (actual \n time=8.984..12.012 rows=1 \n loops=1)\"\"                    \n ->  Nested Loop  (cost=9.55..14.62 rows=1 width=48) (actual \n time=6.155..7.919 rows=41 \n loops=1)\"\"                          \n Join Filter: (\"\"outer\"\".sefer_tip_kod = \n \"\"inner\"\".kod)\"\"                          \n ->  Hash Join  (cost=9.55..13.58 rows=1 width=52) (actual \n time=6.129..6.846 rows=41 \n loops=1)\"\"                                \n Hash Cond: (\"\"outer\"\".kod = \n \"\"inner\"\".varis_yer_kod)\"\"                                \n ->  Seq Scan on t_yer y2  (cost=0.00..3.44 rows=115 width=14) \n (actual time=0.018..0.374 rows=115 \n loops=1)\"\"                                      \n Filter: ((iptal)::text = \n 'H'::text)\"\"                                \n ->  Hash  (cost=9.55..9.55 rows=1 width=46) (actual \n time=6.058..6.058 rows=41 \n loops=1)\"\"                                      \n ->  Merge Join  (cost=9.45..9.55 rows=1 width=46) (actual \n time=4.734..5.894 rows=41 \n loops=1)\"\"                                            \n Merge Cond: (\"\"outer\"\".kod = \n \"\"inner\"\".kalkis_yer_kod)\"\"                                            \n ->  Index Scan using t_yer_pkey on t_yer y1  (cost=0.00..9.62 \n rows=115 width=14) (actual time=0.021..0.183 rows=40 \n loops=1)\"\"                                                  \n Filter: ((iptal)::text = \n 'H'::text)\"\"                                            \n ->  Sort  (cost=9.45..9.45 rows=1 width=40) (actual \n time=4.699..4.768 rows=41 \n loops=1)\"\"                                                  \n Sort Key: \n h.kalkis_yer_kod\"\"                                                  \n ->  Nested Loop  (cost=4.51..9.44 rows=1 width=40) (actual \n time=0.410..4.427 rows=41 \n loops=1)\"\"                                                        \n Join Filter: ((\"\"inner\"\".\"\"no\"\")::text = \n (\"\"outer\"\".hat_no)::text)\"\"                                                        \n ->  Hash Join  (cost=4.51..8.09 rows=1 width=27) (actual \n time=0.384..1.036 rows=41 \n loops=1)\"\"                                                              \n Hash Cond: ((\"\"outer\"\".durumu)::text = \n (\"\"inner\"\".kod)::text)\"\"                                                              \n ->  Hash Join  (cost=2.25..5.80 rows=3 width=32) (actual \n time=0.193..0.652 rows=41 \n loops=1)\"\"                                                                    \n Hash Cond: ((\"\"outer\"\".ek_dev)::text = \n (\"\"inner\"\".kod)::text)\"\"                                                                    \n ->  Seq Scan on t_seferler s  (cost=0.00..3.21 rows=41 \n width=37) (actual time=0.009..0.256 rows=41 \n loops=1)\"\" 
                                                                         \n Filter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND \n ((firma_no)::text = \n '1'::text))\"\"                                                                    \n ->  Hash  (cost=2.25..2.25 rows=2 width=5) (actual \n time=0.156..0.156 rows=2 \n loops=1)\"\"                                                                          \n ->  Seq Scan on t_domains d1  (cost=0.00..2.25 rows=2 width=5) \n (actual time=0.055..0.138 rows=2 \n loops=1)\"\"                                                                                \n Filter: ((name)::text = \n 'EKDEV'::text)\"\"                                                              \n ->  Hash  (cost=2.25..2.25 rows=2 width=5) (actual \n time=0.164..0.164 rows=2 \n loops=1)\"\"                                                                    \n ->  Seq Scan on t_domains d2  (cost=0.00..2.25 rows=2 width=5) \n (actual time=0.057..0.142 rows=2 \n loops=1)\"\"                                                                          \n Filter: ((name)::text = \n 'SFR_DURUMU'::text)\"\"                                                        \n ->  Seq Scan on t_hatlar h  (cost=0.00..1.23 rows=10 width=18) \n (actual time=0.004..0.042 rows=10 \n loops=41)\"\"                                                              \n Filter: ('1'::text = \n (firma_no)::text)\"\"                          \n ->  Seq Scan on t_sefer_tip t  (cost=0.00..1.03 rows=1 width=9) \n (actual time=0.005..0.009 rows=1 \n loops=41)\"\"                                \n Filter: (((iptal)::text = 'H'::text) AND ('1'::text = \n (firma_no)::text))\"\"                    \n ->  Index Scan using t_lokal_plan_sefer_liste_idx on t_lokal_plan \n lp  (cost=0.00..5.97 rows=1 width=22) (actual time=0.091..0.092 rows=0 \n loops=41)\"\"                          \n Index Cond: (((lp.firma_no)::text = '1'::text) AND ((\"\"outer\"\".hat_no)::text \n = (lp.hat_no)::text) AND (\"\"outer\"\".kod = lp.sefer_kod) AND \n (lp.kalkis_tarihi = \n '2006-02-13'::date))\"\"                          \n Filter: (lokal_kod = \n 62)\"\"              \n ->  Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum \n sd  (cost=0.00..5.53 rows=1 width=28) (actual time=0.079..0.079 rows=0 \n loops=1)\"\"                    \n Index Cond: (('1'::text = (sd.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text \n = (sd.hat_no)::text) AND (\"\"outer\"\".kod = sd.sefer_kod) AND \n (\"\"outer\"\".plan_tarihi = sd.plan_tarihi) AND (\"\"outer\"\".yer_kod = \n sd.bin_yer_kod))\"\"                    \n Filter: ((islem)::text = \n 'S'::text)\"\"        ->  Index \n Scan using t_koltuk_islem_kod_ukey on t_koltuk_islem i  \n (cost=0.00..3.18 rows=1 width=57) (never \n executed)\"\"              \n Index Cond: (i.kod = \n \"\"outer\"\".islem_kod)\"\"              \n Filter: ((firma_no)::text = '1'::text)\"\"Total runtime: 13.984 \n ms\"\n \nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\nAnkara / TURKEY", "msg_date": "Mon, 13 Feb 2006 21:44:40 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "I've run into this issue. It basically comes down to the plan that is being used inside the function is not the same as the plan used when you issue the query manually outside of the function. Although I'm no expert on when plans are prepared and re-evaluated for functions, I know that they are not re-evaluated each time to execute the function.\n\nSo, what I did in such cases was to build up the sql query in a text variable inside my function, and then use the EXECUTE command inside the function. When you use the EXECUTE command, the plan is prepared each time. I know there is some minimal overhead of preparing the plan each time, but it seems like it's minor compared to the saving's you'll get.\n\n- Mark\n\n\n\n\n\n\nRE: [PERFORM] SQL Function Performance\n\n\n\nI've run into this issue. It basically comes down to the plan that is being used inside the function is not the same as the plan used when you issue the query manually outside of the function.  Although I'm no expert on when plans are prepared and re-evaluated for functions, I know that they are not re-evaluated each time to execute the function.\n\nSo, what I did in such cases was to build up the sql query in a text variable inside my function, and then use the EXECUTE command inside the function.  When you use the EXECUTE command, the plan is prepared each time.  I know there is some minimal overhead of preparing the plan each time, but it seems like it's minor compared to the saving's you'll get.\n\n- Mark", "msg_date": "Mon, 13 Feb 2006 12:06:27 -0800", "msg_from": "\"Mark Liberman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "From: Michael Fuhr\n Date: 02/13/06 07:46:05\n To: Adnan DURSUN\n Cc: [email protected]\n Subject: Re: [PERFORM] SQL Function Performance\n\n On Sun, Feb 12, 2006 at 10:25:28PM +0200, Adnan DURSUN wrote:\n >> My database has an SQL function. The result comes in 30-40 seconds\n >> when i use the SQL function. On the other hand; The result comes\n >> 300-400 milliseconds when i run the SQL statement. Any idea ??\n\n >Have you analyzed the tables? If that's not the problem then could\n >you post the EXPLAIN ANALYZE output for the direct query and for a\n >prepared query? For the prepared query do this:\n\n >EXPLAIN ANALYZE EXECUTE stmt (...);\n\n Here is the EXPLAIN ANALYZE output for prepared statement :\n\n QUERY PLAN\n \"HashAggregate (cost=29.37..29.40 rows=1 width=58) (actual time=10.600..10.600 rows=0 loops=1)\"\n \" -> Nested Loop (cost=9.55..29.36 rows=1 width=58) (actual time=10.594..10.594 rows=0 loops=1)\"\n \" Join Filter: (((\"\"inner\"\".hat_no)::text = (\"\"outer\"\".hat_no)::text) AND (\"\"inner\"\".sefer_kod = \"\"outer\"\".sefer_kod) AND (\"\"inner\"\".plan_tarihi = \"\"outer\"\".plan_tarihi) AND (\"\"inner\"\".bin_yer_kod = \"\"outer\"\".bin_yer_kod) AND (\"\"inner\"\".koltuk_no = \"\"outer\"\".koltuk_no))\"\n \" -> Nested Loop (cost=9.55..26.15 rows=1 width=93) (actual time=10.588..10.588 rows=0 loops=1)\"\n \" -> Nested Loop (cost=9.55..20.60 rows=1 width=65) (actual time=7.422..10.499 rows=1 loops=1)\"\n \" -> Nested Loop (cost=9.55..14.62 rows=1 width=48) (actual time=5.455..7.247 rows=41 loops=1)\"\n \" Join Filter: (\"\"outer\"\".sefer_tip_kod = \"\"inner\"\".kod)\"\n \" -> Hash Join (cost=9.55..13.58 rows=1 width=52) (actual time=5.432..6.131 rows=41 loops=1)\"\n \" Hash Cond: (\"\"outer\"\".kod = \"\"inner\"\".varis_yer_kod)\"\n \" -> Seq Scan on t_yer y2 (cost=0.00..3.44 rows=115 width=14) (actual time=0.018..0.375 rows=115 loops=1)\"\n \" Filter: ((iptal)::text = 'H'::text)\"\n \" -> Hash (cost=9.55..9.55 rows=1 width=46) (actual time=5.352..5.352 rows=41 loops=1)\"\n \" -> Merge Join (cost=9.45..9.55 rows=1 width=46) (actual time=4.713..5.182 rows=41 loops=1)\"\n \" Merge Cond: (\"\"outer\"\".kod = \"\"inner\"\".kalkis_yer_kod)\"\n \" -> Index Scan using t_yer_pkey on t_yer y1 (cost=0.00..9.62 rows=115 width=14) (actual time=0.021..0.176 rows=40 loops=1)\"\n \" Filter: ((iptal)::text = 'H'::text)\"\n \" -> Sort (cost=9.45..9.45 rows=1 width=40) (actual time=4.678..4.747 rows=41 loops=1)\"\n \" Sort Key: h.kalkis_yer_kod\"\n \" -> Nested Loop (cost=4.51..9.44 rows=1 width=40) (actual time=0.412..4.389 rows=41 loops=1)\"\n \" Join Filter: ((\"\"inner\"\".\"\"no\"\")::text = (\"\"outer\"\".hat_no)::text)\"\n \" -> Hash Join (cost=4.51..8.09 rows=1 width=27) (actual time=0.386..1.137 rows=41 loops=1)\"\n \" Hash Cond: ((\"\"outer\"\".durumu)::text = (\"\"inner\"\".kod)::text)\"\n \" -> Hash Join (cost=2.25..5.80 rows=3 width=32) (actual time=0.193..0.751 rows=41 loops=1)\"\n \" Hash Cond: ((\"\"outer\"\".ek_dev)::text = (\"\"inner\"\".kod)::text)\"\n \" -> Seq Scan on t_seferler s (cost=0.00..3.21 rows=41 width=37) (actual time=0.009..0.258 rows=41 loops=1)\"\n \" Filter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND ((firma_no)::text = '1'::text))\"\n \" -> Hash (cost=2.25..2.25 rows=2 width=5) (actual time=0.141..0.141 rows=2 loops=1)\"\n \" -> Seq Scan on t_domains d1 (cost=0.00..2.25 rows=2 width=5) (actual time=0.048..0.131 rows=2 loops=1)\"\n \" Filter: ((name)::text = 'EKDEV'::text)\"\n \" -> Hash (cost=2.25..2.25 rows=2 
width=5) (actual time=0.160..0.160 rows=2 loops=1)\"\n \" -> Seq Scan on t_domains d2 (cost=0.00..2.25 rows=2 width=5) (actual time=0.056..0.139 rows=2 loops=1)\"\n \" Filter: ((name)::text = 'SFR_DURUMU'::text)\"\n \" -> Seq Scan on t_hatlar h (cost=0.00..1.23 rows=10 width=18) (actual time=0.004..0.045 rows=10 loops=41)\"\n \" Filter: ('1'::text = (firma_no)::text)\"\n \" -> Seq Scan on t_sefer_tip t (cost=0.00..1.03 rows=1 width=9) (actual time=0.004..0.009 rows=1 loops=41)\"\n \" Filter: (((iptal)::text = 'H'::text) AND ('1'::text = (firma_no)::text))\"\n \" -> Index Scan using t_lokal_plan_sefer_liste_idx on t_lokal_plan lp (cost=0.00..5.97 rows=1 width=22) (actual time=0.071..0.072 rows=0 loops=41)\"\n \" Index Cond: (((lp.firma_no)::text = '1'::text) AND ((\"\"outer\"\".hat_no)::text = (lp.hat_no)::text) AND (\"\"outer\"\".kod = lp.sefer_kod) AND (lp.kalkis_tarihi = '2006-02-13'::date))\"\n \" Filter: (lokal_kod = 62)\"\n \" -> Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum sd (cost=0.00..5.53 rows=1 width=28) (actual time=0.078..0.078 rows=0 loops=1)\"\n \" Index Cond: (('1'::text = (sd.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text = (sd.hat_no)::text) AND (\"\"outer\"\".kod = sd.sefer_kod) AND (\"\"outer\"\".plan_tarihi = sd.plan_tarihi) AND (\"\"outer\"\".yer_kod = sd.bin_yer_kod))\"\n \" Filter: ((islem)::text = 'S'::text)\"\n \" -> Index Scan using t_koltuk_islem_kod_ukey on t_koltuk_islem i (cost=0.00..3.18 rows=1 width=57) (never executed)\"\n \" Index Cond: (i.kod = \"\"outer\"\".islem_kod)\"\n \" Filter: ((firma_no)::text = '1'::text)\"\n \"Total runtime: 11.856 ms\"\n\n\n Adnan DURSUN\n ASRIN Bilişim Ltd.Şti\n Ankara / TURKEY\n\n\n\n\n\n\n\n \n\n\n\nFrom: Michael Fuhr\nDate: 02/13/06 \n 07:46:05\nTo: Adnan DURSUN\nCc: [email protected]\nSubject: Re: [PERFORM] SQL \n Function Performance\n \nOn Sun, Feb 12, 2006 at 10:25:28PM +0200, Adnan DURSUN wrote:\n>> My database has an SQL function. The result comes in 30-40 \n seconds\n>> when i use the SQL function. On the other hand; The result \n comes\n>> 300-400 milliseconds when i run the SQL statement. Any idea \n ??\n \n>Have you analyzed the tables?  If that's not the problem \n then could\n>you post the EXPLAIN ANALYZE output for the direct query and for \n a\n>prepared query?  
For the prepared query do this:\n \n>EXPLAIN ANALYZE EXECUTE stmt (...);\n \n Here is the EXPLAIN ANALYZE output for prepared statement :\n \nQUERY PLAN\"HashAggregate  (cost=29.37..29.40 rows=1 width=58) \n (actual time=10.600..10.600 rows=0 loops=1)\"\"  ->  Nested \n Loop  (cost=9.55..29.36 rows=1 width=58) (actual time=10.594..10.594 \n rows=0 loops=1)\"\"        Join Filter: \n (((\"\"inner\"\".hat_no)::text = (\"\"outer\"\".hat_no)::text) AND \n (\"\"inner\"\".sefer_kod = \"\"outer\"\".sefer_kod) AND (\"\"inner\"\".plan_tarihi = \n \"\"outer\"\".plan_tarihi) AND (\"\"inner\"\".bin_yer_kod = \"\"outer\"\".bin_yer_kod) AND \n (\"\"inner\"\".koltuk_no = \n \"\"outer\"\".koltuk_no))\"\"        \n ->  Nested Loop  (cost=9.55..26.15 rows=1 width=93) (actual \n time=10.588..10.588 rows=0 \n loops=1)\"\"              \n ->  Nested Loop  (cost=9.55..20.60 rows=1 width=65) (actual \n time=7.422..10.499 rows=1 \n loops=1)\"\"                    \n ->  Nested Loop  (cost=9.55..14.62 rows=1 width=48) (actual \n time=5.455..7.247 rows=41 \n loops=1)\"\"                          \n Join Filter: (\"\"outer\"\".sefer_tip_kod = \n \"\"inner\"\".kod)\"\"                          \n ->  Hash Join  (cost=9.55..13.58 rows=1 width=52) (actual \n time=5.432..6.131 rows=41 \n loops=1)\"\"                                \n Hash Cond: (\"\"outer\"\".kod = \n \"\"inner\"\".varis_yer_kod)\"\"                                \n ->  Seq Scan on t_yer y2  (cost=0.00..3.44 rows=115 width=14) \n (actual time=0.018..0.375 rows=115 \n loops=1)\"\"                                      \n Filter: ((iptal)::text = \n 'H'::text)\"\"                                \n ->  Hash  (cost=9.55..9.55 rows=1 width=46) (actual \n time=5.352..5.352 rows=41 \n loops=1)\"\"                                      \n ->  Merge Join  (cost=9.45..9.55 rows=1 width=46) (actual \n time=4.713..5.182 rows=41 \n loops=1)\"\"                                            \n Merge Cond: (\"\"outer\"\".kod = \n \"\"inner\"\".kalkis_yer_kod)\"\"                                            \n ->  Index Scan using t_yer_pkey on t_yer y1  (cost=0.00..9.62 \n rows=115 width=14) (actual time=0.021..0.176 rows=40 \n loops=1)\"\"                                                  \n Filter: ((iptal)::text = \n 'H'::text)\"\"                                            \n ->  Sort  (cost=9.45..9.45 rows=1 width=40) (actual \n time=4.678..4.747 rows=41 \n loops=1)\"\"                                                  \n Sort Key: \n h.kalkis_yer_kod\"\"                                                  \n ->  Nested Loop  (cost=4.51..9.44 rows=1 width=40) (actual \n time=0.412..4.389 rows=41 \n loops=1)\"\"                                                        \n Join Filter: ((\"\"inner\"\".\"\"no\"\")::text = \n (\"\"outer\"\".hat_no)::text)\"\"                                                        \n ->  Hash Join  (cost=4.51..8.09 rows=1 width=27) (actual \n time=0.386..1.137 rows=41 \n loops=1)\"\"                                                              \n Hash Cond: ((\"\"outer\"\".durumu)::text = \n (\"\"inner\"\".kod)::text)\"\"                                                              \n ->  Hash Join  (cost=2.25..5.80 rows=3 width=32) (actual \n time=0.193..0.751 rows=41 \n loops=1)\"\"                                                                    \n Hash Cond: ((\"\"outer\"\".ek_dev)::text = \n (\"\"inner\"\".kod)::text)\"\"                                                                    \n ->  Seq Scan on t_seferler s  (cost=0.00..3.21 
rows=41 width=37) \n (actual time=0.009..0.258 rows=41 \n loops=1)\"\"                                                                          \n Filter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND \n ((firma_no)::text = \n '1'::text))\"\"                                                                    \n ->  Hash  (cost=2.25..2.25 rows=2 width=5) (actual \n time=0.141..0.141 rows=2 \n loops=1)\"\"                                                                          \n ->  Seq Scan on t_domains d1  (cost=0.00..2.25 rows=2 width=5) \n (actual time=0.048..0.131 rows=2 \n loops=1)\"\"                                                                                \n Filter: ((name)::text = \n 'EKDEV'::text)\"\"                                                              \n ->  Hash  (cost=2.25..2.25 rows=2 width=5) (actual \n time=0.160..0.160 rows=2 \n loops=1)\"\"                                                                    \n ->  Seq Scan on t_domains d2  (cost=0.00..2.25 rows=2 width=5) \n (actual time=0.056..0.139 rows=2 \n loops=1)\"\"                                                                          \n Filter: ((name)::text = \n 'SFR_DURUMU'::text)\"\"                                                        \n ->  Seq Scan on t_hatlar h  (cost=0.00..1.23 rows=10 width=18) \n (actual time=0.004..0.045 rows=10 \n loops=41)\"\"                                                              \n Filter: ('1'::text = \n (firma_no)::text)\"\"                          \n ->  Seq Scan on t_sefer_tip t  (cost=0.00..1.03 rows=1 width=9) \n (actual time=0.004..0.009 rows=1 \n loops=41)\"\"                                \n Filter: (((iptal)::text = 'H'::text) AND ('1'::text = \n (firma_no)::text))\"\"                    \n ->  Index Scan using t_lokal_plan_sefer_liste_idx on t_lokal_plan \n lp  (cost=0.00..5.97 rows=1 width=22) (actual time=0.071..0.072 rows=0 \n loops=41)\"\"                          \n Index Cond: (((lp.firma_no)::text = '1'::text) AND ((\"\"outer\"\".hat_no)::text = \n (lp.hat_no)::text) AND (\"\"outer\"\".kod = lp.sefer_kod) AND (lp.kalkis_tarihi = \n '2006-02-13'::date))\"\"                          \n Filter: (lokal_kod = \n 62)\"\"              \n ->  Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum \n sd  (cost=0.00..5.53 rows=1 width=28) (actual time=0.078..0.078 rows=0 \n loops=1)\"\"                    \n Index Cond: (('1'::text = (sd.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text = \n (sd.hat_no)::text) AND (\"\"outer\"\".kod = sd.sefer_kod) AND \n (\"\"outer\"\".plan_tarihi = sd.plan_tarihi) AND (\"\"outer\"\".yer_kod = \n sd.bin_yer_kod))\"\"                    \n Filter: ((islem)::text = \n 'S'::text)\"\"        ->  Index \n Scan using t_koltuk_islem_kod_ukey on t_koltuk_islem i  (cost=0.00..3.18 \n rows=1 width=57) (never \n executed)\"\"              \n Index Cond: (i.kod = \n \"\"outer\"\".islem_kod)\"\"              \n Filter: ((firma_no)::text = '1'::text)\"\"Total runtime: 11.856 \nms\"\n \nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\nAnkara / TURKEY", "msg_date": "Mon, 13 Feb 2006 22:23:52 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" }, { "msg_contents": "\"Adnan DURSUN\" <[email protected]> writes:\n>>>> EXPLAIN ANALYZE EXECUTE stmt (...);\n\n> Here is the EXPLAIN ANALYZE output for prepared statement :\n\nThis is exactly the same as the other plan --- you did not parameterize\nthe query. 
To see what's going on, you need to insert PREPARE\nparameters in the places where the function uses plpgsql variables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Feb 2006 19:57:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Performance " }, { "msg_contents": "On Mon, Feb 13, 2006 at 07:57:07PM -0500, Tom Lane wrote:\n> \"Adnan DURSUN\" <[email protected]> writes:\n> >>>> EXPLAIN ANALYZE EXECUTE stmt (...);\n> \n> > Here is the EXPLAIN ANALYZE output for prepared statement :\n> \n> This is exactly the same as the other plan --- you did not parameterize\n> the query. To see what's going on, you need to insert PREPARE\n> parameters in the places where the function uses plpgsql variables.\n\nActually it was an SQL function, but that also does PREPARE/EXECUTE,\nright?\n\nAdnan, what Tom is saying is that I requested this (simplified):\n\nPREPARE stmt (integer) AS SELECT * FROM foo WHERE id = $1;\nEXPLAIN ANALYZE EXECUTE stmt (12345);\n\nbut instead you appear to have done this:\n\nPREPARE stmt AS SELECT * FROM foo WHERE id = 12345;\nEXPLAIN ANALYZE EXECUTE stmt;\n\nWe can tell because if you had done it the first way (parameterized)\nthen the EXPLAIN ANALYZE output would have shown the parameters as\n$1, $2, $3, etc., which it didn't.\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 13 Feb 2006 18:31:54 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": ">From: Mark Liberman\n>Date: 02/13/06 22:09:48\n>To: Adnan DURSUN; [email protected]\n>Subject: RE: [PERFORM] SQL Function Performance\n\n>I've run into this issue. It basically comes down to the plan that is being used inside the function is not the same as the plan used when you issue the query manually >outside of the function. Although I'm no expert on when plans are prepared and re-evaluated for functions, I know that they are not re-evaluated each time to execute the >function.\n\n\n in my case; both direct query and sql function gererate same execution plan. Also, execution plan belongs to the sql function better than direct sql query plan. But, direct sql result comes less than 1 second. sql function result comes about in 50 seconds.\n\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\n\n\n\n\n\n\n\n>From: Mark \nLiberman\n\n>Date: 02/13/06 \n22:09:48\n>To: Adnan DURSUN; [email protected]\n>Subject: RE: [PERFORM] \nSQL Function Performance\n \n>I've run into this issue. It basically comes down to the \nplan that is being used inside the function is not the same as the plan used \nwhen you issue the query manually >outside of the function.  Although \nI'm no expert on when plans are prepared and re-evaluated for functions, I know \nthat they are not re-evaluated each time to execute the \n>function.\n in my case; both direct query and sql function gererate \nsame execution plan. Also, execution plan belongs to the sql function better \nthan direct sql query plan. But, direct sql result comes less than 1 second. sql \nfunction result comes about in 50 seconds.\n\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti", "msg_date": "Mon, 13 Feb 2006 23:58:59 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "> in my case; both direct query and sql function gererate same execution plan. Also, execution plan belongs to the sql function better than direct sql > query plan. But, direct sql result comes less than 1 second. sql function result comes about in 50 seconds.\n\nHow are you getting at the plan inside your function? If you just do an EXPLAIN on the function call you get a FUNCTION SCAN line in your plan, which tells you nothing. I remember I had to work through some process for catching the output of the Explain plan in a cursor and returning that to actually see the plan. I saw in a previous response he suggested using a PREPARE and EXECUTE against that. I'm not sure that's the same as what's going on in the function (although I could be wrong).\n\nJust humor me and try creating the sql query in the fuction in a text variable and then Executing it. \n\nPrior to that, however, you might try just recreating the function. The plan may be re-evaluated at that point.\n\n- Mark\n\n\n\n\n\n\n\nRE: [PERFORM] SQL Function Performance\n\n\n\n> in my case; both direct query and sql function gererate same execution plan. Also, execution plan belongs to the sql function better than direct sql > query plan. But, direct sql result comes less than 1 second. sql function result comes about in 50 seconds.\n\nHow are you getting at the plan inside your function?  If you just do an EXPLAIN on the function call you get a FUNCTION SCAN line in your plan, which tells you nothing.  I remember I had to work through some process for catching the output of the Explain plan in a cursor and returning that to actually see the plan.  I saw in a previous response he suggested using a PREPARE and EXECUTE against that.  I'm not sure that's the same as what's going on in the function (although I could be wrong).\n\nJust humor me and try creating the sql query in the fuction in a text variable and then Executing it. \n\nPrior to that, however, you might try just recreating the function.  The plan may be re-evaluated at that point.\n\n- Mark", "msg_date": "Mon, 13 Feb 2006 15:45:13 -0800", "msg_from": "\"Mark Liberman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "-------Original Message-------\n\nFrom: Mark Liberman\nDate: 02/14/06 01:46:16\nTo: Adnan DURSUN; [email protected]\nSubject: RE: [PERFORM] SQL Function Performance\n\n>> in my case; both direct query and sql function gererate same execution plan. Also, execution plan belongs to the sql function better than direct sql \n>> query plan. But, direct sql result comes less than 1 second. sql function result comes about in 50 seconds.\n\n>How are you getting at the plan inside your function? If you just do an EXPLAIN on the function call you get a FUNCTION SCAN line in your plan, which tells you >nothing. I remember I had to work through some process for catching the output of the Explain plan in a cursor and returning that to actually see the plan. I saw in a >previous response he suggested using a PREPARE and EXECUTE against that. I'm not sure that's the same as what's going on in the function (although I could be >wrong).\n\n Yes, i have got sql function prepared execution plan using PREPARE and EXECUTE that he suggested to me. \n\n\n>Just humor me and try creating the sql query in the fuction in a text variable and then Executing it. \n\n But i believe that, that behavior of PostgreSQL is not good. It should handle this case. PostgreSQL has this \"sql function\" functionality and it should\n give good serve...Of course, i will do your suggesion if i dont solve it.\n\n>Prior to that, however, you might try just recreating the function. The plan may be re-evaluated at that point.\n Ok. i did it many times. But nothing was changed..\n- Mark\n\n\n\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\n\n\n\n\n\n\n\n\n\n\n\n-------Original Message-------\n \n\nFrom: Mark Liberman\nDate: 02/14/06 01:46:16\nTo: Adnan DURSUN; [email protected]\nSubject: RE: [PERFORM] SQL \nFunction Performance\n \n>> in my case; both direct query and sql function gererate \nsame execution plan. Also, execution plan belongs to the sql function better \nthan direct sql >> query plan. But, direct sql result comes less than \n1 second. sql function result comes about in 50 seconds.>How are you \ngetting at the plan inside your function?  If you just do an EXPLAIN on the \nfunction call you get a FUNCTION SCAN line in your plan, which tells you \n>nothing.  I remember I had to work through some process for catching \nthe output of the Explain plan in a cursor and returning that to actually see \nthe plan.  I saw in a >previous response he suggested using a PREPARE \nand EXECUTE against that.  I'm not sure that's the same as what's going on \nin the function (although I could be >wrong).   Yes, i have \ngot sql function prepared execution plan using PREPARE and EXECUTE that he \nsuggested to me. \n>Just humor me and try creating the sql query in the fuction in a text \nvariable and then Executing it.    But i believe that, \nthat behavior of PostgreSQL is not good. It should handle this case. PostgreSQL \nhas this \"sql function\" functionality and it \nshould   give good serve...Of course, i will do your \nsuggesion if i dont solve it.>Prior to that, however, you might try \njust recreating the function.  The plan may be re-evaluated at that \npoint.    Ok. i did it many times. But nothing was \nchanged..- Mark\n \nAdnan DURSUN\nASRIN Bilişim Ltd.Şti", "msg_date": "Tue, 14 Feb 2006 02:16:49 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "\n> Indexing the t_name.name field, I can increase speed, but only if I\n> restrict my search to something like :\n> \n> select *\n> from t_name\n> where t_name.name like 'my_search%'\n> \n> (In this case it takes generally less than 1 second)\n> \n> \n> My question : Are there algorithms or tools that can speed up such a\n> type of queries (\"like\" condition begining with a \"%\" symbol) ?\n\nApart from indexing the field you could use full text indexing. See \nhttp://techdocs.postgresql.org/techdocs/fulltextindexing.php\n\n\nWhat other types of queries are you running that you want to speed up ?\n", "msg_date": "Tue, 14 Feb 2006 13:32:32 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing performance of a like '%...%' condition" } ]
[ { "msg_contents": "From: Michael Fuhr\nDate: 02/14/06 03:32:28\nTo: Tom Lane\nCc: Adnan DURSUN; [email protected]\nSubject: Re: [PERFORM] SQL Function Performance\n\nOn Mon, Feb 13, 2006 at 07:57:07PM -0500, Tom Lane wrote:\n>> \"Adnan DURSUN\" <[email protected]> writes:\n>> >>>> EXPLAIN ANALYZE EXECUTE stmt (...);\n>>\n>> > Here is the EXPLAIN ANALYZE output for prepared statement :\n>>\n>> This is exactly the same as the other plan --- you did not parameterize\n>> the query. To see what's going on, you need to insert PREPARE\n>> parameters in the places where the function uses plpgsql variables.\n\n>Actually it was an SQL function, but that also does PREPARE/EXECUTE,\n>right?\n\n>Adnan, what Tom is saying is that I requested this (simplified):\n\n>PREPARE stmt (integer) AS SELECT * FROM foo WHERE id = $1;\n>EXPLAIN ANALYZE EXECUTE stmt (12345);\n\nOk. I am sending right execution plan. I made mistake apologize me..\n\n QUERY PLAN\n\"HashAggregate (cost=276.73..276.76 rows=1 width=58) (actual time=192648.385..192648.385 rows=0 loops=1)\"\n\" -> Nested Loop (cost=5.90..276.71 rows=1 width=58) (actual time=192648.377..192648.377 rows=0 loops=1)\"\n\" Join Filter: (((\"\"inner\"\".hat_no)::text = (\"\"outer\"\".hat_no)::text) AND (\"\"inner\"\".sefer_kod = \"\"outer\"\".sefer_kod) AND (\"\"inner\"\".plan_tarihi = \"\"outer\"\".plan_tarihi) AND (\"\"inner\"\".bin_yer_kod = \"\"outer\"\".bin_yer_kod) AND (\"\"inner\"\".koltuk_no = \"\"outer\"\".koltuk_no) AND (((\"\"inner\"\".islem_tarihi = $2) AND (($5)::text = 'I'::text)) OR ((\"\"outer\"\".kalkis_tarihi = $2) AND (($5)::text = 'K'::text))) AND (((\"\"outer\"\".lokal_kod = $3) AND (($4)::text = 'K'::text)) OR ((\"\"inner\"\".ypt_lcl_kod = $3) AND (($4)::text = 'I'::text))))\"\n\" -> Nested Loop (cost=5.90..267.19 rows=3 width=101) (actual time=76.240..30974.777 rows=63193 loops=1)\"\n\" -> Nested Loop (cost=5.90..123.48 rows=26 width=73) (actual time=32.082..4357.786 rows=14296 loops=1)\"\n\" -> Nested Loop (cost=3.62..15.29 rows=1 width=48) (actual time=1.279..46.882 rows=41 loops=1)\"\n\" Join Filter: ((\"\"inner\"\".kod)::text = (\"\"outer\"\".durumu)::text)\"\n\" -> Nested Loop (cost=3.62..13.01 rows=1 width=53) (actual time=1.209..40.010 rows=41 loops=1)\"\n\" -> Nested Loop (cost=3.62..8.49 rows=1 width=47) (actual time=1.150..38.928 rows=41 loops=1)\"\n\" Join Filter: ((\"\"inner\"\".\"\"no\"\")::text = (\"\"outer\"\".hat_no)::text)\"\n\" -> Nested Loop (cost=2.25..6.79 rows=1 width=28) (actual time=0.710..24.708 rows=41 loops=1)\"\n\" Join Filter: (\"\"inner\"\".sefer_tip_kod = \"\"outer\"\".kod)\"\n\" -> Seq Scan on t_sefer_tip t (cost=0.00..1.03 rows=1 width=9) (actual time=0.117..0.126 rows=1 loops=1)\"\n\" Filter: (((iptal)::text = 'H'::text) AND (($1)::text = (firma_no)::text))\"\n\" -> Hash Join (cost=2.25..5.74 rows=2 width=32) (actual time=0.567..24.349 rows=41 loops=1)\"\n\" Hash Cond: ((\"\"outer\"\".ek_dev)::text = (\"\"inner\"\".kod)::text)\"\n\" -> Seq Scan on t_seferler s (cost=0.00..3.21 rows=34 width=37) (actual time=0.077..23.466 rows=41 loops=1)\"\n\" Filter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND ((firma_no)::text = ($1)::text))\"\n\" -> Hash (cost=2.25..2.25 rows=2 width=5) (actual time=0.451..0.451 rows=2 loops=1)\"\n\" -> Seq Scan on t_domains d1 (cost=0.00..2.25 rows=2 width=5) (actual time=0.346..0.429 rows=2 loops=1)\"\n\" Filter: ((name)::text = 'EKDEV'::text)\"\n\" -> Merge Join (cost=1.37..1.59 rows=9 width=24) (actual time=0.032..0.313 rows=10 loops=41)\"\n\" Merge Cond: 
(\"\"outer\"\".kod = \"\"inner\"\".kalkis_yer_kod)\"\n\" -> Index Scan using t_yer_pkey on t_yer y1 (cost=0.00..9.62 rows=115 width=14) (actual time=0.013..0.164 rows=40 loops=41)\"\n\" Filter: ((iptal)::text = 'H'::text)\"\n\" -> Sort (cost=1.37..1.39 rows=9 width=18) (actual time=0.007..0.025 rows=10 loops=41)\"\n\" Sort Key: h.kalkis_yer_kod\"\n\" -> Seq Scan on t_hatlar h (cost=0.00..1.23 rows=9 width=18) (actual time=0.078..0.125 rows=10 loops=1)\"\n\" Filter: (($1)::text = (firma_no)::text)\"\n\" -> Index Scan using t_yer_pkey on t_yer y2 (cost=0.00..4.51 rows=1 width=14) (actual time=0.011..0.015 rows=1 loops=41)\"\n\" Index Cond: (\"\"outer\"\".varis_yer_kod = y2.kod)\"\n\" Filter: ((iptal)::text = 'H'::text)\"\n\" -> Seq Scan on t_domains d2 (cost=0.00..2.25 rows=2 width=5) (actual time=0.054..0.140 rows=2 loops=41)\"\n\" Filter: ((name)::text = 'SFR_DURUMU'::text)\"\n\" -> Bitmap Heap Scan on t_lokal_plan lp (cost=2.28..107.70 rows=33 width=30) (actual time=9.709..103.130 rows=349 loops=41)\"\n\" Recheck Cond: (((lp.firma_no)::text = ($1)::text) AND ((\"\"outer\"\".hat_no)::text = (lp.hat_no)::text) AND (\"\"outer\"\".kod = lp.sefer_kod))\"\n\" -> Bitmap Index Scan on t_lokal_plan_pkey (cost=0.00..2.28 rows=33 width=0) (actual time=8.340..8.340 rows=349 loops=41)\"\n\" Index Cond: (((lp.firma_no)::text = ($1)::text) AND ((\"\"outer\"\".hat_no)::text = (lp.hat_no)::text) AND (\"\"outer\"\".kod = lp.sefer_kod))\"\n\" -> Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum sd (cost=0.00..5.51 rows=1 width=28) (actual time=0.467..1.829 rows=4 loops=14296)\"\n\" Index Cond: ((($1)::text = (sd.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text = (sd.hat_no)::text) AND (\"\"outer\"\".kod = sd.sefer_kod) AND (\"\"outer\"\".plan_tarihi = sd.plan_tarihi) AND (\"\"outer\"\".yer_kod = sd.bin_yer_kod))\"\n\" Filter: ((islem)::text = 'S'::text)\"\n\" -> Index Scan using t_koltuk_islem_kod_ukey on t_koltuk_islem i (cost=0.00..3.13 rows=1 width=65) (actual time=2.534..2.538 rows=1 loops=63193)\"\n\" Index Cond: (i.kod = \"\"outer\"\".islem_kod)\"\n\" Filter: ((firma_no)::text = ($1)::text)\"\n\"Total runtime: 192649.904 ms\"\n\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\n\n\n\n\n\n\n\n\n \n\nFrom: Michael Fuhr\nDate: 02/14/06 03:32:28\nTo: Tom Lane\nCc: Adnan DURSUN; [email protected]\nSubject: Re: [PERFORM] SQL \nFunction Performance\n \nOn Mon, Feb 13, 2006 at 07:57:07PM -0500, Tom Lane wrote:\n>> \"Adnan DURSUN\" <[email protected]> writes:\n>> >>>> EXPLAIN ANALYZE EXECUTE stmt (...);\n>>\n>> >    Here is the EXPLAIN ANALYZE output for \nprepared statement :\n>>\n>> This is exactly the same as the other plan --- you did not \nparameterize\n>> the query.  To see what's going on, you need to insert \nPREPARE\n>> parameters in the places where the function uses plpgsql \nvariables.\n \n>Actually it was an SQL function, but that also does \nPREPARE/EXECUTE,\n>right?\n \n>Adnan, what Tom is saying is that I requested this (simplified):\n \n>PREPARE stmt (integer) AS SELECT * FROM foo WHERE id = $1;\n>EXPLAIN ANALYZE EXECUTE stmt (12345);\n \nOk. I am sending right execution plan.  
I made mistake apologize \nme..\n \n\n   QUERY PLAN\"HashAggregate  (cost=276.73..276.76 rows=1 \nwidth=58) (actual time=192648.385..192648.385 rows=0 loops=1)\"\"  \n->  Nested Loop  (cost=5.90..276.71 rows=1 width=58) (actual \ntime=192648.377..192648.377 rows=0 \nloops=1)\"\"        Join Filter: \n(((\"\"inner\"\".hat_no)::text = (\"\"outer\"\".hat_no)::text) AND (\"\"inner\"\".sefer_kod \n= \"\"outer\"\".sefer_kod) AND (\"\"inner\"\".plan_tarihi = \"\"outer\"\".plan_tarihi) AND \n(\"\"inner\"\".bin_yer_kod = \"\"outer\"\".bin_yer_kod) AND (\"\"inner\"\".koltuk_no = \n\"\"outer\"\".koltuk_no) AND (((\"\"inner\"\".islem_tarihi = $2) AND (($5)::text = \n'I'::text)) OR ((\"\"outer\"\".kalkis_tarihi = $2) AND (($5)::text = 'K'::text))) \nAND (((\"\"outer\"\".lokal_kod = $3) AND (($4)::text = 'K'::text)) OR \n((\"\"inner\"\".ypt_lcl_kod = $3) AND (($4)::text = \n'I'::text))))\"\"        ->  Nested \nLoop  (cost=5.90..267.19 rows=3 width=101) (actual time=76.240..30974.777 \nrows=63193 \nloops=1)\"\"              \n->  Nested Loop  (cost=5.90..123.48 rows=26 width=73) (actual \ntime=32.082..4357.786 rows=14296 \nloops=1)\"\"                    \n->  Nested Loop  (cost=3.62..15.29 rows=1 width=48) (actual \ntime=1.279..46.882 rows=41 \nloops=1)\"\"                          \nJoin Filter: ((\"\"inner\"\".kod)::text = \n(\"\"outer\"\".durumu)::text)\"\"                          \n->  Nested Loop  (cost=3.62..13.01 rows=1 width=53) (actual \ntime=1.209..40.010 rows=41 \nloops=1)\"\"                                \n->  Nested Loop  (cost=3.62..8.49 rows=1 width=47) (actual \ntime=1.150..38.928 rows=41 \nloops=1)\"\"                                      \nJoin Filter: ((\"\"inner\"\".\"\"no\"\")::text = \n(\"\"outer\"\".hat_no)::text)\"\"                                      \n->  Nested Loop  (cost=2.25..6.79 rows=1 width=28) (actual \ntime=0.710..24.708 rows=41 \nloops=1)\"\"                                            \nJoin Filter: (\"\"inner\"\".sefer_tip_kod = \n\"\"outer\"\".kod)\"\"                                            \n->  Seq Scan on t_sefer_tip t  (cost=0.00..1.03 rows=1 width=9) \n(actual time=0.117..0.126 rows=1 \nloops=1)\"\"                                                  \nFilter: (((iptal)::text = 'H'::text) AND (($1)::text = \n(firma_no)::text))\"\"                                            \n->  Hash Join  (cost=2.25..5.74 rows=2 width=32) (actual \ntime=0.567..24.349 rows=41 \nloops=1)\"\"                                                  \nHash Cond: ((\"\"outer\"\".ek_dev)::text = \n(\"\"inner\"\".kod)::text)\"\"                                                  \n->  Seq Scan on t_seferler s  (cost=0.00..3.21 rows=34 width=37) \n(actual time=0.077..23.466 rows=41 \nloops=1)\"\"                                                        \nFilter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND \n((firma_no)::text = \n($1)::text))\"\"                                                  \n->  Hash  (cost=2.25..2.25 rows=2 width=5) (actual \ntime=0.451..0.451 rows=2 \nloops=1)\"\"                                                        \n->  Seq Scan on t_domains d1  (cost=0.00..2.25 rows=2 width=5) \n(actual time=0.346..0.429 rows=2 \nloops=1)\"\"                                                              \nFilter: ((name)::text = \n'EKDEV'::text)\"\"                                      \n->  Merge Join  (cost=1.37..1.59 rows=9 width=24) (actual \ntime=0.032..0.313 rows=10 \nloops=41)\"\"                                            \nMerge Cond: 
(\"\"outer\"\".kod = \n\"\"inner\"\".kalkis_yer_kod)\"\"                                            \n->  Index Scan using t_yer_pkey on t_yer y1  (cost=0.00..9.62 \nrows=115 width=14) (actual time=0.013..0.164 rows=40 \nloops=41)\"\"                                                  \nFilter: ((iptal)::text = \n'H'::text)\"\"                                            \n->  Sort  (cost=1.37..1.39 rows=9 width=18) (actual \ntime=0.007..0.025 rows=10 \nloops=41)\"\"                                                  \nSort Key: \nh.kalkis_yer_kod\"\"                                                  \n->  Seq Scan on t_hatlar h  (cost=0.00..1.23 rows=9 width=18) \n(actual time=0.078..0.125 rows=10 \nloops=1)\"\"                                                        \nFilter: (($1)::text = \n(firma_no)::text)\"\"                                \n->  Index Scan using t_yer_pkey on t_yer y2  (cost=0.00..4.51 \nrows=1 width=14) (actual time=0.011..0.015 rows=1 \nloops=41)\"\"                                      \nIndex Cond: (\"\"outer\"\".varis_yer_kod = \ny2.kod)\"\"                                      \nFilter: ((iptal)::text = \n'H'::text)\"\"                          \n->  Seq Scan on t_domains d2  (cost=0.00..2.25 rows=2 width=5) \n(actual time=0.054..0.140 rows=2 \nloops=41)\"\"                                \nFilter: ((name)::text = \n'SFR_DURUMU'::text)\"\"                    \n->  Bitmap Heap Scan on t_lokal_plan lp  (cost=2.28..107.70 rows=33 \nwidth=30) (actual time=9.709..103.130 rows=349 \nloops=41)\"\"                          \nRecheck Cond: (((lp.firma_no)::text = ($1)::text) AND ((\"\"outer\"\".hat_no)::text \n= (lp.hat_no)::text) AND (\"\"outer\"\".kod = \nlp.sefer_kod))\"\"                          \n->  Bitmap Index Scan on t_lokal_plan_pkey  (cost=0.00..2.28 \nrows=33 width=0) (actual time=8.340..8.340 rows=349 \nloops=41)\"\"                                \nIndex Cond: (((lp.firma_no)::text = ($1)::text) AND ((\"\"outer\"\".hat_no)::text = \n(lp.hat_no)::text) AND (\"\"outer\"\".kod = \nlp.sefer_kod))\"\"              \n->  Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum \nsd  (cost=0.00..5.51 rows=1 width=28) (actual time=0.467..1.829 rows=4 \nloops=14296)\"\"                    \nIndex Cond: ((($1)::text = (sd.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text = \n(sd.hat_no)::text) AND (\"\"outer\"\".kod = sd.sefer_kod) AND (\"\"outer\"\".plan_tarihi \n= sd.plan_tarihi) AND (\"\"outer\"\".yer_kod = \nsd.bin_yer_kod))\"\"                    \nFilter: ((islem)::text = \n'S'::text)\"\"        ->  Index \nScan using t_koltuk_islem_kod_ukey on t_koltuk_islem i  (cost=0.00..3.13 \nrows=1 width=65) (actual time=2.534..2.538 rows=1 \nloops=63193)\"\"              \nIndex Cond: (i.kod = \n\"\"outer\"\".islem_kod)\"\"              \nFilter: ((firma_no)::text = ($1)::text)\"\"Total runtime: 192649.904 \nms\"\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti", "msg_date": "Tue, 14 Feb 2006 11:33:57 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" }, { "msg_contents": "On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:\n> -> Nested Loop (cost=5.90..267.19 rows=3 width=101) (actual time=76.240..30974.777 rows=63193 loops=1)\n> -> Nested Loop (cost=5.90..123.48 rows=26 width=73) (actual time=32.082..4357.786 rows=14296 loops=1)\n\nA prepared query is planned before the parameters' values are known,\nso the planner can't take full advantage of column statistics to\nestimate row counts. 
The planner must therefore decide on a plan\nthat should be reasonable in most cases; apparently this isn't one\nof those cases, as the disparity between estimated and actual rows\nshows. Maybe Tom (one of the core developers) can comment on whether\nanything can be done to improve the plan in this case.\n\nAbsent a better solution, you could write a PL/pgSQL function and\nbuild the query as a text string, then EXECUTE it. That would give\nyou a new plan each time, one that can take better advantage of\nstatistics, at the cost of having to plan the query each time you\ncall the function (but you probably don't care about that cost\nas long as the overall results are better). Here's an example:\n\nCREATE FUNCTION fooquery(qval text) RETURNS SETOF foo AS $$\nDECLARE\n row foo%ROWTYPE;\n query text;\nBEGIN\n query := 'SELECT * FROM foo WHERE val = ' || quote_literal(qval);\n\n FOR row IN EXECUTE query LOOP\n RETURN NEXT row;\n END LOOP;\n\n RETURN;\nEND;\n$$ LANGUAGE plpgsql STABLE STRICT;\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 14 Feb 2006 14:04:05 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Function Performance" } ]
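
Calling such a function is then just (sticking with the hypothetical foo table from the example above):

SELECT * FROM fooquery('some value');
EXPLAIN ANALYZE SELECT * FROM fooquery('some value');

EXPLAIN ANALYZE still reports only a single Function Scan line, as noted earlier in the thread, but its runtime now reflects a query that was planned with the real literal, so it should track the direct query much more closely.
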
[ { "msg_contents": "\nhi,\n\ni load data from files using copy method.\nFiles contain between 2 and 7 millions of rows, spread on 5 tables.\n\nFor loading all the data, it takes 40mn, and the same processing takes 17mn with Oracle.\nI think that this time can be improved by changing postgresql configuration file.\nBut which parameters i need to manipulate and with which values ?\n\nHere are the specifications of my system :\nV250 architecture sun4u\n2xCPU UltraSparc IIIi 1.28 GHz.\n8 Go RAM.\n\nRegards.\n\n\tWill\n\n\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Tue, 14 Feb 2006 10:44:59 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "copy and postgresql.conf" }, { "msg_contents": "Hi William,\n\twhich PostgreSQL version are you using? Newer (8.0+) versions have some \nimportant performance improvements for the COPY command.\n\n\tAlso, you'll notice significant improvements by creating primary & foreign \nkeys after the copy command. I think config tweaking can improve key and \nindex creation but I don't think you can improve the COPY command itself.\n\n\tThere are also many threads in this list commenting on this issue, you'll \nfind it easely in the archives.\n\nA Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n> hi,\n>\n> i load data from files using copy method.\n> Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>\n> For loading all the data, it takes 40mn, and the same processing takes 17mn\n> with Oracle. I think that this time can be improved by changing postgresql\n> configuration file. But which parameters i need to manipulate and with\n> which values ?\n>\n> Here are the specifications of my system :\n> V250 architecture sun4u\n> 2xCPU UltraSparc IIIi 1.28 GHz.\n> 8 Go RAM.\n>\n> Regards.\n>\n> \tWill\n>\n>\n> This e-mail is intended only for the above addressee. It may contain\n> privileged information. If you are not the addressee you must not copy,\n> distribute, disclose or use any of the information in it. If you have\n> received it in error please delete it and immediately notify the sender.\n> Security Notice: all e-mail, sent to or from this address, may be\n> accessed by someone other than the recipient, for system management and\n> security reasons. This access is controlled under Regulation of\n> Investigatory Powers Act 2000, Lawful Business Practises.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ 
AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Tue, 14 Feb 2006 12:38:24 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf" }, { "msg_contents": "Hi, Ferreira,\n\nFERREIRA, William (VALTECH) wrote:\n\n> i load data from files using copy method.\n> Files contain between 2 and 7 millions of rows, spread on 5 tables.\n> \n> For loading all the data, it takes 40mn, and the same processing takes 17mn with Oracle.\n> I think that this time can be improved by changing postgresql configuration file.\n> But which parameters i need to manipulate and with which values ?\n\nIncrease the size of the wal.\n\nIf its just a develpoment environment, or you don't mind data\ninconsistency in case of a crash, disable fsync.\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 14 Feb 2006 17:29:32 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf" } ]
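
For reference, the postgresql.conf knobs usually touched for this sort of bulk load on an 8.0 server look roughly like the snippet below; the values are only illustrative, and 8.0 expects plain numbers (maintenance_work_mem is in kilobytes, wal_buffers in 8kB pages). As Markus says, fsync = off is only defensible when losing the cluster on a crash is acceptable.

# postgresql.conf, illustrative values for a dedicated load window
checkpoint_segments  = 64       # default 3; fewer, larger checkpoints while loading
wal_buffers          = 64       # default 8
maintenance_work_mem = 524288   # 512MB, speeds up index/key creation afterwards
#fsync = off                    # only if the data can simply be reloaded after a crash
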
[ { "msg_contents": "\nthanks,\n\ni'm using postgresql 8.0.3\nthere is no primary key and no index on my tables\n\nregards\n\n-----Message d'origine-----\nDe : [email protected]\n[mailto:[email protected]]De la part de Albert\nCervera Areny\nEnvoyé : mardi 14 février 2006 12:38\nÀ : [email protected]\nObjet : Re: [PERFORM] copy and postgresql.conf\n\n\n\nHi William,\n\twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n\nimportant performance improvements for the COPY command.\n\n\tAlso, you'll notice significant improvements by creating primary & foreign\n\nkeys after the copy command. I think config tweaking can improve key and\n\nindex creation but I don't think you can improve the COPY command itself.\n\n\tThere are also many threads in this list commenting on this issue, you'll\n\nfind it easely in the archives.\n\nA Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n> hi,\n>\n> i load data from files using copy method.\n> Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>\n> For loading all the data, it takes 40mn, and the same processing takes 17mn\n> with Oracle. I think that this time can be improved by changing postgresql\n> configuration file. But which parameters i need to manipulate and with\n> which values ?\n>\n> Here are the specifications of my system :\n> V250 architecture sun4u\n> 2xCPU UltraSparc IIIi 1.28 GHz.\n> 8 Go RAM.\n>\n> Regards.\n>\n> \tWill\n>\n>\n> This e-mail is intended only for the above addressee. It may contain\n> privileged information. If you are not the addressee you must not copy,\n> distribute, disclose or use any of the information in it. If you have\n> received it in error please delete it and immediately notify the sender.\n> Security Notice: all e-mail, sent to or from this address, may be\n> accessed by someone other than the recipient, for system management and\n> security reasons. This access is controlled under Regulation of\n> Investigatory Powers Act 2000, Lawful Business Practises.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n--\n\nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. 
You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\nThis mail has originated outside your organization,\neither from an external partner or the Global Internet.\nKeep this in mind if you answer this message.\n\n\r\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Tue, 14 Feb 2006 14:26:57 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: copy and postgresql.conf" }, { "msg_contents": "Sorry, COPY improvements came with 8.1 \n(http://www.postgresql.org/docs/whatsnew)\n\nA Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> thanks,\n>\n> i'm using postgresql 8.0.3\n> there is no primary key and no index on my tables\n>\n> regards\n>\n> -----Message d'origine-----\n> De : [email protected]\n> [mailto:[email protected]]De la part de Albert\n> Cervera Areny\n> Envoyé : mardi 14 février 2006 12:38\n> À : [email protected]\n> Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n> Hi William,\n> \twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>\n> important performance improvements for the COPY command.\n>\n> \tAlso, you'll notice significant improvements by creating primary & foreign\n>\n> keys after the copy command. I think config tweaking can improve key and\n>\n> index creation but I don't think you can improve the COPY command itself.\n>\n> \tThere are also many threads in this list commenting on this issue, you'll\n>\n> find it easely in the archives.\n>\n> A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n> > hi,\n> >\n> > i load data from files using copy method.\n> > Files contain between 2 and 7 millions of rows, spread on 5 tables.\n> >\n> > For loading all the data, it takes 40mn, and the same processing takes\n> > 17mn with Oracle. I think that this time can be improved by changing\n> > postgresql configuration file. But which parameters i need to manipulate\n> > and with which values ?\n> >\n> > Here are the specifications of my system :\n> > V250 architecture sun4u\n> > 2xCPU UltraSparc IIIi 1.28 GHz.\n> > 8 Go RAM.\n> >\n> > Regards.\n> >\n> > \tWill\n> >\n> >\n> > This e-mail is intended only for the above addressee. It may contain\n> > privileged information. If you are not the addressee you must not copy,\n> > distribute, disclose or use any of the information in it. 
If you have\n> > received it in error please delete it and immediately notify the sender.\n> > Security Notice: all e-mail, sent to or from this address, may be\n> > accessed by someone other than the recipient, for system management and\n> > security reasons. This access is controlled under Regulation of\n> > Investigatory Powers Act 2000, Lawful Business Practises.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> --\n>\n> Albert Cervera Areny\n> Dept. Informàtica Sedifa, S.L.\n>\n> Av. Can Bordoll, 149\n> 08202 - Sabadell (Barcelona)\n> Tel. 93 715 51 11\n> Fax. 93 715 51 12\n>\n> ====================================================================\n> ........................ AVISO LEGAL ............................\n> La presente comunicación y sus anexos tiene como destinatario la\n> persona a la que va dirigida, por lo que si usted lo recibe\n> por error debe notificarlo al remitente y eliminarlo de su\n> sistema, no pudiendo utilizarlo, total o parcialmente, para\n> ningún fin. Su contenido puede tener información confidencial o\n> protegida legalmente y únicamente expresa la opinión del\n> remitente. El uso del correo electrónico vía Internet no\n> permite asegurar ni la confidencialidad de los mensajes\n> ni su correcta recepción. En el caso de que el\n> destinatario no consintiera la utilización del correo electrónico,\n> deberá ponerlo en nuestro conocimiento inmediatamente.\n> ====================================================================\n> ........................... DISCLAIMER .............................\n> This message and its attachments are intended exclusively for the\n> named addressee. If you receive this message in error, please\n> immediately delete it from your system and notify the sender. You\n> may not use this message or any part of it for any purpose.\n> The message may contain information that is confidential or\n> protected by law, and any opinions expressed are those of the\n> individual sender. Internet e-mail guarantees neither the\n> confidentiality nor the proper receipt of the message sent.\n> If the addressee of this message does not consent to the use\n> of internet e-mail, please inform us inmmediately.\n> ====================================================================\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n>\n> This mail has originated outside your organization,\n> either from an external partner or the Global Internet.\n> Keep this in mind if you answer this message.\n>\n>\n> This e-mail is intended only for the above addressee. It may contain\n> privileged information. If you are not the addressee you must not copy,\n> distribute, disclose or use any of the information in it. If you have\n> received it in error please delete it and immediately notify the sender.\n> Security Notice: all e-mail, sent to or from this address, may be\n> accessed by someone other than the recipient, for system management and\n> security reasons. This access is controlled under Regulation of\n> Investigatory Powers Act 2000, Lawful Business Practises.\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ 
AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Tue, 14 Feb 2006 17:06:59 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf" } ]
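
A sketch of that load pattern with made-up table and file names; the table starts bare, the data goes in with a server-side COPY, and the keys and indexes are built once over the complete data set afterwards:

BEGIN;
CREATE TABLE load_target (
    id       integer,
    ref_id   integer,
    payload  text
);                                           -- no keys or indexes yet

COPY load_target FROM '/path/to/datafile';   -- tab-delimited text, readable by the server

ALTER TABLE load_target ADD PRIMARY KEY (id);
ALTER TABLE load_target
    ADD FOREIGN KEY (ref_id) REFERENCES parent_table (id);
CREATE INDEX load_target_ref_id_idx ON load_target (ref_id);
COMMIT;

ANALYZE load_target;

The transaction wrapper just keeps a failed load from leaving a half-filled table behind; parent_table stands in for whatever the foreign key actually references.
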
[ { "msg_contents": "Hello,\n\nI've error \"out of memory\" with these traces :\n\n\nTopMemoryContext: 32768 total in 3 blocks; 5152 free (1 chunks); 27616 used\nTopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used\nDeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nMessageContext: 24576 total in 2 blocks; 2688 free (14 chunks); 21888 used\nPortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\nPortalHeapMemory: 8192 total in 1 blocks; 3936 free (0 chunks); 4256 used\nPortalHeapMemory: 23552 total in 5 blocks; 1160 free (4 chunks); 22392 used\nExecutorState: 8192 total in 1 blocks; 3280 free (4 chunks); 4912 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExecutorState: 24576 total in 2 blocks; 11264 free (14 chunks); 13312 used\nExprContext: 8192 total in 1 blocks; 8128 free (0 chunks); 64 used\nAggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);\n-1976598976 used\nDynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);\n496634624 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nCacheMemoryContext: 516096 total in 6 blocks; 126648 free (2 chunks); 389448\nused\ntest_query: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\ntest_date: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nquery_string_query_string_key: 1024 total in 1 blocks; 640 free (0 chunks); 384\nused\nquery_string_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks);\n704 used\npg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks);\n704 used\npg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384\nused\npg_conversion_default_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344\nused\npg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 
used\npg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0 chunks);\n1344 used\npg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280\nused\npg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0 chunks);\n1344 used\npg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704\nused\npg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks);\n704 used\npg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nMdSmgr: 8192 total in 1 blocks; 5712 free (0 chunks); 2480 used\nDynaHash: 8192 total in 1 blocks; 6912 free (0 chunks); 1280 used\nDynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used\nDynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used\nDynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used\nDynaHashTable: 8192 total in 1 blocks; 3016 free (0 chunks); 5176 used\nDynaHashTable: 8192 total in 1 blocks; 4040 free (0 chunks); 4152 used\nDynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nErrorContext: 8192 total in 1 blocks; 8176 free (6 chunks); 16 used\n2006-02-14 16:06:14 [25816] ERROR: out of memory\nDETAIL: Failed on request of size 88.\nERROR: out of memory\nDETAIL: Failed on request of size 88.\n\n\nAnybody could help me ??\n\nThanks a lot\n\nMB\n\n", "msg_date": "Tue, 14 Feb 2006 16:24:11 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "out of memory" }, { "msg_contents": "[email protected] writes:\n> I've error \"out of memory\" with these traces :\n\nDoing what?\n\n> AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);\n> -1976598976 used\n> DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);\n> 496634624 used\n\nI'd guess that a HashAgg operation ran out of memory ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 10:32:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory " }, { "msg_contents": "Thanks for your response,\n\nI've made this request :\n\nSELECT query_string, DAY.ocu from search_data.query_string,\n (SELECT SUM(occurence) as ocu, query\nFROM daily.queries_detail_statistics\n WHERE date >= '2006-01-01' AND date <= '2006-01-30'\n AND portal IN (1,2)\n GROUP BY query\n ORDER BY ocu DESC\n LIMIT 1000) as DAY\n WHERE DAY.query=id;\n\nand after few minutes, i've error \"out of memory\" with this execution plan :\nNested Loop (cost=8415928.63..8418967.13 rows=1001 width=34)\n -> Subquery Scan \"day\" (cost=8415928.63..8415941.13 
rows=1000 width=16)\n -> Limit (cost=8415928.63..8415931.13 rows=1000 width=12)\n -> Sort (cost=8415928.63..8415932.58 rows=1582 width=12)\n Sort Key: sum(occurence)\n -> HashAggregate (cost=8415840.61..8415844.56 rows=1582\nwidth=12)\n -> Seq Scan on queries_detail_statistics \n(cost=0.00..8414056.00 rows=356922 width=12)\n Filter: ((date >= '2006-01-01'::date) AND (date\n<= '2006-01-30'::date) AND (((portal)::text = '1'::text) OR ((portal)::text =\n'2'::text)))\n -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\nrows=1 width=34)\n Index Cond: (\"outer\".query = query_string.id)\n(10 rows)\n\nif HashAgg operation ran out of memory, what can i do ?\n\nthanks a lot\nmartial\n\n> [email protected] writes:\n> > I've error \"out of memory\" with these traces :\n>\n> Doing what?\n>\n> > AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);\n> > -1976598976 used\n> > DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);\n> > 496634624 used\n>\n> I'd guess that a HashAgg operation ran out of memory ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Tue, 14 Feb 2006 17:03:38 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: out of memory " }, { "msg_contents": "On Tue, 2006-02-14 at 10:03, [email protected] wrote:\n> Thanks for your response,\n\nSNIP\n\n> if HashAgg operation ran out of memory, what can i do ?\n\n1: Don't top post.\n\n2: Have you run analyze? Normally when hash agg runs out of memory, the\nplanner THOUGHT the hash agg would fit in memory, but it was larger than\nexpected. This is commonly a problem when you haven't run analyze.\n", "msg_date": "Tue, 14 Feb 2006 10:06:41 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "Yes, I've launched ANALYZE command before sending request.\nI precise that's postgres version is 7.3.4\n\n> On Tue, 2006-02-14 at 10:03, [email protected] wrote:\n> > Thanks for your response,\n>\n> SNIP\n>\n> > if HashAgg operation ran out of memory, what can i do ?\n>\n> 1: Don't top post.\n>\n> 2: Have you run analyze? Normally when hash agg runs out of memory, the\n> planner THOUGHT the hash agg would fit in memory, but it was larger than\n> expected. This is commonly a problem when you haven't run analyze.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Tue, 14 Feb 2006 17:15:20 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: out of memory" }, { "msg_contents": "On Tue, 2006-02-14 at 10:15, [email protected] wrote:\n> Yes, I've launched ANALYZE command before sending request.\n> I precise that's postgres version is 7.3.4\n\nSo what does explain analyze show for this query, if anything? Can you\nincrease your sort_mem or shared_buffers (I forget which hash_agg uses\noff the top of my head...) if necessary to make it work. 
Note you can\nincrease sort_mem on the fly for a given connection.\n", "msg_date": "Tue, 14 Feb 2006 10:21:37 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "command explain analyze crash with the \"out of memory\" error\n\nI precise that I've tried a lot of values from parameters shared_buffer and\nsort_mem\n\nnow, in config file, values are :\nsort_mem=32768\nand shared_buffer=30000\n\nserver has 4Go RAM.\nand kernel.shmmax=307200000\n\n\n\n> On Tue, 2006-02-14 at 10:15, [email protected] wrote:\n> > Yes, I've launched ANALYZE command before sending request.\n> > I precise that's postgres version is 7.3.4\n>\n> So what does explain analyze show for this query, if anything? Can you\n> increase your sort_mem or shared_buffers (I forget which hash_agg uses\n> off the top of my head...) if necessary to make it work. Note you can\n> increase sort_mem on the fly for a given connection.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Tue, 14 Feb 2006 17:32:33 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: out of memory" }, { "msg_contents": "On Tue, 2006-02-14 at 10:32, [email protected] wrote:\n> command explain analyze crash with the \"out of memory\" error\n> \n> I precise that I've tried a lot of values from parameters shared_buffer and\n> sort_mem\n> \n> now, in config file, values are :\n> sort_mem=32768\n> and shared_buffer=30000\n\nOK, on the command line, try increasing the sort_mem until hash_agg can\nwork. With a 4 gig machine, you should be able to go as high as needed\nhere, I'd think. Try as high as 500000 or so or more. 
Then when\nexplain analyze works, compare the actual versus estimated number of\nrows.\n", "msg_date": "Tue, 14 Feb 2006 10:50:35 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "[email protected] writes:\n> Yes, I've launched ANALYZE command before sending request.\n> I precise that's postgres version is 7.3.4\n\nCan't possibly be 7.3.4, that version didn't have HashAggregate.\n\nHow many distinct values of \"query\" actually exist in the table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 12:36:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory " }, { "msg_contents": "On Tue, 2006-02-14 at 11:36, Tom Lane wrote:\n> [email protected] writes:\n> > Yes, I've launched ANALYZE command before sending request.\n> > I precise that's postgres version is 7.3.4\n> \n> Can't possibly be 7.3.4, that version didn't have HashAggregate.\n> \n> How many distinct values of \"query\" actually exist in the table?\n\nI thought that looked odd.\n", "msg_date": "Tue, 14 Feb 2006 11:47:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "Good morning,\n\nI've increased sort_mem until 2Go !!\nand the error \"out of memory\" appears again.\n\nHere the request I try to pass with her explain plan,\n\n Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=34)\n -> Subquery Scan \"day\" (cost=2451676.23..2451688.73 rows=1000 width=16)\n -> Limit (cost=2451676.23..2451678.73 rows=1000 width=12)\n -> Sort (cost=2451676.23..2451684.63 rows=3357 width=12)\n Sort Key: sum(occurence)\n -> HashAggregate (cost=2451471.24..2451479.63 rows=3357\nwidth=12)\n -> Index Scan using test_date on\nqueries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12)\n Index Cond: ((date >= '2006-01-01'::date) AND\n(date <= '2006-01-30'::date))\n Filter: (((portal)::text = '1'::text) OR\n((portal)::text = '2'::text))\n -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\nrows=1 width=34)\n Index Cond: (\"outer\".query = query_string.id)\n(11 rows)\n\nAny new ideas ?,\nthanks\n\nMB.\n\n\n\n\n> On Tue, 2006-02-14 at 10:32, [email protected] wrote:\n> > command explain analyze crash with the \"out of memory\" error\n> >\n> > I precise that I've tried a lot of values from parameters shared_buffer and\n> > sort_mem\n> >\n> > now, in config file, values are :\n> > sort_mem=32768\n> > and shared_buffer=30000\n>\n> OK, on the command line, try increasing the sort_mem until hash_agg can\n> work. With a 4 gig machine, you should be able to go as high as needed\n> here, I'd think. Try as high as 500000 or so or more. 
Then when\n> explain analyze works, compare the actual versus estimated number of\n> rows.\n>\n\n\n", "msg_date": "Wed, 15 Feb 2006 09:34:06 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: out of memory" }, { "msg_contents": "You're right, release is 7.4.7.\n\nthere's twenty millions records \"query\"\n\n> On Tue, 2006-02-14 at 11:36, Tom Lane wrote:\n> > [email protected] writes:\n> > > Yes, I've launched ANALYZE command before sending request.\n> > > I precise that's postgres version is 7.3.4\n> >\n> > Can't possibly be 7.3.4, that version didn't have HashAggregate.\n> >\n> > How many distinct values of \"query\" actually exist in the table?\n>\n> I thought that looked odd.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Wed, 15 Feb 2006 17:45:20 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: out of memory" } ]
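The thread never shows the workaround as commands, so here is a minimal sketch of the two per-session knobs discussed above: raising the sort memory, and, when the planner's group estimate is as far off as 3,357 groups against roughly twenty million distinct values, disabling hash aggregation so a sort-based GroupAggregate is used instead. The SET values are illustrative assumptions, not figures recommended in the thread; sort_mem is the 7.4 name for what later releases call work_mem.

SET enable_hashagg = off;        -- avoid the misestimated HashAggregate entirely
SET sort_mem = 262144;           -- illustrative value, in kB
SELECT query, SUM(occurence) AS ocu
  FROM daily.queries_detail_statistics
 WHERE date >= '2006-01-01' AND date <= '2006-01-30'
   AND portal IN (1, 2)
 GROUP BY query
 ORDER BY ocu DESC
 LIMIT 1000;
RESET enable_hashagg;
RESET sort_mem;

With enable_hashagg off the planner falls back to Sort plus GroupAggregate, which spills to disk instead of failing when the hash table does not fit in memory.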
[ { "msg_contents": "I am running some simple queries to benchmark Postgres 8.1 against MS Access\nand Postgres is 2 to 3 times slower that Access. \n\n \n\nHardware:\n\nDell Optiplex GX280\n\nP4 3.20 GHz\n\n3GB RAM\n\nWindows XP SP1\n\n \n\nDatabase has one table with 1.2 million rows\n\n \n\nQuery:\n\nUPDATE ntdn SET gha=area/10000\n\n \n\nI could post the EXPLAIN ANALYZE results but its 4,000+ lines long\n\n \n\nI've run various tests on a number of Postgres parameters; none of which\nhave come close to Access' time of 5.00 min. Postgres times range between\n24 min and 121 min.\n\n \n\nSome of the Postgres variables and ranges I've tested.\n\n \n\nwork_mem: 1,000 to 2,000,000\n\ntemp_buffers: 1,000 to 10,000\n\nshared_buffers: 1,000 to 64,000\n\nsort_mem: 1,024,000\n\nfsync on / off\n\n \n\nWhy does Access run so much faster? How can I get Postgres to run as fast\nas Access?\n\n \n\nThanks, \n\n \n\nJay \n\n \n\n \n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\nI am running some simple queries to benchmark\nPostgres 8.1 against MS Access and Postgres is 2 to 3 times slower that\nAccess.  \n \nHardware:\nDell Optiplex GX280\nP4 3.20 GHz\n3GB RAM\nWindows XP SP1\n \nDatabase has one table with 1.2 million rows\n \nQuery:\nUPDATE ntdn SET gha=area/10000\n \nI could post the EXPLAIN ANALYZE results but its\n4,000+ lines long\n \nI’ve run various tests on a number of Postgres\nparameters; none of which have come close to Access’ time of 5.00 min.  Postgres\ntimes range between 24 min and 121 min.\n \nSome of the Postgres variables and ranges I’ve\ntested.\n \nwork_mem:  1,000 to 2,000,000\ntemp_buffers:  1,000 to 10,000\nshared_buffers:  1,000 to 64,000\nsort_mem:  1,024,000\nfsync on / off\n \nWhy does Access run so much faster?  How can I\nget Postgres to run as fast as Access?\n \nThanks, \n \nJay", "msg_date": "Tue, 14 Feb 2006 07:51:23 -0800", "msg_from": "\"Jay Greenfield\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres slower than MS ACCESS" }, { "msg_contents": "On Tue, 2006-02-14 at 09:51, Jay Greenfield wrote:\n> I am running some simple queries to benchmark Postgres 8.1 against MS\n> Access and Postgres is 2 to 3 times slower that Access. \n\nA BUNCH OF STUFF SNIPPED\n\n> Why does Access run so much faster? How can I get Postgres to run as\n> fast as Access?\n\nBecause Access is not a multi-user database management system designed\nto handle anywhere from a couple to several thousand users at the same\ntime?\n\nPostgreSQL can do this update while still allowing users to access the\ndata in the database, and can handle updates to the same table at the\nsame time, as long as they aren't hitting the same rows.\n\nThey're two entirely different beasts.\n\nOne is good at batch processing moderate amounts of data for one user at\na time. The other is good for real time processing of very large\namounts of data for a fairly large number of users while running at an\nacceptable, if slower speed.\n", "msg_date": "Tue, 14 Feb 2006 10:04:45 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS" }, { "msg_contents": "Is it possible to configure Postgres to behave like Access - a single user\nand use as much of the recourses as required? 
\n\nThanks, \n\nJay.\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Scott Marlowe\nSent: Tuesday, February 14, 2006 8:05 AM\nTo: Jay Greenfield\nCc: [email protected]\nSubject: Re: [PERFORM] Postgres slower than MS ACCESS\n\nOn Tue, 2006-02-14 at 09:51, Jay Greenfield wrote:\n> I am running some simple queries to benchmark Postgres 8.1 against MS\n> Access and Postgres is 2 to 3 times slower that Access. \n\nA BUNCH OF STUFF SNIPPED\n\n> Why does Access run so much faster? How can I get Postgres to run as\n> fast as Access?\n\nBecause Access is not a multi-user database management system designed\nto handle anywhere from a couple to several thousand users at the same\ntime?\n\nPostgreSQL can do this update while still allowing users to access the\ndata in the database, and can handle updates to the same table at the\nsame time, as long as they aren't hitting the same rows.\n\nThey're two entirely different beasts.\n\nOne is good at batch processing moderate amounts of data for one user at\na time. The other is good for real time processing of very large\namounts of data for a fairly large number of users while running at an\nacceptable, if slower speed.\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Tue, 14 Feb 2006 08:17:08 -0800", "msg_from": "\"Jay Greenfield\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres slower than MS ACCESS" }, { "msg_contents": "On Tue, 2006-02-14 at 10:17, Jay Greenfield wrote:\n> Is it possible to configure Postgres to behave like Access - a single user\n> and use as much of the recourses as required? \n\nNo. If you want something akin to that, try SQL Lite. it's not as\nfeatureful as PostgreSQL, but it's closer to it than Access.\n", "msg_date": "Tue, 14 Feb 2006 10:23:09 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS" }, { "msg_contents": "* Jay Greenfield ([email protected]) wrote:\n> Database has one table with 1.2 million rows\n> Query:\n> \n> UPDATE ntdn SET gha=area/10000\n> \n> I could post the EXPLAIN ANALYZE results but its 4,000+ lines long\n\nHow do you get 4,000+ lines of explain analyze for one update query in a\ndatabase with only one table? Something a bit fishy there. Perhaps you \nmean explain verbose, though I don't really see how that'd be so long \neither, but it'd be closer. Could you provide some more sane\ninformation?\n\n> I've run various tests on a number of Postgres parameters; none of which\n> have come close to Access' time of 5.00 min. Postgres times range between\n> 24 min and 121 min.\n> \n> Some of the Postgres variables and ranges I've tested.\n> work_mem: 1,000 to 2,000,000\n> temp_buffers: 1,000 to 10,000\n> shared_buffers: 1,000 to 64,000\n> sort_mem: 1,024,000\n> fsync on / off\n> \n> Why does Access run so much faster? How can I get Postgres to run as fast\n> as Access?\n\nWhile it's true that Access almost certainly takes some shortcuts, 24\nminutes for an update across 1.2 millon rows seems an awefully long time\nfor Postgres. Is this table exceptionally large in same way (ie: lots \nof columns)? I expect running with fsync off would be closer to 'Access\nmode' though it has risks (of course). 
Also, it might be faster to\ninsert into a seperate table rather than run a huge update like that in\nPostgres. Also, if there are indexes on the table in question, you\nmight drop them before doing the update/insert and recreate them after\nthe query has finished.\n\nYou really havn't provided anywhere near enough information to figure\nout what the actual problem is here. Access does take shortcuts but the\ntimes you're posting for Postgres seem quite far off based on the\nhardware and commands you've described...\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 14 Feb 2006 12:20:22 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> While it's true that Access almost certainly takes some shortcuts, 24\n> minutes for an update across 1.2 millon rows seems an awefully long time\n> for Postgres.\n\nI did some experiments along this line with a trivial table (2 integer\ncolumns) of 1.28M rows. I used CVS tip with all parameters at defaults.\nWith no indexes, an UPDATE took about 50 seconds. With one index, it\ntook 628 seconds. It's not hard to believe you could get to Jay's\nfigures with multiple indexes.\n\nLooking in the postmaster log, I see I was getting checkpoints every few\nseconds. Increasing checkpoint_segments to 30 (a factor of 10) brought\nit down to 355 seconds, and then increasing shared_buffers to 20000\nbrought it down to 165 sec. Separating WAL and data onto different\ndisks would have helped too, no doubt, but I'm too lazy to try it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 15:42:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > While it's true that Access almost certainly takes some shortcuts, 24\n> > minutes for an update across 1.2 millon rows seems an awefully long time\n> > for Postgres.\n> \n> I did some experiments along this line with a trivial table (2 integer\n> columns) of 1.28M rows. I used CVS tip with all parameters at defaults.\n> With no indexes, an UPDATE took about 50 seconds. With one index, it\n> took 628 seconds. It's not hard to believe you could get to Jay's\n> figures with multiple indexes.\n\nWith multiple indexes, you might want to drop them and recreate them\nwhen you're updating an entire table.\n\n> Looking in the postmaster log, I see I was getting checkpoints every few\n> seconds. Increasing checkpoint_segments to 30 (a factor of 10) brought\n> it down to 355 seconds, and then increasing shared_buffers to 20000\n> brought it down to 165 sec. Separating WAL and data onto different\n> disks would have helped too, no doubt, but I'm too lazy to try it.\n\nSure, this was kind of my point, we need more information about the\ndatabase if we're going to have much of a chance of improving the\nresults he's seeing. 165 seconds is certainly a great deal better than\n24 minutes. :)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 14 Feb 2006 15:55:10 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS" }, { "msg_contents": "> How do you get 4,000+ lines of explain analyze for one update query in a\n> database with only one table? Something a bit fishy there. 
Perhaps you \n> mean explain verbose, though I don't really see how that'd be so long \n> either, but it'd be closer. Could you provide some more sane\n> information?\n\nMy mistake - there was 4,000 lines in the EXPLAIN ANALYZE VERBOSE output.\nHere is the output of EXPLAIN ANALYZE:\n\nQUERY PLAN\n\"Seq Scan on ntdn (cost=0.00..3471884.39 rows=1221391 width=1592) (actual\ntime=57292.580..1531300.003 rows=1221391 loops=1)\"\n\"Total runtime: 4472646.988 ms\"\n\n\n> Is this table exceptionally large in same way (ie: lots \n> of columns)?\n\nThe table is 1.2 million rows X 246 columns. The only index is the primary\nkey. I will try to remove that index to see if that improves performance at\nall.\n\nJay\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 14, 2006 12:43 PM\nTo: Stephen Frost\nCc: Jay Greenfield; [email protected]\nSubject: Re: [PERFORM] Postgres slower than MS ACCESS \n\nStephen Frost <[email protected]> writes:\n> While it's true that Access almost certainly takes some shortcuts, 24\n> minutes for an update across 1.2 millon rows seems an awefully long time\n> for Postgres.\n\nI did some experiments along this line with a trivial table (2 integer\ncolumns) of 1.28M rows. I used CVS tip with all parameters at defaults.\nWith no indexes, an UPDATE took about 50 seconds. With one index, it\ntook 628 seconds. It's not hard to believe you could get to Jay's\nfigures with multiple indexes.\n\nLooking in the postmaster log, I see I was getting checkpoints every few\nseconds. Increasing checkpoint_segments to 30 (a factor of 10) brought\nit down to 355 seconds, and then increasing shared_buffers to 20000\nbrought it down to 165 sec. Separating WAL and data onto different\ndisks would have helped too, no doubt, but I'm too lazy to try it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 14 Feb 2006 12:56:18 -0800", "msg_from": "\"Jay Greenfield\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "\"Jay Greenfield\" <[email protected]> writes:\n> The table is 1.2 million rows X 246 columns. The only index is the primary\n> key. I will try to remove that index to see if that improves performance at\n> all.\n\nHmm, the large number of columns might have something to do with it ...\nwhat datatypes are the columns?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 16:02:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "> Hmm, the large number of columns might have something to do with it ...\n> what datatypes are the columns?\n\nAll sorts, but mostly float4 and varchar(2 to 10)\n\nJay\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 14, 2006 1:03 PM\nTo: Jay Greenfield\nCc: 'Stephen Frost'; [email protected]\nSubject: Re: [PERFORM] Postgres slower than MS ACCESS \n\n\"Jay Greenfield\" <[email protected]> writes:\n> The table is 1.2 million rows X 246 columns. The only index is the\nprimary\n> key. 
I will try to remove that index to see if that improves performance\nat\n> all.\n\nHmm, the large number of columns might have something to do with it ...\nwhat datatypes are the columns?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 14 Feb 2006 13:25:32 -0800", "msg_from": "\"Jay Greenfield\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "\nOn Feb 14, 2006, at 3:56 PM, Jay Greenfield wrote:\n\n>> How do you get 4,000+ lines of explain analyze for one update \n>> query in a\n>> database with only one table? Something a bit fishy there. \n>> Perhaps you\n>> mean explain verbose, though I don't really see how that'd be so long\n>> either, but it'd be closer. Could you provide some more sane\n>> information?\n>\n> My mistake - there was 4,000 lines in the EXPLAIN ANALYZE VERBOSE \n> output.\n> Here is the output of EXPLAIN ANALYZE:\n>\n> QUERY PLAN\n> \"Seq Scan on ntdn (cost=0.00..3471884.39 rows=1221391 width=1592) \n> (actual\n> time=57292.580..1531300.003 rows=1221391 loops=1)\"\n> \"Total runtime: 4472646.988 ms\"\n>\n\nHave you been vacuuming or running autovacuum?\nIf you keep running queries like this you're certianly going to have \na ton of dead tuples, which would def explain these times too.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Wed, 15 Feb 2006 09:23:03 -0500", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "\nI've been vacuuming between each test run. \n\nNot vacuuming results in times all the way up to 121 minutes. For a direct\ncomparison with Access, the vacuuming time with Postgres should really be\nincluded as this is not required with Access.\n\nBy removing all of the indexes I have been able to get the Postgres time\ndown to 4.35 minutes with default setting for all except the following:\nfsync: off\nwork_mem: 1024000\nshared_buffers: 10000\n\nI did a run with checkpoint_segments @ 30 (from 3 in 4.35 min run) and\nposted a time of 6.78 minutes. Any idea why this would increase the time?\n\nThanks, \n\nJay.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jeff Trout\nSent: Wednesday, February 15, 2006 6:23 AM\nTo: Jay Greenfield\nCc: 'Tom Lane'; 'Stephen Frost'; [email protected]\nSubject: Re: [PERFORM] Postgres slower than MS ACCESS \n\n\nOn Feb 14, 2006, at 3:56 PM, Jay Greenfield wrote:\n\n>> How do you get 4,000+ lines of explain analyze for one update \n>> query in a\n>> database with only one table? Something a bit fishy there. \n>> Perhaps you\n>> mean explain verbose, though I don't really see how that'd be so long\n>> either, but it'd be closer. 
Could you provide some more sane\n>> information?\n>\n> My mistake - there was 4,000 lines in the EXPLAIN ANALYZE VERBOSE \n> output.\n> Here is the output of EXPLAIN ANALYZE:\n>\n> QUERY PLAN\n> \"Seq Scan on ntdn (cost=0.00..3471884.39 rows=1221391 width=1592) \n> (actual\n> time=57292.580..1531300.003 rows=1221391 loops=1)\"\n> \"Total runtime: 4472646.988 ms\"\n>\n\nHave you been vacuuming or running autovacuum?\nIf you keep running queries like this you're certianly going to have \na ton of dead tuples, which would def explain these times too.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 15 Feb 2006 13:29:51 -0800", "msg_from": "\"Jay Greenfield\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "\"Jay Greenfield\" <[email protected]> writes:\n> I did a run with checkpoint_segments @ 30 (from 3 in 4.35 min run) and\n> posted a time of 6.78 minutes. Any idea why this would increase the time?\n\nThe first time through might take longer while the machine creates empty\nxlog segment files (though I'd not have expected a hit that big). Once\nit's fully populated pg_xlog it'll just recycle the files, so you might\nfind that a second try is faster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 16:36:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS " }, { "msg_contents": "On 15/02/06, Jay Greenfield <[email protected]> wrote:\n>\n>\n> I've been vacuuming between each test run.\n>\n> Not vacuuming results in times all the way up to 121 minutes. For a\n> direct\n> comparison with Access, the vacuuming time with Postgres should really be\n> included as this is not required with Access.\n\n\n\nHmm but then you would have to include Access Vacuum too I'll think you will\nfind \"Tools -> Database Utils -> Compact Database\" preforms a simular\npurpose and is just as important as I've seen many Access Databases bloat in\nmy time.\n\nPeter Childs\n\nOn 15/02/06, Jay Greenfield <[email protected]> wrote:\nI've been vacuuming between each test run.Not vacuuming results in times all the way up to 121 minutes.  For a directcomparison with Access, the vacuuming time with Postgres should really beincluded as this is not required with Access.\n\n\nHmm but then you would have to include Access Vacuum too I'll think you\nwill find \"Tools -> Database Utils -> Compact Database\" preforms\na simular purpose and is just as important as I've seen many Access\nDatabases bloat in my time.\n\nPeter Childs", "msg_date": "Thu, 16 Feb 2006 13:32:34 +0000", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres slower than MS ACCESS" } ]
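For reference, the batch pattern this thread converges on (drop the index, run the full-table UPDATE, rebuild the index, then vacuum) looks roughly like the sketch below. The index name and definition are invented for illustration; Jay's table only had a primary key, which would need ALTER TABLE ... DROP CONSTRAINT rather than DROP INDEX.

DROP INDEX ntdn_gha_idx;                 -- hypothetical secondary index
UPDATE ntdn SET gha = area / 10000;
CREATE INDEX ntdn_gha_idx ON ntdn (gha);
VACUUM ANALYZE ntdn;                     -- reclaim the dead row versions the UPDATE leaves behind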
[ { "msg_contents": " \nThis is the second time I'm getting out of memory error when I start a\ndatabase vacuum or try to vacuum any table. Note this machine has been\nused for data load batch purposes. \n\n=# vacuum analyze code;\nERROR: out of memory\nDETAIL: Failed on request of size 1073741820.\n\nI'm running Postgres 8.1.1 on RedHat 2.6 kernel (HP server). \nMy maintenance work area never been changed. It's set to 1GB.\n(maintenance_work_mem = 1048576). Physical memory: 32 GB. \n\nBouncing the database does not help. \n\nTwo workarounds I have used so far:\n\n 1) Decreasing the maintenance_work_mem to 512MB, vacuum analyze would\nwork just fine.\n\nOr \n\n 2) Bouncing the server (maintaining the original 1GB\nmaintenance_work_mem) would also work.\n\nI have not had that error on the production instances (which are\nidentical copies of the loading instance) - only the loading instance..\n\nAny explanation as to why and how to avoid that ? Thanks\n\n\n----\n \n Husam \n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n", "msg_date": "Tue, 14 Feb 2006 08:45:08 -0800", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "0ut of Memory Error during Vacuum Analyze" }, { "msg_contents": "\"Tomeh, Husam\" <[email protected]> writes:\n> =# vacuum analyze code;\n> ERROR: out of memory\n> DETAIL: Failed on request of size 1073741820.\n\nThat looks a whole lot like a corrupt-data issue. The apparent\ndependency on maintenance_work_mem is probably illusory --- I suspect\nsome of your trials are selecting the corrupted row to use in the\nANALYZE stats, and others are randomly selecting other rows.\n\nIf you are able to pg_dump the table in question then this theory is\nwrong, but I'd suggest trying that first.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 12:50:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze " } ]
[ { "msg_contents": "\n30% faster !!! i will test this new version ...\n\nthanks a lot\n\n-----Message d'origine-----\nDe : [email protected]\n[mailto:[email protected]]De la part de Albert\nCervera Areny\nEnvoyé : mardi 14 février 2006 17:07\nÀ : [email protected]\nObjet : Re: [PERFORM] copy and postgresql.conf\n\n\n\nSorry, COPY improvements came with 8.1\n\n(http://www.postgresql.org/docs/whatsnew)\n\nA Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> thanks,\n>\n> i'm using postgresql 8.0.3\n> there is no primary key and no index on my tables\n>\n> regards\n>\n> -----Message d'origine-----\n> De : [email protected]\n> [mailto:[email protected]]De la part de Albert\n> Cervera Areny\n> Envoyé : mardi 14 février 2006 12:38\n> À : [email protected]\n> Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n> Hi William,\n> \twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>\n> important performance improvements for the COPY command.\n>\n> \tAlso, you'll notice significant improvements by creating primary & foreign\n>\n> keys after the copy command. I think config tweaking can improve key and\n>\n> index creation but I don't think you can improve the COPY command itself.\n>\n> \tThere are also many threads in this list commenting on this issue, you'll\n>\n> find it easely in the archives.\n>\n> A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n> > hi,\n> >\n> > i load data from files using copy method.\n> > Files contain between 2 and 7 millions of rows, spread on 5 tables.\n> >\n> > For loading all the data, it takes 40mn, and the same processing takes\n> > 17mn with Oracle. I think that this time can be improved by changing\n> > postgresql configuration file. But which parameters i need to manipulate\n> > and with which values ?\n> >\n> > Here are the specifications of my system :\n> > V250 architecture sun4u\n> > 2xCPU UltraSparc IIIi 1.28 GHz.\n> > 8 Go RAM.\n> >\n> > Regards.\n> >\n> > \tWill\n> >\n> >\n> > This e-mail is intended only for the above addressee. It may contain\n> > privileged information. If you are not the addressee you must not copy,\n> > distribute, disclose or use any of the information in it. If you have\n> > received it in error please delete it and immediately notify the sender.\n> > Security Notice: all e-mail, sent to or from this address, may be\n> > accessed by someone other than the recipient, for system management and\n> > security reasons. This access is controlled under Regulation of\n> > Investigatory Powers Act 2000, Lawful Business Practises.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> --\n>\n> Albert Cervera Areny\n> Dept. Informàtica Sedifa, S.L.\n>\n> Av. Can Bordoll, 149\n> 08202 - Sabadell (Barcelona)\n> Tel. 93 715 51 11\n> Fax. 93 715 51 12\n>\n> ====================================================================\n> ........................ AVISO LEGAL ............................\n> La presente comunicación y sus anexos tiene como destinatario la\n> persona a la que va dirigida, por lo que si usted lo recibe\n> por error debe notificarlo al remitente y eliminarlo de su\n> sistema, no pudiendo utilizarlo, total o parcialmente, para\n> ningún fin. Su contenido puede tener información confidencial o\n> protegida legalmente y únicamente expresa la opinión del\n> remitente. 
Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\nThis mail has originated outside your organization,\neither from an external partner or the Global Internet.\nKeep this in mind if you answer this message.\n\n\r\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Tue, 14 Feb 2006 18:05:09 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: copy and postgresql.conf" }, { "msg_contents": "What version of Solaris are you using?\n\nDo you have the recommendations while using COPY on Solaris?\nhttp://blogs.sun.com/roller/page/jkshah?entry=postgresql_on_solaris_better_use\n\nwal_sync_method = fsync\nwal_buffers = 128\ncheckpoint_segments = 128\nbgwriter_percent = 0\nbgwriter_maxpages = 0\n\n\nAnd also for /etc/system on Solaris 10, 9 SPARC use the following\n\nset maxphys=1048576\nset md:md_maxphys=1048576\nset segmap_percent=50\nset ufs:freebehind=0\nset msgsys:msginfo_msgmni = 3584\nset semsys:seminfo_semmni = 4096\nset shmsys:shminfo_shmmax = 15392386252\nset shmsys:shminfo_shmmni = 4096\n\n\nCan you try putting in one run with this values and send back your \nexperiences on whether it helps your workload or not?\n\nAtleast I saw improvements using the above settings with COPY with \nPostgres 8.0 and Postgres 8.1 on Solaris.\n\nRegards,\nJignesh\n\n\n\n\nFERREIRA, William (VALTECH) wrote:\n\n>30% faster !!! i will test this new version ...\n>\n>thanks a lot\n>\n>-----Message d'origine-----\n>De : [email protected]\n>[mailto:[email protected]]De la part de Albert\n>Cervera Areny\n>Envoy� : mardi 14 f�vrier 2006 17:07\n>� : [email protected]\n>Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n>Sorry, COPY improvements came with 8.1\n>\n>(http://www.postgresql.org/docs/whatsnew)\n>\n>A Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> \n>\n>>thanks,\n>>\n>>i'm using postgresql 8.0.3\n>>there is no primary key and no index on my tables\n>>\n>>regards\n>>\n>>-----Message d'origine-----\n>>De : [email protected]\n>>[mailto:[email protected]]De la part de Albert\n>>Cervera Areny\n>>Envoy� : mardi 14 f�vrier 2006 12:38\n>>� : [email protected]\n>>Objet : Re: [PERFORM] copy and postgresql.conf\n>>\n>>\n>>\n>>Hi William,\n>>\twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>>\n>>important performance improvements for the COPY command.\n>>\n>>\tAlso, you'll notice significant improvements by creating primary & foreign\n>>\n>>keys after the copy command. 
I think config tweaking can improve key and\n>>\n>>index creation but I don't think you can improve the COPY command itself.\n>>\n>>\tThere are also many threads in this list commenting on this issue, you'll\n>>\n>>find it easely in the archives.\n>>\n>>A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n>> \n>>\n>>>hi,\n>>>\n>>>i load data from files using copy method.\n>>>Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>>>\n>>>For loading all the data, it takes 40mn, and the same processing takes\n>>>17mn with Oracle. I think that this time can be improved by changing\n>>>postgresql configuration file. But which parameters i need to manipulate\n>>>and with which values ?\n>>>\n>>>Here are the specifications of my system :\n>>>V250 architecture sun4u\n>>>2xCPU UltraSparc IIIi 1.28 GHz.\n>>>8 Go RAM.\n>>>\n>>>Regards.\n>>>\n>>>\tWill\n>>>\n>>>\n>>>This e-mail is intended only for the above addressee. It may contain\n>>>privileged information. If you are not the addressee you must not copy,\n>>>distribute, disclose or use any of the information in it. If you have\n>>>received it in error please delete it and immediately notify the sender.\n>>>Security Notice: all e-mail, sent to or from this address, may be\n>>>accessed by someone other than the recipient, for system management and\n>>>security reasons. This access is controlled under Regulation of\n>>>Investigatory Powers Act 2000, Lawful Business Practises.\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>> \n>>>\n>>--\n>>\n>>Albert Cervera Areny\n>>Dept. Inform�tica Sedifa, S.L.\n>>\n>>Av. Can Bordoll, 149\n>>08202 - Sabadell (Barcelona)\n>>Tel. 93 715 51 11\n>>Fax. 93 715 51 12\n>>\n>>====================================================================\n>>........................ AVISO LEGAL ............................\n>>La presente comunicaci�n y sus anexos tiene como destinatario la\n>>persona a la que va dirigida, por lo que si usted lo recibe\n>>por error debe notificarlo al remitente y eliminarlo de su\n>>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>>ning�n fin. Su contenido puede tener informaci�n confidencial o\n>>protegida legalmente y �nicamente expresa la opini�n del\n>>remitente. El uso del correo electr�nico v�a Internet no\n>>permite asegurar ni la confidencialidad de los mensajes\n>>ni su correcta recepci�n. En el caso de que el\n>>destinatario no consintiera la utilizaci�n del correo electr�nico,\n>>deber� ponerlo en nuestro conocimiento inmediatamente.\n>>====================================================================\n>>........................... DISCLAIMER .............................\n>>This message and its attachments are intended exclusively for the\n>>named addressee. If you receive this message in error, please\n>>immediately delete it from your system and notify the sender. You\n>>may not use this message or any part of it for any purpose.\n>>The message may contain information that is confidential or\n>>protected by law, and any opinions expressed are those of the\n>>individual sender. 
\n", "msg_date": "Tue, 14 Feb 2006 16:47:20 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf" } ]
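The SQL-side half of the advice in this thread, load with COPY first and add keys and indexes afterwards, is sketched below with invented table and file names; the Solaris /etc/system and postgresql.conf settings quoted above belong in the configuration files and are not repeated here.

CREATE TABLE load_target (id integer, payload text);              -- hypothetical table
COPY load_target FROM '/data/load_target.dat' WITH DELIMITER '|'; -- hypothetical server-side file
ALTER TABLE load_target ADD PRIMARY KEY (id);                     -- constraints only after the load
CREATE INDEX load_target_payload_idx ON load_target (payload);
ANALYZE load_target;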
[ { "msg_contents": "Hi Guys.\n\n\nWe are running v8.1.2 on FreeBSD 5.4-RELEASE and the server is running\nwith above averege load.\n\nWhen i do top i see alot of postmaster processes in \"sbwait\" state:\n\n# uptime\n4:29PM up 23 days, 20:01, 3 users, load averages: 3.73, 1.97, 1.71\n\n# top\n82808 pgsql 1 4 0 15580K 12008K sbwait 0 107:06 7.52%\npostgres\n82804 pgsql 1 4 0 15612K 12028K sbwait 0 106:13 7.08%\npostgres\n82806 pgsql 1 4 0 15576K 12008K sbwait 0 106:07 6.84%\npostgres\n82793 pgsql 1 4 0 15576K 12008K sbwait 0 106:05 6.54%\npostgres\n82801 pgsql 1 4 0 15612K 12032K sbwait 0 106:13 5.57%\npostgres\n82800 pgsql 1 4 0 15580K 12012K sbwait 0 105:45 4.88%\npostgres\n 6613 pgsql 1 4 0 15612K 12020K sbwait 0 28:47 4.59%\npostgres\n82798 pgsql 1 4 0 15612K 12036K sbwait 0 106:10 4.49%\npostgres\n82799 pgsql 1 4 0 15612K 12036K sbwait 0 106:27 4.39%\npostgres\n82797 pgsql 1 4 0 15612K 12036K sbwait 1 106:23 4.25%\npostgres\n82748 pgsql 1 4 0 15564K 11864K sbwait 0 48:12 3.08%\npostgres\n82747 pgsql 1 4 0 15560K 11848K sbwait 0 47:58 3.08%\npostgres\n82749 pgsql 1 4 0 15564K 11868K sbwait 0 48:27 1.95%\npostgres\n82751 pgsql 1 4 0 15564K 11864K sbwait 0 48:14 1.66%\npostgres\n82739 pgsql 1 4 0 15564K 11868K sbwait 1 48:38 1.37%\npostgres\n82750 pgsql 1 4 0 15564K 11864K sbwait 0 48:07 1.27%\npostgres\n\n\nThe server is not very busy, but it has more or less as many writes as\nreads.\n\nI have not seen more then 10-15 simultaneous queries.\n\n\nAny idea why idle postmaster consume 3-5% CPU ?\n\nThis is a FreeBSD 5.4-RELEASE server with 2x3G Xeon CPUs, 2G memory,\nRAID1 mirrored U320 drives.\n\n\nThanx\nPaul", "msg_date": "Tue, 14 Feb 2006 16:36:54 -0500", "msg_from": "\"Paul Khavkine\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.2.1 on FreeBSD 5.4-RELEASE" } ]
[ { "msg_contents": " \nI have run pg_dump and had no errors. I also got this error when\ncreating one index but not another. When I lowered my\nmaintenance_work_mem, the create index succeeded. \n\nRegards,\n\n----\n \n Husam \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 14, 2006 9:51 AM\nTo: Tomeh, Husam\nCc: [email protected]\nSubject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze\n\n\"Tomeh, Husam\" <[email protected]> writes:\n> =# vacuum analyze code;\n> ERROR: out of memory\n> DETAIL: Failed on request of size 1073741820.\n\nThat looks a whole lot like a corrupt-data issue. The apparent\ndependency on maintenance_work_mem is probably illusory --- I suspect\nsome of your trials are selecting the corrupted row to use in the\nANALYZE stats, and others are randomly selecting other rows.\n\nIf you are able to pg_dump the table in question then this theory is\nwrong, but I'd suggest trying that first.\n\n\t\t\tregards, tom lane\n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n", "msg_date": "Tue, 14 Feb 2006 14:01:29 -0800", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and" }, { "msg_contents": "\"Tomeh, Husam\" <[email protected]> writes:\n> I have run pg_dump and had no errors. I also got this error when\n> creating one index but not another. When I lowered my\n> maintenance_work_mem, the create index succeeded. \n\nCreate index too? Hm. That begins to sound more like a memory leak.\nDo you have any custom data types or anything like that in this\ntable? Can you put together a self-contained test case using dummy\ndata?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 17:15:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and Create Index " } ]
[ { "msg_contents": "-------Original Message-------\n\nFrom: Michael Fuhr\nDate: 02/14/06 23:05:55\nTo: Adnan DURSUN\nCc: [email protected]\nSubject: Re: [PERFORM] SQL Function Performance\n\n>On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:\n>> -> Nested Loop (cost=5.90..267.19 rows=3 width=101) (actual time=76.240..30974.777 rows=63193 loops=1)\n>> -> Nested Loop (cost=5.90..123.48 rows=26 width=73) (actual time=32.082..4357.786 rows=14296 loops=1)\n\n>Absent a better solution, you could write a PL/pgSQL function and\n>build the query as a text string, then EXECUTE it. That would give\n>you a new plan each time, one that can take better advantage of\n>statistics, at the cost of having to plan the query each time you\n>call the function (but you probably don't care about that cost\n>as long as the overall results are better). Here's an example:\n\n Yes, i did it. i wrote a PL/pgSQL function. Now results come at 100 ms.. :-)\nI dont like that method but i have to do it for perfomance....\n\nMany thanks to everyone who helps...\n\nAdnan DURSUN\nASRIN Bilisim Ltd.\nAnkara /TURKEY\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n\n\n\n\n\n \n \n-------Original Message-------\n \n\nFrom: Michael Fuhr\nDate: 02/14/06 23:05:55\nTo: Adnan DURSUN\nCc: [email protected]\nSubject: Re: [PERFORM] SQL \nFunction Performance\n \n>On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:\n>>         \n->  Nested Loop  (cost=5.90..267.19 rows=3 width=101) \n(actual time=76.240..30974.777 rows=63193 loops=1)\n>>               \n->  Nested Loop  (cost=5.90..123.48 rows=26 width=73) \n(actual time=32.082..4357.786 rows=14296 loops=1)\n \n>Absent a better solution, you could write a PL/pgSQL function and\n>build the query as a text string, then EXECUTE it.  That \nwould give\n>you a new plan each time, one that can take better advantage of\n>statistics, at the cost of having to plan the query each time you\n>call the function (but you probably don't care about that cost\n>as long as the overall results are better).  Here's an \nexample:\n \n    Yes, i did it. i wrote a PL/pgSQL function. Now results \ncome at 100 ms.. :-)\nI dont like that method but i have to do it for perfomance....\n \nMany thanks to everyone who helps...\n \n\nAdnan DURSUN\nASRIN Bilisim Ltd.\nAnkara /TURKEY\n---------------------------(end of \nbroadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster", "msg_date": "Wed, 15 Feb 2006 00:35:59 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Function Performance" } ]
[ { "msg_contents": "\nNo special data types. The table is pretty large one with over 15GB. The\nindex is about 1.5 GB. Here's the table structure :\n\n Column | Type | Modifiers\n-----------------+-----------------------+-----------\n county_id | numeric(5,0) | not null\n batch_dt | numeric(8,0) | not null\n batch_seq | numeric(5,0) | not null\n mtg_seq_nbr | numeric(1,0) | not null\n mtg_rec_dt | numeric(8,0) |\n mtg_doc_nbr | character varying(12) |\n mtg_rec_bk | character varying(6) |\n mtg_rec_pg | character varying(6) |\n mtg_amt | numeric(11,0) |\n lndr_cd | character varying(10) |\n lndr_nm | character varying(30) |\n mtg_assm_ind | character(1) |\n mtg_typ | character varying(5) |\n adj_rate_ind | character(1) |\n mtg_term_nbr | numeric(5,0) |\n mtg_term_cd | character varying(4) |\n mtg_due_dt | numeric(8,0) |\n mtg_deed_typ | character varying(6) |\n reverse_mtg_ind | character(1) |\n refi_ind | character(1) |\n conform_ind | character(1) |\n cnstr_ln_ind | character(1) |\n title_co_cd | character varying(5) |\n state_id | numeric(5,0) |\n msa | numeric(4,0) |\nIndexes:\n \"uq_mortgage\" UNIQUE, btree (county_id, batch_dt, batch_seq,\nmtg_seq_nbr)\n \"mortgage_idxc_county_id_mtg_rec_dt\" btree (county_id, mtg_rec_dt)\n \"mortgage_idxc_state_id_mtg_rec_dt\" btree (state_id, mtg_rec_dt)\n\n---------\n\n Here's the test I did with maintenance_work_mem = 1GB:\n\n\nmtrac=# show maintenance_work_mem ;\n maintenance_work_mem\n----------------------\n 1048576 <======\n(1 row)\n\nmtrac=#\nmtrac=#\nmtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);\nERROR: out of memory <===\nDETAIL: Failed on request of size 134217728. <===\n\n............ Then I changed the parameter to 512 MB: \n\n\nmtrac=# show maintenance_work_mem ;\n maintenance_work_mem\n----------------------\n 524288 <===\n(1 row)\n\nmtrac=# create index mort_ht_512 on mortgage(county_id,mtg_rec_dt);\nCREATE INDEX\n-----------------------------------------------\n\n\n\nRegards,\n----\n \n Husam \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 14, 2006 2:16 PM\nTo: Tomeh, Husam\nCc: [email protected]\nSubject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and\nCreate Index\n\n\"Tomeh, Husam\" <[email protected]> writes:\n> I have run pg_dump and had no errors. I also got this error when\n> creating one index but not another. When I lowered my\n> maintenance_work_mem, the create index succeeded. \n\nCreate index too? Hm. That begins to sound more like a memory leak.\nDo you have any custom data types or anything like that in this\ntable? Can you put together a self-contained test case using dummy\ndata?\n\n\t\t\tregards, tom lane\n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. 
If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n", "msg_date": "Tue, 14 Feb 2006 15:40:17 -0800", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and" }, { "msg_contents": "\"Tomeh, Husam\" <[email protected]> writes:\n> mtrac=# show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 1048576 <======\n> (1 row)\n\n> mtrac=#\n> mtrac=#\n> mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);\n> ERROR: out of memory <===\n> DETAIL: Failed on request of size 134217728. <===\n\nIt would be useful to look at the detailed allocation info that this\n(should have) put into the postmaster log. Also, if you could get\na stack trace back from the error, that would be even more useful.\nTo do that,\n\t* start psql\n\t* determine PID of connected backend (use pg_backend_pid())\n\t* in another window, as postgres user,\n\t\tgdb /path/to/postgres backend-PID\n\t\tgdb> break errfinish\n\t\tgdb> cont\n\t* issue failing command in psql\n\t* when breakpoint is reached,\n\t\tgdb> bt\n\t\t... stack trace printed here ...\n\t\tgdb> q\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Feb 2006 18:49:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and Create Index " } ]
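Two practical details for anyone following Tom's gdb recipe: the backend PID can be read with pg_backend_pid() in the very session that will run the failing command, and the failure can be reproduced with a session-local SET so the server default stays untouched. A sketch using the table from this thread; the postgres binary path is a placeholder for wherever your installation lives:

    -- in the psql session that will issue the failing command
    SELECT pg_backend_pid();                      -- note the PID it returns

    SET maintenance_work_mem = 1048576;           -- 1 GB in kB, the failing setting above
    -- attach gdb from another shell before running the next statement, e.g.:
    --   gdb /usr/local/pgsql/bin/postgres <PID>
    --   (gdb) break errfinish
    --   (gdb) cont
    CREATE INDEX mort_ht ON mortgage (county_id, mtg_rec_dt);
    -- when the breakpoint fires, "bt" in gdb prints the stack trace requested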
[ { "msg_contents": "Hi,\n\nI'm using Postgres 7.4. I have a web application built with php4 using\npostgres7.4\n\nI was going through /var/log/messages of my linux box ( SLES 9). I\nencountered the following messages quite a few times.\n\n>postgres[20199]: [4-1] ERROR: could not send data to client: Broken pipe\n>postgres[30391]: [6-1] LOG: could not send data to client: Broken pipe\n>postgres[30570]: [6-1] LOG: could not send data to client: Broken pipe\n\nCan anyone help me in interpreting these messages?\nWhat is causing this error msg? What is the severity?\n\n\nRegards\n\n-- Pradeep\n\nHi,\n\nI'm using Postgres 7.4. I have a web application built with php4 using postgres7.4\n\nI was going through /var/log/messages of my linux box ( SLES 9). I encountered the following messages quite a few times.\n\n>postgres[20199]: [4-1] ERROR:  could not send data to client: Broken pipe\n>postgres[30391]: [6-1] LOG:  could not send data to client: Broken pipe\n>postgres[30570]: [6-1] LOG:  could not send data to client: Broken pipe\n\nCan anyone help me in interpreting these messages? \nWhat is causing this error msg? What is the severity?\n\n\nRegards\n\n-- Pradeep", "msg_date": "Wed, 15 Feb 2006 12:59:47 +0530", "msg_from": "Pradeep Parmar <[email protected]>", "msg_from_op": true, "msg_subject": "could not send data to client: Broken pipe" }, { "msg_contents": "Pradeep Parmar wrote:\n> Hi,\n> \n> I'm using Postgres 7.4. I have a web application built with php4 using\n> postgres7.4\n> \n> I was going through /var/log/messages of my linux box ( SLES 9). I\n> encountered the following messages quite a few times.\n> \n>> postgres[20199]: [4-1] ERROR: could not send data to client: Broken pipe\n>> postgres[30391]: [6-1] LOG: could not send data to client: Broken pipe\n>> postgres[30570]: [6-1] LOG: could not send data to client: Broken pipe\n> \n> Can anyone help me in interpreting these messages?\n> What is causing this error msg? What is the severity?\n\nNot really a performance question, but at a guess your client went away. \nIs there anything to indicate this in your php/apache logs? Can you \nreproduce it by hitting cancel in your web-browser?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 15 Feb 2006 09:39:46 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] could not send data to client: Broken pipe" } ]
[ { "msg_contents": "\ni'm using Solaris8\ni tried changing only postgresql parameters\nand time has increased of 10mn\n\ni keep in mind your idea, we will soon upgraded to solaris 10\n\nregards\n\n\tWill\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]]\nEnvoyé : mardi 14 février 2006 22:47\nÀ : FERREIRA, William (VALTECH)\nCc : Albert Cervera Areny; [email protected]\nObjet : Re: [PERFORM] copy and postgresql.conf\n\n\n\nWhat version of Solaris are you using?\n\nDo you have the recommendations while using COPY on Solaris?\nhttp://blogs.sun.com/roller/page/jkshah?entry=postgresql_on_solaris_better_use\n\nwal_sync_method = fsync\nwal_buffers = 128\ncheckpoint_segments = 128\nbgwriter_percent = 0\nbgwriter_maxpages = 0\n\n\nAnd also for /etc/system on Solaris 10, 9 SPARC use the following\n\nset maxphys=1048576\nset md:md_maxphys=1048576\nset segmap_percent=50\nset ufs:freebehind=0\nset msgsys:msginfo_msgmni = 3584\nset semsys:seminfo_semmni = 4096\nset shmsys:shminfo_shmmax = 15392386252\nset shmsys:shminfo_shmmni = 4096\n\n\nCan you try putting in one run with this values and send back your\r\nexperiences on whether it helps your workload or not?\n\nAtleast I saw improvements using the above settings with COPY with\r\nPostgres 8.0 and Postgres 8.1 on Solaris.\n\nRegards,\nJignesh\n\n\n\n\nFERREIRA, William (VALTECH) wrote:\n\n>30% faster !!! i will test this new version ...\n>\n>thanks a lot\n>\n>-----Message d'origine-----\n>De : [email protected]\n>[mailto:[email protected]]De la part de Albert\n>Cervera Areny\n>Envoyé : mardi 14 février 2006 17:07\n>À : [email protected]\n>Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n>Sorry, COPY improvements came with 8.1\n>\n>(http://www.postgresql.org/docs/whatsnew)\n>\n>A Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> \r\n>\n>>thanks,\n>>\n>>i'm using postgresql 8.0.3\n>>there is no primary key and no index on my tables\n>>\n>>regards\n>>\n>>-----Message d'origine-----\n>>De : [email protected]\n>>[mailto:[email protected]]De la part de Albert\n>>Cervera Areny\n>>Envoyé : mardi 14 février 2006 12:38\n>>À : [email protected]\n>>Objet : Re: [PERFORM] copy and postgresql.conf\n>>\n>>\n>>\n>>Hi William,\n>>\twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>>\n>>important performance improvements for the COPY command.\n>>\n>>\tAlso, you'll notice significant improvements by creating primary & foreign\n>>\n>>keys after the copy command. I think config tweaking can improve key and\n>>\n>>index creation but I don't think you can improve the COPY command itself.\n>>\n>>\tThere are also many threads in this list commenting on this issue, you'll\n>>\n>>find it easely in the archives.\n>>\n>>A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n>> \r\n>>\n>>>hi,\n>>>\n>>>i load data from files using copy method.\n>>>Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>>>\n>>>For loading all the data, it takes 40mn, and the same processing takes\n>>>17mn with Oracle. I think that this time can be improved by changing\n>>>postgresql configuration file. But which parameters i need to manipulate\n>>>and with which values ?\n>>>\n>>>Here are the specifications of my system :\n>>>V250 architecture sun4u\n>>>2xCPU UltraSparc IIIi 1.28 GHz.\n>>>8 Go RAM.\n>>>\n>>>Regards.\n>>>\n>>>\tWill\n>>>\n>>>\n>>>This e-mail is intended only for the above addressee. It may contain\n>>>privileged information. 
If you are not the addressee you must not copy,\n>>>distribute, disclose or use any of the information in it. If you have\n>>>received it in error please delete it and immediately notify the sender.\n>>>Security Notice: all e-mail, sent to or from this address, may be\n>>>accessed by someone other than the recipient, for system management and\n>>>security reasons. This access is controlled under Regulation of\n>>>Investigatory Powers Act 2000, Lawful Business Practises.\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>> \r\n>>>\n>>--\n>>\n>>Albert Cervera Areny\n>>Dept. Informàtica Sedifa, S.L.\n>>\n>>Av. Can Bordoll, 149\n>>08202 - Sabadell (Barcelona)\n>>Tel. 93 715 51 11\n>>Fax. 93 715 51 12\n>>\n>>====================================================================\n>>........................ AVISO LEGAL ............................\n>>La presente comunicación y sus anexos tiene como destinatario la\n>>persona a la que va dirigida, por lo que si usted lo recibe\n>>por error debe notificarlo al remitente y eliminarlo de su\n>>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>>ningún fin. Su contenido puede tener información confidencial o\n>>protegida legalmente y únicamente expresa la opinión del\n>>remitente. El uso del correo electrónico vía Internet no\n>>permite asegurar ni la confidencialidad de los mensajes\n>>ni su correcta recepción. En el caso de que el\n>>destinatario no consintiera la utilización del correo electrónico,\n>>deberá ponerlo en nuestro conocimiento inmediatamente.\n>>====================================================================\n>>........................... DISCLAIMER .............................\n>>This message and its attachments are intended exclusively for the\n>>named addressee. If you receive this message in error, please\n>>immediately delete it from your system and notify the sender. You\n>>may not use this message or any part of it for any purpose.\n>>The message may contain information that is confidential or\n>>protected by law, and any opinions expressed are those of the\n>>individual sender. Internet e-mail guarantees neither the\n>>confidentiality nor the proper receipt of the message sent.\n>>If the addressee of this message does not consent to the use\n>>of internet e-mail, please inform us inmmediately.\n>>====================================================================\n>>\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>>\n>>This mail has originated outside your organization,\n>>either from an external partner or the Global Internet.\n>>Keep this in mind if you answer this message.\n>>\n>>\n>>This e-mail is intended only for the above addressee. It may contain\n>>privileged information. If you are not the addressee you must not copy,\n>>distribute, disclose or use any of the information in it. If you have\n>>received it in error please delete it and immediately notify the sender.\n>>Security Notice: all e-mail, sent to or from this address, may be\n>>accessed by someone other than the recipient, for system management and\n>>security reasons. This access is controlled under Regulation of\n>>Investigatory Powers Act 2000, Lawful Business Practises.\n>> \r\n>>\n>\n>--\n>\n>Albert Cervera Areny\n>Dept. Informàtica Sedifa, S.L.\n>\n>Av. 
Can Bordoll, 149\n>08202 - Sabadell (Barcelona)\n>Tel. 93 715 51 11\n>Fax. 93 715 51 12\n>\n>====================================================================\n>........................ AVISO LEGAL ............................\n>La presente comunicación y sus anexos tiene como destinatario la\n>persona a la que va dirigida, por lo que si usted lo recibe\n>por error debe notificarlo al remitente y eliminarlo de su\n>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>ningún fin. Su contenido puede tener información confidencial o\n>protegida legalmente y únicamente expresa la opinión del\n>remitente. El uso del correo electrónico vía Internet no\n>permite asegurar ni la confidencialidad de los mensajes\n>ni su correcta recepción. En el caso de que el\n>destinatario no consintiera la utilización del correo electrónico,\n>deberá ponerlo en nuestro conocimiento inmediatamente.\n>====================================================================\n>........................... DISCLAIMER .............................\n>This message and its attachments are intended exclusively for the\n>named addressee. If you receive this message in error, please\n>immediately delete it from your system and notify the sender. You\n>may not use this message or any part of it for any purpose.\n>The message may contain information that is confidential or\n>protected by law, and any opinions expressed are those of the\n>individual sender. Internet e-mail guarantees neither the\n>confidentiality nor the proper receipt of the message sent.\n>If the addressee of this message does not consent to the use\n>of internet e-mail, please inform us inmmediately.\n>====================================================================\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n>\n>\n>\n>This mail has originated outside your organization,\n>either from an external partner or the Global Internet.\n>Keep this in mind if you answer this message.\n>\n>\n>This e-mail is intended only for the above addressee. It may contain\n>privileged information. If you are not the addressee you must not copy,\n>distribute, disclose or use any of the information in it. If you have\n>received it in error please delete it and immediately notify the sender.\n>Security Notice: all e-mail, sent to or from this address, may be\n>accessed by someone other than the recipient, for system management and\n>security reasons. This access is controlled under Regulation of\n>Investigatory Powers Act 2000, Lawful Business Practises.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n> \r\n>\n\n\r\nThis mail has originated outside your organization,\neither from an external partner or the Global Internet.\nKeep this in mind if you answer this message.\n\n\r\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. 
This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Wed, 15 Feb 2006 11:25:46 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: copy and postgresql.conf" } ]
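Since the thread is about loading with COPY, it is worth spelling out the load pattern that pairs with the server settings Jignesh lists: COPY into bare tables, and add keys and indexes only afterwards, as Albert suggested earlier in the thread. A rough sketch — the table name, file paths, delimiter and key column are placeholders, and COPY FROM a server-side file requires superuser rights:

    BEGIN;
    -- several COPY commands can share one transaction to cut commit overhead
    COPY staging_table FROM '/tmp/data_part1.txt' WITH DELIMITER '|';
    COPY staging_table FROM '/tmp/data_part2.txt' WITH DELIMITER '|';
    COMMIT;

    -- only after all files are loaded:
    ALTER TABLE staging_table ADD PRIMARY KEY (id);
    ANALYZE staging_table;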
[ { "msg_contents": "Hi!\n\nI have a question about the query optimizer and the function scan. See \nthe next case:\n\nCREATE TABLE a (id SERIAL PRIMARY KEY, userid INT4, col TEXT);\nCREATE TABLE b (id SERIAL PRIMARY KEY, userid INT4, a_id INT4 REFERENCES \na (id), col TEXT);\nCREATE INDEX idx_a_uid ON a(userid);\nCREATE INDEX idx_b_uid ON b(userid);\nCREATE INDEX idx_a_col ON a(col);\nCREATE INDEX idx_b_col ON b(col);\n\nFirst solution:\n\n CREATE VIEW ab_view AS\n SELECT a.id AS id,\n a.userid AS userid_a, b.userid AS userid_b,\n a.col AS col_a, b.col AS col_b\n FROM a LEFT JOIN b ON (a.id = b.a_id);\n\n EXPLAIN ANALYSE SELECT * FROM ab_view\n WHERE userid_a = 23 AND userid_b = 23 AND col_a LIKE 's%'\n ORDER BY col_b\n LIMIT 10 OFFSET 10;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=15.70..15.70 rows=1 width=76) (actual time=0.108..0.108 \nrows=0 loops=1)\n -> Sort (cost=15.69..15.70 rows=1 width=76) (actual \ntime=0.104..0.104 rows=0 loops=1)\n Sort Key: b.col\n -> Nested Loop (cost=3.32..15.68 rows=1 width=76) (actual \ntime=0.085..0.085 rows=0 loops=1)\n Join Filter: (\"outer\".id = \"inner\".a_id)\n -> Bitmap Heap Scan on a (cost=2.30..6.13 rows=1 \nwidth=40) (actual time=0.082..0.082 rows=0 loops=1)\n Recheck Cond: (userid = 23)\n Filter: (col ~~ 's%'::text)\n -> BitmapAnd (cost=2.30..2.30 rows=1 width=0) \n(actual time=0.077..0.077 rows=0 loops=1)\n -> Bitmap Index Scan on idx_a_uid \n(cost=0.00..1.02 rows=6 width=0) (actual time=0.075..0.075 rows=0 loops=1)\n Index Cond: (userid = 23)\n -> Bitmap Index Scan on idx_a_col \n(cost=0.00..1.03 rows=6 width=0) (never executed)\n Index Cond: ((col >= 's'::text) AND \n(col < 't'::text))\n -> Bitmap Heap Scan on b (cost=1.02..9.49 rows=5 \nwidth=40) (never executed)\n Recheck Cond: (userid = 23)\n -> Bitmap Index Scan on idx_b_uid \n(cost=0.00..1.02 rows=5 width=0) (never executed)\n Index Cond: (userid = 23)\n Total runtime: 0.311 ms\n\n\nIn the first solution the query optimizer can work on the view and the \nfull execution of the query will be optimal. But I have to use 2 \ncondition for the userid fields (userid_a = 23 AND userid_b = 23 ). If I \nhave to eliminate the duplication I can try to use stored function.\n\nSecond solution:\n CREATE FUNCTION ab_select(INT4) RETURNS setof ab_view AS $$\n SELECT a.id AS id,\n a.userid AS userid_a, b.userid AS userid_b,\n a.col AS col_a, b.col AS col_b\n FROM a LEFT JOIN b ON (a.id = b.a_id AND b.userid = $1)\n WHERE a.userid = $1;\n $$ LANGUAGE SQL STABLE;\n\n EXPLAIN ANALYSE SELECT * FROM ab_select(23)\n WHERE col_a LIKE 's%'\n ORDER BY col_b\n LIMIT 10 OFFSET 10;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=15.07..15.07 rows=1 width=76) (actual time=1.034..1.034 \nrows=0 loops=1)\n -> Sort (cost=15.06..15.07 rows=5 width=76) (actual \ntime=1.030..1.030 rows=0 loops=1)\n Sort Key: col_b\n -> Function Scan on ab_select (cost=0.00..15.00 rows=5 \nwidth=76) (actual time=1.004..1.004 rows=0 loops=1)\n Filter: (col_a ~~ 's%'::text)\n Total runtime: 1.103 ms\n\nThe second solution have 2 advantage:\n 1. The second query is more beautiful and shorter.\n 2. You can rewrite easier the stored function without modify the query.\n\nBut I have heartache, because the optimizer give up the game. 
It cannot \noptimize the query globally (inside and outside the stored function) in \nspite of the STABLE keyword. It use function scan on the result of the \nstored function.\n\nHow can I eliminate the function scan while I want to keep the advantages?\n\nIn my opinion the optimizer cannot replace the function scan with a more \noptimal plan, but this feature may be implemented in the next versions \nof PostgreSQL. I would like to suggest this.\n\nI built this case theoretically, but I have more stored procedure which \nworks with bad performance therefore.\n\nRegards,\nAntal Attila\n", "msg_date": "Wed, 15 Feb 2006 13:57:03 +0100", "msg_from": "Antal Attila <[email protected]>", "msg_from_op": true, "msg_subject": "Stored proc and optimizer question" } ]
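One way to keep a single point of definition without hiding the statement behind a function scan is a prepared statement: the caller supplies the userid once, yet the planner still sees the whole query, including the outer filter and ordering. A sketch against the tables above; note that, depending on the release, the plan is built once with unknown parameter values, so it may not match a plan built for the literal 23 — EXPLAIN EXECUTE shows what was actually chosen:

    PREPARE ab_select (int4, text) AS
        SELECT a.id,
               a.userid AS userid_a, b.userid AS userid_b,
               a.col    AS col_a,    b.col    AS col_b
        FROM a LEFT JOIN b ON (a.id = b.a_id AND b.userid = $1)
        WHERE a.userid = $1
          AND a.col LIKE $2
        ORDER BY b.col
        LIMIT 10 OFFSET 10;

    EXECUTE ab_select(23, 's%');
    EXPLAIN EXECUTE ab_select(23, 's%');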
[ { "msg_contents": "\ni tested the last version version of PostgreSQL\nand for the same test :\nbefore : 40mn\nand now : 12mn :)\nfaster than Oracle (exactly what i wanted :p )\n\nthanks to everybody\n\n\tWill\n\n\n-----Message d'origine-----\nDe : [email protected]\n[mailto:[email protected]]De la part de Albert\nCervera Areny\nEnvoyé : mardi 14 février 2006 17:07\nÀ : [email protected]\nObjet : Re: [PERFORM] copy and postgresql.conf\n\n\n\nSorry, COPY improvements came with 8.1\n\n(http://www.postgresql.org/docs/whatsnew)\n\nA Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> thanks,\n>\n> i'm using postgresql 8.0.3\n> there is no primary key and no index on my tables\n>\n> regards\n>\n> -----Message d'origine-----\n> De : [email protected]\n> [mailto:[email protected]]De la part de Albert\n> Cervera Areny\n> Envoyé : mardi 14 février 2006 12:38\n> À : [email protected]\n> Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n> Hi William,\n> \twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>\n> important performance improvements for the COPY command.\n>\n> \tAlso, you'll notice significant improvements by creating primary & foreign\n>\n> keys after the copy command. I think config tweaking can improve key and\n>\n> index creation but I don't think you can improve the COPY command itself.\n>\n> \tThere are also many threads in this list commenting on this issue, you'll\n>\n> find it easely in the archives.\n>\n> A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n> > hi,\n> >\n> > i load data from files using copy method.\n> > Files contain between 2 and 7 millions of rows, spread on 5 tables.\n> >\n> > For loading all the data, it takes 40mn, and the same processing takes\n> > 17mn with Oracle. I think that this time can be improved by changing\n> > postgresql configuration file. But which parameters i need to manipulate\n> > and with which values ?\n> >\n> > Here are the specifications of my system :\n> > V250 architecture sun4u\n> > 2xCPU UltraSparc IIIi 1.28 GHz.\n> > 8 Go RAM.\n> >\n> > Regards.\n> >\n> > \tWill\n> >\n> >\n> > This e-mail is intended only for the above addressee. It may contain\n> > privileged information. If you are not the addressee you must not copy,\n> > distribute, disclose or use any of the information in it. If you have\n> > received it in error please delete it and immediately notify the sender.\n> > Security Notice: all e-mail, sent to or from this address, may be\n> > accessed by someone other than the recipient, for system management and\n> > security reasons. This access is controlled under Regulation of\n> > Investigatory Powers Act 2000, Lawful Business Practises.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> --\n>\n> Albert Cervera Areny\n> Dept. Informàtica Sedifa, S.L.\n>\n> Av. Can Bordoll, 149\n> 08202 - Sabadell (Barcelona)\n> Tel. 93 715 51 11\n> Fax. 93 715 51 12\n>\n> ====================================================================\n> ........................ AVISO LEGAL ............................\n> La presente comunicación y sus anexos tiene como destinatario la\n> persona a la que va dirigida, por lo que si usted lo recibe\n> por error debe notificarlo al remitente y eliminarlo de su\n> sistema, no pudiendo utilizarlo, total o parcialmente, para\n> ningún fin. 
Su contenido puede tener información confidencial o\n> protegida legalmente y únicamente expresa la opinión del\n> remitente. El uso del correo electrónico vía Internet no\n> permite asegurar ni la confidencialidad de los mensajes\n> ni su correcta recepción. En el caso de que el\n> destinatario no consintiera la utilización del correo electrónico,\n> deberá ponerlo en nuestro conocimiento inmediatamente.\n> ====================================================================\n> ........................... DISCLAIMER .............................\n> This message and its attachments are intended exclusively for the\n> named addressee. If you receive this message in error, please\n> immediately delete it from your system and notify the sender. You\n> may not use this message or any part of it for any purpose.\n> The message may contain information that is confidential or\n> protected by law, and any opinions expressed are those of the\n> individual sender. Internet e-mail guarantees neither the\n> confidentiality nor the proper receipt of the message sent.\n> If the addressee of this message does not consent to the use\n> of internet e-mail, please inform us inmmediately.\n> ====================================================================\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n>\n> This mail has originated outside your organization,\n> either from an external partner or the Global Internet.\n> Keep this in mind if you answer this message.\n>\n>\n> This e-mail is intended only for the above addressee. It may contain\n> privileged information. If you are not the addressee you must not copy,\n> distribute, disclose or use any of the information in it. If you have\n> received it in error please delete it and immediately notify the sender.\n> Security Notice: all e-mail, sent to or from this address, may be\n> accessed by someone other than the recipient, for system management and\n> security reasons. This access is controlled under Regulation of\n> Investigatory Powers Act 2000, Lawful Business Practises.\n\n--\n\nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. 
You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\nThis mail has originated outside your organization,\neither from an external partner or the Global Internet.\nKeep this in mind if you answer this message.\n\n\r\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Wed, 15 Feb 2006 15:07:40 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: copy and postgresql.conf" }, { "msg_contents": "What's your postgresql.conf parameter for the equivalent ones that I \nsuggested?\nI believe your wal_buffers and checkpoint_segments could be bigger. If \nthat's the case then yep you are fine.\n\nAs for the background writer I am seeing mixed results yet so not sure \nabout that.\n\nBut thanks for the feedback.\n\n-Jignesh\n\n\nFERREIRA, William (VALTECH) wrote:\n\n>i tested the last version version of PostgreSQL\n>and for the same test :\n>before : 40mn\n>and now : 12mn :)\n>faster than Oracle (exactly what i wanted :p )\n>\n>thanks to everybody\n>\n>\tWill\n>\n>\n>-----Message d'origine-----\n>De : [email protected]\n>[mailto:[email protected]]De la part de Albert\n>Cervera Areny\n>Envoy� : mardi 14 f�vrier 2006 17:07\n>� : [email protected]\n>Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n>Sorry, COPY improvements came with 8.1\n>\n>(http://www.postgresql.org/docs/whatsnew)\n>\n>A Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> \n>\n>>thanks,\n>>\n>>i'm using postgresql 8.0.3\n>>there is no primary key and no index on my tables\n>>\n>>regards\n>>\n>>-----Message d'origine-----\n>>De : [email protected]\n>>[mailto:[email protected]]De la part de Albert\n>>Cervera Areny\n>>Envoy� : mardi 14 f�vrier 2006 12:38\n>>� : [email protected]\n>>Objet : Re: [PERFORM] copy and postgresql.conf\n>>\n>>\n>>\n>>Hi William,\n>>\twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>>\n>>important performance improvements for the COPY command.\n>>\n>>\tAlso, you'll notice significant improvements by creating primary & foreign\n>>\n>>keys after the copy command. 
I think config tweaking can improve key and\n>>\n>>index creation but I don't think you can improve the COPY command itself.\n>>\n>>\tThere are also many threads in this list commenting on this issue, you'll\n>>\n>>find it easely in the archives.\n>>\n>>A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n>> \n>>\n>>>hi,\n>>>\n>>>i load data from files using copy method.\n>>>Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>>>\n>>>For loading all the data, it takes 40mn, and the same processing takes\n>>>17mn with Oracle. I think that this time can be improved by changing\n>>>postgresql configuration file. But which parameters i need to manipulate\n>>>and with which values ?\n>>>\n>>>Here are the specifications of my system :\n>>>V250 architecture sun4u\n>>>2xCPU UltraSparc IIIi 1.28 GHz.\n>>>8 Go RAM.\n>>>\n>>>Regards.\n>>>\n>>>\tWill\n>>>\n>>>\n>>>This e-mail is intended only for the above addressee. It may contain\n>>>privileged information. If you are not the addressee you must not copy,\n>>>distribute, disclose or use any of the information in it. If you have\n>>>received it in error please delete it and immediately notify the sender.\n>>>Security Notice: all e-mail, sent to or from this address, may be\n>>>accessed by someone other than the recipient, for system management and\n>>>security reasons. This access is controlled under Regulation of\n>>>Investigatory Powers Act 2000, Lawful Business Practises.\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>> \n>>>\n>>--\n>>\n>>Albert Cervera Areny\n>>Dept. Inform�tica Sedifa, S.L.\n>>\n>>Av. Can Bordoll, 149\n>>08202 - Sabadell (Barcelona)\n>>Tel. 93 715 51 11\n>>Fax. 93 715 51 12\n>>\n>>====================================================================\n>>........................ AVISO LEGAL ............................\n>>La presente comunicaci�n y sus anexos tiene como destinatario la\n>>persona a la que va dirigida, por lo que si usted lo recibe\n>>por error debe notificarlo al remitente y eliminarlo de su\n>>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>>ning�n fin. Su contenido puede tener informaci�n confidencial o\n>>protegida legalmente y �nicamente expresa la opini�n del\n>>remitente. El uso del correo electr�nico v�a Internet no\n>>permite asegurar ni la confidencialidad de los mensajes\n>>ni su correcta recepci�n. En el caso de que el\n>>destinatario no consintiera la utilizaci�n del correo electr�nico,\n>>deber� ponerlo en nuestro conocimiento inmediatamente.\n>>====================================================================\n>>........................... DISCLAIMER .............................\n>>This message and its attachments are intended exclusively for the\n>>named addressee. If you receive this message in error, please\n>>immediately delete it from your system and notify the sender. You\n>>may not use this message or any part of it for any purpose.\n>>The message may contain information that is confidential or\n>>protected by law, and any opinions expressed are those of the\n>>individual sender. 
Internet e-mail guarantees neither the\n>>confidentiality nor the proper receipt of the message sent.\n>>If the addressee of this message does not consent to the use\n>>of internet e-mail, please inform us inmmediately.\n>>====================================================================\n>>\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>>\n>>This mail has originated outside your organization,\n>>either from an external partner or the Global Internet.\n>>Keep this in mind if you answer this message.\n>>\n>>\n>>This e-mail is intended only for the above addressee. It may contain\n>>privileged information. If you are not the addressee you must not copy,\n>>distribute, disclose or use any of the information in it. If you have\n>>received it in error please delete it and immediately notify the sender.\n>>Security Notice: all e-mail, sent to or from this address, may be\n>>accessed by someone other than the recipient, for system management and\n>>security reasons. This access is controlled under Regulation of\n>>Investigatory Powers Act 2000, Lawful Business Practises.\n>> \n>>\n>\n>--\n>\n>Albert Cervera Areny\n>Dept. Inform�tica Sedifa, S.L.\n>\n>Av. Can Bordoll, 149\n>08202 - Sabadell (Barcelona)\n>Tel. 93 715 51 11\n>Fax. 93 715 51 12\n>\n>====================================================================\n>........................ AVISO LEGAL ............................\n>La presente comunicaci�n y sus anexos tiene como destinatario la\n>persona a la que va dirigida, por lo que si usted lo recibe\n>por error debe notificarlo al remitente y eliminarlo de su\n>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>ning�n fin. Su contenido puede tener informaci�n confidencial o\n>protegida legalmente y �nicamente expresa la opini�n del\n>remitente. El uso del correo electr�nico v�a Internet no\n>permite asegurar ni la confidencialidad de los mensajes\n>ni su correcta recepci�n. En el caso de que el\n>destinatario no consintiera la utilizaci�n del correo electr�nico,\n>deber� ponerlo en nuestro conocimiento inmediatamente.\n>====================================================================\n>........................... DISCLAIMER .............................\n>This message and its attachments are intended exclusively for the\n>named addressee. If you receive this message in error, please\n>immediately delete it from your system and notify the sender. You\n>may not use this message or any part of it for any purpose.\n>The message may contain information that is confidential or\n>protected by law, and any opinions expressed are those of the\n>individual sender. Internet e-mail guarantees neither the\n>confidentiality nor the proper receipt of the message sent.\n>If the addressee of this message does not consent to the use\n>of internet e-mail, please inform us inmmediately.\n>====================================================================\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n>\n>\n>\n>This mail has originated outside your organization,\n>either from an external partner or the Global Internet.\n>Keep this in mind if you answer this message.\n>\n>\n>This e-mail is intended only for the above addressee. It may contain\n>privileged information. 
If you are not the addressee you must not copy,\n>distribute, disclose or use any of the information in it. If you have\n>received it in error please delete it and immediately notify the sender.\n>Security Notice: all e-mail, sent to or from this address, may be\n>accessed by someone other than the recipient, for system management and\n>security reasons. This access is controlled under Regulation of\n>Investigatory Powers Act 2000, Lawful Business Practises.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n>\n", "msg_date": "Wed, 15 Feb 2006 09:14:10 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf" } ]
[ { "msg_contents": "\nwith PostgreSQL 8.1.3, here are my parameters (it's the default configuration)\n\nwal_sync_method = fsync\nwal_buffers = 8\ncheckpoint_segments = 3\nbgwriter_lru_percent = 1.0\nbgwriter_lru_maxpages = 5\nbgwriter_all_percent = 0.333\nbgwriter_all_maxpages = 5\n\nand you think times can be improved again ?\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]]\nEnvoyé : mercredi 15 février 2006 15:14\nÀ : FERREIRA, William (VALTECH)\nCc : Albert Cervera Areny; [email protected]\nObjet : Re: [PERFORM] copy and postgresql.conf\n\n\n\nWhat's your postgresql.conf parameter for the equivalent ones that I\r\nsuggested?\nI believe your wal_buffers and checkpoint_segments could be bigger. If\r\nthat's the case then yep you are fine.\n\nAs for the background writer I am seeing mixed results yet so not sure\r\nabout that.\n\nBut thanks for the feedback.\n\n-Jignesh\n\n\nFERREIRA, William (VALTECH) wrote:\n\n>i tested the last version version of PostgreSQL\n>and for the same test :\n>before : 40mn\n>and now : 12mn :)\n>faster than Oracle (exactly what i wanted :p )\n>\n>thanks to everybody\n>\n>\tWill\n>\n>\n>-----Message d'origine-----\n>De : [email protected]\n>[mailto:[email protected]]De la part de Albert\n>Cervera Areny\n>Envoyé : mardi 14 février 2006 17:07\n>À : [email protected]\n>Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n>Sorry, COPY improvements came with 8.1\n>\n>(http://www.postgresql.org/docs/whatsnew)\n>\n>A Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n> \r\n>\n>>thanks,\n>>\n>>i'm using postgresql 8.0.3\n>>there is no primary key and no index on my tables\n>>\n>>regards\n>>\n>>-----Message d'origine-----\n>>De : [email protected]\n>>[mailto:[email protected]]De la part de Albert\n>>Cervera Areny\n>>Envoyé : mardi 14 février 2006 12:38\n>>À : [email protected]\n>>Objet : Re: [PERFORM] copy and postgresql.conf\n>>\n>>\n>>\n>>Hi William,\n>>\twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>>\n>>important performance improvements for the COPY command.\n>>\n>>\tAlso, you'll notice significant improvements by creating primary & foreign\n>>\n>>keys after the copy command. I think config tweaking can improve key and\n>>\n>>index creation but I don't think you can improve the COPY command itself.\n>>\n>>\tThere are also many threads in this list commenting on this issue, you'll\n>>\n>>find it easely in the archives.\n>>\n>>A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n>> \r\n>>\n>>>hi,\n>>>\n>>>i load data from files using copy method.\n>>>Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>>>\n>>>For loading all the data, it takes 40mn, and the same processing takes\n>>>17mn with Oracle. I think that this time can be improved by changing\n>>>postgresql configuration file. But which parameters i need to manipulate\n>>>and with which values ?\n>>>\n>>>Here are the specifications of my system :\n>>>V250 architecture sun4u\n>>>2xCPU UltraSparc IIIi 1.28 GHz.\n>>>8 Go RAM.\n>>>\n>>>Regards.\n>>>\n>>>\tWill\n>>>\n>>>\n>>>This e-mail is intended only for the above addressee. It may contain\n>>>privileged information. If you are not the addressee you must not copy,\n>>>distribute, disclose or use any of the information in it. 
If you have\n>>>received it in error please delete it and immediately notify the sender.\n>>>Security Notice: all e-mail, sent to or from this address, may be\n>>>accessed by someone other than the recipient, for system management and\n>>>security reasons. This access is controlled under Regulation of\n>>>Investigatory Powers Act 2000, Lawful Business Practises.\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>> \r\n>>>\n>>--\n>>\n>>Albert Cervera Areny\n>>Dept. Informàtica Sedifa, S.L.\n>>\n>>Av. Can Bordoll, 149\n>>08202 - Sabadell (Barcelona)\n>>Tel. 93 715 51 11\n>>Fax. 93 715 51 12\n>>\n>>====================================================================\n>>........................ AVISO LEGAL ............................\n>>La presente comunicación y sus anexos tiene como destinatario la\n>>persona a la que va dirigida, por lo que si usted lo recibe\n>>por error debe notificarlo al remitente y eliminarlo de su\n>>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>>ningún fin. Su contenido puede tener información confidencial o\n>>protegida legalmente y únicamente expresa la opinión del\n>>remitente. El uso del correo electrónico vía Internet no\n>>permite asegurar ni la confidencialidad de los mensajes\n>>ni su correcta recepción. En el caso de que el\n>>destinatario no consintiera la utilización del correo electrónico,\n>>deberá ponerlo en nuestro conocimiento inmediatamente.\n>>====================================================================\n>>........................... DISCLAIMER .............................\n>>This message and its attachments are intended exclusively for the\n>>named addressee. If you receive this message in error, please\n>>immediately delete it from your system and notify the sender. You\n>>may not use this message or any part of it for any purpose.\n>>The message may contain information that is confidential or\n>>protected by law, and any opinions expressed are those of the\n>>individual sender. Internet e-mail guarantees neither the\n>>confidentiality nor the proper receipt of the message sent.\n>>If the addressee of this message does not consent to the use\n>>of internet e-mail, please inform us inmmediately.\n>>====================================================================\n>>\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>>\n>>This mail has originated outside your organization,\n>>either from an external partner or the Global Internet.\n>>Keep this in mind if you answer this message.\n>>\n>>\n>>This e-mail is intended only for the above addressee. It may contain\n>>privileged information. If you are not the addressee you must not copy,\n>>distribute, disclose or use any of the information in it. If you have\n>>received it in error please delete it and immediately notify the sender.\n>>Security Notice: all e-mail, sent to or from this address, may be\n>>accessed by someone other than the recipient, for system management and\n>>security reasons. This access is controlled under Regulation of\n>>Investigatory Powers Act 2000, Lawful Business Practises.\n>> \r\n>>\n>\n>--\n>\n>Albert Cervera Areny\n>Dept. Informàtica Sedifa, S.L.\n>\n>Av. Can Bordoll, 149\n>08202 - Sabadell (Barcelona)\n>Tel. 93 715 51 11\n>Fax. 
93 715 51 12\n>\n>====================================================================\n>........................ AVISO LEGAL ............................\n>La presente comunicación y sus anexos tiene como destinatario la\n>persona a la que va dirigida, por lo que si usted lo recibe\n>por error debe notificarlo al remitente y eliminarlo de su\n>sistema, no pudiendo utilizarlo, total o parcialmente, para\n>ningún fin. Su contenido puede tener información confidencial o\n>protegida legalmente y únicamente expresa la opinión del\n>remitente. El uso del correo electrónico vía Internet no\n>permite asegurar ni la confidencialidad de los mensajes\n>ni su correcta recepción. En el caso de que el\n>destinatario no consintiera la utilización del correo electrónico,\n>deberá ponerlo en nuestro conocimiento inmediatamente.\n>====================================================================\n>........................... DISCLAIMER .............................\n>This message and its attachments are intended exclusively for the\n>named addressee. If you receive this message in error, please\n>immediately delete it from your system and notify the sender. You\n>may not use this message or any part of it for any purpose.\n>The message may contain information that is confidential or\n>protected by law, and any opinions expressed are those of the\n>individual sender. Internet e-mail guarantees neither the\n>confidentiality nor the proper receipt of the message sent.\n>If the addressee of this message does not consent to the use\n>of internet e-mail, please inform us inmmediately.\n>====================================================================\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n>\n>\n>\n>This mail has originated outside your organization,\n>either from an external partner or the Global Internet.\n>Keep this in mind if you answer this message.\n>\n>\n>This e-mail is intended only for the above addressee. It may contain\n>privileged information. If you are not the addressee you must not copy,\n>distribute, disclose or use any of the information in it. If you have\n>received it in error please delete it and immediately notify the sender.\n>Security Notice: all e-mail, sent to or from this address, may be\n>accessed by someone other than the recipient, for system management and\n>security reasons. This access is controlled under Regulation of\n>Investigatory Powers Act 2000, Lawful Business Practises.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \r\n>\n\n\r\nThis mail has originated outside your organization,\neither from an external partner or the Global Internet.\nKeep this in mind if you answer this message.\n\n\r\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. 
This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Wed, 15 Feb 2006 15:25:41 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: copy and postgresql.conf" }, { "msg_contents": "Actually fsync is not the default on solaris (verify using \"show all;)\n\n(If you look closely in postgresql.conf it is commented out and \nmentioned as default but show all tells a different story)\n\nIn all my cases I saw the default as\n wal_sync_method | open_datasync\n\nAlso I had seen quite an improvement by changing the default \ncheckpoint_segments from 3 to 64 or 128 and also increasing wal_buffers \nto 64 depending on how heavy is your load.\n\nAlso open_datasync type of operations benefit with forcedirectio on \nSolaris and hence either move wal to forcedirectio mounted file system \nor try changing default sync to fsync (the *said* default)\n\nNow if you use fsync then you need a bigger file system cache since by \ndefault Solaris's segmap mechanism only maps 12% of your physical ram to \nbe used for file system buffer cache. Increasing segmap_percent to 50 on \nSPARC allows to use 50% of your RAM to be mapped to be used for 50% ( \nNOTE: It does not reserve but just allow mapping of the memory which can \nbe used for file system buffer cache)\n\nChanging maxphys allows the file system buffer cache to coalesce writes \nfrom the 8Ks that PostgreSQL is doing to bigger writes/reads. Also since \nyou are now exploiting file system buffer cache, file system Logging is \nvery much recommended (available from a later update of Solaris 8 I \nbelieve).\n\n\nRegards,\nJignesh\n\n\n\n\n\nFERREIRA, William (VALTECH) wrote:\n\n>with PostgreSQL 8.1.3, here are my parameters (it's the default configuration)\n>\n>wal_sync_method = fsync\n>wal_buffers = 8\n>checkpoint_segments = 3\n>bgwriter_lru_percent = 1.0\n>bgwriter_lru_maxpages = 5\n>bgwriter_all_percent = 0.333\n>bgwriter_all_maxpages = 5\n>\n>and you think times can be improved again ?\n>\n>-----Message d'origine-----\n>De : [email protected] [mailto:[email protected]]\n>Envoy� : mercredi 15 f�vrier 2006 15:14\n>� : FERREIRA, William (VALTECH)\n>Cc : Albert Cervera Areny; [email protected]\n>Objet : Re: [PERFORM] copy and postgresql.conf\n>\n>\n>\n>What's your postgresql.conf parameter for the equivalent ones that I\n>suggested?\n>I believe your wal_buffers and checkpoint_segments could be bigger. 
If\n>that's the case then yep you are fine.\n>\n>As for the background writer I am seeing mixed results yet so not sure\n>about that.\n>\n>But thanks for the feedback.\n>\n>-Jignesh\n>\n>\n>FERREIRA, William (VALTECH) wrote:\n>\n> \n>\n>>i tested the last version version of PostgreSQL\n>>and for the same test :\n>>before : 40mn\n>>and now : 12mn :)\n>>faster than Oracle (exactly what i wanted :p )\n>>\n>>thanks to everybody\n>>\n>>\tWill\n>>\n>>\n
>>-----Message d'origine-----\n>>De : [email protected]\n>>[mailto:[email protected]]De la part de Albert\n>>Cervera Areny\n>>Envoyé : mardi 14 février 2006 17:07\n>>À : [email protected]\n>>Objet : Re: [PERFORM] copy and postgresql.conf\n>>\n>>\n>>\n>>Sorry, COPY improvements came with 8.1\n>>\n>>(http://www.postgresql.org/docs/whatsnew)\n>>\n>>A Dimarts 14 Febrer 2006 14:26, FERREIRA, William (VALTECH) va escriure:\n>>\n>>\n>> \n>>\n>>>thanks,\n>>>\n>>>i'm using postgresql 8.0.3\n>>>there is no primary key and no index on my tables\n>>>\n>>>regards\n>>>\n>>>-----Message d'origine-----\n>>>De : [email protected]\n>>>[mailto:[email protected]]De la part de Albert\n>>>Cervera Areny\n>>>Envoyé : mardi 14 février 2006 12:38\n>>>À : [email protected]\n>>>Objet : Re: [PERFORM] copy and postgresql.conf\n>>>\n>>>\n>>>\n>>>Hi William,\n>>>\twhich PostgreSQL version are you using? Newer (8.0+) versions have some\n>>>\n>>>important performance improvements for the COPY command.\n>>>\n>>>\tAlso, you'll notice significant improvements by creating primary & foreign\n>>>\n>>>keys after the copy command. I think config tweaking can improve key and\n>>>\n>>>index creation but I don't think you can improve the COPY command itself.\n>>>\n>>>\tThere are also many threads in this list commenting on this issue, you'll\n>>>\n>>>find it easely in the archives.\n>>>\n
>>>A Dimarts 14 Febrer 2006 10:44, FERREIRA, William (VALTECH) va escriure:\n>>> \n>>>\n>>> \n>>>\n>>>>hi,\n>>>>\n>>>>i load data from files using copy method.\n>>>>Files contain between 2 and 7 millions of rows, spread on 5 tables.\n>>>>\n>>>>For loading all the data, it takes 40mn, and the same processing takes\n>>>>17mn with Oracle. I think that this time can be improved by changing\n>>>>postgresql configuration file. But which parameters i need to manipulate\n>>>>and with which values ?\n>>>>\n>>>>Here are the specifications of my system :\n>>>>V250 architecture sun4u\n>>>>2xCPU UltraSparc IIIi 1.28 GHz.\n>>>>8 Go RAM.\n>>>>\n>>>>Regards.\n>>>>\n>>>>\tWill\n>>>>\n", "msg_date": "Wed, 15 Feb 2006 10:07:02 -0500", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf" }, { "msg_contents": "\"FERREIRA, William (VALTECH)\" <[email protected]> writes:\n> with PostgreSQL 8.1.3, here are my parameters (it's the default configuration)\n\n> wal_sync_method = fsync\n> wal_buffers = 8\n> checkpoint_segments = 3\n> bgwriter_lru_percent = 1.0\n> bgwriter_lru_maxpages = 5\n> bgwriter_all_percent = 0.333\n> bgwriter_all_maxpages = 5\n\n> and you think times can be improved again ?\n\nIncreasing checkpoint_segments will definitely help for any write-intensive\nsituation. It costs you in disk space of course, as well as the time\nneeded for post-crash recovery.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 10:18:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy and postgresql.conf " } ]
[ { "msg_contents": "Hi!\n\nI have a question about the query optimizer and the function scan. See\nthe next case:\n\nCREATE TABLE a (id SERIAL PRIMARY KEY, userid INT4, col TEXT);\nCREATE TABLE b (id SERIAL PRIMARY KEY, userid INT4, a_id INT4 REFERENCES\na (id), col TEXT);\nCREATE INDEX idx_a_uid ON a(userid);\nCREATE INDEX idx_b_uid ON b(userid);\nCREATE INDEX idx_a_col ON a(col);\nCREATE INDEX idx_b_col ON b(col);\n\nFirst solution:\n\n CREATE VIEW ab_view AS\n SELECT a.id AS id,\n a.userid AS userid_a, b.userid AS userid_b,\n a.col AS col_a, b.col AS col_b\n FROM a LEFT JOIN b ON (a.id = b.a_id);\n\n EXPLAIN ANALYSE SELECT * FROM ab_view\n WHERE userid_a = 23 AND userid_b = 23 AND col_a LIKE 's%'\n ORDER BY col_b\n LIMIT 10 OFFSET 10;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=15.70..15.70 rows=1 width=76) (actual time=0.108..0.108\nrows=0 loops=1)\n -> Sort (cost=15.69..15.70 rows=1 width=76) (actual\ntime=0.104..0.104 rows=0 loops=1)\n Sort Key: b.col\n -> Nested Loop (cost=3.32..15.68 rows=1 width=76) (actual\ntime=0.085..0.085 rows=0 loops=1)\n Join Filter: (\"outer\".id = \"inner\".a_id)\n -> Bitmap Heap Scan on a (cost=2.30..6.13 rows=1\nwidth=40) (actual time=0.082..0.082 rows=0 loops=1)\n Recheck Cond: (userid = 23)\n Filter: (col ~~ 's%'::text)\n -> BitmapAnd (cost=2.30..2.30 rows=1 width=0)\n(actual time=0.077..0.077 rows=0 loops=1)\n -> Bitmap Index Scan on idx_a_uid\n(cost=0.00..1.02 rows=6 width=0) (actual time=0.075..0.075 rows=0 loops=1)\n Index Cond: (userid = 23)\n -> Bitmap Index Scan on idx_a_col\n(cost=0.00..1.03 rows=6 width=0) (never executed)\n Index Cond: ((col >= 's'::text) AND\n(col < 't'::text))\n -> Bitmap Heap Scan on b (cost=1.02..9.49 rows=5\nwidth=40) (never executed)\n Recheck Cond: (userid = 23)\n -> Bitmap Index Scan on idx_b_uid\n(cost=0.00..1.02 rows=5 width=0) (never executed)\n Index Cond: (userid = 23)\nTotal runtime: 0.311 ms\n\n\nIn the first solution the query optimizer can work on the view and the\nfull execution of the query will be optimal. But I have to use 2\ncondition for the userid fields (userid_a = 23 AND userid_b = 23 ). If I\nhave to eliminate the duplication I can try to use stored function.\n\nSecond solution:\n CREATE FUNCTION ab_select(INT4) RETURNS setof ab_view AS $$\n SELECT a.id AS id,\n a.userid AS userid_a, b.userid AS userid_b,\n a.col AS col_a, b.col AS col_b\n FROM a LEFT JOIN b ON (a.id = b.a_id AND b.userid = $1)\n WHERE a.userid = $1;\n $$ LANGUAGE SQL STABLE;\n\n EXPLAIN ANALYSE SELECT * FROM ab_select(23)\n WHERE col_a LIKE 's%'\n ORDER BY col_b\n LIMIT 10 OFFSET 10;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\nLimit (cost=15.07..15.07 rows=1 width=76) (actual time=1.034..1.034\nrows=0 loops=1)\n -> Sort (cost=15.06..15.07 rows=5 width=76) (actual\ntime=1.030..1.030 rows=0 loops=1)\n Sort Key: col_b\n -> Function Scan on ab_select (cost=0.00..15.00 rows=5\nwidth=76) (actual time=1.004..1.004 rows=0 loops=1)\n Filter: (col_a ~~ 's%'::text)\nTotal runtime: 1.103 ms\n\nThe second solution have 2 advantage:\n 1. The second query is more beautiful and shorter.\n 2. You can rewrite easier the stored function without modify the query.\n\nBut I have heartache, because the optimizer give up the game. It cannot\noptimize the query globally (inside and outside the stored function) in\nspite of the STABLE keyword. 
It uses a function scan on the result of the\nstored function.\n\nHow can I eliminate the function scan while keeping the advantages?\n\nIn my opinion the optimizer currently cannot replace the function scan with a more\noptimal plan, but this feature may be implemented in a future version\nof PostgreSQL. I would like to suggest it.\n\nI built this case as a theoretical example, but I also have several real stored procedures that\nperform badly for the same reason.\n\nRegards,\nAntal Attila\n\n", "msg_date": "Wed, 15 Feb 2006 16:51:47 +0100", "msg_from": "Antal Attila <[email protected]>", "msg_from_op": true, "msg_subject": "Stored proc and optimizer question" } ]
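For the problem discussed above, one workaround that avoids both the duplicated userid condition and the opaque function scan is to fold the user-id equality into the view's join clause, so the caller filters on a single column and the planner still sees an ordinary view. This is only a sketch: ab_view2 is a hypothetical name, and the extra join condition (a.userid = b.userid) is an assumption that only holds if every b row carries the same userid as the a row it references, which the original messages do not state.

    CREATE VIEW ab_view2 AS
        SELECT a.id AS id, a.userid AS userid, a.col AS col_a, b.col AS col_b
        FROM a LEFT JOIN b ON (a.id = b.a_id AND a.userid = b.userid);

    SELECT * FROM ab_view2
    WHERE userid = 23 AND col_a LIKE 's%'
    ORDER BY col_b
    LIMIT 10 OFFSET 10;

Because the view is plain SQL rather than a set-returning function, the planner can push the userid and col_a conditions down to the base tables and use the idx_a_uid, idx_a_col and idx_b_uid indexes, much as in the first EXPLAIN shown in the question.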
[ { "msg_contents": "\nGood morning,\n\n\n\n\nI've increased sort_mem until 2Go !!\nand the error \"out of memory\" appears again.\n\nHere the request I try to pass with her explain plan,\n\n Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=34)\n -> Subquery Scan \"day\" (cost=2451676.23..2451688.73 rows=1000 width=16)\n -> Limit (cost=2451676.23..2451678.73 rows=1000 width=12)\n -> Sort (cost=2451676.23..2451684.63 rows=3357 width=12)\n Sort Key: sum(occurence)\n -> HashAggregate (cost=2451471.24..2451479.63 rows=3357\nwidth=12)\n -> Index Scan using test_date on\nqueries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12)\n Index Cond: ((date >= '2006-01-01'::date) AND\n(date <= '2006-01-30'::date))\n Filter: (((portal)::text = '1'::text) OR\n((portal)::text = '2'::text))\n -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\nrows=1 width=34)\n Index Cond: (\"outer\".query = query_string.id)\n(11 rows)\n\nAny new ideas ?,\nthanks\n\nMB.\n\n\n\n\n> On Tue, 2006-02-14 at 10:32, [email protected] wrote:\n> > command explain analyze crash with the \"out of memory\" error\n> >\n> > I precise that I've tried a lot of values from parameters shared_buffer and\n> > sort_mem\n> >\n> > now, in config file, values are :\n> > sort_mem=32768\n> > and shared_buffer=30000\n>\n> OK, on the command line, try increasing the sort_mem until hash_agg can\n> work. With a 4 gig machine, you should be able to go as high as needed\n> here, I'd think. Try as high as 500000 or so or more. Then when\n> explain analyze works, compare the actual versus estimated number of\n> rows.\n>\n\n", "msg_date": "Wed, 15 Feb 2006 16:55:30 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "out of memory" }, { "msg_contents": "On Wed, 2006-02-15 at 09:55, [email protected] wrote:\n> Good morning,\n> \n> \n> \n> \n> I've increased sort_mem until 2Go !!\n> and the error \"out of memory\" appears again.\n> \n> Here the request I try to pass with her explain plan,\n> \n> Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=34)\n> -> Subquery Scan \"day\" (cost=2451676.23..2451688.73 rows=1000 width=16)\n> -> Limit (cost=2451676.23..2451678.73 rows=1000 width=12)\n> -> Sort (cost=2451676.23..2451684.63 rows=3357 width=12)\n> Sort Key: sum(occurence)\n> -> HashAggregate (cost=2451471.24..2451479.63 rows=3357\n> width=12)\n> -> Index Scan using test_date on\n> queries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12)\n> Index Cond: ((date >= '2006-01-01'::date) AND\n> (date <= '2006-01-30'::date))\n> Filter: (((portal)::text = '1'::text) OR\n> ((portal)::text = '2'::text))\n> -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\n> rows=1 width=34)\n> Index Cond: (\"outer\".query = query_string.id)\n> (11 rows)\n\nOK, so it looks like something is horrible wrong here. 
Try running the\nexplain analyze query after running the following:\n\n set enable_hashagg=off;\n\nand see what you get then.\n", "msg_date": "Wed, 15 Feb 2006 10:50:57 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "Here the result with hashAgg to false :\n Nested Loop (cost=2487858.08..2490896.58 rows=1001 width=34) (actual\ntime=1028044.781..1030251.260 rows=1000 loops=1)\n -> Subquery Scan \"day\" (cost=2487858.08..2487870.58 rows=1000 width=16)\n(actual time=1027996.748..1028000.969 rows=1000 loops=1)\n -> Limit (cost=2487858.08..2487860.58 rows=1000 width=12) (actual\ntime=1027996.737..1027999.199 rows=1000 loops=1)\n -> Sort (cost=2487858.08..2487866.47 rows=3357 width=12)\n(actual time=1027996.731..1027998.066 rows=1000 loops=1)\n Sort Key: sum(occurence)\n -> GroupAggregate (cost=2484802.05..2487661.48 rows=3357\nwidth=12) (actual time=810623.035..914550.262 rows=19422774 loops=1)\n -> Sort (cost=2484802.05..2485752.39 rows=380138\nwidth=12) (actual time=810612.248..845427.013 rows=36724340 loops=1)\n Sort Key: query\n -> Index Scan using test_date on\nqueries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12) (actual\ntime=25.393..182029.205 rows=36724340 loops=1)\n Index Cond: ((date >= '2006-01-01'::date)\nAND (date <= '2006-01-30'::date))\n Filter: (((portal)::text = '1'::text) OR\n((portal)::text = '2'::text))\n -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\nrows=1 width=34) (actual time=2.244..2.246 rows=1 loops=1000)\n Index Cond: (\"outer\".query = query_string.id)\n Total runtime: 1034357.390 ms\n(14 rows)\n\n\nthanks\n\ntable daily has 250 millions records\nand field query (bigint) 2 millions, occurence is int.\n\nrequest with HashAggregate is OK when date is restricted about 15 days like :\n\n SELECT query_string, DAY.ocu from search_data.query_string,\n (SELECT SUM(occurence) as ocu, query\nFROM daily.queries_detail_statistics\n WHERE date >= '2006-01-01' AND date <= '2006-01-15'\n AND portal IN (1,2)\n GROUP BY query\n ORDER BY ocu DESC\n LIMIT 1000) as DAY\n WHERE DAY.query=id;\n\n> On Wed, 2006-02-15 at 09:55, [email protected] wrote:\n> > Good morning,\n> >\n> >\n> >\n> >\n> > I've increased sort_mem until 2Go !!\n> > and the error \"out of memory\" appears again.\n> >\n> > Here the request I try to pass with her explain plan,\n> >\n> > Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=34)\n> > -> Subquery Scan \"day\" (cost=2451676.23..2451688.73 rows=1000\n> width=16)\n> > -> Limit (cost=2451676.23..2451678.73 rows=1000 width=12)\n> > -> Sort (cost=2451676.23..2451684.63 rows=3357 width=12)\n> > Sort Key: sum(occurence)\n> > -> HashAggregate (cost=2451471.24..2451479.63\n> rows=3357\n> > width=12)\n> > -> Index Scan using test_date on\n> > queries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12)\n> > Index Cond: ((date >= '2006-01-01'::date)\n> AND\n> > (date <= '2006-01-30'::date))\n> > Filter: (((portal)::text = '1'::text) OR\n> > ((portal)::text = '2'::text))\n> > -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\n> > rows=1 width=34)\n> > Index Cond: (\"outer\".query = query_string.id)\n> > (11 rows)\n>\n> OK, so it looks like something is horrible wrong here. 
Try running the\n> explain analyze query after running the following:\n>\n> set enable_hashagg=off;\n>\n> and see what you get then.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Wed, 15 Feb 2006 18:18:21 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: out of memory" }, { "msg_contents": "On Wed, 2006-02-15 at 11:18, [email protected] wrote:\n> Here the result with hashAgg to false :\n> Nested Loop (cost=2487858.08..2490896.58 rows=1001 width=34) (actual\n> time=1028044.781..1030251.260 rows=1000 loops=1)\n> -> Subquery Scan \"day\" (cost=2487858.08..2487870.58 rows=1000 width=16)\n> (actual time=1027996.748..1028000.969 rows=1000 loops=1)\n> -> Limit (cost=2487858.08..2487860.58 rows=1000 width=12) (actual\n> time=1027996.737..1027999.199 rows=1000 loops=1)\n> -> Sort (cost=2487858.08..2487866.47 rows=3357 width=12)\n> (actual time=1027996.731..1027998.066 rows=1000 loops=1)\n> Sort Key: sum(occurence)\n> -> GroupAggregate (cost=2484802.05..2487661.48 rows=3357\n> width=12) (actual time=810623.035..914550.262 rows=19422774 loops=1)\n> -> Sort (cost=2484802.05..2485752.39 rows=380138\n> width=12) (actual time=810612.248..845427.013 rows=36724340 loops=1)\n> Sort Key: query\n> -> Index Scan using test_date on\n> queries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12) (actual\n> time=25.393..182029.205 rows=36724340 loops=1)\n> Index Cond: ((date >= '2006-01-01'::date)\n> AND (date <= '2006-01-30'::date))\n> Filter: (((portal)::text = '1'::text) OR\n> ((portal)::text = '2'::text))\n> -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\n> rows=1 width=34) (actual time=2.244..2.246 rows=1 loops=1000)\n> Index Cond: (\"outer\".query = query_string.id)\n> Total runtime: 1034357.390 ms\n\nOK, in the index scan using test_date, you get 36724340 when the planner\nexpects 380138. That's off by a factor of about 10, so I'm guessing\nthat your statistics aren't reflecting what's really in your db. 
You\nsaid before you'd run analyze, so I'd try increasing the stats target on\nthat column and rerun analyze to see if things get any better.\n\n", "msg_date": "Wed, 15 Feb 2006 11:38:17 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory" }, { "msg_contents": "Hello,\n\nI want to split table partitioned across two servers postgres (two hosts).\nTo query this remote object, I want to make view with union on two servers with\ntwo dblink.\n\nBut, How to be sure that optimizer plan on remote node is same than local node\n(ie : optimizer scan only the selected partitions and not make full scan of\nthe remote object) ?\n\nexample : server 1 (table test partionned on field number and 1 < number <10)\n server 2 (table test partitioned on field number 10 <number <20)\n\nserver 3 has view like :\ncreate view remote_test\nas\nselect * from dblink('conn_server1', select * from test) as test_server1(....)\nunion\nselect * from dblink('conn_server2', select * from test) as test_server2(....)\n\nIf I've made select on view remote_test like :\nselect * from remote_test where number<5 and number >15.\n\noptimizer made full scan of all partitions on all servers or\nscan only partition 1 to partition 4 on server1\nand scan partiton 16 to partition 19 on server2\nand union ?\n\nIn fact, I don't know how to have explain plan of remote node.\n\nThanks a lot.\n\nMB\n\n\n", "msg_date": "Fri, 17 Feb 2006 09:12:39 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "split partitioned table across several postgres servers" }, { "msg_contents": "[email protected] writes:\n> In fact, I don't know how to have explain plan of remote node.\n\nYou send it an EXPLAIN.\n\nYou can *not* use a view defined as you suggest if you want decent\nperformance --- the dblink functions will fetch the entire table\ncontents and the filtering will be done locally. You'll need to\npass the WHERE conditions over to the remote servers, which more\nor less means that you have to give them to the dblink functions\nas text.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Feb 2006 09:49:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: split partitioned table across several postgres servers " }, { "msg_contents": "Selon Tom Lane <[email protected]>:\n\n> [email protected] writes:\n> > In fact, I don't know how to have explain plan of remote node.\n>\n> You send it an EXPLAIN.\n\n\nPlease, Could you send me what to put at end of request :\n\n select * from dblink('my_connexion', 'EXPLAIN select * from test where\nnumber='1' ') as ........\n\nI want to be sure that remote test is seen as partitioned object.\n\nthanks a lot.\n\n\n>\n> You can *not* use a view defined as you suggest if you want decent\n> performance --- the dblink functions will fetch the entire table\n> contents and the filtering will be done locally. You'll need to\n> pass the WHERE conditions over to the remote servers, which more\n> or less means that you have to give them to the dblink functions\n> as text.\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 17 Feb 2006 16:18:39 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: split partitioned table across several postgres servers " } ]
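Following Tom Lane's advice in the thread above, the WHERE clause has to travel inside the SQL text handed to dblink so that each remote server can prune its own partitions, and a remote plan can be inspected the same way by shipping an EXPLAIN. The sketch below reuses the connection names from the thread; the output column list (number int, plan_line text) is an assumption, since the definition of the remote test table is never shown.

    -- inspect the remote plan: EXPLAIN comes back as one text column per plan line
    SELECT * FROM dblink('conn_server1',
        'EXPLAIN SELECT * FROM test WHERE number < 5')
        AS remote_plan(plan_line text);

    -- push each filter down so only the matching partitions are scanned remotely
    SELECT * FROM dblink('conn_server1',
        'SELECT number FROM test WHERE number < 5') AS t1(number int)
    UNION ALL
    SELECT * FROM dblink('conn_server2',
        'SELECT number FROM test WHERE number > 15') AS t2(number int);

UNION ALL is used instead of UNION on the assumption that the two servers hold disjoint ranges, which saves the duplicate-removal step a plain UNION would add on the local side.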
[ { "msg_contents": "Good morning,\n\nI try to understand how optimizer uses HashAggregate instead of GroupAggregate\nand I want to know what is exactly this two functionnality (benefits\n/inconvenients)\n\nIn my case, I've this explain plan.\n-----------------------\n Nested Loop (cost=2451676.23..2454714.73 rows=1001 width=34)\n -> Subquery Scan \"day\" (cost=2451676.23..2451688.73 rows=1000 width=16)\n -> Limit (cost=2451676.23..2451678.73 rows=1000 width=12)\n -> Sort (cost=2451676.23..2451684.63 rows=3357 width=12)\n Sort Key: sum(occurence)\n -> HashAggregate (cost=2451471.24..2451479.63 rows=3357\nwidth=12)\n -> Index Scan using test_date on\nqueries_detail_statistics (cost=0.00..2449570.55 rows=380138 width=12)\n Index Cond: ((date >= '2006-01-01'::date) AND\n(date <= '2006-01-30'::date))\n Filter: (((portal)::text = '1'::text) OR\n((portal)::text = '2'::text))\n -> Index Scan using query_string_pkey on query_string (cost=0.00..3.01\nrows=1 width=34)\n Index Cond: (\"outer\".query = query_string.id)\n----------------------------\nHow to get necessary memory RAM for this explain plan ?\n\n\n\nthanks a lot\n\n", "msg_date": "Wed, 15 Feb 2006 16:56:07 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: explain hashAggregate" }, { "msg_contents": "We are a small company looking to put together the most cost effective\nsolution for our production database environment. Currently in\nproduction Postgres 8.1 is running on this machine:\n\nDell 2850\n2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n4 GB DDR2 400 Mhz\n2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n4 x 146 GB 10K SCSI RAID 10 (for postgres data)\nPerc4ei controller\n\nThe above is a standard Dell box with nothing added or modified beyond\nthe options available directly through Dell. We had a bad processor last\nweek that effectively put us down for an entire weekend. Though it was\nthe web server that failed, the experience has caused us to step back\nand spend time coming up with a more reliable/fail-safe solution that\ncan reduce downtime.\n\nOur load won't be substantial so extreme performance and load balancing\nare not huge concerns. We are looking for good performance, at a good\nprice, configured in the most redundant, high availability manner\npossible. Availability is the biggest priority.\n\nI sent our scenario to our sales team at Dell and they came back with\nall manner of SAN, DAS, and configuration costing as much as $50k.\n\nWe have the budget to purchase 2-3 additional machines along the lines\nof the one listed above. As a startup with a limited budget, what would\nthis list suggest as options for clustering/replication or setting our\ndatabase up well in general?\n", "msg_date": "Wed, 15 Feb 2006 11:21:30 -0500", "msg_from": "\"Jeremy Haile\" <[email protected]>", "msg_from_op": false, "msg_subject": "Reliability recommendations" }, { "msg_contents": "Jeremy Haile wrote:\n> We are a small company looking to put together the most cost effective\n> solution for our production database environment. Currently in\n> production Postgres 8.1 is running on this machine:\n> \n> Dell 2850\n> 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n> 4 GB DDR2 400 Mhz\n> 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n> 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n> Perc4ei controller\n> \n> ... 
I sent our scenario to our sales team at Dell and they came back with\n> all manner of SAN, DAS, and configuration costing as much as $50k.\n\nGiven what you've told us, a $50K machine is not appropriate.\n\nInstead, think about a simple system with several clones of the database and a load-balancing web server, even if one machine could handle your load. If a machine goes down, the load balancer automatically switches to the other.\n\nLook at the MTBF figures of two hypothetical machines:\n\n Machine 1: Costs $2,000, MTBF of 2 years, takes two days to fix on average.\n Machine 2: Costs $50,000, MTBF of 100 years (!), takes one hour to fix on average.\n\nNow go out and buy three of the $2,000 machines. Use a load-balancer front end web server that can send requests round-robin fashion to a \"server farm\". Clone your database. In fact, clone the load-balancer too so that all three machines have all software and databases installed. Call these A, B, and C machines.\n\nAt any given time, your Machine A is your web front end, serving requests to databases on A, B and C. If B or C goes down, no problem - the system keeps running. If A goes down, you switch the IP address of B or C and make it your web front end, and you're back in business in a few minutes.\n\nNow compare the reliability -- in order for this system to be disabled, you'd have to have ALL THREE computers fail at the same time. With the MTBF and repair time of two days, each machine has a 99.726% uptime. The \"MTBF\", that is, the expected time until all three machines will fail simultaneously, is well over 100,000 years! Of course, this is silly, machines don't last that long, but it illustrates the point: Redundancy is beats reliability (which is why RAID is so useful). \n\nAll for $6,000.\n\nCraig\n", "msg_date": "Wed, 15 Feb 2006 09:19:04 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Machine 1: $2000\nMachine 2: $2000\nMachine 3: $2000\n\nKnowing how to rig them together and maintain them in a fully fault-\ntolerant way: priceless.\n\n\n(Sorry for the off-topic post, I couldn't resist).\n\n-- Mark Lewis\n\nOn Wed, 2006-02-15 at 09:19 -0800, Craig A. James wrote:\n> Jeremy Haile wrote:\n> > We are a small company looking to put together the most cost effective\n> > solution for our production database environment. Currently in\n> > production Postgres 8.1 is running on this machine:\n> > \n> > Dell 2850\n> > 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n> > 4 GB DDR2 400 Mhz\n> > 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n> > 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n> > Perc4ei controller\n> > \n> > ... I sent our scenario to our sales team at Dell and they came back with\n> > all manner of SAN, DAS, and configuration costing as much as $50k.\n> \n> Given what you've told us, a $50K machine is not appropriate.\n> \n> Instead, think about a simple system with several clones of the database and a load-balancing web server, even if one machine could handle your load. If a machine goes down, the load balancer automatically switches to the other.\n> \n> Look at the MTBF figures of two hypothetical machines:\n> \n> Machine 1: Costs $2,000, MTBF of 2 years, takes two days to fix on average.\n> Machine 2: Costs $50,000, MTBF of 100 years (!), takes one hour to fix on average.\n> \n> Now go out and buy three of the $2,000 machines. 
Use a load-balancer front end web server that can send requests round-robin fashion to a \"server farm\". Clone your database. In fact, clone the load-balancer too so that all three machines have all software and databases installed. Call these A, B, and C machines.\n> \n> At any given time, your Machine A is your web front end, serving requests to databases on A, B and C. If B or C goes down, no problem - the system keeps running. If A goes down, you switch the IP address of B or C and make it your web front end, and you're back in business in a few minutes.\n> \n> Now compare the reliability -- in order for this system to be disabled, you'd have to have ALL THREE computers fail at the same time. With the MTBF and repair time of two days, each machine has a 99.726% uptime. The \"MTBF\", that is, the expected time until all three machines will fail simultaneously, is well over 100,000 years! Of course, this is silly, machines don't last that long, but it illustrates the point: Redundancy is beats reliability (which is why RAID is so useful). \n> \n> All for $6,000.\n> \n> Craig\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Wed, 15 Feb 2006 09:32:16 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Jeremy Haile wrote:\n> We are a small company looking to put together the most cost effective\n> solution for our production database environment. Currently in\n> production Postgres 8.1 is running on this machine:\n>\n> Dell 2850\n> 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n> 4 GB DDR2 400 Mhz\n> 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n> 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n> Perc4ei controller\n>\n> The above is a standard Dell box with nothing added or modified beyond\n> the options available directly through Dell.\nYou should probably review the archives for PostgreSQL user\nexperience with Dell's before you purchase one.\n\n> I sent our scenario to our sales team at Dell and they came back with\n> all manner of SAN, DAS, and configuration costing as much as $50k.\n> \nHAHAHAHAHA.... Don't do that. Dell is making the assumption\nyou won't do your homework. Make sure you cross quote with IBM,\nCompaq and Penguin Computing...\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Wed, 15 Feb 2006 10:05:05 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "\n\"Joshua D. Drake\" <[email protected]> writes:\n\n> Jeremy Haile wrote:\n> > We are a small company looking to put together the most cost effective\n> > solution for our production database environment. Currently in\n> > production Postgres 8.1 is running on this machine:\n> >\n> > Dell 2850\n> > 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n> > 4 GB DDR2 400 Mhz\n> > 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n> > 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n> > Perc4ei controller\n\nYou don't say how this box is performing. There's no way to give\nrecommendations in a vacuum. Some users really need $50k boxes and others\n(most) don't. 
This looks like a pretty good setup for Postgres and you would\nhave to be pushing things pretty hard to need much more.\n\nThat said some users have reported problems with Dell's raid controllers even\nwhen the same brand's regular controllers worked well. That's what Joshua is\nreferring to.\n\n> > The above is a standard Dell box with nothing added or modified beyond\n> > the options available directly through Dell.\n>\n> You should probably review the archives for PostgreSQL user\n> experience with Dell's before you purchase one.\n\n\n-- \ngreg\n\n", "msg_date": "15 Feb 2006 13:17:35 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "After takin a swig o' Arrakan spice grog, [email protected] (\"Joshua D. Drake\") belched out:\n> Jeremy Haile wrote:\n>> We are a small company looking to put together the most cost effective\n>> solution for our production database environment. Currently in\n>> production Postgres 8.1 is running on this machine:\n>>\n>> Dell 2850\n>> 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n>> 4 GB DDR2 400 Mhz\n>> 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n>> 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n>> Perc4ei controller\n>>\n>> The above is a standard Dell box with nothing added or modified beyond\n>> the options available directly through Dell.\n\n> You should probably review the archives for PostgreSQL user\n> experience with Dell's before you purchase one.\n\nHear, hear! We found Dell servers were big-time underperformers.\n\nGeneric hardware put together with generally the same brand names of\ncomponents (e.g. - for SCSI controllers and such) would generally play\nmuch better.\n\nFor the cheapo desktop boxes they obviously have to buy the \"cheapest\nhardware available this week;\" it sure seems as though they engage in\nthe same sort of thing with the \"server class\" hardware.\n\nI don't think anyone has been able to forcibly point out any\ncompletely precise shortcoming; just that they underperform what the\nspecs suggest they ought to be able to provide.\n\n>> I sent our scenario to our sales team at Dell and they came back\n>> with all manner of SAN, DAS, and configuration costing as much as\n>> $50k.\n\n> HAHAHAHAHA.... Don't do that. Dell is making the assumption you\n> won't do your homework. Make sure you cross quote with IBM, Compaq\n> and Penguin Computing...\n\nIndeed.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"gmail.com\")\nhttp://linuxdatabases.info/info/rdbms.html\nRules of the Evil Overlord #141. \"As an alternative to not having\nchildren, I will have _lots_ of children. My sons will be too busy\njockeying for position to ever be a real threat, and the daughters\nwill all sabotage each other's attempts to win the hero.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Wed, 15 Feb 2006 13:44:27 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "On Wed, 2006-02-15 at 12:44, Christopher Browne wrote:\n> After takin a swig o' Arrakan spice grog, [email protected] (\"Joshua D. Drake\") belched out:\n\n> > You should probably review the archives for PostgreSQL user\n> > experience with Dell's before you purchase one.\n> \n> Hear, hear! We found Dell servers were big-time underperformers.\n> \n> Generic hardware put together with generally the same brand names of\n> components (e.g. 
- for SCSI controllers and such) would generally play\n> much better.\n\nMy experience has been that:\n\nA: Their rebranded LSI and Adaptec RAID controllers underperform.\nB: Their BIOS updates for said cards and the mobos for the 26xx series\ncomes in a format that requires you to have a friggin bootable DOS\nfloppy. What is this, 1987???\nC: They use poorly performing mobo chipsets.\n\nWe had a dual P-III-750 with a REAL LSI RAID card and an intel mobo, and\nreplaced it with a dual P-IV 2800 Dell 2600 with twice the RAM. As a\ndatabase server the P-III-750 was easily a match for the new dell, and\nin some ways (i/o) outran it.\n\nWe also had a dual 2400 PIV Intel generic box, and it spanked the Dell\nhandily at everything, was easier to work on, the parts cost less, and\nit used bog standard RAID cards and such. I would highly recommend the\nIntel Generic hardware over Dell any day.\n", "msg_date": "Wed, 15 Feb 2006 13:11:20 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "At 11:21 AM 2/15/2006, Jeremy Haile wrote:\n>We are a small company looking to put together the most cost effective\n>solution for our production database environment. Currently in\n>production Postgres 8.1 is running on this machine:\n>\n>Dell 2850\n>2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n>4 GB DDR2 400 Mhz\n>2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n>4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n>Perc4ei controller\n>\n>The above is a standard Dell box with nothing added or modified beyond\n>the options available directly through Dell. We had a bad processor last\n>week that effectively put us down for an entire weekend. Though it was\n>the web server that failed, the experience has caused us to step back\n>and spend time coming up with a more reliable/fail-safe solution that\n>can reduce downtime.\n>\n>Our load won't be substantial so extreme performance and load balancing\n>are not huge concerns. We are looking for good performance, at a good\n>price, configured in the most redundant, high availability manner\n>possible. Availability is the biggest priority.\n>\n>I sent our scenario to our sales team at Dell and they came back with\n>all manner of SAN, DAS, and configuration costing as much as $50k.\n>\n>We have the budget to purchase 2-3 additional machines along the lines\n>of the one listed above. As a startup with a limited budget, what would\n>this list suggest as options for clustering/replication or setting our\n>database up well in general?\n\n1= Tell Dell \"Thanks but no thanks.\" and do not buy any more \nequipment from them. Their value per $$ is less than other options \navailable to you.\n\n2= The current best bang for the buck HW (and in many cases, best \nperforming as well) for pg:\n a= AMD K8 and K9 (dual core) CPUs. Particularly the A64 X2 3800+ \nwhen getting the most for your $$ matters a lot\n pg gets a nice performance boost from running in 64b.\n b= Decent Kx server boards are available from Gigabyte, IWill, \nMSI, Supermicro, and Tyan to name a few.\n IWill has a 2P 16 DIMM slot board that is particularly nice \nfor a server that needs lots of RAM.\n c= Don't bother with SCSI or FC HD's unless you are doing the most \ndemanding kind of OLTP. SATA II HD's provide better value.\n d= HW RAID controllers are only worth it in certain \nscenarios. 
Using RAID 5 almost always means you should use a HW RAID \ncontroller.\n e= The only HW RAID controllers worth the $$ for you are 3ware \nEscalade 9550SX's and Areca ARC-11xx or ARC-12xx's.\n *For the vast majority of throughput situations, the ARC-1xxx's \nwith >= 1GB of battery backed WB cache are the best value*\n f= 1GB RAM sticks are cheap enough and provide enough value that \nyou should max out any system you get with them.\n g= for +high+ speed fail over, Chelsio and others are now making \nPCI-X and PCI-E 10GbE NICs at reasonable prices.\nThe above should serve as a good \"pick list\" for the components of \nany servers you need.\n\n3= The most economically sound HW and SW architecture that best suits \nyour performance and reliability needs is context dependent to your \nspecific circumstances.\n\n\nWhere are you located?\nRon\n\n\n\n", "msg_date": "Wed, 15 Feb 2006 14:53:28 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Thanks for everyone's feedback. I will definitely take the hardware\ncomments into consideration when purchasing future hardware. I am\nlocated in Atlanta, GA. If Dell has such a bad reputation with this\nlist, does anyone have good vendor recommendations? \n\nAlthough most of the responses were hardware-oriented (which was\nprobably my fault for not clearly stating my question), I am mostly\ninterested in replication/clustering ways of solving the issue. My\nexample of Dell quoting us $50k for a SAN was meant to sound ridiculous\nand is definitely not something we are considering. \n\nWhat we are really after is a good clustering or replication solution\nwhere we can run PostgreSQL on a small set of servers and have failover\ncapabilities. While RAID is great, our last failure was a CPU failure\nso a multi-server approach is something we want. Does anyone have any\nrecommendations as far as a clustering/replication solutions, regardless\nof hardware? I know there are several open-source and commercial\npostgres replication solutions - any good or bad experiences? Also, any\nopinions on shared storage and clustering vs separate internal storage. \n\nSince performance is not our current bottleneck, I would imagine\nMaster->Slave replication would be sufficient, although performance\ngains are always welcome. I don't have much experience with setting\nPostgreSQL in a replicated or clustered manner, so anything to point me\nin the right direction both hardware and software wise would be\nappreciated!\n\nThanks for all of the responses!\n\nOn Wed, 15 Feb 2006 14:53:28 -0500, \"Ron\" <[email protected]> said:\n> At 11:21 AM 2/15/2006, Jeremy Haile wrote:\n> >We are a small company looking to put together the most cost effective\n> >solution for our production database environment. Currently in\n> >production Postgres 8.1 is running on this machine:\n> >\n> >Dell 2850\n> >2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n> >4 GB DDR2 400 Mhz\n> >2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n> >4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n> >Perc4ei controller\n> >\n> >The above is a standard Dell box with nothing added or modified beyond\n> >the options available directly through Dell. We had a bad processor last\n> >week that effectively put us down for an entire weekend. 
Though it was\n> >the web server that failed, the experience has caused us to step back\n> >and spend time coming up with a more reliable/fail-safe solution that\n> >can reduce downtime.\n> >\n> >Our load won't be substantial so extreme performance and load balancing\n> >are not huge concerns. We are looking for good performance, at a good\n> >price, configured in the most redundant, high availability manner\n> >possible. Availability is the biggest priority.\n> >\n> >I sent our scenario to our sales team at Dell and they came back with\n> >all manner of SAN, DAS, and configuration costing as much as $50k.\n> >\n> >We have the budget to purchase 2-3 additional machines along the lines\n> >of the one listed above. As a startup with a limited budget, what would\n> >this list suggest as options for clustering/replication or setting our\n> >database up well in general?\n> \n> 1= Tell Dell \"Thanks but no thanks.\" and do not buy any more \n> equipment from them. Their value per $$ is less than other options \n> available to you.\n> \n> 2= The current best bang for the buck HW (and in many cases, best \n> performing as well) for pg:\n> a= AMD K8 and K9 (dual core) CPUs. Particularly the A64 X2 3800+ \n> when getting the most for your $$ matters a lot\n> pg gets a nice performance boost from running in 64b.\n> b= Decent Kx server boards are available from Gigabyte, IWill, \n> MSI, Supermicro, and Tyan to name a few.\n> IWill has a 2P 16 DIMM slot board that is particularly nice \n> for a server that needs lots of RAM.\n> c= Don't bother with SCSI or FC HD's unless you are doing the most \n> demanding kind of OLTP. SATA II HD's provide better value.\n> d= HW RAID controllers are only worth it in certain \n> scenarios. Using RAID 5 almost always means you should use a HW RAID \n> controller.\n> e= The only HW RAID controllers worth the $$ for you are 3ware \n> Escalade 9550SX's and Areca ARC-11xx or ARC-12xx's.\n> *For the vast majority of throughput situations, the ARC-1xxx's \n> with >= 1GB of battery backed WB cache are the best value*\n> f= 1GB RAM sticks are cheap enough and provide enough value that \n> you should max out any system you get with them.\n> g= for +high+ speed fail over, Chelsio and others are now making \n> PCI-X and PCI-E 10GbE NICs at reasonable prices.\n> The above should serve as a good \"pick list\" for the components of \n> any servers you need.\n> \n> 3= The most economically sound HW and SW architecture that best suits \n> your performance and reliability needs is context dependent to your \n> specific circumstances.\n> \n> \n> Where are you located?\n> Ron\n> \n> \n> \n", "msg_date": "Wed, 15 Feb 2006 15:57:50 -0500", "msg_from": "\"Jeremy Haile\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Jeremy Haile wrote:\n> Thanks for everyone's feedback. I will definitely take the hardware\n> comments into consideration when purchasing future hardware. I am\n> located in Atlanta, GA. If Dell has such a bad reputation with this\n> list, does anyone have good vendor recommendations? \n> \nI can recommend Penguin Computing (even our windows weenies like them),\nASA, and HP Proliant. AMD Opteron *is* the way to go.\n\n\n", "msg_date": "Wed, 15 Feb 2006 16:20:23 -0500", "msg_from": "Josh Rovero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Christopher Browne wrote:\n> After takin a swig o' Arrakan spice grog, [email protected] (\"Joshua D. 
Drake\") belched out:\n> > Jeremy Haile wrote:\n> >> We are a small company looking to put together the most cost effective\n> >> solution for our production database environment. Currently in\n> >> production Postgres 8.1 is running on this machine:\n> >>\n> >> Dell 2850\n> >> 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n> >> 4 GB DDR2 400 Mhz\n> >> 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n> >> 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n> >> Perc4ei controller\n> >>\n> >> The above is a standard Dell box with nothing added or modified beyond\n> >> the options available directly through Dell.\n> \n> > You should probably review the archives for PostgreSQL user\n> > experience with Dell's before you purchase one.\n> \n> Hear, hear! We found Dell servers were big-time underperformers.\n> \n> Generic hardware put together with generally the same brand names of\n> components (e.g. - for SCSI controllers and such) would generally play\n> much better.\n> \n> For the cheapo desktop boxes they obviously have to buy the \"cheapest\n> hardware available this week;\" it sure seems as though they engage in\n> the same sort of thing with the \"server class\" hardware.\n> \n> I don't think anyone has been able to forcibly point out any\n> completely precise shortcoming; just that they underperform what the\n> specs suggest they ought to be able to provide.\n\nDell often says part X is included, but part X is not the exact same as\npart X sold by the original manufacturer. To hit a specific price\npoint, Dell is willing to strip thing out of commodity hardware, and\noften does so even when performance suffers. For many people, this is\nunacceptable.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n SRA OSS, Inc. http://www.sraoss.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 24 Feb 2006 09:29:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Bruce Momjian wrote:\n> Dell often says part X is included, but part X is not the exact same as\n> part X sold by the original manufacturer. To hit a specific price\n> point, Dell is willing to strip thing out of commodity hardware, and\n> often does so even when performance suffers. For many people, this is\n> unacceptable.\n\nI find this strains credibility, that this major manufacturer of PC's would do something deceptive that hurts performance, when it would be easily detected and widely reported. Can anyone cite a specific instances where this has happened? Such as, \"I bought Dell model XYZ, which was advertised to have these parts and these specs, but in fact had these other parts and here are the actual specs.\"\n\nDell seems to take quite a beating in this forum, and I don't recall seeing any other manufacturer blasted this way. Is it that they are deceptive, or simply that their \"servers\" are designed to be office servers, not database servers?\n\nThere's nothing wrong with Dell designing their servers for a different market than ours; they need to go for the profits, and that may not include us. But it's not fair for us to claim Dell is being deceptive unless we have concrete evidence.\n\nCraig\n\n", "msg_date": "Fri, 24 Feb 2006 07:04:53 -0800", "msg_from": "\"Craig A. 
James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Bruce,\n\nOn 2/24/06 6:29 AM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> Christopher Browne wrote:\n>> After takin a swig o' Arrakan spice grog, [email protected] (\"Joshua D.\n>> Drake\") belched out:\n\nAlways more fun to read drunken posts :-)\n\n>>>> Dell 2850\n>>>> 2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache\n>>>> 4 GB DDR2 400 Mhz\n>>>> 2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)\n>>>> 4 x 146 GB 10K SCSI RAID 10 (for postgres data)\n>>>> Perc4ei controller\n\n> Dell often says part X is included, but part X is not the exact same as\n> part X sold by the original manufacturer. To hit a specific price\n> point, Dell is willing to strip thing out of commodity hardware, and\n> often does so even when performance suffers. For many people, this is\n> unacceptable.\n\nI'll register the contrarian's point of view on this: we just had a customer\n(an airline) buy the exact machines that Jeremy lists here, including the\nPerc4 controller (an LSI MPT RAID controller).\n\nBesides the complete lack of real RAID 10 support, the machines are actually\nperforming very well in hardware RAID5 mode. They are running Redhat 4\nlinux, and we did have to tune the I/O readahead to get the performance we\nneeded - about 250MB/s on 2 banks of RAID5, not bad at all for only 4 active\ndisks.\n\nI think the build quality and the fit/finish/features of their chassis were\nall pretty good.\n\nSo, if you want RAID5, these machines work for me. The lack of RAID 10\ncould knock them out of contention for people.\n\nThe problem with their RAID10 is actually hidden from view BTW. You can\nconfigure the controller with a \"spanning\" and other options that sound like\nRAID10, but in actuality it's spanning pairs of RAID1 disks, which does not\nprovide the performance benefit of RAID10 (see James Thornton's post here:\nhttp://openacs.org/forums/message-view?message_id=178447), the important bit\nof which says:\n\n\"RAID-10 on PERC 2/SC, 2/DC, 3/SC, 3/DCL, 3/DC, 3/QC, 4/Di, and CERC\nATA100/4ch controllers is implemented as RAID Level 1-Concatenated. RAID-1\nConcatenated is a RAID-1 array that spans across more than a single pair of\narray disks. This combines the advantages of concatenation with the\nredundancy of RAID-1. No striping is involved in this RAID type. Also,\nRAID-1 Concatenated can be implemented on hardware that supports only RAID-1\nby creating multiple RAID-1 virtual disks, upgrading the virtual disks to\ndynamic disks, and then using spanning to concatenate all of the RAID-1\nvirtual disks into one large dynamic volume. In a concatenation (spanned\nvolume), when an array disk in a concatenated or spanned volume fails, the\nentire volume becomes unavailable.\"\n\n- Luke\n\n\n", "msg_date": "Fri, 24 Feb 2006 07:14:20 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Bruce,\n\nOn 2/24/06 7:14 AM, \"Luke Lonergan\" <[email protected]> wrote:\n\n> So, if you want RAID5, these machines work for me. 
The lack of RAID 10\n> could knock them out of contention for people.\n\nSorry in advance for the double post, but there's some more information on\nthis, which altogether demonstrates why people get frustrated with Dell IMO.\n\nLater on in the article I cited, the same person who listed Dell's technical\ndescription of their strange, not really RAID10 support, he mentions that he\ncalled LSI directly to find out the real scoop. He says that they (LSI)\ntold him that the LSI controller does support real RAID10, so he concluded\nthat the Dell could do it too.\n\nHowever, Dell's documentation seems unambiguous to me, and matches our\ndirect experience. Also, more online documentation from Dell reinforces\nthis. See their definition of both RAID10 and spanning in this\nbackgrounder: \nhttp://support.dell.com/support/edocs/storage/RAID/RAIDbk0.pdf. Seems that\nthey are clear about what striping and spanning do, and what RAID10 does,\nmaking their delineation of the standard RAID10 support versus what some of\ntheir controllers do (all of the PERC 4 series) pretty clear.\n\nSo, I'd conclude at this point that Dell seems to have implemented a RAID\nBIOS that does not allow true RAID10 or RAID50 on their embedded LSI\nadapters, which could otherwise support RAID10/50. They have done this\nintentionally, for some reason unknown to me and it seems to some people at\nLSI as well.\n\n- Luke\n\n\n", "msg_date": "Fri, 24 Feb 2006 07:41:12 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "On Feb 24, 2006, at 9:29 AM, Bruce Momjian wrote:\n\n> Dell often says part X is included, but part X is not the exact \n> same as\n> part X sold by the original manufacturer. To hit a specific price\n> point, Dell is willing to strip thing out of commodity hardware, and\n> often does so even when performance suffers. For many people, this is\n> unacceptable.\n\nThe last dell box I bought, a PE1850, came with a PERC 4e/Si card, \nwhich I believe is the same as the card the OP was looking at. It is \nvery fast in RAID1 with two U320 disks.\n\nFor real DB work, I'd look more to a dual channel card and have 1/2 \nof each mirror pair on opposing channels. Dell can configure that \nfor you, I'm sure.\n\nI think the well tossed-around notion of Dells being underperforming \nneeds to be re-evaluated with the EM64T Xeon based systems. They are \nquite fast. I haven't put a very large db with extreme loads on any \nof these systems, but the simple benchmarking I did on them shows \nthem to be acceptable performers. The high-end RAID cards they sell \nthese days do not seem to me to be skimpy.\n\nNow if they'd only get on the Opteron bandwagon....", "msg_date": "Fri, 24 Feb 2006 11:27:37 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "On Fri, 2006-02-24 at 10:27, Vivek Khera wrote:\n> On Feb 24, 2006, at 9:29 AM, Bruce Momjian wrote:\n> \n> > Dell often says part X is included, but part X is not the exact \n> > same as\n> > part X sold by the original manufacturer. To hit a specific price\n> > point, Dell is willing to strip thing out of commodity hardware, and\n> > often does so even when performance suffers. For many people, this is\n> > unacceptable.\n> \n> The last dell box I bought, a PE1850, came with a PERC 4e/Si card, \n> which I believe is the same as the card the OP was looking at. 
It is \n> very fast in RAID1 with two U320 disks.\n\nMy bad experiences were with the 2600 series machines. We now have some\n2800 and they're much better than the 2600/2650s I've used in the past.\n\nThat said, I've not tried to do anything too high end with them, like\nRAID 1+0 or anything.\n\nMy past experience has been that Intel and SuperMicro make much better\n\"white boxes\" you can get from almost any halfway decent local\nsupplier. They're faster, more reliable, and the parts are easy to get.\n", "msg_date": "Fri, 24 Feb 2006 10:32:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "On Feb 24, 2006, at 11:32 AM, Scott Marlowe wrote:\n\n> My bad experiences were with the 2600 series machines. We now have \n> some\n> 2800 and they're much better than the 2600/2650s I've used in the \n> past.\n\nYes, the 2450 and 2650 were CRAP disk performers. I haven't any 2850 \nto compare, just an 1850.", "msg_date": "Fri, 24 Feb 2006 11:40:25 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "On Fri, 2006-02-24 at 10:40, Vivek Khera wrote:\n> On Feb 24, 2006, at 11:32 AM, Scott Marlowe wrote:\n> \n> > My bad experiences were with the 2600 series machines. We now have \n> > some\n> > 2800 and they're much better than the 2600/2650s I've used in the \n> > past.\n> \n> Yes, the 2450 and 2650 were CRAP disk performers. I haven't any 2850 \n> to compare, just an 1850.\n> \nAnd the real problem with the 2650s we have now is that under very heavy\nload, they just lock up. all of them, latest BIOS updates, etc... \nnothing makes them stable. Some things make them a little less dodgy,\nbut they are never truly reliable. Something the Intel and Supermicro\nwhite boxes have always been for me.\n\nI don't want support from a company like Dell, I want a reliable\nmachine.\n", "msg_date": "Fri, 24 Feb 2006 10:51:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "\n> I find this strains credibility, that this major manufacturer of PC's \n> would do something deceptive that hurts performance, when it would be \n> easily detected and widely reported. Can anyone cite a specific \n> instances where this has happened? 
Such as, \"I bought Dell model XYZ, \n> which was advertised to have these parts and these specs, but in fact \n> had these other parts and here are the actual specs.\"\nI can :)\n\nFeb 20 07:33:52 master kernel: [4294682.803000] Vendor: MegaRAID \nModel: LD 0 RAID1 51G Rev: 196T\nFeb 20 07:33:52 master kernel: [4294682.803000] Type: \nDirect-Access ANSI SCSI revision: 02\nFeb 20 07:33:52 master kernel: [4294682.818000] tg3.c:v3.31 (June 8, 2005)\nFeb 20 07:33:52 master kernel: [4294682.818000] ACPI: PCI Interrupt \n0000:0a:01.0[A] -> GSI 17 (level, low) -> IRQ 17\nFeb 20 07:33:52 master kernel: [4294683.510000] eth0: Tigon3 \n[partno(BCM95700A6) rev 7104 PHY(5411)] (PCI:66MHz:64-bit) \n10/100/1000BaseT Ethernet 00:0f:1f:6e:01:f\nFeb 20 07:33:52 master kernel: [4294683.510000] eth0: RXcsums[1] \nLinkChgREG[1] MIirq[1] ASF[0] Split[0] WireSpeed[0] TSOcap[0]\nFeb 20 07:33:52 master kernel: [4294683.510000] eth0: dma_rwctrl[76ff000f]\nFeb 20 07:33:52 master kernel: [4294683.510000] ACPI: PCI Interrupt \n0000:0a:02.0[A] -> GSI 18 (level, low) -> IRQ 18\nFeb 20 07:33:52 master kernel: [4294684.203000] eth1: Tigon3 \n[partno(BCM95700A6) rev 7104 PHY(5411)] (PCI:66MHz:64-bit) \n10/100/1000BaseT Ethernet 00:0f:1f:6e:01:f\nFeb 20 07:33:52 master kernel: [4294684.203000] eth1: RXcsums[1] \nLinkChgREG[1] MIirq[1] ASF[0] Split[0] WireSpeed[0] TSOcap[0]\nFeb 20 07:33:52 master kernel: [4294684.203000] eth1: dma_rwctrl[76ff000f]\nFeb 20 07:33:52 master kernel: [4294686.228000] SCSI device sda: \n106168320 512-byte hdwr sectors (54358 MB)\nFeb 20 07:33:52 master kernel: [4294686.228000] SCSI device sda: \n106168320 512-byte hdwr sectors (54358 MB)\nFeb 20 07:33:52 master kernel: [4294686.228000] \n/dev/scsi/host0/bus2/target0/lun0: p1 p2 < p5 > p3\nFeb 20 07:33:52 master kernel: [4294686.243000] Attached scsi disk sda \nat scsi0, channel 2, id 0, lun 0\nFeb 20 07:33:52 master kernel: [4294686.578000] ACPI: CPU0 (power \nstates: C1[C1])\nFeb 20 07:33:52 master kernel: [4294686.655000] Attempting manual resume\nFeb 20 07:33:52 master kernel: [4294686.659000] swsusp: Suspend \npartition has wrong signature?\nFeb 20 07:33:52 master kernel: [4294686.671000] kjournald starting. \nCommit interval 5 seconds\nFeb 20 07:33:52 master kernel: [4294686.671000] EXT3-fs: mounted \nfilesystem with ordered data mode.\nFeb 20 07:33:52 master kernel: [4294687.327000] md: md driver 0.90.1 \nMAX_MD_DEVS=256, MD_SB_DISKS=27\nFeb 20 07:33:52 master kernel: [4294688.633000] Adding 3903784k swap on \n/dev/sda5. Priority:-1 extents:1\nFeb 20 07:33:52 master kernel: [4294688.705000] EXT3 FS on sda3, \ninternal journal\nFeb 20 07:33:52 master kernel: [4294692.533000] lp: driver loaded but no \ndevices found\nFeb 20 07:33:52 master kernel: [4294692.557000] mice: PS/2 mouse device \ncommon for all mice\nFeb 20 07:33:52 master kernel: [4294695.263000] device-mapper: \n4.4.0-ioctl (2005-01-12) initialised: [email protected]\nFeb 20 07:33:52 master kernel: [4294695.479000] kjournald starting. 
\nCommit interval 5 seconds\nFeb 20 07:33:52 master kernel: [4294695.479000] EXT3 FS on sda1, \ninternal journal\nFeb 20 07:33:52 master kernel: [4294695.479000] EXT3-fs: mounted \nfilesystem with ordered data mode.\nFeb 20 07:33:52 master kernel: [4294695.805000] Linux agpgart interface \nv0.101 (c) Dave Jones\nFeb 20 07:33:52 master kernel: [4294696.071000] piix4_smbus \n0000:00:0f.0: Found 0000:00:0f.0 device\nFeb 20 07:33:52 master kernel: [4294696.584000] pci_hotplug: PCI Hot \nPlug PCI Core version: 0.5\n\n\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) MP CPU 2.20GHz\nstepping : 6\ncpu MHz : 2194.056\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge \nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe cid xtpr\nbogomips : 4325.37\n\n\n\nThis machine... if you run it in raid 5 will only get 7-9 megabytes a \nsecond READ! performance. That is with 6 SCSI drives.\nIf you run it in RAID 10 you get a more reasonable 50-55 megabytes per \nsecond.\n\nI don't have it sitting in front of me or I would give you an exact \nmodel number.\n\nThis machine also uses the serverworks chipset which is known to be a \ncatastrophe.\n\nJoshua D. Drake\n\n\n\n\n>\n> Dell seems to take quite a beating in this forum, and I don't recall \n> seeing any other manufacturer blasted this way. Is it that they are \n> deceptive, or simply that their \"servers\" are designed to be office \n> servers, not database servers?\n>\n> There's nothing wrong with Dell designing their servers for a \n> different market than ours; they need to go for the profits, and that \n> may not include us. But it's not fair for us to claim Dell is being \n> deceptive unless we have concrete evidence.\n>\n> Craig\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Fri, 24 Feb 2006 09:19:57 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Joshua D. Drake wrote:\n>> I find this strains credibility, that this major manufacturer of PC's \n>> would do something deceptive that hurts performance, when it would be \n>> easily detected and widely reported. Can anyone cite a specific \n>> instances where this has happened? Such as, \"I bought Dell model XYZ, \n>> which was advertised to have these parts and these specs, but in fact \n>> had these other parts and here are the actual specs.\"\n> \n> I can :)\n> \n> Feb 20 07:33:52 master kernel: [4294682.803000] Vendor: MegaRAID \n> Model: LD 0 RAID1 51G Rev: 196T\n> --- snip ---\n> This machine... if you run it in raid 5 will only get 7-9 megabytes a \n> second READ! performance. That is with 6 SCSI drives.\n> If you run it in RAID 10 you get a more reasonable 50-55 megabytes per \n> second.\n\nBut you don't say how this machine was advertised. Are there components in that list that were not as advertised? Was the machine advertised as capable of RAID 5? 
Were performance figures published for RAID 5?\n\nIf Dell advertised that the machine could do what you asked, then you're right -- they screwed you. But if it was designed for and advertised to a different market, then I've made my point: People are blaming Dell for something that's not their fault.\n\nCraig\n", "msg_date": "Fri, 24 Feb 2006 15:12:43 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "On Fri, 2006-02-24 at 17:12, Craig A. James wrote:\n> Joshua D. Drake wrote:\n> >> I find this strains credibility, that this major manufacturer of PC's \n> >> would do something deceptive that hurts performance, when it would be \n> >> easily detected and widely reported. Can anyone cite a specific \n> >> instances where this has happened? Such as, \"I bought Dell model XYZ, \n> >> which was advertised to have these parts and these specs, but in fact \n> >> had these other parts and here are the actual specs.\"\n> > \n> > I can :)\n> > \n> > Feb 20 07:33:52 master kernel: [4294682.803000] Vendor: MegaRAID \n> > Model: LD 0 RAID1 51G Rev: 196T\n> > --- snip ---\n> > This machine... if you run it in raid 5 will only get 7-9 megabytes a \n> > second READ! performance. That is with 6 SCSI drives.\n> > If you run it in RAID 10 you get a more reasonable 50-55 megabytes per \n> > second.\n> \n> But you don't say how this machine was advertised. Are there components in that list that were not as advertised? Was the machine advertised as capable of RAID 5? Were performance figures published for RAID 5?\n> \n> If Dell advertised that the machine could do what you asked, then you're right -- they screwed you. But if it was designed for and advertised to a different market, then I've made my point: People are blaming Dell for something that's not their fault.\n\nIT was advertised as a rackmount server with dual processors and a RAID\ncontroller with 6 drive bays. I would expect such a machine to perform\nwell in both RAID 5 and RAID 1+0 configurations.\n\nIt certainly didn't do what we expected of a machine with the specs it\nhad. For the same price, form factor and basic setup, i.e. dual P-IV 2\nto 4 gig ram, 5 or 6 drive bays, I'd expect the same thing. They were\ncrap. Honestly. Did you see the post where I mentioned that under\nheavy I/O load they lock up about once every month or so. They all do,\nevery one I've ever seen. Some take more time than others, but they all\neventually lock up while running.\n\nI was pretty much agnostic as to which servers we bought at my last job,\nuntil someone started ordering from Dell and we got 2600 series\nmachines. No one should have to live with these things.\n", "msg_date": "Fri, 24 Feb 2006 17:59:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Joshua,\n\nOn 2/24/06 9:19 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n\n> This machine... if you run it in raid 5 will only get 7-9 megabytes a\n> second READ! performance. That is with 6 SCSI drives.\n> If you run it in RAID 10 you get a more reasonable 50-55 megabytes per\n> second.\n> \n> I don't have it sitting in front of me or I would give you an exact\n> model number.\n> \n> This machine also uses the serverworks chipset which is known to be a\n> catastrophe.\n\nI'd be more shocked if this weren't also true of nearly all SCSI HW RAID\nadapters of this era. 
If you had ordered an HP DL380 server you'd get about\nthe same performance.\n\nBTW - I don't think there's anything reasonable about 50-55 MB/s from 6\ndisks, I'd put the minimum for this era machine at 5 x 30 = 150MB/s.\n\n- Luke\n\n\n", "msg_date": "Fri, 24 Feb 2006 16:06:57 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Do you have a hw reference that runs that fast (5 x 30 = 150MB/s) ?\n\n\nLuke Lonergan a écrit :\n> Joshua,\n>\n> On 2/24/06 9:19 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n>\n> \n>> This machine... if you run it in raid 5 will only get 7-9 megabytes a\n>> second READ! performance. That is with 6 SCSI drives.\n>> If you run it in RAID 10 you get a more reasonable 50-55 megabytes per\n>> second.\n>>\n>> I don't have it sitting in front of me or I would give you an exact\n>> model number.\n>>\n>> This machine also uses the serverworks chipset which is known to be a\n>> catastrophe.\n>> \n>\n> I'd be more shocked if this weren't also true of nearly all SCSI HW RAID\n> adapters of this era. If you had ordered an HP DL380 server you'd get about\n> the same performance.\n>\n> BTW - I don't think there's anything reasonable about 50-55 MB/s from 6\n> disks, I'd put the minimum for this era machine at 5 x 30 = 150MB/s.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n> ", "msg_date": "Sat, 25 Feb 2006 01:18:52 +0100", "msg_from": "Philippe Marzin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Luke Lonergan wrote:\n\n> \n> I'd be more shocked if this weren't also true of nearly all SCSI HW RAID\n> adapters of this era. If you had ordered an HP DL380 server you'd get about\n> the same performance.\n> \n> BTW - I don't think there's anything reasonable about 50-55 MB/s from 6\n> disks, I'd put the minimum for this era machine at 5 x 30 = 150MB/s.\n> \n\nHe was quoting for 6 disk RAID 10 - I'm thinking 3 x 30MB/s = 90MB/s is \nprobably more correct? 
Having aid that, your point is still completely \ncorrect - the performance @55MB/s is poor (e.g. my *ata* system with 2 \ndisk RAID0 does reads @110MB/s).\n\ncheers\n\nMark\n", "msg_date": "Sat, 25 Feb 2006 13:29:55 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "All,\n\nWas that sequential reads? If so, yeah you'll get 110MB/s? How big \nwas the datafile size? 8MB? Yeah, you'll get 110MB/s. 2GB? No, they \ncan't sustain that. There are so many details missing from this test \nthat it's hard to have any context around it :)\n\nI was getting about 40-50MB/s on a PV with 14 disks on a RAID10 in \nreal world usage. (random IO and fully saturating a Dell 1850 with 4 \nconcurrent threads (to peg the cpu on selects) and raw data files)\n\nBest Regards,\nDan Gorman\n\nOn Feb 24, 2006, at 4:29 PM, Mark Kirkwood wrote:\n\n> Luke Lonergan wrote:\n>\n>> I'd be more shocked if this weren't also true of nearly all SCSI \n>> HW RAID\n>> adapters of this era. If you had ordered an HP DL380 server you'd \n>> get about\n>> the same performance.\n>> BTW - I don't think there's anything reasonable about 50-55 MB/s \n>> from 6\n>> disks, I'd put the minimum for this era machine at 5 x 30 = 150MB/s.\n>\n> He was quoting for 6 disk RAID 10 - I'm thinking 3 x 30MB/s = 90MB/ \n> s is probably more correct? Having aid that, your point is still \n> completely correct - the performance @55MB/s is poor (e.g. my *ata* \n> system with 2 disk RAID0 does reads @110MB/s).\n>\n> cheers\n>\n> Mark\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Fri, 24 Feb 2006 16:47:23 -0800", "msg_from": "Dan Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Dan Gorman wrote:\n> All,\n> \n> Was that sequential reads? If so, yeah you'll get 110MB/s? How big was \n> the datafile size? 8MB? Yeah, you'll get 110MB/s. 2GB? No, they can't \n> sustain that. There are so many details missing from this test that \n> it's hard to have any context around it :)\n>\n\nActually they can. Datafile size was 8G, machine had 2G RAM (i.e. \ndatafile 4 times memory). The test was for a sequential read with 8K \nblocks. I believe this is precisely the type of test that the previous \nposters were referring to - while clearly, its not a real-world measure, \nwe are comparing like to like, and as such terrible results on such a \nsimple test are indicative of something 'not right'.\n\nregards\n\nMark\n\nP.s. FWIW - I'm quoting a test from a few years ago - the (same) machine \nnow has 4 RAID0 ata disks and does 175MB/s on the same test....\n\n\n\n", "msg_date": "Sat, 25 Feb 2006 14:18:31 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Dan,\n\nOn 2/24/06 4:47 PM, \"Dan Gorman\" <[email protected]> wrote:\n\n> Was that sequential reads? If so, yeah you'll get 110MB/s? How big\n> was the datafile size? 8MB? Yeah, you'll get 110MB/s. 2GB? No, they\n> can't sustain that. There are so many details missing from this test\n> that it's hard to have any context around it :)\n> \n> I was getting about 40-50MB/s on a PV with 14 disks on a RAID10 in\n> real world usage. 
(random IO and fully saturating a Dell 1850 with 4\n> concurrent threads (to peg the cpu on selects) and raw data files)\n\nOK, how about some proof?\n\nIn a synthetic test that writes 32GB of sequential 8k pages on a machine\nwith 16GB of RAM:\n========================= Write test results ==============================\ntime bash -c \"dd if=/dev/zero of=/dbfast1/llonergan/bigfile bs=8k\ncount=2000000 && sync\" &\ntime bash -c \"dd if=/dev/zero of=/dbfast3/llonergan/bigfile bs=8k\ncount=2000000 && sync\" &\n\n2000000+0 records in\n2000000+0 records out\n2000000+0 records in\n2000000+0 records out\n\nreal 1m0.046s\nuser 0m0.270s\nsys 0m30.008s\n\nreal 1m0.047s\nuser 0m0.287s\nsys 0m30.675s\n\nSo that's 32,000 MB written in 60.05 seconds, which is 533MB/s sustained\nwith two threads.\n\nNow to read the same files in parallel:\n========================= Read test results ==============================\nsync\ntime dd of=/dev/null if=/dbfast1/llonergan/bigfile bs=8k &\ntime dd of=/dev/null if=/dbfast3/llonergan/bigfile bs=8k &\n\n2000000+0 records in\n2000000+0 records out\n\nreal 0m39.849s\nuser 0m0.282s\nsys 0m22.294s\n2000000+0 records in\n2000000+0 records out\n\nreal 0m40.410s\nuser 0m0.251s\nsys 0m22.515s\n\nAnd that's 32,000MB in 40.4 seconds, or 792MB/s sustained from disk (not\nmemory).\n\nThese are each RAID5 arrays of 8 internal SATA disks on 3Ware HW RAID\ncontrollers.\n\nNow for real usage, let's run a simple sequential scan query on 123,434 MB\nof data in a single table on 4 of these machines in parallel. All tables\nare distributed evenly by Bizgres MPP over all 8 filesystems:\n\n============= Bizgres MPP sequential scan results =========================\n\n[llonergan@salerno]$ !psql\npsql -p 9999 -U mppdemo1 demo\nWelcome to psql 8.1.1 (server 8.1.3), the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? 
for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ndemo=# \\timing\nTiming is on.\ndemo=# select version();\n \nversion \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-----\n PostgreSQL 8.1.3 (Bizgres MPP 2.1) on x86_64-unknown-linux-gnu, compiled by\nGCC gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) compiled on Feb 23 2006\n11:34:06\n(1 row)\n\nTime: 0.570 ms\ndemo=# select relname,8*relpages/128 as MB from pg_class order by relpages\ndesc limit 6;\n relname | mb\n--------------------------------+-------\n lineitem | 123434\n orders | 24907\n partsupp | 14785\n part | 3997\n customer | 3293\n supplier | 202\n(6 rows)\n\nTime: 1.824 ms\ndemo=# select count(*) from lineitem;\n count \n-----------\n 600037902\n(1 row)\n\nTime: 60300.960 ms\n\nSo that's 123,434 MB of data scanned in 60.3 seconds, or 2,047 MB/s on 4\nmachines, which uses 512MB/s of disk bandwidth on each machine.\n\nNow let's do a query that uses this big table (a two way join) using all 4\nmachines:\n============= Bizgres MPP Query results =========================\ndemo=# select\ndemo-# sum(l_extendedprice* (1 - l_discount)) as revenue\ndemo-# from \ndemo-# lineitem,\ndemo-# part\ndemo-# where \ndemo-# (\ndemo(# p_partkey = l_partkey\ndemo(# and p_brand = 'Brand#42'\ndemo(# and p_container in ('SM CASE', 'SM BOX', 'SM PACK',\n'SM PKG')\ndemo(# and l_quantity >= 7 and l_quantity <= 7 + 10\ndemo(# and p_size between 1 and 5\ndemo(# and l_shipmode in ('AIR', 'AIR REG')\ndemo(# and l_shipinstruct = 'DELIVER IN PERSON'\ndemo(# ) \ndemo-# or \ndemo-# ( \ndemo(# p_partkey = l_partkey\ndemo(# and p_brand = 'Brand#15'\ndemo(# and p_container in ('MED BAG', 'MED BOX', 'MED PKG',\n'MED PACK')\ndemo(# and l_quantity >= 14 and l_quantity <= 14 + 10\ndemo(# and p_size between 1 and 10\ndemo(# and l_shipmode in ('AIR', 'AIR REG')\ndemo(# and l_shipinstruct = 'DELIVER IN PERSON'\ndemo(# )\ndemo-# or\ndemo-# (\ndemo(# p_partkey = l_partkey\ndemo(# and p_brand = 'Brand#53'\ndemo(# and p_container in ('LG CASE', 'LG BOX', 'LG PACK',\n'LG PKG')\ndemo(# and l_quantity >= 22 and l_quantity <= 22 + 10\ndemo(# and p_size between 1 and 15\ndemo(# and l_shipmode in ('AIR', 'AIR REG')\ndemo(# and l_shipinstruct = 'DELIVER IN PERSON'\ndemo(# );\n revenue \n----------------\n 356492404.3164\n(1 row)\n\nTime: 114908.149 ms\n\n\nAnd now a 6-way join among 4 tables in this same schema:\n\ndemo=# SELECT\ndemo-#\ns.s_acctbal,s.s_name,n.n_name,p.p_partkey,p.p_mfgr,s.s_address,s.s_phone,s.s\n_comment\ndemo-# FROM \ndemo-# supplier s,partsupp ps,nation n,region r,\ndemo-# part p, (\ndemo(# SELECT p_partkey, min(ps_supplycost) as\nmin_ps_cost from part, partsupp ,\ndemo(# supplier,nation, region\ndemo(# WHERE\ndemo(# p_partkey=ps_partkey\ndemo(# and s_suppkey = ps_suppkey\ndemo(# and s_nationkey = n_nationkey\ndemo(# and n_regionkey = r_regionkey\ndemo(# and r_name = 'EUROPE'\ndemo(# GROUP BY\ndemo(# p_partkey\ndemo(# ) g\ndemo-# WHERE \ndemo-# p.p_partkey = ps.ps_partkey\ndemo-# and g.p_partkey = p.p_partkey\ndemo-# and g. min_ps_cost = ps.ps_supplycost\ndemo-# and s.s_suppkey = ps.ps_suppkey\ndemo-# and p.p_size = 15\ndemo-# and p.p_type like '%BRASS'\ndemo-# and s.s_nationkey = n.n_nationkey\ndemo-# and n.n_regionkey = r.r_regionkey\ndemo-# and r.r_name = 'EUROPE'\ndemo-# ORDER BY\ndemo-# s. 
s_acctbal desc,n.n_name,s.s_name,p.p_partkey\ndemo-# LIMIT 100;\n s_acctbal | s_name | n_name |\np_partkey | p_mfgr |\n s_address | s_phone |\ns_comment \n \n-----------+--------------------------+--------------------------+-------\n---+--------------------------+-----\n------------------------------------+----------------+--------------------\n--------------------------------------\n-------------------------------------------\n 9999.70 | Supplier#000239544 | UNITED KINGDOM |\n6739531 | Manufacturer#4 | 1UCMu\n3TLyUThghoeZ8arg6cV3Mr | 33-509-584-9496 | carefully ironic\nasymptotes cajole quickly. slyly silent a\nccounts sleep. fl\n...\n...\n 9975.53 | Supplier#000310136 | ROMANIA |\n10810115 | Manufacturer#5 | VNWON\nA5Sr B | 29-977-903-6199 | pending deposits\nwake permanently; final accounts sleep ab\nout the pending deposits.\n(100 rows)\n\nTime: 424981.813 ms\n\n\n- Luke\n\n\n", "msg_date": "Fri, 24 Feb 2006 20:41:52 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Luke Lonergan wrote:\n\n> \n> OK, how about some proof?\n> \n> In a synthetic test that writes 32GB of sequential 8k pages on a machine\n> with 16GB of RAM:\n> ========================= Write test results ==============================\n> time bash -c \"dd if=/dev/zero of=/dbfast1/llonergan/bigfile bs=8k\n> count=2000000 && sync\" &\n> time bash -c \"dd if=/dev/zero of=/dbfast3/llonergan/bigfile bs=8k\n> count=2000000 && sync\" &\n> \n> 2000000 records in\n> 2000000 records out\n> 2000000 records in\n> 2000000 records out\n> \n> real 1m0.046s\n> user 0m0.270s\n> sys 0m30.008s\n> \n> real 1m0.047s\n> user 0m0.287s\n> sys 0m30.675s\n> \n> So that's 32,000 MB written in 60.05 seconds, which is 533MB/s sustained\n> with two threads.\n> \n\nWell, since this is always fun (2G memory, 3Ware 7506, 4xPATA), writing:\n\n$ dd if=/dev/zero of=/data0/dump/bigfile bs=8k count=500000\n500000+0 records in\n500000+0 records out\n4096000000 bytes transferred in 32.619208 secs (125570185 bytes/sec)\n\n> Now to read the same files in parallel:\n> ========================= Read test results ==============================\n> sync\n> time dd of=/dev/null if=/dbfast1/llonergan/bigfile bs=8k &\n> time dd of=/dev/null if=/dbfast3/llonergan/bigfile bs=8k &\n> \n> 2000000 records in\n> 2000000 records out\n> \n> real 0m39.849s\n> user 0m0.282s\n> sys 0m22.294s\n> 2000000 records in\n> 2000000 records out\n> \n> real 0m40.410s\n> user 0m0.251s\n> sys 0m22.515s\n> \n> And that's 32,000MB in 40.4 seconds, or 792MB/s sustained from disk (not\n> memory).\n> \n\nReading:\n\n$ dd of=/dev/null if=/data0/dump/bigfile bs=8k count=500000\n500000+0 records in\n500000+0 records out\n4096000000 bytes transferred in 24.067298 secs (170189442 bytes/sec)\n\nOk - didn't quite get my quoted 175MB/s, (FWIW if bs=32k I get exactly\n175MB/s).\n\nHmmm - a bit humbled by Luke's machinery :-), however, mine is probably\ncompetitive on (MB/s)/$....\n\n\nIt would be interesting to see what Dan's system would do on a purely\nsequential workload - as 40-50MB of purely random IO is high.\n\nCheers\n\nMark\n", "msg_date": "Sat, 25 Feb 2006 19:10:55 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Mark,\n\nOn 2/24/06 10:10 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> Well, since this is always fun (2G memory, 3Ware 7506, 4xPATA), writing:\n> \n> $ dd if=/dev/zero 
of=/data0/dump/bigfile bs=8k count=500000\n> 500000 records in\n> 500000 records out\n> 4096000000 bytes transferred in 32.619208 secs (125570185 bytes/sec)\n\n> Reading:\n> \n> $ dd of=/dev/null if=/data0/dump/bigfile bs=8k count=500000\n> 500000 records in\n> 500000 records out\n> 4096000000 bytes transferred in 24.067298 secs (170189442 bytes/sec)\n\nNot bad at all! I have one of these cards in my home machine running WinXP\nand it's not nearly this fast.\n \n> Hmmm - a bit humbled by Luke's machinery :-), however, mine is probably\n> competitive on (MB/s)/$....\n\nNot sure - the machines I cite are about $10K each. The machine you tested\nwas probably about $1500 a few years ago (my guess), and with a 5:1 ratio in\nspeed versus about a 6:1 ratio in price, we're not too far off in MB/s/$\nafter all :-)\n \n> It would be interesting to see what Dan's system would do on a purely\n> sequential workload - as 40-50MB of purely random IO is high.\n\nYeah - that is really high if the I/O is really random. I'd normally expect\nmaybe 500-600 iops / second and if each IO is 8KB, that would be 4MB/s. The\nI/O is probably not really completely random, or it's random over cachable\nbits of the occupied disk area.\n\n- Luke \n\n\n", "msg_date": "Fri, 24 Feb 2006 22:22:49 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n>>Hmmm - a bit humbled by Luke's machinery :-), however, mine is probably\n>>competitive on (MB/s)/$....\n> \n> \n> Not sure - the machines I cite are about $10K each. The machine you tested\n> was probably about $1500 a few years ago (my guess), and with a 5:1 ratio in\n> speed versus about a 6:1 ratio in price, we're not too far off in MB/s/$\n> after all :-)\n> \n\nWow - that is remarkable performance for $10K!\n\nCheers\n\nMark\n", "msg_date": "Sat, 25 Feb 2006 19:28:15 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "At 11:41 PM 2/24/2006, Luke Lonergan wrote:\n>Dan,\n>\n>On 2/24/06 4:47 PM, \"Dan Gorman\" <[email protected]> wrote:\n>\n> > Was that sequential reads? If so, yeah you'll get 110MB/s? How big\n> > was the datafile size? 8MB? Yeah, you'll get 110MB/s. 2GB? No, they\n> > can't sustain that. There are so many details missing from this test\n> > that it's hard to have any context around it :)\n> >\n> > I was getting about 40-50MB/s on a PV with 14 disks on a RAID10 in\n> > real world usage. 
(random IO and fully saturating a Dell 1850 with 4\n> > concurrent threads (to peg the cpu on selects) and raw data files)\n>\n>OK, how about some proof?\n>\n>In a synthetic test that writes 32GB of sequential 8k pages on a machine\n>with 16GB of RAM:\n>========================= Write test results ==============================\n>time bash -c \"dd if=/dev/zero of=/dbfast1/llonergan/bigfile bs=8k\n>count=2000000 && sync\" &\n>time bash -c \"dd if=/dev/zero of=/dbfast3/llonergan/bigfile bs=8k\n>count=2000000 && sync\" &\n>\n>2000000++0 records in\n>2000000++0 records out\n>2000000++0 records in\n>2000000++0 records out\n>\n>real 1m0.046s\n>user 0m0.270s\n>sys 0m30.008s\n>\n>real 1m0.047s\n>user 0m0.287s\n>sys 0m30.675s\n>\n>So that's 32,000 MB written in 60.05 seconds, which is 533MB/s sustained\n>with two threads.\n>\n>Now to read the same files in parallel:\n>========================= Read test results ==============================\n>sync\n>time dd of=/dev/null if=/dbfast1/llonergan/bigfile bs=8k &\n>time dd of=/dev/null if=/dbfast3/llonergan/bigfile bs=8k &\n>\n>2000000++0 records in\n>2000000++0 records out\n>\n>real 0m39.849s\n>user 0m0.282s\n>sys 0m22.294s\n>2000000++0 records in\n>2000000++0 records out\n>\n>real 0m40.410s\n>user 0m0.251s\n>sys 0m22.515s\n>\n>And that's 32,000MB in 40.4 seconds, or 792MB/s sustained from disk (not\n>memory).\n>\n>These are each RAID5 arrays of 8 internal SATA disks on 3Ware HW RAID\n>controllers.\n\nImpressive IO rates. A more detailed HW list would help put them in context.\n\nWhich 3Ware? The 9550SX? How much cache on it (AFAIK, the only \noptions are 128MB and 256MB?)?\n\nWhich HDs?\n\nWhat CPUs (looks like Opterons, but which flavor?) and mainboard?\n\nWhat's CPU utilization when hammering the physical IO subsystem this hard?\n\n\n>Now for real usage, let's run a simple sequential scan query on 123,434 MB\n>of data in a single table on 4 of these machines in parallel. All tables\n>are distributed evenly by Bizgres MPP over all 8 filesystems:\n>\n>============= Bizgres MPP sequential scan results =========================\n>\n>[llonergan@salerno0 +AH4]$ !psql\n>psql -p 9999 -U mppdemo1 demo\n>Welcome to psql 8.1.1 (server 8.1.3), the PostgreSQL interactive terminal.\n>\n>Type: +AFw-copyright for distribution terms\n> +AFw-h for help with SQL commands\n> +AFw? 
for help with psql commands\n> +AFw-g or terminate with semicolon to execute query\n> +AFw-q to quit\n>\n>demo=# +AFw-timing\n>Timing is on.\n>demo=# select version();\n>\n>version\n>----------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>-----\n> PostgreSQL 8.1.3 (Bizgres MPP 2.1) on x86_64-unknown-linux-gnu, compiled by\n>GCC gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) compiled on Feb 23 2006\n>11:34:06\n>(1 row)\n>\n>Time: 0.570 ms\n>demo=# select relname,8*relpages/128 as MB from pg_class order by relpages\n>desc limit 6;\n> relname | mb\n>--------------------------------++--------\n> lineitem | 123434\n> orders | 24907\n> partsupp | 14785\n> part | 3997\n> customer | 3293\n> supplier | 202\n>(6 rows)\n>\n>Time: 1.824 ms\n>demo=# select count(*) from lineitem;\n> count\n>-----------\n> 600037902\n>(1 row)\n>\n>Time: 60300.960 ms\n>\n>So that's 123,434 MB of data scanned in 60.3 seconds, or 2,047 MB/s on 4\n>machines, which uses 512MB/s of disk bandwidth on each machine.\n>\n>Now let's do a query that uses a this big table (a two way join) using all 4\n>machines:\n>============= Bizgres MPP Query results =========================\n>demo=# select\n>demo-# sum(l_extendedprice* (1 - l_discount)) as revenue\n>demo-# from\n>demo-# lineitem,\n>demo-# part\n>demo-# where\n>demo-# (\n>demo(# p_partkey = l_partkey\n>demo(# and p_brand = 'Brand#42'\n>demo(# and p_container in ('SM CASE', 'SM BOX', 'SM PACK',\n>'SM PKG')\n>demo(# and l_quantity >= 7 and l_quantity <= 7 ++ 10\n>demo(# and p_size between 1 and 5\n>demo(# and l_shipmode in ('AIR', 'AIR REG')\n>demo(# and l_shipinstruct = 'DELIVER IN PERSON'\n>demo(# )\n>demo-# or\n>demo-# (\n>demo(# p_partkey = l_partkey\n>demo(# and p_brand = 'Brand#15'\n>demo(# and p_container in ('MED BAG', 'MED BOX', 'MED PKG',\n>'MED PACK')\n>demo(# and l_quantity >= 14 and l_quantity <= 14 ++ 10\n>demo(# and p_size between 1 and 10\n>demo(# and l_shipmode in ('AIR', 'AIR REG')\n>demo(# and l_shipinstruct = 'DELIVER IN PERSON'\n>demo(# )\n>demo-# or\n>demo-# (\n>demo(# p_partkey = l_partkey\n>demo(# and p_brand = 'Brand#53'\n>demo(# and p_container in ('LG CASE', 'LG BOX', 'LG PACK',\n>'LG PKG')\n>demo(# and l_quantity >= 22 and l_quantity <= 22 ++ 10\n>demo(# and p_size between 1 and 15\n>demo(# and l_shipmode in ('AIR', 'AIR REG')\n>demo(# and l_shipinstruct = 'DELIVER IN PERSON'\n>demo(# );\n> revenue\n>----------------\n> 356492404.3164\n>(1 row)\n>\n>Time: 114908.149 ms\nHmmm. ~115secs @ ~500MBps => ~57.5GB of data manipulated.\n\n\n>And now a 6-way join among 4 tables in this same schema:\n>\n>demo=# SELECT\n>demo-#\n>s.s_acctbal,s.s_name,n.n_name,p.p_partkey,p.p_mfgr,s.s_address,s.s_phone,s.s\n>_comment\n>demo-# FROM\n>demo-# supplier s,partsupp ps,nation n,region r,\n>demo-# part p, (\n>demo(# SELECT p_partkey, min(ps_supplycost) as\n>min_ps_cost from part, partsupp ,\n>demo(# supplier,nation, region\n>demo(# WHERE\n>demo(# p_partkey=ps_partkey\n>demo(# and s_suppkey = ps_suppkey\n>demo(# and s_nationkey = n_nationkey\n>demo(# and n_regionkey = r_regionkey\n>demo(# and r_name = 'EUROPE'\n>demo(# GROUP BY\n>demo(# p_partkey\n>demo(# ) g\n>demo-# WHERE\n>demo-# p.p_partkey = ps.ps_partkey\n>demo-# and g.p_partkey = p.p_partkey\n>demo-# and g. 
min_ps_cost = ps.ps_supplycost\n>demo-# and s.s_suppkey = ps.ps_suppkey\n>demo-# and p.p_size = 15\n>demo-# and p.p_type like '%BRASS'\n>demo-# and s.s_nationkey = n.n_nationkey\n>demo-# and n.n_regionkey = r.r_regionkey\n>demo-# and r.r_name = 'EUROPE'\n>demo-# ORDER BY\n>demo-# s. s_acctbal desc,n.n_name,s.s_name,p.p_partkey\n>demo-# LIMIT 100;\n> s_acctbal | s_name | n_name |\n>p_partkey | p_mfgr |\n> s_address | s_phone |\n>s_comment\n>\n>-----------++---------------------------++---------------------------++--------\n>---++---------------------------++------\n>------------------------------------++-----------------++---------------------\n>--------------------------------------\n>-------------------------------------------\n> 9999.70 | Supplier#000239544 | UNITED KINGDOM |\n>6739531 | Manufacturer#4 | 1UCMu\n>3TLyUThghoeZ8arg6cV3Mr | 33-509-584-9496 | carefully ironic\n>asymptotes cajole quickly. slyly silent a\n>ccounts sleep. fl\n>...\n>...\n> 9975.53 | Supplier#000310136 | ROMANIA |\n>10810115 | Manufacturer#5 | VNWON\n>A5Sr B | 29-977-903-6199 | pending deposits\n>wake permanently; final accounts sleep ab\n>out the pending deposits.\n>(100 rows)\n>\n>Time: 424981.813 ms\n...and this implies ~425secs @ ~500MBps => 212.5GB\n\nWhat are the IO rates during these joins?\n\nHow much data is being handled to complete these joins?\n\nHow much data is being exchanged between these machines to complete the joins?\n\nWhat is the connectivity between these 4 machines?\n\nPutting these numbers in context may help the advocacy effort \nconsiderably as well as help us improve things even further. ;-)\n\nTiA,\nRon \n\n\n", "msg_date": "Sat, 25 Feb 2006 06:24:55 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "At 01:22 AM 2/25/2006, Luke Lonergan wrote:\n>Mark,\n>\n>On 2/24/06 10:10 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n>\n> > Well, since this is always fun (2G memory, 3Ware 7506, 4xPATA), writing:\n> >\n> > $ dd if=/dev/zero of=/data0/dump/bigfile bs=8k count=500000\n> > 500000 records in\n> > 500000 records out\n> > 4096000000 bytes transferred in 32.619208 secs (125570185 bytes/sec)\n>\n> > Reading:\n> >\n> > $ dd of=/dev/null if=/data0/dump/bigfile bs=8k count=500000\n> > 500000 records in\n> > 500000 records out\n> > 4096000000 bytes transferred in 24.067298 secs (170189442 bytes/sec)\n>\n>Not bad at all! I have one of these cards in my home machine running WinXP\n>and it's not nearly this fast.\n>\n> > Hmmm - a bit humbled by Luke's machinery :-), however, mine is probably\n> > competitive on (MB/s)/$....\n>\n>Not sure - the machines I cite are about $10K each. The machine you tested\n>was probably about $1500 a few years ago (my guess), and with a 5:1 ratio in\n>speed versus about a 6:1 ratio in price, we're not too far off in MB/s/$\n>after all :-)\n>\n> > It would be interesting to see what Dan's system would do on a purely\n> > sequential workload - as 40-50MB of purely random IO is high.\n>\n>Yeah - that is really high if the I/O is really random. I'd normally expect\n>maybe 500-600 iops / second and if each IO is 8KB, that would be 4MB/s. 
The\n>I/O is probably not really completely random, or it's random over cachable\n>bits of the occupied disk area.\nSide note: the new WD 150GB Raptors (10Krpm 1.5Gbps SATA w/ NCQ \nsupport) have benched at ~1000 IOps _per drive_\n\nhttp://www.storagereview.com/articles/200601/WD1500ADFD_4.html\n\n(Now if we can just get WD to make a 300GB Raptor, increase that \nwimpy 16MB buffer, and implement 6Gbps SATA...;-) )\n\nAn array of these things plugged into a PCI-E <-> SATA RAID \ncontroller with 1-2GB of cache should set a new bar for performance \nas well as making that performance more resilient than ever to \nvariations in usage patterns.\n\n\n\n\n", "msg_date": "Sat, 25 Feb 2006 06:56:33 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" }, { "msg_contents": "Ron,\n\nOn 2/25/06 3:24 AM, \"Ron\" <[email protected]> wrote:\n\n>> These are each RAID5 arrays of 8 internal SATA disks on 3Ware HW RAID\n>> controllers.\n> \n> Impressive IO rates. A more detailed HW list would help put them in context.\n> \n> Which 3Ware? The 9550SX? How much cache on it (AFAIK, the only\n> options are 128MB and 256MB?)?\n> \n> Which HDs?\n> \n> What CPUs (looks like Opterons, but which flavor?) and mainboard?\n> \n> What's CPU utilization when hammering the physical IO subsystem this hard?\n\nOK. There are four machines. Each machine has:\nQty 2 of 3Ware 9550SX/PCIX/128MB cache SATAII RAID controllers\nQty 2 of AMD Opteron 250 CPUs (2.4 GHz)\nQty 16 of 1GB PC3200 RAM (16GB total)\nQty 1 of Tyan 2882S Motherboard\nQty 16 Western Digital 400GB Raid Edition 2 SATA disks\nCost: About $10,000 each\n\nThey are connected together using a Netgear 48 port Gigabit Ethernet switch\nand copper CAT6 cables.\nCost: About $1,500\n\nTotal of all machines:\n8 CPUs\n64GB RAM\n64 Disks\n20TB storage in RAID5\nTotal HW cost: About $45,000.\n\nCPU utilization is apparently (adding up usr+system and dividing by real):\nWriting at 2,132MB/s: 51%\nReading at 3,168MB/s: 56%\n\n>> revenue\n>> ----------------\n>> 356492404.3164\n>> (1 row)\n>> \n>> Time: 114908.149 ms\n> Hmmm. ~115secs @ ~500MBps => ~57.5GB of data manipulated.\n\nActually, this query sequential scans all of both lineitem and part, so it¹s\naccessing 127.5GB of data, and performing the work of the hash join, in\nabout double the scan time. 
That¹s possible because we¹re using all 8 CPUs,\n4 network interfaces and the 64 disks while we are performing the query:\n\n Aggregate (cost=3751996.97..3751996.99 rows=1 width=22)\n -> Gather Motion (cost=3751996.97..3751996.99 rows=1 width=22)\n -> Aggregate (cost=3751996.97..3751996.99 rows=1 width=22)\n -> Hash Join (cost=123440.49..3751993.62 rows=1339\nwidth=22)\n Hash Cond: (\"outer\".l_partkey = \"inner\".p_partkey)\n Join Filter: (((\"inner\".p_brand = 'Brand#42'::bpchar)\nAND ((\"inner\".p_container = 'SM CASE'::bpchar) OR (\"inner\".p_container = 'SM\nBOX'::bpchar) OR (\"inner\".p_container = 'SM PACK'::bpchar) OR\n(\"inner\".p_container = 'SM PKG'::bpchar)) AND (\"outer\".l_quantity >=\n7::numeric) AND (\"outer\".l_quantity <= 17::numeric) AND (\"inner\".p_size <=\n5)) OR ((\"inner\".p_brand = 'Brand#15'::bpchar) AND ((\"inner\".p_container =\n'MED BAG'::bpchar) OR (\"inner\".p_container = 'MED BOX'::bpchar) OR\n(\"inner\".p_container = 'MED PKG'::bpchar) OR (\"inner\".p_container = 'MED\nPACK'::bpchar)) AND (\"outer\".l_quantity >= 14::numeric) AND\n(\"outer\".l_quantity <= 24::numeric) AND (\"inner\".p_size <= 10)) OR\n((\"inner\".p_brand = 'Brand#53'::bpchar) AND ((\"inner\".p_container = 'LG\nCASE'::bpchar) OR (\"inner\".p_container = 'LG BOX'::bpchar) OR\n(\"inner\".p_container = 'LG PACK'::bpchar) OR (\"inner\".p_container = 'LG\nPKG'::bpchar)) AND (\"outer\".l_quantity >= 22::numeric) AND\n(\"outer\".l_quantity <= 32::numeric) AND (\"inner\".p_size <= 15)))\n -> Redistribute Motion (cost=0.00..3340198.25\nrows=2611796 width=36)\n Hash Key: l_partkey\n -> Seq Scan on lineitem (cost=0.00..3287962.32\nrows=2611796 width=36)\n Filter: (((l_shipmode = 'AIR'::bpchar) OR\n(l_shipmode = 'AIR REG'::bpchar)) AND (l_shipinstruct = 'DELIVER IN\nPERSON'::bpchar))\n -> Hash (cost=95213.75..95213.75 rows=2500296\nwidth=36)\n -> Seq Scan on part (cost=0.00..95213.75\nrows=2500296 width=36)\n Filter: (p_size >= 1)\n(13 rows)\n \n>> Time: 424981.813 ms\n> ...and this implies ~425secs @ ~500MBps => 212.5GB\n\nI've attached this explain plan because it's large. Same thing applies as\nin previous, all of the involved tables are scanned, and in this case we've\ngot all manner of CPU work being performed: sorts, hashes, aggregations,...\n\n> What are the IO rates during these joins?\n\nThey burst then disappear during the progression of the queries. These are\nnow CPU bound because the I/O available to each CPU is so fast.\n\n> How much data is being handled to complete these joins?\n\nSee above, basically all of it because there are no indexes.\n \n> How much data is being exchanged between these machines to complete the joins?\n\nGood question, you can infer it from reading the EXPLAIN plans, but the\nstats are not what they appear - generally multiply them by 8.\n \n> What is the connectivity between these 4 machines?\n\nSee above.\n \n> Putting these numbers in context may help the advocacy effort\n> considerably as well as help us improve things even further. ;-)\n\nThanks! (bait bait) I can safely say that there have never been query\nspeeds with Postgres this fast, not even close. (bait bait)\n\nWhat's next is a machine 4 or maybe 8 times this fast.\n\nExplain for second query is attached, compressed with gzip (it¹s kinda\nhuge).\n\n- Luke", "msg_date": "Sat, 25 Feb 2006 07:49:51 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reliability recommendations" } ]
[ { "msg_contents": "Platform: FreeBSD 6.0, Postgresql 8.1.2 compiled from the ports collection.\n\nNot sure if this belongs in performance or bugs..\n\nA pg_restore of my 2.5GB database was taking up to 2 hours to complete \ninstead of the expected 10-15 minutes. Checking the server it was mostly \nCPU bound. Testing further I discovered that is was spending huge \namounts of CPU time creating some indexes.\n\nIt took a while to find out, but basically it boils down to this:\n\nIf the column that is having the index created has a certain \ndistribution of values then create index takes a very long time. If the \ndata values (integer in this case) a fairly evenly distributed then \ncreate index is very quick, if the data values are all the same it is \nvery quick. I discovered that in the slow cases the column had \napproximately half the values as zero and the rest fairly spread out. \nOne column started off with around 400,000 zeros and the rest of the \nfollowing rows spread between values of 1 to 500,000.\n\nI have put together a test case that demonstrates the problem (see \nbelow). I create a simple table, as close in structure to one of my \nproblem tables and populate an integer column with 100,000 zeros follow \nby 100,000 random integers between 0 and 100,000. Then create an index \non this column. I then drop the table and repeat. The create index \nshould take around 1-2 seconds. A fair proportion of the time it takes \n50 seconds!!!\n\nIf I fill the same row with all random data the create index always \ntakes a second or two. If I fill the column with all zeros everything is \nstill OK.\n\nWhen my tables that I am trying to restore are over 2 million rows the \ncreating one index can take an hour!! (almost all CPU time).\n\nAll other areas of performance, once the dump is restored and analysed \nseem to be OK, even large hash/merge joins and sorts\n\nThis is entirely repeatable in FreeBSD in that around half the time \ncreate index will be incredibly slow.\n\nAll postgresql.conf settings are at the defaults for the test initially \n(fresh install)\n\nThe final interesting thing is that as I increase shared buffers to 2000 \nor 3000 the problem gets *worse*\n\nThe following text is output from the test script..\n\nselect version();\n version \n\n------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.2 on i386-portbld-freebsd6.0, compiled by GCC cc (GCC) \n3.4.4 [FreeBSD] 20050518\n(1 row)\n\n\\timing\nTiming is on.\n\n----- Many slow cases, note the 50+ seconds cases\n\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 81.859 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 1482.141 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1543.508 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 56685.230 ms\n\ndrop table atest;\nDROP TABLE\nTime: 4.616 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 6.889 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 2009.787 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1828.663 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 3991.257 ms\n\ndrop table atest;\nDROP TABLE\nTime: 3.796 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 
timestamp);\nCREATE TABLE\nTime: 19.965 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 1625.059 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 2622.827 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 1082.799 ms\n\ndrop table atest;\nDROP TABLE\nTime: 4.627 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 2.953 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 2068.744 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 2671.420 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 8047.660 ms\n\ndrop table atest;\nDROP TABLE\nTime: 3.675 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 2.582 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 1723.987 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 2263.131 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 50050.308 ms\n\ndrop table atest;\nDROP TABLE\nTime: 52.744 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 25.370 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 2052.733 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 2631.317 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 61440.897 ms\n\ndrop table atest;\nDROP TABLE\nTime: 26.137 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 24.794 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),0,now(),now();\nINSERT 0 100000\nTime: 2851.977 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1553.046 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 1774.920 ms\n\n\n---- Fast (Normal?) 
cases\n\ndrop table atest;\nDROP TABLE\nTime: 4.422 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 2.543 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1516.246 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1407.400 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 903.503 ms\n\ndrop table atest;\nDROP TABLE\nTime: 3.820 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 22.861 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1455.556 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 2037.996 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 718.286 ms\n\ndrop table atest;\nDROP TABLE\nTime: 4.503 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 3.448 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1523.540 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1261.473 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 727.707 ms\n\ndrop table atest;\nDROP TABLE\nTime: 3.564 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 2.897 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1447.504 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1403.525 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 754.577 ms\n\ndrop table atest;\nDROP TABLE\nTime: 4.633 ms\ncreate table atest(i int4, r int4,d1 timestamp, d2 timestamp);\nCREATE TABLE\nTime: 3.196 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1618.544 ms\ninsert into atest (i,r,d1,d2) select \ngenerate_series(1,100000),random()*100000,now(),now();\nINSERT 0 100000\nTime: 1530.450 ms\ncreate index idx on atest(r);\nCREATE INDEX\nTime: 802.980 ms\ndrop table atest;\nDROP TABLE\nTime: 4.707 ms\nmserver#\n\nRegards,\nGary.\n", "msg_date": "Wed, 15 Feb 2006 20:00:39 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Strange Create Index behaviour" }, { "msg_contents": "Gary Doades <[email protected]> writes:\n> Platform: FreeBSD 6.0, Postgresql 8.1.2 compiled from the ports collection.\n\n> If the column that is having the index created has a certain \n> distribution of values then create index takes a very long time. If the \n> data values (integer in this case) a fairly evenly distributed then \n> create index is very quick, if the data values are all the same it is \n> very quick. I discovered that in the slow cases the column had \n> approximately half the values as zero and the rest fairly spread out. \n\nInteresting. 
I tried your test script and got fairly close times\nfor all the cases on two different machines:\n\told HPUX machine: shortest 5800 msec, longest 7960 msec\n\tnew Fedora 4 machine: shortest 461 msec, longest 608 msec\n(the HPUX machine was doing other stuff at the same time, so some\nof its variation is probably only noise).\n\nSo what this looks like to me is a corner case that FreeBSD's qsort\nfails to handle well.\n\nYou might try forcing Postgres to use our private copy of qsort, as we\ndo on Solaris for similar reasons. (The easy way to do this by hand\nis to configure as normal, then alter the LIBOBJS setting in\nsrc/Makefile.global to add \"qsort.o\", then proceed with normal build.)\nHowever, I think that our private copy is descended from *BSD sources,\nso it might have the same failure mode. It'd be worth finding out.\n\n> The final interesting thing is that as I increase shared buffers to 2000 \n> or 3000 the problem gets *worse*\n\nshared_buffers is unlikely to impact index build time noticeably in\nrecent PG releases. maintenance_work_mem would affect it a lot, though.\nWhat setting were you using for that?\n\nCan anyone else try these test cases on other platforms?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 15:56:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour " }, { "msg_contents": "Tom Lane wrote:\n> Interesting. I tried your test script and got fairly close times\n> for all the cases on two different machines:\n> \told HPUX machine: shortest 5800 msec, longest 7960 msec\n> \tnew Fedora 4 machine: shortest 461 msec, longest 608 msec\n> (the HPUX machine was doing other stuff at the same time, so some\n> of its variation is probably only noise).\n> \n> So what this looks like to me is a corner case that FreeBSD's qsort\n> fails to handle well.\n> \n> You might try forcing Postgres to use our private copy of qsort, as we\n> do on Solaris for similar reasons. (The easy way to do this by hand\n> is to configure as normal, then alter the LIBOBJS setting in\n> src/Makefile.global to add \"qsort.o\", then proceed with normal build.)\n> However, I think that our private copy is descended from *BSD sources,\n> so it might have the same failure mode. It'd be worth finding out.\n> \n>> The final interesting thing is that as I increase shared buffers to 2000 \n>> or 3000 the problem gets *worse*\n> \n> shared_buffers is unlikely to impact index build time noticeably in\n> recent PG releases. maintenance_work_mem would affect it a lot, though.\n> What setting were you using for that?\n> \n> Can anyone else try these test cases on other platforms?\n> \n\nThanks for that.\n\nI've since tried it on Windows (pg 8.1.2) and the times were all \nsimilar, around 1200ms so it might just be BSD.\n\nI'll have to wait until tomorrow to get back to my BSD box. FreeBSD \nports makes it easy to install, so I'll have to figure out how to get in \nand change things manually. I guess the appropriate files are still left \naround after the ports make command finishes, so I just edit the file \nand make again?\n\nIf it can't be fixed though I guess we may have a problem using BSD. I'm \nsurprised this hasn't been brought up before, the case doesn't seem \n*that* rare. 
Maybe not that many using FreeBSD?\n\nI'd certainly be interested if anyone else can repro it on FreeBSD though.\n\nRegards,\nGary.\n\n", "msg_date": "Wed, 15 Feb 2006 21:06:51 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "Tom Lane wrote:\n> shared_buffers is unlikely to impact index build time noticeably in\n> recent PG releases. maintenance_work_mem would affect it a lot, though.\n> What setting were you using for that?\n> \n\nAlso, i tried upping maintenance_work_mem to 65536 and it didn't make \nmuch difference (maybe 10% faster for the \"normal\" cases). Upping the \nshared_buffers *definitely* makes the bad cases worse though, but I \nagree I don't see why...\n\nRegards,\nGary.\n", "msg_date": "Wed, 15 Feb 2006 21:11:15 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "On Wed, 2006-02-15 at 20:00 +0000, Gary Doades wrote:\n\n> I have put together a test case \n\nPlease enable trace_sort=on and then repeat tests and post the\naccompanying log file.\n\nI think this is simply the sort taking longer depending upon the data\ndistribution, but I'd like to know for sure.\n\nThanks,\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 15 Feb 2006 21:27:27 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "I wrote:\n> Interesting. I tried your test script and got fairly close times\n> for all the cases on two different machines:\n> \told HPUX machine: shortest 5800 msec, longest 7960 msec\n> \tnew Fedora 4 machine: shortest 461 msec, longest 608 msec\n\n> So what this looks like to me is a corner case that FreeBSD's qsort\n> fails to handle well.\n\nI tried forcing PG to use src/port/qsort.c on the Fedora machine,\nand lo and behold:\n\tnew Fedora 4 machine: shortest 434 msec, longest 8530 msec\n\nSo it sure looks like this script does expose a problem on BSD-derived\nqsorts. Curiously, the case that's much the worst for me is the third\nin the script, while the shortest time is the first case, which was slow\nfor Gary. So I'd venture that the *BSD code has been tweaked somewhere\nalong the way, in a manner that moves the problem around without really\nfixing it. (Anyone want to compare the actual FreeBSD source to what\nwe have?)\n\nThis is pretty relevant stuff, because there was a thread recently\nadvocating that we stop using the platform qsort on all platforms:\nhttp://archives.postgresql.org/pgsql-hackers/2005-12/msg00610.php\n\nIt's really interesting to see a case where port/qsort is radically\nworse than other qsorts ... unless we figure that out and fix it,\nI think the idea of using port/qsort everywhere has just taken a\nmajor hit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 16:27:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour " }, { "msg_contents": "Tom Lane wrote:\n > I tried forcing PG to use src/port/qsort.c on the Fedora machine,\n> and lo and behold:\n> \tnew Fedora 4 machine: shortest 434 msec, longest 8530 msec\n> \n> So it sure looks like this script does expose a problem on BSD-derived\n> qsorts. Curiously, the case that's much the worst for me is the third\n> in the script, while the shortest time is the first case, which was slow\n> for Gary. 
So I'd venture that the *BSD code has been tweaked somewhere\n> along the way, in a manner that moves the problem around without really\n> fixing it. (Anyone want to compare the actual FreeBSD source to what\n> we have?)\n> \n\nIf I run the script again, it is not always the first case that is slow, \nit varies from run to run, which is why I repeated it quite a few times \nfor the test.\n\nInterestingly, if I don't delete the table after a run, but just drop \nand re-create the index repeatedly it stays a pretty consistent time, \neither repeatedly good or repeatedly bad!\n\nRegards,\nGary.\n", "msg_date": "Wed, 15 Feb 2006 21:34:11 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "Tom Lane wrote:\n> \n> So it sure looks like this script does expose a problem on BSD-derived\n> qsorts. Curiously, the case that's much the worst for me is the third\n> in the script, while the shortest time is the first case, which was slow\n> for Gary. So I'd venture that the *BSD code has been tweaked somewhere\n> along the way, in a manner that moves the problem around without really\n> fixing it. (Anyone want to compare the actual FreeBSD source to what\n> we have?)\n> \n> It's really interesting to see a case where port/qsort is radically\n> worse than other qsorts ... unless we figure that out and fix it,\n> I think the idea of using port/qsort everywhere has just taken a\n> major hit.\n> \n\nMore specifically to BSD, is there any way I can use a non-BSD qsort for \nbuilding Postresql server?\n\nRegards,\nGary.\n", "msg_date": "Wed, 15 Feb 2006 21:47:46 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Please enable trace_sort=on and then repeat tests and post the\n> accompanying log file.\n\nI did this on my Fedora machine with port/qsort.c, and got the results\nattached. Curiously, this run has the spikes in completely different\nplaces than the prior one did. So the random component of the test data\nis affecting the results quite a lot. There seems absolutely no doubt\nthat we are looking at data-dependent qsort misbehavior, though. 
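\n\n(An illustrative aside, not part of the original exchange: the suspect distribution can also be handed straight to the C library's qsort() with a short standalone program along the lines of the sketch below, which takes Postgres out of the picture entirely. The array size, value range and seed are arbitrary choices that merely mirror the SQL test case, and whether the slowdown actually reproduces will depend on the platform's qsort implementation.)\n\n/*\n * Sketch: time qsort() on a 'half zeros, half random' int array versus a\n * uniformly random one -- the two distributions used in the SQL test case.\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n\n#define N 200000 /* roughly the 2 x 100,000 row test table */\n\nstatic int\ncmp_int(const void *a, const void *b)\n{\n    int av = *(const int *) a;\n    int bv = *(const int *) b;\n\n    return (av > bv) - (av < bv); /* overflow-safe int comparator */\n}\n\nstatic double\ntime_qsort(int *v, int n)\n{\n    clock_t t0 = clock();\n\n    qsort(v, n, sizeof(int), cmp_int);\n    return (double) (clock() - t0) / CLOCKS_PER_SEC;\n}\n\nint\nmain(void)\n{\n    static int skewed[N];\n    static int uniform[N];\n    int i;\n\n    srand(12345); /* arbitrary fixed seed for repeatability */\n\n    /* first half all zeros, second half random 0..99999 */\n    for (i = 0; i < N / 2; i++)\n        skewed[i] = 0;\n    for (i = N / 2; i < N; i++)\n        skewed[i] = rand() % 100000;\n\n    /* comparison case: fully random values */\n    for (i = 0; i < N; i++)\n        uniform[i] = rand() % 100000;\n\n    printf(\"half zeros + random: %.3f s\\n\", time_qsort(skewed, N));\n    printf(\"uniform random: %.3f s\\n\", time_qsort(uniform, N));\n    return 0;\n}\n\nOn a libc whose qsort degrades on long runs of equal keys, the first timing should come out noticeably worse than the second; on one that handles equal keys well, the two should be close.\n\n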
The\nCPU time eaten by performsort accounts for all but about 100 msec of the\nelapsed time reported on the psql side.\n\n\t\t\tregards, tom lane\n\n\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.00s/0.15u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.00s/12.43u sec elapsed 12.44 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.01s/12.51u sec elapsed 12.52 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.00s/0.14u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.00s/0.78u sec elapsed 0.78 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.02s/0.85u sec elapsed 0.87 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.01s/0.14u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.01s/0.96u sec elapsed 0.97 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.02s/1.03u sec elapsed 1.06 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.00s/0.14u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.00s/0.31u sec elapsed 0.32 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.02s/0.38u sec elapsed 0.40 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.00s/0.14u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.00s/7.91u sec elapsed 7.92 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.02s/7.99u sec elapsed 8.01 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.01s/0.13u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.01s/0.61u sec elapsed 0.63 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.04s/0.67u sec elapsed 0.71 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.01s/0.13u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.01s/11.52u sec elapsed 11.54 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.03s/11.59u sec elapsed 11.62 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.00s/0.14u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.00s/0.45u sec elapsed 0.46 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.02s/0.55u sec elapsed 0.57 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.00s/0.14u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.00s/0.45u sec elapsed 0.46 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.04s/0.54u sec elapsed 0.57 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.02s/0.12u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.02s/0.44u sec elapsed 0.46 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.03s/0.55u sec elapsed 0.58 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.02s/0.13u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.02s/0.44u sec elapsed 0.46 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.03s/0.54u sec elapsed 0.58 sec\nLOG: begin index sort: unique = f, workMem = 16384, randomAccess = f\nLOG: performsort starting: CPU 0.02s/0.13u sec elapsed 0.15 sec\nLOG: performsort done: CPU 0.02s/0.44u sec elapsed 0.46 sec\nLOG: internal sort ended, 9861 KB used: CPU 0.04s/0.54u sec elapsed 0.59 sec\n", "msg_date": "Wed, 15 Feb 2006 16:48:56 -0500", "msg_from": "Tom Lane <[email protected]>", 
"msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour " }, { "msg_contents": "Gary Doades <[email protected]> writes:\n> Interestingly, if I don't delete the table after a run, but just drop \n> and re-create the index repeatedly it stays a pretty consistent time, \n> either repeatedly good or repeatedly bad!\n\nThis is consistent with the theory of a data-dependent performance\nproblem in qsort. If you don't generate a fresh set of random test\ndata, then you get repeatable runtimes. With a new set of test data,\nyou might or might not hit the not-so-sweet-spot that we seem to have\ndetected.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 16:51:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour " }, { "msg_contents": "Gary Doades <[email protected]> writes:\n> If I run the script again, it is not always the first case that is slow, \n> it varies from run to run, which is why I repeated it quite a few times \n> for the test.\n\nFor some reason I hadn't immediately twigged to the fact that your test\nscript is just N repetitions of the exact same structure with random data.\nSo it's not so surprising that you get random variations in behavior\nwith different test data sets.\n\nI did some experimentation comparing the qsort from Fedora Core 4\n(glibc-2.3.5-10.3) with our src/port/qsort.c. For those who weren't\nfollowing the pgsql-performance thread, the test case is just this\nrepeated a lot of times:\n\ncreate table atest(i int4, r int4);\ninsert into atest (i,r) select generate_series(1,100000), 0;\ninsert into atest (i,r) select generate_series(1,100000), random()*100000;\n\\timing\ncreate index idx on atest(r);\n\\timing\ndrop table atest;\n\nI did this 100 times and sorted the reported runtimes. (Investigation\nwith trace_sort = on confirms that the runtime is almost entirely spent\nin qsort() called from our performsort --- the Postgres overhead is\nabout 100msec on this machine.) 
Results are below.\n\nIt seems clear that our qsort.c is doing a pretty awful job of picking\nqsort pivots, while glibc is mostly managing not to make that mistake.\nI haven't looked at the glibc code yet to see what they are doing\ndifferently.\n\nI'd say this puts a considerable damper on my enthusiasm for using our\nqsort all the time, as was recently debated in this thread:\nhttp://archives.postgresql.org/pgsql-hackers/2005-12/msg00610.php\nWe need to fix our qsort.c before pushing ahead with that idea.\n\n\t\t\tregards, tom lane\n\n\n100 runtimes for glibc qsort, sorted ascending:\n\nTime: 459.860 ms\nTime: 460.209 ms\nTime: 460.704 ms\nTime: 461.317 ms\nTime: 461.538 ms\nTime: 461.652 ms\nTime: 461.988 ms\nTime: 462.573 ms\nTime: 462.638 ms\nTime: 462.716 ms\nTime: 462.917 ms\nTime: 463.219 ms\nTime: 463.455 ms\nTime: 463.650 ms\nTime: 463.723 ms\nTime: 463.737 ms\nTime: 463.750 ms\nTime: 463.852 ms\nTime: 463.964 ms\nTime: 463.988 ms\nTime: 464.003 ms\nTime: 464.135 ms\nTime: 464.372 ms\nTime: 464.458 ms\nTime: 464.496 ms\nTime: 464.551 ms\nTime: 464.599 ms\nTime: 464.655 ms\nTime: 464.656 ms\nTime: 464.722 ms\nTime: 464.814 ms\nTime: 464.827 ms\nTime: 464.878 ms\nTime: 464.899 ms\nTime: 464.905 ms\nTime: 464.987 ms\nTime: 465.055 ms\nTime: 465.138 ms\nTime: 465.159 ms\nTime: 465.194 ms\nTime: 465.310 ms\nTime: 465.316 ms\nTime: 465.375 ms\nTime: 465.450 ms\nTime: 465.535 ms\nTime: 465.595 ms\nTime: 465.680 ms\nTime: 465.769 ms\nTime: 465.865 ms\nTime: 465.892 ms\nTime: 465.903 ms\nTime: 466.003 ms\nTime: 466.154 ms\nTime: 466.164 ms\nTime: 466.203 ms\nTime: 466.305 ms\nTime: 466.344 ms\nTime: 466.364 ms\nTime: 466.388 ms\nTime: 466.502 ms\nTime: 466.593 ms\nTime: 466.725 ms\nTime: 466.794 ms\nTime: 466.798 ms\nTime: 466.904 ms\nTime: 466.971 ms\nTime: 466.997 ms\nTime: 467.122 ms\nTime: 467.146 ms\nTime: 467.221 ms\nTime: 467.224 ms\nTime: 467.244 ms\nTime: 467.277 ms\nTime: 467.587 ms\nTime: 468.142 ms\nTime: 468.207 ms\nTime: 468.237 ms\nTime: 468.471 ms\nTime: 468.663 ms\nTime: 468.700 ms\nTime: 469.235 ms\nTime: 469.840 ms\nTime: 470.472 ms\nTime: 471.140 ms\nTime: 472.811 ms\nTime: 472.959 ms\nTime: 474.858 ms\nTime: 477.210 ms\nTime: 479.571 ms\nTime: 479.671 ms\nTime: 482.797 ms\nTime: 488.852 ms\nTime: 514.639 ms\nTime: 529.287 ms\nTime: 612.185 ms\nTime: 660.748 ms\nTime: 742.227 ms\nTime: 866.814 ms\nTime: 1234.848 ms\nTime: 1267.398 ms\n\n\n100 runtimes for port/qsort.c, sorted ascending:\n\nTime: 418.905 ms\nTime: 420.611 ms\nTime: 420.764 ms\nTime: 420.904 ms\nTime: 421.706 ms\nTime: 422.466 ms\nTime: 422.627 ms\nTime: 423.189 ms\nTime: 423.302 ms\nTime: 425.096 ms\nTime: 425.731 ms\nTime: 425.851 ms\nTime: 427.253 ms\nTime: 430.113 ms\nTime: 432.756 ms\nTime: 432.963 ms\nTime: 440.502 ms\nTime: 440.640 ms\nTime: 450.452 ms\nTime: 458.143 ms\nTime: 459.212 ms\nTime: 467.706 ms\nTime: 468.006 ms\nTime: 468.574 ms\nTime: 470.003 ms\nTime: 472.313 ms\nTime: 483.622 ms\nTime: 492.395 ms\nTime: 509.564 ms\nTime: 531.037 ms\nTime: 533.366 ms\nTime: 535.610 ms\nTime: 575.523 ms\nTime: 582.688 ms\nTime: 593.545 ms\nTime: 647.364 ms\nTime: 660.612 ms\nTime: 677.312 ms\nTime: 680.288 ms\nTime: 697.626 ms\nTime: 833.066 ms\nTime: 834.511 ms\nTime: 851.819 ms\nTime: 920.443 ms\nTime: 926.731 ms\nTime: 954.289 ms\nTime: 1045.214 ms\nTime: 1059.200 ms\nTime: 1062.328 ms\nTime: 1136.018 ms\nTime: 1260.091 ms\nTime: 1276.883 ms\nTime: 1319.351 ms\nTime: 1438.854 ms\nTime: 1475.457 ms\nTime: 1538.211 ms\nTime: 1549.004 ms\nTime: 1744.642 ms\nTime: 1771.258 ms\nTime: 1959.530 ms\nTime: 
2300.140 ms\nTime: 2589.641 ms\nTime: 2612.780 ms\nTime: 3100.024 ms\nTime: 3284.125 ms\nTime: 3379.792 ms\nTime: 3750.278 ms\nTime: 4302.278 ms\nTime: 4780.624 ms\nTime: 5000.056 ms\nTime: 5092.604 ms\nTime: 5168.722 ms\nTime: 5292.941 ms\nTime: 5895.964 ms\nTime: 7003.164 ms\nTime: 7099.449 ms\nTime: 7115.083 ms\nTime: 7384.940 ms\nTime: 8214.010 ms\nTime: 8700.771 ms\nTime: 9331.225 ms\nTime: 10503.360 ms\nTime: 12496.026 ms\nTime: 12982.474 ms\nTime: 15192.390 ms\nTime: 15392.161 ms\nTime: 15958.295 ms\nTime: 18375.693 ms\nTime: 18617.706 ms\nTime: 18927.515 ms\nTime: 19898.018 ms\nTime: 20865.979 ms\nTime: 21000.907 ms\nTime: 21297.585 ms\nTime: 21714.518 ms\nTime: 25423.235 ms\nTime: 27543.052 ms\nTime: 28314.182 ms\nTime: 29400.278 ms\nTime: 34142.534 ms\n", "msg_date": "Wed, 15 Feb 2006 18:28:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "On Wed, 2006-02-15 at 16:51 -0500, Tom Lane wrote:\n> Gary Doades <[email protected]> writes:\n> > Interestingly, if I don't delete the table after a run, but just drop \n> > and re-create the index repeatedly it stays a pretty consistent time, \n> > either repeatedly good or repeatedly bad!\n> \n> This is consistent with the theory of a data-dependent performance\n> problem in qsort. If you don't generate a fresh set of random test\n> data, then you get repeatable runtimes. With a new set of test data,\n> you might or might not hit the not-so-sweet-spot that we seem to have\n> detected.\n\nAgreed. Good analysis...\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 15 Feb 2006 23:51:21 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "Tom Lane wrote:\n> For some reason I hadn't immediately twigged to the fact that your test\n> script is just N repetitions of the exact same structure with random data.\n> So it's not so surprising that you get random variations in behavior\n> with different test data sets.\n> \n > It seems clear that our qsort.c is doing a pretty awful job of picking\n> qsort pivots, while glibc is mostly managing not to make that mistake.\n> I haven't looked at the glibc code yet to see what they are doing\n> differently.\n> \n> I'd say this puts a considerable damper on my enthusiasm for using our\n> qsort all the time, as was recently debated in this thread:\n> http://archives.postgresql.org/pgsql-hackers/2005-12/msg00610.php\n> We need to fix our qsort.c before pushing ahead with that idea.\n\n[snip]\n\n> Time: 28314.182 ms\n> Time: 29400.278 ms\n> Time: 34142.534 ms\n\nOuch! That confirms my problem. I generated the random test case because \nit was easier than including the dump of my tables, but you can \nappreciate that tables 20 times the size are basically crippled when it \ncomes to creating an index on them.\n\nExamining the dump and the associated times during restore it looks like \nI have 7 tables with this approximate distribution, thus the \nridiculously long restore time. Better not re-index soon!\n\nIs this likely to hit me in a random fashion during normal operation, \njoins, sorts, order by for example?\n\nSo the options are:\n1) Fix the included qsort.c code and use that\n2) Get FreeBSD to fix their qsort code\n3) Both\n\nI guess that 1 is the real solution in case anyone else's qsort is \nbroken in the same way. 
Then at least you *could* use it all the time :)\n\nRegards,\nGary.\n\n\n\n", "msg_date": "Wed, 15 Feb 2006 23:55:30 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "Gary Doades <[email protected]> writes:\n> Is this likely to hit me in a random fashion during normal operation, \n> joins, sorts, order by for example?\n\nYup, anytime you're passing data with that kind of distribution\nthrough a sort.\n\n> So the options are:\n> 1) Fix the included qsort.c code and use that\n> 2) Get FreeBSD to fix their qsort code\n> 3) Both\n\n> I guess that 1 is the real solution in case anyone else's qsort is \n> broken in the same way. Then at least you *could* use it all the time :)\n\nIt's reasonable to assume that most of the *BSDen have basically the\nsame qsort code. Ours claims to have come from NetBSD sources, but\nI don't doubt that they all trace back to a common ancestor.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 19:04:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "Gary Doades <[email protected]> writes:\n> Ouch! That confirms my problem. I generated the random test case because \n> it was easier than including the dump of my tables, but you can \n> appreciate that tables 20 times the size are basically crippled when it \n> comes to creating an index on them.\n\nActually... we only use qsort when we have a sorting problem that fits\nwithin the allowed sort memory. The external-sort logic doesn't go\nthrough that code at all. So all the analysis we just did on your test\ncase doesn't necessarily apply to sort problems that are too large for\nthe sort_mem setting.\n\nThe test case would be sorting 200000 index entries, which'd probably\noccupy at least 24 bytes apiece of sort memory, so probably about 5 meg.\nA problem 20 times that size would definitely not fit in the default\n16MB maintenance_work_mem. Were you using a large value of\nmaintenance_work_mem for your restore?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 19:17:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "This behavior is consistent with the pivot choosing algorithm \nassuming certain distribution(s) for the data. For instance, \nmedian-of-three partitioning is known to be pessimal when the data is \ngeometrically or hyper-geometrically distributed. Also, care must be \ntaken that sometimes is not when there are many equal values in the \ndata. Even pseudo random number generator based pivot choosing \nalgorithms are not immune if the PRNG is flawed in some way.\n\nHow are we choosing our pivots?\n\n\nAt 06:28 PM 2/15/2006, Tom Lane wrote:\n\n>I did some experimentation comparing the qsort from Fedora Core 4\n>(glibc-2.3.5-10.3) with our src/port/qsort.c. For those who weren't\n>following the pgsql-performance thread, the test case is just this\n>repeated a lot of times:\n>\n>create table atest(i int4, r int4);\n>insert into atest (i,r) select generate_series(1,100000), 0;\n>insert into atest (i,r) select generate_series(1,100000), random()*100000;\n>\\timing\n>create index idx on atest(r);\n>\\timing\n>drop table atest;\n>\n>I did this 100 times and sorted the reported runtimes. 
(Investigation\n>with trace_sort = on confirms that the runtime is almost entirely spent\n>in qsort() called from our performsort --- the Postgres overhead is\n>about 100msec on this machine.) Results are below.\n>\n>It seems clear that our qsort.c is doing a pretty awful job of picking\n>qsort pivots, while glibc is mostly managing not to make that mistake.\n>I haven't looked at the glibc code yet to see what they are doing\n>differently.\n>\n>I'd say this puts a considerable damper on my enthusiasm for using our\n>qsort all the time, as was recently debated in this thread:\n>http://archives.postgresql.org/pgsql-hackers/2005-12/msg00610.php\n>We need to fix our qsort.c before pushing ahead with that idea.\n>\n> regards, tom lane\n>\n>\n>100 runtimes for glibc qsort, sorted ascending:\n>\n>Time: 459.860 ms\n><snip>\n>Time: 488.852 ms\n>Time: 514.639 ms\n>Time: 529.287 ms\n>Time: 612.185 ms\n>Time: 660.748 ms\n>Time: 742.227 ms\n>Time: 866.814 ms\n>Time: 1234.848 ms\n>Time: 1267.398 ms\n>\n>\n>100 runtimes for port/qsort.c, sorted ascending:\n>\n>Time: 418.905 ms\n><snip>\n>Time: 20865.979 ms\n>Time: 21000.907 ms\n>Time: 21297.585 ms\n>Time: 21714.518 ms\n>Time: 25423.235 ms\n>Time: 27543.052 ms\n>Time: 28314.182 ms\n>Time: 29400.278 ms\n>Time: 34142.534 ms\n\n\n", "msg_date": "Wed, 15 Feb 2006 19:57:51 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "I wrote:\n> Gary Doades <[email protected]> writes:\n>> Ouch! That confirms my problem. I generated the random test case because \n>> it was easier than including the dump of my tables, but you can \n>> appreciate that tables 20 times the size are basically crippled when it \n>> comes to creating an index on them.\n\n> Actually... we only use qsort when we have a sorting problem that fits\n> within the allowed sort memory. The external-sort logic doesn't go\n> through that code at all. So all the analysis we just did on your test\n> case doesn't necessarily apply to sort problems that are too large for\n> the sort_mem setting.\n\nI increased the size of the test case by 10x (basically s/100000/1000000/)\nwhich is enough to push it into the external-sort regime. I get\namazingly stable runtimes now --- I didn't have the patience to run 100\ntrials, but in 30 trials I have slowest 11538 msec and fastest 11144 msec.\nSo this code path is definitely not very sensitive to this data\ndistribution.\n\nWhile these numbers aren't glittering in comparison to the best-case\nqsort times (~450 msec to sort 10% as much data), they are sure a lot\nbetter than the worst-case times. So maybe a workaround for you is\nto decrease maintenance_work_mem, counterintuitive though that be.\n(Now, if you *weren't* using maintenance_work_mem of 100MB or more\nfor your problem restore, then I'm not sure I know what's going on...)\n\nWe still ought to try to fix qsort of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 19:59:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "Ron <[email protected]> writes:\n> How are we choosing our pivots?\n\nSee qsort.c: it looks like median of nine equally spaced inputs (ie,\nthe 1/8th points of the initial input array, plus the end points),\nimplemented as two rounds of median-of-three choices. 
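To make the shape of that selection concrete, here is a minimal hypothetical sketch written for a plain int array; the real src/port/qsort.c works on untyped bytes through a caller-supplied comparator, so this only illustrates the sampling pattern, not the actual code:

/* Pseudo-median-of-nine pivot choice over v[0..n-1]: three medians-of-three
 * taken at the 1/8th points plus the end points, then a median-of-three of
 * those. Assumes n is large enough for the offsets to stay in range (the
 * real code only samples nine elements for large inputs). */
#include <stddef.h>

static int
med3(int a, int b, int c)
{
    if (a < b)
        return (b < c) ? b : ((a < c) ? c : a);
    else
        return (b > c) ? b : ((a < c) ? a : c);
}

static int
choose_pivot(const int *v, size_t n)
{
    size_t d = n / 8;
    int lo  = med3(v[0], v[d], v[2 * d]);
    int mid = med3(v[n / 2 - d], v[n / 2], v[n / 2 + d]);
    int hi  = med3(v[n - 1 - 2 * d], v[n - 1 - d], v[n - 1]);

    return med3(lo, mid, hi);
}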
With half of the\ndata inputs zero, it's not too improbable for two out of the three\nsamples to be zeroes in which case I think the med3 result will be zero\n--- so choosing a pivot of zero is much more probable than one would\nlike, and doing so in many levels of recursion causes the problem.\n\nI think. I'm not too sure if the code isn't just being sloppy about the\ncase where many data values are equal to the pivot --- there's a special\ncase there to switch to insertion sort, and maybe that's getting invoked\ntoo soon. It'd be useful to get a line-level profile of the behavior of\nthis code in the slow cases...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 20:21:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "On Wed, 2006-02-15 at 20:00 +0000, Gary Doades wrote:\n\n> I have put together a test case that demonstrates the problem (see \n> below). I create a simple table, as close in structure to one of my \n> problem tables and populate an integer column with 100,000 zeros follow \n> by 100,000 random integers between 0 and 100,000. Then create an index \n> on this column. I then drop the table and repeat. The create index \n> should take around 1-2 seconds. A fair proportion of the time it takes \n> 50 seconds!!!\n> \n> If I fill the same row with all random data the create index always \n> takes a second or two. If I fill the column with all zeros everything is \n> still OK.\n\nAside from the importance of investigating sort behaviour, have you\ntried to build a partial index WHERE col > 0 ? That way you wouldn't\neven be indexing the zeros.\n\nBest Regards, Simon Riggs\n\n\n\n", "msg_date": "Thu, 16 Feb 2006 01:52:09 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Create Index behaviour" }, { "msg_contents": "> Ouch! That confirms my problem. I generated the random test case because \n> it was easier than including the dump of my tables, but you can \n> appreciate that tables 20 times the size are basically crippled when it \n> comes to creating an index on them.\n\n\nI have to say that I restored a few gigabyte dump on freebsd the other \nday, and most of the restore time was in index creation - I didn't think \ntoo much of it though at the time. FreeBSD 4.x.\n\nChris\n\n", "msg_date": "Thu, 16 Feb 2006 09:52:46 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "On Wed, 2006-02-15 at 19:59 -0500, Tom Lane wrote:\n\n> I get\n> amazingly stable runtimes now --- I didn't have the patience to run 100\n> trials, but in 30 trials I have slowest 11538 msec and fastest 11144 msec.\n> So this code path is definitely not very sensitive to this data\n> distribution.\n\n\"The worst-case behavior of replacement-selection is very close to its\naverage behavior, while the worst-case behavior of QuickSort is terrible\n(N2) – a strong argument in favor of replacement-selection. Despite this\nrisk, QuickSort is widely used because, in practice, it has superior\nperformance.\" p.8, \"AlphaSort: A Cache-Sensitive Parallel External\nSort\", Nyberg et al, VLDB Journal 4(4): 603-627 (1995)\n\nI think your other comment about flipping to insertion sort too early\n(and not returning...) 
is a plausible cause for the poor pg qsort\nbehaviour, but the overall spread of values seems as expected.\n\nSome test results I've seen seem consistent with the view that\nincreasing memory also increases run-time for larger settings of\nwork_mem/maintenance_work_mem. Certainly, as I observed a while back,\nhaving a large memory settings doesn't help you at all when you are\ndoing final run merging on the external sort. Whatever we do, we should\nlook at the value high memory settings bring to each phase of a sort\nseparately from the other phases.\n\nThere is work underway on improving external sorts, so I hear (not me).\nPlus my WIP on randomAccess requirements.\n\nBest Regards, Simon Riggs\n\n\n\n", "msg_date": "Thu, 16 Feb 2006 01:56:55 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:\n> It seems clear that our qsort.c is doing a pretty awful job of picking\n> qsort pivots, while glibc is mostly managing not to make that mistake.\n> I haven't looked at the glibc code yet to see what they are doing\n> differently.\n\nglibc qsort is actually merge sort, so I'm not surprised it avoids this\nproblem.\n\n-Neil\n\n\n", "msg_date": "Wed, 15 Feb 2006 21:12:52 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> wrote\n>\n> I did this 100 times and sorted the reported runtimes.\n>\n> I'd say this puts a considerable damper on my enthusiasm for using our\n> qsort all the time, as was recently debated in this thread:\n> http://archives.postgresql.org/pgsql-hackers/2005-12/msg00610.php\n>\n> 100 runtimes for glibc qsort, sorted ascending:\n>\n> Time: 866.814 ms\n> Time: 1234.848 ms\n> Time: 1267.398 ms\n>\n> 100 runtimes for port/qsort.c, sorted ascending:\n>\n> Time: 28314.182 ms\n> Time: 29400.278 ms\n> Time: 34142.534 ms\n>\n\nBy \"did this 100 times\" do you mean generate a sequence of at most\n200000*100 numbers, and for every 200000 numbers, the first half are all\nzeros and the other half are uniform random numbers? I tried to confirm it\nby patching the program mentioned in the link, but seems BSDqsort is still a\nlittle bit leading.\n\nRegards,\nQingqing\n\n---\nResult\n\nsort#./sort\n[3] [glibc qsort]: nelem(20000000), range(4294901760) distr(halfhalf)\nccost(2) : 18887.285000 ms\n[3] [BSD qsort]: nelem(20000000), range(4294901760) distr(halfhalf) ccost(2)\n: 18801.018000 ms\n[3] [qsortG]: nelem(20000000), range(4294901760) distr(halfhalf) ccost(2) :\n22997.004000 ms\n\n---\nPatch to sort.c\n\nsort#diff -c sort.c sort1.c\n*** sort.c Thu Dec 15 12:18:59 2005\n--- sort1.c Wed Feb 15 22:21:15 2006\n***************\n*** 35,43 ****\n {\"BSD qsort\", qsortB},\n {\"qsortG\", qsortG}\n };\n! static const size_t d_nelem[] = {1000, 10000, 100000, 1000000, 5000000};\n! static const size_t d_range[] = {2, 32, 1024, 0xFFFF0000L};\n! static const char *d_distr[] = {\"uniform\", \"gaussian\", \"95sorted\",\n\"95reversed\"};\n static const size_t d_ccost[] = {2};\n\n /* factor index */\n--- 35,43 ----\n {\"BSD qsort\", qsortB},\n {\"qsortG\", qsortG}\n };\n! static const size_t d_nelem[] = {5000000, 10000000, 20000000};\n! static const size_t d_range[] = {0xFFFF0000L};\n! 
static const char *d_distr[] = {\"halfhalf\"};\n static const size_t d_ccost[] = {2};\n\n /* factor index */\n***************\n*** 180,185 ****\n--- 180,192 ----\n swap(karray[i], karray[nelem-i-1]);\n }\n }\n+ else if (!strcmp(distr, \"halfhalf\"))\n+ {\n+ int j;\n+ for (i = 0; i < nelem/200000; i++)\n+ for (j = 0; j < 100000; j++)\n+ karray[i*200000 + j] = 0;\n+ }\n\n return array;\n }\n\n\n\n", "msg_date": "Thu, 16 Feb 2006 11:28:37 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "At 08:21 PM 2/15/2006, Tom Lane wrote:\n>Ron <[email protected]> writes:\n> > How are we choosing our pivots?\n>\n>See qsort.c: it looks like median of nine equally spaced inputs (ie,\n>the 1/8th points of the initial input array, plus the end points),\n>implemented as two rounds of median-of-three choices.\n\nOK, this is a bad way to do median-of-n partitioning for a few \nreasons. See Sedgewick's PhD thesis for details.\n\nBasically, if one is using median-of-n partitioning to choose a \npivot, one should do it in =one= pass, and n for that pass should be \n<= the numbers of registers in the CPU. Since the x86 ISA has 8 \nGPR's, n should be <= 8. 7 for instance.\n\nSpecial purposing the median-of-n code so that the minimal number of \ncomparisons and moves is used to sort the sample and then \n\"partitioning in place\" is the best way to do it. In addition, care \nmust be taken to deal with the possibility that many of the keys may be equal.\n\nThe (pseudo) code looks something like this:\n\nqs(a[],L,R){\nif((R-L) > SAMPLE_SIZE){ // Not worth using qs for too few elements\n SortSample(SAMPLE_SIZE,a[],L,R);\n // Sorts SAMPLE_SIZE= n elements and does median-of-n \npartitioning for small n\n // using the minimal number of comparisons and moves.\n // In the process it ends up partitioning the first n/2 and last \nn/2 elements\n // SAMPLE_SIZE is a constant chosen to work best for a given CPU.\n // #GPRs - 1 is a good initial guess.\n // For the x86 ISA, #GPRs - 1 = 7. For native x86-64, it's 15.\n // For most RISC CPUs it's 31 or 63. 
For Itanium, it's 127 (!)\n pivot= a[(L+R)>>1]; i= L+(SAMPLE_SIZE>>1); j= R-(SAMPLE_SIZE>>1);\n for(;;){\n while(a[++i] < pivot);\n while(a[--j] > pivot);\n if(i >= j) break;\n if(a[i] > a[j]) swap(a[i],a[j]);\n }\n if((i-R) >= (j-L)){qs(a,L,i-1);}\n else{qs(a,i,R);}\nelse{OofN^2_Sort(a,L,R);}\n// SelectSort may be better than InsertSort if KeySize in bits << \nRecordSize in bits\n} // End of qs\n\nGiven that the most common CPU ISA in existence has 8 GPRs, \nSAMPLE_SIZE= 7 is probably optimal:\nt= (L+R);\nthe set would be {L; t/8; t/4; t/2; 3*t/4; 7*t/8; R;}\n==> {L; t>>3; t>>2; t>>1; (3*t)>>2; (7*t)>>3; R} as the locations.\nEven better (and more easily scaled as the number of GPR's in the CPU \nchanges) is to use\nthe set {L; L+1; L+2; t>>1; R-2; R-1; R}\nThis means that instead of 7 random memory accesses, we have 3; two \nof which result in a\nburst access for three elements each.\nThat's much faster; _and_ using a sample of 9, 15, 31, 63, etc (to \nmax of ~GPRs -1) elements is more easily done.\n\nIt also means that the work we do sorting the sample can be taken \nadvantage of when starting\ninner loop of quicksort: items L..L+2, t, and R-2..R are already \npartitioned by SortSample().\n\nInsuring that the minimum number of comparisons and moves is done in \nSortSample can be down by using a code generator to create a \ncomparison tree that identifies which permutation(s) of n we are \ndealing with and then moving them into place with the minimal number of moves.\n\nSIDE NOTE: IIRC glibc's qsort is actually merge sort. Merge sort \nperformance is insensitive to all inputs, and there are way to \noptimize it as well.\n\nI'll leave the actual coding to someone who knows the pg source \nbetter than I do.\nRon \n\n\n", "msg_date": "Wed, 15 Feb 2006 23:30:54 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> By \"did this 100 times\" do you mean generate a sequence of at most\n> 200000*100 numbers, and for every 200000 numbers, the first half are all\n> zeros and the other half are uniform random numbers?\n\nNo, I mean I ran the bit of SQL script I gave 100 separate times.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 23:40:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> wrote\n> \"Qingqing Zhou\" <[email protected]> writes:\n> > By \"did this 100 times\" do you mean generate a sequence of at most\n> > 200000*100 numbers, and for every 200000 numbers, the first half are all\n> > zeros and the other half are uniform random numbers?\n>\n> No, I mean I ran the bit of SQL script I gave 100 separate times.\n>\n\nI must misunderstand something here -- I can't figure out that why the cost\nof the same procedure keep climbing?\n\nRegards,\nQingqing\n\n\n", "msg_date": "Thu, 16 Feb 2006 12:47:50 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "\n\"Qingqing Zhou\" <[email protected]> wrote\n>\n> I must misunderstand something here -- I can't figure out that why the\ncost\n> of the same procedure keep climbing?\n>\n\nOoops, I mis-intepret the sentence -- you sorted the results ...\n\nRegards,\nQingqing\n\n\n", 
"msg_date": "Thu, 16 Feb 2006 12:51:32 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "\"Qingqing Zhou\" <[email protected]> writes:\n> \"Tom Lane\" <[email protected]> wrote\n>> No, I mean I ran the bit of SQL script I gave 100 separate times.\n\n> I must misunderstand something here -- I can't figure out that why the cost\n> of the same procedure keep climbing?\n\nNo, the run cost varies randomly depending on the random data supplied\nby the test script. The reason the numbers are increasing is that I\nsorted them for ease of inspection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Feb 2006 23:54:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "Tom Lane wrote:\n> I increased the size of the test case by 10x (basically s/100000/1000000/)\n> which is enough to push it into the external-sort regime. I get\n> amazingly stable runtimes now --- I didn't have the patience to run 100\n> trials, but in 30 trials I have slowest 11538 msec and fastest 11144 msec.\n> So this code path is definitely not very sensitive to this data\n> distribution.\n>\n> While these numbers aren't glittering in comparison to the best-case\n> qsort times (~450 msec to sort 10% as much data), they are sure a lot\n> better than the worst-case times. So maybe a workaround for you is\n> to decrease maintenance_work_mem, counterintuitive though that be.\n> (Now, if you *weren't* using maintenance_work_mem of 100MB or more\n> for your problem restore, then I'm not sure I know what's going on...)\n>\n\nGood call. I basically reversed your test by keeping the number of rows\nthe same (200000), but reducing maintenance_work_mem. Reducing to 8192\nmade no real difference. Reducing to 4096 flattened out all the times\nnicely. Slower overall, but at least predictable. Hopefully only a\ntemporary solution until qsort is fixed.\n\nMy restore now takes 22 minutes :)\n\nI think the reason I wasn't seeing performance issues with normal sort\noperations is because they use work_mem not maintenance_work_mem which was\nonly set to 2048 anyway. Does that sound right?\n\nRegards,\nGary.\n\n\n", "msg_date": "Thu, 16 Feb 2006 11:06:32 -0000 (GMT)", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create Index " }, { "msg_contents": "On Wed, Feb 15, 2006 at 11:30:54PM -0500, Ron wrote:\n> Even better (and more easily scaled as the number of GPR's in the CPU \n> changes) is to use\n> the set {L; L+1; L+2; t>>1; R-2; R-1; R}\n> This means that instead of 7 random memory accesses, we have 3; two \n> of which result in a\n> burst access for three elements each.\n\nIsn't that improvement going to disappear competely if you choose a bad\npivot?\n\n> SIDE NOTE: IIRC glibc's qsort is actually merge sort. Merge sort \n> performance is insensitive to all inputs, and there are way to \n> optimize it as well.\n\nglibc-2.3.5/stdlib/qsort.c:\n\n /* Order size using quicksort. This implementation incorporates\n four optimizations discussed in Sedgewick:\n\nI can't see any references to merge sort in there at all.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 16 Feb 2006 12:35:22 +0100", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: Strange Create Index" }, { "msg_contents": "* Neil Conway:\n\n> On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:\n>> It seems clear that our qsort.c is doing a pretty awful job of picking\n>> qsort pivots, while glibc is mostly managing not to make that mistake.\n>> I haven't looked at the glibc code yet to see what they are doing\n>> differently.\n>\n> glibc qsort is actually merge sort, so I'm not surprised it avoids this\n> problem.\n\nqsort also performs twice as many key comparisons as the theoretical\nminimum. If key comparison is not very cheap, other schemes (like\nheapsort, for example) are more attractive.\n", "msg_date": "Thu, 16 Feb 2006 13:10:48 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again" }, { "msg_contents": "On Thu, Feb 16, 2006 at 01:10:48PM +0100, Florian Weimer wrote:\n> * Neil Conway:\n> \n> > On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:\n> >> It seems clear that our qsort.c is doing a pretty awful job of picking\n> >> qsort pivots, while glibc is mostly managing not to make that mistake.\n> >> I haven't looked at the glibc code yet to see what they are doing\n> >> differently.\n> >\n> > glibc qsort is actually merge sort, so I'm not surprised it avoids this\n> > problem.\n> \n> qsort also performs twice as many key comparisons as the theoretical\n> minimum. If key comparison is not very cheap, other schemes (like\n> heapsort, for example) are more attractive.\n\nLast time around there were a number of different algorithms tested.\nDid anyone run those tests while getting it to count the number of\nactual comparisons (which could easily swamp the time taken to do the\nactual sort in some cases)?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Thu, 16 Feb 2006 13:49:18 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again" }, { "msg_contents": "Martijn van Oosterhout schrieb:\n> \n> Last time around there were a number of different algorithms tested.\n> Did anyone run those tests while getting it to count the number of\n> actual comparisons (which could easily swamp the time taken to do the\n> actual sort in some cases)?\n> \n\nThe last time I did such tests is almost 10 years ago. I had used \nMetroWerks CodeWarrior C/C++, which had Quicksort as algorithm in the Lib C.\nAnyhow, I tested a few algorithms including merge sort and heapsort. I \nend up with heapsort because it was the fastest algorithm for our issue. \nWe joined two arrays where each array was sorted and run qsort to sort \nthe new array.\n\nSven.\n", "msg_date": "Thu, 16 Feb 2006 14:08:40 +0100", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] qsort again" }, { "msg_contents": "At 06:35 AM 2/16/2006, Steinar H. 
Gunderson wrote:\n>On Wed, Feb 15, 2006 at 11:30:54PM -0500, Ron wrote:\n> > Even better (and more easily scaled as the number of GPR's in the CPU\n> > changes) is to use\n> > the set {L; L+1; L+2; t>>1; R-2; R-1; R}\n> > This means that instead of 7 random memory accesses, we have 3; two\n> > of which result in a burst access for three elements each.\n>\n>Isn't that improvement going to disappear competely if you choose a bad\n>pivot?\nOnly if you _consistently_ (read: \"the vast majority of the time\": \nquicksort is actually darn robust) choose a _pessimal_, not just \n\"bad\", pivot quicksort will degenerate to the O(N^2) behavior \neveryone worries about. See Corman & Rivest for a proof on this.\n\nEven then, doing things as above has benefits:\n1= The worst case is less bad since the guaranteed O(lgs!) pivot \nchoosing algorithm puts s elements into final position.\nWorst case becomes better than O(N^2/(s-1)).\n\n2= The overhead of pivot choosing can overshadow the benefits using \nmore traditional methods for even moderate values of s. See \ndiscussions on the quicksort variant known as \"samplesort\" and \nSedgewick's PhD thesis for details. Using a pivot choosing algorithm \nthat actually does some of the partitioning (and does it more \nefficiently than the \"usual\" partitioning algorithm does) plus using \npartition-in-place (rather then Lomuto's method) reduces overhead \nvery effectively (at the \"cost\" of more complicated / delicate to get \nright partitioning code). The above reduces the number of moves used \nin a quicksort pass considerably regardless of the number of compares used.\n\n3= Especially in modern systems where the gap between internal CPU \nbandwidth and memory bandwidth is so great, the overhead of memory \naccesses for comparisons and moves is the majority of the overhead \nfor both the pivot choosing and the partitioning algorithms within \nquicksort. Particularly random memory accesses. The reason (#GPRs - \n1) is a magic constant is that it's the most you can compare and move \nusing only register-to-register operations.\n\nIn addition, replacing as many of the memory accesses you must do \nwith sequential rather than random memory accesses is a big deal: \nsequential memory access is measured in 10's of CPU cycles while \nrandom memory access is measured in hundreds of CPU cycles. It's no \naccident that the advances in Grey's sorting contest have involved \nalgorithms that are both register and cache friendly, minimizing \noverall memory access and using sequential memory access as much as \npossible when said access can not be avoided. As caches grow larger \nand memory accesses more expensive, it's often worth it to use a \nBucketSort+QuickSort hybrid rather than just QuickSort.\n\n...and of course if you know enough about the data to be sorted so as \nto constrain it appropriately, one should use a non comparison based \nO(N) sorting algorithm rather than any of the general comparison \nbased O(NlgN) methods.\n\n\n> > SIDE NOTE: IIRC glibc's qsort is actually merge sort. Merge sort\n> > performance is insensitive to all inputs, and there are way to\n> > optimize it as well.\n>\n>glibc-2.3.5/stdlib/qsort.c:\n>\n> /* Order size using quicksort. 
This implementation incorporates\n> four optimizations discussed in Sedgewick:\n>\n>I can't see any references to merge sort in there at all.\nWell, then I'm not the only person on the lists whose memory is faulty ;-)\n\nThe up side of MergeSort is that its performance is always O(NlgN).\nThe down sides are that it is far more memory hungry than QuickSort and slower.\n\n\nRon \n\n\n", "msg_date": "Thu, 16 Feb 2006 08:22:55 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "At 07:10 AM 2/16/2006, Florian Weimer wrote:\n>* Neil Conway:\n>\n> > On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:\n> >> It seems clear that our qsort.c is doing a pretty awful job of picking\n> >> qsort pivots, while glibc is mostly managing not to make that mistake.\n> >> I haven't looked at the glibc code yet to see what they are doing\n> >> differently.\n> >\n> > glibc qsort is actually merge sort, so I'm not surprised it avoids this\n> > problem.\n>\n>qsort also performs twice as many key comparisons as the theoretical\n>minimum.\n\nThe theoretical minimum number of comparisons for a general purpose \ncomparison based sort is O(lgN!).\nQuickSort uses 2NlnN ~= 1.38NlgN or ~1.38x the optimum without tuning \n(see Knuth, Sedgewick, Corman, ... etc)\nOTOH, QuickSort uses ~2x as many =moves= as the theoretical minimum \nunless tuned, and moves are more expensive than compares in modern systems.\n\nSee my other posts for QuickSort tuning methods that attempt to \ndirectly address both issues.\n\n\nRon \n\n\n", "msg_date": "Thu, 16 Feb 2006 08:38:45 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] qsort again" }, { "msg_contents": "Hi, Ron,\n\nRon wrote:\n\n> ...and of course if you know enough about the data to be sorted so as to\n> constrain it appropriately, one should use a non comparison based O(N)\n> sorting algorithm rather than any of the general comparison based\n> O(NlgN) methods.\n\nSounds interesting, could you give us some pointers (names, URLs,\npapers) to such algorithms?\n\nThanks a lot,\nMarkus\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 16 Feb 2006 14:44:45 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "Last night I implemented a non-recursive introsort in C... let me test it a\nbit more and then I'll post it here for everyone else to try out.\n\nOn 2/16/06, Markus Schaber <[email protected]> wrote:\n>\n> Hi, Ron,\n>\n> Ron wrote:\n>\n> > ...and of course if you know enough about the data to be sorted so as to\n> > constrain it appropriately, one should use a non comparison based O(N)\n> > sorting algorithm rather than any of the general comparison based\n> > O(NlgN) methods.\n>\n> Sounds interesting, could you give us some pointers (names, URLs,\n> papers) to such algorithms?\n>\n> Thanks a lot,\n> Markus\n>\n>\n>\n> --\n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in EU! 
www.ffii.org\n> www.nosoftwarepatents.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n--\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\n732.331.1324\n\n", "msg_date": "Thu, 16 Feb 2006 09:19:44 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "\"Gary Doades\" <[email protected]> writes:\n> I think the reason I wasn't seeing performance issues with normal sort\n> operations is because they use work_mem not maintenance_work_mem which was\n> only set to 2048 anyway. Does that sound right?\n\nVery probable. Do you want to test the theory by jacking that up? ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Feb 2006 09:42:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create Index behaviour) " }, { "msg_contents": "On Thu, Feb 16, 2006 at 08:22:55AM -0500, Ron wrote:\n> 3= Especially in modern systems where the gap between internal CPU \n> bandwidth and memory bandwidth is so great, the overhead of memory \n> accesses for comparisons and moves is the majority of the overhead \n> for both the pivot choosing and the partitioning algorithms within \n> quicksort. Particularly random memory accesses. The reason (#GPRs - \n> 1) is a magic constant is that it's the most you can compare and move \n> using only register-to-register operations.\n\nBut how much of this applies to us? We're not sorting arrays of\nintegers, we're sorting pointers to tuples. So while moves cost very\nlittle, a comparison costs hundreds, maybe thousands of cycles. A tuple\ncan easily be two or three cachelines and you're probably going to\naccess all of it, not to mention the Fmgr structures and the Datums\nthemselves.\n\nNone of this is cache friendly. The actual tuples themselves could be\nspread all over memory (I don't think any particular effort is expended\ntrying to minimize fragmentation).\n\nDo these algorithms discuss the case where a comparison is more than\n1000 times the cost of a move?\n\nWhere this does become interesting is where we can convert a datum to\nan integer such that if f(A) > f(B) then A > B. Then we can sort on\nf(X) first with just integer comparisons and then do a full tuple\ncomparison only if f(A) = f(B). 
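A rough sketch of what that two-level comparison might look like (the struct, the field names and the assumed full_tuple_compare() are invented here purely for illustration):

/* Illustration only: compare a precomputed, order-preserving integer key
 * first, and fall back to the expensive full comparison only on ties. */
typedef struct SortItem
{
    unsigned int abbrev;    /* f(X): integer derived from the datum */
    void        *tuple;     /* pointer to the full row */
} SortItem;

/* assumed to exist for the sake of the example; the slow, general path */
extern int full_tuple_compare(const void *a, const void *b);

static int
cmp_abbrev_first(const void *pa, const void *pb)
{
    const SortItem *a = (const SortItem *) pa;
    const SortItem *b = (const SortItem *) pb;

    if (a->abbrev != b->abbrev)
        return (a->abbrev > b->abbrev) ? 1 : -1;
    /* equal abbreviations do not imply equal datums, so break the tie */
    return full_tuple_compare(a->tuple, b->tuple);
}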
This would be much more cache-coherent\nand make these algorithms much more applicable in my mind.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Thu, 16 Feb 2006 15:48:33 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "> \"Gary Doades\" <[email protected]> writes:\n>> I think the reason I wasn't seeing performance issues with normal sort\n>> operations is because they use work_mem not maintenance_work_mem which\n>> was\n>> only set to 2048 anyway. Does that sound right?\n>\n> Very probable. Do you want to test the theory by jacking that up? ;-)\n\nHmm, played around a bit. I have managed to get it to do a sort on one of\nthe \"bad\" columns using a select of two whole tables that results in a\nsequntial scan, sort and merge join. I also tried a simple select column\norder by column for a bad column.\n\nI tried varying maintenance_work_mem and work_mem up and down between 2048\nand 65536 but I always get similar results. The sort phase always takes 4\nto 5 seconds which seems about right for 900,000 rows.\n\nThis was on a colunm that took 12 minutes to create an index on.\n\nI've no idea why it should behave this way, but probably explains why I\n(and others) may not have noticed it before.\n\nRegards,\nGary.\n\n\n", "msg_date": "Thu, 16 Feb 2006 15:42:36 -0000 (GMT)", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create Index " }, { "msg_contents": "At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:\n>On Thu, Feb 16, 2006 at 08:22:55AM -0500, Ron wrote:\n> > 3= Especially in modern systems where the gap between internal CPU\n> > bandwidth and memory bandwidth is so great, the overhead of memory\n> > accesses for comparisons and moves is the majority of the overhead\n> > for both the pivot choosing and the partitioning algorithms within\n> > quicksort. Particularly random memory accesses. The reason (#GPRs -\n> > 1) is a magic constant is that it's the most you can compare and move\n> > using only register-to-register operations.\n>\n>But how much of this applies to us? We're not sorting arrays of\n>integers, we're sorting pointers to tuples. So while moves cost very\n>little, a comparison costs hundreds, maybe thousands of cycles. A tuple\n>can easily be two or three cachelines and you're probably going to\n>access all of it, not to mention the Fmgr structures and the Datums\n>themselves.\nPointers are simply fixed size 32b or 64b quantities. They are \nessentially integers. Comparing and moving pointers or fixed size \nkeys to those pointers is exactly the same problem as comparing and \nmoving integers.\n\nComparing =or= moving the actual data structures is a much more \nexpensive and variable cost proposition. I'm sure that pg's sort \nfunctionality is written intelligently enough that the only real data \nmoves are done in a final pass after the exact desired order has been \nfound using pointer compares and (re)assignments during the sorting \nprocess. 
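As a toy illustration of that pattern (purely hypothetical code, not taken from the backend), one can sort an array of row pointers and touch the wide rows only once, in an optional final pass:

/* Sort pointers to wide rows by key; each 2KB row is copied at most once,
 * after the final order has already been established on the pointers. */
#include <stdlib.h>
#include <string.h>

typedef struct Row
{
    int  key;
    char payload[2048];
} Row;

static int
cmp_rowptr(const void *a, const void *b)
{
    const Row *ra = *(const Row *const *) a;
    const Row *rb = *(const Row *const *) b;

    return (ra->key > rb->key) - (ra->key < rb->key);
}

static void
sort_rows(Row *rows, size_t n)
{
    Row **ptrs = malloc(n * sizeof(Row *));
    Row  *out  = malloc(n * sizeof(Row));
    size_t i;

    if (ptrs == NULL || out == NULL)
    {
        free(ptrs);
        free(out);
        return;
    }
    for (i = 0; i < n; i++)
        ptrs[i] = &rows[i];
    qsort(ptrs, n, sizeof(Row *), cmp_rowptr);  /* only pointers are swapped */
    for (i = 0; i < n; i++)                     /* optional final data pass */
        out[i] = *ptrs[i];
    memcpy(rows, out, n * sizeof(Row));
    free(out);
    free(ptrs);
}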
That's a standard technique for sorting data whose \"key\" or \npointer is much smaller than a datum.\n\nYour cost comment basically agrees with mine regarding the cost of \nrandom memory accesses. The good news is that the number of datums \nto be examined during the pivot choosing process is small enough that \nthe datums can fit into CPU cache while the pointers to them can be \nassigned to registers: making pivot choosing +very+ fast when done correctly.\n\nAs you've noted, actual partitioning is going to be more expensive \nsince it involves accessing enough actual datums that they can't all \nfit into CPU cache. The good news is that QuickSort has a very \nsequential access pattern within its inner loop. So while we must go \nto memory for compares, we are at least keeping the cost for it down \nit a minimum. In addition, said access is nice enough to be very \nprefetch and CPU cache hierarchy friendly.\n\n\n>None of this is cache friendly. The actual tuples themselves could be\n>spread all over memory (I don't think any particular effort is expended\n>trying to minimize fragmentation).\nIt probably would be worth it to spend some effort on memory layout \njust as we do for HD layout.\n\n\n>Do these algorithms discuss the case where a comparison is more than\n>1000 times the cost of a move?\nA move is always more expensive than a compare when the datum is \nlarger than its pointer or key. A move is always more expensive than \na compare when it involves memory to memory movement rather than CPU \nlocation to CPU location movement. A move is especially more \nexpensive than a compare when it involves both factors. Most moves \ndo involve both.\n\nWhat I suspect you meant is that a key comparison that involves \naccessing the data in memory is more expensive than reassigning the \npointers associated with those keys. That is certainly true.\n\nYes. The problem has been extensively studied. ;-)\n\n\n>Where this does become interesting is where we can convert a datum to\n>an integer such that if f(A) > f(B) then A > B. Then we can sort on\n>f(X) first with just integer comparisons and then do a full tuple\n>comparison only if f(A) = f(B). This would be much more cache-coherent\n>and make these algorithms much more applicable in my mind.\nIn fact we can do better.\nUsing hash codes or what-not to map datums to keys and then sorting \njust the keys and the pointers to those datums followed by an \noptional final pass where we do the actual data movement is also a \nstandard technique for handling large data structures.\n\n\nRegardless of what tweaks beyond the basic algorithms we use, the \nalgorithms themselves have been well studied and their performance \nwell established. QuickSort is the best performing of the O(nlgn) \ncomparison based sorts and it uses less resources than HeapSort or MergeSort.\n\nRon\n\n\n", "msg_date": "Thu, 16 Feb 2006 10:52:48 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "Ron <[email protected]> writes:\n> Your cost comment basically agrees with mine regarding the cost of \n> random memory accesses. 
The good news is that the number of datums \n> to be examined during the pivot choosing process is small enough that \n> the datums can fit into CPU cache while the pointers to them can be \n> assigned to registers: making pivot choosing +very+ fast when done correctly.\n\nThis is more or less irrelevant given that comparing the pointers is not\nthe operation we need to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Feb 2006 11:20:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create " }, { "msg_contents": "Markus Schaber wrote:\n> Ron wrote:\n>>...and of course if you know enough about the data to be sorted so as to\n>>constrain it appropriately, one should use a non comparison based O(N)\n>>sorting algorithm rather than any of the general comparison based\n>>O(NlgN) methods.\n> \n> Sounds interesting, could you give us some pointers (names, URLs,\n> papers) to such algorithms?\n\nMost of these techniques boil down to good ol' \"bucket sort\". A simple example: suppose you have a column of integer percentages, range zero to 100. You know there are only 101 distinct values. So create 101 \"buckets\" (e.g. linked lists), make a single pass through your data and drop each row's ID into the right bucket, then make a second pass through the buckets, and write the row ID's out in bucket order. This is an O(N) sort technique.\n\nAny time you have a restricted data range, you can do this. Say you have 100 million rows of scientific results known to be good to only three digits -- it can have at most 1,000 distinct values (regardless of the magnitude of the values), so you can do this with 1,000 buckets and just two passes through the data.\n\nYou can also use this trick when the optimizer is asked for \"fastest first result.\" Say you have a cursor on a column of numbers with good distribution. If you do a bucket sort on the first two or three digits only, you know the first \"page\" of results will be in the first bucket. So you only need to apply qsort to that first bucket (which is very fast, since it's small), and you can deliver the first page of data to the application. This can be particularly effective in interactive situations, where the user typically looks at a few pages of data and then abandons the search. \n\nI doubt this is very relevant to Postgres. A relational database has to be general purpose, and it's hard to give it \"hints\" that would tell it when to use this particular optimization.\n\nCraig\n", "msg_date": "Thu, 16 Feb 2006 08:27:04 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "At 10:52 AM 2/16/2006, Ron wrote:\n>At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:\n>\n>>Where this does become interesting is where we can convert a datum to\n>>an integer such that if f(A) > f(B) then A > B. Then we can sort on\n>>f(X) first with just integer comparisons and then do a full tuple\n>>comparison only if f(A) = f(B). 
This would be much more cache-coherent\n>>and make these algorithms much more applicable in my mind.\n>In fact we can do better.\n>Using hash codes or what-not to map datums to keys and then sorting \n>just the keys and the pointers to those datums followed by an \n>optional final pass where we do the actual data movement is also a \n>standard technique for handling large data structures.\nI thought some follow up might be in order here.\n\nLet's pretend that we have the typical DB table where rows are ~2-4KB \napiece. 1TB of storage will let us have 256M-512M rows in such a table.\n\nA 32b hash code can be assigned to each row value such that only \nexactly equal rows will have the same hash code.\nA 32b pointer can locate any of the 256M-512M rows.\n\nNow instead of sorting 1TB of data we can sort 2^28 to 2^29 32b+32b= \n64b*(2^28 to 2^29)= 2-4GB of pointers+keys followed by an optional \npass to rearrange the actual rows if we so wish.\n\nWe get the same result while only examining and manipulating 1/50 to \n1/25 as much data during the sort.\n\nIf we want to spend more CPU time in order to save more space, we can \ncompress the key+pointer representation. That usually reduces the \namount of data to be manipulated to ~1/4 the original key+pointer \nrepresentation, reducing things to ~512M-1GB worth of compressed \npointers+keys. Or ~1/200 - ~1/100 the original amount of data we \nwere discussing.\n\nEither representation is small enough to fit within RAM rather than \nrequiring HD IO, so we solve the HD IO bottleneck in the best \npossible way: we avoid ever doing it.\n\nRon \n\n\n", "msg_date": "Thu, 16 Feb 2006 11:32:55 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "On Thu, Feb 16, 2006 at 11:32:55AM -0500, Ron wrote:\n> At 10:52 AM 2/16/2006, Ron wrote:\n> >In fact we can do better.\n> >Using hash codes or what-not to map datums to keys and then sorting \n> >just the keys and the pointers to those datums followed by an \n> >optional final pass where we do the actual data movement is also a \n> >standard technique for handling large data structures.\n\nOr in fact required if the Datums are not all the same size, which is\nthe case in PostgreSQL.\n\n> I thought some follow up might be in order here.\n> \n> Let's pretend that we have the typical DB table where rows are ~2-4KB \n> apiece. 1TB of storage will let us have 256M-512M rows in such a table.\n> \n> A 32b hash code can be assigned to each row value such that only \n> exactly equal rows will have the same hash code.\n> A 32b pointer can locate any of the 256M-512M rows.\n\nThat hash code is impossible the way you state it, since the set of\nstrings is not mappable to a 32bit integer. You probably meant that a\nhash code can be assigned such that equal rows have equal hashes (drop\nthe only).\n\n> Now instead of sorting 1TB of data we can sort 2^28 to 2^29 32b+32b= \n> 64b*(2^28 to 2^29)= 2-4GB of pointers+keys followed by an optional \n> pass to rearrange the actual rows if we so wish.\n> \n> We get the same result while only examining and manipulating 1/50 to \n> 1/25 as much data during the sort.\n\nBut this is what we do now. The tuples are loaded, we sort an array of\npointers, then we write the output. Except we don't have the hash, so\nwe require access to the 1TB of data to do the actual comparisons. 
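
For concreteness, a toy C sketch of the sort-an-array-of-pointers shape being discussed here: the moves are cheap pointer swaps, while every comparison still has to touch the rows themselves. The Row layout and key are invented for illustration and are only a stand-in for the real tuple code:

#include <stdlib.h>

typedef struct Row {
    int  key;
    char payload[2048];          /* stands in for a wide (~2KB) row */
} Row;

static int cmp_row_ptrs(const void *a, const void *b)
{
    const Row *ra = *(const Row * const *) a;
    const Row *rb = *(const Row * const *) b;

    /* this is where the rows' memory actually gets touched */
    if (ra->key < rb->key) return -1;
    if (ra->key > rb->key) return 1;
    return 0;
}

int main(void)
{
    static Row data[3] = { { 42, "" }, { 7, "" }, { 19, "" } };
    Row *ptrs[3] = { &data[0], &data[1], &data[2] };

    /* only 8-byte pointers move; the 2KB rows stay where they are */
    qsort(ptrs, 3, sizeof(Row *), cmp_row_ptrs);
    /* ptrs now points at the rows in key order: 7, 19, 42 */
    return 0;
}
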
Even\nif we did have the hash, we'd *still* need access to the data to handle\ntie-breaks.\n\nThat's why your comment about moves always being more expensive than\ncompares makes no sense. A move can be achieved simply by swapping two\npointers in the array. A compare actually needs to call all sorts of\nfunctions. If and only if we have functions for every data type to\nproduce an ordered hash, we can optimise sorts based on single\nintegers.\n\nFor reference, look at comparetup_heap(). It's just 20 lines, but each\nfunction call there expands to maybe a dozen lines of code. And it has\na loop. I don't think we're anywhere near the stage where locality of\nreference makes much difference.\n\nWe very rarely need the tuples actualised in memory in the required\norder, just the pointers are enough.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Thu, 16 Feb 2006 17:59:59 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> You can also use this trick when the optimizer is asked for \"fastest first result.\" Say you have a cursor on a column of numbers with good distribution. If you do a bucket sort on the first two or three digits only, you know the first \"page\" of results will be in the first bucket. So you only need to apply qsort to that first bucket (which is very fast, since it's small), and you can deliver the first page of data to the application. This can be particularly effective in interactive situations, where the user typically looks at a few pages of data and then abandons the search. \n\n> I doubt this is very relevant to Postgres. A relational database has to be general purpose, and it's hard to give it \"hints\" that would tell it when to use this particular optimization.\n\nActually, LIMIT does nicely for that hint; the PG planner has definitely\ngot a concept of preferring fast-start plans for limited queries. The\nreal problem in applying bucket-sort ideas is the lack of any\ndatatype-independent way of setting up the buckets.\n\nOnce or twice we've kicked around the idea of having some\ndatatype-specific sorting code paths alongside the general-purpose one,\nbut I can't honestly see this as being workable from a code maintenance\nstandpoint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Feb 2006 12:15:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index " }, { "msg_contents": "On Feb 16, 2006, at 8:32 AM, Ron wrote:\n> Let's pretend that we have the typical DB table where rows are \n> ~2-4KB apiece. 
1TB of storage will let us have 256M-512M rows in \n> such a table.\n>\n> A 32b hash code can be assigned to each row value such that only \n> exactly equal rows will have the same hash code.\n> A 32b pointer can locate any of the 256M-512M rows.\n>\n> Now instead of sorting 1TB of data we can sort 2^28 to 2^29 32b \n> +32b= 64b*(2^28 to 2^29)= 2-4GB of pointers+keys followed by an \n> optional pass to rearrange the actual rows if we so wish.\n\nI don't understand this.\n\nThis is a true statement: (H(x) != H(y)) => (x != y)\nThis is not: (H(x) < H(y)) => (x < y)\n\nHash keys can tell you there's an inequality, but they can't tell you \nhow the values compare. If you want 32-bit keys that compare in the \nsame order as the original values, here's how you have to get them:\n\n(1) sort the values into an array\n(2) use each value's array index as its key\n\nIt reduces to the problem you're trying to use it to solve.\n\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n", "msg_date": "Thu, 16 Feb 2006 09:19:24 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "At 12:19 PM 2/16/2006, Scott Lamb wrote:\n>On Feb 16, 2006, at 8:32 AM, Ron wrote:\n>>Let's pretend that we have the typical DB table where rows are\n>>~2-4KB apiece. 1TB of storage will let us have 256M-512M rows in\n>>such a table.\n>>\n>>A 32b hash code can be assigned to each row value such that only\n>>exactly equal rows will have the same hash code.\n>>A 32b pointer can locate any of the 256M-512M rows.\n>>\n>>Now instead of sorting 1TB of data we can sort 2^28 to 2^29 32b \n>>+32b= 64b*(2^28 to 2^29)= 2-4GB of pointers+keys followed by an\n>>optional pass to rearrange the actual rows if we so wish.\n>\n>I don't understand this.\n>\n>This is a true statement: (H(x) != H(y)) => (x != y)\n>This is not: (H(x) < H(y)) => (x < y)\n>\n>Hash keys can tell you there's an inequality, but they can't tell you\n>how the values compare. If you want 32-bit keys that compare in the\n>same order as the original values, here's how you have to get them:\nFor most hash codes, you are correct. There is a class of hash or \nhash-like codes that maintains the mapping to support that second statement.\n\nMore later when I can get more time.\nRon \n\n\n", "msg_date": "Thu, 16 Feb 2006 13:47:14 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "On Thu, 2006-02-16 at 12:35 +0100, Steinar H. Gunderson wrote:\n> glibc-2.3.5/stdlib/qsort.c:\n> \n> /* Order size using quicksort. This implementation incorporates\n> four optimizations discussed in Sedgewick:\n> \n> I can't see any references to merge sort in there at all.\n\nstdlib/qsort.c defines _quicksort(), not qsort(), which is defined by\nmsort.c. 
On looking closer, it seems glibc actually tries to determine\nthe physical memory in the machine -- if it is sorting a single array\nthat exceeds 1/4 of the machine's physical memory, it uses quick sort,\notherwise it uses merge sort.\n\n-Neil\n\n\n", "msg_date": "Thu, 16 Feb 2006 14:14:03 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "On Thu, 2006-02-16 at 12:15 -0500, Tom Lane wrote:\n> Once or twice we've kicked around the idea of having some\n> datatype-specific sorting code paths alongside the general-purpose one,\n> but I can't honestly see this as being workable from a code maintenance\n> standpoint.\n> \n> \t\t\tregards, tom lane\n\n\nIt seems that instead of maintaining a different sorting code path for\neach data type, you could get away with one generic path and one\n(hopefully faster) path if you allowed data types to optionally support\na 'sortKey' interface by providing a function f which maps inputs to 32-\nbit int outputs, such that the following two properties hold:\n\nf(a)>=f(b) iff a>=b\nif a==b then f(a)==f(b)\n\nSo if a data type supports the sortKey interface you could perform the\nsort on f(value) and only refer back to the actual element comparison\nfunctions when two sortKeys have the same value.\n\nData types which could probably provide a useful function for f would be\nint2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n\nDepending on the overhead, you might not even need to maintain 2\nindependent search code paths, since you could always use f(x)=0 as the\ndefault sortKey function which would degenerate to the exact same sort\nbehavior in use today.\n\n-- Mark Lewis\n", "msg_date": "Thu, 16 Feb 2006 14:17:36 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "Hi, Mark,\n\nMark Lewis schrieb:\n\n> It seems that instead of maintaining a different sorting code path for\n> each data type, you could get away with one generic path and one\n> (hopefully faster) path if you allowed data types to optionally support\n> a 'sortKey' interface by providing a function f which maps inputs to 32-\n> bit int outputs, such that the following two properties hold:\n> \n> f(a)>=f(b) iff a>=b\n> if a==b then f(a)==f(b)\n\nHmm, to remove redundancy, I'd change the <= to a < and define:\n\nif a==b then f(a)==f(b)\nif a<b then f(a)<=f(b)\n\n> Data types which could probably provide a useful function for f would be\n> int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n\nWith int2 or some restricted ranges of oid and int4, we could even\nimplement a bucket sort.\n\nMarkus\n", "msg_date": "Thu, 16 Feb 2006 23:33:48 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "On Thu, Feb 16, 2006 at 02:17:36PM -0800, Mark Lewis wrote:\n> It seems that instead of maintaining a different sorting code path for\n> each data type, you could get away with one generic path and one\n> (hopefully faster) path if you allowed data types to optionally support\n> a 'sortKey' interface by providing a function f which maps inputs to 32-\n> bit int outputs, such that the following two properties hold:\n> \n> f(a)>=f(b) iff a>=b\n> if a==b then f(a)==f(b)\n\nNote this is a property of the collation, not the type. 
For example\nstrings can be sorted in many ways and the sortKey must reflect that.\nSo in postgres terms it's a property of the btree operator class.\n\nIt's something I'd like to do if I get A Round Tuit. :)\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Thu, 16 Feb 2006 23:40:06 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "Markus Schaber <[email protected]> writes:\n\n> Hmm, to remove redundancy, I'd change the <= to a < and define:\n> \n> if a==b then f(a)==f(b)\n> if a<b then f(a)<=f(b)\n> \n> > Data types which could probably provide a useful function for f would be\n> > int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n\nHow exactly do you imagine doing this for text?\n\nI could see doing it for char(n)/varchar(n) where n<=4 in SQL_ASCII though.\n\n-- \ngreg\n\n", "msg_date": "16 Feb 2006 17:51:02 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "\n\n> It seems that instead of maintaining a different sorting code path for\n> each data type, you could get away with one generic path and one\n> (hopefully faster) path if you allowed data types to optionally support\n> a 'sortKey' interface by providing a function f which maps inputs to 32-\n> bit int outputs, such that the following two properties hold:\n\n\tLooks like the decorate-sort-undecorate pattern, which works quite well. \nGood idea.\n\tI would have said a 64 bit int, but it's the same idea. However it won't \nwork for floats, which is a pity, because floats fit in 64 bits. Unless \nmore types creep in the code path (which would not necessarily make it \nthat slower).\n\tAs for text, the worst case is when all strings start with the same 8 \nletters, but a good case pops up when a few-letter code is used as a key \nin a table. Think about a zipcode, for instance. If a merge join needs to \nsort on zipcodes, it might as well sort on 64-bits integers...\n\n\tBy the way, I'd like to declare my zipcode columns as SQL_ASCII while the \nrest of my database is in UNICODE, so they are faster to index and sort. \nCome on, MySQL does it...\n\n\tKeep up !\n", "msg_date": "Fri, 17 Feb 2006 00:05:23 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create Index" }, { "msg_contents": "On Thu, 2006-02-16 at 17:51 -0500, Greg Stark wrote:\n> > > Data types which could probably provide a useful function for f would be\n> > > int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n> \n> How exactly do you imagine doing this for text?\n> \n> I could see doing it for char(n)/varchar(n) where n<=4 in SQL_ASCII though.\n\n\nIn SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit\nsortKey as elsewhere suggested). 
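
A small C sketch of that first-N-characters idea for single-byte SQL_ASCII strings, packing the first 8 bytes into a big-endian 64-bit key. The function name and test strings are invented for illustration, and a locale-aware collation would need something like strxfrm() before taking the prefix:

#include <stdint.h>
#include <stdio.h>

/* Pack the first 8 bytes of a NUL-terminated SQL_ASCII string into a
 * big-endian uint64.  Comparing the keys as unsigned integers then
 * gives the same order as strcmp() on those first 8 bytes; equal keys
 * must still fall back to a full strcmp(). */
static uint64_t string_prefix_key(const char *s)
{
    uint64_t key = 0;
    for (int i = 0; i < 8; i++) {
        key <<= 8;
        if (*s)
            key |= (unsigned char) *s++;
        /* shorter strings are padded with 0, which sorts first,
         * just as strcmp() would order them */
    }
    return key;
}

int main(void)
{
    const char *a = "abcdefgh-long-tail";
    const char *b = "abcdefgz";
    const char *c = "abc";

    printf("%d %d\n",
           string_prefix_key(a) < string_prefix_key(b),   /* 1 */
           string_prefix_key(c) < string_prefix_key(a));  /* 1 */
    return 0;
}
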
The sorting key doesn't need to be a\none-to-one mapping.\n\n-- Mark Lewis\n", "msg_date": "Thu, 16 Feb 2006 15:23:09 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "Hi, PFC,\n\nPFC schrieb:\n\n> By the way, I'd like to declare my zipcode columns as SQL_ASCII\n> while the rest of my database is in UNICODE, so they are faster to\n> index and sort. Come on, MySQL does it...\n\nAnother use case for parametric column definitions - charset definitions\n- and the first one that cannot be emulated via constraints.\n\nOther use cases I remember were range definitions for numbers or PostGIS\ndimension, subtype and SRID, but those cann all be emulated via checks /\nconstraints.\n\nMarkus\n", "msg_date": "Fri, 17 Feb 2006 01:18:06 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create Index" }, { "msg_contents": "On Fri, Feb 17, 2006 at 12:05:23AM +0100, PFC wrote:\n> \tI would have said a 64 bit int, but it's the same idea. However it \n> \twon't work for floats, which is a pity, because floats fit in 64 bits. \n\nActually, you can compare IEEE floats directly as ints, as long as they're\npositive. (If they can be both positive and negative, you need to take\nspecial care of the sign bit first, but it's still doable.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 17 Feb 2006 03:02:19 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create Index" }, { "msg_contents": "On Thu, 16 Feb 2006, Mark Lewis wrote:\n\n> On Thu, 2006-02-16 at 17:51 -0500, Greg Stark wrote:\n>>>> Data types which could probably provide a useful function for f would be\n>>>> int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n>>\n>> How exactly do you imagine doing this for text?\n>>\n>> I could see doing it for char(n)/varchar(n) where n<=4 in SQL_ASCII though.\n>\n>\n> In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit\n> sortKey as elsewhere suggested). The sorting key doesn't need to be a\n> one-to-one mapping.\n\nthat would violate your second contraint ( f(a)==f(b) iff (a==b) )\n\nif you could drop that constraint (the cost of which would be extra 'real' \ncompares within a bucket) then a helper function per datatype could work \nas you are talking.\n\nDavid Lang\n", "msg_date": "Thu, 16 Feb 2006 21:33:16 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "At 01:47 PM 2/16/2006, Ron wrote:\n>At 12:19 PM 2/16/2006, Scott Lamb wrote:\n>>On Feb 16, 2006, at 8:32 AM, Ron wrote:\n>>>Let's pretend that we have the typical DB table where rows are\n>>>~2-4KB apiece. 
1TB of storage will let us have 256M-512M rows in\n>>>such a table.\n>>>\n>>>A 32b hash code can be assigned to each row value such that only\n>>>exactly equal rows will have the same hash code.\n>>>A 32b pointer can locate any of the 256M-512M rows.\n>>>\n>>>Now instead of sorting 1TB of data we can sort 2^28 to 2^29 32b \n>>>+32b= 64b*(2^28 to 2^29)= 2-4GB of pointers+keys followed by an\n>>>optional pass to rearrange the actual rows if we so wish.\n>>\n>>I don't understand this.\n>>\n>>This is a true statement: (H(x) != H(y)) => (x != y)\n>>This is not: (H(x) < H(y)) => (x < y)\n>>\n>>Hash keys can tell you there's an inequality, but they can't tell you\n>>how the values compare. If you want 32-bit keys that compare in the\n>>same order as the original values, here's how you have to get them:\n>For most hash codes, you are correct. There is a class of hash or \n>hash-like codes that maintains the mapping to support that second statement.\n>\n>More later when I can get more time.\n>Ron\n\nOK, so here's _a_ way (there are others) to obtain a mapping such that\n if a < b then f(a) < f (b) and\n if a == b then f(a) == f(b)\n\nPretend each row is a integer of row size (so a 2KB row becomes a \n16Kb integer; a 4KB row becomes a 32Kb integer; etc)\nSince even a 1TB table made of such rows can only have 256M - 512M \npossible values even if each row is unique, a 28b or 29b key is large \nenough to represent each row's value and relative rank compared to \nall of the others even if all row values are unique.\n\nBy scanning the table once, we can map say 0000001h (Hex used to ease \ntyping) to the row with the minimum value and 1111111h to the row \nwith the maximum value as well as mapping everything in between to \ntheir appropriate keys. That same scan can be used to assign a \npointer to each record's location.\n\nWe can now sort the key+pointer pairs instead of the actual data and \nuse an optional final pass to rearrange the actual rows if we wish.\n\nThat initial scan to set up the keys is expensive, but if we wish \nthat cost can be amortized over the life of the table so we don't \nhave to pay it all at once. In addition, once we have created those \nkeys, then can be saved for later searches and sorts.\n\nFurther space savings can be obtained whenever there are duplicate \nkeys and/or when compression methods are used on the Key+pointer pairs.\n\nRon\n\n\n\n\n\n\n", "msg_date": "Fri, 17 Feb 2006 01:20:58 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "On f�s, 2006-02-17 at 01:20 -0500, Ron wrote:\n> At 01:47 PM 2/16/2006, Ron wrote:\n> >At 12:19 PM 2/16/2006, Scott Lamb wrote:\n> >>On Feb 16, 2006, at 8:32 AM, Ron wrote:\n> >>>Let's pretend that we have the typical DB table where rows are\n> >>>~2-4KB apiece. 
1TB of storage will let us have 256M-512M rows in\n> >>>such a table.\n> >>>\n> >>>A 32b hash code can be assigned to each row value such that only\n> >>>exactly equal rows will have the same hash code.\n> >>>A 32b pointer can locate any of the 256M-512M rows.\n> >>>\n> >>>Now instead of sorting 1TB of data we can sort 2^28 to 2^29 32b \n> >>>+32b= 64b*(2^28 to 2^29)= 2-4GB of pointers+keys followed by an\n> >>>optional pass to rearrange the actual rows if we so wish.\n> >>\n> >>I don't understand this.\n> >>\n> >>This is a true statement: (H(x) != H(y)) => (x != y)\n> >>This is not: (H(x) < H(y)) => (x < y)\n> >>\n> >>Hash keys can tell you there's an inequality, but they can't tell you\n> >>how the values compare. If you want 32-bit keys that compare in the\n> >>same order as the original values, here's how you have to get them:\n> >For most hash codes, you are correct. There is a class of hash or \n> >hash-like codes that maintains the mapping to support that second statement.\n> >\n> >More later when I can get more time.\n> >Ron\n> \n> OK, so here's _a_ way (there are others) to obtain a mapping such that\n> if a < b then f(a) < f (b) and\n> if a == b then f(a) == f(b)\n\n> By scanning the table once, we can map say 0000001h (Hex used to ease \n> typing) to the row with the minimum value and 1111111h to the row \n> with the maximum value as well as mapping everything in between to \n> their appropriate keys. That same scan can be used to assign a \n> pointer to each record's location.\n\nThis step is just as expensive as the original sort you\nwant to replace/improve. If you want to keep this mapping\nsaved as a sort of an index, or as part ot each row data, this will make\nthe cost of inserts and updates enormous.\n\n> \n> We can now sort the key+pointer pairs instead of the actual data and \n> use an optional final pass to rearrange the actual rows if we wish.\n\nHow are you suggesting this mapping be accessed? If the \nmapping is kept separate from the tuple data, as in an index, then how\nwill you look up the key?\n\n> That initial scan to set up the keys is expensive, but if we wish \n> that cost can be amortized over the life of the table so we don't \n> have to pay it all at once. In addition, once we have created those \n> keys, then can be saved for later searches and sorts.\n\nWhat is the use case where this would work better than a \nregular btree index ?\n\ngnari\n\n\n", "msg_date": "Fri, 17 Feb 2006 09:24:21 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create" }, { "msg_contents": "Hi, David,\n\nDavid Lang schrieb:\n\n>> In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit\n>> sortKey as elsewhere suggested). 
The sorting key doesn't need to be a\n>> one-to-one mapping.\n\n> that would violate your second contraint ( f(a)==f(b) iff (a==b) )\n\nno, it doesn't.\n\nWhen both strings are equal, then the first characters are equal, too.\n\nIf they are not equal, the constraint condition does not match.\n\nThe first characters of the strings may be equal as f(a) may be equal to\nf(b) as to the other constraint.\n\nMarkus\n", "msg_date": "Fri, 17 Feb 2006 11:13:41 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "Hi, Ron,\n\nRon schrieb:\n\n> OK, so here's _a_ way (there are others) to obtain a mapping such that\n> if a < b then f(a) < f (b) and\n> if a == b then f(a) == f(b)\n> \n> Pretend each row is a integer of row size (so a 2KB row becomes a 16Kb\n> integer; a 4KB row becomes a 32Kb integer; etc)\n> Since even a 1TB table made of such rows can only have 256M - 512M\n> possible values even if each row is unique, a 28b or 29b key is large\n> enough to represent each row's value and relative rank compared to all\n> of the others even if all row values are unique.\n> \n> By scanning the table once, we can map say 0000001h (Hex used to ease\n> typing) to the row with the minimum value and 1111111h to the row with\n> the maximum value as well as mapping everything in between to their\n> appropriate keys. That same scan can be used to assign a pointer to\n> each record's location.\n\nBut with a single linear scan, this cannot be accomplished, as the table\ncontents are neither sorted nor distributed linearly between the minimum\nand the maximum.\n\nFor this mapping, you need a full table sort.\n\n> That initial scan to set up the keys is expensive, but if we wish that\n> cost can be amortized over the life of the table so we don't have to pay\n> it all at once. In addition, once we have created those keys, then can\n> be saved for later searches and sorts.\n\nBut for every update or insert, you have to resort the keys, which is\n_very_ expensive as it basically needs to update a huge part of the table.\n\nMarkus\n", "msg_date": "Fri, 17 Feb 2006 11:19:45 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "\n\tHas anybody got some profiler data on the amount of time spent in \ncomparisons during a sort ? Say, the proposals here would give the most \ngains on simple types like INTEGER ; so it would be interesting to know \nhow much time is now spent in comparisons for sorting a column of ints. If \nit's like, 10% of the total time, well...\n\n\tMore hand-waving :\n\n\tWhat are the usage case for sorts ?\n\n\t- potentially huge data sets : create index, big joins, reporting queries \netc.\n\t- small data sets : typically, a query with an ORDER BY which will return \na small amount of rows (website stuff), or joins not small enough to use a \nHashAggregate, but not big enough to create an index just for them.\n\n\tThe cost of a comparison vs. moving stuff around and fetching stuff is \nprobably very different in these two cases. If it all neatly fits in \nsort_mem, you can do fancy stuff (like sorting on SortKeys) which will \nneed to access the data in almost random order when time comes to hand the \nsorted data back. 
So, I guess the SortKey idea would rather apply to the \nlatter case only, which is CPU limited.\n\n\tAnyway, I was wondering about queries with multipart keys, like ORDER BY \nzipcode, client_name, date and the like. Using just an int64 as the key \nisn't going to help a lot here. Why not use a binary string of limited \nlength ? I'd tend to think it would not be that slower than comparing \nints, and it would be faster than calling each comparison function for \neach column. Each key part would get assigned to a byte range in the \nstring.\n\tIt would waste some memory, but for instance, using 2x the memory for \nhalf the time would be a good tradeoff if the amount of memory involved is \nin the order of megabytes.\n\tAlso, it would solve the problem of locales. Comparisons involving \nlocales are slow, but strings can be converted to a canonical form \n(removing accents and stuff), and then binary sorted.\n\n\tAlso I'll insert a plug for the idea that the Sort needs to know if there \nwill be a LIMIT afterwards ; this way it could reduce its working set by \nsimply discarding the rows which would have been discarded anyway by the \nLIMIT. Say you want the 100 first rows out of a million ordered rows. If \nthe sort knows this, it can be performed in the amount of memory for a 100 \nrows sort.\n\n", "msg_date": "Fri, 17 Feb 2006 11:55:34 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] qsort again (was Re: Strange Create" }, { "msg_contents": "At 04:24 AM 2/17/2006, Ragnar wrote:\n>On fös, 2006-02-17 at 01:20 -0500, Ron wrote:\n> >\n> > OK, so here's _a_ way (there are others) to obtain a mapping such that\n> > if a < b then f(a) < f (b) and\n> > if a == b then f(a) == f(b)\n>\n> > By scanning the table once, we can map say 0000001h (Hex used to ease\n> > typing) to the row with the minimum value and 1111111h to the row\n> > with the maximum value as well as mapping everything in between to\n> > their appropriate keys. That same scan can be used to assign a\n> > pointer to each record's location.\n>\n>This step is just as expensive as the original \n>sort you want to replace/improve.\n\nWhy do you think that? External sorts involve \nthe equivalent of multiple scans of the table to \nbe sorted, sometimes more than lgN (where N is \nthe number of items in the table to be \nsorted). Since this is physical IO we are \ntalking about, each scan is very expensive, and \ntherefore 1 scan is going to take considerably \nless time than >= lgN scans will be.\n\n\n>If you want to keep this mapping saved as a sort \n>of an index, or as part ot each row data, this \n>will make the cost of inserts and updates enormous.\n\nNot sure you've got this right either. Looks to \nme like we are adding a <= 32b quantity to each \nrow. Once we know the mapping, incrementally \nupdating it upon insert or update would seem to \nbe simple matter of a fast search for the correct \nranking [Interpolation search, which we have all \nthe needed data for, is O(lglgN). Hash based \nsearch is O(1)]; plus an increment/decrement of \nthe key values greater/less than the key value of \nthe row being inserted / updated. 
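
Picking up the LIMIT suggestion a few paragraphs up, a tiny self-contained C sketch of a bounded top-N pass, where memory is proportional to the LIMIT rather than to the input. Integer keys and the constants are invented for illustration; a real version would use the tuple comparator and probably a heap:

#include <stdio.h>
#include <stdlib.h>

/* Keep only the LIMIT smallest keys seen so far, in sorted order.
 * Memory use is bounded by the LIMIT, not by the input size. */
#define LIMIT 5

static int    top[LIMIT];
static size_t ntop = 0;

static void consider(int key)
{
    size_t pos;

    /* anything that cannot make the first LIMIT rows is discarded */
    if (ntop == LIMIT && key >= top[LIMIT - 1])
        return;

    /* insertion into a tiny sorted array; the old largest is evicted */
    for (pos = ntop < LIMIT ? ntop : LIMIT - 1;
         pos > 0 && top[pos - 1] > key; pos--)
        top[pos] = top[pos - 1];
    top[pos] = key;
    if (ntop < LIMIT)
        ntop++;
}

int main(void)
{
    /* pretend this loop is the scan over a million input rows */
    srand(12345);
    for (int i = 0; i < 1000000; i++)
        consider(rand());

    for (size_t i = 0; i < ntop; i++)
        printf("%d\n", top[i]);      /* the 5 smallest keys, ascending */
    return 0;
}
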
Given than we \nare updating all the keys in a specific range \nwithin a tree structure, that update can be done \nin O(lgm) (where m is the number of records affected).\n\n> > We can now sort the key+pointer pairs instead of the actual data and\n> > use an optional final pass to rearrange the actual rows if we wish.\n>\n>How are you suggesting this mapping be accessed? \n>If the mapping is kept separate from the tuple \n>data, as in an index, then how will you look up the key?\n??? We've effectively created a data set where \neach record is a pointer to a DB row plus its \nkey. We can now sort the data set by key and \nthen do an optional final pass to rearrange the \nactual DB rows if we so wish. Since that final \npass is very expensive, it is good that not all \nuse scenarios will need that final pass.\n\nThe amount of storage required to sort this \nrepresentation of the table rather than the \nactual table is so much less that it turns an \nexternal sorting problem into a internal sorting \nproblem with an optional final pass that is =1= \nscan (albeit one scan with a lot of seeks and \ndata movement). This is a big win. It is a \nvariation of a well known technique. See Sedgewick, Knuth, etc.\n\n\n> > That initial scan to set up the keys is expensive, but if we wish\n> > that cost can be amortized over the life of the table so we don't\n> > have to pay it all at once. In addition, once we have created those\n> > keys, then can be saved for later searches and sorts.\n>\n>What is the use case where this would work better than a\n>regular btree index ?\nAgain, ??? btree indexes address different \nissues. They do not in any way help create a \ncompact data representation of the original data \nthat saves enough space so as to turn an external \nranking or sorting problem into an internal one.\n\n\nRon \n\n\n", "msg_date": "Fri, 17 Feb 2006 08:01:34 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "At 05:19 AM 2/17/2006, Markus Schaber wrote:\n>Hi, Ron,\n>\n>Ron schrieb:\n>\n> > OK, so here's _a_ way (there are others) to obtain a mapping such that\n> > if a < b then f(a) < f (b) and\n> > if a == b then f(a) == f(b)\n> >\n> > Pretend each row is a integer of row size (so a 2KB row becomes a 16Kb\n> > integer; a 4KB row becomes a 32Kb integer; etc)\n> > Since even a 1TB table made of such rows can only have 256M - 512M\n> > possible values even if each row is unique, a 28b or 29b key is large\n> > enough to represent each row's value and relative rank compared to all\n> > of the others even if all row values are unique.\n> >\n> > By scanning the table once, we can map say 0000001h (Hex used to ease\n> > typing) to the row with the minimum value and 1111111h to the row with\n> > the maximum value as well as mapping everything in between to their\n> > appropriate keys. That same scan can be used to assign a pointer to\n> > each record's location.\n>\n>But with a single linear scan, this cannot be accomplished, as the table\n>contents are neither sorted nor distributed linearly between the minimum\n>and the maximum.\nSo what? We are talking about key assignment here, not anything that \nrequires physically manipulating the actual DB rows.\nOne physical IO pass should be all that's needed.\n\n\n>For this mapping, you need a full table sort.\nOne physical IO pass should be all that's needed. 
However, let's \npretend you are correct and that we do need to sort the table to get \nthe key mapping. Even so, we would only need to do it =once= and \nthen we would be able to use and incrementally update the results \nforever afterward. Even under this assumption, one external sort to \nsave all subsequent such sorts seems well worth it.\n\nIOW, even if I'm wrong about the initial cost to do this; it is still \nworth doing ;-)\n\n\n> > That initial scan to set up the keys is expensive, but if we wish that\n> > cost can be amortized over the life of the table so we don't have to pay\n> > it all at once. In addition, once we have created those keys, then can\n> > be saved for later searches and sorts.\n>\n>But for every update or insert, you have to resort the keys, which is\n>_very_ expensive as it basically needs to update a huge part of the table.\n\n??? You do not need to resort already ordered data to insert a new \nelement into it such that the data stays ordered! Once we have done \nthe key ordering operation once, we should not ever need to do it \nagain on the original data. Else most sorting algorithms wouldn't work ;-)\n\n\nRon \n\n\n", "msg_date": "Fri, 17 Feb 2006 08:23:40 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "On Fri, Feb 17, 2006 at 08:23:40AM -0500, Ron wrote:\n> >For this mapping, you need a full table sort.\n> One physical IO pass should be all that's needed. However, let's \n> pretend you are correct and that we do need to sort the table to get \n> the key mapping. Even so, we would only need to do it =once= and \n> then we would be able to use and incrementally update the results \n> forever afterward. Even under this assumption, one external sort to \n> save all subsequent such sorts seems well worth it.\n> \n> IOW, even if I'm wrong about the initial cost to do this; it is still \n> worth doing ;-)\n\nI think you're talking about something different here. You're thinking\nof having the whole table sorted and when you add a new value you add\nit in such a way to keep it sorted. The problem is, what do you sort it\nby? If you've sorted the table by col1, then when the user does ORDER\nBY col2 it's useless.\n\nIndeed, this is what btrees do, you store the order of the table\nseperate from the data. And you can store multiple orders. But even\nthen, when someone does ORDER BY lower(col1), it's still useless.\n\nAnd you're right, we still need to do the single mass sort in the\nbeginning, which is precisely what we're trying to optimise here.\n\n> ??? You do not need to resort already ordered data to insert a new \n> element into it such that the data stays ordered! Once we have done \n> the key ordering operation once, we should not ever need to do it \n> again on the original data. Else most sorting algorithms wouldn't work ;-)\n\nWe already do this with btree indexes. I'm not sure what you are\nproposing that improves on that.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. 
A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 17 Feb 2006 16:53:58 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "On Feb 16, 2006, at 2:17 PM, Mark Lewis wrote:\n> Data types which could probably provide a useful function for f \n> would be\n> int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n\n...and with some work, floats (I think just the exponent would work, \nif nothing else). bytea. Probably just about anything.\n\nInteresting. If you abandon the idea that collisions should be \nimpossible (they're not indexes) or extremely rare (they're not \nhashes), it's pretty easy to come up with a decent hint to avoid a \nlot of dereferences.\n\n--\nScott Lamb <http://www.slamb.org/>\n", "msg_date": "Fri, 17 Feb 2006 08:18:41 -0800", "msg_from": "Scott Lamb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "On Fri, Feb 17, 2006 at 08:18:41AM -0800, Scott Lamb wrote:\n> On Feb 16, 2006, at 2:17 PM, Mark Lewis wrote:\n> >Data types which could probably provide a useful function for f \n> >would be\n> >int2, int4, oid, and possibly int8 and text (at least for SQL_ASCII).\n> \n> ...and with some work, floats (I think just the exponent would work, \n> if nothing else). bytea. Probably just about anything.\n> \n> Interesting. If you abandon the idea that collisions should be \n> impossible (they're not indexes) or extremely rare (they're not \n> hashes), it's pretty easy to come up with a decent hint to avoid a \n> lot of dereferences.\n\nYep, pretty much for any datatype you create a mapping function to map\nit to a signed int32. All you have to guarantee is that f(a) > f(b)\nimplies that a > b. Only if f(a) == f(b) do you need to compare a and\nb.\n\nYou then change the sorting code to have an array of (Datum,int32)\n(ouch, double the storage) where the int32 is the f(Datum). And in the\ncomparison routines you first check the int32. If they give an order\nyou're done. On match you do the full comparison.\n\nFor integer types (int2,int4,int8,oid) the conversion is\nstraightforward. For float you'd use the exponent and the first few bits of the\nmantissa. For strings you'd have to bail, or use a strxfrm equivalent.\nNULL would be INT_MAX or INT_MIN depending on where you want it. Thing\nis, even if you don't have such a function and always return zero, the\nresults will still be right.\n\nNot a new idea, but it would be very nice to implement. It would\nproduce nice speedups for types where comparisons are expensive. 
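
To make the (Datum,int32) scheme just described concrete, a small self-contained C sketch using float as the datum. The struct name, key function and demo values are invented for illustration; for a 4-byte float the whole value happens to fit in the 32-bit key, so the fallback comparison only fires for genuinely equal values:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One sort entry: the datum plus a pre-computed int32 key with the
 * property  key(a) > key(b)  =>  a > b. */
typedef struct SortEntry {
    int32_t key;
    float   datum;              /* stands in for a Datum */
} SortEntry;

/* Classic IEEE-754 trick: flip all bits for negatives, set the sign
 * bit for non-negatives, then recentre into the signed int32 range.
 * NaNs are ignored here and would need separate treatment. */
static int32_t float_sort_key(float f)
{
    uint32_t u;

    if (f == 0.0f)
        f = 0.0f;               /* fold -0.0 into +0.0 */
    memcpy(&u, &f, sizeof(u));
    u = (u & 0x80000000u) ? ~u : (u | 0x80000000u);
    return (int32_t)(u ^ 0x80000000u);
}

static int cmp_entry(const void *pa, const void *pb)
{
    const SortEntry *a = pa;
    const SortEntry *b = pb;

    /* cheap first pass on the int32 keys */
    if (a->key < b->key) return -1;
    if (a->key > b->key) return 1;
    /* only on a tie do we fall back to the full comparison */
    return (a->datum > b->datum) - (a->datum < b->datum);
}

int main(void)
{
    float vals[] = { 3.5f, -2.25f, 0.0f, -0.0f, 1e-3f, -1e30f };
    SortEntry e[6];

    for (int i = 0; i < 6; i++) {
        e[i].datum = vals[i];
        e[i].key = float_sort_key(vals[i]);
    }
    qsort(e, 6, sizeof(SortEntry), cmp_entry);
    for (int i = 0; i < 6; i++)
        printf("%g\n", e[i].datum);  /* -1e+30, -2.25, the two zeros, 0.001, 3.5 */
    return 0;
}
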
And\nmore importantly, the bulk of the comparisons can be moved inline and\nmake the whole cache-friendlyness discussed here much more meaningful.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 17 Feb 2006 17:31:23 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "At 10:53 AM 2/17/2006, Martijn van Oosterhout wrote:\n>On Fri, Feb 17, 2006 at 08:23:40AM -0500, Ron wrote:\n> > >For this mapping, you need a full table sort.\n> > One physical IO pass should be all that's needed. However, let's\n> > pretend you are correct and that we do need to sort the table to get\n> > the key mapping. Even so, we would only need to do it =once= and\n> > then we would be able to use and incrementally update the results\n> > forever afterward. Even under this assumption, one external sort to\n> > save all subsequent such sorts seems well worth it.\n> >\n> > IOW, even if I'm wrong about the initial cost to do this; it is still\n> > worth doing ;-)\n>\n>I think you're talking about something different here. You're thinking\n>of having the whole table sorted and when you add a new value you add\n>it in such a way to keep it sorted. The problem is, what do you sort it\n>by? If you've sorted the table by col1, then when the user does ORDER\n>BY col2 it's useless.\nNo, I'm thinking about how to represent DB row data in such a way that\na= we use a compact enough representation that we can sort internally \nrather than externally.\nb= we do the sort once and avoid most of the overhead upon subsequent \nsimilar requests.\n\nI used the example of sorting on the entire row to show that the \napproach works even when the original record being sorted by is very large.\nAll my previous comments on this topic hold for the case where we are \nsorting on only part of a row as well.\n\nIf all you are doing is sorting on a column or a few columns, what \nI'm discussing is even easier since treating the columns actually \nbeing used a sort criteria as integers rather than the whole row as \nan atomic unit eats less resources during the key creation and \nmapping process. If the row is 2-4KB in size, but we only care about \nsome combination of columns that only takes on <= 2^8 or <= 2^16 \ndifferent values, then what I've described will be even better than \nthe original example I gave.\n\nBasically, the range of a key is going to be restricted by how\na= big the field is that represents the key (columns and such are \nusually kept narrow for performance reasons) or\nb= big each row is (the more space each row takes, the fewer rows fit \ninto any given amount of storage)\nc= many rows there are in the table\nBetween the conditions, the range of a key tends to be severely \nrestricted and therefore use much less space than sorting the actual \nDB records would. ...and that gives us something we can take advantage of.\n\n\n>Indeed, this is what btrees do, you store the order of the table\n>seperate from the data. And you can store multiple orders. 
But even\n>then, when someone does ORDER BY lower(col1), it's still useless.\n>\n>And you're right, we still need to do the single mass sort in the\n>beginning, which is precisely what we're trying to optimise here.\nSigh. My points were:\n1= we have information available to us that allows us to map the rows \nin such a way as to turn most external sorts into internal sorts, \nthereby avoiding the entire external sorting problem in those \ncases. This is a huge performance improvement.\n\n2= that an external sort is =NOT= required for initial key \nassignment, but even if it was it would be worth it.\n\n3= that this is a variation of a well known technique so I'm not \nsuggesting heresy here.\n\n\nRon \n\n\n", "msg_date": "Fri, 17 Feb 2006 11:44:51 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "On Thu, 2006-02-16 at 21:33 -0800, David Lang wrote:\n> > In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit\n> > sortKey as elsewhere suggested). The sorting key doesn't need to be a\n> > one-to-one mapping.\n> \n> that would violate your second contraint ( f(a)==f(b) iff (a==b) )\n> \n> if you could drop that constraint (the cost of which would be extra 'real' \n> compares within a bucket) then a helper function per datatype could work \n> as you are talking.\n\nI think we're actually on the same page here; you're right that the\nconstraint above ( f(a)==f(b) iff a==b ) can't be extended to data types\nwith more than 32 bits of value space. But the constraint I listed was\nactually:\n\nif a==b then f(a)==f(b)\n\nWhich doesn't imply 'if and only if'. It's a similar constraint to\nhashcodes; the same value will always have the same hash, but you're not\nguaranteed that the hashcodes for two distinct values will be unique.\n\n-- Mark\n", "msg_date": "Fri, 17 Feb 2006 09:30:37 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "On f�s, 2006-02-17 at 08:01 -0500, Ron wrote:\n> At 04:24 AM 2/17/2006, Ragnar wrote:\n> >On f�s, 2006-02-17 at 01:20 -0500, Ron wrote:\n> > >\n> > > OK, so here's _a_ way (there are others) to obtain a mapping such that\n> > > if a < b then f(a) < f (b) and\n> > > if a == b then f(a) == f(b)\n> >\n> > > By scanning the table once, we can map say 0000001h (Hex used to ease\n> > > typing) to the row with the minimum value and 1111111h to the row\n> > > with the maximum value as well as mapping everything in between to\n> > > their appropriate keys. That same scan can be used to assign a\n> > > pointer to each record's location.\n> >\n> >This step is just as expensive as the original \n> >sort you want to replace/improve.\n> \n> Why do you think that? External sorts involve \n> the equivalent of multiple scans of the table to \n> be sorted, sometimes more than lgN (where N is \n> the number of items in the table to be \n> sorted). Since this is physical IO we are \n> talking about, each scan is very expensive, and \n> therefore 1 scan is going to take considerably \n> less time than >= lgN scans will be.\n\nCall me dim, but please explain exactly how you are going\nto build this mapping in one scan. Are you assuming\nthe map will fit in memory? 
\n\n> \n> \n> >If you want to keep this mapping saved as a sort \n> >of an index, or as part ot each row data, this \n> >will make the cost of inserts and updates enormous.\n> \n> Not sure you've got this right either. Looks to \n> me like we are adding a <= 32b quantity to each \n> row. Once we know the mapping, incrementally \n> updating it upon insert or update would seem to \n> be simple matter of a fast search for the correct \n> ranking [Interpolation search, which we have all \n> the needed data for, is O(lglgN). Hash based \n> search is O(1)]; plus an increment/decrement of \n> the key values greater/less than the key value of \n> the row being inserted / updated. Given than we \n> are updating all the keys in a specific range \n> within a tree structure, that update can be done \n> in O(lgm) (where m is the number of records affected).\n\nSay again ?\nLet us say you have 1 billion rows, where the\ncolumn in question contains strings like \nbaaaaaaaaaaaaaaa....aaa\nbaaaaaaaaaaaaaaa....aab\nbaaaaaaaaaaaaaaa....aac\n...\nnot necessarily in this order on disc of course\n\nThe minimum value would be keyed as 00000001h,\nthe next one as 00000002h and so on.\n\nNow insert new value 'aaaaa'\n\nNot only will you have to update 1 billion records,\nbut also all the values in your map.\n\nplease explain\n\ngnari\n\n\n", "msg_date": "Fri, 17 Feb 2006 19:22:49 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n> I think we're actually on the same page here; you're right that the\n> constraint above ( f(a)==f(b) iff a==b ) can't be extended to data types\n> with more than 32 bits of value space. But the constraint I listed was\n> actually:\n\n> if a==b then f(a)==f(b)\n\nI believe Martijn had it right: the important constraint is\n\n\tf(a) > f(b) implies a > b\n\nwhich implies by commutativity\n\n\tf(a) < f(b) implies a < b\n\nand these two together imply\n\n\ta == b implies f(a) == f(b)\n\nNow you can't do any sorting if you only have the equality rule, you\nneed the inequality rule.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Feb 2006 14:43:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index " }, { "msg_contents": "On 2/17/06, Ragnar <[email protected]> wrote:\n> Say again ?\n> Let us say you have 1 billion rows, where the\n> column in question contains strings like\n> baaaaaaaaaaaaaaa....aaa\n> baaaaaaaaaaaaaaa....aab\n> baaaaaaaaaaaaaaa....aac\n> ...\n> not necessarily in this order on disc of course\n>\n> The minimum value would be keyed as 00000001h,\n> the next one as 00000002h and so on.\n>\n> Now insert new value 'aaaaa'\n>\n> Not only will you have to update 1 billion records,\n> but also all the values in your map.\n>\n> please explain\n\nNo comment on the usefulness of the idea overall.. but the solution\nwould be to insert with the colliding value of the existing one lesser\nthan it..\n\nIt will falsly claim equal, which you then must fix with a second\nlocal sort which should be fast because you only need to sort the\nduplicates/false dupes. 
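
One way to picture that local fix-up pass in C: after sorting on the approximate key, re-sort each run of equal keys with the exact comparator. The SortEntry here is a variant of the earlier sketch with a generic datum pointer, and exact_cmp stands for whatever full datatype comparison applies; both are illustrative assumptions:

#include <stdint.h>
#include <stdlib.h>

typedef struct SortEntry {
    int32_t key;                 /* approximate, order-preserving key */
    void   *datum;               /* pointer to the real value */
} SortEntry;

/* a[0..n) is already sorted by key.  Runs of equal keys may contain
 * values that are not really equal, so re-sort each such run with the
 * exact comparator (which compares two SortEntry elements by their
 * datums).  This stays cheap as long as the runs are short. */
void resort_key_ties(SortEntry *a, size_t n,
                     int (*exact_cmp)(const void *, const void *))
{
    size_t i = 0;

    while (i < n) {
        size_t j = i + 1;

        while (j < n && a[j].key == a[i].key)
            j++;                              /* [i, j) is one run */
        if (j - i > 1)
            qsort(a + i, j - i, sizeof(SortEntry), exact_cmp);
        i = j;
    }
}
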
If you insert too much then this obviously\nbecomes completely useless.\n", "msg_date": "Fri, 17 Feb 2006 17:36:10 -0500", "msg_from": "\"Gregory Maxwell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "Last month I wrote:\n> It seems clear that our qsort.c is doing a pretty awful job of picking\n> qsort pivots, while glibc is mostly managing not to make that mistake.\n\nI re-ran Gary's test script using the just-committed improvements to\nqsort.c, and got pretty nice numbers (attached --- compare to\nhttp://archives.postgresql.org/pgsql-performance/2006-02/msg00227.php).\nSo it was wrong to blame his problems on the pivot selection --- the\nculprit was that ill-considered switch to insertion sort.\n\n\t\t\tregards, tom lane\n\n100 runtimes for latest port/qsort.c, sorted ascending:\n\nTime: 335.481 ms\nTime: 335.606 ms\nTime: 335.932 ms\nTime: 336.039 ms\nTime: 336.182 ms\nTime: 336.231 ms\nTime: 336.711 ms\nTime: 336.721 ms\nTime: 336.971 ms\nTime: 336.982 ms\nTime: 337.036 ms\nTime: 337.190 ms\nTime: 337.223 ms\nTime: 337.312 ms\nTime: 337.350 ms\nTime: 337.423 ms\nTime: 337.523 ms\nTime: 337.528 ms\nTime: 337.565 ms\nTime: 337.566 ms\nTime: 337.732 ms\nTime: 337.741 ms\nTime: 337.744 ms\nTime: 337.786 ms\nTime: 337.790 ms\nTime: 337.898 ms\nTime: 337.905 ms\nTime: 337.952 ms\nTime: 337.976 ms\nTime: 338.017 ms\nTime: 338.123 ms\nTime: 338.206 ms\nTime: 338.306 ms\nTime: 338.514 ms\nTime: 338.594 ms\nTime: 338.597 ms\nTime: 338.683 ms\nTime: 338.705 ms\nTime: 338.729 ms\nTime: 338.748 ms\nTime: 338.816 ms\nTime: 338.958 ms\nTime: 338.963 ms\nTime: 338.997 ms\nTime: 339.074 ms\nTime: 339.106 ms\nTime: 339.134 ms\nTime: 339.159 ms\nTime: 339.226 ms\nTime: 339.260 ms\nTime: 339.289 ms\nTime: 339.341 ms\nTime: 339.500 ms\nTime: 339.585 ms\nTime: 339.595 ms\nTime: 339.774 ms\nTime: 339.897 ms\nTime: 339.927 ms\nTime: 340.064 ms\nTime: 340.133 ms\nTime: 340.172 ms\nTime: 340.219 ms\nTime: 340.261 ms\nTime: 340.323 ms\nTime: 340.708 ms\nTime: 340.761 ms\nTime: 340.785 ms\nTime: 340.900 ms\nTime: 340.986 ms\nTime: 341.339 ms\nTime: 341.564 ms\nTime: 341.707 ms\nTime: 342.155 ms\nTime: 342.213 ms\nTime: 342.452 ms\nTime: 342.515 ms\nTime: 342.540 ms\nTime: 342.928 ms\nTime: 343.548 ms\nTime: 343.663 ms\nTime: 344.192 ms\nTime: 344.952 ms\nTime: 345.152 ms\nTime: 345.174 ms\nTime: 345.444 ms\nTime: 346.848 ms\nTime: 348.144 ms\nTime: 348.842 ms\nTime: 354.550 ms\nTime: 356.877 ms\nTime: 357.475 ms\nTime: 358.487 ms\nTime: 364.178 ms\nTime: 370.730 ms\nTime: 493.098 ms\nTime: 648.009 ms\nTime: 849.345 ms\nTime: 860.616 ms\nTime: 936.800 ms\nTime: 1727.085 ms\n", "msg_date": "Tue, 21 Mar 2006 15:08:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "I'm reposting this -- I sent this out a month ago but never got a response, and hope someone can shed some light on this.\n\nThanks,\nCraig\n\n--------------------------\n\nThis is a straightforward query that should be fairly quick, but takes about 30 minutes. It's a query across three tables, call them A, B, and C. The tables are joined on indexed columns.\n\nHere's a quick summary:\n\nTable A -----> Table B -----> Table C\n A_ID B_ID C_ID\n A_ID NAME\n C_ID\n\nTables A and B have 6 million rows each. Table C is small: 67 names, no repeats. All columns involved in the join are indexed. 
The database has been full-vacuumed and analyzed.\n\nSummary:\n\n1. Query B only: 2.7 seconds, 302175 rows returned\n2. Join B and C: 4.3 seconds, exact same answer\n3. Join A and B: 7.2 minutes, exact same answer\n4. Join A, B, C: 32.7 minutes, exact same answer\n\nLooking at these:\n\nQuery #1 is doing the real work: finding the rows of interest.\n\nQueries #1 and #2 ought to be virtually identical, since Table C has\njust one row with C_ID = 9, but the time almost doubles.\n\nQuery #3 should take a bit longer than Query #1 because it has to join\n300K rows, but the indexes should make this take just a few seconds,\ncertainly well under a minute. \n\nQuery #4 should be identical to Query #3, again because there's only\none row in Table C. 32 minutes is pretty horrible for such a\nstraightforward query.\n\nIt looks to me like the problem is the use of nested loops when a hash join should be used, but I'm no expert at query planning.\n\nThis is psql 8.0.3. Table definitions are at the end. Hardware is a Dell, 2-CPU Xeon, 4 GB memory, database is on a single SATA 7200RPM disk.\n\nThese table and column names are altered to protect the guilty, otherwise these are straight from Postgres.\n\n\nQUERY #1:\n---------\n\nexplain analyze select B.A_ID from B where B.B_ID = 9;\n\nIndex Scan using i_B_B_ID on B (cost=0.00..154401.36 rows=131236 width=4) (actual time=0.158..1387.251 rows=302175 loops=1)\nIndex Cond: (B_ID = 9)\nTotal runtime: 2344.053 ms\n\n\nQUERY #2:\n---------\n\nexplain analyze select B.A_ID from B join C on (B.C_ID = C.C_ID) where C.name = 'Joe';\n\nNested Loop (cost=0.00..258501.92 rows=177741 width=4) (actual time=0.349..3392.532 rows=302175 loops=1)\n-> Seq Scan on C (cost=0.00..12.90 rows=1 width=4) (actual time=0.232..0.336 rows=1 loops=1)\n Filter: ((name)::text = 'Joe'::text)\n-> Index Scan using i_B_C_ID on B (cost=0.00..254387.31 rows=328137 width=8) (actual time=0.102..1290.002 rows=302175 loops=1)\n Index Cond: (B.C_ID = \"outer\".C_ID)\nTotal runtime: 4373.916 ms\n\n\nQUERY #3:\n---------\n\nexplain analyze\nselect A.A_ID from A\n join B on (A.A_ID = B.A_ID) where B.B_ID = 9;\n\nNested Loop (cost=0.00..711336.41 rows=131236 width=4) (actual time=37.118..429419.347 rows=302175 loops=1)\n-> Index Scan using i_B_B_ID on B (cost=0.00..154401.36 rows=131236 width=4) (actual time=27.344..8858.489 rows=302175 loops=1)\n Index Cond: (B_ID = 9)\n-> Index Scan using pk_A_test on A (cost=0.00..4.23 rows=1 width=4) (actual time=1.372..1.376 rows=1 loops=302175)\n Index Cond: (A.A_ID = \"outer\".A_ID)\nTotal runtime: 430467.686 ms\n\n\nQUERY #4:\n---------\nexplain analyze\nselect A.A_ID from A\n join B on (A.A_ID = B.A_ID)\n join C on (B.B_ID = C.B_ID)\n where C.name = 'Joe';\n\nNested Loop (cost=0.00..1012793.38 rows=177741 width=4) (actual time=70.184..1960112.247 rows=302175 loops=1)\n-> Nested Loop (cost=0.00..258501.92 rows=177741 width=4) (actual time=52.114..17753.638 rows=302175 loops=1)\n -> Seq Scan on C (cost=0.00..12.90 rows=1 width=4) (actual time=0.109..0.176 rows=1 loops=1)\n Filter: ((name)::text = 'Joe'::text)\n -> Index Scan using i_B_B_ID on B (cost=0.00..254387.31 rows=328137 width=8) (actual time=51.985..15566.896 rows=302175 loops=1)\n Index Cond: (B.B_ID = \"outer\".B_ID)\n-> Index Scan using pk_A_test on A (cost=0.00..4.23 rows=1 width=4) (actual time=6.407..6.412 rows=1 loops=302175)\n Index Cond: (A.A_ID = \"outer\".A_ID)\nTotal runtime: 1961200.079 ms\n\n\nTABLE DEFINITIONS:\n------------------\n\nxxx => \\d a\n Table \"xxx.a\"\n Column | Type | 
Modifiers\n ------------------+------------------------+-----------\n a_id | integer | not null\n ... more columns\n\n Indexes:\n \"pk_a_id\" PRIMARY KEY, btree (a_id)\n ... more indexes on other columns\n\nxxx => \\d b\n Table \"xxx.b\"\n Column | Type | Modifiers\n -------------------------+------------------------+-----------\n b_id | integer | not null\n a_id | integer | not null\n c_id | integer | not null\n ... more columns\n\n Indexes:\n \"b_pkey\" PRIMARY KEY, btree (b_id)\n \"i_b_a_id\" btree (a_id)\n \"i_b_c_id\" btree (c_id)\n\n\nxxx=> \\d c\n Table \"xxx.c\"\n Column | Type | Modifiers\n --------------+------------------------+-----------\n c_id | integer | not null\n name | character varying(200) |\n ... more columns\n\n Indexes:\n \"c_pkey\" PRIMARY KEY, btree (c_id)\n\n\n\n\n\n", "msg_date": "Tue, 21 Mar 2006 14:40:17 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Poor performance o" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> It looks to me like the problem is the use of nested loops when a hash join should be used, but I'm no expert at query planning.\n\nGiven the sizes of the tables involved, you'd likely have to boost up\nwork_mem before the planner would consider a hash join. What nondefault\nconfiguration settings do you have, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Mar 2006 18:33:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance o " }, { "msg_contents": "Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n>> It looks to me like the problem is the use of nested loops when a hash\n>> join should be used, but I'm no expert at query planning.\n> \n> Given the sizes of the tables involved, you'd likely have to boost up\n> work_mem before the planner would consider a hash join. What nondefault\n> configuration settings do you have, anyway?\n\nshared_buffers = 20000\nwork_mem = 32768\neffective_cache_size = 300000\n\nThis is on a 4GB machine. Is there a guideline for work_mem that's related to table size? Something like, \"allow 2 MB per million rows\"?\n\nI'm also curious why the big difference between my \"Query #1\" and \"Query #2\". Even though it does a nested loop, #2's outer loop only returns one result from a very tiny table, so shouldn't it be virtually indistinguishable from #1?\n\nThanks,\nCraig\n", "msg_date": "Tue, 21 Mar 2006 17:04:16 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance o" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Tom Lane wrote:\n>> Given the sizes of the tables involved, you'd likely have to boost up\n>> work_mem before the planner would consider a hash join. What nondefault\n>> configuration settings do you have, anyway?\n\n> shared_buffers = 20000\n> work_mem = 32768\n> effective_cache_size = 300000\n\nSo for a 6M-row table, 32M work_mem would allow ... um ... 5 bytes per\nrow. It's not happening :-(\n\nTry boosting work_mem by a factor of 100 and seeing whether a hash-based\njoin actually wins or not. If so, we can discuss where the sane setting\nreally falls, if not there's no point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Mar 2006 23:31:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance o " }, { "msg_contents": "On Tue, Mar 21, 2006 at 05:04:16PM -0800, Craig A. 
James wrote:\n> Tom Lane wrote:\n> >\"Craig A. James\" <[email protected]> writes:\n> >>It looks to me like the problem is the use of nested loops when a hash\n> >>join should be used, but I'm no expert at query planning.\n> >\n> >Given the sizes of the tables involved, you'd likely have to boost up\n> >work_mem before the planner would consider a hash join. What nondefault\n> >configuration settings do you have, anyway?\n> \n> shared_buffers = 20000\n> work_mem = 32768\n> effective_cache_size = 300000\n> \n> This is on a 4GB machine. Is there a guideline for work_mem that's related \n> to table size? Something like, \"allow 2 MB per million rows\"?\n\nNo. The general guide is \"set it as large as possible without making the\nmachine start swapping.\" In some cases, you'll want to bump it up much\nhigher for certain queries, especially if you know those queries will\nonly run one at a time.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 22 Mar 2006 06:25:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance o" } ]
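A note for readers trying to reproduce Tom's suggestion: work_mem can be raised for a single session, so the hash-join experiment does not require touching postgresql.conf. A sketch, using the anonymized table and column names from the thread and an arbitrary test value (run it while the box is otherwise idle, since a large work_mem applies per sort or hash operation):

    SET work_mem = 1000000;   -- roughly 1 GB (value is in kB); an arbitrary one-off test value

    EXPLAIN ANALYZE
    SELECT a.a_id
      FROM a
      JOIN b ON (a.a_id = b.a_id)
      JOIN c ON (b.c_id = c.c_id)
     WHERE c.name = 'Joe';

    RESET work_mem;

If the planner switches to a hash join and the runtime drops, the next step is finding the smallest setting that still produces the better plan, which plain EXPLAIN can do without re-running the query.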
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of Tom Lane\n> Sent: Wednesday, February 15, 2006 5:22 PM\n> To: Ron\n> Cc: [email protected]; [email protected]\n> Subject: Re: [HACKERS] qsort again (was Re: [PERFORM] Strange Create\nIndex\n> behaviour)\n> \n> Ron <[email protected]> writes:\n> > How are we choosing our pivots?\n> \n> See qsort.c: it looks like median of nine equally spaced inputs (ie,\n> the 1/8th points of the initial input array, plus the end points),\n> implemented as two rounds of median-of-three choices. With half of\nthe\n> data inputs zero, it's not too improbable for two out of the three\n> samples to be zeroes in which case I think the med3 result will be\nzero\n> --- so choosing a pivot of zero is much more probable than one would\n> like, and doing so in many levels of recursion causes the problem.\n\nAdding some randomness to the selection of the pivot is a known\ntechnique to fix the oddball partitions problem. However, Bentley and\nSedgewick proved that every quick sort algorithm has some input set that\nmakes it go quadratic (hence the recent popularity of introspective\nsort, which switches to heapsort if quadratic behavior is detected. The\nC++ template I submitted was an example of introspective sort, but\nPostgreSQL does not use C++ so it was not helpful).\n\n> I think. I'm not too sure if the code isn't just being sloppy about\nthe\n> case where many data values are equal to the pivot --- there's a\nspecial\n> case there to switch to insertion sort, and maybe that's getting\ninvoked\n> too soon. \n\nHere are some cases known to make qsort go quadratic:\n1. Data already sorted\n2. Data reverse sorted\n3. Data organ-pipe sorted or ramp\n4. Almost all data of the same value\n\nThere are probably other cases. Randomizing the pivot helps some, as\ndoes check for in-order or reverse order partitions.\n\nImagine if 1/3 of the partitions fall into a category that causes\nquadratic behavior (have one of the above formats and have more than\nCUTOFF elements in them).\n\nIt is doubtful that the switch to insertion sort is causing any sort of\nproblems. It is only going to be invoked on tiny sets, for which it has\na fixed cost that is probably less that qsort() function calls on sets\nof the same size.\n\n>It'd be useful to get a line-level profile of the behavior of\n> this code in the slow cases...\n\nI guess that my in-order or presorted tests [which often arise when\nthere are very few distinct values] may solve the bad partition\nproblems. Don't forget that the algorithm is called recursively.\n\n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Wed, 15 Feb 2006 17:37:58 -0800", "msg_from": "\"Dann Corbit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour) " }, { "msg_contents": "At 08:37 PM 2/15/2006, Dann Corbit wrote:\n\n>Adding some randomness to the selection of the pivot is a known \n>technique to fix the oddball partitions problem.\n\nTrue, but it makes QuickSort slower than say MergeSort because of the \nexpense of the PRNG being called ~O(lgN) times during a sort.\n\n\n>However, Bentley and Sedgewick proved that every quick sort \n>algorithm has some input set that makes it go quadratic\n\nYep. 
OTOH, that input set can be so specific and so unusual as to \nrequire astronomically unlikely bad luck or hostile hacking in order \nfor it to actually occur.\n\n\n> (hence the recent popularity of introspective sort, which switches \n> to heapsort if quadratic behavior is detected. The C++ template I \n> submitted was an example of introspective sort, but PostgreSQL does \n> not use C++ so it was not helpful).\n...and there are other QuickSort+Other hybrids that address the issue \nas well. MergeSort, RadixExchangeSort, and BucketSort all come to \nmind. See Gonnet and Baeza-Yates, etc.\n\n\n>Here are some cases known to make qsort go quadratic:\n>1. Data already sorted\n\nOnly if one element is used to choose the pivot; _and_ only if the \npivot is the first or last element of each pass.\nEven just always using the middle element as the pivot avoids this \nproblem. See Sedgewick or Knuth.\n\n\n>2. Data reverse sorted\n\nDitto above.\n\n\n>3. Data organ-pipe sorted or ramp\n\nNot sure what this means? Regardless, median of n partitioning that \nincludes samples from each of the 1st 1/3, 2nd 1/3, and final 3rd of \nthe data is usually enough to guarantee O(NlgN) behavior unless the \n_specific_ distribution known to be pessimal to that sampling \nalgorithm is encountered. The only times I've ever seen it ITRW was \nas a result of hostile activity: purposely arranging the data in such \na manner is essentially a DoS attack.\n\n\n>4. Almost all data of the same value\n\nWell known fixes to inner loop available to avoid this problem.\n\n\n>There are probably other cases. Randomizing the pivot helps some, \n>as does check for in-order or reverse order partitions.\nRandomizing the choice of pivot essentially guarantees O(NlgN) \nbehavior no matter what the distribution of the data at the price of \nincreasing the cost of each pass by a constant factor (the generation \nof a random number or numbers).\n\n\nIn sum, QuickSort gets all sorts of bad press that is far more FUD \nthan fact ITRW.\nRon. \n\n\n", "msg_date": "Sat, 18 Feb 2006 12:01:10 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create" }, { "msg_contents": "\nAdded to TODO:\n\n\t* Improve port/qsort() to handle sorts with 50% unique and 50% duplicate\n\t value [qsort]\n\n\t This involves choosing better pivot points for the quicksort.\n\n\n---------------------------------------------------------------------------\n\nDann Corbit wrote:\n> \n> \n> > -----Original Message-----\n> > From: [email protected] [mailto:pgsql-hackers-\n> > [email protected]] On Behalf Of Tom Lane\n> > Sent: Wednesday, February 15, 2006 5:22 PM\n> > To: Ron\n> > Cc: [email protected]; [email protected]\n> > Subject: Re: [HACKERS] qsort again (was Re: [PERFORM] Strange Create\n> Index\n> > behaviour)\n> > \n> > Ron <[email protected]> writes:\n> > > How are we choosing our pivots?\n> > \n> > See qsort.c: it looks like median of nine equally spaced inputs (ie,\n> > the 1/8th points of the initial input array, plus the end points),\n> > implemented as two rounds of median-of-three choices. 
With half of\n> the\n> > data inputs zero, it's not too improbable for two out of the three\n> > samples to be zeroes in which case I think the med3 result will be\n> zero\n> > --- so choosing a pivot of zero is much more probable than one would\n> > like, and doing so in many levels of recursion causes the problem.\n> \n> Adding some randomness to the selection of the pivot is a known\n> technique to fix the oddball partitions problem. However, Bentley and\n> Sedgewick proved that every quick sort algorithm has some input set that\n> makes it go quadratic (hence the recent popularity of introspective\n> sort, which switches to heapsort if quadratic behavior is detected. The\n> C++ template I submitted was an example of introspective sort, but\n> PostgreSQL does not use C++ so it was not helpful).\n> \n> > I think. I'm not too sure if the code isn't just being sloppy about\n> the\n> > case where many data values are equal to the pivot --- there's a\n> special\n> > case there to switch to insertion sort, and maybe that's getting\n> invoked\n> > too soon. \n> \n> Here are some cases known to make qsort go quadratic:\n> 1. Data already sorted\n> 2. Data reverse sorted\n> 3. Data organ-pipe sorted or ramp\n> 4. Almost all data of the same value\n> \n> There are probably other cases. Randomizing the pivot helps some, as\n> does check for in-order or reverse order partitions.\n> \n> Imagine if 1/3 of the partitions fall into a category that causes\n> quadratic behavior (have one of the above formats and have more than\n> CUTOFF elements in them).\n> \n> It is doubtful that the switch to insertion sort is causing any sort of\n> problems. It is only going to be invoked on tiny sets, for which it has\n> a fixed cost that is probably less that qsort() function calls on sets\n> of the same size.\n> \n> >It'd be useful to get a line-level profile of the behavior of\n> > this code in the slow cases...\n> \n> I guess that my in-order or presorted tests [which often arise when\n> there are very few distinct values] may solve the bad partition\n> problems. Don't forget that the algorithm is called recursively.\n> \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faq\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n SRA OSS, Inc. http://www.sraoss.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 2 Mar 2006 13:17:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" }, { "msg_contents": "My introsort is almost complete and its the fastest variant of quicksort I\ncan find, I'll submit it to -patches in the next couple days as-well.\n\nOn 3/2/06, Bruce Momjian <[email protected]> wrote:\n>\n>\n> Added to TODO:\n>\n> * Improve port/qsort() to handle sorts with 50% unique and 50%\n> duplicate\n> value [qsort]\n>\n> This involves choosing better pivot points for the quicksort.\n>\n>\n>\n> ---------------------------------------------------------------------------\n>\n> Dann Corbit wrote:\n> >\n> >\n> > > -----Original Message-----\n> > > From: [email protected] [mailto:pgsql-hackers-\n> > > [email protected]] On Behalf Of Tom Lane\n> > > Sent: Wednesday, February 15, 2006 5:22 PM\n> > > To: Ron\n> > > Cc: [email protected]; [email protected]\n> > > Subject: Re: [HACKERS] qsort again (was Re: [PERFORM] Strange Create\n> > Index\n> > > behaviour)\n> > >\n> > > Ron <[email protected]> writes:\n> > > > How are we choosing our pivots?\n> > >\n> > > See qsort.c: it looks like median of nine equally spaced inputs (ie,\n> > > the 1/8th points of the initial input array, plus the end points),\n> > > implemented as two rounds of median-of-three choices. With half of\n> > the\n> > > data inputs zero, it's not too improbable for two out of the three\n> > > samples to be zeroes in which case I think the med3 result will be\n> > zero\n> > > --- so choosing a pivot of zero is much more probable than one would\n> > > like, and doing so in many levels of recursion causes the problem.\n> >\n> > Adding some randomness to the selection of the pivot is a known\n> > technique to fix the oddball partitions problem. However, Bentley and\n> > Sedgewick proved that every quick sort algorithm has some input set that\n> > makes it go quadratic (hence the recent popularity of introspective\n> > sort, which switches to heapsort if quadratic behavior is detected. The\n> > C++ template I submitted was an example of introspective sort, but\n> > PostgreSQL does not use C++ so it was not helpful).\n> >\n> > > I think. I'm not too sure if the code isn't just being sloppy about\n> > the\n> > > case where many data values are equal to the pivot --- there's a\n> > special\n> > > case there to switch to insertion sort, and maybe that's getting\n> > invoked\n> > > too soon.\n> >\n> > Here are some cases known to make qsort go quadratic:\n> > 1. Data already sorted\n> > 2. Data reverse sorted\n> > 3. Data organ-pipe sorted or ramp\n> > 4. Almost all data of the same value\n> >\n> > There are probably other cases. Randomizing the pivot helps some, as\n> > does check for in-order or reverse order partitions.\n> >\n> > Imagine if 1/3 of the partitions fall into a category that causes\n> > quadratic behavior (have one of the above formats and have more than\n> > CUTOFF elements in them).\n> >\n> > It is doubtful that the switch to insertion sort is causing any sort of\n> > problems. 
It is only going to be invoked on tiny sets, for which it has\n> > a fixed cost that is probably less that qsort() function calls on sets\n> > of the same size.\n> >\n> > >It'd be useful to get a line-level profile of the behavior of\n> > > this code in the slow cases...\n> >\n> > I guess that my in-order or presorted tests [which often arise when\n> > there are very few distinct values] may solve the bad partition\n> > problems. Don't forget that the algorithm is called recursively.\n> >\n> > > regards, tom lane\n> > >\n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 3: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/docs/faq\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n> >\n>\n> --\n> Bruce Momjian http://candle.pha.pa.us\n> SRA OSS, Inc. http://www.sraoss.com\n>\n> + If your life is a hard drive, Christ can be your backup. +\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n\n--\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\n732.331.1324\n\nMy introsort is almost complete and its the fastest variant of\nquicksort I can find, I'll submit it to -patches in the next couple\ndays as-well.On 3/2/06, Bruce Momjian <[email protected]> wrote:\nAdded to TODO:        * Improve port/qsort() to handle sorts with 50% unique and 50% duplicate          value [qsort]          This involves choosing better pivot points for the quicksort.\n---------------------------------------------------------------------------Dann Corbit wrote:>>> > -----Original Message-----> > From: \[email protected] [mailto:pgsql-hackers-> > [email protected]] On Behalf Of Tom Lane> > Sent: Wednesday, February 15, 2006 5:22 PM\n> > To: Ron> > Cc: [email protected]; [email protected]> > Subject: Re: [HACKERS] qsort again (was Re: [PERFORM] Strange Create\n> Index> > behaviour)> >> > Ron <[email protected]> writes:> > > How are we choosing our pivots?> >> > See \nqsort.c: it looks like median of nine equally spaced inputs (ie,> > the 1/8th points of the initial input array, plus the end points),> > implemented as two rounds of median-of-three choices.  With half of\n> the> > data inputs zero, it's not too improbable for two out of the three> > samples to be zeroes in which case I think the med3 result will be> zero> > --- so choosing a pivot of zero is much more probable than one would\n> > like, and doing so in many levels of recursion causes the problem.>> Adding some randomness to the selection of the pivot is a known> technique to fix the oddball partitions problem.  However, Bentley and\n> Sedgewick proved that every quick sort algorithm has some input set that> makes it go quadratic (hence the recent popularity of introspective> sort, which switches to heapsort if quadratic behavior is detected.  The\n> C++ template I submitted was an example of introspective sort, but> PostgreSQL does not use C++ so it was not helpful).>> > I think.  I'm not too sure if the code isn't just being sloppy about\n> the> > case where many data values are equal to the pivot --- there's a> special> > case there to switch to insertion sort, and maybe that's getting> invoked> > too soon.\n>> Here are some cases known to make qsort go quadratic:> 1. Data already sorted> 2. Data reverse sorted> 3. Data organ-pipe sorted or ramp> 4. 
Almost all data of the same value\n>> There are probably other cases.  Randomizing the pivot helps some, as> does check for in-order or reverse order partitions.>> Imagine if 1/3 of the partitions fall into a category that causes\n> quadratic behavior (have one of the above formats and have more than> CUTOFF elements in them).>> It is doubtful that the switch to insertion sort is causing any sort of> problems.  It is only going to be invoked on tiny sets, for which it has\n> a fixed cost that is probably less that qsort() function calls on sets> of the same size.>> >It'd be useful to get a line-level profile of the behavior of> > this code in the slow cases...\n>> I guess that my in-order or presorted tests [which often arise when> there are very few distinct values] may solve the bad partition> problems.  Don't forget that the algorithm is called recursively.\n>>\n>                    \nregards, tom lane> >> > ---------------------------(end of> broadcast)---------------------------> > TIP 3: Have you checked our extensive FAQ?> >>\n>                http://www.postgresql.org/docs/faq>> ---------------------------(end of broadcast)---------------------------> TIP 2: Don't 'kill -9' the postmaster\n>--  Bruce Momjian   http://candle.pha.pa.us  SRA OSS, Inc.   http://www.sraoss.com  + If your life is a hard drive, Christ can be your backup. +\n---------------------------(end of broadcast)---------------------------TIP 6: explain analyze is your friend-- Jonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation732.331.1324", "msg_date": "Thu, 2 Mar 2006 13:50:24 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index behaviour)" } ]
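For anyone wanting to check the new TODO item against their own build, the pathological distribution discussed in this thread (half duplicate values, half unique values) is easy to generate from SQL. The table and index names below are made up for the test, and maintenance_work_mem only needs to be large enough that the CREATE INDEX sort stays in memory and actually exercises qsort rather than the external sort path:

    SET maintenance_work_mem = 262144;   -- 256 MB, so the index build sorts in memory

    CREATE TABLE qsort_test (v integer);

    -- half zeroes, half unique values, in random order
    INSERT INTO qsort_test
      SELECT CASE WHEN random() < 0.5 THEN 0 ELSE n END
        FROM generate_series(1, 1000000) AS g(n);

    \timing
    CREATE INDEX qsort_test_idx ON qsort_test (v);

Comparing the timing against a build that links glibc's qsort, or against a column of all-unique values, shows whether the 50/50 case is still a problem for a given qsort implementation.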
[ { "msg_contents": "HI ALL,\n\n I have query for a report. Explain analyze result is below. The execution plan tells that it would use \"t_koltuk_islem_pkey\" index on table \"t_koltuk_islem\" to scan. However, there is another index on table \"t_koltuk_islem\" on column \"islem_tarihi\" that can be combined on plan. Why doesn't optimizer choice that ? It prefer to perform a filter on column \"islem_tarihi\" ... Why ?\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n\"Nested Loop (cost=0.00..2411.48 rows=14 width=797) (actual time=117.427..4059.351 rows=55885 loops=1)\"\n\" -> Nested Loop (cost=0.00..35.69 rows=1 width=168) (actual time=0.124..8.714 rows=94 loops=1)\"\n\" Join Filter: ((\"\"outer\"\".sefer_tip_kod = \"\"inner\"\".kod) AND ((\"\"outer\"\".firma_no)::text = (\"\"inner\"\".firma_no)::text))\"\n\" -> Nested Loop (cost=0.00..34.64 rows=1 width=154) (actual time=0.114..7.555 rows=94 loops=1)\"\n\" -> Nested Loop (cost=0.00..30.18 rows=1 width=144) (actual time=0.106..6.654 rows=94 loops=1)\"\n\" -> Nested Loop (cost=0.00..25.71 rows=1 width=134) (actual time=0.089..5.445 rows=94 loops=1)\"\n\" Join Filter: (((\"\"inner\"\".\"\"no\"\")::text = (\"\"outer\"\".hat_no)::text) AND ((\"\"inner\"\".firma_no)::text = (\"\"outer\"\".firma_no)::text))\"\n\" -> Nested Loop (cost=0.00..24.21 rows=1 width=116) (actual time=0.063..1.632 rows=94 loops=1)\"\n\" Join Filter: ((\"\"outer\"\".kod)::text = (\"\"inner\"\".durumu)::text)\"\n\" -> Seq Scan on t_domains d2 (cost=0.00..2.21 rows=2 width=18) (actual time=0.029..0.056 rows=2 loops=1)\"\n\" Filter: ((name)::text = 'SFR_DURUMU'::text)\"\n\" -> Nested Loop (cost=0.00..10.91 rows=7 width=103) (actual time=0.028..0.649 rows=94 loops=2)\"\n\" Join Filter: ((\"\"outer\"\".kod)::text = (\"\"inner\"\".ek_dev)::text)\"\n\" -> Seq Scan on t_domains d1 (cost=0.00..2.21 rows=2 width=18) (actual time=0.017..0.046 rows=2 loops=2)\"\n\" Filter: ((name)::text = 'EKDEV'::text)\"\n\" -> Seq Scan on t_seferler s (cost=0.00..3.17 rows=94 width=90) (actual time=0.003..0.160 rows=94 loops=4)\"\n\" Filter: ((iptal)::text = 'H'::text)\"\n\" -> Seq Scan on t_hatlar h (cost=0.00..1.20 rows=20 width=18) (actual time=0.002..0.020 rows=20 loops=94)\"\n\" -> Index Scan using t_yer_pkey on t_yer y2 (cost=0.00..4.45 rows=1 width=14) (actual time=0.008..0.009 rows=1 loops=94)\"\n\" Index Cond: (\"\"outer\"\".varis_yer_kod = y2.kod)\"\n\" Filter: ((iptal)::text = 'H'::text)\"\n\" -> Index Scan using t_yer_pkey on t_yer y1 (cost=0.00..4.45 rows=1 width=14) (actual time=0.004..0.006 rows=1 loops=94)\"\n\" Index Cond: (\"\"outer\"\".kalkis_yer_kod = y1.kod)\"\n\" Filter: ((iptal)::text = 'H'::text)\"\n\" -> Seq Scan on t_sefer_tip t (cost=0.00..1.02 rows=2 width=18) (actual time=0.002..0.006 rows=2 loops=94)\"\n\" Filter: ((iptal)::text = 'H'::text)\"\n\" -> Index Scan using t_koltuk_islem_pkey on t_koltuk_islem i (cost=0.00..2375.10 rows=39 width=644) (actual time=38.151..41.881 rows=595 loops=94)\"\n\" Index Cond: (((\"\"outer\"\".firma_no)::text = (i.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text = (i.hat_no)::text) AND (\"\"outer\"\".kod = i.sefer_kod))\"\n\" Filter: ((islem_tarihi >= '2006-01-17'::date) AND (islem_tarihi <= '2006-02-16'::date))\"\n\"Total runtime: 4091.242 ms\"\n\nBest Regards\n\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\nAnkara / TURKEY\n\n\n\n\n\n\nHI ALL,\n \n  I have query for a report. Explain analyze \nresult is below. 
The execution plan tells that it would use \n\"t_koltuk_islem_pkey\" index on table \"t_koltuk_islem\" to scan. However, \nthere is another index on table \"t_koltuk_islem\" on column \"islem_tarihi\" \nthat can be combined on plan. Why doesn't optimizer choice that ? It prefer \nto perform a filter on column \"islem_tarihi\" ... \nWhy ?\n \nQUERY \nPLAN--------------------------------------------------------------------------------------------------------------------------\"Nested \nLoop  (cost=0.00..2411.48 rows=14 width=797) (actual time=117.427..4059.351 \nrows=55885 loops=1)\"\"  ->  Nested Loop  (cost=0.00..35.69 \nrows=1 width=168) (actual time=0.124..8.714 rows=94 \nloops=1)\"\"        Join Filter: \n((\"\"outer\"\".sefer_tip_kod = \"\"inner\"\".kod) AND ((\"\"outer\"\".firma_no)::text = \n(\"\"inner\"\".firma_no)::text))\"\"        \n->  Nested Loop  (cost=0.00..34.64 rows=1 width=154) (actual \ntime=0.114..7.555 rows=94 \nloops=1)\"\"              \n->  Nested Loop  (cost=0.00..30.18 rows=1 width=144) (actual \ntime=0.106..6.654 rows=94 \nloops=1)\"\"                    \n->  Nested Loop  (cost=0.00..25.71 rows=1 width=134) (actual \ntime=0.089..5.445 rows=94 \nloops=1)\"\"                          \nJoin Filter: (((\"\"inner\"\".\"\"no\"\")::text = (\"\"outer\"\".hat_no)::text) AND \n((\"\"inner\"\".firma_no)::text = \n(\"\"outer\"\".firma_no)::text))\"\"                          \n->  Nested Loop  (cost=0.00..24.21 rows=1 width=116) (actual \ntime=0.063..1.632 rows=94 \nloops=1)\"\"                                \nJoin Filter: ((\"\"outer\"\".kod)::text = \n(\"\"inner\"\".durumu)::text)\"\"                                \n->  Seq Scan on t_domains d2  (cost=0.00..2.21 rows=2 width=18) \n(actual time=0.029..0.056 rows=2 \nloops=1)\"\"                                      \nFilter: ((name)::text = \n'SFR_DURUMU'::text)\"\"                                \n->  Nested Loop  (cost=0.00..10.91 rows=7 width=103) (actual \ntime=0.028..0.649 rows=94 \nloops=2)\"\"                                      \nJoin Filter: ((\"\"outer\"\".kod)::text = \n(\"\"inner\"\".ek_dev)::text)\"\"                                      \n->  Seq Scan on t_domains d1  (cost=0.00..2.21 rows=2 width=18) \n(actual time=0.017..0.046 rows=2 \nloops=2)\"\"                                            \nFilter: ((name)::text = \n'EKDEV'::text)\"\"                                      \n->  Seq Scan on t_seferler s  (cost=0.00..3.17 rows=94 width=90) \n(actual time=0.003..0.160 rows=94 \nloops=4)\"\"                                            \nFilter: ((iptal)::text = \n'H'::text)\"\"                          \n->  Seq Scan on t_hatlar h  (cost=0.00..1.20 rows=20 width=18) \n(actual time=0.002..0.020 rows=20 \nloops=94)\"\"                    \n->  Index Scan using t_yer_pkey on t_yer y2  (cost=0.00..4.45 \nrows=1 width=14) (actual time=0.008..0.009 rows=1 \nloops=94)\"\"                          \nIndex Cond: (\"\"outer\"\".varis_yer_kod = \ny2.kod)\"\"                          \nFilter: ((iptal)::text = \n'H'::text)\"\"              \n->  Index Scan using t_yer_pkey on t_yer y1  (cost=0.00..4.45 \nrows=1 width=14) (actual time=0.004..0.006 rows=1 \nloops=94)\"\"                    \nIndex Cond: (\"\"outer\"\".kalkis_yer_kod = \ny1.kod)\"\"                    \nFilter: ((iptal)::text = \n'H'::text)\"\"        ->  Seq Scan \non t_sefer_tip t  (cost=0.00..1.02 rows=2 width=18) (actual \ntime=0.002..0.006 rows=2 \nloops=94)\"\"              \nFilter: ((iptal)::text = 'H'::text)\"\"  ->  Index Scan using 
\nt_koltuk_islem_pkey on t_koltuk_islem i  (cost=0.00..2375.10 rows=39 \nwidth=644) (actual time=38.151..41.881 rows=595 \nloops=94)\"\"        Index Cond: \n(((\"\"outer\"\".firma_no)::text = (i.firma_no)::text) AND ((\"\"outer\"\".hat_no)::text \n= (i.hat_no)::text) AND (\"\"outer\"\".kod = \ni.sefer_kod))\"\"        Filter: \n((islem_tarihi >= '2006-01-17'::date) AND (islem_tarihi <= \n'2006-02-16'::date))\"\"Total runtime: 4091.242 ms\"\nBest Regards\n \n\nAdnan DURSUN\nASRIN Bilişim Ltd.Şti\nAnkara / TURKEY", "msg_date": "Thu, 16 Feb 2006 18:56:20 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why does not perform index combination" }, { "msg_contents": "\"Adnan DURSUN\" <[email protected]> writes:\n> I have query for a report. Explain analyze result is below. The =\n> execution plan tells that it would use \"t_koltuk_islem_pkey\" index on =\n> table \"t_koltuk_islem\" to scan. However, there is another index on table =\n> \"t_koltuk_islem\" on column \"islem_tarihi\" that can be combined on plan. =\n> Why doesn't optimizer choice that ? It prefer to perform a filter on =\n> column \"islem_tarihi\" ... Why ?\n\nProbably thinks that the extra index doesn't add enough selectivity to\nbe worth scanning. It's probably right, too --- maybe with a narrower\ndate range the answer would be different.\n\nI think the main problem in this plan is the poor estimation of the size\nof the d1/s join. Are your stats up to date on those tables? Maybe\nboosting the statistics target for one or both would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Feb 2006 12:27:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does not perform index combination " } ]
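Two quick follow-ups a reader can try on Tom's points here. First, check how selective the month-wide date window really is; second, if the wide window is the common case, an index that carries islem_tarihi behind the join columns lets the date restriction be applied as an index condition instead of a filter. The column names are taken from the plan above, but the composite index itself is only a suggestion, not something proposed in the thread:

    -- how many rows does the month-wide window actually cover?
    SELECT count(*) FROM t_koltuk_islem
     WHERE islem_tarihi BETWEEN DATE '2006-01-17' AND DATE '2006-02-16';

    -- a composite index so the date can be an index condition rather than a filter
    CREATE INDEX t_koltuk_islem_sefer_tarih_idx
        ON t_koltuk_islem (firma_no, hat_no, sefer_kod, islem_tarihi);
    ANALYZE t_koltuk_islem;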
[ { "msg_contents": "Hi,\n\nI was wondering what the plan is for the future of table partitioning in\nPostgresQL. It is pretty hard for me to implement partitioning right now\nwith its current limitation, specifically the fact that unique\nconstraints cannot be enforced across partitions and that Constraint\nExclusion cannot be used on non-constant values like CURRENT_DATE. It is\nalso quite cumbersome to do all the maintenance work to create the new\nchild table, do the triggers, drop the old one, etc,etc using the table\ninheritance every week since I would need to do weekly and monthly table\npartitioning.\n\nSo, my question in short, Is there any plan to at least do Global unique\ncheck constraints (or at least a global unique index) and is there a\nthread/documentation somewhere about what are the future planned changes\nto table partitioning?\n\n\nThanks\n\nPatrick Carriere\nSoftware Architect\nNexus Telecom (Americas)\n\n\n\n\n\n\n\n\nFuture of Table Partitioning\n\n\n\nHi,\n\nI was wondering what the plan is for the future of table partitioning in PostgresQL. It is pretty hard for me to implement partitioning right now with its current limitation, specifically the fact that unique constraints cannot be enforced across partitions and that Constraint Exclusion cannot be used on non-constant values like CURRENT_DATE. It is also quite cumbersome to do all the maintenance work to create the new child table, do the triggers, drop the old one, etc,etc using the table inheritance every week since I would need to do weekly and monthly table partitioning.\nSo, my question in short, Is there any plan to at least do Global unique check constraints (or at least a global unique index) and is there a thread/documentation somewhere about what are the future planned changes to table partitioning?\n\nThanks\n\nPatrick Carriere\nSoftware Architect\nNexus Telecom (Americas)", "msg_date": "Thu, 16 Feb 2006 12:03:17 -0500", "msg_from": "\"Patrick Carriere\" <[email protected]>", "msg_from_op": true, "msg_subject": "Future of Table Partitioning" } ]
[ { "msg_contents": "Hi List,\n\nI would like some insight from the experts here as to how I can alter\nwhich index PostgreSQL is choosing to run a query.\n\nFirst off, I'm running an active web forum (phpBB) with sometimes\nhundreds of concurrent users. The query in question is one which pulls\nthe lists of topics in the forum. The table in question is here:\n\n--\n\nforums=> \\d phpbb_topics;\n Table \"public.phpbb_topics\"\n Column | Type | \nModifiers\n----------------------+-----------------------+-------------------------------------------------------\n topic_id | integer | not null default\nnextval('phpbb_topics_id_seq'::text)\n forum_id | integer | not null default 0\n topic_title | character varying(60) | not null default\n''::character varying\n topic_poster | integer | not null default 0\n topic_time | integer | not null default 0\n topic_views | integer | not null default 0\n topic_replies | integer | not null default 0\n topic_status | smallint | not null default (0)::smallint\n topic_vote | smallint | not null default (0)::smallint\n topic_type | smallint | not null default (0)::smallint\n topic_first_post_id | integer | not null default 0\n topic_last_post_id | integer | not null default 0\n topic_moved_id | integer | not null default 0\n topic_last_post_time | integer | not null default 0\nIndexes:\n \"forum_id_phpbb_topics_index\" btree (forum_id)\n \"topic_id_phpbb_topics_index\" btree (topic_id)\n \"topic_last_post_id_phpbb_topics_index\" btree (topic_last_post_id)\n \"topic_last_post_time_phpbb_topics_index\" btree (topic_last_post_time)\n \"topic_moved_id_phpbb_topics_index\" btree (topic_moved_id)\n\n--\n\nTo layout the contents of the table, here are some relevant queries\nshowing the number of entries\n\nforums=# SELECT COUNT(*) FROM phpbb_topics; SELECT COUNT(*) FROM\nphpbb_topics WHERE forum_id = 71; SELECT COUNT(*) FROM phpbb_topics\nWHERE forum_id = 55;\n count\n--------\n 190588\n(1 row)\n\n count\n-------\n 1013\n(1 row)\n\n count\n-------\n 35035\n(1 row)\n\n--\n\nOk. Now, here's the problem. I run a query to pull the list of topics\nfor the forum. There pagination, so the first page query looks like\nthis:\n\nSELECT t.topic_id\n\t\t\tFROM phpbb_topics AS t\n\t\t\t\tWHERE t.forum_id = 71\n\t\t\t\t\tAND t.topic_id NOT IN (205026, 29046, 144569, 59780, 187424,\n138635, 184973, 170551, 22419, 181690, 197254, 205130)\n\t\t\t\t\t\tORDER BY t.topic_last_post_time DESC\n\t\t\t\t\t\t\tLIMIT 23 OFFSET 0\n\n \n \n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3487.78..3487.87 rows=34 width=8) (actual\ntime=1112.921..1113.005 rows=34 loops=1)\n -> Sort (cost=3486.15..3489.10 rows=1181 width=8) (actual\ntime=1112.087..1112.535 rows=687 loops=1)\n Sort Key: topic_last_post_time\n -> Index Scan using forum_id_phpbb_topics_index on\nphpbb_topics t (cost=0.00..3425.89 rows=1181 width=8) (actual\ntime=54.650..1109.877 rows=1012 loops=1)\n Index Cond: (forum_id = 71)\n Filter: (topic_id <> 205026)\n Total runtime: 1113.268 ms\n(7 rows)\n\n--\n\nThis is the query on one of the lesser active forums (forum_id = 71)\nwhich as list earlier only has 1013 rows. This query slow because\nPostgreSQL is not using the index on the \"forum_id\" column, but\ninstead scanning through the topics via the topic_last_post_time and\nfiltering through the posts. 
This would be good for the forum_id = 55\nwhere the most recent topics would be quickly found. Now here's the\nstranger part, going deeper into the results (ie selecting pages\nfurther down), the planner does this:\n\n--\n\nSELECT t.topic_id\n\t\t\tFROM phpbb_topics AS t\n\t\t\t\tWHERE t.forum_id = 71\n\t\t\t\t\tAND t.topic_id NOT IN (205026)\n\t\t\t\t\t\tORDER BY t.topic_last_post_time DESC\n\t\t\t\t\t\t\tLIMIT 34 OFFSET 653\n\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3487.78..3487.87 rows=34 width=8) (actual\ntime=6.140..6.202 rows=34 loops=1)\n -> Sort (cost=3486.15..3489.10 rows=1181 width=8) (actual\ntime=5.306..5.753 rows=687 loops=1)\n Sort Key: topic_last_post_time\n -> Index Scan using forum_id_phpbb_topics_index on\nphpbb_topics t (cost=0.00..3425.89 rows=1181 width=8) (actual\ntime=0.070..3.581 rows=1012 loops=1)\n Index Cond: (forum_id = 71)\n Filter: (topic_id <> 205026)\n Total runtime: 6.343 ms\n(7 rows)\n\n--\n\nThis is more like how it should be done IMO. Results are much faster\nwhen the forum id index is used. Now, the output of the first query on\nthe forum_id = 55 looks like this\n\n--\n\nSELECT t.topic_id\n\t\t\tFROM phpbb_topics AS t\n\t\t\t\tWHERE t.forum_id = 55\n\t\t\t\t\tAND t.topic_id NOT IN (159934, 168973, 79609, 179029, 61593,\n184896, 190572)\n\t\t\t\t\t\tORDER BY t.topic_last_post_time DESC\n\t\t\t\t\t\t\tLIMIT 28 OFFSET 0\n\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..50.50 rows=28 width=8) (actual time=0.060..0.714\nrows=28 loops=1)\n -> Index Scan Backward using\ntopic_last_post_time_phpbb_topics_index on phpbb_topics t \n(cost=0.00..63232.38 rows=35063 width=8) (actual time=0.057..0.675\nrows=28 loops=1)\n Filter: ((forum_id = 55) AND (topic_id <> 159934) AND\n(topic_id <> 168973) AND (topic_id <> 79609) AND (topic_id <> 179029)\nAND (topic_id <> 61593) AND (topic_id <> 184896) AND (topic_id <>\n190572))\n Total runtime: 0.794 ms\n\n--\n\nThis is acceptable usage when the forum_id is heavily populated. Next\nnow, here again puzzles me, pulling entries in the middle of forum_id\n= 55\n\n--\n\nSELECT t.topic_id\n\t\t\tFROM phpbb_topics AS t\n\t\t\t\tWHERE t.forum_id = 55\n\t\t\t\t\tAND t.topic_id NOT IN (159934, 168973)\n\t\t\t\t\t\tORDER BY t.topic_last_post_time DESC\n\t\t\t\t\t\t\tLIMIT 33 OFFSET 17458\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=29302.43..29302.51 rows=33 width=8) (actual\ntime=625.907..625.969 rows=33 loops=1)\n -> Sort (cost=29258.78..29346.44 rows=35064 width=8) (actual\ntime=603.710..615.411 rows=17491 loops=1)\n Sort Key: topic_last_post_time\n -> Seq Scan on phpbb_topics t (cost=0.00..26611.85\nrows=35064 width=8) (actual time=0.067..528.271 rows=35034 loops=1)\n Filter: ((forum_id = 55) AND (topic_id <> 159934) AND\n(topic_id <> 168973))\n Total runtime: 632.444 ms\n(6 rows)\n\n--\n\nWhy is it doing a sequential scan? :(\n\nMy questions... is there a method for me to suggest which index to use\nin the query. 
I'm think adding logic in my script depending on which\nforum_id is used (since I can hard code in my scripts which are the\npopular forums) and tell the planner to use a specific index first?\n\nSecondly, why in the last output did it opt to do a sequential scan\nover using the forum_id index as it did earlier.\n\nSide note, a vacuum analayze was done just prior to running these tests.\n\nThank you,\n--\nAdam Alkins\nhttp://www.rasadam.com\nMobile: 868-680-4612\n", "msg_date": "Thu, 16 Feb 2006 13:03:29 -0400", "msg_from": "Adam Alkins <[email protected]>", "msg_from_op": true, "msg_subject": "Index Choice Problem" }, { "msg_contents": "Adam Alkins <[email protected]> writes:\n> SELECT t.topic_id\n> \t\t\tFROM phpbb_topics AS t\n> \t\t\t\tWHERE t.forum_id = 71\n> \t\t\t\t\tAND t.topic_id NOT IN (205026, 29046, 144569, 59780, 187424,\n> 138635, 184973, 170551, 22419, 181690, 197254, 205130)\n> \t\t\t\t\t\tORDER BY t.topic_last_post_time DESC\n> \t\t\t\t\t\t\tLIMIT 23 OFFSET 0\n\nIf you're using 8.1, you'd probably find that an index on (forum_id,\ntopic_last_post_time) would work nicely for this. You could use it\nin prior versions too, but you'd have to spell the ORDER BY rather\nstrangely:\n\tORDER BY forum_id desc, topic_last_post_time desc\nThe reason for this trickery is to get the planner to realize that\nthe index order matches the ORDER BY ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 18 Feb 2006 00:53:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Choice Problem " }, { "msg_contents": "Unfortunately I'm using 8.0.4 and this is for a government website, I only\nget so many maintenance windows. Is this the only workaround for this issue?\n\nI did make a test index as you described on my test box and tried the query\nand it used the new index. However, ORDER BY forum_id then last_post_time is\nsimply not the intended sorting order. (Though I'm considering just\nSELECTing the topic_last_post_time field and resorting the results in the\nscript if this is the only workaround).\n\n- Adam\n\nOn 2/18/06, Tom Lane <[email protected]> wrote:\n>\n> Adam Alkins <[email protected]> writes:\n> > SELECT t.topic_id\n> > FROM phpbb_topics AS t\n> > WHERE t.forum_id = 71\n> > AND t.topic_id NOT IN (205026,\n> 29046, 144569, 59780, 187424,\n> > 138635, 184973, 170551, 22419, 181690, 197254, 205130)\n> > ORDER BY\n> t.topic_last_post_time DESC\n> > LIMIT 23 OFFSET 0\n>\n> If you're using 8.1, you'd probably find that an index on (forum_id,\n> topic_last_post_time) would work nicely for this. You could use it\n> in prior versions too, but you'd have to spell the ORDER BY rather\n> strangely:\n> ORDER BY forum_id desc, topic_last_post_time desc\n> The reason for this trickery is to get the planner to realize that\n> the index order matches the ORDER BY ...\n>\n> regards, tom lane\n>\n\n\n\n--\nAdam Alkins\nhttp://www.rasadam.com\nMobile: 868-680-4612\n\nUnfortunately I'm using 8.0.4 and this is for a government website, I only get so many maintenance windows. Is this the only workaround for this issue?I did make a test index as you described on my test box and tried the query and it used the new index. However, ORDER BY forum_id then last_post_time is simply not the intended sorting order. 
(Though I'm considering just SELECTing the topic_last_post_time field and resorting the results in the script if this is the only workaround).\n- AdamOn 2/18/06, Tom Lane <[email protected]> wrote:\nAdam Alkins <[email protected]> writes:> SELECT t.topic_id>                       FROM phpbb_topics AS t>                               WHERE t.forum_id\n = 71>                                       AND t.topic_id NOT IN (205026, 29046, 144569, 59780, 187424,> 138635, 184973, 170551, 22419, 181690, 197254, 205130)>                                               ORDER BY \nt.topic_last_post_time DESC>                                                       LIMIT 23 OFFSET 0If you're using 8.1, you'd probably find that an index on (forum_id,topic_last_post_time) would work nicely for this.  You could use it\nin prior versions too, but you'd have to spell the ORDER BY ratherstrangely:        ORDER BY forum_id desc, topic_last_post_time descThe reason for this trickery is to get the planner to realize thatthe index order matches the ORDER BY ...\n                        regards, tom lane-- Adam Alkinshttp://www.rasadam.comMobile: 868-680-4612", "msg_date": "Sat, 18 Feb 2006 03:29:00 -0400", "msg_from": "\"Adam Alkins\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Choice Problem" }, { "msg_contents": "Nevermind the reply, blonde moment on the ordering...\n\nThis works :)\n\nThanks\n\nOn 2/18/06, Adam Alkins <[email protected]> wrote:\n>\n> Unfortunately I'm using 8.0.4 and this is for a government website, I only\n> get so many maintenance windows. Is this the only workaround for this issue?\n>\n> I did make a test index as you described on my test box and tried the\n> query and it used the new index. However, ORDER BY forum_id then\n> last_post_time is simply not the intended sorting order. (Though I'm\n> considering just SELECTing the topic_last_post_time field and resorting the\n> results in the script if this is the only workaround).\n>\n> - Adam\n>\n> On 2/18/06, Tom Lane <[email protected]> wrote:\n> >\n> > Adam Alkins <[email protected]> writes:\n> > > SELECT t.topic_id\n> > > FROM phpbb_topics AS t\n> > > WHERE t.forum_id = 71\n> > > AND t.topic_id NOT IN (205026,\n> > 29046, 144569, 59780, 187424,\n> > > 138635, 184973, 170551, 22419, 181690, 197254, 205130)\n> > > ORDER BY\n> > t.topic_last_post_time DESC\n> > > LIMIT 23 OFFSET\n> > 0\n> >\n> > If you're using 8.1, you'd probably find that an index on (forum_id,\n> > topic_last_post_time) would work nicely for this. You could use it\n> > in prior versions too, but you'd have to spell the ORDER BY rather\n> > strangely:\n> > ORDER BY forum_id desc, topic_last_post_time desc\n> > The reason for this trickery is to get the planner to realize that\n> > the index order matches the ORDER BY ...\n> >\n> > regards, tom lane\n> >\n>\n>\n>\n> --\n> Adam Alkins\n> http://www.rasadam.com\n> Mobile: 868-680-4612\n>\n\n\n\n--\nAdam Alkins\nhttp://www.rasadam.com\nMobile: 868-680-4612\n\nNevermind the reply, blonde moment on the ordering...This works :)ThanksOn 2/18/06, Adam Alkins <\[email protected]> wrote:Unfortunately I'm using 8.0.4 and this is for a government website, I only get so many maintenance windows. Is this the only workaround for this issue?\nI did make a test index as you described on my test box and tried the query and it used the new index. However, ORDER BY forum_id then last_post_time is simply not the intended sorting order. 
(Though I'm considering just SELECTing the topic_last_post_time field and resorting the results in the script if this is the only workaround).\n- AdamOn 2/18/06, Tom Lane <\[email protected]> wrote:\nAdam Alkins <[email protected]> writes:> SELECT t.topic_id>                       FROM phpbb_topics AS t\n>                               WHERE t.forum_id\n = 71>                                       AND t.topic_id NOT IN (205026, 29046, 144569, 59780, 187424,> 138635, 184973, 170551, 22419, 181690, 197254, 205130)>                                               ORDER BY \nt.topic_last_post_time DESC>                                                       LIMIT 23 OFFSET 0If you're using 8.1, you'd probably find that an index on (forum_id,topic_last_post_time) would work nicely for this.  You could use it\nin prior versions too, but you'd have to spell the ORDER BY ratherstrangely:        ORDER BY forum_id desc, topic_last_post_time descThe reason for this trickery is to get the planner to realize thatthe index order matches the ORDER BY ...\n                        regards, tom lane-- Adam Alkins\nhttp://www.rasadam.comMobile: 868-680-4612\n\n-- Adam Alkinshttp://www.rasadam.comMobile: 868-680-4612", "msg_date": "Sat, 18 Feb 2006 03:39:19 -0400", "msg_from": "\"Adam Alkins\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Choice Problem" } ]
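For the archives, the combination Tom describes and Adam ended up using looks roughly like this; the index name is arbitrary, the NOT IN list is shortened for readability, and the doubled DESC in the ORDER BY is only needed on pre-8.1 servers such as the 8.0.4 in question:

    CREATE INDEX forum_lastpost_phpbb_topics_index
        ON phpbb_topics (forum_id, topic_last_post_time);

    SELECT t.topic_id
      FROM phpbb_topics AS t
     WHERE t.forum_id = 71
       AND t.topic_id NOT IN (205026, 29046)
     ORDER BY t.forum_id DESC, t.topic_last_post_time DESC
     LIMIT 23 OFFSET 0;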
[ { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of Markus Schaber\n> Sent: Thursday, February 16, 2006 5:45 AM\n> To: [email protected]; [email protected]\n> Subject: Re: [HACKERS] qsort again (was Re: [PERFORM] Strange Create\nIndex\n> \n> Hi, Ron,\n> \n> Ron wrote:\n> \n> > ...and of course if you know enough about the data to be sorted so\nas to\n> > constrain it appropriately, one should use a non comparison based\nO(N)\n> > sorting algorithm rather than any of the general comparison based\n> > O(NlgN) methods.\n> \n> Sounds interesting, could you give us some pointers (names, URLs,\n> papers) to such algorithms?\n\nHe refers to counting sort and radix sort (which comes in most\nsignificant digit and least significant digit format). These are also\ncalled distribution (as opposed to comparison) sorts.\n\nThese sorts are O(n) as a function of the data size, but really they are\nO(M*n) where M is the average key length and n is the data set size.\n(In the case of MSD radix sort M is the average length to completely\ndifferentiate strings)\n\nSo if you have an 80 character key, then 80*log(n) will only be faster\nthan n*log(n) when you have 2^80th elements -- in other words -- never.\n\nIf you have short keys, on the other hand, distribution sorts will be\ndramatically faster. On an unsigned integer, for instance, it requires\n4 passes with 8 bit buckets and so 16 elements is the crossover to radix\nis faster than an O(n*log(n)) sort. Of course, there is a fixed\nconstant of proportionality and so it will really be higher than that,\nbut for large data sets distribution sorting is the best thing that\nthere is for small keys.\n\nYou could easily have an introspective MSD radix sort. The nice thing\nabout MSD radix sort is that you can retain the ordering that has\noccurred so far and switch to another algorithm.\n\nAn introspective MSD radix sort could call an introspective quick sort\nalgorithm once it processed a crossover point of buckets of key data.\n\nIn order to have distribution sorts that will work with a database\nsystem, for each and every data type you will need a function to return\nthe bucket of bits of significance for the kth bucket of bits. For a\ncharacter string, you could return key[bucket]. For an unsigned integer\nit is the byte of the integer to return will be a function of the\nendianness of the CPU. And for each other distinct data type a bucket\nfunction would be needed or a sort could not be generated for that type\nusing the distribution method.\n", "msg_date": "Thu, 16 Feb 2006 10:39:45 -0800", "msg_from": "\"Dann Corbit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "--On Donnerstag, Februar 16, 2006 10:39:45 -0800 Dann Corbit \n<[email protected]> wrote:\n\n> He refers to counting sort and radix sort (which comes in most\n> significant digit and least significant digit format). 
These are also\n> called distribution (as opposed to comparison) sorts.\n>\n> These sorts are O(n) as a function of the data size, but really they are\n> O(M*n) where M is the average key length and n is the data set size.\n> (In the case of MSD radix sort M is the average length to completely\n> differentiate strings)\n>\n> So if you have an 80 character key, then 80*log(n) will only be faster\nI suppose you meant 80 * n here?\n\n> than n*log(n) when you have 2^80th elements -- in other words -- never.\nI think this is wrong. You can easily improve Radix sort by a constant if \nyou don't take single bytes as the digits but rather k-byte values. At \nleast 2 byte should be possible without problems. This would give you 40 * \nn time, not 80 * n. And you assume that the comparision of an 80-byte wide \nvalue only costs 1, which might (and in many cases will be imho) wrong. \nActually it migh mean to compare 80 bytes as well, but I might be wrong.\n\nWhat I think as the biggest problem is the digit representation necessary \nfor Radix-Sort in cases of locales which sort without looking at spaces. I \nassume that would be hard to implement. The same goes for the proposed \nmapping of string values onto 4/8-byte values.\n\nMit freundlichem Gruß\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n", "msg_date": "Fri, 17 Feb 2006 09:18:39 +0100", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" }, { "msg_contents": "On Fri, Feb 17, 2006 at 09:18:39AM +0100, Jens-Wolfhard Schicke wrote:\n> What I think as the biggest problem is the digit representation necessary \n> for Radix-Sort in cases of locales which sort without looking at spaces. I \n> assume that would be hard to implement. The same goes for the proposed \n> mapping of string values onto 4/8-byte values.\n\nActually, this is easy. The standard C library provides strxfrm() and\nother locale toolkits like ICU provide ucol_getSortKey(). Windows\nprovides LCMapString(). Just pass each string through this and take the\nfirst four bytes of the result to form your integer key.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 17 Feb 2006 10:17:49 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: qsort again (was Re: [PERFORM] Strange Create Index" } ]
[ { "msg_contents": ">From: Tom Lane\n>Date: 02/16/06 19:29:21\n>To: Adnan DURSUN\n>Cc: [email protected]\n>Subject: Re: [PERFORM] Why does not perform index combination\n\n>\"Adnan DURSUN\" <[email protected]> writes:\n>> I have query for a report. Explain analyze result is below. The =\n>> execution plan tells that it would use \"t_koltuk_islem_pkey\" index on =\n>> table \"t_koltuk_islem\" to scan. However, there is another index on table =\n>> \"t_koltuk_islem\" on column \"islem_tarihi\" that can be combined on plan. =\n>> Why doesn't optimizer choice that ? It prefer to perform a filter on =\n>> column \"islem_tarihi\" ... Why ?\n\n>Probably thinks that the extra index doesn't add enough selectivity to\n>be worth scanning. It's probably right, too --- maybe with a narrower\n>date range the answer would be different.\n\n Yes, a narrower date range solves that.. Thanks for your suggestions...\n\n>I think the main problem in this plan is the poor estimation of the size\n>of the d1/s join. Are your stats up to date on those tables? Maybe\n>boosting the statistics target for one or both would help.\n\n Database was vacuumed and analyzed before got take the plan..\n\nRegards\nAdnan DURSUN\n\n\n\n\n\n\n\n\n\n>From: Tom Lane\n>Date: 02/16/06 \n19:29:21\n>To: Adnan DURSUN\n>Cc: [email protected]\n>Subject: Re: [PERFORM] \nWhy does not perform index combination\n \n>\"Adnan DURSUN\" <[email protected]> writes:\n>>   I have query for a report. Explain analyze result is \nbelow. The =\n>> execution plan tells that it would use \"t_koltuk_islem_pkey\" index \non =\n>> table \"t_koltuk_islem\" to scan. However, there is another index on \ntable =\n>> \"t_koltuk_islem\" on column \"islem_tarihi\" that can be combined on \nplan. =\n>> Why doesn't optimizer choice that ? It prefer to perform a filter \non =\n>> column \"islem_tarihi\" ... Why ?\n \n>Probably thinks that the extra index doesn't add enough selectivity \nto\n>be worth scanning.  It's probably right, too --- maybe with a \nnarrower\n>date range the answer would be different.\n \n    Yes, a narrower date range solves that.. Thanks for your \nsuggestions...\n \n>I think the main problem in this plan is the poor estimation of the \nsize\n>of the d1/s join.  Are your stats up to date on those \ntables?  Maybe\n>boosting the statistics target for one or both would help.\n \n    Database was vacuumed and analyzed before got take \nthe plan..\n \nRegards\nAdnan DURSUN", "msg_date": "Fri, 17 Feb 2006 00:14:56 +0200", "msg_from": "\"Adnan DURSUN\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does not perform index combination" } ]
[ { "msg_contents": "I assume we have such?\n\nRon\n\n\n", "msg_date": "Fri, 17 Feb 2006 11:51:26 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Need pointers to \"standard\" pg database(s) for testing" }, { "msg_contents": "On Fri, 2006-02-17 at 10:51, Ron wrote:\n> I assume we have such?\n\nDepends on what you wanna do.\nFor transactional systems, look at some of the stuff OSDL has done.\n\nFor large geospatial type stuff, the government is a good source, like\nwww.usgs.gov or the fcc transmitter database.\n\nThere are other ones out there. Really depends on what you wanna test.\n", "msg_date": "Fri, 17 Feb 2006 10:56:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Need pointers to \"standard\" pg database(s) for" }, { "msg_contents": "Ron wrote:\n>I assume we have such?\n\nYou could look at the Sample Databases project on pgfoundry:\nhttp://pgfoundry.org/projects/dbsamples/\n\nBest Regards,\nMichael Paesold\n\n", "msg_date": "Fri, 17 Feb 2006 18:09:52 +0100", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need pointers to \"standard\" pg database(s) for testing" }, { "msg_contents": "Not really, but you can check out the sample databases project:\n\nhttp://pgfoundry.org/projects/dbsamples/\n\nChris\n\nRon wrote:\n> I assume we have such?\n> \n> Ron\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Mon, 20 Feb 2006 10:02:23 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Need pointers to \"standard\" pg database(s) for testing" }, { "msg_contents": "Relating to this. If anyone can find govt or other free db's and \nconvert them into pgsql format, I will host them on the dbsamples page. \n The dbsamples are _really_ popular!\n\nChris\n\nScott Marlowe wrote:\n> On Fri, 2006-02-17 at 10:51, Ron wrote:\n>> I assume we have such?\n> \n> Depends on what you wanna do.\n> For transactional systems, look at some of the stuff OSDL has done.\n> \n> For large geospatial type stuff, the government is a good source, like\n> www.usgs.gov or the fcc transmitter database.\n> \n> There are other ones out there. Really depends on what you wanna test.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Mon, 20 Feb 2006 12:47:48 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Need pointers to \"standard\" pg database(s) for" } ]
[ { "msg_contents": "Does anybody know if it is possible to use the statistics collected by\nPostgreSQL to do the following, and how?\n\n- view all locks held by a particular PostgreSQL session (including how to\ndetermine\n the session ID#)\n\n- determine effect of lock contention on overall database performance, as\nwell as the\n extent to which contention varies with overall database traffic\n\nI am using version 8.0.2 on Windows 2003.\n\n\n", "msg_date": "Fri, 17 Feb 2006 15:26:19 -0500", "msg_from": "\"Lane Van Ingen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Measuring Lock Performance" } ]
[ { "msg_contents": "I have some quite huge queries, inside functions, so debugging is kind \nof hard. But I have located the query that for some reason gets 4 times \nas slow after an analyze.\n\nBefore analyze the plan for the query is this:\nNested Loop (cost=16.80..34.33 rows=1 width=28)\n Join Filter: (ischildof(2, \"outer\".calendar) OR (hashed subplan))\n -> Nested Loop (cost=0.00..11.66 rows=1 width=32)\n -> Index Scan using t_events_eventype on t_events e \n(cost=0.00..5.82 rows=1 width=28)\n Index Cond: (eventtype = 1)\n Filter: (rrfreq IS NOT NULL)\n -> Index Scan using t_entities_pkey on t_entities te \n(cost=0.00..5.83 rows=1 width=4)\n Index Cond: (te.\"ID\" = \"outer\".entity)\n Filter: (partof = 'events'::name)\n -> Index Scan using t_entities_pkey on t_entities (cost=0.00..5.85 \nrows=1 width=4)\n Index Cond: (t_entities.\"ID\" = \"outer\".entity)\n Filter: ((haveaccess(createdby, responsible, \"class\", false) OR \nCASE WHEN (partof = 'contacts'::name) THEN ischildof(ancestorof(me()), \n\"ID\") ELSE false END) AND (subplan))\n SubPlan\n -> Function Scan on alleventoccurances (cost=0.00..12.50 \nrows=1000 width=8)\n SubPlan\n -> Seq Scan on t_attendees (cost=0.00..16.38 rows=170 width=4)\n Filter: ischildof(2, contact)\n\nIn reality this takes approximately 1.0s in the general case. After an \nanalyze the plan becomes:\n\nNested Loop (cost=2.09..4.82 rows=1 width=28)\n Join Filter: (\"inner\".\"ID\" = \"outer\".\"ID\")\n -> Hash Join (cost=2.09..3.59 rows=1 width=32)\n Hash Cond: (\"outer\".\"ID\" = \"inner\".entity)\n Join Filter: (ischildof(2, \"inner\".calendar) OR (hashed subplan))\n -> Seq Scan on t_entities (cost=0.00..1.46 rows=6 width=4)\n Filter: ((haveaccess(createdby, responsible, \"class\", \nfalse) OR CASE WHEN (partof = 'contacts'::name) THEN \nischildof(ancestorof(me()), \"ID\") ELSE false END) AND (subplan))\n SubPlan\n -> Function Scan on alleventoccurances \n(cost=0.00..12.50 rows=1000 width=8)\n -> Hash (cost=1.06..1.06 rows=2 width=28)\n -> Seq Scan on t_events e (cost=0.00..1.06 rows=2 width=28)\n Filter: ((rrfreq IS NOT NULL) AND (eventtype = 1))\n SubPlan\n -> Seq Scan on t_attendees (cost=0.00..1.02 rows=1 width=4)\n Filter: ischildof(2, contact)\n -> Seq Scan on t_entities te (cost=0.00..1.16 rows=5 width=4)\n Filter: (partof = 'events'::name)\n\nThis takes on approximately 4.5s. So obviously it has degraded.\n\nI count myself as a newbie here, so any hints on what goes on, why a \nplan might be chosen, and how I can make is better is appreciated. \nNaturally the I can provide scripts to set up all or parts of the \ndatabase if anyone like.\n\nregards\n\n-- \n//Fredrik Olsson\n Treyst AB\n +46-(0)19-362182\n [email protected]\n\n", "msg_date": "Sat, 18 Feb 2006 17:04:34 +0100", "msg_from": "Fredrik Olsson <[email protected]>", "msg_from_op": true, "msg_subject": "Force another plan." }, { "msg_contents": "Fredrik Olsson <[email protected]> writes:\n> I have some quite huge queries, inside functions, so debugging is kind \n> of hard. But I have located the query that for some reason gets 4 times \n> as slow after an analyze.\n\nCould we see EXPLAIN ANALYZE output for these cases, not just EXPLAIN?\nIt seems a bit premature to be discussing ways to \"force\" a plan choice\nwhen you don't even have a clear idea what's going wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 18 Feb 2006 13:36:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force another plan. 
" }, { "msg_contents": "Tom Lane skrev:\n> Fredrik Olsson <[email protected]> writes:\n> \n>> I have some quite huge queries, inside functions, so debugging is kind \n>> of hard. But I have located the query that for some reason gets 4 times \n>> as slow after an analyze.\n>> \n>\n> Could we see EXPLAIN ANALYZE output for these cases, not just EXPLAIN?\n> It seems a bit premature to be discussing ways to \"force\" a plan choice\n> when you don't even have a clear idea what's going wrong.\n> \nSorry about that, my fault. Here comes EXPLAIN ANALYZE:\n\nBefore VACUUM ANALYZE:\n===\nNested Loop (cost=16.80..34.33 rows=1 width=28) (actual\ntime=54.197..98.598 rows=1 loops=1)\n Join Filter: (ischildof(2, \"outer\".calendar) OR (hashed subplan))\n -> Nested Loop (cost=0.00..11.66 rows=1 width=32) (actual\ntime=0.307..0.458 rows=3 loops=1)\n -> Index Scan using t_events_eventype on t_events e\n(cost=0.00..5.82 rows=1 width=28) (actual time=0.241..0.307 rows=3 loops=1)\n Index Cond: (eventtype = 1)\n Filter: (rrfreq IS NOT NULL)\n -> Index Scan using t_entities_pkey on t_entities te\n(cost=0.00..5.83 rows=1 width=4) (actual time=0.035..0.039 rows=1 loops=3)\n Index Cond: (te.\"ID\" = \"outer\".entity)\n Filter: (partof = 'events'::name)\n -> Index Scan using t_entities_pkey on t_entities (cost=0.00..5.85\nrows=1 width=4) (actual time=28.445..28.447 rows=0 loops=3)\n Index Cond: (t_entities.\"ID\" = \"outer\".entity)\n Filter: ((haveaccess(createdby, responsible, \"class\", false) OR\nCASE WHEN (partof = 'contacts'::name) THEN ischildof(ancestorof(me()),\n\"ID\") ELSE false END) AND (subplan))\n SubPlan\n -> Function Scan on alleventoccurances (cost=0.00..12.50\nrows=1000 width=8) (actual time=19.745..19.745 rows=0 loops=3)\n SubPlan\n -> Seq Scan on t_attendees (cost=0.00..16.38 rows=170 width=4)\n(actual time=0.422..0.447 rows=2 loops=1)\n Filter: ischildof(2, contact)\nTotal runtime: 99.814 ms\n\nAfter VACUUM ANALYZE:\n===\nNested Loop (cost=2.11..4.92 rows=1 width=28) (actual\ntime=434.321..439.102 rows=1 loops=1)\n Join Filter: (\"inner\".\"ID\" = \"outer\".\"ID\")\n -> Hash Join (cost=2.11..3.67 rows=1 width=32) (actual\ntime=434.001..438.775 rows=1 loops=1)\n Hash Cond: (\"outer\".\"ID\" = \"inner\".entity)\n Join Filter: (ischildof(2, \"inner\".calendar) OR (hashed subplan))\n -> Seq Scan on t_entities (cost=0.00..1.49 rows=7 width=4)\n(actual time=404.539..409.302 rows=2 loops=1)\n Filter: ((haveaccess(createdby, responsible, \"class\",\nfalse) OR CASE WHEN (partof = 'contacts'::name) THEN\nischildof(ancestorof(me()), \"ID\") ELSE false END) AND (subplan))\n SubPlan\n -> Function Scan on alleventoccurances\n(cost=0.00..12.50 rows=1000 width=8) (actual time=27.871..27.871 rows=0\nloops=14)\n -> Hash (cost=1.07..1.07 rows=3 width=28) (actual\ntime=0.063..0.063 rows=3 loops=1)\n -> Seq Scan on t_events e (cost=0.00..1.07 rows=3\nwidth=28) (actual time=0.023..0.034 rows=3 loops=1)\n Filter: ((rrfreq IS NOT NULL) AND (eventtype = 1))\n SubPlan\n -> Seq Scan on t_attendees (cost=0.00..1.02 rows=1 width=4)\n(actual time=0.205..0.228 rows=2 loops=1)\n Filter: ischildof(2, contact)\n -> Seq Scan on t_entities te (cost=0.00..1.18 rows=6 width=4)\n(actual time=0.029..0.045 rows=6 loops=1)\n Filter: (partof = 'events'::name)\nTotal runtime: 440.385 ms\n\nAs I read it, the villain is the sequential sqan on t_entities with the\nhuge filter; haveacces() ...\nAnd that is understandable, doing haveaccess() on all rows is not good.\nA much better solution in this case would be to first get the set 
that\nconforms to (partof = 'events'::name), that would reduce the set to a\nthird. Secondly applying (eventtype=1) would reduce that to half. Then\nit is time to do the (haveaccess() ...).\n\nPerhaps a small explanation of the tables, and their intent is in order.\n\nWhat I have is one \"master table\" with entities (t_entitites), and two\nchild tables t_contacts and t_events. In a perfect world rt_contacts and\nt_events would have inherited from t_entities as they share muuch data,\nbut then I would not be able to have foreign key referencing just\nevents, or just contacts. So instead every row in events and contacts\nhave a corresponding one to one row in entities. The fourth table used\nin this query is t_attendees, that links well attendees for an event to\ncontacts. Here goes a simplified example (SQL says more then 1000 words):\n\nCREATE TABLE t_entities (\n \"ID\" integer PRIMARY KEY,\n \"createdby\" integer NOT NULL,\n \"class\" integer NOT NULL, -- Defines visibility for entity and is\nused by haveaccess()\n \"partof\" name NOT NULL\n);\n\nCREATE TABLE t_contacts (\n \"entity\" integer PRIMARY KEY REFERENCES t_entities (\"ID\"),\n \"undercontact\" integer REFERENCES t_contacts (\"entity\"), -- Tree\nstructure, used by haveaccess()\n \"name\" varchar(48)\n);\n\nALTER TABLE t_entities ADD FOREIGN KEY (\"createdby\")\n REFERENCES t_contacts (\"entity\");\n\nCREATE TABLE t_events (\n \"entity\" integer PRIMARY KEY REFERENCES t_entities (\"ID\"),\n \"calendar\" integer NOT NULL REFERENCES t_entities (\"ID\"),\n \"eventtype\" integer NOT NULL,\n \"start\" timestamptz,\n \"end\" timestamptz\n);\n\nCREATE TABLE t_attendees (\n \"event\" integer NOT NULL REFERENCES t_events (\"entity\"),\n \"contact\" integer NOT NULL REFERENCES t_contacts (\"entity\"),\n PRIMARY KEY (\"event\", \"contact\")\n);\n\nNo user have privileges to select, update or delete in any of these\ntables. They are instead accessed with views that are made updatable\nusing rules. These rules uses the function haveaccess() to sort out the\nrows that each user is allowed to see. Users are also contacts in the\nt_contacts table.\n\nWell the complete database, with a shell script for setting it up can as\nI said be provided if wanted.\n\n-- \n//Fredrik Olsson\n Treyst AB\n +46-(0)19-362182\n [email protected]\n\n\n", "msg_date": "Sun, 19 Feb 2006 13:38:58 +0100", "msg_from": "Fredrik Olsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Force another plan." }, { "msg_contents": "Fredrik Olsson <[email protected]> writes:\n> -> Seq Scan on t_entities (cost=0.00..1.49 rows=7 width=4)\n> (actual time=404.539..409.302 rows=2 loops=1)\n> Filter: ((haveaccess(createdby, responsible, \"class\",\n> false) OR CASE WHEN (partof = 'contacts'::name) THEN\n> ischildof(ancestorof(me()), \"ID\") ELSE false END) AND (subplan))\n> SubPlan\n> -> Function Scan on alleventoccurances\n> (cost=0.00..12.50 rows=1000 width=8) (actual time=27.871..27.871 rows=0\n> loops=14)\n\nThis seems to be your problem right here: evaluating that subplan for\neach row of t_entities is pretty expensive, and yet the planner's\nestimating a total cost of only 1.49 to run the scan. What PG version\nis this? AFAICT we've accounted for subplan costs in scan quals for\na long time, certainly since 7.4. Can you put together a self-contained\ntest case for this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Feb 2006 12:31:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force another plan. 
" }, { "msg_contents": "Tom Lane skrev:\n> Fredrik Olsson <[email protected]> writes:\n> \n>> -> Seq Scan on t_entities (cost=0.00..1.49 rows=7 width=4)\n>> (actual time=404.539..409.302 rows=2 loops=1)\n>> Filter: ((haveaccess(createdby, responsible, \"class\",\n>> false) OR CASE WHEN (partof = 'contacts'::name) THEN\n>> ischildof(ancestorof(me()), \"ID\") ELSE false END) AND (subplan))\n>> SubPlan\n>> -> Function Scan on alleventoccurances\n>> (cost=0.00..12.50 rows=1000 width=8) (actual time=27.871..27.871 rows=0\n>> loops=14)\n>> \n>\n> This seems to be your problem right here: evaluating that subplan for\n> each row of t_entities is pretty expensive, and yet the planner's\n> estimating a total cost of only 1.49 to run the scan. What PG version\n> is this? AFAICT we've accounted for subplan costs in scan quals for\n> a long time, certainly since 7.4. Can you put together a self-contained\n> test case for this?\n>\n> \nI have found the real bottle-neck, looks like I trusted PG to do a bit \ntoo much magic. Since all table accesses go through views, and the first \n11 columns always look the same, I did a \"proxy-view\" to simplify my \n\"real-views\" to:\nSELECT pv.*, ...\nThe \"proxy-view\" fetches from t_entities, and so does the \"real-views\" \nto do a filter on \"partof\". This resulted in two scans on t_entities \nwhen only one is needed. Skipping the \"proxy-view\" allows PG to chose \nthe best plan for every case. I guess I stressed the optimizer a bit too \nfar :).\n\nIs a self contained test-case for the old way with the \"proxy-view\" is \nstill wanted?\n\n-- \n//Fredrik Olsson\n Treyst AB\n +46-(0)19-362182\n [email protected]\n\n", "msg_date": "Mon, 20 Feb 2006 10:58:41 +0100", "msg_from": "Fredrik Olsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Force another plan." }, { "msg_contents": "Fredrik Olsson <[email protected]> writes:\n> Is a self contained test-case for the old way with the \"proxy-view\" is \n> still wanted?\n\nYes, something still seems funny there. And for that matter it wasn't\nclear what was wrong with your proxy view, either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Feb 2006 10:01:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force another plan. " } ]
[ { "msg_contents": "\n\nThe following query runs much slower than I would have expected. I ran it \nthrough EXPLAIN ANALYZE (results included after) and I don't understand why \nthe planner is doing what it is. All of the columns from the WHERE part of \nthe query are indexed and the indexes are being used. The number of rows \nbeing reported is equal to the size of the table though so it's really no \nbetter than just doing a sequential scan. This is running on Postgres 8.0.7 \nand the system has been freshly vaccumed with the statistics target set to \n800. Does any know why the query behaves like this? Does it have anything to \ndo with the OR statements in the where clause spanning two different tables? \nI tried an experiment where I split this into queries two queries using UNION \nand it ran in less than 1 ms. Which is a solution but I'm still curious why \nthe original was so slow.\n\n\nSELECT DISTINCT a.account_id, l.username, a.status, a.company, a.fax_num,\na.primary_phone, a.responsible_first, a.responsible_last FROM\n accounts a, logins l, supplemental_info i\n WHERE l.account_id=a.account_id and\n i.account_id=a.account_id and\n ((a.primary_phone = 'xxx-xxx-xxxx') OR (a.alternate_phone = 'xxx-xxx-xxxx') \nOR (i.contact_num = 'xxx-xxx-xxxx'))\n ORDER BY a.status, a.primary_phone, a.account_id;\n\n\nEXPLAIN ANALYZE results\n\n Unique (cost=47837.93..47838.02 rows=4 width=92) (actual \ntime=850.250..850.252 rows=1 loops=1)\n -> Sort (cost=47837.93..47837.94 rows=4 width=92) (actual \ntime=850.248..850.248 rows=1 loops=1)\n Sort Key: a.status, a.primary_phone, a.account_id, l.username, \na.company, a.fax_num, a.responsible_first, a.responsible_last\n -> Nested Loop (cost=0.00..47837.89 rows=4 width=92) (actual \ntime=610.641..850.222 rows=1 loops=1)\n -> Merge Join (cost=0.00..47818.70 rows=4 width=88) (actual \ntime=610.602..850.179 rows=1 loops=1)\n Merge Cond: (\"outer\".account_id = \"inner\".account_id)\n Join Filter: (((\"outer\".primary_phone)::text = \n'xxx-xxx-xxxx'::text) OR ((\"outer\".alternate_phone)::text = \n'xxx-xxx-xxxx'::text) OR ((\"inner\".contact_num)::text = \n'xxx-xxx-xxxx'::text))\n -> Index Scan using accounts_pkey on accounts a \n(cost=0.00..18423.73 rows=124781 width=95) (actual time=0.019..173.523 \nrows=124783 loops=1)\n -> Index Scan using supplemental_info_account_id_idx on \nsupplemental_info i (cost=0.00..15393.35 rows=124562 width=24) (actual \ntime=0.014..145.757 rows=124643 loops=1)\n -> Index Scan using logins_account_id_idx on logins l \n(cost=0.00..4.59 rows=2 width=20) (actual time=0.022..0.023rows=1 loops=1)\n Index Cond: (\"outer\".account_id = l.account_id)\n Total runtime: 850.429 ms\n\n", "msg_date": "Sun, 19 Feb 2006 08:58:12 -0500", "msg_from": "Emil Briggs <[email protected]>", "msg_from_op": true, "msg_subject": "Question about query planner" }, { "msg_contents": "Emil Briggs <[email protected]> writes:\n> Does any know why the query behaves like this? Does it have anything to \n> do with the OR statements in the where clause spanning two different tables? 
\n\nExactly.\n\n> SELECT DISTINCT a.account_id, l.username, a.status, a.company, a.fax_num,\n> a.primary_phone, a.responsible_first, a.responsible_last FROM\n> accounts a, logins l, supplemental_info i\n> WHERE l.account_id=a.account_id and\n> i.account_id=a.account_id and\n> ((a.primary_phone = 'xxx-xxx-xxxx') OR (a.alternate_phone = 'xxx-xxx-xxxx') \n> OR (i.contact_num = 'xxx-xxx-xxxx'))\n> ORDER BY a.status, a.primary_phone, a.account_id;\n\nThe system has to fetch all the rows of a, because any of them might\njoin to a row of i matching the i.contact_num condition, and conversely\nit has to fetch every row of i because any of them might join to a row\nof a matching one of the phone conditions. It is therefore necessary\nto effectively form the entire join of a and i; until you've done that\nthere is no way to eliminate any rows.\n\nI'm a bit surprised that it's using the indexes at all --- a hash join\nwith seqscan inputs would probably run faster. Try increasing work_mem\na bit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Feb 2006 13:15:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about query planner " } ]
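The UNION experiment mentioned in the question, spelled out as a sketch with the same tables and columns as the original query (UNION already removes duplicate rows, so the DISTINCT is not needed, and the ORDER BY ordinals mirror the original a.status, a.primary_phone, a.account_id):

    SELECT a.account_id, l.username, a.status, a.company, a.fax_num,
           a.primary_phone, a.responsible_first, a.responsible_last
      FROM accounts a
      JOIN logins l ON l.account_id = a.account_id
      JOIN supplemental_info i ON i.account_id = a.account_id
     WHERE a.primary_phone = 'xxx-xxx-xxxx'
    UNION
    SELECT a.account_id, l.username, a.status, a.company, a.fax_num,
           a.primary_phone, a.responsible_first, a.responsible_last
      FROM accounts a
      JOIN logins l ON l.account_id = a.account_id
      JOIN supplemental_info i ON i.account_id = a.account_id
     WHERE a.alternate_phone = 'xxx-xxx-xxxx'
    UNION
    SELECT a.account_id, l.username, a.status, a.company, a.fax_num,
           a.primary_phone, a.responsible_first, a.responsible_last
      FROM accounts a
      JOIN logins l ON l.account_id = a.account_id
      JOIN supplemental_info i ON i.account_id = a.account_id
     WHERE i.contact_num = 'xxx-xxx-xxxx'
    ORDER BY 3, 6, 1;

Each branch carries a selective single-table condition that can drive an index scan, so no branch has to form the entire accounts/supplemental_info join the way the OR version does.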
[ { "msg_contents": "Here's a simplified version of the schema:\n\nTable A has an ID field, an observation date, and other stuff. There are about 20K IDs and 3K observations per ID. Table B has a matching ID field, minimum and maximum dates, a code, and other stuff, about 0-50 records per ID. For a given ID, the dates in B never overlap. On A, the PK is (id, obsdate). On B, the PK is (id, mindate). I want\n\nSELECT a.id, b.code, AVG(other stuff) FROM A LEFT JOIN B ON a.id=b.id AND a.obsdate BETWEEN b.mindate AND b.maxdate GROUP BY 1,2;\n\nIs there a way to smarten the query to take advantage of the fact at most one record of B matches A? Also, I have a choice between using a LEFT JOIN or inserting dummy records into B to fill in the gaps in the covered dates, which would make exactly one matching record. Would this make a difference?\n\nThanks.\n", "msg_date": "Sun, 19 Feb 2006 20:06:12 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "How to optimize a JOIN with BETWEEN?" }, { "msg_contents": "Use a gist index. Easiest way would be to define a box with mindate at\none corner and maxdate at the other corner, and then search for\npoint(obsdate,obsdate) that lie with in the box.\n\nA more detailed explination is in the archives somewhere...\n\nOn Sun, Feb 19, 2006 at 08:06:12PM -0800, [email protected] wrote:\n> Here's a simplified version of the schema:\n> \n> Table A has an ID field, an observation date, and other stuff. There are about 20K IDs and 3K observations per ID. Table B has a matching ID field, minimum and maximum dates, a code, and other stuff, about 0-50 records per ID. For a given ID, the dates in B never overlap. On A, the PK is (id, obsdate). On B, the PK is (id, mindate). I want\n> \n> SELECT a.id, b.code, AVG(other stuff) FROM A LEFT JOIN B ON a.id=b.id AND a.obsdate BETWEEN b.mindate AND b.maxdate GROUP BY 1,2;\n> \n> Is there a way to smarten the query to take advantage of the fact at most one record of B matches A? Also, I have a choice between using a LEFT JOIN or inserting dummy records into B to fill in the gaps in the covered dates, which would make exactly one matching record. Would this make a difference?\n> \n> Thanks.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 21 Feb 2006 18:14:56 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize a JOIN with BETWEEN?" } ]
[ { "msg_contents": "Hi, \n\nI'm developing a search engine using the postgresql's databas. I've\nalready doing some tunnings looking increase the perform. \n\nNow, I'd like of do a realistic test of perfom with number X of queries\nfor know the performance with many queries. \n\nWhat the corret way to do this? \n\nThanks.\n\n", "msg_date": "Mon, 20 Feb 2006 10:50:36 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Creating a correct and real benchmark" }, { "msg_contents": "\n> I'm developing a search engine using the postgresql's databas. I've\n> already doing some tunnings looking increase the perform.\n>\n> Now, I'd like of do a realistic test of perfom with number X of queries\n> for know the performance with many queries.\n>\n> What the corret way to do this?\n\n\n\tI guess the only way to know how it will perform with your own \napplication is to benchmark it with queries coming from your own \napplication. You can create a test suite with a number of typical queries \nand use your favourite scripting language to spawn a number of threads and \nhammer the database. I find it interesting to measure the responsiveness \nof the server while torturing it, simply by measuring the time it takes to \nrespond to a simple query and graphing it. Also you should not have N \nthreads issue the exact same queries, because then you will hit a too \nsmall dataset. Introduce some randomness in the testing, for instance. \nBenchmarking from another machine makes sure the test client's CPU usage \nis not a part of the problem.\n", "msg_date": "Tue, 21 Feb 2006 01:14:27 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating a correct and real benchmark" }, { "msg_contents": "PFC wrote:\n> \n>> I'm developing a search engine using the postgresql's databas. I've\n>> already doing some tunnings looking increase the perform.\n>>\n>> Now, I'd like of do a realistic test of perfom with number X of queries\n>> for know the performance with many queries.\n>>\n>> What the corret way to do this?\n> \n> \n> \n> I guess the only way to know how it will perform with your own \n> application is to benchmark it with queries coming from your own \n> application. You can create a test suite with a number of typical \n> queries and use your favourite scripting language to spawn a number of \n> threads and hammer the database. I find it interesting to measure the \n> responsiveness of the server while torturing it, simply by measuring \n> the time it takes to respond to a simple query and graphing it. Also \n> you should not have N threads issue the exact same queries, because \n> then you will hit a too small dataset. Introduce some randomness in the \n> testing, for instance. Benchmarking from another machine makes sure the \n> test client's CPU usage is not a part of the problem.\n\nThe other advice on top of this is don't just import a small amount of data.\n\nIf your application is going to end up with 200,000 rows - then test \nwith 200,000 rows or more so you know exactly how it will handle under \n\"production\" conditions.\n", "msg_date": "Tue, 21 Feb 2006 11:21:52 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating a correct and real benchmark" }, { "msg_contents": "Thanks for advises :-D.\n\nMarcos\n\n", "msg_date": "Fri, 24 Feb 2006 17:06:20 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creating a correct and real benchmark" } ]
[ { "msg_contents": "Hi,\n I have query where I do two inline queries (which involves grouping) and then join them with an outer join.\nThe individual queries run in 50-300 ms. However the optimizer is choosing a nested loop to join them rather than a Hash join\ncausing the complete query to take 500+ seconds. It expects that it will get 1 row out from each of the sources, but here is gets\nseveral thousand rows.\n\nIs there any way I can get a hash join used on the outer join, while preserving the nested loops.\n\n\nexplain analyze\nselect o1.objaddr, o1.fieldname, o1.objsig,\no1.totmem, o1.cnt,\no2.totmem, o2.cnt\nfrom\n( select min(ao.objaddr) as objaddr, count(*) as cnt,\n sum(ao.totmem) as totmem, ao.objsig, ar.fieldname, ao.objtype\n from jam_heapobj ao, jam_heaprel ar\n where ar.heap_id = 1 and ar.parentaddr = 0 and ar.fieldname = 'K'\n and ao.heap_id = ar.heap_id and ao.objaddr = ar.childaddr\n group by ao.objsig, ar.fieldname, ao.objtype) o1\nleft outer join\n(select min(bo.objaddr) as objaddr, count(*) as cnt,\n sum(bo.totmem) as totmem, bo.objsig, br.fieldname, bo.objtype\n from jam_heapobj bo, jam_heaprel br\n where br.heap_id = 0 and br.parentaddr = 0 and br.fieldname = 'K'\n and bo.heap_id = br.heap_id and bo.objaddr = br.childaddr\ngroup by bo.objsig, br.fieldname, bo.objtype) o2\non ( o2.objsig = o1.objsig and o2.objtype = o1.objtype\n and o2.fieldname = o1.fieldname)\n order by o1.totmem - coalesce(o2.totmem,0) desc;\n\n Sort (cost=16305.41..16305.42 rows=1 width=562) (actual time=565997.769..566016.255 rows=6115 loops=1)\n Sort Key: (o1.totmem - COALESCE(o2.totmem, 0::bigint))\n ->Nested Loop Left Join (cost=16305.22..16305.40 rows=1 width=562) (actual time=612.631..565896.047 rows=6115 loops=1)\n Join Filter: (((\"inner\".objsig)::text = (\"outer\".objsig)::text) AND ((\"inner\".objtype)::text = (\"outer\".objtype)::text) AND ((\"inner\".fieldname)::text = (\"outer\".fieldname)::text))\n ->Subquery Scan o1 (cost=12318.12..12318.15 rows=1 width=514) (actual time=309.659..413.311 rows=6115 loops=1)\n ->HashAggregate (cost=12318.12..12318.14 rows=1 width=54) (actual time=309.649..367.206 rows=6115 loops=1)\n ->Nested Loop (cost=0.00..12317.90 rows=10 width=54) (actual time=0.243..264.116 rows=6338 loops=1)\n ->Index Scan using jam_heaprel_n1 on jam_heaprel ar (cost=0.00..12275.00 rows=7 width=19) (actual time=0.176..35.780 rows=6338 loops=1)\n Index Cond: ((heap_id = 1) AND (parentaddr = 0))\n Filter: ((fieldname)::text = 'K'::text)\n ->Index Scan using jam_heapobj_u1 on jam_heapobj ao (cost=0.00..6.10 rows=2 width=51) (actual time=0.019..0.022 rows=1 loops=6338)\n Index Cond: ((ao.heap_id = 1) AND (ao.objaddr = \"outer\".childaddr))\n ->Subquery Scan o2 (cost=3987.10..3987.13 rows=1 width=514) (actual time=0.062..75.171 rows=6038 loops=6115)\n ->HashAggregate (cost=3987.10..3987.12 rows=1 width=54) (actual time=0.056..36.469 rows=6038 loops=6115)\n ->Nested Loop (cost=0.00..3987.05 rows=2 width=54) (actual time=0.145..257.876 rows=6259 loops=1)\n ->Index Scan using jam_heaprel_n1 on jam_heaprel br (cost=0.00..3974.01 rows=3 width=19) (actual time=0.074..35.124 rows=6259 loops=1)\n Index Cond: ((heap_id = 0) AND (parentaddr = 0))\n Filter: ((fieldname)::text = 'K'::text)\n ->Index Scan using jam_heapobj_u1 on jam_heapobj bo (cost=0.00..4.33 rows=1 width=51) (actual time=0.018..0.022 rows=1 loops=6259)\n Index Cond: ((bo.heap_id = 0) AND (bo.objaddr = \"outer\".childaddr))\n Total runtime: 566044.187 ms\n(21 rows)\n\nRegards,\n\nVirag\n\n\n\n\n\n\n\nHi,\n    I have query 
", "msg_date": "Mon, 20 Feb 2006 21:13:34 -0800", "msg_from": "\"Virag Saksena\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cost Issue - How do I force a Hash Join" }, { "msg_contents": "\"Virag Saksena\" <[email protected]> writes:\n> The individual queries run in 50-300 ms. However the optimizer is =\n> choosing a nested loop to join them rather than a Hash join\n> causing the complete query to take 500+ seconds. It expects that it will =\n> get 1 row out from each of the sources, but here is gets\n> several thousand rows.\n\nThe best approach is to see if you can't fix that estimation error.\nAre the stats up to date on these tables? If so, maybe raising the\nstatistics targets would help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Feb 2006 00:35:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost Issue - How do I force a Hash Join " }, { "msg_contents": "\"Virag Saksena\" <[email protected]> writes:\n>> The individual queries run in 50-300 ms. However the optimizer is\n>> choosing a nested loop to join them rather than a Hash join...\n\nI have what appears to be the identical problem.\n\nThis is a straightforward query that should be fairly quick, but takes about 30 minutes. It's a query across three tables, call them A, B, and C. The tables are joined on indexed columns.\n\nHere's a quick summary:\n\n Table A -----> Table B -----> Table C\n A_ID B_ID C_ID\n A_ID NAME\n C_ID\n\nTables A and B have 6 million rows each. Table C is small: 67 names, no repeats. All columns involved in the join are indexed.\n\nSummary: \n 1. Query B only: 2.7 seconds, 302175 rows returned\n 2. Join B and C: 4.3 seconds, exact same answer\n 3. Join A and B: 7.2 minutes, exact same answer\n 4. Join A, B, C: 32.7 minutes, exact same answer\n\nLooking at these:\n\n Query #1 is doing the real work: finding the rows of interest.\n\n Queries #1 and #2 ought to be virtually identical, since Table C has\n just one row with C_ID = 9, but the time almost doubles.\n\n Query #3 should take a bit longer than Query #1 because it has to join\n 300K rows, but the indexes should make this take just a few seconds,\n certainly well under a minute. \n\n Query #4 should be identical to Query #3, again because there's only\n one row in Table C. 32 minutes is pretty horrible for such a\n straightforward query.\n\nIt looks to me like the problem is the use of nested loops when a hash join should be used, but I'm no expert at query planning.\n\nThis is psql 8.0.3. Table definitions are at the end. (Table and column names are altered to protect the guilty, otherwise these are straight from Postgres.) I ran \"vacuum full analyze\" after the last data were added. 
Hardware is a Dell, 2-CPU Xeon, 4 GB memory, database is on a single SATA 7200RPM disk.\n\nThanks,\nCraig\n\n----------------------------\n\n\nQUERY #1:\n---------\n\nexplain analyze select B.A_ID from B where B.B_ID = 9;\n\n Index Scan using i_B_B_ID on B (cost=0.00..154401.36 rows=131236 width=4) (actual time=0.158..1387.251 rows=302175 loops=1)\n Index Cond: (B_ID = 9)\n Total runtime: 2344.053 ms\n\n\nQUERY #2:\n---------\n\nexplain analyze select B.A_ID from B join C on (B.C_ID = C.C_ID) where C.name = 'Joe';\n\n Nested Loop (cost=0.00..258501.92 rows=177741 width=4) (actual time=0.349..3392.532 rows=302175 loops=1)\n -> Seq Scan on C (cost=0.00..12.90 rows=1 width=4) (actual time=0.232..0.336 rows=1 loops=1)\n Filter: ((name)::text = 'Joe'::text)\n -> Index Scan using i_B_C_ID on B (cost=0.00..254387.31 rows=328137 width=8) (actual time=0.102..1290.002 rows=302175 loops=1)\n Index Cond: (B.C_ID = \"outer\".C_ID)\n Total runtime: 4373.916 ms\n\n\nQUERY #3:\n---------\n\nexplain analyze\n select A.A_ID from A\n join B on (A.A_ID = B.A_ID) \n where B.B_ID = 9;\n\n Nested Loop (cost=0.00..711336.41 rows=131236 width=4) (actual time=37.118..429419.347 rows=302175 loops=1)\n -> Index Scan using i_B_B_ID on B (cost=0.00..154401.36 rows=131236 width=4) (actual time=27.344..8858.489 rows=302175 loops=1)\n Index Cond: (B_ID = 9)\n -> Index Scan using pk_A_test on A (cost=0.00..4.23 rows=1 width=4) (actual time=1.372..1.376 rows=1 loops=302175)\n Index Cond: (A.A_ID = \"outer\".A_ID)\n Total runtime: 430467.686 ms\n\n\nQUERY #4:\n---------\nexplain analyze\n select A.A_ID from A\n join B on (A.A_ID = B.A_ID)\n join C on (B.B_ID = C.B_ID)\n where C.name = 'Joe';\n\n Nested Loop (cost=0.00..1012793.38 rows=177741 width=4) (actual time=70.184..1960112.247 rows=302175 loops=1)\n -> Nested Loop (cost=0.00..258501.92 rows=177741 width=4) (actual time=52.114..17753.638 rows=302175 loops=1)\n -> Seq Scan on C (cost=0.00..12.90 rows=1 width=4) (actual time=0.109..0.176 rows=1 loops=1)\n Filter: ((name)::text = 'Joe'::text)\n -> Index Scan using i_B_B_ID on B (cost=0.00..254387.31 rows=328137 width=8) (actual time=51.985..15566.896 rows=302175 loops=1)\n Index Cond: (B.B_ID = \"outer\".B_ID)\n -> Index Scan using pk_A_test on A (cost=0.00..4.23 rows=1 width=4) (actual time=6.407..6.412 rows=1 loops=302175)\n Index Cond: (A.A_ID = \"outer\".A_ID)\n Total runtime: 1961200.079 ms\n\n\nTABLE DEFINITIONS:\n------------------\n\nxxx => \\d a\n Table \"xxx.a\"\n Column | Type | Modifiers \n-------------------+------------------------+-----------\n a_id | integer | not null\n ... more columns\n\nIndexes:\n \"pk_a_id\" PRIMARY KEY, btree (a_id)\n ... more indexes on other columns\n\nxxx => \\d b\n Table \"xxx.b\"\n Column | Type | Modifiers \n--------------------------+------------------------+-----------\n b_id | integer | not null\n a_id | integer | not null\n c_id | integer | not null\n ... more columns\n\nIndexes:\n \"b_pkey\" PRIMARY KEY, btree (b_id)\n \"i_b_a_id\" btree (a_id)\n \"i_b_c_id\" btree (c_id)\n\n\nxxx=> \\d c\n Table \"xxx.c\"\n Column | Type | Modifiers \n---------------+------------------------+-----------\n c_id | integer | not null\n name | character varying(200) | \n ... more columns\nIndexes:\n \"c_pkey\" PRIMARY KEY, btree (c_id)\n", "msg_date": "Mon, 20 Feb 2006 21:54:14 -0800", "msg_from": "\"Craig A. 
James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost Issue - How do I force a Hash Join" }, { "msg_contents": "Tables are analyzed, though I would love to find a way to increase it's\naccuracy of statistics\nTried raising the statistics target upto 100, but it did not help. Should I\nbump it even more\n\nHowever I found that if I add depth to the group by clauses, it somehow\ntells the optimizer that it would get more than 1 row\nand it goes to a Hash Join ....\nFor this query, only rows with one value of depth are accessed, so we are\nokay ... but I would like to see if there is some other\nway I can get a better approximation for the costs\n\n Sort (cost=25214.36..25214.39 rows=10 width=958) (actual\ntime=9798.860..9815.670 rows=6115 loops=1)\n Sort Key: (o1.totmem - COALESCE(o2.totmem, 0::bigint))\n ->Hash Left Join (cost=25213.83..25214.19 rows=10 width=958) (actual\ntime=8526.248..9755.721 rows=6115 loops=1)\n Hash Cond: (((\"outer\".objsig)::text = (\"inner\".objsig)::text) AND\n((\"outer\".objtype)::text = (\"inner\".objtype)::text) AND\n((\"outer\".fieldname)::text = (\"inner\".fieldname)::text))\n ->Subquery Scan o1 (cost=18993.48..18993.66 rows=10 width=990) (actual\ntime=6059.880..6145.223 rows=6115 loops=1)\n ->HashAggregate (cost=18993.48..18993.56 rows=10 width=46) (actual\ntime=6059.871..6094.897 rows=6115 loops=1)\n ->Nested Loop (cost=0.00..18993.22 rows=15 width=46) (actual\ntime=45.510..5980.807 rows=6338 loops=1)\n ->Index Scan using jam_heaprel_n1 on jam_heaprel ar\n(cost=0.00..18932.01 rows=10 width=19) (actual time=45.374..205.520\nrows=6338 loops=1)\n Index Cond: ((heap_id = 1) AND (parentaddr = 0))\n Filter: ((fieldname)::text = 'K'::text)\n ->Index Scan using jam_heapobj_u1 on jam_heapobj ao\n(cost=0.00..6.10 rows=2 width=43) (actual time=0.885..0.890 rows=1\nloops=6338)\n Index Cond: ((ao.heap_id = 1) AND (ao.objaddr =\n\"outer\".childaddr))\n ->Hash (cost=6220.34..6220.34 rows=2 width=982) (actual\ntime=2466.178..2466.178 rows=0 loops=1)\n ->Subquery Scan o2 (cost=6220.30..6220.34 rows=2 width=982) (actual\ntime=2225.242..2433.744 rows=6038 loops=1)\n ->HashAggregate (cost=6220.30..6220.32 rows=2 width=46) (actual\ntime=2225.233..2366.890 rows=6038 loops=1)\n ->Nested Loop (cost=0.00..6220.27 rows=2 width=46) (actual\ntime=0.449..2149.257 rows=6259 loops=1)\n ->Index Scan using jam_heaprel_n1 on jam_heaprel br\n(cost=0.00..6202.89 rows=4 width=19) (actual time=0.296..51.310 rows=6259\nloops=1)\n Index Cond: ((heap_id = 0) AND (parentaddr = 0))\n Filter: ((fieldname)::text = 'K'::text)\n ->Index Scan using jam_heapobj_u1 on jam_heapobj bo\n(cost=0.00..4.33 rows=1 width=43) (actual time=0.294..0.300 rows=1\nloops=6259)\n Index Cond: ((bo.heap_id = 0) AND (bo.objaddr =\n\"outer\".childaddr))\n Total runtime: 9950.192 ms\n\nRegards,\n\nVirag\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Virag Saksena\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, February 20, 2006 9:35 PM\nSubject: Re: [PERFORM] Cost Issue - How do I force a Hash Join\n\n\n> \"Virag Saksena\" <[email protected]> writes:\n> > The individual queries run in 50-300 ms. However the optimizer is =\n> > choosing a nested loop to join them rather than a Hash join\n> > causing the complete query to take 500+ seconds. 
It expects that it will\n=\n> > get 1 row out from each of the sources, but here is gets\n> > several thousand rows.\n>\n> The best approach is to see if you can't fix that estimation error.\n> Are the stats up to date on these tables? If so, maybe raising the\n> statistics targets would help.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Mon, 20 Feb 2006 22:33:39 -0800", "msg_from": "\"Virag Saksena\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost Issue - How do I force a Hash Join " } ]
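For reference, a sketch of the per-column form of the statistics-target change discussed in this thread, using the columns that appear in the misestimated index conditions; the value 200 is arbitrary (simply higher than the 100 already tried), and a fresh ANALYZE is needed for the new target to take effect:

    ALTER TABLE jam_heaprel ALTER COLUMN heap_id    SET STATISTICS 200;
    ALTER TABLE jam_heaprel ALTER COLUMN parentaddr SET STATISTICS 200;
    ALTER TABLE jam_heaprel ALTER COLUMN fieldname  SET STATISTICS 200;
    ANALYZE jam_heaprel;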
[ { "msg_contents": "hi,\ni have btree index on a text type field. i want see rows which starts with\ncertain characters on that field. so i write a query like this:\n\nSELECT * FROM mytable WHERE myfield LIKE 'john%'\n\nsince this condition is from start of the field, query planner should use\nindex to find such elements but explain command shows me it will do a\nsequential scan.\n\nis this lack of a feature or i am wrong somewhere?\n\nhi,i have btree index on a text type field. i want see rows which starts with certain characters on that field. so i write a query like this:SELECT * FROM mytable WHERE myfield LIKE 'john%'since this condition is from start of the field, query planner should use index to find such elements but explain command shows me it will do a sequential scan.\nis this lack of a feature or i am wrong somewhere?", "msg_date": "Tue, 21 Feb 2006 17:57:12 +0200", "msg_from": "\"Ibrahim Tekin\" <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE query on indexes" }, { "msg_contents": "On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> hi,\n> i have btree index on a text type field. i want see rows which starts\n> with certain characters on that field. so i write a query like this:\n> \n> SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> \n> since this condition is from start of the field, query planner should\n> use index to find such elements but explain command shows me it will\n> do a sequential scan. \n> \n> is this lack of a feature or i am wrong somewhere?\n\nThis is an artifact of how PostgreSQL handles locales other than ASCII.\n\nIf you want such a query to use an index, you need to back up your\ndatabase, and re-initdb with --locale=C as an argument. Note that you\nthen will NOT get locale specific matching and sorting.\n", "msg_date": "Tue, 21 Feb 2006 10:18:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> > hi,\n> > i have btree index on a text type field. i want see rows which starts\n> > with certain characters on that field. so i write a query like this:\n> > \n> > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> > \n> > since this condition is from start of the field, query planner should\n> > use index to find such elements but explain command shows me it will\n> > do a sequential scan. \n> > \n> > is this lack of a feature or i am wrong somewhere?\n> \n> This is an artifact of how PostgreSQL handles locales other than ASCII.\n> \n> If you want such a query to use an index, you need to back up your\n> database, and re-initdb with --locale=C as an argument.\n\n... or you can choose to create an index with the text_pattern_ops\noperator class, which would be used in a LIKE constraint regardless of\nlocale.\n\nhttp://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 21 Feb 2006 13:34:16 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "On Tue, 2006-02-21 at 10:34, Alvaro Herrera wrote:\n> Scott Marlowe wrote:\n> > On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> > > hi,\n> > > i have btree index on a text type field. i want see rows which starts\n> > > with certain characters on that field. 
so i write a query like this:\n> > > \n> > > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> > > \n> > > since this condition is from start of the field, query planner should\n> > > use index to find such elements but explain command shows me it will\n> > > do a sequential scan. \n> > > \n> > > is this lack of a feature or i am wrong somewhere?\n> > \n> > This is an artifact of how PostgreSQL handles locales other than ASCII.\n> > \n> > If you want such a query to use an index, you need to back up your\n> > database, and re-initdb with --locale=C as an argument.\n> \n> ... or you can choose to create an index with the text_pattern_ops\n> operator class, which would be used in a LIKE constraint regardless of\n> locale.\n> \n> http://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n\nGood point. I tend to view the world from the perspective of the 7.4\nand before user...\n\n", "msg_date": "Tue, 21 Feb 2006 10:42:24 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "On Tue, Feb 21, 2006 at 05:57:12PM +0200, Ibrahim Tekin wrote:\n> i have btree index on a text type field. i want see rows which starts with\n> certain characters on that field. so i write a query like this:\n> SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> since this condition is from start of the field, query planner should use\n> index to find such elements but explain command shows me it will do a\n> sequential scan.\n> is this lack of a feature or i am wrong somewhere?\n\nIs the query fast enough? How big is your table? What does explain\nanalyze select tell you?\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Tue, 21 Feb 2006 12:40:48 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "my database encoding is unicode.\ni have two table, one is 3.64gb on hdd and has 2.2 million records. it takes\n140 secs to run on my AMD Turion 64 M 800MHz/1GB laptop.\nsecond table is 1.2gb, 220000 records, and takes 56 secs to run.\n\nexplain says 'Seq Scan on mytable, ..'\n\nOn 2/21/06, [email protected] <[email protected]> wrote:\n>\n> On Tue, Feb 21, 2006 at 05:57:12PM +0200, Ibrahim Tekin wrote:\n> > i have btree index on a text type field. i want see rows which starts\n> with\n> > certain characters on that field. so i write a query like this:\n> > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> > since this condition is from start of the field, query planner should\n> use\n> > index to find such elements but explain command shows me it will do a\n> > sequential scan.\n> > is this lack of a feature or i am wrong somewhere?\n>\n> Is the query fast enough? How big is your table? What does explain\n> analyze select tell you?\n>\n> Cheers,\n> mark\n>\n> --\n> [email protected] / [email protected] / [email protected]\n> __________________________\n> . . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n> |\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ |\n> | | | | | \\ | \\ |__ . | | .|. 
|__ |__ | \\ |__ | Ottawa, Ontario,\n> Canada\n>\n> One ring to rule them all, one ring to find them, one ring to bring them\n> all\n> and in the darkness bind them...\n>\n> http://mark.mielke.cc/\n>\n>\n\nmy database encoding is unicode.\ni have two table, one is 3.64gb on hdd and has 2.2 million records. it takes 140 secs to run on my AMD Turion 64 M 800MHz/1GB laptop.second table is 1.2gb, 220000 records, and takes 56 secs to run.explain says 'Seq Scan on mytable, ..'\nOn 2/21/06, [email protected] <[email protected]> wrote:\nOn Tue, Feb 21, 2006 at 05:57:12PM +0200, Ibrahim Tekin wrote:> i have btree index on a text type field. i want see rows which starts with\n> certain characters on that field. so i write a query like this:>     SELECT * FROM mytable WHERE myfield LIKE 'john%'> since this condition is from start of the field, query planner should use> index to find such elements but explain command shows me it will do a\n> sequential scan.> is this lack of a feature or i am wrong somewhere?Is the query fast enough? How big is your table? What does explainanalyze select tell you?Cheers,mark--\[email protected] / [email protected] / [email protected]     __________________________.  .  _  ._  . .   .__    .  . ._. .__ .   . . .__  | Neighbourhood Coder\n|\\/| |_| |_| |/    |_     |\\/|  |  |_  |   |/  |_   ||  | | | | \\ | \\   |__ .  |  | .|. |__ |__ | \\ |__  | Ottawa, Ontario, Canada  One ring to rule them all, one ring to find them, one ring to bring them all\n                       and in the darkness bind them...                           http://mark.mielke.cc/", "msg_date": "Tue, 21 Feb 2006 22:12:17 +0200", "msg_from": "\"Ibrahim Tekin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "this trick did the job.\nthanks.\n\nOn 2/21/06, Alvaro Herrera <[email protected]> wrote:\n>\n> Scott Marlowe wrote:\n> > On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> > > hi,\n> > > i have btree index on a text type field. i want see rows which starts\n> > > with certain characters on that field. so i write a query like this:\n> > >\n> > > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> > >\n> > > since this condition is from start of the field, query planner should\n> > > use index to find such elements but explain command shows me it will\n> > > do a sequential scan.\n> > >\n> > > is this lack of a feature or i am wrong somewhere?\n> >\n> > This is an artifact of how PostgreSQL handles locales other than ASCII.\n> >\n> > If you want such a query to use an index, you need to back up your\n> > database, and re-initdb with --locale=C as an argument.\n>\n> ... or you can choose to create an index with the text_pattern_ops\n> operator class, which would be used in a LIKE constraint regardless of\n> locale.\n>\n> http://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n>\n> --\n> Alvaro Herrera\n> http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n>\n\nthis trick did the job. thanks.On 2/21/06, Alvaro Herrera <[email protected]> wrote:\nScott Marlowe wrote:> On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:> > hi,\n> > i have btree index on a text type field. i want see rows which starts> > with certain characters on that field. 
so i write a query like this:> >> > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> >> > since this condition is from start of the field, query planner should> > use index to find such elements but explain command shows me it will> > do a sequential scan.> >\n> > is this lack of a feature or i am wrong somewhere?>> This is an artifact of how PostgreSQL handles locales other than ASCII.>> If you want such a query to use an index, you need to back up your\n> database, and re-initdb with --locale=C as an argument.... or you can choose to create an index with the text_pattern_opsoperator class, which would be used in a LIKE constraint regardless oflocale.\nhttp://www.postgresql.org/docs/8.1/static/indexes-opclass.html--Alvaro Herrera                                \nhttp://www.CommandPrompt.com/The PostgreSQL Company - Command Prompt, Inc.", "msg_date": "Tue, 21 Feb 2006 22:28:09 +0200", "msg_from": "\"Ibrahim Tekin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "Hi,\n\nCan this technique work with case insensitive ILIKE?\n\nIt didn't seem to use the index when I used ILIKE instead of LIKE.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Feb 21, 2006, at 1:28 PM, Ibrahim Tekin wrote:\n\n> this trick did the job.\n> thanks.\n>\n> On 2/21/06, Alvaro Herrera <[email protected]> wrote:\n> Scott Marlowe wrote:\n> > On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> > > hi,\n> > > i have btree index on a text type field. i want see rows which \n> starts\n> > > with certain characters on that field. so i write a query like \n> this:\n> > >\n> > > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> > >\n> > > since this condition is from start of the field, query planner \n> should\n> > > use index to find such elements but explain command shows me it \n> will\n> > > do a sequential scan.\n> > >\n> > > is this lack of a feature or i am wrong somewhere?\n> >\n> > This is an artifact of how PostgreSQL handles locales other than \n> ASCII.\n> >\n> > If you want such a query to use an index, you need to back up your\n> > database, and re-initdb with --locale=C as an argument.\n>\n> ... or you can choose to create an index with the text_pattern_ops\n> operator class, which would be used in a LIKE constraint regardless of\n> locale.\n>\n> http://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n>\n> --\n> Alvaro Herrera http:// \n> www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n>", "msg_date": "Wed, 22 Feb 2006 08:48:43 -0700", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE query on indexes" }, { "msg_contents": "hi,\n\ni ran a query with ILIKE but it doesn't use the index.\n\nbut i tried following method, and it worked. there is 3 extra lower()\noverhead but i don't think it will effect the performance.\n\nCREATE INDEX index_name ON mytable (lower(column) varchar_pattern_ops);\n\nSELECT * FROM mytable WHERE lower(column) LIKE lower('beginswith%')\n\nif insert operations are high in database. 
you use only this index to search\ncase sensitive.\n\nsay you want this:\nSELECT * FROM mytable WHERE column LIKE 'beGinsWith%'\n\nwrite this:\nSELECT * FROM mytable WHERE lower(column) LIKE lower('beGinsWith%') AND\ncolumn LIKE 'beGinsWith%'\n\nthan query planner will search on index, than scan the resulting bitmap\nheap.\n\n\nOn 2/22/06, Brendan Duddridge <[email protected]> wrote:\n>\n> Hi,\n> Can this technique work with case insensitive ILIKE?\n>\n> It didn't seem to use the index when I used ILIKE instead of LIKE.\n> Thanks,\n> *\n> *____________________________________________________________________\n> *Brendan Duddridge* | CTO | 403-277-5591 x24 | [email protected]\n> *\n> *ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>\n> On Feb 21, 2006, at 1:28 PM, Ibrahim Tekin wrote:\n>\n> this trick did the job.\n> thanks.\n>\n> On 2/21/06, Alvaro Herrera <[email protected]> wrote:\n> >\n> > Scott Marlowe wrote:\n> > > On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> > > > hi,\n> > > > i have btree index on a text type field. i want see rows which\n> > starts\n> > > > with certain characters on that field. so i write a query like this:\n> > > >\n> > > > SELECT * FROM mytable WHERE myfield LIKE 'john%'\n> > > >\n> > > > since this condition is from start of the field, query planner\n> > should\n> > > > use index to find such elements but explain command shows me it will\n> > > > do a sequential scan.\n> > > >\n> > > > is this lack of a feature or i am wrong somewhere?\n> > >\n> > > This is an artifact of how PostgreSQL handles locales other than\n> > ASCII.\n> > >\n> > > If you want such a query to use an index, you need to back up your\n> > > database, and re-initdb with --locale=C as an argument.\n> >\n> > ... or you can choose to create an index with the text_pattern_ops\n> > operator class, which would be used in a LIKE constraint regardless of\n> > locale.\n> >\n> > http://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n> >\n> > --\n> > Alvaro Herrera http://www.CommandPrompt.com/\n> > The PostgreSQL Company - Command Prompt, Inc.\n> >\n>\n>\n>\n>\n\nhi,i ran a query with ILIKE but it doesn't use the index.but i tried following method, and it worked. there is 3 extra lower() overhead but i don't think it will effect the performance.CREATE INDEX index_name ON mytable (lower(column) varchar_pattern_ops);\nSELECT * FROM mytable WHERE lower(column) LIKE lower('beginswith%')if insert operations are high in database. you use only this index to search case sensitive.say you want this:SELECT * FROM mytable WHERE column LIKE 'beGinsWith%'\nwrite this:SELECT * FROM mytable WHERE lower(column) LIKE lower('beGinsWith%') AND column LIKE 'beGinsWith%'than query planner will search on index, than scan the resulting bitmap heap.\nOn 2/22/06, Brendan Duddridge <[email protected]> wrote:\nHi,Can this technique work with case insensitive ILIKE?It didn't seem to use the index when I used ILIKE instead of LIKE.Thanks, \n____________________________________________________________________Brendan Duddridge | CTO | 403-277-5591 x24 |  \[email protected] \nClickSpace Interactive Inc. Suite L100, 239 - 10th Ave. SE Calgary, AB  T2G 0V9 \nhttp://www.clickspace.com  On Feb 21, 2006, at 1:28 PM, Ibrahim Tekin wrote:\nthis trick did the job. thanks.On 2/21/06, Alvaro Herrera <\[email protected]> wrote: Scott Marlowe wrote:> On Tue, 2006-02-21 at 09:57, Ibrahim Tekin wrote:\n> > hi, > > i have btree index on a text type field. 
i want see rows which starts> > with certain characters on that field. so i write a query like this:> >> > SELECT * FROM mytable WHERE myfield LIKE 'john%' \n> >> > since this condition is from start of the field, query planner should> > use index to find such elements but explain command shows me it will> > do a sequential scan.> > \n> > is this lack of a feature or i am wrong somewhere?>> This is an artifact of how PostgreSQL handles locales other than ASCII.>> If you want such a query to use an index, you need to back up your \n> database, and re-initdb with --locale=C as an argument.... or you can choose to create an index with the text_pattern_opsoperator class, which would be used in a LIKE constraint regardless oflocale. \nhttp://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n--Alvaro Herrera                                 http://www.CommandPrompt.com/The PostgreSQL Company - Command Prompt, Inc.", "msg_date": "Thu, 23 Feb 2006 14:52:35 +0200", "msg_from": "\"Ibrahim Tekin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE query on indexes" } ]
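Putting the two suggestions in the thread above together, a minimal sketch of the case-insensitive prefix search; the table and column names here are made up for illustration and are not from the original posts, and lower() yields text, which is why text_pattern_ops is used (varchar_pattern_ops, as used above, is the varchar equivalent):

    -- expression index with a pattern opclass, so LIKE can use it in any locale
    CREATE INDEX customers_lower_surname_idx
        ON customers (lower(surname) text_pattern_ops);

    -- left-anchored, case-insensitive prefix search that can use the index above
    SELECT * FROM customers WHERE lower(surname) LIKE lower('John%');

Only left-anchored patterns (no leading wildcard) can use such an index.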
[ { "msg_contents": "I am running 7.4.8 and have a query that I have been running for a while\nthat has recently have experienced a slowdown. The original query\ninvolves a UNION but I have narrowed it down to this half of the query\nas being my issue. (The other half take 4 seconds).\n\nThe only issue that I have had is index bloat which I had to reindex the\nentire cluster to get rid of. My query has been slow since the start of\nthe index bloat.\n\nThanks in advance\n\nWoody\n\n\n\n\nexplain analyze SELECT column1, column2, column3, column4, column5,\ncolumn6, column7, column8 FROM (SELECT CASE status WHEN 0 THEN 0 WHEN 1\nTHEN 1 ELSE -1 END AS column1, mac AS column2, account AS column3,\nnumber || ' ' || address AS column4, 'qmod' || '.' || 'dmod' AS column5,\nnode AS column6, grid AS column7, boxtype AS column8, number, address\nFROM settop_billing LEFT OUTER JOIN (dhct JOIN dhct_davic USING(mac))\nUSING (mac) WHERE region='GTown1E' AND node='1E012' ) AS foo;\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------\n Nested Loop Left Join (cost=38659.85..55988.94 rows=2 width=84)\n(actual time=14054.297..294973.046 rows=418 loops=1)\n Join Filter: (\"outer\".mac = \"inner\".mac)\n -> Index Scan using settop_billing_region_node_index on\nsettop_billing (cost=0.00..7.99 rows=2 width=82) (actual\ntime=0.115..8.582 rows=418 loops=1) Index Cond: (((region)::text\n= 'GTown1E'::text) AND ((node)::text = '1E012'::text))\n -> Materialize (cost=38659.85..42508.98 rows=384913 width=8)\n(actual time=2.211..286.267 rows=382934 loops=418)\n -> Hash Join (cost=14784.66..38659.85 rows=384913 width=8)\n(actual time=923.855..13647.840 rows=382934 loops=1)\n Hash Cond: (\"outer\".mac = \"inner\".mac)\n -> Append (cost=0.00..8881.11 rows=384912 width=8)\n(actual time=0.023..10914.365 rows=384900 loops=1)\n -> Seq Scan on dhct_davic (cost=0.00..0.00 rows=1\nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on probe_dhct_davic dhct_davic\n(cost=0.00..8881.11 rows=384911 width=8) (actual time=0.018..10505.255\nrows=384900 loops=1)\n -> Hash (cost=12154.13..12154.13 rows=410613 width=6)\n(actual time=923.433..923.433 rows=0 loops=1)\n -> Seq Scan on dhct (cost=0.00..12154.13\nrows=410613 width=6) (actual time=0.019..534.641 rows=409576 loops=1)\n Total runtime: 294994.440 ms\n(13 rows)\n\nThe tables involved are defined as follows:\n\n Table\n\"public.settop_billing\"\n Column | Type |\nModifiers\n-----------------+------------------------+-----------------------------\n---------------------------------------------------\n cable_billingid | integer | not null default\nnextval('public.new_cable_billing_cable_billingid_seq'::text)\n mac | macaddr | not null\n account | character varying(20) |\n number | character varying(12) |\n address | character varying(200) |\n name | character varying(100) |\n phone | character varying(10) |\n region | character varying(30) |\n node | character varying(10) |\n grid | character varying(15) |\n lat | numeric |\n long | numeric |\n boxtype | character(1) |\nIndexes:\n \"settop_billing_mac_index\" unique, btree (mac)\n \"settop_billing_account_index\" btree (account)\n \"settop_billing_lat_log_index\" btree (lat, long)\n \"settop_billing_region_node_index\" btree (region, node)\nInherits: cable_billing\n\n Table \"public.dhct\"\n Column | Type | 
Modifiers\n------------+-----------------------+-----------------------------------\n--------\n dhctid | integer | not null default\nnextval('dhct_id'::text)\n mac | macaddr | not null\n ip | inet |\n serial | macaddr |\n admin_stat | integer |\n oper_stat | integer |\n qmod | character varying(50) |\n dmod | integer |\n hub | character varying(50) |\n dncs | character varying(50) |\n auth | text |\n updtime | integer |\nIndexes:\n \"dhct_pkey\" primary key, btree (mac)\n \"dhct_qmod_index\" btree (qmod)\n \n Table \"iprobe024.probe_dhct_davic\"\n Column | Type | Modifiers\n---------+-----------------------+--------------------------------------\n----------------------\n davicid | integer | not null default\nnextval('public.davic_davicid_seq'::text)\n mac | macaddr | not null\n type | character varying(10) | default 'davic'::character varying\n source | character varying(20) |\n status | smallint |\n updtime | integer |\n avail1 | integer |\nIndexes:\n \"probe_dhct_davic_mac_index\" unique, btree (mac)\nInherits: dhct_davic\n \n\n----------------------------------------\niGLASS Networks\n211-A S. Salem St\nApex NC 27502\n(919) 387-3550 x813\nwww.iglass.net\n", "msg_date": "Tue, 21 Feb 2006 15:12:03 -0500", "msg_from": "\"George Woodring\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with nested loop left join performance" }, { "msg_contents": "George Woodring wrote:\n> \n> explain analyze SELECT column1, column2, column3, column4, column5,\n> column6, column7, column8 FROM (SELECT CASE status WHEN 0 THEN 0 WHEN 1\n> THEN 1 ELSE -1 END AS column1, mac AS column2, account AS column3,\n> number || ' ' || address AS column4, 'qmod' || '.' || 'dmod' AS column5,\n> node AS column6, grid AS column7, boxtype AS column8, number, address\n> FROM settop_billing LEFT OUTER JOIN (dhct JOIN dhct_davic USING(mac))\n> USING (mac) WHERE region='GTown1E' AND node='1E012' ) AS foo;\n\nAch y fi! Let's format that a bit better, eh?\n\nexplain analyze\nSELECT column1, column2, column3, column4, column5,column6, column7, column8\nFROM (\n SELECT\n CASE status WHEN 0 THEN 0 WHEN 1 THEN 1 ELSE -1 END AS column1,\n mac AS column2,\n account AS column3,\n number || ' ' || address AS column4,\n 'qmod' || '.' || 'dmod' AS column5,\n node AS column6,\n grid AS column7,\n boxtype AS column8,\n number,\n address\n FROM\n settop_billing\n LEFT OUTER JOIN\n (dhct JOIN dhct_davic USING(mac))\n USING\n (mac)\n WHERE\n region='GTown1E' AND node='1E012'\n) AS foo;\n\nNow we can see what's happening. Well, looking at it laid out like that, \nI'm suspcious of the (dhct JOIN dhct_davic) on the outside of an outer \njoin. Looking at your explain we do indeed have two sequential scans \nover the tables in question - the big one being dhct...\n\n> -> Append (cost=0.00..8881.11 rows=384912 width=8)\n> (actual time=0.023..10914.365 rows=384900 loops=1)\n> -> Seq Scan on dhct_davic (cost=0.00..0.00 rows=1\n> width=8) (actual time=0.002..0.002 rows=0 loops=1)\n> -> Seq Scan on probe_dhct_davic dhct_davic\n> (cost=0.00..8881.11 rows=384911 width=8) (actual time=0.018..10505.255\n> rows=384900 loops=1)\n> -> Hash (cost=12154.13..12154.13 rows=410613 width=6)\n> (actual time=923.433..923.433 rows=0 loops=1)\n> -> Seq Scan on dhct (cost=0.00..12154.13\n> rows=410613 width=6) (actual time=0.019..534.641 rows=409576 loops=1)\n\nWith 7.4 I seem to remember that explicit JOINs force the evaluation \norder, but I'm not if even later versions will rewrite your query. 
It's \ntoo early in the morning for me to figure out if it's safe in all cases.\n\nAnyway, for your purposes, I'd say something more like:\n FROM settop_billing LEFT JOIN dhct LEFT JOIN dhct_davic\n\nThat should let the planner do the joins in a more reasonable order.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 22 Feb 2006 09:38:04 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with nested loop left join performance" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> George Woodring wrote:\n>> FROM\n>> settop_billing\n>> LEFT OUTER JOIN\n>> (dhct JOIN dhct_davic USING(mac))\n>> USING\n>> (mac)\n>> WHERE\n>> region='GTown1E' AND node='1E012'\n\n> With 7.4 I seem to remember that explicit JOINs force the evaluation \n> order, but I'm not if even later versions will rewrite your query. It's \n> too early in the morning for me to figure out if it's safe in all cases.\n\nCVS HEAD can re-order left joins in common cases, but no existing\nrelease will touch the ordering of outer joins at all.\n\nIt's impossible to tell here which tables the WHERE-clause restrictions\nactually bear on, so there's no way to say whether a different join\norder would help. My guess though is that George may be stuck --- in\ngeneral you can't move a join into or out of the right side of a left\njoin without changing the answers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Feb 2006 10:04:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with nested loop left join performance " } ]
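A fleshed-out version of Richard's suggested FROM clause, with the join conditions spelled out; as Tom notes, this is not guaranteed to give the same answers as the original: if a mac exists in dhct but not in dhct_davic, the original query returns NULLs for all dhct columns, while this version returns the dhct values with only the dhct_davic columns NULL.

    SELECT settop_billing.mac, settop_billing.account,
           dhct.qmod, dhct_davic.status
      FROM settop_billing
      LEFT JOIN dhct       ON dhct.mac = settop_billing.mac
      LEFT JOIN dhct_davic ON dhct_davic.mac = dhct.mac
     WHERE settop_billing.region = 'GTown1E'
       AND settop_billing.node   = '1E012';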
[ { "msg_contents": "Hello!\n\n \n\nThis is Chethana. I need to know how to improve the performance of\npostgresql. It is rich in features but slow in performance.\n\nPls do reply back ASAP.\n\n \n\nThank you,\n\nChethana.\n\n \n\n\n\n\n\n\n\n\n\n\nHello!\n \nThis is Chethana.  I need to know how to improve the\nperformance of  postgresql.    It is rich in features but slow in performance.\nPls do reply back ASAP.\n \nThank you,\nChethana.", "msg_date": "Wed, 22 Feb 2006 03:38:58 -0700", "msg_from": "\"Chethana, Rao (IE10)\" <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "Chethana, Rao (IE10) wrote:\n> \n> This is Chethana. I need to know how to improve the performance of\n> postgresql. It is rich in features but slow in performance.\n\nYou'll need to provide some details first.\n\nHow are you using PostgreSQL?\nHow many concurrent users?\nMostly updates or small selects or large summary reports?\n\nWhat hardware do you have?\nWhat configuration changes have you made?\n\nAre you having problems with all queries or only some?\nHave you checked the plans for these with EXPLAIN ANALYSE?\nHave you made sure your tables are vacuumed and analysed?\n\nThat should be a start\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 22 Feb 2006 10:47:24 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "try this.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.powerpostgresql.com/PerfList\n\nPerformance depends on the postgresql.conf parameters apart from the\nhardware details.\n\n\nOn 2/22/06, Chethana, Rao (IE10) <[email protected]> wrote:\n>\n> Hello!\n>\n>\n>\n> This is Chethana. I need to know how to improve the performance of\n> postgresql. It is rich in features but slow in performance.\n>\n> Pls do reply back ASAP.\n>\n>\n>\n> Thank you,\n>\n> Chethana.\n>\n>\n>\n\n\n\n--\nBest,\nGourish Singbal\n\n \ntry this.\n \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.powerpostgresql.com/PerfList \nPerformance depends on the postgresql.conf parameters apart from the hardware details.\n \n \nOn 2/22/06, Chethana, Rao (IE10) <[email protected]> wrote:\n\n\nHello!\n \nThis is Chethana.  I need to know how to improve the performance of  postgresql.    It is rich in features but slow in performance.\n\nPls do reply back ASAP.\n \nThank you,\nChethana.\n -- Best,Gourish Singbal", "msg_date": "Wed, 22 Feb 2006 16:21:20 +0530", "msg_from": "\"Gourish Singbal\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Feb 22, 2006, at 5:38 AM, Chethana, Rao (IE10) wrote:\n\n> It is rich in features but slow in performance.\n\nNo, it is fast and feature-rich. But you have to tune it for your \nspecific needs; the default configuration is not ideal for large DBs.\n\n\nOn Feb 22, 2006, at 5:38 AM, Chethana, Rao (IE10) wrote:It is rich in features but slow in performance.No, it is fast and feature-rich.  But you have to tune it for your specific needs; the default configuration is not ideal for large DBs.", "msg_date": "Wed, 22 Feb 2006 11:32:33 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
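The guides linked in the thread above mostly come down to a handful of postgresql.conf settings. A purely illustrative sketch follows; the numbers are hypothetical starting points and the right values depend entirely on the machine's RAM and workload:

    # shared_buffers and effective_cache_size are in 8 kB pages here,
    # work_mem (sort_mem on 7.x) is in kB -- all values are only examples
    shared_buffers = 10000            # roughly 80 MB of shared buffer cache
    effective_cache_size = 32768      # tell the planner the OS caches ~256 MB
    work_mem = 4096                   # 4 MB per sort/hash, per backend

shared_buffers needs a server restart to take effect; the other two can be reloaded or set per session.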
[ { "msg_contents": "Chethana, Rao (IE10) wrote:\n> Hello!\n> \n> Thank you for responding quickly. I really need ur help.\n\nPlease make sure you cc: the list - I don't read this inbox regularly.\n\n> Sir, here r the answers for ur questions, please do tell me what to do\n> next(regarding increasing performance of postgresql), so that I can\n> proceed further.\n> \n> How are you using PostgreSQL?\n> We r using 7.4.3 with max of (512*6) around 3000 records.\n\nMax of what are (512*6)? Rows? Tables? Sorry - I don't understand what \nyou mean here.\n\nOh, and upgrade to the latest release of 7.4.x - there are important \nbugfixes.\n\n> How many concurrent users?\n> It configures for 100, but we r using 4 or 5 only.\n> \n> Mostly updates or small selects or large summary reports?\n> Update,delete,insert operations.\n> \n> What hardware do you have?\n> X86 based, 233 MHz, 256 MB RAM.\n\nHmm - not blazing fast, but it'll certainly run on that.\n\n> What configuration changes have you made?\n> No changes, we've used default settings.\n\nThat will need changing. As Gourish suggested in another reply, read the \nnotes here:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nYou'll want to be careful with the memory settings given that you've \nonly got 256MB to play with. Don't allocate too much to PostgreSQL \nitself, let the o.s. cache some files for you.\n\n> Are you having problems with all queries or only some?\n> Only some queries, particularly foreign key.\n\nAre you happy that there are indexes on the referring side of the \nforeign key where necessary? The primary keys you reference will have \nindexes on them, the other side will not unless you add them yourself.\n\n> Have you checked the plans for these with EXPLAIN ANALYSE?\n> No.\n\nThat would be something worth doing then. Find a bad query, run EXPLAIN \nANALYSE SELECT ... and post a new question with the output and details \nof the tables involved.\n\n> Have you made sure your tables are vacuumed and analysed?\n> Yes.\n\nGood. With the limited amount of RAM you have, you'll want to use it as \nefficiently as possible.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 22 Feb 2006 14:22:01 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: --pls reply ASAP" }, { "msg_contents": "I know I am sticking my nose in an area here that I have not been \ninvolved in but\nthis issue is important to me.\nChethana I have a couple of questions based on what you said you are \nusing as a\nplatform. see below :\n\nOn Feb 22, 2006, at 8:22 AM, Richard Huxton wrote:\n\n> Chethana, Rao (IE10) wrote:\n>> Hello!\n>> Thank you for responding quickly. I really need ur help.\n>\n> Please make sure you cc: the list - I don't read this inbox regularly.\n>\n>> Sir, here r the answers for ur questions, please do tell me what \n>> to do\n>> next(regarding increasing performance of postgresql), so that I can\n>> proceed further.\n>> How are you using PostgreSQL?\n>> We r using 7.4.3 with max of (512*6) around 3000 records.\n>\n> Max of what are (512*6)? Rows? Tables? 
Sorry - I don't understand \n> what you mean here.\n>\n> Oh, and upgrade to the latest release of 7.4.x - there are \n> important bugfixes.\n>\n>> How many concurrent users?\n>> It configures for 100, but we r using 4 or 5 only.\n>> Mostly updates or small selects or large summary reports?\n>> Update,delete,insert operations.\n>> What hardware do you have?\n>> X86 based, 233 MHz, 256 MB RAM.\nWhat Operating System are you running this on??\nHow much \"other\" stuff or applications are you running on the box\nIs this a IDE hard drive system?? SCSI?? Bus Speed?? is it a older \nserver or a pc??\nYou dont have a large database at all but quick access to the data \nthat is residing in\nthe database has a lot to do with how the hardware is configured and \nwhat other programs\nare using the limited system resources!\n>\n> Hmm - not blazing fast, but it'll certainly run on that.\n>\n>> What configuration changes have you made?\n>> No changes, we've used default settings.\n>\n> That will need changing. As Gourish suggested in another reply, \n> read the notes here:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n> You'll want to be careful with the memory settings given that \n> you've only got 256MB to play with. Don't allocate too much to \n> PostgreSQL itself, let the o.s. cache some files for you.\n>\n>> Are you having problems with all queries or only some?\n>> Only some queries, particularly foreign key.\n>\n> Are you happy that there are indexes on the referring side of the \n> foreign key where necessary? The primary keys you reference will \n> have indexes on them, the other side will not unless you add them \n> yourself.\n>\n>> Have you checked the plans for these with EXPLAIN ANALYSE?\n>> No.\n>\n> That would be something worth doing then. Find a bad query, run \n> EXPLAIN ANALYSE SELECT ... and post a new question with the output \n> and details of the tables involved.\n>\n>> Have you made sure your tables are vacuumed and analysed?\n>> Yes.\n>\n> Good. With the limited amount of RAM you have, you'll want to use \n> it as efficiently as possible.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\nTheodore LoScalzo\n\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 22 Feb 2006 11:16:30 -0600", "msg_from": "Theodore LoScalzo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --pls reply ASAP" } ]
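To illustrate the point above about indexing the referring side of a foreign key, a small self-contained sketch with hypothetical tables (note that EXPLAIN ANALYZE actually executes the statement it times):

    CREATE TABLE customers (
        customer_id integer PRIMARY KEY
    );
    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customers (customer_id)
    );
    -- The primary keys are indexed automatically; the referencing column is
    -- not, so deletes/updates on customers scan orders unless you add this:
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);

    EXPLAIN ANALYZE DELETE FROM customers WHERE customer_id = 42;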
[ { "msg_contents": "I hesitate to raise this issue again, but I've noticed something which I\nthought might be worth mentioning. I've never thought the performance\nof count(*) on a table was a significant issue, but I'm prepared to say\nthat -- for me, at least -- it is officially and totally a NON-issue.\n\nWe are replicating data from 72 source databases, each with the\nofficial copy of a subset of the data, to four identical consolidated\ndatabases, spread to separate locations, to serve our web site and other\norganization-wide needs. Currently, two of these central databases are\nrunning a commercial product and two are running PostgreSQL. There have\nbeen several times that I have run a SELECT COUNT(*) on an entire table\non all central machines. On identical hardware, with identical data,\nand equivalent query loads, the PostgreSQL databases have responded with\na count in 50% to 70% of the time of the commercial product, in spite of\nthe fact that the commercial product does a scan of a non-clustered\nindex while PostgreSQL scans the data pages.\n\nThe tables have had from a few million to 132 million rows. The\ndatabases are about 415 GB each. The servers have 6 GB RAM each. We've\nbeen running PostgreSQL 8.1, tuned and maintained based on advice from\nthe documentation and these lists.\n\nI suspect that where people report significantly worse performance for\ncount(*) under PostgreSQL than some other product, it may sometimes be\nthe case that they have not properly tuned PostgreSQL, or paid attention\nto maintenance issues regarding dead space in the tables.\n\nMy recent experience, for what it's worth.\n\n-Kevin\n\n", "msg_date": "Wed, 22 Feb 2006 10:57:08 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Good News re count(*) in 8.1" }, { "msg_contents": "Kevin,\n\nOn 2/22/06 8:57 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> I hesitate to raise this issue again, but I've noticed something which I\n> thought might be worth mentioning. I've never thought the performance\n> of count(*) on a table was a significant issue, but I'm prepared to say\n> that -- for me, at least -- it is officially and totally a NON-issue.\n\nCool! Kudos to Tom for implementing the improvements in the executor to\nmove tuples faster through the pipeline.\n\nWe see a CPU limit (yes, another limit) of about 300MB/s now on Opteron 250\nprocessors running on Linux. The filesystem can do 420MB/s sequential scan\nin 8k pages, but Postgres count(*) on 8.1.3 can only do about 300MB/s. This\nis still a very large improvement over past versions, but we'd always like\nto see more... \n\n- Luke\n\n\n", "msg_date": "Wed, 22 Feb 2006 09:11:50 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News re count(*) in 8.1" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> We are replicating data from 72 source databases, each with the\n> official copy of a subset of the data, to four identical consolidated\n> databases, spread to separate locations, to serve our web site and other\n> organization-wide needs. Currently, two of these central databases are\n> running a commercial product and two are running PostgreSQL. There have\n> been several times that I have run a SELECT COUNT(*) on an entire table\n> on all central machines. 
On identical hardware, with identical data,\n> and equivalent query loads, the PostgreSQL databases have responded with\n> a count in 50% to 70% of the time of the commercial product, in spite of\n> the fact that the commercial product does a scan of a non-clustered\n> index while PostgreSQL scans the data pages.\n\nInteresting. I think though that the people who are complaining come\nfrom databases where COUNT(*) takes constant time because the DB keeps\na running count in the table's metadata.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Feb 2006 12:42:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News re count(*) in 8.1 " }, { "msg_contents": "\n\"Kevin Grittner\" <[email protected]> writes:\n\n> There have been several times that I have run a SELECT COUNT(*) on an entire\n> table on all central machines. On identical hardware, with identical data,\n> and equivalent query loads, the PostgreSQL databases have responded with a\n> count in 50% to 70% of the time of the commercial product, in spite of the\n> fact that the commercial product does a scan of a non-clustered index while\n> PostgreSQL scans the data pages.\n\nI take it these are fairly narrow rows? The big benefit of index-only scans\ncome in when you're scanning extremely wide tables, often counting rows\nmatching some indexed criteria.\n\n-- \ngreg\n\n", "msg_date": "22 Feb 2006 22:52:48 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News re count(*) in 8.1" }, { "msg_contents": ">>> On Wed, Feb 22, 2006 at 9:52 pm, in message\n<[email protected]>, Greg Stark <[email protected]> wrote:\n\n\n> \"Kevin Grittner\" <[email protected]> writes:\n> \n>> There have been several times that I have run a SELECT COUNT(*) on\nan entire\n>> table on all central machines. On identical hardware, with identical\ndata,\n>> and equivalent query loads, the PostgreSQL databases have responded\nwith a\n>> count in 50% to 70% of the time of the commercial product, in spite\nof the\n>> fact that the commercial product does a scan of a non- clustered\nindex while\n>> PostgreSQL scans the data pages.\n> \n> I take it these are fairly narrow rows? The big benefit of index-\nonly scans\n> come in when you're scanning extremely wide tables, often counting\nrows\n> matching some indexed criteria.\n\nI'm not sure what you would consider \"fairly narrow rows\" -- so see the\nattached. This is the VACUUM ANALYZE VERBOSE output for the largest\ntable, from last night's regular maintenance run.\n\n-Kevin", "msg_date": "Thu, 23 Feb 2006 12:54:52 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Good News re count(*) in 8.1" }, { "msg_contents": "On Thu, Feb 23, 2006 at 12:54:52PM -0600, Kevin Grittner wrote:\n> >>> On Wed, Feb 22, 2006 at 9:52 pm, in message\n> <[email protected]>, Greg Stark <[email protected]> wrote:\n> \n> \n> > \"Kevin Grittner\" <[email protected]> writes:\n> > \n> >> There have been several times that I have run a SELECT COUNT(*) on\n> an entire\n> >> table on all central machines. On identical hardware, with identical\n> data,\n> >> and equivalent query loads, the PostgreSQL databases have responded\n> with a\n> >> count in 50% to 70% of the time of the commercial product, in spite\n> of the\n> >> fact that the commercial product does a scan of a non- clustered\n> index while\n> >> PostgreSQL scans the data pages.\n> > \n> > I take it these are fairly narrow rows? 
The big benefit of index-\n> only scans\n> > come in when you're scanning extremely wide tables, often counting\n> rows\n> > matching some indexed criteria.\n> \n> I'm not sure what you would consider \"fairly narrow rows\" -- so see the\n> attached. This is the VACUUM ANALYZE VERBOSE output for the largest\n> table, from last night's regular maintenance run.\n\nLooks to be about 60 rows per page, somewhere around 140 bytes per row\n(including overhead). Accounting for overhead and allowing for some\nempty room, about 100 bytes of data per row, which isn't all that thin.\nNot all that fat, either... The PK index is about 5 times smaller. IF\nthat ratio holds on the commercial product and they can't beat us with\nan index scan.... :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 23 Feb 2006 14:25:46 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News re count(*) in 8.1" } ]
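The rows-per-page arithmetic above can be approximated straight from the planner statistics; a rough sketch, where the table name is hypothetical and a recently vacuumed, non-empty table with the default 8 kB block size is assumed:

    SELECT relname,
           relpages,
           reltuples,
           reltuples / relpages                  AS rows_per_page,
           relpages::float8 * 8192 / reltuples   AS approx_bytes_per_row
      FROM pg_class
     WHERE relname = 'my_big_table'
       AND relkind = 'r';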
[ { "msg_contents": "I am issing a query like this:\nSELECT *\n FROM users users\n LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id\n LEFT JOIN useraux ON useraux.uid = users.uid;\n\nThe joins are all on the PKs of the tables. It takes 1000ms to run on\npostgres. The identical mysql version runs in 230ms. The problem seems\nto stem from postgres's insistence to do three complete table scans,\nwhere mysql does one and joins 1:1 against the results of the first. I\nhave switched the joins to inner joins and the difference is negligible.\nHere are the explains on both postgres and mysql. Is there a way to\noptimize this basic query for postgres that I am missing?\n\nPostgres Explain \n\nMerge Left Join (cost=0.00..2656.36 rows=6528 width=1522)\nMerge Cond: (\"outer\".uid = \"inner\".uid)\n -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n Merge Cond: (\"outer\".uid = \"inner\".user_id)\n -> Index Scan using users_pkey on users (cost=0.00..763.81\nrows=6528 width=100)\n -> Index Scan using phorum_users_base_pkey on phorum_users_base\n (cost=0.00..822.92 rows=9902 width=1168)\n -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\nrows=7582 width=262)\n\n\nMySQL Explain:\n\nid,select_type,table,possible_keys,key,key_len,ref,rows,extra\n1, 'PRIMARY', 'USERS', 'ALL', '', '', '', '', 6528, ''\n1, 'PRIMARY', 'phorum_users_base', 'eq_ref', 'PRIMARY', 'PRIMARY', '4',\n'wh2o.USERS.UID', 1, ''\n1, 'PRIMARY', 'useraux', 'eq_ref', 'PRIMARY', 'PRIMARY', '4',\n'wh2o.USERS.UID', 1, ''\n\n", "msg_date": "Wed, 22 Feb 2006 12:26:47 -0500", "msg_from": "\"ryan groth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Joins and full index scans...mysql vs postgres?" }, { "msg_contents": "On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:\n> Postgres Explain \n\nWe need to see EXPLAIN ANALYZE results here.\n\nWhat's your work_mem set to?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 22 Feb 2006 18:52:44 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" }, { "msg_contents": "ryan groth wrote:\n> I am issing a query like this:\n> SELECT *\n> FROM users users\n> LEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id\n> LEFT JOIN useraux ON useraux.uid = users.uid;\n> \n\n\nI'm not sure if postgres would rewrite your query to do the joins \nproperly, though I guess someone else might've already suggested this :)\n\n\nI'm probably wrong but I read that as:\n\njoin users -> phorum_users_base (ON users.uid = phorum_users_base.user_id)\n\njoin phorum_users_base -> useraux (ON useraux.uid = users.uid) which \nwon't be indexable because u.uid doesn't exist in phorum_users_base.\n\n\n\nTry\n\nSELECT *\nFROM users users\nLEFT JOIN phorum_users_base ON users.uid = phorum_users_base.user_id\nLEFT JOIN useraux ON useraux.uid = phorum_users_base.user_id\n\nor\n\nSELECT *\nFROM users u, phorum_users_base pub, useraux ua WHERE u.uid = \npub.user_id AND au.uid = u.uid AND pub.user_id=au.uid;\n\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 23 Feb 2006 10:58:39 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" } ]
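For reference, the last rewrite above with the aliases made consistent; both suggested forms are plain inner joins, so they only return the same rows as the original LEFT JOIN query if every users row really has a match in both other tables:

    SELECT *
      FROM users u, phorum_users_base pub, useraux ua
     WHERE pub.user_id = u.uid
       AND ua.uid      = u.uid;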
[ { "msg_contents": "Does this work:\n\n\"Merge Left Join (cost=0.00..2656.36 rows=6528 width=1522) (actual\ntime=0.057..123.659 rows=6528 loops=1)\"\n\" Merge Cond: (\"outer\".uid = \"inner\".uid)\"\n\" -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n(actual time=0.030..58.876 rows=6528 loops=1)\"\n\" Merge Cond: (\"outer\".uid = \"inner\".user_id)\"\n\" -> Index Scan using users_pkey on users (cost=0.00..763.81\nrows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)\"\n\" -> Index Scan using phorum_users_base_pkey on\nphorum_users_base (cost=0.00..822.92 rows=9902 width=1168) (actual\ntime=0.007..15.674 rows=9845 loops=1)\"\n\" -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\nrows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)\"\n\"Total runtime: 127.442 ms\"\n\n\n> On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:\n> > Postgres Explain \n> \n> We need to see EXPLAIN ANALYZE results here.\n> \n> What's your work_mem set to?\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n-- \n\n\n> On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:\n> > Postgres Explain \n> \n> We need to see EXPLAIN ANALYZE results here.\n> \n> What's your work_mem set to?\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n-- \n\n", "msg_date": "Wed, 22 Feb 2006 13:11:13 -0500", "msg_from": "\"ryan groth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" }, { "msg_contents": "On Wed, 22 Feb 2006, ryan groth wrote:\n\n> Does this work:\n>\n> \"Merge Left Join (cost=0.00..2656.36 rows=6528 width=1522) (actual\n> time=0.057..123.659 rows=6528 loops=1)\"\n> \" Merge Cond: (\"outer\".uid = \"inner\".uid)\"\n> \" -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n> (actual time=0.030..58.876 rows=6528 loops=1)\"\n> \" Merge Cond: (\"outer\".uid = \"inner\".user_id)\"\n> \" -> Index Scan using users_pkey on users (cost=0.00..763.81\n> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)\"\n> \" -> Index Scan using phorum_users_base_pkey on\n> phorum_users_base (cost=0.00..822.92 rows=9902 width=1168) (actual\n> time=0.007..15.674 rows=9845 loops=1)\"\n> \" -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\n> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)\"\n> \"Total runtime: 127.442 ms\"\n\nWell, this implies the query took about 127 ms on the server side. Where\ndid the 1000 ms number come from (was that on a client, and if so, what\ntype)?\n", "msg_date": "Wed, 22 Feb 2006 10:28:25 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" 
}, { "msg_contents": "On Wed, 2006-02-22 at 12:11, ryan groth wrote:\n> Does this work:\n> \n> \"Merge Left Join (cost=0.00..2656.36 rows=6528 width=1522) (actual\n> time=0.057..123.659 rows=6528 loops=1)\"\n> \" Merge Cond: (\"outer\".uid = \"inner\".uid)\"\n> \" -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n> (actual time=0.030..58.876 rows=6528 loops=1)\"\n> \" Merge Cond: (\"outer\".uid = \"inner\".user_id)\"\n> \" -> Index Scan using users_pkey on users (cost=0.00..763.81\n> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)\"\n> \" -> Index Scan using phorum_users_base_pkey on\n> phorum_users_base (cost=0.00..822.92 rows=9902 width=1168) (actual\n> time=0.007..15.674 rows=9845 loops=1)\"\n> \" -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\n> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)\"\n> \"Total runtime: 127.442 ms\"\n\nIn MySQL, have you tried writing a short perl or php script or even\ntiming the mysql client running in one shot mode (I assume it can do\nthat) from the outside to see how long it takes to actually run the\nquery AND retrieve the data?\n\nMy guess is most of the time for both queries will be taken in\ndelivering the data.\n", "msg_date": "Wed, 22 Feb 2006 13:13:36 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" } ]
[ { "msg_contents": "\nworkmem is set to the default, increasing it decreases performance.\n\n> Does this work:\n> \n> \"Merge Left Join (cost=0.00..2656.36 rows=6528 width=1522) (actual\n> time=0.057..123.659 rows=6528 loops=1)\"\n> \" Merge Cond: (\"outer\".uid = \"inner\".uid)\"\n> \" -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n> (actual time=0.030..58.876 rows=6528 loops=1)\"\n> \" Merge Cond: (\"outer\".uid = \"inner\".user_id)\"\n> \" -> Index Scan using users_pkey on users (cost=0.00..763.81\n> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)\"\n> \" -> Index Scan using phorum_users_base_pkey on\n> phorum_users_base (cost=0.00..822.92 rows=9902 width=1168) (actual\n> time=0.007..15.674 rows=9845 loops=1)\"\n> \" -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\n> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)\"\n> \"Total runtime: 127.442 ms\"\n> \n> \n> > On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:\n> > > Postgres Explain \n> > \n> > We need to see EXPLAIN ANALYZE results here.\n> > \n> > What's your work_mem set to?\n> > \n> > /* Steinar */\n> > -- \n> > Homepage: http://www.sesse.net/\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faq\n> > \n> > \n> \n> -- \n> \n> \n> > On Wed, Feb 22, 2006 at 12:26:47PM -0500, ryan groth wrote:\n> > > Postgres Explain \n> > \n> > We need to see EXPLAIN ANALYZE results here.\n> > \n> > What's your work_mem set to?\n> > \n> > /* Steinar */\n> > -- \n> > Homepage: http://www.sesse.net/\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faq\n> > \n> > \n> \n> -- \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n-- \n\n", "msg_date": "Wed, 22 Feb 2006 13:21:04 -0500", "msg_from": "\"ryan groth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" } ]
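One way to experiment with that setting (work_mem on 8.x, sort_mem on 7.x) without touching postgresql.conf is to change it for a single session and re-run the query under test, for example:

    SET work_mem = 8192;        -- value is in kB, so 8 MB, for this session only
    EXPLAIN ANALYZE
    SELECT *
      FROM users u
      LEFT JOIN phorum_users_base pub ON pub.user_id = u.uid
      LEFT JOIN useraux ua ON ua.uid = u.uid;
    RESET work_mem;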
[ { "msg_contents": "Hmm, it came from the timer on the pgadmin III sql query tool. I guess\nthe 1,000ms includes the round-trip? See the wierd thing is that\nmysqlserver is running default configuration on a virtual machine\n(P3/1.3GHZ conf'd for 128mb ram) over a 100m/b ethernet connection.\nPostgres is running on a real P4/3.0ghz 4GB running localhost. Timings\nfrom the mysql query tool indicate that the 6.5k record query runs in\n\"1.3346s (.3361s)\" vs. the pgadmin query tool saying that the query runs\n\"997+3522 ms\". Am I reading these numbers wrong? Are these numbers\nreflective of application performance? Is there an optimization I am\nmissing?\n\nRyan\n\n\n> On Wed, 22 Feb 2006, ryan groth wrote:\n> \n> > Does this work:\n> >\n> > \"Merge Left Join (cost=0.00..2656.36 rows=6528 width=1522) (actual\n> > time=0.057..123.659 rows=6528 loops=1)\"\n> > \" Merge Cond: (\"outer\".uid = \"inner\".uid)\"\n> > \" -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n> > (actual time=0.030..58.876 rows=6528 loops=1)\"\n> > \" Merge Cond: (\"outer\".uid = \"inner\".user_id)\"\n> > \" -> Index Scan using users_pkey on users (cost=0.00..763.81\n> > rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)\"\n> > \" -> Index Scan using phorum_users_base_pkey on\n> > phorum_users_base (cost=0.00..822.92 rows=9902 width=1168) (actual\n> > time=0.007..15.674 rows=9845 loops=1)\"\n> > \" -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\n> > rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)\"\n> > \"Total runtime: 127.442 ms\"\n> \n> Well, this implies the query took about 127 ms on the server side. Where\n> did the 1000 ms number come from (was that on a client, and if so, what\n> type)?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n> \n\n-- \n\n", "msg_date": "Wed, 22 Feb 2006 13:52:49 -0500", "msg_from": "\"ryan groth\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" }, { "msg_contents": "\n> \"997+3522 ms\". Am I reading these numbers wrong? Are these numbers\n> reflective of application performance? Is there an optimization I am\n> missing?\n\n\tIt also reflects the time it takes to pgadmin to insert the results into \nits GUI...\n\n\tIf you want to get an approximation of the time the server needs to \nprocess your request, without the data marshalling time on the network and \nanything, you can either use EXPLAIN ANALYZE (but mysql doesn't have it, \nand the instrumentation adds overhead), or simply something like \"SELECT \nsum(1) FROM (query to benchmark)\", which only returns 1 row, and the sum() \noverhead is minimal, and it works on most databases. I find it useful \nbecause in knowing which portion of the time is spent by the server \nprocessing the query, or in data transfer, or in data decoding on the \nclient side, or simply in displaying...\n", "msg_date": "Thu, 23 Feb 2006 00:18:58 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" }, { "msg_contents": "The pgAdmin query tool is known to give an answer about 5x the real \nanswer - don't believe it!\n\nryan groth wrote:\n> Hmm, it came from the timer on the pgadmin III sql query tool. I guess\n> the 1,000ms includes the round-trip? 
See the wierd thing is that\n> mysqlserver is running default configuration on a virtual machine\n> (P3/1.3GHZ conf'd for 128mb ram) over a 100m/b ethernet connection.\n> Postgres is running on a real P4/3.0ghz 4GB running localhost. Timings\n> from the mysql query tool indicate that the 6.5k record query runs in\n> \"1.3346s (.3361s)\" vs. the pgadmin query tool saying that the query runs\n> \"997+3522 ms\". Am I reading these numbers wrong? Are these numbers\n> reflective of application performance? Is there an optimization I am\n> missing?\n> \n> Ryan\n> \n> \n>> On Wed, 22 Feb 2006, ryan groth wrote:\n>>\n>>> Does this work:\n>>>\n>>> \"Merge Left Join (cost=0.00..2656.36 rows=6528 width=1522) (actual\n>>> time=0.057..123.659 rows=6528 loops=1)\"\n>>> \" Merge Cond: (\"outer\".uid = \"inner\".uid)\"\n>>> \" -> Merge Left Join (cost=0.00..1693.09 rows=6528 width=1264)\n>>> (actual time=0.030..58.876 rows=6528 loops=1)\"\n>>> \" Merge Cond: (\"outer\".uid = \"inner\".user_id)\"\n>>> \" -> Index Scan using users_pkey on users (cost=0.00..763.81\n>>> rows=6528 width=100) (actual time=0.016..9.446 rows=6528 loops=1)\"\n>>> \" -> Index Scan using phorum_users_base_pkey on\n>>> phorum_users_base (cost=0.00..822.92 rows=9902 width=1168) (actual\n>>> time=0.007..15.674 rows=9845 loops=1)\"\n>>> \" -> Index Scan using useraux_pkey on useraux (cost=0.00..846.40\n>>> rows=7582 width=262) (actual time=0.007..11.935 rows=7529 loops=1)\"\n>>> \"Total runtime: 127.442 ms\"\n>> Well, this implies the query took about 127 ms on the server side. Where\n>> did the 1000 ms number come from (was that on a client, and if so, what\n>> type)?\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>>\n> \n\n", "msg_date": "Thu, 23 Feb 2006 10:52:11 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> The pgAdmin query tool is known to give an answer about 5x the real \n> answer - don't believe it!\n\nEverybody please forget immediately the factor 5. It's no factor at all, \nbut the GUI update time that is *added*, which depends on rows*columns.\n\n\n> ryan groth wrote:\n> \n>> the pgadmin query tool saying that the query runs\n>> \"997+3522 ms\".\n\nMeans 997ms until all data is at the client (libpq reports the rowset), \nthe rest is GUI overhead.\n\nRegards,\nAndreas\n", "msg_date": "Thu, 23 Feb 2006 10:21:26 +0000", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and full index scans...mysql vs postgres?" } ]
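Applied to the query from this thread, the wrapper suggested above would look something like the following; it returns a single row, so nearly all of the reported time is server-side execution rather than data transfer or GUI rendering:

    SELECT sum(1)      -- count(*) works just as well in PostgreSQL
      FROM (SELECT *
              FROM users u
              LEFT JOIN phorum_users_base pub ON pub.user_id = u.uid
              LEFT JOIN useraux ua ON ua.uid = u.uid) AS t;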
[ { "msg_contents": "I am running a query that joins against several large tables (~5 million\nrows each). The query takes an exteremely long time to run, and the\nexplain output is a bit beyond my level of understanding. It is an\nauto-generated query, so the aliases are fairly ugly. I can clean them\nup (rename them) if it would help. Also, let me know if I can send any\nmore information that would help (e.g. table schema)\n\nAlso, is there any resources where I can get a better understanding of\nwhat PostgreSQL means when it says \"Sort\" \"Sort Key\" \"Bitmap Index Scan\"\n\"Hash Cond\" etc. etc. - and how to recognize problems by looking at the\noutput. I can understand the output for simple queries (e.g. is the\nplanner using an index or performing a seq. scan), but when you get to\nmore complex queries like the one below I lose my way =)\n\nI would really appreciate it if someone from this list could tell me if\nthere is anything that is obviously wrong with the query or schema and\nwhat I could do to improve the performance.\n\nPostgreSQL 8.1\nRedHat Enterprise Linux 4\n\n--QUERY\nselect distinct city4_.region_id as region1_29_, city4_1_.name as\nname29_, city4_.state_id as state2_30_ \nfrom registered_voters registered0_ \n inner join registered_voter_addresses addresses1_ on\n registered0_.registered_voter_id=addresses1_.registered_voter_id \n inner join registered_voter_addresses_regions regions2_ on\n addresses1_.address_id=regions2_.registered_voter_addresses_address_id \n inner join regions region3_ on\n regions2_.regions_region_id=region3_.region_id \n inner join cities city4_ on\n addresses1_.city_id=city4_.region_id \n inner join regions city4_1_ on\n city4_.region_id=city4_1_.region_id \nwhere region3_.region_id='093c44e8-f3b2-4c60-8be3-2b4d148f9f5a' \norder by city4_1_.name\n\n\n--EXPLAIN/ANALYZE OUTPUT\n\"Unique (cost=3572907.42..3623589.94 rows=4076438 width=93) (actual\ntime=2980825.714..3052333.753 rows=1124 loops=1)\"\n\" -> Sort (cost=3572907.42..3585578.05 rows=5068252 width=93) (actual\ntime=2980825.710..2987407.888 rows=4918204 loops=1)\"\n\" Sort Key: city4_1_.name, city4_.region_id, city4_.state_id\"\n\" -> Hash Join (cost=717783.40..1430640.10 rows=5068252\nwidth=93) (actual time=1400141.559..2016131.467 rows=4918204 loops=1)\"\n\" Hash Cond:\n((\"outer\".registered_voter_addresses_address_id)::text =\n(\"inner\".address_id)::text)\"\n\" -> Bitmap Heap Scan on\nregistered_voter_addresses_regions regions2_ (cost=54794.95..575616.49\nrows=5116843 width=80) (actual time=45814.469..155044.478 rows=4918205\nloops=1)\"\n\" Recheck Cond:\n('093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text =\n(regions_region_id)::text)\"\n\" -> Bitmap Index Scan on\nreg_voter_address_region_region_idx (cost=0.00..54794.95 rows=5116843\nwidth=0) (actual time=45807.157..45807.157 rows=4918205 loops=1)\"\n\" Index Cond:\n('093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text =\n(regions_region_id)::text)\"\n\" -> Hash (cost=642308.89..642308.89 rows=741420\nwidth=173) (actual time=1354217.934..1354217.934 rows=4918204 loops=1)\"\n\" -> Hash Join (cost=328502.66..642308.89\nrows=741420 width=173) (actual time=204565.031..1268303.832 rows=4918204\nloops=1)\"\n\" Hash Cond:\n((\"outer\".registered_voter_id)::text =\n(\"inner\".registered_voter_id)::text)\"\n\" -> Seq Scan on registered_voters\nregistered0_ (cost=0.00..173703.02 rows=4873202 width=40) (actual\ntime=0.005..39364.261 rows=4873167 loops=1)\"\n\" -> Hash (cost=303970.34..303970.34\nrows=748528 width=213) (actual time=204523.861..204523.861 
rows=4918204\nloops=1)\"\n\" -> Hash Join (cost=263.22..303970.34\nrows=748528 width=213) (actual time=101.628..140936.062 rows=4918204\nloops=1)\"\n\" Hash Cond:\n((\"outer\".city_id)::text = (\"inner\".region_id)::text)\"\n\" -> Seq Scan on\nregistered_voter_addresses addresses1_ (cost=0.00..271622.23\nrows=4919923 width=120) (actual time=0.025..98416.667 rows=4918205\nloops=1)\"\n\" -> Hash (cost=260.35..260.35\nrows=1147 width=173) (actual time=101.582..101.582 rows=1147 loops=1)\"\n\" -> Hash Join \n(cost=48.80..260.35 rows=1147 width=173) (actual time=88.608..98.984\nrows=1147 loops=1)\"\n\" Hash Cond:\n((\"outer\".region_id)::text = (\"inner\".region_id)::text)\"\n\" -> Seq Scan on\nregions city4_1_ (cost=0.00..162.39 rows=7539 width=53) (actual\ntime=0.048..35.204 rows=7539 loops=1)\"\n\" -> Hash \n(cost=45.93..45.93 rows=1147 width=120) (actual time=48.896..48.896\nrows=1147 loops=1)\"\n\" -> Nested Loop\n (cost=0.00..45.93 rows=1147 width=120) (actual time=35.791..47.012\nrows=1147 loops=1)\"\n\" -> Index\nScan using regions_pkey on regions region3_ (cost=0.00..5.99 rows=1\nwidth=40) (actual time=35.761..35.763 rows=1 loops=1)\"\n\" \nIndex Cond: ((region_id)::text =\n'093c44e8-f3b2-4c60-8be3-2b4d148f9f5a'::text)\"\n\" -> Seq\nScan on cities city4_ (cost=0.00..28.47 rows=1147 width=80) (actual\ntime=0.022..9.476 rows=1147 loops=1)\"\n\"Total runtime: 3052707.269 ms\"\n\n", "msg_date": "Wed, 22 Feb 2006 14:16:10 -0500", "msg_from": "\"Jeremy Haile\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query" }, { "msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> I am running a query that joins against several large tables (~5 million\n> rows each). The query takes an exteremely long time to run, and the\n> explain output is a bit beyond my level of understanding. It is an\n> auto-generated query, so the aliases are fairly ugly.\n\nYah :-(\n\n> select distinct city4_.region_id as region1_29_, city4_1_.name as\n> name29_, city4_.state_id as state2_30_ \n> from registered_voters registered0_ \n> inner join registered_voter_addresses addresses1_ on\n> registered0_.registered_voter_id=addresses1_.registered_voter_id \n> inner join registered_voter_addresses_regions regions2_ on\n> addresses1_.address_id=regions2_.registered_voter_addresses_address_id \n> inner join regions region3_ on\n> regions2_.regions_region_id=region3_.region_id \n> inner join cities city4_ on\n> addresses1_.city_id=city4_.region_id \n> inner join regions city4_1_ on\n> city4_.region_id=city4_1_.region_id \n> where region3_.region_id='093c44e8-f3b2-4c60-8be3-2b4d148f9f5a' \n> order by city4_1_.name\n\nAFAICS the planner is doing about the best you can hope the machine to\ndo --- it's not making any serious estimation errors, and the plan is\npretty reasonable for the given query. The problem is that you are\nforming a very large join result (4918204 rows) and then doing a\nDISTINCT that reduces this to only 1124 rows ... but the damage of\ncomputing that huge join has already been done. The machine is not\ngoing to be able to think its way out of this one --- it's up to you\nto think of a better formulation of the query.\n\nOffhand I'd try something involving joining just city4_/city4_1_\n(which should not need DISTINCT, I think) and then using WHERE\nEXISTS(SELECT ... FROM the-other-tables) to filter out the cities\nyou don't want. 
The reason this can be a win is that the EXISTS\nformulation will stop running the sub-select as soon as it's produced a\nsingle row for the current city, rather than generating thousands of\nsimilar rows that will be thrown away by DISTINCT as you have here.\n\nThis assumes that the fraction of cities passing the query is\nsubstantial, as it appears from the rowcounts in your EXPLAIN output.\nIf only a tiny fraction of them passed, then the time wasted in failing\nEXISTS probes might eat up the savings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Feb 2006 12:25:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query " } ]
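A sketch of the EXISTS formulation described above, using the table and column names from the posted query. It assumes the region_id literal is a valid foreign key (so the extra join to regions only re-validated it) and that cities and regions hold one row per region_id, which is why DISTINCT should no longer be needed:

    SELECT c.region_id, rg.name, c.state_id
      FROM cities c
      JOIN regions rg ON rg.region_id = c.region_id
     WHERE EXISTS (
             SELECT 1
               FROM registered_voter_addresses a
               JOIN registered_voter_addresses_regions ar
                    ON ar.registered_voter_addresses_address_id = a.address_id
               JOIN registered_voters v
                    ON v.registered_voter_id = a.registered_voter_id
              WHERE a.city_id = c.region_id
                AND ar.regions_region_id = '093c44e8-f3b2-4c60-8be3-2b4d148f9f5a')
     ORDER BY rg.name;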
[ { "msg_contents": "I just wanted to thank everyone for your input on my question. You've\ngiven me a lot of tools to solve my problem here.\n\nOrion\n", "msg_date": "Wed, 22 Feb 2006 14:07:35 -0800", "msg_from": "Orion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large Database Design Help" } ]
[ { "msg_contents": "Select and update statements are quite slow on a large table with more \nthan 600,000 rows. The table consists of 11 columns (nothing special). \nThe column \"id\" (int8) is primary key and has a btree index on it.\n\nThe following select statement takes nearly 500ms:\n\nSELECT * FROM table WHERE id = 600000;\n\nA prepending \"EXPLAIN\" to the statement reveals a seq scan:\n\nEXPLAIN SELECT * FROM table WHERE id = 600000;\n\n\"Seq Scan on table (cost=0.00..15946.48 rows=2 width=74)\"\n\" Filter: (id = 600000)\"\n\nI tried a full vacuum and a reindex, but had no effect. Why is \nPostgreSQL not using the created index?\n\nOr is there any other way to improve performance on this query?\n\nThe PostgreSQL installation is an out of the box installation with no \nfurther optimization. The server is running SUSE Linux 9.1, kernel \n2.6.4-52-smp. (Quad Xeon 2.8GHz, 1GB RAM)\n\nSELECT version();\n\"PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 \n(SuSE Linux)\"\n\n\nThanks for any hints,\nKjeld\n", "msg_date": "Thu, 23 Feb 2006 13:35:51 +0100", "msg_from": "Kjeld Peters <[email protected]>", "msg_from_op": true, "msg_subject": "Created Index is not used" }, { "msg_contents": "Hi, Kjeld,\n\nKjeld Peters wrote:\n> Select and update statements are quite slow on a large table with more\n> than 600,000 rows. The table consists of 11 columns (nothing special).\n> The column \"id\" (int8) is primary key and has a btree index on it.\n> \n> The following select statement takes nearly 500ms:\n> \n> SELECT * FROM table WHERE id = 600000;\n\nKnown issue which is fixed in 8.X servers, postgreSQL sees your 600000\nas int4 literal and does not grasp that the int8 index works for it.\n\nSELECT * FROM table WHERE id = 600000::int8;\n\nshould do it.\n\n> SELECT version();\n> \"PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3\n> (SuSE Linux)\"\n\nBtw, you should update to 7.4.12, there are importand bug fixes and it\nis upgradable \"in place\", without dumping and reloading the database.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 23 Feb 2006 13:46:02 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Created Index is not used" }, { "msg_contents": "On fim, 2006-02-23 at 13:35 +0100, Kjeld Peters wrote:\n> Select and update statements are quite slow on a large table with more \n> than 600,000 rows. The table consists of 11 columns (nothing special). \n> The column \"id\" (int8) is primary key and has a btree index on it.\n> \n> The following select statement takes nearly 500ms:\n> \n> SELECT * FROM table WHERE id = 600000;\n> \n> A prepending \"EXPLAIN\" to the statement reveals a seq scan:\n> \n> EXPLAIN SELECT * FROM table WHERE id = 600000;\n> \n> \"Seq Scan on table (cost=0.00..15946.48 rows=2 width=74)\"\n> \" Filter: (id = 600000)\"\n\n> I tried a full vacuum and a reindex, but had no effect. 
Why is \n> PostgreSQL not using the created index?\n\ntry one of:\n\nSELECT * FROM table WHERE id = '600000';\nSELECT * FROM table WHERE id = 600000::int8;\nPostgreSQL 8+\n\ngnari\n\n\n\n", "msg_date": "Thu, 23 Feb 2006 13:03:26 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Created Index is not used" }, { "msg_contents": "Hi Markus,\n\nfirst of all thanks for your quick reply!\n\nMarkus Schaber wrote:\n> Kjeld Peters wrote:\n>>Select and update statements are quite slow on a large table with more\n>>than 600,000 rows. The table consists of 11 columns (nothing special).\n>>The column \"id\" (int8) is primary key and has a btree index on it.\n>>\n>>The following select statement takes nearly 500ms:\n>>\n>>SELECT * FROM table WHERE id = 600000;\n> \n> \n> Known issue which is fixed in 8.X servers, postgreSQL sees your 600000\n> as int4 literal and does not grasp that the int8 index works for it.\n> \n> SELECT * FROM table WHERE id = 600000::int8;\n> \n> should do it.\n\nAfter I appended \"::int8\" to the query, selecting the table takes only \n40-50ms. That's a great performance boost!\n\n> Btw, you should update to 7.4.12, there are importand bug fixes and it\n> is upgradable \"in place\", without dumping and reloading the database.\n\nI guess I'll test an upgrade to version 8.1.\n\nThanks again for your and Ragnar's help!\n\nKjeld\n", "msg_date": "Thu, 23 Feb 2006 15:09:10 +0100", "msg_from": "Kjeld Peters <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Created Index is not used" } ]
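To recap the fix for pre-8.0 servers in one place (the statements in the thread use "table" as the table name, which is a reserved word, so a made-up name stands in here):

    -- On 7.4 the bare literal is int4 and the int8 index is ignored:
    EXPLAIN SELECT * FROM mytable WHERE id = 600000;          -- seq scan

    -- Quoting or casting the constant lets the index be used:
    EXPLAIN SELECT * FROM mytable WHERE id = 600000::int8;    -- index scan
    EXPLAIN SELECT * FROM mytable WHERE id = '600000';        -- index scan

From 8.0 onwards the first form can use the index as well.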
[ { "msg_contents": "On Feb 22, 2006, at 10:44 PM, Chethana, Rao ((IE10)) wrote:\n\n> That is what I wanted to know, how do I tune it?\n\nIf there were a simple formula for doing it, it would already have \nbeen written up as a program that runs once you install postgres.\n\nYou have to monitor your usage, use your understanding of your \napplication, and the Postgres manual to see what things to adjust. \nIt differs if you are CPU bound or I/O bound.\n\nAnd please keep this on list.\n\n\nOn Feb 22, 2006, at 10:44 PM, Chethana, Rao ((IE10)) wrote:That is what I wanted to know,  how do I tune it?If there were a simple formula for doing it, it would already have been written up as a program that runs once you install postgres.You have to monitor your usage, use your understanding of your application, and the Postgres manual to see what things to adjust.   It differs if you are CPU bound or I/O bound.And please keep this on list.", "msg_date": "Thu, 23 Feb 2006 09:38:25 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "On Thu, Feb 23, 2006 at 09:38:25AM -0500, Vivek Khera wrote:\n> \n> On Feb 22, 2006, at 10:44 PM, Chethana, Rao ((IE10)) wrote:\n> \n> >That is what I wanted to know, how do I tune it?\n> \n> If there were a simple formula for doing it, it would already have \n> been written up as a program that runs once you install postgres.\n> \n> You have to monitor your usage, use your understanding of your \n> application, and the Postgres manual to see what things to adjust. \n> It differs if you are CPU bound or I/O bound.\n> \n> And please keep this on list.\n \nFWIW, had you included a bit more of the original post others might have\nbeen able to provide advice... but now I have no idea what the original\nquestion was (of course a blank subject doesn't help either... no idea\nwhere that happened).\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 23 Feb 2006 14:29:05 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\nSee the FAQ.\n\n---------------------------------------------------------------------------\n\nJim C. Nasby wrote:\n> On Thu, Feb 23, 2006 at 09:38:25AM -0500, Vivek Khera wrote:\n> > \n> > On Feb 22, 2006, at 10:44 PM, Chethana, Rao ((IE10)) wrote:\n> > \n> > >That is what I wanted to know, how do I tune it?\n> > \n> > If there were a simple formula for doing it, it would already have \n> > been written up as a program that runs once you install postgres.\n> > \n> > You have to monitor your usage, use your understanding of your \n> > application, and the Postgres manual to see what things to adjust. \n> > It differs if you are CPU bound or I/O bound.\n> > \n> > And please keep this on list.\n> \n> FWIW, had you included a bit more of the original post others might have\n> been able to provide advice... but now I have no idea what the original\n> question was (of course a blank subject doesn't help either... no idea\n> where that happened).\n> \n> -- \n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n SRA OSS, Inc. http://www.sraoss.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 24 Feb 2006 22:42:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
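As a starting point for the "monitor your usage and adjust" advice above, a small sketch of inspecting the usual tuning knobs (parameter names as of 8.0 and later; older releases call some of these sort_mem and vacuum_mem, and this is illustrative rather than a prescription):

    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem',
                   'maintenance_work_mem', 'effective_cache_size');

From there, timing the real workload with EXPLAIN ANALYZE before and after each change is what shows whether a given adjustment actually helped.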
[ { "msg_contents": "postgresql 8.1, I have two tables, bot hoth vacuumed and analyzed. on\nmsg307 I have altered the entityid and msgid columns statistics values\nto 400. \n\n\ndev20001=# explain analyze SELECT ewm.entity_id, m.agentname, m.filecreatedate AS versioninfo\n FROM msg307 m join entity_watch_map ewm on (ewm.entity_id = m.entityid AND ewm.msgid = m.msgid AND ewm.msg_type = 307);\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=6.62..5227.40 rows=1 width=36) (actual time=0.583..962.346 rows=75322 loops=1)\n -> Bitmap Heap Scan on entity_watch_map ewm (cost=6.62..730.47 rows=748 width=8) (actual time=0.552..7.017 rows=1264 loops=1)\n Recheck Cond: (msg_type = 307)\n -> Bitmap Index Scan on ewm_msg_type (cost=0.00..6.62 rows=748 width=0) (actual time=0.356..0.356 rows=1264 loops=1)\n Index Cond: (msg_type = 307)\n -> Index Scan using msg307_entityid_msgid_idx on msg307 m (cost=0.00..6.00 rows=1 width=40) (actual time=0.011..0.295 rows=60 loops=1264)\n Index Cond: ((\"outer\".entity_id = m.entityid) AND (\"outer\".msgid = m.msgid))\n Total runtime: 1223.469 ms\n(8 rows)\n\n\nI guess that the planner can not tell there is no correlation between\nthe distinctness of those two columns, and so makes a really bad\nestimate on the indexscan, and pushes that estimate up into the nested\nloop? (luckily in this case doing an index scan is generally a good\nidea, so it works out, but it wouldn't always be a good idea) \n\nsome pg_statistics information for those two columns\nentityid:\nstarelid | 25580\nstaattnum | 1\nstanullfrac | 0\nstawidth | 4\nstadistinct | 1266\nstakind1 | 1\nstakind2 | 2\nstakind3 | 3\nstakind4 | 0\nstaop1 | 96\nstaop2 | 97\nstaop3 | 97\nstaop4 | 0\nstanumbers1 | {0.00222976,0.00222976,0.00153048,0.00137216,0.00137216}\nstanumbers2 | \nstanumbers3 | {0.100312}\nstanumbers4 | \n\nmsgid:\nstarelid | 25580\nstaattnum | 2\nstanullfrac | 0\nstawidth | 4\nstadistinct | 1272\nstakind1 | 1\nstakind2 | 2\nstakind3 | 3\nstakind4 | 0\nstaop1 | 96\nstaop2 | 97\nstaop3 | 97\nstaop4 | 0\nstanumbers1 | {0.00164923,0.00163604,0.00163604,0.00163604,0.00137216}\nstanumbers2 | \nstanumbers3 | {-0.0660856}\nstanumbers4 | \n\n\nis my interpretation of why i am seeing such bad estimates correct? I\ndon't really think it is, because looking at a similar scenario on a 7.3\nmachine:\n\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=1531.39..5350.90 rows=1 width=48) (actual time=118.44..899.37 rows=58260 loops=1)\n Merge Cond: ((\"outer\".entityid = \"inner\".entity_id) AND (\"outer\".msgid = \"inner\".msgid))\n -> Index Scan using msg307_entityid_msgid_idx on msg307 m (cost=0.00..3669.42 rows=58619 width=40) (actual time=0.31..390.01 rows=58619 loops=1)\n -> Sort (cost=1531.39..1533.16 rows=709 width=8) (actual time=118.09..157.45 rows=58218 loops=1)\n Sort Key: ewm.entity_id, ewm.msgid\n -> Seq Scan on entity_watch_map ewm (cost=0.00..1497.80 rows=709 width=8) (actual time=0.14..114.74 rows=1157 loops=1)\n Filter: (msg_type = 307)\n Total runtime: 951.23 msec\n(8 rows)\n\n\nIt still has the bad estimate at the nested loop stage, but it does seem\nto have a better understanding of the # of rows it will return in the\nindex scan on msg307. This leads me to wonder if there something I could\ndo to improve the estimates on the 8.1 machine? 
\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "23 Feb 2006 11:29:32 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "how to interpret/improve bad row estimates" } ]
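For anyone reproducing this, the statistics-target change mentioned at the top of the thread is presumably done along these lines, followed by a fresh ANALYZE so the larger sample takes effect:

    ALTER TABLE msg307 ALTER COLUMN entityid SET STATISTICS 400;
    ALTER TABLE msg307 ALTER COLUMN msgid    SET STATISTICS 400;
    ANALYZE msg307;

This sharpens the per-column histograms, but the planner still multiplies the two equality selectivities as if the columns were independent, which is consistent with the very low row estimate seen at the nested-loop level above.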
[ { "msg_contents": "Where \"*\" == \n{print | save to PDF | save to <mumble> format | display on screen}\n\nAnyone know of one?\n\nTiA\nRon\n", "msg_date": "Thu, 23 Feb 2006 11:38:48 -0500 (GMT-05:00)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for a tool to \"*\" pg tables as ERDs" }, { "msg_contents": "Hi, Ron,\n\nRon Peacetree wrote:\n> Where \"*\" == \n> {print | save to PDF | save to <mumble> format | display on screen}\n> \n> Anyone know of one?\n\npsql with fancy output formatting comes to my mind, or \"COPY table TO\nfile\" SQL command.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 23 Feb 2006 17:48:01 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Looking for a tool to \"*\" pg tables as ERDs" }, { "msg_contents": "\n\n\nMarkus Schaber wrote:\n\n>Hi, Ron,\n>\n>Ron Peacetree wrote:\n> \n>\n>>Where \"*\" == \n>>{print | save to PDF | save to <mumble> format | display on screen}\n>>\n>>Anyone know of one?\n>> \n>>\n>\n>psql with fancy output formatting comes to my mind, or \"COPY table TO\n>file\" SQL command.\n>\n>\n> \n>\n\nHow on earth can either of these have to do with producing an ERD?\n\npostgresql_autodoc might help: http://pgfoundry.org/projects/autodoc/\n\ncheers\n\nandrew\n\n\n\n", "msg_date": "Thu, 23 Feb 2006 11:59:31 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Looking for a tool to \"*\" pg tables as ERDs" }, { "msg_contents": "\nOn Feb 23, 2006, at 11:38 AM, Ron Peacetree wrote:\n\n> Where \"*\" ==\n> {print | save to PDF | save to <mumble> format | display on screen}\n>\n> Anyone know of one?\n\nThere's a perl module, GraphViz::DBI::General, which does a rather \nnifty job of taking a schema and making a graphviz \"dot\" file from \nit, which can then be processed into any of a bazillion formats.\n\nIt basically makes a box for each table, with fields, and an arrow to \neach FK referenced table. All layed out nicely.\n\nYou may also want to investigate the SQLFairy < http:// \nsqlfairy.sourceforge.net/ > if not for anything besides their awesome \nlogo. :-)\n\n\n", "msg_date": "Thu, 23 Feb 2006 12:06:51 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a tool to \"*\" pg tables as ERDs" }, { "msg_contents": "Hi, Andrew,\n\nAndrew Dunstan wrote:\n\n> How on earth can either of these have to do with producing an ERD?\n\nSorry, the ERD thing got lost in my mind while resolving the \"*\".\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org", "msg_date": "Thu, 23 Feb 2006 18:20:18 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Looking for a tool to \"*\" pg tables as ERDs" }, { "msg_contents": "On Thu, 2006-02-23 at 11:38, Ron Peacetree wrote:\n> Where \"*\" == \n> {print | save to PDF | save to <mumble> format | display on screen}\n> \n> Anyone know of one?\n> \n\ncase studio can reverse engineer erd's from existing schema, and you can\nprint out the schema, create html or rdf reports, or export the erd as a\ngraphic. 
The downside is that it can't export directly to PDF (though you could\nget around that with OpenOffice, I imagine), plus it's Windows-only and\ncommercial. \n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "01 Mar 2006 13:31:44 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a tool to \"*\" pg tables as ERDs" } ]
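If a home-grown route is acceptable, the raw material for an ERD can also be pulled straight from the system catalogs; a rough sketch (the output still has to be turned into a diagram by graphviz or a similar tool):

    -- one row per foreign-key edge: child table -> parent table
    SELECT conname,
           conrelid::regclass  AS child_table,
           confrelid::regclass AS parent_table
    FROM pg_constraint
    WHERE contype = 'f';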
[ { "msg_contents": " \nThank for looking into this Tom. Here's the output from PostgreSQL log:\n\n*** Postgresql Log:\n\nTopMemoryContext: 32768 total in 4 blocks; 7232 free (9 chunks); 25536\nused\nOperator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256\nused\nTopTransactionContext: 8192 total in 1 blocks; 6816 free (0 chunks);\n1376 used\nMessageContext: 8192 total in 1 blocks; 7104 free (1 chunks); 1088 used\nsmgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320\nused\nPortal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\nPortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\nPortalHeapMemory: 1077575324 total in 115158 blocks; 1860896 free\n(115146 chunks); 1075714428 used\nExecutorState: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nRelcache by OID: 8192 total in 1 blocks; 3896 free (0 chunks); 4296 used\nCacheMemoryContext: 516096 total in 6 blocks; 198480 free (2 chunks);\n317616 used\nmort_ht: 1024 total in 1 blocks; 328 free (0 chunks); 696 used\npg_depend_depender_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_depend_reference_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0\nchunks); 768 used\npg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\npg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0\nchunks); 768 used\npg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_conversion_default_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\npg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_class_oid_index: 1024 total in 1 
blocks; 392 free (0 chunks); 632\nused\npg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696\nused\npg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\nMdSmgr: 8192 total in 1 blocks; 7504 free (0 chunks); 688 used\nLockTable (locallock hash): 8192 total in 1 blocks; 3912 free (0\nchunks); 4280 used\nTimezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used\nErrorContext: 8192 total in 1 blocks; 8176 free (4 chunks); 16 used\n[2006-02-23 08:46:26 PST|[local]|mtrac|postgres] ERROR: out of memory\n[2006-02-23 08:46:26 PST|[local]|mtrac|postgres] DETAIL: Failed on\nrequest of size 134217728.\n\n-------------------------\n\n*** Stack trace:\n\nI'm not having luck generating a stack trace so far. Following the gdb\ninstructions, the create index statement never comes back with either\nthe I/O error or a success (created index). I'm still trying to figure\nthis out. Hopefully, the above from the server log may shed some light\non the problem.\n\nThanks again,\n\n----\n \n Husam Tomeh \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 14, 2006 3:49 PM\nTo: Tomeh, Husam\nCc: [email protected]\nSubject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and\nCreate Index\n\n\"Tomeh, Husam\" <[email protected]> writes:\n> mtrac=# show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 1048576 <======\n> (1 row)\n\n> mtrac=#\n> mtrac=#\n> mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);\n> ERROR: out of memory\n<===\n> DETAIL: Failed on request of size 134217728. <===\n\nIt would be useful to look at the detailed allocation info that this\n(should have) put into the postmaster log. Also, if you could get\na stack trace back from the error, that would be even more useful.\nTo do that,\n\t* start psql\n\t* determine PID of connected backend (use pg_backend_pid())\n\t* in another window, as postgres user,\n\t\tgdb /path/to/postgres backend-PID\n\t\tgdb> break errfinish\n\t\tgdb> cont\n\t* issue failing command in psql\n\t* when breakpoint is reached,\n\t\tgdb> bt\n\t\t... stack trace printed here ...\n\t\tgdb> q\n\n\t\t\tregards, tom lane\n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n", "msg_date": "Thu, 23 Feb 2006 11:57:03 -0800", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and" } ]
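Given the 1 GB maintenance_work_mem shown above, one obvious experiment while chasing this is to lower the setting for a single session and see whether the CREATE INDEX then completes; a sketch (the 256 MB figure is arbitrary, and the value is given in kB because unit suffixes only arrived in later releases):

    SET maintenance_work_mem = 262144;   -- 256 MB for this session only
    CREATE INDEX mort_ht ON mortgage (county_id, mtg_rec_dt);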
[ { "msg_contents": " \nWhat's more interesting is this:\n\nWhen I first connect to the database via \"psql\" and issue the \"create\nindex\" statement, of course, I get the \"out of memory\" error. If I\ndon't quit my current session and re-ran the same DDL statement again,\nthe index gets created successfully!.. However, if after my first\nunsuccessful run, I exit my session and re-connect again, and then run\nthe DDL, it will fail again and get the same error. I have done that for\nmany times and appears to have a consistent pattern of behavior. Not\nsure if that'll help, but I thought it may be an interesting observation\nto think about.\n\n----\n \n Husam Tomeh\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomeh,\nHusam\nSent: Thursday, February 23, 2006 11:57 AM\nTo: Tom Lane\nCc: [email protected]\nSubject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and\n\n \nThank for looking into this Tom. Here's the output from PostgreSQL log:\n\n*** Postgresql Log:\n\nTopMemoryContext: 32768 total in 4 blocks; 7232 free (9 chunks); 25536\nused\nOperator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256\nused\nTopTransactionContext: 8192 total in 1 blocks; 6816 free (0 chunks);\n1376 used\nMessageContext: 8192 total in 1 blocks; 7104 free (1 chunks); 1088 used\nsmgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320\nused\nPortal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\nPortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\nPortalHeapMemory: 1077575324 total in 115158 blocks; 1860896 free\n(115146 chunks); 1075714428 used\nExecutorState: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nRelcache by OID: 8192 total in 1 blocks; 3896 free (0 chunks); 4296 used\nCacheMemoryContext: 516096 total in 6 blocks; 198480 free (2 chunks);\n317616 used\nmort_ht: 1024 total in 1 blocks; 328 free (0 chunks); 696 used\npg_depend_depender_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_depend_reference_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0\nchunks); 768 used\npg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\npg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0\nchunks); 768 used\npg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_language_oid_index: 
1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_conversion_default_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\npg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696\nused\npg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\nMdSmgr: 8192 total in 1 blocks; 7504 free (0 chunks); 688 used\nLockTable (locallock hash): 8192 total in 1 blocks; 3912 free (0\nchunks); 4280 used\nTimezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used\nErrorContext: 8192 total in 1 blocks; 8176 free (4 chunks); 16 used\n[2006-02-23 08:46:26 PST|[local]|mtrac|postgres] ERROR: out of memory\n[2006-02-23 08:46:26 PST|[local]|mtrac|postgres] DETAIL: Failed on\nrequest of size 134217728.\n\n-------------------------\n\n*** Stack trace:\n\nI'm not having luck generating a stack trace so far. Following the gdb\ninstructions, the create index statement never comes back with either\nthe I/O error or a success (created index). I'm still trying to figure\nthis out. Hopefully, the above from the server log may shed some light\non the problem.\n\nThanks again,\n\n----\n \n Husam Tomeh \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 14, 2006 3:49 PM\nTo: Tomeh, Husam\nCc: [email protected]\nSubject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and\nCreate Index\n\n\"Tomeh, Husam\" <[email protected]> writes:\n> mtrac=# show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 1048576 <======\n> (1 row)\n\n> mtrac=#\n> mtrac=#\n> mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);\n> ERROR: out of memory\n<===\n> DETAIL: Failed on request of size 134217728. <===\n\nIt would be useful to look at the detailed allocation info that this\n(should have) put into the postmaster log. Also, if you could get\na stack trace back from the error, that would be even more useful.\nTo do that,\n\t* start psql\n\t* determine PID of connected backend (use pg_backend_pid())\n\t* in another window, as postgres user,\n\t\tgdb /path/to/postgres backend-PID\n\t\tgdb> break errfinish\n\t\tgdb> cont\n\t* issue failing command in psql\n\t* when breakpoint is reached,\n\t\tgdb> bt\n\t\t... 
stack trace printed here ...\n\t\tgdb> q\n\n\t\t\tregards, tom lane\n**********************************************************************\nThis message contains confidential information intended only for the use\nof the addressee(s) named above and may contain information that is\nlegally privileged. If you are not the addressee, or the person\nresponsible for delivering it to the addressee, you are hereby notified\nthat reading, disseminating, distributing or copying this message is\nstrictly prohibited. If you have received this message by mistake,\nplease immediately notify us by replying to the message and delete the\noriginal message immediately thereafter.\n\nThank you.\n\n\n FADLD Tag\n**********************************************************************\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Thu, 23 Feb 2006 14:42:07 -0800", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and" }, { "msg_contents": "\"Tomeh, Husam\" <[email protected]> writes:\n> When I first connect to the database via \"psql\" and issue the \"create\n> index\" statement, of course, I get the \"out of memory\" error. If I\n> don't quit my current session and re-ran the same DDL statement again,\n> the index gets created successfully!.. However, if after my first\n> unsuccessful run, I exit my session and re-connect again, and then run\n> the DDL, it will fail again and get the same error. I have done that for\n> many times and appears to have a consistent pattern of behavior.\n\nNow that you know how to reproduce it, please have another go at getting\nthat stack trace. The palloc printout certainly looks like some kind of\nmemory-leak issue, but I can't tell more than that from it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Feb 2006 17:46:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 0ut of Memory Error during Vacuum Analyze and " } ]
[ { "msg_contents": "Hi,\n\nWe're executing a query that has the following plan and we're \nwondering given the size of the data set, what's a better way to \nwrite the query? It's been running since 2pm 2 days ago.\n\nexplain DELETE FROM cds.cds_mspecxx WHERE ProdID not in (SELECT \nstage.ProdID FROM cds_stage.cds_Catalog stage where stage.countryCode \n= 'us') and countryCode = 'us';\nQUERY PLAN\n------------------------------------------------------------------------ \n---------------------------\nIndex Scan using pk_mspecxx on cds_mspecxx \n(cost=53360.87..208989078645.48 rows=7377879 width=6)\nIndex Cond: ((countrycode)::text = 'us'::text)\nFilter: (NOT (subplan))\nSubPlan\n-> Materialize (cost=53360.87..77607.54 rows=1629167 width=12)\n-> Seq Scan on cds_catalog stage (cost=0.00..43776.70 rows=1629167 \nwidth=12)\nFilter: ((countrycode)::text = 'us'::text)\n(7 rows)\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com", "msg_date": "Thu, 23 Feb 2006 23:54:45 -0700", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Really really slow query. What's a better way?" }, { "msg_contents": "how about something like:\n\nDELETE FROM cds.cds_mspecxx WHERE NOT EXISTS (SELECT 1 FROM \ncds_stage.cds_Catalog stage where stage.countryCode = 'us' and \nstage.ProdId=cds.cds_mspecxx.ProdId) and countryCode = 'us';\n\nRun explain on it first to see how it will be planned. Both tables \nshould have an index over (countryCode, ProdId) I think.\n\nChris\n\nBrendan Duddridge wrote:\n> Hi,\n> \n> We're executing a query that has the following plan and we're wondering \n> given the size of the data set, what's a better way to write the query? \n> It's been running since 2pm 2 days ago.\n> \n> explain DELETE FROM cds.cds_mspecxx WHERE ProdID not in (SELECT \n> stage.ProdID FROM cds_stage.cds_Catalog stage where stage.countryCode = \n> 'us') and countryCode = 'us';\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------\n> Index Scan using pk_mspecxx on cds_mspecxx \n> (cost=53360.87..208989078645.48 rows=7377879 width=6)\n> Index Cond: ((countrycode)::text = 'us'::text)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=53360.87..77607.54 rows=1629167 width=12)\n> -> Seq Scan on cds_catalog stage (cost=0.00..43776.70 rows=1629167 width=12)\n> Filter: ((countrycode)::text = 'us'::text)\n> (7 rows)\n> \n> Thanks,\n> *\n> *____________________________________________________________________\n> *Brendan Duddridge* | CTO | 403-277-5591 x24 | [email protected] \n> <mailto:[email protected]>\n> *\n> *ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n> \n> http://www.clickspace.com \n> \n\n", "msg_date": "Fri, 24 Feb 2006 15:06:34 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow query. What's a better way?" 
}, { "msg_contents": "Thanks Chris for the very quick response!\n\nJust after posting this message, we tried explain on the same format \nas you just posted:\n\nexplain DELETE FROM cds.cds_mspecxx WHERE not exists (SELECT 'X' FROM \ncds_stage.cds_Catalog stage where stage.countryCode = 'us' and \nstage.prodid = cds.cds_mspecxx.prodid) and countryCode = 'us';\nQUERY PLAN\n------------------------------------------------------------------------ \n----------------------\nBitmap Heap Scan on cds_mspecxx (cost=299654.85..59555205.23 \nrows=7377879 width=6)\nRecheck Cond: ((countrycode)::text = 'us'::text)\nFilter: (NOT (subplan))\n-> Bitmap Index Scan on pk_mspecxx (cost=0.00..299654.85 \nrows=14755759 width=0)\nIndex Cond: ((countrycode)::text = 'us'::text)\nSubPlan\n-> Index Scan using pk_catalog on cds_catalog stage (cost=0.00..7.97 \nrows=2 width=0)\nIndex Cond: (((prodid)::text = ($0)::text) AND ((countrycode)::text = \n'us'::text))\n(8 rows)\n\nSeems way better. I'm not sure it can get any faster though. Not sure \nif having the indexes as (countryCode, ProdId) or (ProdId, \ncountryCode) would make any kind of difference though. Would it?\n\nThanks!\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Feb 24, 2006, at 12:06 AM, Christopher Kings-Lynne wrote:\n\n> how about something like:\n>\n> DELETE FROM cds.cds_mspecxx WHERE NOT EXISTS (SELECT 1 FROM \n> cds_stage.cds_Catalog stage where stage.countryCode = 'us' and \n> stage.ProdId=cds.cds_mspecxx.ProdId) and countryCode = 'us';\n>\n> Run explain on it first to see how it will be planned. Both tables \n> should have an index over (countryCode, ProdId) I think.\n>\n> Chris\n>\n> Brendan Duddridge wrote:\n>> Hi,\n>> We're executing a query that has the following plan and we're \n>> wondering given the size of the data set, what's a better way to \n>> write the query? It's been running since 2pm 2 days ago.\n>> explain DELETE FROM cds.cds_mspecxx WHERE ProdID not in (SELECT \n>> stage.ProdID FROM cds_stage.cds_Catalog stage where \n>> stage.countryCode = 'us') and countryCode = 'us';\n>> QUERY PLAN \n>> --------------------------------------------------------------------- \n>> ------------------------------\n>> Index Scan using pk_mspecxx on cds_mspecxx \n>> (cost=53360.87..208989078645.48 rows=7377879 width=6)\n>> Index Cond: ((countrycode)::text = 'us'::text)\n>> Filter: (NOT (subplan))\n>> SubPlan\n>> -> Materialize (cost=53360.87..77607.54 rows=1629167 width=12)\n>> -> Seq Scan on cds_catalog stage (cost=0.00..43776.70 rows=1629167 \n>> width=12)\n>> Filter: ((countrycode)::text = 'us'::text)\n>> (7 rows)\n>> Thanks,\n>> *\n>> *____________________________________________________________________\n>> *Brendan Duddridge* | CTO | 403-277-5591 x24 | \n>> [email protected] <mailto:[email protected]>\n>> *\n>> *ClickSpace Interactive Inc.\n>> Suite L100, 239 - 10th Ave. SE\n>> Calgary, AB T2G 0V9\n>> http://www.clickspace.com\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>", "msg_date": "Fri, 24 Feb 2006 00:24:00 -0700", "msg_from": "Brendan Duddridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow query. What's a better way?" } ]
[ { "msg_contents": "Hi list\n\nI'm fairly new to Postgres so bear with me. Googling and searching the\nlist, I didn't find anything that resembled my problem.\n\nI have a large table with ca. 10 million inserts per day (fairly simple\ndata: timestam, a couple of id's and a varchar message)\n\nI run a query every couple of minutes that looks at the new entries\nsince the last run and retrieves them for further processing (using a\nWHERE eventtime > '2006-02-24 14:00:00' ) to limit to the most recent\nentries\n\nThese queries run around 40-50 seconds (largely due to some LIKE %msg%\nthrewn in for good measure). Postgres performs a seq table scan on\nthose queries :-(\n\nMy idea is to limit the search to only the last n entries because I\nfound that a\n\nSELECT * from table ORDER eventtime DESC limit 1000\n\nis very fast. Because the inserts are in chronolgical order, I can\nstore the sequential id of the highest row from the last query and\nsubtract that from the current high row count to determine that number.\n\nIs there a way to limit the expensive query to only those last 1000 (or\nwhatever) results?\n\nI have tried to nest SELECTS but my SQL-fu is to limited to get\nanything through the SQL processor :-)\n\nthanks\nJens-Christian Fischer\n\n", "msg_date": "24 Feb 2006 06:13:31 -0800", "msg_from": "\"jcfischer\" <[email protected]>", "msg_from_op": true, "msg_subject": "nested query on last n rows of huge table" }, { "msg_contents": "sorry: Postgres 8.0.2 server. The EXPLAIN ANALYZE for the query looks\nlike this:\n\nexplain analyze select\nsyslog.logs.eventtime,assets.hosts.name,syslog.processes.name as\nprocess\nfrom\nsyslog.logs,assets.hosts,assets.ipaddrs,assets.macaddrs,syslog.processes\nwhere msg like '%session opened for user root%'\nand syslog.logs.assets_ipaddr_id = assets.ipaddrs.id\nand assets.ipaddrs.macaddr_id = assets.macaddrs.id\nand assets.macaddrs.host_id = assets.hosts.id\nand syslog.processes.id = syslog.logs.process_id and\neventtime > timestamp '2006-02-24 15:05:00'\n\nNested Loop (cost=0.00..328832.34 rows=2 width=254) (actual\ntime=49389.924..49494.665 rows=45 loops=1)\n -> Nested Loop (cost=0.00..328826.32 rows=1 width=90) (actual\ntime=49365.709..49434.500 rows=45 loops=1)\n -> Nested Loop (cost=0.00..328820.30 rows=1 width=90) (actual\ntime=49327.211..49360.043 rows=45 loops=1)\n -> Nested Loop (cost=0.00..328814.27 rows=1 width=90)\n(actual time=49327.183..49344.281 rows=45 loops=1)\n -> Seq Scan on logs (cost=0.00..328809.04 rows=1\nwidth=16) (actual time=49314.928..49331.451 rows=45 loops=1)\n Filter: (((msg)::text ~~ '%session opened for\nuser root%'::text) AND (eventtime > '2006-02-24 15:05:00'::timestamp\nwithout time zone))\n -> Index Scan using \"pk_syslog.processes\" on\nprocesses (cost=0.00..5.21 rows=1 width=82) (actual time=0.278..0.280\nrows=1 loops=45)\n Index Cond: (processes.id =\n\"outer\".process_id)\n -> Index Scan using \"pk_assets.ipaddrs\" on ipaddrs\n(cost=0.00..6.01 rows=1 width=8) (actual time=0.344..0.346 rows=1\nloops=45)\n Index Cond: (\"outer\".assets_ipaddr_id = ipaddrs.id)\n -> Index Scan using \"pk_assets.macaddrs\" on macaddrs\n(cost=0.00..6.01 rows=1 width=8) (actual time=1.648..1.650 rows=1\nloops=45)\n Index Cond: (\"outer\".macaddr_id = macaddrs.id)\n -> Index Scan using \"pk_assets.hosts\" on hosts (cost=0.00..6.01\nrows=1 width=172) (actual time=1.330..1.331 rows=1 loops=45)\n Index Cond: (\"outer\".host_id = hosts.id)\nTotal runtime: 49494.830 ms\n\n", "msg_date": "24 Feb 2006 06:26:40 -0800", "msg_from": "\"jcfischer\" 
<[email protected]>", "msg_from_op": true, "msg_subject": "Re: nested query on last n rows of huge table" }, { "msg_contents": "\nOn Feb 24, 2006, at 23:13 , jcfischer wrote:\n\n> Is there a way to limit the expensive query to only those last 1000 \n> (or\n> whatever) results?\n\n>\n> I have tried to nest SELECTS but my SQL-fu is to limited to get\n> anything through the SQL processor :-)\n\nThe basics of a subquery are:\n\nSELECT <expensive query>\nFROM (\n\tSELECT *\n\tFROM table\n\tORDER eventtime DESC\n\tLIMIT 1000\n\t) as most_recent_1000\n\nDon't know enough about the other parts, but hopefully this can get \nyou started. :)\n\nMichael Glaesemann\ngrzm myrealbox com\n\n\n\n", "msg_date": "Wed, 1 Mar 2006 12:44:25 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested query on last n rows of huge table" } ]
[ { "msg_contents": "Hello all,\n\nI have this small set of questions that i have been looking to ask and\nhere it comes.\n\nLets imagin we have a setup where we have multiple databases instances\non one server,\nand we are starting to use more and more cross databases queries [query\nunions] using dblink [multiple context but \nusing the same templated database schema]. Most of the operations\nperformed on those instance\ndo not need to be done in the same context , but as requirement comes\nfor more and more inter context\ndata analysis we are starting to see some issue with the current\ndatabases layout.\n\nThats where we asked our self what we could do. Without mutch changes it\nwouldn't be so hard to merge\nlets say 4 context into one [4 database using the same template into 1]\n, and we are sure that those cross databases\nquery would get major speed improvement.\n\nBut now we have some issues arround having 1 database context for whats\nis actualy contained in 4 separated ones.\n\nHaving one context implies either 1 database with 4 SCHEMA OR 1 database\nwith 1 schema [still trying to see where we want to go]\n\nThe first issue we have is with pg_hba Acls where you can't define\nspecific ACLS for SCHEMA , where\nif we want to have a first line of defense against data segregation we\nmust\n\t1.Create \"ala context\" users for schema and per schema defined\nacls ...[hard to manage and a bit out of Security Logic]\n or\n\t2.Just create as many user as needed and implement underlying\nacls ..[also hard to manage and out of Security Logic]\n\nNow that the issue is that we can't implement a per connection user/ACL\npair , would it be a good idea to implement\nSchema ACLS and mabey pg_hba.conf Schema acls \n\n\nI mean \nNOW:\nlocal database user auth-method [auth-option]\n\t\nCould be :\nlocal database/or database.schema user auth-method [auth-option]\n\t\t^^^\n\t database could be considered as database.public\n\nNow if we would have that kind of acces Control to postresql i would\nlike to know what is the \nbenefices of using multiple schema in a single database in term of\nPostgresql Backend management \n[file on disk,Index Management,Maintenance Management],beside the\npossibilites to do quick and \nefficient cross databases querys, \n[Note those cross databases queries are not done by a users that need\nper Schema/Databases \nACLS but by a \"superior user\" so we do not need to consider that for\nthis issue]\n\nIs it really more efficient of having 4 schema in one databaes , 1\ndatabase with one schema or having 4 databases?\nI think i can have some of those answers i think but i want behind the\nscene point of view on those.\n\nThanks you all in advance, have a nice week-end\n\n-elz\n\nAVERTISSEMENT CONCERNANT LA CONFIDENTIALITE \n\nLe present message est a l'usage exclusif du ou des destinataires mentionnes ci-dessus. Son contenu est confidentiel et peut etre assujetti au secret professionnel. Si vous avez recu le present message par erreur, veuillez nous en aviser immediatement et le detruire en vous abstenant d'en faire une copie, d'en divulguer le contenu ou d'y donner suite.\n\nCONFIDENTIALITY NOTICE\n\nThis communication is intended for the exclusive use of the addressee identified above. Its content is confidential and may contain privileged information. 
If you have received this communication by error, please notify the sender and delete the message without copying or disclosing it.\n", "msg_date": "Fri, 24 Feb 2006 15:03:36 -0500", "msg_from": "\"Eric Lauzon\" <[email protected]>", "msg_from_op": true, "msg_subject": "Schema vs Independant Databases, ACLS,Overhead,pg_hba.conf" }, { "msg_contents": "Eric Lauzon wrote:\n\nHi,\n\n> Now that the issue is that we can't implement a per connection user/ACL\n> pair , would it be a good idea to implement\n> Schema ACLS and mabey pg_hba.conf Schema acls \n\nThis is certainly possible to do using GRANT and REVOKE on the schemas;\nno need to fool with pg_hba.conf. You can of course create groups/roles\nto simplify the assignment of privileges, as needed.\n\nApart from the much more efficient queries (i.e. using cross-schema\nqueries instead of dblink), I don't think you're going to see much\nchange in performance, because most things like WAL and shared buffers\nare shared among all databases anyway. You'd save a bit by not having\nmultiple copies of system caches (pg_class cache, etc), but I wouldn't\nknow if that's going to be very noticeable next to the primary\nimprovement.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sat, 25 Feb 2006 13:03:00 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Schema vs Independant Databases, ACLS,Overhead,pg_hba.conf" } ]
[ { "msg_contents": "Hi all\n\n Is it secure to disable fsync havin battery-backed disk cache?\n \n Thx\n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax: 94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de\nsoluciones de protección contra virus e intrusos, presenta su nueva\nfamilia de soluciones. Todos los usuarios de ordenadores, desde las\nredes más grandes a los domésticos, disponen ahora de nuevos productos\ncon excelentes tecnologías de seguridad. Más información en:\nhttp://www.pandasoftware.es/productos\n\n \n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros\nproductos en http://www.pandasoftware.es/descargas/\n\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n        Hi all\n\n        Is it secure to disable fsync havin battery-backed disk cache?\n                \n        Thx\n\n\n\n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax:  94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de soluciones de protección contra virus e intrusos, presenta su nueva familia de soluciones. Todos los usuarios de ordenadores, desde las redes más grandes a los domésticos, disponen ahora de nuevos productos con excelentes tecnologías de seguridad. Más información en: http://www.pandasoftware.es/productos\n\n\n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros productos en http://www.pandasoftware.es/descargas/", "msg_date": "Mon, 27 Feb 2006 11:03:14 +0100", "msg_from": "Javier Somoza <[email protected]>", "msg_from_op": true, "msg_subject": "fsync and battery-backed caches" }, { "msg_contents": "Javier Somoza schrieb:\n> \n> Hi all\n> \n> Is it secure to disable fsync havin battery-backed disk cache?\n> \n> Thx\n> \nNo. fsync moves the data from OS memory cache to disk-adaptor\ncache which is required to benefit from battery backup.\n\nIf this data is written to the plates immediately depends on settings\nof your disk adaptor card.\n\nRegards\nTino\n", "msg_date": "Mon, 27 Feb 2006 11:12:57 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync and battery-backed caches" }, { "msg_contents": "On Mon, Feb 27, 2006 at 11:12:57AM +0100, Tino Wildenhain wrote:\n> Javier Somoza schrieb:\n> >\n> > Hi all\n> >\n> > Is it secure to disable fsync havin battery-backed disk cache?\n> > \n> > Thx\n> >\n> No. fsync moves the data from OS memory cache to disk-adaptor\n> cache which is required to benefit from battery backup.\n\nMore importantly, in guarantees that data is committed to non-volatile\nstorage in such a way that PostgreSQL can recover from a crash without\ncorruption.\n\nIf you have a battery-backed controller and turn on write caching you\nshouldn't see much effect from fsync anyway.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 27 Feb 2006 18:22:23 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync and battery-backed caches" }, { "msg_contents": "Jim C. 
Nasby wrote:\n> On Mon, Feb 27, 2006 at 11:12:57AM +0100, Tino Wildenhain wrote:\n> > Javier Somoza schrieb:\n> > >\n> > > Hi all\n> > >\n> > > Is it secure to disable fsync havin battery-backed disk cache?\n> > > \n> > > Thx\n> > >\n> > No. fsync moves the data from OS memory cache to disk-adaptor\n> > cache which is required to benefit from battery backup.\n> \n> More importantly, in guarantees that data is committed to non-volatile\n> storage in such a way that PostgreSQL can recover from a crash without\n> corruption.\n> \n> If you have a battery-backed controller and turn on write caching you\n> shouldn't see much effect from fsync anyway.\n\nWe do mention battery-backed cache in our docs:\n\n\thttp://www.postgresql.org/docs/8.1/static/wal.html\n\nIf it is unclear, please let us know.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n SRA OSS, Inc. http://www.sraoss.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 27 Feb 2006 20:06:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync and battery-backed caches" }, { "msg_contents": "Yeah, i saw it. It says full-page-writes can be disabled\nwithout problems.\n But i wanted to confirm fsync cannot be disabled although i\nhave battery.\n\n Thanks!! :-)\n\n\n\n> We do mention battery-backed cache in our docs:\n> \n> \thttp://www.postgresql.org/docs/8.1/static/wal.html\n> \n> If it is unclear, please let us know.\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax: 94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de\nsoluciones de protección contra virus e intrusos, presenta su nueva\nfamilia de soluciones. Todos los usuarios de ordenadores, desde las\nredes más grandes a los domésticos, disponen ahora de nuevos productos\ncon excelentes tecnologías de seguridad. Más información en:\nhttp://www.pandasoftware.es/productos\n\n \n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros\nproductos en http://www.pandasoftware.es/descargas/\n\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n            Yeah, i saw it. It says full-page-writes can be disabled without problems.\n            But i wanted to confirm fsync cannot be disabled although i have battery.\n\n            Thanks!!   :-)\n\n\nWe do mention battery-backed cache in our docs:\n\n\thttp://www.postgresql.org/docs/8.1/static/wal.html\n\nIf it is unclear, please let us know.\n\n\n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax:  94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de soluciones de protección contra virus e intrusos, presenta su nueva familia de soluciones. Todos los usuarios de ordenadores, desde las redes más grandes a los domésticos, disponen ahora de nuevos productos con excelentes tecnologías de seguridad. Más información en: http://www.pandasoftware.es/productos\n\n\n\n¡Protéjase ahora contra virus e intrusos! 
Pruebe gratis nuestros productos en http://www.pandasoftware.es/descargas/", "msg_date": "Tue, 28 Feb 2006 09:19:36 +0100", "msg_from": "Javier Somoza <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fsync and battery-backed caches" }, { "msg_contents": "Hi,\n\n\tis interesting to do it when using RAID 1+0?\n\n\tThx\n\n\n\n\n\n\n\n\n\n\n\n\tHi,\n\n\tis interesting to do it when using RAID 1+0?\n\n\tThx", "msg_date": "Tue, 28 Feb 2006 10:45:45 +0100", "msg_from": "Javier Somoza <[email protected]>", "msg_from_op": true, "msg_subject": "Different disks for xlogs and data" }, { "msg_contents": "At 04:45 AM 2/28/2006, Javier Somoza wrote:\n\n>Hi,\n>is interesting to do it (use different HD sets AKA \"LUNs\" for xlogs \n>than for data) when using RAID 1+0?\n\nIf \"interesting\" means \"this increases performance\", this is not a \nsimple question.\n\nRegardless of what RAID level you use, under the proper circumstances \nit can boost performance to put xlog on a dedicated set of spindles.\n\nIf you have a large enough pool of HDs so that you can maximize the \nperformance of any LUN you create, then it is always good to put xlog \non its own LUN.\n\nIn the \"perfect\" world, each table or set of tables that tends to be \naccessed for the same query, which obviously includes xlog, would be \non its own LUN; and each LUN would contain enough HDs to maximize performance.\n\nMost people can't afford and/or fit that many HDs into their set up.\n\nIf you have a small number of HDs, and \"small\" depends on the \nspecifics of your DB and the HDs you are using, you may get better \nperformance by leaving everything together on one set of HDs.\n\nOnce you have more than whatever is a \"small\" number of HDs for your \nDB and set of disks, _usually_, but not always, one of the best first \ntables to move to different HD's is xlog.\n\nThe best way to find out what will get the most performance from your \nspecific HW and DB is to test, test, test. Start by putting \neverything on 1 RAID 5 set. Test. Then (if you have enough HDs) put \neverything on 1 RAID 1+0 set. Test again. If performance is \"good \nenough\", and only you know what that is for your DB, then STOP and \njust use whichever works best for you. It's possible to spend far \nmore money in man hours of effort than it is worth trying to get a \nfew extra percents of performance.\n\nIf you have the need and the budget to tweak things further, then \nstart worrying about the more complicated stuff.\n\nAs a rule of thumb, figure each 7200rpm HD does ~50MBps and each \n15Krpm (or 10Krpm WD Raptor) does ~75MBps. So a 8 HD RAID 5 set of \n7200rpm HDs does ~(8-1= 7)*50= ~350MBps. A RAID 10 set made of the \nsame 8 HDs should do ~4*50= 200MBps. (=If and only if= the rest of \nyour HW let's the RAID set push data at that speed.). Also, note \nthat RAID 5 writes are often 2/3 - 4/5 the speed of RAID 5 reads.\n\nIf we pull 1 HD from the 8 HD RAID 5 set above and dedicate it to \nxlog, then xlog will always get ~50MBps bandwidth. OTOH, in the \noriginal RAID 5 configuration xlog was probably sharing between 2/3 \nand 4/5 of ~350MBps bandwidth. Say ~233-280MBps. Which results in \nhigher overall performance for you DB? 
Only testing by you on your \nDB can tell.\n\nHope this helps,\nRon\n\n\n\n\n\n \n\n\n", "msg_date": "Tue, 28 Feb 2006 08:40:56 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different disks for xlogs and data" }, { "msg_contents": "Actually, you can't assume that a BBU means you can safely disable\nfull-page-writes. Depending on the controller, it's still possible to\nend up with partially written pages.\n\nBTW, if your mailer makes doing so convenient, it would be nice to trim\ndown your .signature; note that it's about 3x longer than the email you\nactually sent.\n\nOn Tue, Feb 28, 2006 at 09:19:36AM +0100, Javier Somoza wrote:\n> \n> Yeah, i saw it. It says full-page-writes can be disabled\n> without problems.\n> But i wanted to confirm fsync cannot be disabled although i\n> have battery.\n> \n> Thanks!! :-)\n> \n> \n> \n> > We do mention battery-backed cache in our docs:\n> > \n> > \thttp://www.postgresql.org/docs/8.1/static/wal.html\n> > \n> > If it is unclear, please let us know.\n> \n> Javier Somoza\n> Oficina de Direcci?n Estrat?gica\n> mailto:[email protected]\n> \n> Panda Software\n> Buenos Aires, 12\n> 48001 BILBAO - ESPA?A\n> Tel?fono: 902 24 365 4\n> Fax: 94 424 46 97\n> http://www.pandasoftware.es\n> Panda Software, una de las principales compa??as desarrolladoras de\n> soluciones de protecci?n contra virus e intrusos, presenta su nueva\n> familia de soluciones. Todos los usuarios de ordenadores, desde las\n> redes m?s grandes a los dom?sticos, disponen ahora de nuevos productos\n> con excelentes tecnolog?as de seguridad. M?s informaci?n en:\n> http://www.pandasoftware.es/productos\n> \n> \n> \n> ?Prot?jase ahora contra virus e intrusos! Pruebe gratis nuestros\n> productos en http://www.pandasoftware.es/descargas/\n> \n> \n> \n> \n> \n> \n> \n> \n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 28 Feb 2006 11:23:34 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync and battery-backed caches" }, { "msg_contents": "Ups sorry.\n\n\n> Actually, you can't assume that a BBU means you can safely disable\n> full-page-writes. Depending on the controller, it's still possible to\n> end up with partially written pages.\n> \n> BTW, if your mailer makes doing so convenient, it would be nice to trim\n> down your .signature; note that it's about 3x longer than the email you\n> actually sent.\n\n\n\n\n\n\n\n\n\n\n            Ups sorry.\n\n\nActually, you can't assume that a BBU means you can safely disable\nfull-page-writes. Depending on the controller, it's still possible to\nend up with partially written pages.\n\nBTW, if your mailer makes doing so convenient, it would be nice to trim\ndown your .signature; note that it's about 3x longer than the email you\nactually sent.", "msg_date": "Tue, 28 Feb 2006 18:27:34 +0100", "msg_from": "Javier Somoza <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fsync and battery-backed caches" }, { "msg_contents": "Ok, you absolutely can't guarantee you won't get partial page writes\nthen. A UPS buys you no more data safety than being plugged directly\ninto the wall. UPS's fail. People trip over cords. Breakers fail. Even\nif you have multiple power supplies on multiple circuits fed by\ndifferent UPS's you can *still* have unexpected power failures. 
The\n'master' for distributed.net had exactly that happen recently; the two\nbreakers feeding it (from 2 seperate UPS's) failed simultaneously.\n\nIn a nutshell, having a server on a UPS is notthing at all like having a\nBBU on the raid controller: commiting to the BBU is essentially the same\nas committing to the drives, unless the BBU runs out of power before the\nserver has power restored, or fails in some similar fasion. But because\nthere's many fewer parts involved, such a failure of the BBU is far less\nlikely than a failure up-stream.\n\nSo, if you want performance, get a controller with a BBU and allow it to\ncache writes. While you're at it, try and get one that will\nautomatically disable write caching if the BBU fails for some reason.\n\nOn Tue, Feb 28, 2006 at 06:27:34PM +0100, Javier Somoza wrote:\n> \n> Ups sorry.\n> \n> \n> > Actually, you can't assume that a BBU means you can safely disable\n> > full-page-writes. Depending on the controller, it's still possible to\n> > end up with partially written pages.\n> > \n> > BTW, if your mailer makes doing so convenient, it would be nice to trim\n> > down your .signature; note that it's about 3x longer than the email you\n> > actually sent.\n> \n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 28 Feb 2006 15:09:25 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fsync and battery-backed caches" } ]
[ { "msg_contents": "Hi,\n\n How should i set this configuration? Depending on the\nmemory?\n And then is it necessary to perform a benchmarking test?\n\n What did you do?\n\n Thx!\n \n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax: 94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de\nsoluciones de protección contra virus e intrusos, presenta su nueva\nfamilia de soluciones. Todos los usuarios de ordenadores, desde las\nredes más grandes a los domésticos, disponen ahora de nuevos productos\ncon excelentes tecnologías de seguridad. Más información en:\nhttp://www.pandasoftware.es/productos\n\n \n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros\nproductos en http://www.pandasoftware.es/descargas/\n\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n            Hi,\n\n            How should i set this configuration? Depending on the memory?\n            And then is it necessary to perform a benchmarking test?\n\n            What did you do?\n\n            Thx!\n            \n\n\n\n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax:  94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de soluciones de protección contra virus e intrusos, presenta su nueva familia de soluciones. Todos los usuarios de ordenadores, desde las redes más grandes a los domésticos, disponen ahora de nuevos productos con excelentes tecnologías de seguridad. Más información en: http://www.pandasoftware.es/productos\n\n\n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros productos en http://www.pandasoftware.es/descargas/", "msg_date": "Mon, 27 Feb 2006 13:43:36 +0100", "msg_from": "Javier Somoza <[email protected]>", "msg_from_op": true, "msg_subject": "Setting the shared buffers" }, { "msg_contents": ">\n> How should i set this configuration? Depending on the memory?\n> And then is it necessary to perform a benchmarking test?\n\n\nI've set it to 'shared_buffers = 12288' with 8 GB RAM on postgresql 7.4.9,\nFreeBSD 6.0. There is no exact size, depends on type of workload, server-OS\netc. Adjust it up and down and see if your performance changes.\n\nregards\nClaus\n\n            How should i set this configuration? Depending on the memory?\n            And then is it necessary to perform a benchmarking test?I've set it to 'shared_buffers = 12288' with 8 GB RAM on postgresql 7.4.9, FreeBSD 6.0. There is no exact size, depends on type of workload, server-OS etc. Adjust it up and down and see if your performance changes.\nregardsClaus", "msg_date": "Mon, 27 Feb 2006 14:17:35 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting the shared buffers" } ]
[ { "msg_contents": "Hi all,\n\nShort story:\n\nI have a quite big table (about 200 million records, and ~2-3 million\nupdates/~1 million inserts/few thousand deletes per day). I started a\nvacuum on it on friday evening, and it still runs now (monday\nafternoon). I used \"vacuum verbose\", and the output looks like:\n\nINFO: vacuuming \"public.big_table\"\nINFO: index \"pk_big_table\" now contains 223227480 row versions in\n1069776 pages\nDETAIL: 711140 index row versions were removed.\n80669 index pages have been deleted, 80669 are currently reusable.\nCPU 14.56s/46.42u sec elapsed 13987.65 sec.\nINFO: index \"idx_big_table_1\" now contains 223229722 row versions in\n740108 pages\nDETAIL: 711140 index row versions were removed.\n58736 index pages have been deleted, 58733 are currently reusable.\nCPU 12.90s/94.97u sec elapsed 10052.12 sec.\nINFO: index \"idx_big_table_2\" now contains 16779341 row versions in\n55831 pages\nDETAIL: 125895 index row versions were removed.\n369 index pages have been deleted, 337 are currently reusable.\nCPU 1.39s/5.81u sec elapsed 763.25 sec.\nINFO: index \"idx_big_table_3\" now contains 472945 row versions in 2536\npages\nDETAIL: 5328 index row versions were removed.\n595 index pages have been deleted, 595 are currently reusable.\nCPU 0.06s/0.20u sec elapsed 35.36 sec.\nINFO: index \"idx_big_table_4\" now contains 471419 row versions in 2537\npages\nDETAIL: 5318 index row versions were removed.\n591 index pages have been deleted, 591 are currently reusable.\nCPU 0.08s/0.21u sec elapsed 36.18 sec.\nINFO: \"big_table\": removed 2795984 row versions in 413228 pages\nDETAIL: CPU 22.19s/26.92u sec elapsed 5095.57 sec.\nINFO: index \"pk_big_table\" now contains 221069840 row versions in\n1069780 pages\nDETAIL: 2162406 index row versions were removed.\n90604 index pages have been deleted, 80609 are currently reusable.\nCPU 7.77s/15.92u sec elapsed 13576.07 sec.\nINFO: index \"idx_big_table_1\" now contains 221087391 row versions in\n740109 pages\nDETAIL: 2162406 index row versions were removed.\n66116 index pages have been deleted, 58647 are currently reusable.\nCPU 6.34s/23.22u sec elapsed 10592.02 sec.\nINFO: index \"idx_big_table_2\" now contains 16782762 row versions in\n55831 pages\nDETAIL: 21 index row versions were removed.\n355 index pages have been deleted, 323 are currently reusable.\nCPU 0.24s/0.78u sec elapsed 651.89 sec.\nINFO: index \"idx_big_table_3\" now contains 482084 row versions in 2536\npages\nDETAIL: 525 index row versions were removed.\n561 index pages have been deleted, 561 are currently reusable.\nCPU 0.04s/0.10u sec elapsed 36.80 sec.\nINFO: index \"idx_big_table_4\" now contains 480575 row versions in 2537\npages\nDETAIL: 525 index row versions were removed.\n558 index pages have been deleted, 558 are currently reusable.\nCPU 0.07s/0.17u sec elapsed 39.37 sec.\nINFO: \"big_table\": removed 2795985 row versions in 32975 pages\nDETAIL: CPU 0.96s/0.30u sec elapsed 232.51 sec.\nINFO: index \"pk_big_table\" now contains 218297352 row versions in\n1069780 pages\nDETAIL: 2795309 index row versions were removed.\n103434 index pages have been deleted, 80489 are currently reusable.\nCPU 10.40s/18.63u sec elapsed 14420.05 sec.\nINFO: index \"idx_big_table_1\" now contains 218310055 row versions in\n740109 pages\nDETAIL: 2795309 index row versions were removed.\n75674 index pages have been deleted, 58591 are currently reusable.\nCPU 6.46s/23.33u sec elapsed 10495.41 sec.\nINFO: index \"idx_big_table_2\" now contains 16782885 row versions in\n55831 
pages\nDETAIL: 29 index row versions were removed.\n354 index pages have been deleted, 322 are currently reusable.\nCPU 0.24s/0.72u sec elapsed 653.09 sec.\nINFO: index \"idx_big_table_3\" now contains 491320 row versions in 2536\npages\nDETAIL: 451 index row versions were removed.\n529 index pages have been deleted, 529 are currently reusable.\nCPU 0.02s/0.13u sec elapsed 36.83 sec.\nINFO: index \"idx_big_table_4\" now contains 489798 row versions in 2537\npages\nDETAIL: 451 index row versions were removed.\n522 index pages have been deleted, 522 are currently reusable.\nCPU 0.03s/0.13u sec elapsed 36.50 sec.\nINFO: \"big_table\": removed 2795957 row versions in 32947 pages\nDETAIL: CPU 0.93s/0.28u sec elapsed 216.91 sec.\nINFO: index \"pk_big_table\" now contains 215519688 row versions in\n1069780 pages\nDETAIL: 2793693 index row versions were removed.\n115142 index pages have been deleted, 80428 are currently reusable.\nCPU 7.97s/16.05u sec elapsed 14921.06 sec.\nINFO: index \"idx_big_table_1\" now contains 215523269 row versions in\n740109 pages\nDETAIL: 2793693 index row versions were removed.\n83819 index pages have been deleted, 58576 are currently reusable.\nCPU 8.62s/34.15u sec elapsed 9607.76 sec.\nINFO: index \"idx_big_table_2\" now contains 16780518 row versions in\n55831 pages\nDETAIL: 2385 index row versions were removed.\n362 index pages have been deleted, 322 are currently reusable.\nCPU 0.20s/0.73u sec elapsed 701.77 sec.\nINFO: index \"idx_big_table_3\" now contains 492309 row versions in 2536\npages\nDETAIL: 1097 index row versions were removed.\n520 index pages have been deleted, 520 are currently reusable.\nCPU 0.06s/0.19u sec elapsed 39.09 sec.\nINFO: index \"idx_big_table_4\" now contains 490789 row versions in 2537\npages\nDETAIL: 1090 index row versions were removed.\n515 index pages have been deleted, 515 are currently reusable.\nCPU 0.05s/0.17u sec elapsed 40.08 sec.\nINFO: \"big_table\": removed 2795966 row versions in 33760 pages\nDETAIL: CPU 1.40s/0.47u sec elapsed 273.16 sec.\nINFO: index \"pk_big_table\" now contains 212731896 row versions in\n1069780 pages\nDETAIL: 2791935 index row versions were removed.\n127577 index pages have been deleted, 80406 are currently reusable.\nCPU 7.78s/16.26u sec elapsed 14241.76 sec.\nINFO: index \"idx_big_table_1\" now contains 212738938 row versions in\n740109 pages\nDETAIL: 2791935 index row versions were removed.\n93049 index pages have been deleted, 58545 are currently reusable.\nCPU 9.57s/32.24u sec elapsed 9782.60 sec.\nINFO: index \"idx_big_table_2\" now contains 16772407 row versions in\n55831 pages\nDETAIL: 8234 index row versions were removed.\n390 index pages have been deleted, 322 are currently reusable.\nCPU 0.22s/0.82u sec elapsed 658.90 sec.\nINFO: index \"idx_big_table_3\" now contains 496310 row versions in 2536\npages\nDETAIL: 1719 index row versions were removed.\n501 index pages have been deleted, 501 are currently reusable.\nCPU 0.05s/0.19u sec elapsed 36.78 sec.\nINFO: index \"idx_big_table_4\" now contains 494804 row versions in 2537\npages\nDETAIL: 1716 index row versions were removed.\n497 index pages have been deleted, 497 are currently reusable.\nCPU 0.02s/0.18u sec elapsed 36.32 sec.\nINFO: \"big_table\": removed 2795961 row versions in 36659 pages\nDETAIL: CPU 1.04s/0.60u sec elapsed 253.21 sec.\nINFO: index \"pk_big_table\" now contains 209952007 row versions in\n1069780 pages\nDETAIL: 2791879 index row versions were removed.\n140136 index pages have been deleted, 80292 are currently 
reusable.\nCPU 9.00s/15.42u sec elapsed 14884.36 sec.\nINFO: index \"idx_big_table_1\" now contains 209966255 row versions in\n740109 pages\nDETAIL: 2791879 index row versions were removed.\n102429 index pages have been deleted, 58476 are currently reusable.\nCPU 7.78s/21.55u sec elapsed 11868.99 sec.\nINFO: index \"idx_big_table_2\" now contains 16772692 row versions in\n55831 pages\nDETAIL: 107 index row versions were removed.\n391 index pages have been deleted, 322 are currently reusable.\nCPU 0.29s/0.94u sec elapsed 804.51 sec.\nINFO: index \"idx_big_table_3\" now contains 506561 row versions in 2536\npages\nDETAIL: 1741 index row versions were removed.\n460 index pages have been deleted, 460 are currently reusable.\nCPU 0.06s/0.20u sec elapsed 70.12 sec.\nINFO: index \"idx_big_table_4\" now contains 505063 row versions in 2537\npages\nDETAIL: 1741 index row versions were removed.\n453 index pages have been deleted, 453 are currently reusable.\nCPU 0.07s/0.15u sec elapsed 67.72 sec.\nINFO: \"big_table\": removed 2795955 row versions in 33272 pages\nDETAIL: CPU 0.95s/0.30u sec elapsed 436.58 sec.\nINFO: index \"pk_big_table\" now contains 207177253 row versions in\n1069780 pages\nDETAIL: 2793516 index row versions were removed.\n153135 index pages have been deleted, 80210 are currently reusable.\nCPU 9.73s/16.60u sec elapsed 16165.25 sec.\nINFO: index \"idx_big_table_1\" now contains 207181989 row versions in\n740109 pages\nDETAIL: 2793516 index row versions were removed.\n112028 index pages have been deleted, 58454 are currently reusable.\nCPU 6.60s/19.69u sec elapsed 10805.05 sec.\nINFO: index \"idx_big_table_2\" now contains 16772703 row versions in\n55831 pages\nDETAIL: 16 index row versions were removed.\n391 index pages have been deleted, 322 are currently reusable.\nCPU 0.38s/1.10u sec elapsed 618.92 sec.\nINFO: index \"idx_big_table_3\" now contains 508312 row versions in 2536\npages\nDETAIL: 1860 index row versions were removed.\n447 index pages have been deleted, 447 are currently reusable.\nCPU 0.05s/0.15u sec elapsed 39.21 sec.\nINFO: index \"idx_big_table_4\" now contains 506796 row versions in 2537\npages\nDETAIL: 1860 index row versions were removed.\n441 index pages have been deleted, 441 are currently reusable.\nCPU 0.06s/0.16u sec elapsed 37.47 sec.\nINFO: \"big_table\": removed 2796014 row versions in 33014 pages\nDETAIL: CPU 0.64s/0.22u sec elapsed 231.78 sec.\nINFO: index \"pk_big_table\" now contains 204387243 row versions in\n1069780 pages\nDETAIL: 2795393 index row versions were removed.\n166053 index pages have been deleted, 80186 are currently reusable.\nCPU 10.27s/19.48u sec elapsed 14750.33 sec.\nINFO: index \"idx_big_table_1\" now contains 204393784 row versions in\n740109 pages\nDETAIL: 2795393 index row versions were removed.\n121640 index pages have been deleted, 58403 are currently reusable.\nCPU 7.23s/19.34u sec elapsed 10932.43 sec.\nINFO: index \"idx_big_table_2\" now contains 16772967 row versions in\n55831 pages\nDETAIL: 7 index row versions were removed.\n389 index pages have been deleted, 320 are currently reusable.\nCPU 0.32s/0.85u sec elapsed 744.28 sec.\nINFO: index \"idx_big_table_3\" now contains 513406 row versions in 2536\npages\nDETAIL: 507 index row versions were removed.\n429 index pages have been deleted, 429 are currently reusable.\nCPU 0.04s/0.16u sec elapsed 47.37 sec.\nINFO: index \"idx_big_table_4\" now contains 511904 row versions in 2537\npages\nDETAIL: 507 index row versions were removed.\n422 index pages have been deleted, 422 
are currently reusable.\nCPU 0.06s/0.14u sec elapsed 44.98 sec.\nINFO: \"big_table\": removed 2795974 row versions in 32926 pages\nDETAIL: CPU 1.14s/0.36u sec elapsed 287.30 sec.\n\n\nNow the question:\n\nI wonder why the repeated infos about all the steps ? Is vacuum in some\nkind of loop here ?\n\n\nNow the long story and why the long vacuum is a problem for me:\n\nI have a postgres 8.1.3 (actually it's a non-released CVS version from\nthe 8.1 stable branch somewhere after 8.1.3 was released) installation\nwhere I have a quite big table which is also frequently updated. \n\nThe big problem is that I can't run vacuum on it, because it won't\nfinish in the maintenance time window I can allocate for it.\n\nI would let vacuum run on it as long as it's finished, but then I get a\nhuge performance hit on other tables, which are heavily\ninserted/deleted, and must be vacuumed very frequently.\nThe long running vacuum on the big table will prevent effective vacuum\non those, they will get quickly too big, the system will slow down, the\nbig vacuum will be even slower, and so on.\nOne of these (normally small) tables is particularly a problem, as it\nhas a query running on it frequently (several times per second) which\nrequires a full table scan and can't be accelerated by any indexing. And\ntop that with the fact that it has a high insert/delete ratio when the\nsystem is busy... I can't afford any long running transaction in busy\ntimes on this system.\n\nSo, after a clean dump/reload of this system coupled with migration to\n8.1, I thought that vacuuming the big table will be possible over night\n(it has about 200 million records, and ~2-3 million updates/~1 million\ninserts/few thousand deletes per day).\nBut to my surprise it was not enough, and it affected very negatively\nother maintenance tasks too, so I had to cancel nightly vacuuming for\nit.\n\nSo I scheduled the vacuum over the weekend, when we have only light\nactivity on this system. But it did not finish over the weekend\neither... and again it affected all other activities too much. I had to\nkill the vacuum (on monday) in the last few weeks, as it was stopping\nbusiness.\n\nSo we started to suspect that there is some concurrency problem with\neither our hardware or OS, and moved the server to another machine with\nthe same hardware, same OS (debian linux), all the same settings but a\ndifferent file system (XFS instead of ext3).\n\nWe actually have seen a significant overall performance boost from this\nsimple move... But the vacuum still didn't finish over the weekend, it\njust didn't affect anymore the other tasks, which finished slightly\nslower than when running alone. The vacuum itself wouldn't be a problem\nperformance-wise, except it is a long running transaction, and it\naffects other table's vacuuming schedule, as mentioned above. Business\nhours are coming, and I will have to kill the vacuum again...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Mon, 27 Feb 2006 14:31:33 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "neverending vacuum" }, { "msg_contents": "Csaba Nagy wrote:\n\n> I have a quite big table (about 200 million records, and ~2-3 million\n> updates/~1 million inserts/few thousand deletes per day). I started a\n> vacuum on it on friday evening, and it still runs now (monday\n> afternoon). 
I used \"vacuum verbose\", and the output looks like:\n> \n> [vacuums list all the indexes noting how many tuples it cleaned, then\n> \"restarts\" and lists all the indexes again, then again ... ad nauseam]\n\nWhat happens is this: the vacuum commands scans the heap and notes which\ntuples need to be removed. It needs to remember them in memory, but\nmemory is limited; it uses the maintenance_work_mem GUC setting to\nfigure out how much to use. Within this memory it needs to store the\nTIDs (absolute location) of tuples that need to be deleted. When the\nmemory is filled, it stops scanning the heap and scans the first index,\nlooking for pointers to any of the tuples that were deleted in the heap.\nEventually it finds them all and goes to the next index: scan, delete\npointers. Next index. And so on, until all the indexes are done.\n\nAt this point, the first pass is done. Vacuum must then continue\nscanning the heap for the next set of TIDs, until it finds enough to\nfill maintenance_work_mem. Scan the indexes to clean them. Start\nagain. And again.\n\nSo one very effective way of speeding this process up is giving the\nvacuum process lots of memory, because it will have to do fewer passes\nat each index. How much do you have?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 27 Feb 2006 11:31:53 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: neverending vacuum" }, { "msg_contents": "> So one very effective way of speeding this process up is giving the\n> vacuum process lots of memory, because it will have to do fewer passes\n> at each index. How much do you have?\n\nOK, this is my problem... it is left at default (16 megabyte ?). This\nmust be a mistake in configuration, on other similar boxes I set this to\n262144 (256 megabyte). The box has 4 Gbyte memory.\n\nThanks for the explanation - you were right on the spot, it will likely\nsolve the problem.\n\nCheers,\nCsaba.\n\n\n\n", "msg_date": "Mon, 27 Feb 2006 16:03:40 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: neverending vacuum" } ]
[ { "msg_contents": "I have a table that has only a few records in it at the time, and they\nget deleted every few seconds and new records are inserted. Table never\nhas more than 5-10 records in it.\n\nHowever, I noticed a deteriorating performance in deletes and inserts\non it. So I performed vacuum analyze on it three times (twice in a row,\nand once two days later). In the statistics it says that the table size\nis 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even\nthough the table has only 5-10 rows in it it. I was wondering how can I\nreclaim all this space and improve the performance?\n\nHere are the outputs of my vacuum sessions:\n\n----------------------02/24/06 4:30PM----------------------\nINFO: vacuuming \"incidents.php_sessions\"\nINFO: index \"php_sessions_pkey\" now contains 16 row versions in 17151\npages\nDETAIL: 878643 index row versions were removed.\n16967 index pages have been deleted, 8597 are currently reusable.\nCPU 3.35s/3.67u sec elapsed 25.96 sec.\nINFO: \"php_sessions\": removed 878689 row versions in 107418 pages\nDETAIL: CPU 17.53s/11.23u sec elapsed 88.22 sec.\nINFO: \"php_sessions\": found 878689 removable, 14 nonremovable row\nversions in 110472 pages\nDETAIL: 10 dead row versions cannot be removed yet.\nThere were 87817 unused item pointers.\n0 pages are entirely empty.\nCPU 23.87s/16.57u sec elapsed 124.54 sec.\nINFO: vacuuming \"pg_toast.pg_toast_47206\"\nINFO: index \"pg_toast_47206_index\" now contains 550 row versions in\n11927 pages\nDETAIL: 1415130 index row versions were removed.\n11901 index pages have been deleted, 6522 are currently reusable.\nCPU 1.45s/2.15u sec elapsed 20.62 sec.\nINFO: \"pg_toast_47206\": removed 1415130 row versions in 353787 pages\nDETAIL: CPU 56.92s/32.12u sec elapsed 592.18 sec.\nINFO: \"pg_toast_47206\": found 1415130 removable, 114 nonremovable row\nversions in 353815 pages\nDETAIL: 114 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 87.42s/43.06u sec elapsed 939.62 sec.\nINFO: analyzing \"incidents.php_sessions\"\nINFO: \"php_sessions\": scanned 3000 of 110479 pages, containing 0 live\nrows and 16 dead rows; 0 rows in sample, 0 estimated total rows\n\nTotal query runtime: 1064249 ms.\n\n\n----------------------02/24/06 5:00PM----------------------\nINFO: vacuuming \"incidents.php_sessions\"\nINFO: index \"php_sessions_pkey\" now contains 4 row versions in 17151\npages\nDETAIL: 783 index row versions were removed.\n17137 index pages have been deleted, 17129 are currently reusable.\nCPU 0.31s/0.20u sec elapsed 13.89 sec.\nINFO: \"php_sessions\": removed 784 row versions in 87 pages\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: \"php_sessions\": found 784 removable, 3 nonremovable row versions\nin 110479 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 966202 unused item pointers.\n0 pages are entirely empty.\nCPU 1.21s/0.79u sec elapsed 15.82 sec.\nINFO: vacuuming \"pg_toast.pg_toast_47206\"\nINFO: index \"pg_toast_47206_index\" now contains 310 row versions in\n11927 pages\nDETAIL: 1830 index row versions were removed.\n11922 index pages have been deleted, 11915 are currently reusable.\nCPU 0.18s/0.12u sec elapsed 11.39 sec.\nINFO: \"pg_toast_47206\": removed 1830 row versions in 465 pages\nDETAIL: CPU 0.12s/0.04u sec elapsed 0.25 sec.\nINFO: \"pg_toast_47206\": found 1830 removable, 30 nonremovable row\nversions in 354141 pages\nDETAIL: 20 dead row versions cannot be removed yet.\nThere were 1414680 unused item 
pointers.\n0 pages are entirely empty.\nCPU 16.07s/4.46u sec elapsed 200.87 sec.\nINFO: \"pg_toast_47206\": truncated 354141 to 312 pages\nDETAIL: CPU 8.32s/2.57u sec elapsed 699.85 sec.\nINFO: analyzing \"incidents.php_sessions\"\nINFO: \"php_sessions\": scanned 3000 of 110479 pages, containing 0 live\nrows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n\nTotal query runtime: 924084 ms.\n\n\n----------------------02/26/06 9:30AM----------------------\nINFO: vacuuming \"incidents.php_sessions\"\nINFO: index \"php_sessions_pkey\" now contains 1 row versions in 17151\npages\nDETAIL: 46336 index row versions were removed.\n17140 index pages have been deleted, 16709 are currently reusable.\nCPU 0.20s/0.18u sec elapsed 13.96 sec.\nINFO: \"php_sessions\": removed 46343 row versions in 4492 pages\nDETAIL: CPU 0.25s/0.31u sec elapsed 2.42 sec.\nINFO: \"php_sessions\": found 46343 removable, 1 nonremovable row\nversions in 110479 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 948998 unused item pointers.\n0 pages are entirely empty.\nCPU 1.07s/0.90u sec elapsed 17.45 sec.\nINFO: vacuuming \"pg_toast.pg_toast_47206\"\nINFO: index \"pg_toast_47206_index\" now contains 50 row versions in\n11927 pages\nDETAIL: 125250 index row versions were removed.\n11923 index pages have been deleted, 11446 are currently reusable.\nCPU 0.25s/0.12u sec elapsed 11.79 sec.\nINFO: \"pg_toast_47206\": removed 125250 row versions in 31316 pages\nDETAIL: CPU 2.35s/1.79u sec elapsed 15.68 sec.\nINFO: \"pg_toast_47206\": found 125250 removable, 32 nonremovable row\nversions in 31436 pages\nDETAIL: 30 dead row versions cannot be removed yet.\nThere were 456 unused item pointers.\n0 pages are entirely empty.\nCPU 4.39s/2.20u sec elapsed 37.92 sec.\nINFO: analyzing \"incidents.php_sessions\"\nINFO: \"php_sessions\": scanned 3000 of 110479 pages, containing 0 live\nrows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n\nTotal query runtime: 55517 ms.\n\n", "msg_date": "27 Feb 2006 06:48:02 -0800", "msg_from": "\"Nik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large Table With Only a Few Rows" }, { "msg_contents": "\"Nik\" <[email protected]> writes:\n> I have a table that has only a few records in it at the time, and they\n> get deleted every few seconds and new records are inserted. Table never\n> has more than 5-10 records in it.\n>\n> However, I noticed a deteriorating performance in deletes and inserts\n> on it. So I performed vacuum analyze on it three times (twice in a row,\n> and once two days later). In the statistics it says that the table size\n> is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even\n> though the table has only 5-10 rows in it it. 
I was wondering how can I\n> reclaim all this space and improve the performance?\n\nYou need to run VACUUM ANALYZE on this table very frequently.\n\nBased on what you describe, \"very frequently\" should be on the order\nof at least once per minute.\n\nSchedule a cron job specifically to vacuum this table, with a cron\nentry like the following:\n\n* * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_database\n\nOf course, you need to bring it back down to size, first.\n\nYou could run CLUSTER on the table to bring it back down to size;\nthat's probably the fastest way...\n\n cluster my_table_pk on my_table;\n\nVACUUM FULL would also do the job, but probably not as quickly.\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/sgml.html\n\"Now they can put you in jail if they *THINK* you're gonna commit a\ncrime. Let me say that again, because it sounds vaguely important\"\n--george carlin\n", "msg_date": "Mon, 27 Feb 2006 12:25:26 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Table With Only a Few Rows" }, { "msg_contents": "On 27/02/06, Chris Browne <[email protected]> wrote:\n>\n> \"Nik\" <[email protected]> writes:\n> > I have a table that has only a few records in it at the time, and they\n> > get deleted every few seconds and new records are inserted. Table never\n> > has more than 5-10 records in it.\n> >\n> > However, I noticed a deteriorating performance in deletes and inserts\n> > on it. So I performed vacuum analyze on it three times (twice in a row,\n> > and once two days later). In the statistics it says that the table size\n> > is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even\n> > though the table has only 5-10 rows in it it. I was wondering how can I\n> > reclaim all this space and improve the performance?\n>\n> You need to run VACUUM ANALYZE on this table very frequently.\n>\n> Based on what you describe, \"very frequently\" should be on the order\n> of at least once per minute.\n>\n> Schedule a cron job specifically to vacuum this table, with a cron\n> entry like the following:\n>\n> * * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_database\n>\n> Of course, you need to bring it back down to size, first.\n>\n> You could run CLUSTER on the table to bring it back down to size;\n> that's probably the fastest way...\n>\n> cluster my_table_pk on my_table;\n>\n> VACUUM FULL would also do the job, but probably not as quickly.\n> --\n> (reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\n> http://cbbrowne.com/info/sgml.html\n> \"Now they can put you in jail if they *THINK* you're gonna commit a\n> crime. 
Let me say that again, because it sounds vaguely important\"\n> --george carlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nYou probably want to do one or two other things.\n\n1> Switch on autovacuum.\n\n2> improve the setting of max_fsm_pages in your postgresql.conf a restart\nwill be required.\n\nif you do a \"vacuum verbose;\" the last couple of lines should tell you how\nmuch free space is about against how much free space the database can\nactuall remember to use.\n\nINFO: free space map contains 5464 pages in 303 relations\nDETAIL: A total of 9760 page slots are in use (including overhead).\n9760 page slots are required to track all free space.\nCurrent limits are: 40000 page slots, 1000 relations, using 299 KB.\n\nif the required page slots (9760 in my case) goes above the current limit\n(40000 in my case) you will need to do a vacuum full to reclaim the free\nspace. (cluster of the relevent tables may work.\n\nIf you run Vacuum Verbose regullally you can check you are vacuuming often\nenough and that your free space map is big enough to hold your free space.\n\nPeter Childs\n\nOn 27/02/06, Chris Browne <[email protected]> wrote:\n\"Nik\" <[email protected]> writes:> I have a table that has only a few records in it at the time, and they> get deleted every few seconds and new records are inserted. Table never\n> has more than 5-10 records in it.>> However, I noticed a deteriorating performance in deletes and inserts> on it. So I performed vacuum analyze on it three times (twice in a row,> and once two days later). In the statistics it says that the table size\n> is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even> though the table has only 5-10 rows in it it. I was wondering how can I> reclaim all this space and improve the performance?\nYou need to run VACUUM ANALYZE on this table very frequently.Based on what you describe, \"very frequently\" should be on the orderof at least once per minute.Schedule a cron job specifically to vacuum this table, with a cron\nentry like the following:* * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_databaseOf course, you need to bring it back down to size, first.You could run CLUSTER on the table to bring it back down to size;\nthat's probably the fastest way...   cluster my_table_pk on my_table;VACUUM FULL would also do the job, but probably not as quickly.--(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/sgml.html\"Now they can put you in jail if they *THINK* you're gonna commit acrime.  Let me say that again, because it sounds vaguely important\"\n--george carlin---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not\n       match\nYou probably want to do one or two other things.\n\n1> Switch on autovacuum.\n\n2> improve the setting of max_fsm_pages in your postgresql.conf a restart will be required.\n\nif you do a \"vacuum verbose;\" the last couple of lines should tell you\nhow much free space is about against how much free space the database\ncan actuall remember to use. 
\n\nINFO:  free space map contains 5464 pages in 303 relations\nDETAIL:  A total of 9760 page slots are in use (including overhead).\n9760 page slots are required to track all free space.\nCurrent limits are:  40000 page slots, 1000 relations, using 299 KB.\n\nif the required page slots (9760 in my case) goes above the current\nlimit (40000 in my case) you will need to do a vacuum full to reclaim\nthe free space. (cluster of the relevent tables may work.\n\nIf you run Vacuum Verbose regullally you can check you are vacuuming\noften enough and that your free space map is big enough to hold your\nfree space.\n\nPeter Childs", "msg_date": "Mon, 27 Feb 2006 18:34:12 +0000", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Table With Only a Few Rows" }, { "msg_contents": "On Mon, Feb 27, 2006 at 06:48:02 -0800,\n Nik <[email protected]> wrote:\n> I have a table that has only a few records in it at the time, and they\n> get deleted every few seconds and new records are inserted. Table never\n> has more than 5-10 records in it.\n> \n> However, I noticed a deteriorating performance in deletes and inserts\n> on it. So I performed vacuum analyze on it three times (twice in a row,\n> and once two days later). In the statistics it says that the table size\n> is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even\n> though the table has only 5-10 rows in it it. I was wondering how can I\n> reclaim all this space and improve the performance?\n\nYou can use VACUUM FULL to recover the space. You should be running normal\nVACUUMs on that table every minute or two, not once a day.\n", "msg_date": "Wed, 1 Mar 2006 00:55:56 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Table With Only a Few Rows" } ]
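A sketch combining the suggestions above for this case: one CLUSTER to rebuild the bloated table at its real size (it takes an exclusive lock, so pick a quiet moment), then a frequent plain vacuum to keep it small. The index and table names are taken from the vacuum output earlier in the thread; the database name in the cron line is a placeholder:

    -- one-off: shrink the table (pre-8.3 CLUSTER syntax: index name first)
    CLUSTER php_sessions_pkey ON php_sessions;
    ANALYZE php_sessions;

    -- then keep it small; crontab entry as suggested above, run every minute:
    -- * * * * * /usr/local/bin/vacuumdb -z -t php_sessions -p 5432 my_database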
[ { "msg_contents": "Hi All,\nI ' m using the postgresql datbase to stores cookies. Theses cookies \nbecome invalid after 30 mn and have to be deleted. i have defined a \nprocedure that will\ndelete all invalid cookies, but i don't know how to call it in loop way \n(for example each hour).\nI think that it possible because this behaivor is the same of the \nautovaccum procedure that handle the vaccum process every time (60s in \ndefault way).\nAfter reading the documentation, it seems that triggers can't handle \nthis stuff .\nhow can i resolve the problem ?\n\n\nThanks", "msg_date": "Mon, 27 Feb 2006 18:09:26 +0100", "msg_from": "Jamal Ghaffour <[email protected]>", "msg_from_op": true, "msg_subject": "The trigger can be specified to fire on time condition?" }, { "msg_contents": "Jamal Ghaffour wrote:\n> Hi All,\n> I ' m using the postgresql datbase to stores cookies. Theses cookies \n> become invalid after 30 mn and have to be deleted. i have defined a \n> procedure that will\n> delete all invalid cookies, but i don't know how to call it in loop way \n> (for example each hour).\n> I think that it possible because this behaivor is the same of the \n> autovaccum procedure that handle the vaccum process every time (60s in \n> default way).\n> After reading the documentation, it seems that triggers can't handle \n> this stuff .\n> how can i resolve the problem ?\n\nUse your system's crontab! (On Windows, \"scheduled tasks\" or whatever).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 27 Feb 2006 14:10:07 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The trigger can be specified to fire on time condition?" }, { "msg_contents": "[email protected] (Jamal Ghaffour) writes:\n> Hi All, I ' m using the postgresql datbase to stores cookies. Theses\n> cookies become invalid after 30 mn and have to be deleted. i have\n> defined a procedure that will delete all invalid cookies, but i\n> don't know how to call it in loop way (for example each hour). I\n> think that it possible because this behaivor is the same of the\n> autovaccum procedure that handle the vaccum process every time (60s\n> in default way). After reading the documentation, it seems that\n> triggers can't handle this stuff . how can i resolve the problem ?\n\nTime-based event scheduling is done using cron, external to the\ndatabase.\n-- \noutput = reverse(\"gro.mca\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/sgml.html\n\"Even in the area of anticompetitive conduct, Microsoft is mainly an\nimitator.\" -- Ralph Nader (1998/11/11)\n", "msg_date": "Mon, 27 Feb 2006 12:30:53 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The trigger can be specified to fire on time condition?" } ]
[ { "msg_contents": "Hi all,\n\nI am facing performance issues even with less than 3000 records, I am\nusing Triggers/SPs in all the tables. What could be the problem.\nAny idea it is good to use triggers w.r.t performance?\n\nRegards,\nJeeva.K\n", "msg_date": "Tue, 28 Feb 2006 09:14:59 +0530", "msg_from": "\"Jeevanandam, Kathirvel (IE10)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: rotate records" }, { "msg_contents": "Jeevanandam, Kathirvel (IE10) schrieb:\n> Hi all,\n> \n> I am facing performance issues even with less than 3000 records, I am\n> using Triggers/SPs in all the tables. What could be the problem.\n> Any idea it is good to use triggers w.r.t performance?\n\nMuch to general. What triggers? (what are they doing, when are\nthey invoked...?). Please provide much greater details with\nyour request or nobody can help.\n\nRegards\nTino\n\nPS: and try not to steal threads\n", "msg_date": "Tue, 28 Feb 2006 06:04:05 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "triggers, performance Was: Re: rotate records" }, { "msg_contents": "On Tue, Feb 28, 2006 at 09:14:59 +0530,\n \"Jeevanandam, Kathirvel (IE10)\" <[email protected]> wrote:\n> Hi all,\n\nPlease don't hijack existing threads to start new ones. This can cause\npeople to miss your question and messes up the archives.\n\nPerformance questions should generally be posted to the performance list.\nI have redirected followups to there.\n\n> \n> I am facing performance issues even with less than 3000 records, I am\n> using Triggers/SPs in all the tables. What could be the problem.\n> Any idea it is good to use triggers w.r.t performance?\n\nA common cause of this kind of thing is not running vacuum often enough\nleaving you with a lot of dead tuples.\n\nYou should probably start by doing a vacuum full analyse and then showing\nthe list some problem query sources along with explain analyse output\nfor them.\n\n> \n> Regards,\n> Jeeva.K\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n", "msg_date": "Mon, 27 Feb 2006 23:08:31 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: rotate records" } ]
[ { "msg_contents": "I am using triggers for all the events (insert,delete,update) please\nfind the details below.\n\ntrg_delpointtable BEFORE DELETE ON pointtable FOR EACH ROW EXECUTE\nPROCEDURE pp_delpointtable()\n\ntrg_insdelpoints AFTER DELETE ON pointtable FOR EACH ROW EXECUTE\nPROCEDURE pp_insdelpoints()\n\ntrgins_pointtable AFTER INSERT ON pointtable FOR EACH ROW EXECUTE\nPROCEDURE pp_inspointtable()\n\ntrupd_pointtable AFTER UPDATE ON pointtable FOR EACH ROW EXECUTE\nPROCEDURE pp_updpointtable()\n\n\nBasically, this each trigger modifies the content of other dependent\ntables.\n\nBest Regards,\nJeeva.K\n\n-----Original Message-----\nFrom: Tino Wildenhain [mailto:[email protected]] \nSent: Tuesday, February 28, 2006 10:34 AM\nTo: Jeevanandam, Kathirvel (IE10)\nCc: [email protected]\nSubject: triggers, performance Was: Re: [GENERAL] rotate records\n\nJeevanandam, Kathirvel (IE10) schrieb:\n> Hi all,\n> \n> I am facing performance issues even with less than 3000 records, I am\n> using Triggers/SPs in all the tables. What could be the problem.\n> Any idea it is good to use triggers w.r.t performance?\n\nMuch to general. What triggers? (what are they doing, when are\nthey invoked...?). Please provide much greater details with\nyour request or nobody can help.\n\nRegards\nTino\n\nPS: and try not to steal threads\n", "msg_date": "Tue, 28 Feb 2006 11:24:34 +0530", "msg_from": "\"Jeevanandam, Kathirvel (IE10)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: triggers, performance Was: Re: [GENERAL] rotate records" }, { "msg_contents": "And what do those functions do? And do their options trigger other\ntriggers? How about an EXPLAIN ANALYZE from a problem query, too.\n\nOn Tue, Feb 28, 2006 at 11:24:34AM +0530, Jeevanandam, Kathirvel (IE10) wrote:\n> I am using triggers for all the events (insert,delete,update) please\n> find the details below.\n> \n> trg_delpointtable BEFORE DELETE ON pointtable FOR EACH ROW EXECUTE\n> PROCEDURE pp_delpointtable()\n> \n> trg_insdelpoints AFTER DELETE ON pointtable FOR EACH ROW EXECUTE\n> PROCEDURE pp_insdelpoints()\n> \n> trgins_pointtable AFTER INSERT ON pointtable FOR EACH ROW EXECUTE\n> PROCEDURE pp_inspointtable()\n> \n> trupd_pointtable AFTER UPDATE ON pointtable FOR EACH ROW EXECUTE\n> PROCEDURE pp_updpointtable()\n> \n> \n> Basically, this each trigger modifies the content of other dependent\n> tables.\n> \n> Best Regards,\n> Jeeva.K\n> \n> -----Original Message-----\n> From: Tino Wildenhain [mailto:[email protected]] \n> Sent: Tuesday, February 28, 2006 10:34 AM\n> To: Jeevanandam, Kathirvel (IE10)\n> Cc: [email protected]\n> Subject: triggers, performance Was: Re: [GENERAL] rotate records\n> \n> Jeevanandam, Kathirvel (IE10) schrieb:\n> > Hi all,\n> > \n> > I am facing performance issues even with less than 3000 records, I am\n> > using Triggers/SPs in all the tables. What could be the problem.\n> > Any idea it is good to use triggers w.r.t performance?\n> \n> Much to general. What triggers? (what are they doing, when are\n> they invoked...?). Please provide much greater details with\n> your request or nobody can help.\n> \n> Regards\n> Tino\n> \n> PS: and try not to steal threads\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 1 Mar 2006 12:48:36 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: triggers, performance Was: Re: [GENERAL] rotate records" } ]
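One way to produce the EXPLAIN ANALYZE output requested above without disturbing the data is to run the problem statement inside a transaction and roll it back; if memory serves, 8.1 also reports the time spent in each trigger, which is exactly what is needed to see whether the pp_* functions dominate. The WHERE clause here is only a placeholder:

    BEGIN;
    EXPLAIN ANALYZE DELETE FROM pointtable WHERE pointid = 1;   -- placeholder condition
    ROLLBACK;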
[ { "msg_contents": "Hi all\n\n I've a question about vacuuming, ...\n\n Vacuum: cleans out obsolete and deleted registers...\n Analyze: update statistics for the planner\n Reindex: rebuild indexes\n\n I think the correct order to perform the database\nmaintenance for performance is:\n\n 1 - Vacuum\n 2 - Reindex\n 3 - Analyze\n\n So the planner is updated with the updated indexes.\n\n\n The autovacuum daemon does vacuum and analyze. Not reindex.\n So, no way to perform it in that order.\n\n What do you think?\n How often do you reindex your tables?\n\n Thx all\n\n\n\n\n\n\n\n\n\n            Hi all\n\n            I've a question about vacuuming, ...\n\n            Vacuum: cleans out obsolete and deleted registers...\n            Analyze:  update statistics for the planner\n            Reindex:  rebuild indexes\n\n            I think the correct order to perform the database maintenance for performance is:\n\n            1 - Vacuum\n            2 - Reindex\n            3 - Analyze\n\n            So the planner is updated with the updated indexes.\n\n\n            The autovacuum daemon does vacuum and analyze. Not reindex.\n            So, no way to perform it in that order.\n\n            What do you think?\n            How often do you reindex your tables?\n\n            Thx all", "msg_date": "Tue, 28 Feb 2006 13:11:05 +0100", "msg_from": "Javier Somoza <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum, analyze and reindex" }, { "msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (Javier Somoza) would write:\n> ����������� Hi all\n> ����������� I've a question about vacuuming, ...\n> ����������� Vacuum: cleans out obsolete and deleted registers...\n> ����������� Analyze:� update statistics for the planner\n> ����������� Reindex:� rebuild indexes\n> ����������� I think the correct order to perform the database maintenance for performance is:\n> ����������� 1 - Vacuum\n> ����������� 2 - Reindex\n> ����������� 3 - Analyze\n> ����������� So the planner is updated with the updated indexes.\n\nThere is a misunderstanding there. The planner isn't aware of\n\"updates\" to the indexes; it is aware of how they are defined.\n\nANALYZE doesn't calculate index-specific things; it calculates\nstatistical distributions for the contents of each column. As soon as\nit runs, the planner will be better able to choose from the available\nindexes. If you add another index, the planner will, without another\nANALYZE, be able to choose that index, if it is useful to do so, based\non the existing statistical distributions.\n\n> ����������� The autovacuum daemon does vacuum and analyze. Not reindex.\n> ����������� So, no way to perform it in that order.\n> ����������� What do you think?\n> ����������� How often do you reindex your tables?\n\nIf tables are being vacuumed frequently enough, the answer to that can\nbe \"almost never.\"\n\nBack in the days of 7.2, there were conditions where indexes could\nbloat mercilessly, so that heavily updated tables needed a reindex\nevery so often. But if you're running a reasonably modern version,\nthat shouldn't be necessary.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').\nhttp://linuxfinances.info/info/slony.html\nRules of the Evil Overlord #219. \"I will be selective in the hiring of\nassassins. 
Anyone who attempts to strike down the hero the first\ninstant his back is turned will not even be considered for the job.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 28 Feb 2006 08:34:01 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum, analyze and reindex" } ]
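A small sketch of the point made in the reply above: the planner will consider a newly created index without a fresh ANALYZE, because ANALYZE only stores per-column statistics. Table and column names are illustrative:

    CREATE INDEX idx_items_category ON items (category);
    -- no ANALYZE needed for the planner to consider the new index
    EXPLAIN SELECT * FROM items WHERE category = 1;
    -- run ANALYZE when the column's data distribution has changed,
    -- not because an index was added or rebuilt
    ANALYZE items;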
[ { "msg_contents": "Just a \"wouldn't it be nice if\" sort of feature request. I'm not sure\nhow practical it is.\n\nSomeone in our organization wrote a data fix query, which has sort of\nodd logic, but it does what they need. The problem is that it ran for\n14 hours in a test against a copy of the data. I looked at it and\nfigured it could do better with an extra index. The index took five\nminutes to build, and the run time for the query dropped to five\nminutes. The index is not needed for production, so it was then\ndropped.\n\nIt struck me that it would be outstanding if the planner could\nrecognize this sort of situation, and build a temporary index based on\nthe snapshot of the data visible to the transaction. It seems to me\nthat the obvious downside of this would be the explosion in the number\nof permutations the planner would need to examine -- based not just on\nwhat indexes ARE there, but which ones it could build. At a minimum,\nthere would need to be a cost threshold below which it would not even\nconsider the option. (In this case, as long as the optimizer spent less\nthan 13 hours and 50 minutes considering its options, we would have come\nout ahead.)\n\nI'm not sure the details of this particular incident are that relevant,\nbut I've attached the query and the two plans.\n\n-Kevin", "msg_date": "Tue, 28 Feb 2006 09:44:08 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "temporary indexes" }, { "msg_contents": "On Tue, Feb 28, 2006 at 09:44:08AM -0600, Kevin Grittner wrote:\n> It struck me that it would be outstanding if the planner could\n> recognize this sort of situation, and build a temporary index based on\n> the snapshot of the data visible to the transaction. It seems to me\n> that the obvious downside of this would be the explosion in the number\n> of permutations the planner would need to examine -- based not just on\n> what indexes ARE there, but which ones it could build. At a minimum,\n> there would need to be a cost threshold below which it would not even\n> consider the option. (In this case, as long as the optimizer spent less\n> than 13 hours and 50 minutes considering its options, we would have come\n> out ahead.)\n\nFWIW, Sybase supported something similar a long time ago. It had the\nability to build a temporary 'clustered table' (think index organized\ntable) when there was enough benefit to do so. This is actually\nmuch easier to make happen inside a transaction for us, because we don't\nneed to keep visibility information around. There's probably also some\nindex metadata that could be done away with. Perhaps the materialize\nnode could be made to allow this.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 28 Feb 2006 10:45:15 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: temporary indexes" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> It struck me that it would be outstanding if the planner could\n> recognize this sort of situation, and build a temporary index based on\n> the snapshot of the data visible to the transaction.\n\nI don't think that's an appropriate solution at all. What it looks like\nto me (assuming that explain's estimated row counts are reasonably\non-target) is that the time is all going into the EXISTS subplans. 
The\nreal problem here is that we aren't doing anything to convert correlated\nEXISTS subqueries into some form of join that's smarter than a raw\nnestloop.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Feb 2006 11:52:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] temporary indexes " }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> FWIW, Sybase supported something similar a long time ago. It had the\n> ability to build a temporary 'clustered table' (think index organized\n> table) when there was enough benefit to do so. This is actually\n> much easier to make happen inside a transaction for us, because we don't\n> need to keep visibility information around. There's probably also some\n> index metadata that could be done away with. Perhaps the materialize\n> node could be made to allow this.\n\nHow does what you describe differ from a merge join? Or a hash join,\nif you imagine the temp index as being a hash rather than btree index?\n\nThe issue at hand really has nothing to do with temp indexes, it's with\nthe constrained way that the planner deals with EXISTS subplans. The\nsubplans themselves are cheap enough, even in the poorly-indexed\nvariant, that the planner would certainly never have decided to create\nan index to use for them. The problem only becomes apparent at the next\nlevel up, where those subplans are going to be repeated a huge number of\ntimes ---- but the subplan plan is already chosen and won't be changed.\nSo even if we invented a temp-index facility, it would fail to be\napplied in Kevin's example. The limiting factor is that EXISTS subplans\naren't flattened ... and once that's fixed, I doubt the example would\nneed any new kind of join support.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Feb 2006 12:05:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] temporary indexes " }, { "msg_contents": ">>> On Tue, Feb 28, 2006 at 11:05 am, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> \n> The issue at hand really has nothing to do with temp indexes, it's\nwith\n> the constrained way that the planner deals with EXISTS subplans.\n\nYet when the index exists, the query is optimized well.\n\n> The\n> subplans themselves are cheap enough, even in the poorly- indexed\n> variant, that the planner would certainly never have decided to\ncreate\n> an index to use for them.\n\nThat depends. If the planner was able to generate hypothetical index\ndescriptions which might be useful, and analyze everything based on\nthose (adding in creation cost, of course) -- why would it not be able\nto come up with the plan which it DID use when the index existed.\n\n> The limiting factor is that EXISTS subplans\n> aren't flattened ... and once that's fixed, I doubt the example\nwould\n> need any new kind of join support.\n\n<digression>\nI'm all for that. So far, we've been going after the low-hanging fruit\nin our use of PostgreSQL. When we get to the main applications, we're\ngoing to be dealing with a lot more in the way of EXISTS clauses. The\nproduct we're moving from usually optimized an IN test the same as the\nlogically equivalent EXISTS test, and where a difference existed, it\nalmost always did better with the EXISTS -- so we encouraged application\nprogrammers to use that form. 
Also, EXISTS works in situations where\nyou need to compare on multiple columns, so it is useful in many\nsituations where EXISTS or MIN/MAX techniques just don't work.\n</digression>\n\nIf fixing this would allow hash or merge techniques to cover this as\nwell as the index did, and that is true in a more general sense (not\njust for this one example), then temporary indexes would clearly not\nhave any value.\n\n-Kevin\n\n\n", "msg_date": "Tue, 28 Feb 2006 11:36:28 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] temporary indexes" }, { "msg_contents": ">>> On Tue, Feb 28, 2006 at 11:36 am, in message\n<[email protected]>, \"Kevin Grittner\"\n> Also, EXISTS works in situations where\n> you need to compare on multiple columns, so it is useful in many\n> situations where EXISTS or MIN/MAX techniques just don't work.\n\nSorry. That should have read:\n\nEXISTS works in situations where\nyou need to compare on multiple columns, so it is useful in many\nsituations where IN or MIN/MAX techniques just don't work.\n\n", "msg_date": "Tue, 28 Feb 2006 11:55:32 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] temporary indexes" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> EXISTS works in situations where\n> you need to compare on multiple columns, so it is useful in many\n> situations where IN or MIN/MAX techniques just don't work.\n\nIN works fine on multiple columns:\n\n\t(foo, bar, baz) IN (SELECT x, y, z FROM ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Feb 2006 13:06:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] temporary indexes " }, { "msg_contents": "On Tue, Feb 28, 2006 at 11:55:32AM -0600, Kevin Grittner wrote:\n>> Also, EXISTS works in situations where\n>> you need to compare on multiple columns, so it is useful in many\n>> situations where EXISTS or MIN/MAX techniques just don't work.\n> Sorry. That should have read:\n> \n> EXISTS works in situations where\n> you need to compare on multiple columns, so it is useful in many\n> situations where IN or MIN/MAX techniques just don't work.\n\nCan't you just do WHERE (foo,bar) IN ( SELECT baz,quux FROM table )? I'm\nquite sure I've done that a number of times.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 28 Feb 2006 19:08:37 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] temporary indexes" }, { "msg_contents": ">>> On Tue, Feb 28, 2006 at 12:06 pm, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> IN works fine on multiple columns:\n> \n> \t(foo, bar, baz) IN (SELECT x, y, z FROM ...)\n\nThanks for pointing that out. I recognize it as valid ANSI/ISO syntax,\nusing a row value constructor list. Unfortunately, row value\nconstructor lists failed to make the portability cut for allowed syntax\nhere, because it was not supported by all candidate products at the\ntime. 
It is still not supported by the product we're moving from, and\nwe'll have about a one year window when our code will need to run on\nboth.\n\n-Kevin\n\n", "msg_date": "Tue, 28 Feb 2006 12:16:50 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] temporary indexes" }, { "msg_contents": "On Tue, Feb 28, 2006 at 11:36:28AM -0600, Kevin Grittner wrote:\n> <digression>\n> I'm all for that. So far, we've been going after the low-hanging fruit\n> in our use of PostgreSQL. When we get to the main applications, we're\n> going to be dealing with a lot more in the way of EXISTS clauses. The\n> product we're moving from usually optimized an IN test the same as the\n> logically equivalent EXISTS test, and where a difference existed, it\n> almost always did better with the EXISTS -- so we encouraged application\n> programmers to use that form. Also, EXISTS works in situations where\n> you need to compare on multiple columns, so it is useful in many\n> situations where EXISTS or MIN/MAX techniques just don't work.\n> </digression>\n\nMaybe it's just the way my twisted mind thinks, but I generally prefer\nusing a JOIN when possible...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 28 Feb 2006 15:02:32 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] temporary indexes" }, { "msg_contents": ">>> On Tue, Feb 28, 2006 at 11:05 am, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> The limiting factor is that EXISTS subplans\n> aren't flattened ... and once that's fixed, I doubt the example\nwould\n> need any new kind of join support.\n\nI rewrote the query to use IN predicates rather than EXISTS predicates,\nand the cost estimates look like this:\n\nEXISTS, no index: 1.6 billion\nEXISTS, with index: 0.023 billion\nIN, no index: 13.7 billion\nIN, with index: 10.6 billion\n\nAt least for the two EXISTS cases, the estimates were roughly accurate.\n These plans were run against the data after the fix, but analyze has\nnot been run since then, so the estimates should be comparable with the\nearlier post.\n\nI'm not used to using the IN construct this way, so maybe someone can\nspot something horribly stupid in how I tried to use it.\n\n-Kevin", "msg_date": "Tue, 28 Feb 2006 15:15:31 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] temporary indexes" }, { "msg_contents": "Kevin Grittner wrote:\n\n> I rewrote the query to use IN predicates rather than EXISTS predicates,\n> and the cost estimates look like this:\n> \n> EXISTS, no index: 1.6 billion\n> EXISTS, with index: 0.023 billion\n> IN, no index: 13.7 billion\n> IN, with index: 10.6 billion\n> \n> At least for the two EXISTS cases, the estimates were roughly accurate.\n> These plans were run against the data after the fix, but analyze has\n> not been run since then, so the estimates should be comparable with the\n> earlier post.\n> \n> I'm not used to using the IN construct this way, so maybe someone can\n> spot something horribly stupid in how I tried to use it.\n\nI will have a look at your queries tomorrow. 
Some general advice (rdbms \nagnostic) on when to use IN and when to use EXISTS taken from \"SQL \nperformance tuning\":\n\n- if the inner table has few rows and the outer has many then IN is \npreferred\n- if however you have a restrictive expression on the outer query you \nshould preferr EXISTS\n- use NOT EXISTS instead of NOT IN (break out early)\n\nregards,\nLukas\n", "msg_date": "Wed, 01 Mar 2006 00:02:55 +0100", "msg_from": "Lukas Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] temporary indexes" }, { "msg_contents": ">>> On Tue, Feb 28, 2006 at 3:02 pm, in message\n<[email protected]>, \"Jim C. Nasby\"\n<[email protected]>\nwrote: \n> \n> Maybe it's just the way my twisted mind thinks, but I generally\nprefer\n> using a JOIN when possible...\n\nDefinitely. But sometimes you don't want one row from a table for each\nqualifying row in another table, you want one row from the table if one\nor more qualifying rows exist in the other table. Those are the cases\nin question here. Don't suggest that I just let the duplicates happen\nand use DISTINCT, that is much more prone to logic errors in complex\nqueries, and typically optimizes worse.\n\n-Kevin\n\n", "msg_date": "Wed, 01 Mar 2006 09:35:33 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] temporary indexes" } ]
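For reference, the kind of rewrite being benchmarked above, since the actual query was sent as an attachment and is not quoted here; the table and column names are invented for the illustration:

    -- correlated EXISTS form
    SELECT c.case_no
      FROM cases c
     WHERE EXISTS (SELECT 1
                     FROM charges ch
                    WHERE ch.county_no = c.county_no
                      AND ch.case_no   = c.case_no);

    -- equivalent multi-column IN form, using the row constructor shown above
    SELECT c.case_no
      FROM cases c
     WHERE (c.county_no, c.case_no) IN (SELECT ch.county_no, ch.case_no
                                          FROM charges ch);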
[ { "msg_contents": "\nI have a table with a few small numeric fields and several text fields, on \npg. 8.1.2.\n\nThe numeric fields are used for searching (category_id, price, etc).\nThe text fields are just a description of the item, comments, email \naddress, telephone, etc.\n\nSo, in order to speed up requests which need a full table scan, I wanted \nto put the text fields in another table, and use a view to make it look \nlike nothing happened. Also, the small table used for searching is a lot \nmore likely to fit in RAM than the big table with all the text which is \nonly used for display.\n\nHowever the query plan for the view is sometimes very bad (see below)\n\nHere is a simplification of my schema with only 2 columns :\n\nCREATE TABLE items (\n id SERIAL PRIMARY KEY,\n price FLOAT NULL,\n category INTEGER NOT NULL,\n description TEXT\n);\n\nCREATE TABLE items_data (\n id SERIAL PRIMARY KEY,\n price FLOAT NULL,\n category INTEGER NOT NULL\n);\n\nCREATE TABLE items_desc (\n id INTEGER NOT NULL REFERENCES items_data(id) ON DELETE CASCADE,\n PRIMARY KEY (id ),\n description TEXT\n);\n\nINSERT INTO items about 100K rows\n\nINSERT INTO items_data (id,price,category) SELECT id,price,category FROM \nitems;\nINSERT INTO items_desc (id,description) SELECT id,description FROM items;\nalter table items_data ALTER price set statistics 100;\nalter table items_data ALTER category set statistics 100;\nVACUUM ANALYZE;\n\nCREATE VIEW items_view1 AS SELECT a.id, a.price, a.category, b.description \n FROM items_data a, items_desc b WHERE a.id=b.id;\nCREATE VIEW items_view2 AS SELECT a.id, a.price, a.category, b.description \n FROM items_data a LEFT JOIN items_desc b ON a.id=b.id;\n\nNow, an example query :\n\n** From the plain table\n\nEXPLAIN ANALYZE SELECT * FROM items WHERE price IS NOT NULL AND category=1 \nORDER BY price DESC LIMIT 10;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10308.21..10308.23 rows=10 width=229) (actual \ntime=391.373..391.379 rows=10 loops=1)\n -> Sort (cost=10308.21..10409.37 rows=40466 width=229) (actual \ntime=391.371..391.375 rows=10 loops=1)\n Sort Key: price\n -> Seq Scan on items (cost=0.00..4549.57 rows=40466 width=229) \n(actual time=0.652..91.125 rows=42845 loops=1)\n Filter: ((price IS NOT NULL) AND (category = 1))\n Total runtime: 399.511 ms\n\n** From the data only table (no descriptions)\n\nEXPLAIN ANALYZE SELECT * FROM items_data WHERE price IS NOT NULL AND \ncategory=1 ORDER BY price DESC LIMIT 10;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5250.92..5250.95 rows=10 width=16) (actual \ntime=275.765..275.769 rows=10 loops=1)\n -> Sort (cost=5250.92..5357.83 rows=42763 width=16) (actual \ntime=275.763..275.766 rows=10 loops=1)\n Sort Key: price\n -> Seq Scan on items_data (cost=0.00..1961.58 rows=42763 \nwidth=16) (actual time=0.411..57.610 rows=42845 loops=1)\n Filter: ((price IS NOT NULL) AND (category = 1))\n Total runtime: 278.023 ms\n\nIt is faster to access the smaller table. Note that I only added the \ndescription column in this example. With all the other columns like \ntelephone, email, etc of my production table, which are used for display \nonly and not for searching, it takes about 1.2 seconds, simply because the \ntable is a lot larger (yes, it fits in RAM... 
for now).\n\nNow, let's check out the 2 views : the plans are exactly the same\n\nEXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND \ncategory=1 ORDER BY price DESC LIMIT 10;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=13827.38..13827.41 rows=10 width=222) (actual \ntime=584.704..584.712 rows=10 loops=1)\n -> Sort (cost=13827.38..13934.29 rows=42763 width=222) (actual \ntime=584.703..584.709 rows=10 loops=1)\n Sort Key: a.price\n -> Merge Left Join (cost=0.00..7808.02 rows=42763 width=222) \n(actual time=1.708..285.663 rows=42845 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using items_data_pkey on items_data a \n(cost=0.00..2439.74 rows=42763 width=16) (actual time=0.692..86.330 \nrows=42845 loops=1)\n Filter: ((price IS NOT NULL) AND (category = 1))\n -> Index Scan using items_desc_pkey on items_desc b \n(cost=0.00..4585.83 rows=99166 width=210) (actual time=0.038..104.957 \nrows=99165 loops=1)\n Total runtime: 593.068 ms\n\nWow. This is a lot slower because it does the big join BEFORE applying the \nsort.\n\nHere is the plain query generated by the view :\nSELECT a.id, a.price, a.category, b.description FROM items_data a LEFT \nJOIN items_desc b ON a.id=b.id WHERE price IS NOT NULL AND category=1 \nORDER BY price DESC LIMIT 10;\n\nI would have expected the planner to rewrite it like this :\n\nEXPLAIN ANALYZE SELECT foo.*, b.description FROM (SELECT * FROM items_data \na WHERE price IS NOT NULL AND category=1 ORDER BY price DESC LIMIT 10) AS \nfoo LEFT JOIN items_desc b ON foo.id=b.id ORDER BY price DESC LIMIT 10;\n\nThis query should be equivalent to the view with LEFT JOIN. I am aware it \nis not equivalent to the view with a simple join.\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5250.92..5281.31 rows=10 width=222) (actual \ntime=273.300..273.363 rows=10 loops=1)\n -> Nested Loop Left Join (cost=5250.92..5281.31 rows=10 width=222) \n(actual time=273.299..273.361 rows=10 loops=1)\n -> Limit (cost=5250.92..5250.95 rows=10 width=16) (actual \ntime=273.267..273.269 rows=10 loops=1)\n -> Sort (cost=5250.92..5357.83 rows=42763 width=16) \n(actual time=273.266..273.267 rows=10 loops=1)\n Sort Key: a.price\n -> Seq Scan on items_data a (cost=0.00..1961.58 \nrows=42763 width=16) (actual time=0.423..67.149 rows=42845 loops=1)\n Filter: ((price IS NOT NULL) AND (category = 1))\n -> Index Scan using items_desc_pkey on items_desc b \n(cost=0.00..3.01 rows=1 width=210) (actual time=0.006..0.007 rows=1 \nloops=10)\n Index Cond: (\"outer\".id = b.id)\n Total runtime: 275.608 ms\n\nThe second form is faster, but more importantly, it does nearly its IO in \nthe small table, and only fetches the needed 10 rows from the large table. \nThus if the large table is not in disk cache, this is not so bad, which is \nthe whole point of using a view to split this.\n\nWith indexes, fast plans are picked, but they all perform the join before \ndoing the sort+limit. Only if there is an index on the \"ORDER BY\" column, \nit is used. And bitmap index scan also comes in to save the day (I love \nbitmap index scan).\n\nHowever, I will have a lot of searchable columns, and ORDER BY options. \nIdeally I would like to create a few indexes for the common searches and \norder-by's. 
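For the most common case that would be something along these lines (just a sketch, using the example schema above; the index name is arbitrary):\n\nCREATE INDEX items_data_category_price ON items_data (category, price);\n\n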
I would prefer not to create about 15 indexes on this table, \nbecause this will slow down updates. Besides, some of the ORDER BY's are \nexpressions.\n\nA seq scan or an index scan of the small table, followed by a sort and \nlimit, then joining to the other table, wouls be more logical.\n\nSuppose I create an index on price and on category :\n\nEXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND \ncategory IN (4,32) ORDER BY price LIMIT 10;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..31.54 rows=10 width=224) (actual time=0.737..0.964 \nrows=10 loops=1)\n -> Nested Loop Left Join (cost=0.00..112594.96 rows=35700 width=224) \n(actual time=0.735..0.958 rows=10 loops=1)\n -> Index Scan using item_data_price on items_data a \n(cost=0.00..4566.76 rows=35700 width=16) (actual time=0.696..0.753 rows=10 \nloops=1)\n Filter: ((price IS NOT NULL) AND ((category = 4) OR \n(category = 32)))\n -> Index Scan using items_desc_pkey on items_desc b \n(cost=0.00..3.01 rows=1 width=212) (actual time=0.018..0.018 rows=1 \nloops=10)\n Index Cond: (\"outer\".id = b.id)\n Total runtime: 0.817 ms\n\nNow, with a subtly different order by :\n\nEXPLAIN ANALYZE SELECT * FROM items_view2 WHERE price IS NOT NULL AND \ncategory IN (4,32) ORDER BY price,category LIMIT 10;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=12931.79..12931.82 rows=10 width=224) (actual \ntime=1121.426..1121.433 rows=10 loops=1)\n -> Sort (cost=12931.79..13021.04 rows=35700 width=224) (actual \ntime=1121.424..1121.428 rows=10 loops=1)\n Sort Key: a.price, a.category\n -> Merge Left Join (cost=0.00..7967.65 rows=35700 width=224) \n(actual time=0.060..530.815 rows=36705 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using items_data_pkey on items_data a \n(cost=0.00..2687.66 rows=35700 width=16) (actual time=0.051..116.995 \nrows=36705 loops=1)\n Filter: ((price IS NOT NULL) AND ((category = 4) OR \n(category = 32)))\n -> Index Scan using items_desc_pkey on items_desc b \n(cost=0.00..4585.83 rows=99166 width=212) (actual time=0.003..205.652 \nrows=95842 loops=1)\n Total runtime: 1128.972 ms\n\nORDER BY price,category disables the use of index for sort, and thus a \nlarge join is performed. 
With the rewritten query :\n\nEXPLAIN ANALYZE SELECT foo.*, b.description FROM (SELECT * FROM items_data \na WHERE price IS NOT NULL AND category IN (4,32) ORDER BY price,category \nDESC LIMIT 10) AS foo LEFT JOIN items_desc b ON foo.id=b.id ORDER BY \nprice,category DESC LIMIT 10;\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=4229.26..4259.64 rows=10 width=224) (actual \ntime=222.353..222.410 rows=10 loops=1)\n -> Nested Loop Left Join (cost=4229.26..4259.64 rows=10 width=224) \n(actual time=222.352..222.405 rows=10 loops=1)\n -> Limit (cost=4229.26..4229.28 rows=10 width=16) (actual \ntime=222.318..222.324 rows=10 loops=1)\n -> Sort (cost=4229.26..4318.51 rows=35700 width=16) \n(actual time=222.317..222.322 rows=10 loops=1)\n Sort Key: a.price, a.category\n -> Bitmap Heap Scan on items_data a \n(cost=239.56..1529.69 rows=35700 width=16) (actual time=6.926..34.018 \nrows=36705 loops=1)\n Recheck Cond: ((category = 4) OR (category = \n32))\n Filter: (price IS NOT NULL)\n -> BitmapOr (cost=239.56..239.56 rows=37875 \nwidth=0) (actual time=6.778..6.778 rows=0 loops=1)\n -> Bitmap Index Scan on item_data_cat \n(cost=0.00..229.61 rows=36460 width=0) (actual time=6.295..6.295 \nrows=36400 loops=1)\n Index Cond: (category = 4)\n -> Bitmap Index Scan on item_data_cat \n(cost=0.00..9.95 rows=1415 width=0) (actual time=0.482..0.482 rows=1340 \nloops=1)\n Index Cond: (category = 32)\n -> Index Scan using items_desc_pkey on items_desc b \n(cost=0.00..3.01 rows=1 width=212) (actual time=0.006..0.006 rows=1 \nloops=10)\n Index Cond: (\"outer\".id = b.id)\n Total runtime: 224.476 ms\n\nIt is not very fast (the sort takes most of the time), but still is a lot \nfaster !\n\nNow, what should I do ?...\n\n\n\n\n\n", "msg_date": "Wed, 01 Mar 2006 15:56:25 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Bad plan on a view" }, { "msg_contents": "PFC <[email protected]> writes:\n> So, in order to speed up requests which need a full table scan, I wanted \n> to put the text fields in another table, and use a view to make it look \n> like nothing happened. Also, the small table used for searching is a lot \n> more likely to fit in RAM than the big table with all the text which is \n> only used for display.\n\nAren't you going to a lot of work to reinvent something that TOAST\nalready does for you? (At least, in the cases where the text fields\nare wide enough that it really matters.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Mar 2006 10:16:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan on a view " }, { "msg_contents": "\n> Aren't you going to a lot of work to reinvent something that TOAST\n> already does for you? (At least, in the cases where the text fields\n> are wide enough that it really matters.)\n\n\tI know. But I have several text fields in the 20 to 200 characters, which \nis too small for toast, but large enough to make up about 90% of the table \nsize, which makes it problematic RAM-wise, especially since it's gonna \ngrow. 
Now, if I had 1 big text field, it would be TOASTed and I would be \nhappy ;)\n\n", "msg_date": "Wed, 01 Mar 2006 16:43:53 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan on a view " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> PFC <[email protected]> writes:\n> > So, in order to speed up requests which need a full table scan, I wanted \n> > to put the text fields in another table, and use a view to make it look \n> > like nothing happened. Also, the small table used for searching is a lot \n> > more likely to fit in RAM than the big table with all the text which is \n> > only used for display.\n> \n> Aren't you going to a lot of work to reinvent something that TOAST\n> already does for you? (At least, in the cases where the text fields\n> are wide enough that it really matters.)\n\nI think this is a fairly common data modelling trick actually. And it's not a\nterribly large amount of work either.\n\nWhile TOAST has a similar goal I don't think it has enough AI to completely\nreplace this manual process. It suffers in a number of use cases:\n\n1) When you have a large number of moderate sized text fields instead of a\n single very large text field. This is probably the case here.\n\n2) When you know exactly which fields you'll be searching on and which you\n won't be. Often many speed-sensitive queries don't need to access the\n extended information at all.\n\n Instead of making the decision on a per-record basis you can *always* move\n the data to the other table saving even more space even in cases where\n you're gaining very little per record. In total across the entire scan you\n still gain a lot being able to scan just the dense integer fields.\n\n\nAlso, is the optimizer capable of coming up with merge join type plans for\nTOAST tables when necessary?\n\n\n-- \ngreg\n\n", "msg_date": "01 Mar 2006 11:04:47 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan on a view" }, { "msg_contents": "\n\n> While TOAST has a similar goal I don't think it has enough AI to \n> completely\n> replace this manual process. It suffers in a number of use cases:\n>\n> 1) When you have a large number of moderate sized text fields instead of \n> a single very large text field. This is probably the case here.\n\n\tExactly.\n\n> 2) When you know exactly which fields you'll be searching on and which \n> you won't be. Often many speed-sensitive queries don't need to access the\n> extended information at all.\n\n\tAlso true. 
I only need the large fields to display the few rows which \nsurvive the LIMIT...\n\n\tHere's one of the same :\n\tAlthough the subselect has no influence on the WHERE condition, 97021 \nsubselects are computed, and only 10 kept...\n\tThis data also bloats the sort (if the subselect yields a large text \nfield instead of an int, the sort time doubles).\n\nexplain analyze select raw_annonce_id, price, rooms, surface, terrain, \ncontact_telephones, description, (SELECT price FROM raw_annonces r WHERE \nr.id=raw_annonce_id) from annonces where price is not null order by price \ndesc limit 10;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=459568.37..459568.40 rows=10 width=272) (actual \ntime=1967.360..1967.368 rows=10 loops=1)\n -> Sort (cost=459568.37..459812.60 rows=97689 width=272) (actual \ntime=1967.357..1967.361 rows=10 loops=1)\n Sort Key: price\n -> Seq Scan on annonces (cost=0.00..443102.59 rows=97689 \nwidth=272) (actual time=0.059..949.507 rows=97021 loops=1)\n Filter: (price IS NOT NULL)\n SubPlan\n -> Index Scan using raw_annonces_pkey on raw_annonces r \n(cost=0.00..4.46 rows=1 width=8) (actual time=0.005..0.006 rows=1 \nloops=97021)\n Index Cond: (id = $0)\n Total runtime: 1988.786 ms\n\n\n\n\n\n", "msg_date": "Wed, 01 Mar 2006 17:49:30 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan on a view" }, { "msg_contents": "On Wed, Mar 01, 2006 at 04:43:53PM +0100, PFC wrote:\n> \n> >Aren't you going to a lot of work to reinvent something that TOAST\n> >already does for you? (At least, in the cases where the text fields\n> >are wide enough that it really matters.)\n> \n> \tI know. But I have several text fields in the 20 to 200 characters, \n> \twhich is too small for toast, but large enough to make up about 90% of the \n> table size, which makes it problematic RAM-wise, especially since it's \n> gonna grow. Now, if I had 1 big text field, it would be TOASTed and I \n> would be happy ;)\n\nCases like this are why I really wish we had the ability to specify\nsomething other than BLKSZ/4 as when to trigger TOAST. In many cases the\ntext field is seldom refered to, so getting it out of the main heap is a\nbig win.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 1 Mar 2006 12:51:15 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan on a view" } ]
[ { "msg_contents": "Hi all,\n\n \n\nI have to provide a pretty standard query that should return every row\nwhere the NAME attribute begins with a specific string. The type of the\nNAME column is varchar. I do have an index for this column. One would\nthink that Postgres will use the index to look up the matches, but\napparently that is not the case. It performs a full table scan. My\nquery looks something like this:\n\n \n\nSELECT * FROM table WHERE name LIKE 'smith%';\n\n \n\nDoes anyone know a way to \"force\" the optimizer to utilize the index? Is\nthere perhaps another way of doing this?\n\n \n\nThanks for the help!\n\nJozsef\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nI have to provide a pretty standard query that should return\nevery row where the NAME attribute begins with a specific string. The type of\nthe NAME column is varchar. I do have an index for this column. One would think\nthat Postgres will use the index to look up the matches, but apparently that is\nnot the case. It performs a full table scan.  My query looks something\nlike this:\n \nSELECT * FROM table WHERE name LIKE ‘smith%’;\n \nDoes anyone know a way to “force” the optimizer\nto utilize the index? Is there perhaps another way of doing this?\n \nThanks for the help!\nJozsef", "msg_date": "Thu, 2 Mar 2006 18:15:36 -0600", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Like 'name%' is not using index" }, { "msg_contents": "Jozsef Szalay wrote:\n> Hi all,\n> \n> \n> \n> I have to provide a pretty standard query that should return every row \n> where the NAME attribute begins with a specific string. The type of the \n> NAME column is varchar. I do have an index for this column. One would \n> think that Postgres will use the index to look up the matches, but \n> apparently that is not the case. It performs a full table scan. My \n> query looks something like this:\n> \n> \n> \n> SELECT * FROM table WHERE name LIKE �smith%�;\n> \n> \n> \n> Does anyone know a way to �force� the optimizer to utilize the index? Is \n> there perhaps another way of doing this?\n> \n\nCan you provide an EXPLAIN ANALYZE for the query? This will give us a \nhint as to why the index has not been chosen.\n\nThe other standard gotcha is that LIKE will not use an index if your \ncluster is initialized with locale != C. If it is, then you can try \nrecreating the index using something like:\n\nCREATE INDEX table_name ON table (name varchar_pattern_ops);\n\ncheers\n\nMark\n", "msg_date": "Fri, 03 Mar 2006 14:28:40 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Like 'name%' is not using index" }, { "msg_contents": "\n\"Jozsef Szalay\" <[email protected]> writes:\n\n> One would\n> think that Postgres will use the index to look up the matches, but\n> apparently that is not the case. It performs a full table scan. My\n> query looks something like this:\n> \n> SELECT * FROM table WHERE name LIKE 'smith%';\n\nThere are two possible answers here:\n\nFirst, what does this output on your database?\n\ndb=> show lc_collate;\n\nIf it's not \"C\" then the index can't be used. You would have to make a second\nspecial-purpose index specifically for use with LIKE.\n\nSecondly, please send \"explain analyze\" output for your query. 
It will show if\nthe optimizer is simply estimating that the index won't help enough to be\nfaster than the full table scan.\n\n-- \ngreg\n\n", "msg_date": "02 Mar 2006 23:01:39 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Like 'name%' is not using index" } ]
[ { "msg_contents": "The var_char_pattern_ops operator group has made the difference. \n\nThanks for the help!\nJozsef\n\n-----Original Message-----\nFrom: Mark Kirkwood [mailto:[email protected]] \nSent: Thursday, March 02, 2006 7:29 PM\nTo: Jozsef Szalay\nCc: [email protected]\nSubject: Re: [PERFORM] Like 'name%' is not using index\n\nJozsef Szalay wrote:\n> Hi all,\n> \n> \n> \n> I have to provide a pretty standard query that should return every row\n\n> where the NAME attribute begins with a specific string. The type of\nthe \n> NAME column is varchar. I do have an index for this column. One would \n> think that Postgres will use the index to look up the matches, but \n> apparently that is not the case. It performs a full table scan. My \n> query looks something like this:\n> \n> \n> \n> SELECT * FROM table WHERE name LIKE 'smith%';\n> \n> \n> \n> Does anyone know a way to \"force\" the optimizer to utilize the index?\nIs \n> there perhaps another way of doing this?\n> \n\nCan you provide an EXPLAIN ANALYZE for the query? This will give us a \nhint as to why the index has not been chosen.\n\nThe other standard gotcha is that LIKE will not use an index if your \ncluster is initialized with locale != C. If it is, then you can try \nrecreating the index using something like:\n\nCREATE INDEX table_name ON table (name varchar_pattern_ops);\n\ncheers\n\nMark\n\n", "msg_date": "Thu, 2 Mar 2006 20:48:51 -0600", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Like 'name%' is not using index" } ]
[ { "msg_contents": "Hello,\n\nI am doing some query optimizations for one of my clients who runs \nPostgreSQL 8.1.1, and am trying to cut down on the runtime of this \nparticular query as it runs very frequently:\n\nSELECT count(*) FROM test_table_1\n INNER JOIN test_table_2 ON\n (test_table_2.s_id = 13300613 AND test_table_1.id = test_table_2.n_id)\n WHERE now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts\n AND test_table_1.id = test_table_1.g_id;\n\n\nThe related tables are as follows:\n\n Table \"public.test_table_1\"\n Column | Type | Modifiers\n----------+--------------------------+-----------\n id | numeric(20,0) | not null\n g_id | numeric(20,0) |\n start_ts | timestamp with time zone |\n end_ts | timestamp with time zone |\nIndexes:\n \"test_table_1_pkey\" PRIMARY KEY, btree (id)\n \"test_table_1_ts_index\" btree (start_ts, end_ts)\n\n Table \"public.test_table_2\"\n Column | Type | Modifiers\n--------+---------------+-----------\n s_id | numeric(20,0) |\n n_id | numeric(20,0) |\nIndexes:\n \"test_table_2_n_id\" btree (n_id)\n \"test_table_2_s_id\" btree (s_id)\n\n\nWhen I run the query it uses the following plan:\n\n Aggregate (cost=217.17..217.18 rows=1 width=0) (actual time=107.829..107.830 rows=1 loops=1)\n -> Nested Loop (cost=11.09..217.16 rows=1 width=0) (actual time=107.817..107.817 rows=0 loops=1)\n -> Index Scan using test_table_1_ts_index on test_table_1 (cost=0.01..204.05 rows=1 width=22) (actual time=3.677..4.388 rows=155 loops=1)\n Index Cond: ((now() >= start_ts) AND (now() <= end_ts))\n Filter: (id = g_id)\n -> Bitmap Heap Scan on test_table_2 (cost=11.09..13.10 rows=1 width=12) (actual time=0.664..0.664 rows=0 loops=155)\n Recheck Cond: ((test_table_2.s_id = 13300613::numeric) AND (\"outer\".id = test_table_2.n_id))\n -> BitmapAnd (cost=11.09..11.09 rows=1 width=0) (actual time=0.662..0.662 rows=0 loops=155)\n -> Bitmap Index Scan on test_table_2_s_id (cost=0.00..2.48 rows=136 width=0) (actual time=0.014..0.014 rows=1 loops=155)\n Index Cond: (s_id = 13300613::numeric)\n -> Bitmap Index Scan on test_table_2_n_id (cost=0.00..8.36 rows=959 width=0) (actual time=0.645..0.645 rows=891 loops=155)\n Index Cond: (\"outer\".id = test_table_2.n_id)\n Total runtime: 107.947 ms\n\n\nHowever, when I turn off enable_nestloop it runs as follows:\n\n Aggregate (cost=465.86..465.87 rows=1 width=0) (actual time=5.763..5.764 rows=1 loops=1)\n -> Merge Join (cost=465.16..465.86 rows=1 width=0) (actual time=5.752..5.752 rows=0 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".n_id)\n -> Sort (cost=204.06..204.07 rows=1 width=22) (actual time=5.505..5.505 rows=1 loops=1)\n Sort Key: test_table_1.id\n -> Index Scan using test_table_1_ts_index on test_table_1 (cost=0.01..204.05 rows=1 width=22) (actual time=4.458..4.995 rows=155 loops=1)\n Index Cond: ((now() >= start_ts) AND (now() <= end_ts))\n Filter: (id = g_id)\n -> Sort (cost=261.10..261.44 rows=136 width=12) (actual time=0.235..0.236 rows=1 loops=1)\n Sort Key: test_table_2.n_id\n -> Bitmap Heap Scan on test_table_2 (cost=2.48..256.28 rows=136 width=12) (actual time=0.218..0.219 rows=1 loops=1)\n Recheck Cond: (s_id = 13300613::numeric)\n -> Bitmap Index Scan on test_table_2_s_id (cost=0.00..2.48 rows=136 width=0) (actual time=0.168..0.168 rows=1 loops=1)\n Index Cond: (s_id = 13300613::numeric)\n Total runtime: 5.893 ms\n\nAs you can see the total runtime drops from 108ms to 6ms, indicating \nthat it is much better to use a Merge Join rather than a Nested Loop in \nthis case. 
It looks like the planner chooses a Nested Loop because it \nincorrectly estimates the (now() BETWEEN test_table_1.start_ts AND \ntest_table_1.end_ts AND test_table_1.id = test_table_1.g_id) condition \nto return 1 row, whereas in reality it returns 155 rows.\n\nI have set statistics for test_table_1.id and test_table_1.g_id to 1000, \nand have ANALYZEd both tables. This does not seem to make a bit of a \ndifference -- it keeps thinking the criteria will only return 1 row. \nHowever, if I add a boolean column named \"equal_ids\" to test_table_1 \nwith the value (test_table_1.id = test_table_1.g_id), and use that in \nthe query instead of the equality it does make a much better row \nestimate. Essentially:\n\nALTER TABLE test_table_1 ADD equal_ids BOOLEAN;\nUPDATE test_table_1 SET equal_ids = (id = g_id);\nVACUUM FULL test_table_1;\nANALYZE VERBOSE test_table_1;\n\tINFO: analyzing \"public.test_table_1\"\n\tINFO: \"test_table_1\": scanned 83 of 83 pages, containing 8827 live rows and 0 dead rows; 8827 rows in sample, 8827 estimated total rows\n\n\nThe plans listed above already reflect these changes. When I substitute \n\"test_table_1.id = test_table_1.g_id\" with \"test_table_1.equal_ids\" in \nthe query I get the following plan:\n\n Aggregate (cost=469.76..469.77 rows=1 width=0) (actual time=5.711..5.712 rows=1 loops=1)\n -> Merge Join (cost=468.52..469.76 rows=2 width=0) (actual time=5.703..5.703 rows=0 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".n_id)\n -> Sort (cost=207.42..207.69 rows=108 width=11) (actual time=5.462..5.462 rows=1 loops=1)\n Sort Key: test_table_1.id\n -> Index Scan using test_table_1_ts_index on test_table_1 (cost=0.01..203.77 rows=108 width=11) (actual time=4.547..4.984 rows=155 loops=1)\n Index Cond: ((now() >= start_ts) AND (now() <= end_ts))\n Filter: equal_ids\n -> Sort (cost=261.10..261.44 rows=136 width=12) (actual time=0.231..0.232 rows=1 loops=1)\n Sort Key: test_table_2.n_id\n -> Bitmap Heap Scan on test_table_2 (cost=2.48..256.28 rows=136 width=12) (actual time=0.212..0.213 rows=1 loops=1)\n Recheck Cond: (s_id = 13300613::numeric)\n -> Bitmap Index Scan on test_table_2_s_id (cost=0.00..2.48 rows=136 width=0) (actual time=0.177..0.177 rows=1 loops=1)\n Index Cond: (s_id = 13300613::numeric)\n Total runtime: 5.830 ms\n\nThe row estimate (108) is much better in this case.\n\nHere's some information on the data in these tables:\n\nSELECT count(*) FROM test_table_1;\n count\n-------\n 8827\n\nSELECT count(*) FROM test_table_2;\n count\n---------\n 1149533\n\nSELECT equal_ids, count(equal_ids) FROM test_table_1 GROUP BY equal_ids;\n equal_ids | count\n-----------+-------\n f | 281\n t | 8546\n\nSELECT equal_ids, count(equal_ids) FROM test_table_1 WHERE now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts GROUP BY equal_ids;\n equal_ids | count\n-----------+-------\n t | 155\n\nSELECT attname, null_frac, n_distinct FROM pg_stats WHERE tablename = 'test_table_1' AND attname IN ('id', 'g_id', 'equal_ids');\n attname | null_frac | n_distinct\n-----------+-----------+------------\n id | 0 | -1\n g_id | 0 | -0.968166\n equal_ids | 0 | 2\n\n\nAny ideas on how I could go about getting PostgreSQL to use a Merge Join \nwithout having to resort to using the equal_ids column or disabling \nenable_nestloop? 
Let me know if you need any additional info.\n\nThanks!\n\nAlex\n", "msg_date": "Fri, 03 Mar 2006 19:10:52 -0600", "msg_from": "Alex Adriaanse <[email protected]>", "msg_from_op": true, "msg_subject": "Bad row estimates" }, { "msg_contents": "Alex Adriaanse <[email protected]> writes:\n\n> SELECT count(*) FROM test_table_1\n> INNER JOIN test_table_2 ON\n> (test_table_2.s_id = 13300613 AND test_table_1.id = test_table_2.n_id)\n> WHERE now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts\n> AND test_table_1.id = test_table_1.g_id;\n\nI don't know if this is the entire answer but this query is touching on two of\nPostgres's specific difficulties in analyzing statistics:\n\nThe \"now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts can't be\nanswered completely using a btree index. You could try using a GIST index here\nbut I'm not clear how much it would help you (or how much work it would be).\n\nThe \"test_table_1.id = test_table_1.g_id\" clause depends on intercolumn\n\"correlation\" which Postgres doesn't make any attempt at analyzing. That's why\nyou've found that no matter how much you increase the statitics goal it can't\ncome up with a better estimate.\n\nActually the \"now() between ...\" clause also suffers from the inter-column\ndependency issue which is why the estimates for it are off as well.\n\n> However, if I add a boolean column named \"equal_ids\" to test_table_1 with\n> the value (test_table_1.id = test_table_1.g_id), and use that in the query\n> instead of the equality it does make a much better row estimate.\n\nOne thing you could try is making an expression index on that expression. You\ndon't need to actually have a redundant column bloating your table. In 8.1 I\nbelieve Postgres will even calculate statistics for these expression indexes.\n\nIn fact you could go one step further and try a partial index like:\n\n CREATE INDEX eq_start ON test_table (start_ts) WHERE id = g_id\n\nThe ideal combination might be to create a partial GIST index :)\n\n(I don't think the end_ts in the index is buying you much, despite its\nappearance in the Index Cond in the plan.) \n\n-- \ngreg\n\n", "msg_date": "04 Mar 2006 02:01:35 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad row estimates" }, { "msg_contents": "\nGreg Stark <[email protected]> writes:\n\n> The \"now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts can't be\n> answered completely using a btree index. You could try using a GIST index here\n> but I'm not clear how much it would help you (or how much work it would be).\n\nTo add to my own comment you could also try creating two separate indexes on\nstart_ts and end_ts. 
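A rough sketch, using the table posted upthread (the index names here are arbitrary):\n\nCREATE INDEX test_table_1_start_ts_idx ON test_table_1 (start_ts);\nCREATE INDEX test_table_1_end_ts_idx ON test_table_1 (end_ts);\n\n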
Postgres can combine the two indexes using a bitmap scan.\nIt's not a complete solution like a GIST index would be though.\n\nIt also doesn't help at all with the planner estimating how many records will\nactually match.\n\n-- \ngreg\n\n", "msg_date": "04 Mar 2006 03:15:02 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad row estimates" }, { "msg_contents": "On Sat, Mar 04, 2006 at 02:01:35AM -0500, Greg Stark wrote:\n> Alex Adriaanse <[email protected]> writes:\n> \n> > SELECT count(*) FROM test_table_1\n> > INNER JOIN test_table_2 ON\n> > (test_table_2.s_id = 13300613 AND test_table_1.id = test_table_2.n_id)\n> > WHERE now() BETWEEN test_table_1.start_ts AND test_table_1.end_ts\n> > AND test_table_1.id = test_table_1.g_id;\n\nSomething else that helps in cases like this is to place both an upper\nand lower boundary on one (or both) fields if possible. For example, if\nyou know that start_ts and end_ts will always be within 1 hour of each\nother, adding the following will help:\n\nAND start_ts >= now()-'1 hour'::interval AND end_ts <= now()+'1\nhour'::interval\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Sat, 4 Mar 2006 09:46:05 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad row estimates" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> (I don't think the end_ts in the index is buying you much, despite its\n> appearance in the Index Cond in the plan.) \n\nWell, it saves some trips to the heap, but the indexscan is still going\nto run from the beginning of the index to start_ts = now(), because\nbtree has no idea that there's any correlation between the two index\ncolumns.\n\nIf you could put some a-priori bound on the interval width, you could\nadd a WHERE constraint \"AND now() - max_width <= start_ts\", which'd\nconstrain the index scan and possibly also get you a better planner\nestimate. Otherwise I think you really need a special datatype for time\nintervals and a GIST or r-tree index on it :-(.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Mar 2006 11:09:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad row estimates " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Otherwise I think you really need a special datatype for time\n> intervals and a GIST or r-tree index on it :-(.\n\nYou could actually take short cuts using expression indexes to do this. If it\nworks out well then you might want to implement a real data type to avoid the\noverhead of the SQL conversion functions.\n\nHere's an example. If I were to do this for real I would look for a better\ndatatype than the box datatype and I would wrap the whole conversion in an SQL\nfunction. 
But this will serve to demonstrate:\n\nstark=> create table interval_test (start_ts timestamp with time zone, end_ts timestamp with time zone);\nCREATE TABLE\n\nstark=> create index interval_idx on interval_test using gist (box(point(start_ts::abstime::integer, end_ts::abstime::integer) , point(start_ts::abstime::integer, end_ts::abstime::integer)));\nCREATE INDEX\n\nstark=> explain select * from interval_test where box(point(now()::abstime::integer,now()::abstime::integer),point(now()::abstime::integer,now()::abstime::integer)) ~ box(point(start_ts::abstime::integer, end_ts::abstime::integer) , point(start_ts::abstime::integer, end_ts::abstime::integer));\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using interval_idx on interval_test (cost=0.07..8.36 rows=2 width=16)\n Index Cond: (box(point((((now())::abstime)::integer)::double precision, (((now())::abstime)::integer)::double precision), point((((now())::abstime)::integer)::double precision, (((now())::abstime)::integer)::double precision)) ~ box(point((((start_ts)::abstime)::integer)::double precision, (((end_ts)::abstime)::integer)::double precision), point((((start_ts)::abstime)::integer)::double precision, (((end_ts)::abstime)::integer)::double precision)))\n(2 rows)\n\n-- \ngreg\n\n", "msg_date": "04 Mar 2006 13:11:13 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad row estimates" }, { "msg_contents": "Thank you all for your valuable input. I have tried creating a partial \nindex, a GIST index, and a GIST + partial index, as suggested, but it \ndoes not seem to make a significant difference. 
For instance:\n\nCREATE INDEX test_table_1_interval_idx ON test_table_1 USING GIST\n (box(point(start_ts::abstime::integer, start_ts::abstime::integer), point(end_ts::abstime::integer, end_ts::abstime::integer)))\n WHERE id = g_id;\n\nANALYZE test_table_1;\n\nEXPLAIN ANALYZE SELECT count(*) FROM test_table_1\n INNER JOIN test_table_2 ON (test_table_2.s_id=13300613 AND test_table_1.id = test_table_2.n_id)\n WHERE box(point(start_ts::abstime::integer, start_ts::abstime::integer), point(end_ts::abstime::integer, end_ts::abstime::integer))\n ~ box(point(now()::abstime::integer,now()::abstime::integer),point(now()::abstime::integer,now()::abstime::integer))\n AND test_table_1.id = test_table_1.g_id;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=15.09..15.10 rows=1 width=0) (actual time=69.771..69.772 rows=1 loops=1)\n -> Nested Loop (cost=9.06..15.08 rows=1 width=0) (actual time=69.752..69.752 rows=0 loops=1)\n -> Index Scan using test_table_1_interval_idx on test_table_1 (cost=0.07..4.07 rows=1 width=22) (actual time=2.930..3.607 rows=135 loops=1)\n Index Cond: (box(point((((start_ts)::abstime)::integer)::double precision, (((start_ts)::abstime)::integer)::double precision), point((((end_ts)::abstime)::integer)::double precision, (((end_ts)::abstime)::integer)::double precision)) ~ box(point((((now())::abstime)::integer)::double precision, (((now())::abstime)::integer)::double precision), point((((now())::abstime)::integer)::double precision, (((now())::abstime)::integer)::double precision)))\n -> Bitmap Heap Scan on test_table_2 (cost=8.99..11.00 rows=1 width=12) (actual time=0.486..0.486 rows=0 loops=135)\n Recheck Cond: ((test_table_2.s_id = 13300613::numeric) AND (\"outer\".id = test_table_2.n_id))\n -> BitmapAnd (cost=8.99..8.99 rows=1 width=0) (actual time=0.485..0.485 rows=0 loops=135)\n -> Bitmap Index Scan on test_table_2_s_id (cost=0.00..2.17 rows=48 width=0) (actual time=0.015..0.015 rows=1 loops=135)\n Index Cond: (s_id = 13300613::numeric)\n -> Bitmap Index Scan on test_table_2_n_id (cost=0.00..6.57 rows=735 width=0) (actual time=0.467..0.467 rows=815 loops=135)\n Index Cond: (\"outer\".id = test_table_2.n_id)\n Total runtime: 69.961 ms\n\n(Note: without the GIST index the query currently runs in about 65ms)\n\nIts row estimates are still way off. As a matter of fact, it almost \nseems as if the index doesn't affect row estimates at all.\n\nWhat would you guys suggest?\n\nThanks,\n\nAlex\n\nGreg Stark wrote:\n> You could actually take short cuts using expression indexes to do this. If it\n> works out well then you might want to implement a real data type to avoid the\n> overhead of the SQL conversion functions.\n>\n> Here's an example. If I were to do this for real I would look for a better\n> datatype than the box datatype and I would wrap the whole conversion in an SQL\n> function. 
But this will serve to demonstrate:\n>\n> stark=> create table interval_test (start_ts timestamp with time zone, end_ts timestamp with time zone);\n> CREATE TABLE\n>\n> stark=> create index interval_idx on interval_test using gist (box(point(start_ts::abstime::integer, end_ts::abstime::integer) , point(start_ts::abstime::integer, end_ts::abstime::integer)));\n> CREATE INDEX\n>\n> stark=> explain select * from interval_test where box(point(now()::abstime::integer,now()::abstime::integer),point(now()::abstime::integer,now()::abstime::integer)) ~ box(point(start_ts::abstime::integer, end_ts::abstime::integer) , point(start_ts::abstime::integer, end_ts::abstime::integer));\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using interval_idx on interval_test (cost=0.07..8.36 rows=2 width=16)\n> Index Cond: (box(point((((now())::abstime)::integer)::double precision, (((now())::abstime)::integer)::double precision), point((((now())::abstime)::integer)::double precision, (((now())::abstime)::integer)::double precision)) ~ box(point((((start_ts)::abstime)::integer)::double precision, (((end_ts)::abstime)::integer)::double precision), point((((start_ts)::abstime)::integer)::double precision, (((end_ts)::abstime)::integer)::double precision)))\n> (2 rows)\n>\n> \n", "msg_date": "Wed, 08 Mar 2006 11:15:01 -0600", "msg_from": "Alex Adriaanse <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad row estimates" }, { "msg_contents": "Alex Adriaanse <[email protected]> writes:\n\n> Its row estimates are still way off. As a matter of fact, it almost seems as\n> if the index doesn't affect row estimates at all.\n\nIndexes normally don't affect estimates. Expression indexes do effectively\ncreate a new column to generate stats for, but that doesn't really help here\nbecause there aren't any estimation functions for the geometric gist indexes.\n\n> -> BitmapAnd (cost=8.99..8.99 rows=1 width=0) (actual time=0.485..0.485 rows=0 loops=135)\n> -> Bitmap Index Scan on test_table_2_s_id (cost=0.00..2.17 rows=48 width=0) (actual time=0.015..0.015 rows=1 loops=135)\n> Index Cond: (s_id = 13300613::numeric)\n> -> Bitmap Index Scan on test_table_2_n_id (cost=0.00..6.57 rows=735 width=0) (actual time=0.467..0.467 rows=815 loops=135)\n> Index Cond: (\"outer\".id = test_table_2.n_id)\n\nIf this query is representative then it seems you might be better off without\nthe test_table_2_n_id index. Of course this could be a problem if you need\nthat index for other purposes.\n\nI'm puzzled how test_table_2_s_id's estimate isn't more precise. Are there\nsome values of s_id that are quite common and others that are unique? You\nmight try raising the statistics target on s_id.\n\nIncidentally, 70ms is pretty good. I'm usually happy if all my mundane queries\nare under 100ms and the more complex queries in the vicinity of 300ms. Trying\nto optimize below 100ms is hard because you'll find a lot of variability in\nthe performance. 
Any extraneous disk i/o from checkpoints, vacuums, even other\nservices, will throw off your expectations.\n\n-- \ngreg\n\n", "msg_date": "08 Mar 2006 12:37:07 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad row estimates" } ]
[ { "msg_contents": "Hi,\n\nI have two tables:\n\nCustomer: objectid, lastname, fk_address\nAddress: objectid, city\n\nI want to select all customers with a name >= some_name and living in a\ncity >= some_city, all comparisons case insensitive\n\nBelow is what I actually have. Given the fact that it takes forever to\nget a result (> 6 seconds) , there must be something wrong with my\nsolution or my expectation. Can anyone tell what I should do to make\nthis query go faster ( or convince me to wait for the result ;-()?\n\n\nSELECT customers.objectid FROM prototype.customers,prototype.addresses \nWHERE\ncustomers.contactAddress = addresses.objectId \nAND \n( \n TRIM(UPPER(lastName)) >= TRIM(UPPER('some_name'))\n AND \n TRIM(UPPER(city)) >= TRIM(UPPER('some_city'))\n) \norder by TRIM(UPPER(lastname)), TRIM(UPPER(city)) \n\nExplain analyze after a full alayse vacuum:\n\nSort (cost=54710.68..54954.39 rows=97484 width=111) (actual\ntime=7398.971..7680.405 rows=96041 loops=1)\n Sort Key: btrim(upper(customers.lastname)),\nbtrim(upper(addresses.city))\n -> Hash Join (cost=14341.12..46632.73 rows=97484 width=111) (actual\ntime=1068.862..5472.788 rows=96041 loops=1)\n Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n -> Seq Scan on customers (cost=0.00..24094.01 rows=227197\nwidth=116) (actual time=0.018..1902.646 rows=223990 loops=1)\n Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)\n -> Hash (cost=13944.94..13944.94 rows=158473 width=75) (actual\ntime=1068.467..1068.467 rows=158003 loops=1)\n -> Bitmap Heap Scan on addresses (cost=1189.66..13944.94\nrows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)\n Recheck Cond: (btrim(upper(city)) >=\n'NIJMEGEN'::text)\n -> Bitmap Index Scan on\nprototype_addresses_trim_upper_city (cost=0.00..1189.66 rows=158473\nwidth=0) (actual time=68.290..68.290 rows=158003 loops=1)\n Index Cond: (btrim(upper(city)) >=\n'NIJMEGEN'::text)\nTotal runtime: 7941.095 ms\n\n\nI have indices on :\nfki_customers_addresses\ncustomer.lastname (both lastname and trim(uppercase(lastname)) \naddresses.city (both city and trim(uppercase(city)) \n\nI \n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n\n\n\n\n\n\n\nHi,\n\nI have two tables:\n\nCustomer: objectid, lastname, fk_address\nAddress: objectid, city\n\nI want to select all customers with a name >= some_name and living in a city >= some_city, all comparisons case insensitive\n\nBelow is what I actually have. Given the fact that it takes forever to get a result (> 6 seconds) , there must be something wrong with my solution or my expectation. 
Can anyone tell what I should do to make this query go faster ( or convince me to wait for the result ;-()?\n\n\nSELECT customers.objectid FROM prototype.customers,prototype.addresses \nWHERE\ncustomers.contactAddress = addresses.objectId \nAND \n( \n TRIM(UPPER(lastName)) >= TRIM(UPPER('some_name'))\n AND \n TRIM(UPPER(city)) >= TRIM(UPPER('some_city'))\n)  \norder by TRIM(UPPER(lastname)), TRIM(UPPER(city)) \n\nExplain analyze after a full alayse vacuum:\n\nSort  (cost=54710.68..54954.39 rows=97484 width=111) (actual time=7398.971..7680.405 rows=96041 loops=1)\n  Sort Key: btrim(upper(customers.lastname)), btrim(upper(addresses.city))\n  ->  Hash Join  (cost=14341.12..46632.73 rows=97484 width=111) (actual time=1068.862..5472.788 rows=96041 loops=1)\n        Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n        ->  Seq Scan on customers  (cost=0.00..24094.01 rows=227197 width=116) (actual time=0.018..1902.646 rows=223990 loops=1)\n              Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)\n        ->  Hash  (cost=13944.94..13944.94 rows=158473 width=75) (actual time=1068.467..1068.467 rows=158003 loops=1)\n              ->  Bitmap Heap Scan on addresses  (cost=1189.66..13944.94 rows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)\n                    Recheck Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)\n                    ->  Bitmap Index Scan on prototype_addresses_trim_upper_city  (cost=0.00..1189.66 rows=158473 width=0) (actual time=68.290..68.290 rows=158003 loops=1)\n                          Index Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)\nTotal runtime: 7941.095 ms\n\n\nI have indices on :\nfki_customers_addresses\ncustomer.lastname (both lastname and trim(uppercase(lastname)) \naddresses.city (both city and trim(uppercase(city)) \n\nI \n\n\n\n\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl", "msg_date": "Sat, 04 Mar 2006 10:58:03 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "How to query and index for customer with lastname and city" }, { "msg_contents": "On 3/4/06, Joost Kraaijeveld <[email protected]> wrote:\n> Below is what I actually have. Given the fact that it takes forever to get\n> a result (> 6 seconds) , there must be something wrong with my solution or\n> my expectation. 
Can anyone tell what I should do to make this query go\n> faster ( or convince me to wait for the result ;-()?\n> Explain analyze after a full alayse vacuum:\n> Sort (cost=54710.68..54954.39 rows=97484 width=111) (actual\n> time=7398.971..7680.405 rows=96041 loops=1)\n> Sort Key: btrim(upper(customers.lastname)), btrim(upper(addresses.city))\n> -> Hash Join (cost=14341.12..46632.73 rows=97484 width=111) (actual\n> time=1068.862..5472.788 rows=96041 loops=1)\n> Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n> -> Seq Scan on customers (cost=0.00..24094.01 rows=227197\n> width=116) (actual time=0.018..1902.646 rows=223990 loops=1)\n> Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)\n> -> Hash (cost=13944.94..13944.94 rows=158473 width=75) (actual\n> time=1068.467..1068.467 rows=158003 loops=1)\n> -> Bitmap Heap Scan on addresses (cost=1189.66..13944.94\n> rows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)\n> Recheck Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)\n> -> Bitmap Index Scan on\n> prototype_addresses_trim_upper_city (cost=0.00..1189.66\n> rows=158473 width=0) (actual time=68.290..68.290 rows=158003 loops=1)\n> Index Cond: (btrim(upper(city)) >=\n> 'NIJMEGEN'::text)\n> Total runtime: 7941.095 ms\n\nexplain clearly shows, that index is used for addresses scan, but it\nis not so for users.\nexplain estimates that 227197 customers match the lastname criteria -\nwhich looks awfuly high.\nhow many record do you have in the customers table?\n\ni would try to create index test on customers(contactAddress,\ntrim(uppercase(lastname)));\nor with other ordring of fields.\n\ntry this - create the index, make analyze of customers table, and\nrecheck explain.\nthen try the second index in the same manner.\n\nmaybe this could of some help...\n\ndepesz\n", "msg_date": "Sat, 4 Mar 2006 14:49:44 +0100", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to query and index for customer with lastname and city" }, { "msg_contents": "Hi Hubert,\n\nOn Sat, 2006-03-04 at 14:49 +0100, hubert depesz lubaczewski wrote:\n> > Sort (cost=54710.68..54954.39 rows=97484 width=111) (actual\n> > time=7398.971..7680.405 rows=96041 loops=1)\n> > Sort Key: btrim(upper(customers.lastname)), btrim(upper(addresses.city))\n> > -> Hash Join (cost=14341.12..46632.73 rows=97484 width=111) (actual time=1068.862..5472.788 rows=96041 loops=1)\n> > Hash Cond: (\"outer\".contactaddress = \"inner\".objectid)\n> > -> Seq Scan on customers (cost=0.00..24094.01 rows=227197 width=116) (actual time=0.018..1902.646 rows=223990 loops=1)\n> > Filter: (btrim(upper(lastname)) >= 'JANSEN'::text)\n> > -> Hash (cost=13944.94..13944.94 rows=158473 width=75) (actual time=1068.467..1068.467 rows=158003 loops=1)\n> > -> Bitmap Heap Scan on addresses (cost=1189.66..13944.94 rows=158473 width=75) (actual time=71.259..530.986 rows=158003 loops=1)\n> > Recheck Cond: (btrim(upper(city)) >= 'NIJMEGEN'::text)\n> > -> Bitmap Index Scan on prototype_addresses_trim_upper_city (cost=0.00..1189.66 rows=158473 width=0) (actual time=68.290..68.290 rows=158003 loops=1)\n> > Index Cond: (btrim(upper(city)) >=> 'NIJMEGEN'::text)\n> > Total runtime: 7941.095 ms\n> \n> explain clearly shows, that index is used for addresses scan, but it\nYes, but I do not understand why I have both a \"Bitmap Index Scan\" and\na \"Bitmap Heap Scan\" on (btrim(upper(city)) >=> 'NIJMEGEN'::text)?\n\n> is not so for users.\n> explain estimates that 227197 customers match the lastname criteria 
-\n> which looks awfuly high.\n> how many record do you have in the customers table?\n368915 of which 222465 actually meet the condition. \n\n>From what I understand from the mailing list, PostgreSQL prefers a table\nscan whenever it expects that the number of records in the resultset\nwill be ~ > 10 % of the total number of records in the table. Which\nexplains the table scan for customers, but than again, it does not\nexplain why it uses the index on addresses: it has 369337 addresses of\nwhich 158003 meet the condition\n\n> i would try to create index test on customers(contactAddress,\n> trim(uppercase(lastname)));\n> or with other ordring of fields.\n> \n> try this - create the index, make analyze of customers table, and\n> recheck explain.\n> then try the second index in the same manner.\nMakes no difference.\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n", "msg_date": "Sat, 04 Mar 2006 15:18:23 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to query and index for customer with lastname" }, { "msg_contents": "On 3/4/06, Joost Kraaijeveld <[email protected]> wrote:\n> > how many record do you have in the customers table?\n> 368915 of which 222465 actually meet the condition.\n> >From what I understand from the mailing list, PostgreSQL prefers a table\n> scan whenever it expects that the number of records in the resultset\n> will be ~ > 10 % of the total number of records in the table. Which\n> explains the table scan for customers, but than again, it does not\n> explain why it uses the index on addresses: it has 369337 addresses of\n> which 158003 meet the condition\n\n\nbitmap index scan is faster than sequential table scan. that's all. it\nwas introduced in 8.1 as far as i remember.\nbasically - i doubt if you can get better performace from query when\nthe result row-count is that high.\n\nout of curiosity though - why do you need so many rows? it's not\npossible to view them, nor do anything meaningful with 200 thousand\nrows!\n\ndepesz\n", "msg_date": "Sat, 4 Mar 2006 15:23:08 +0100", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to query and index for customer with lastname and city" }, { "msg_contents": "On Saturday 04 March 2006 08:23, hubert depesz lubaczewski wrote:\n> On 3/4/06, Joost Kraaijeveld <[email protected]> wrote:\n> > > how many record do you have in the customers table?\n> >\n> > 368915 of which 222465 actually meet the condition.\n> >\n> > >From what I understand from the mailing list, PostgreSQL prefers a table\n> >\n> > scan whenever it expects that the number of records in the resultset\n> > will be ~ > 10 % of the total number of records in the table. Which\n> > explains the table scan for customers, but than again, it does not\n> > explain why it uses the index on addresses: it has 369337 addresses of\n> > which 158003 meet the condition\n>\n> bitmap index scan is faster than sequential table scan. that's all. it\n> was introduced in 8.1 as far as i remember.\n> basically - i doubt if you can get better performace from query when\n> the result row-count is that high.\n>\n> out of curiosity though - why do you need so many rows? 
it's not\n> possible to view them, nor do anything meaningful with 200 thousand\n> rows!\n>\n> depesz\n\nIf you're just displaying, use limit and offset to grab one page at a time. \nIf you're manipulating, it would be a good idea to do something in a stored \nprocedure.\n", "msg_date": "Sat, 4 Mar 2006 15:35:17 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to query and index for customer with lastname and city" } ]
[ { "msg_contents": "\n\tBitmap index scan is bliss. Many thanks to the postgres team ! Now \nsearching in tables with a lot of fields and conditions is no longer a \npain.\n\n\tAnd just a thought :\n\n\tSELECT * FROM table WHERE category IN (1,2,3) ORDER BY price LIMIT 10;\n\n\tSuppose you have an index on category, and another index on price. \nDepending on the stats postgres has about the values, you'll either get :\n\n\t0- seq scan + sort\n\t1- Plain or Bitmap Index scan using \"category\", then sort by \"price\"\n\t2- Index scan on \"price\", Filter on \"category IN (1,2,3)\", no sort.\n\n\t1 is efficient if the category is rare. Postgres knows this and uses this \nplan well.\n\tWithout a LIMIT, option 1 should be preferred.\n\n\t2 is efficient if the items in the categories 1,2,3 are cheap (close to \nthe start of the index on price). However if the items in question are on \nthe other side of the index, it will index-scan a large part of the table. \nThis can be a big hit. Postgres has no stats about the correlation of \n\"category\" and \"price\", so it won't know when there is going to be a \nproblem.\n\n\tAnother option would be interesting. It has two steps :\n\n\t- Build a bitmap using the index on \"category\" (just like in case 1)\n\tso we know which pages on the table have relevant rows\n\n\t- Index scan on \"price\", but only looking in the heap for pages which are \nflagged in the bitmap, and then \"Recheck Cond\" on \"category\".\n\tIn other words, do an index scan to get the rows in the right order, but \ndon't bother to check the heap for pages where the bitmap says there are \nno rows.\n\tIn the worst case, you still have to run through the entire index, but at \nleast not through the entire table !\t\n\n\tIt can also speed up some merge joins.\n\n\tWhat do you think ?\n\n", "msg_date": "Sun, 05 Mar 2006 22:00:25 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Planner enhancement suggestion." }, { "msg_contents": "On Sun, Mar 05, 2006 at 10:00:25PM +0100, PFC wrote:\n> \n> \tBitmap index scan is bliss. Many thanks to the postgres team ! Now \n> searching in tables with a lot of fields and conditions is no longer a \n> pain.\n> \n> \tAnd just a thought :\n> \n> \tSELECT * FROM table WHERE category IN (1,2,3) ORDER BY price LIMIT \n> \t10;\n> \n> \tSuppose you have an index on category, and another index on price. \n> Depending on the stats postgres has about the values, you'll either get :\n> \n> \t0- seq scan + sort\n> \t1- Plain or Bitmap Index scan using \"category\", then sort by \"price\"\n> \t2- Index scan on \"price\", Filter on \"category IN (1,2,3)\", no sort.\n> \n> \t1 is efficient if the category is rare. Postgres knows this and uses \n> \tthis plan well.\n> \tWithout a LIMIT, option 1 should be preferred.\n> \n> \t2 is efficient if the items in the categories 1,2,3 are cheap (close \n> \tto the start of the index on price). However if the items in question are \n> on the other side of the index, it will index-scan a large part of the \n> table. This can be a big hit. Postgres has no stats about the correlation \n> of \"category\" and \"price\", so it won't know when there is going to be a \n> problem.\n> \n> \tAnother option would be interesting. 
It has two steps :\n> \n> \t- Build a bitmap using the index on \"category\" (just like in case 1)\n> \tso we know which pages on the table have relevant rows\n> \n> \t- Index scan on \"price\", but only looking in the heap for pages \n> \twhich are flagged in the bitmap, and then \"Recheck Cond\" on \"category\".\n> \tIn other words, do an index scan to get the rows in the right order, \n> \tbut don't bother to check the heap for pages where the bitmap says there \n> are no rows.\n> \tIn the worst case, you still have to run through the entire index, \n> \tbut at least not through the entire table !\t\n> \n> \tIt can also speed up some merge joins.\n\nThe problem is that you're now talking about doing 2 index scans instead\nof just one and a sort. If the correlation on price is high, it could\nstill win. As the cost estimator for index scan stands right now,\nthere's no way such a plan would be chosen unless correlation was\nextremely high, however.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 6 Mar 2006 18:19:13 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner enhancement suggestion." }, { "msg_contents": "\n> The problem is that you're now talking about doing 2 index scans instead\n> of just one and a sort.\n\n\tIt depends on what you call an index scan :\n\ta- Scanning just the index (no heap lookup) to create a bitmap\n\tb- Scanning the index and hitting the heap in index order to retrieve the \nrows\n\n\t(a) should be quite fast, because indexes generally use less space than \nthe main table, and have good locality of reference. (b) is OK if the \ntable fits in memory, but if it has to seek on every row from the heap...\n\n\tSo, when doing :\n\tSELECT * FROM products WHERE category=C ORDER BY price LIMIT 20;\n\n\tIf the category contains few products, using the index on category then \nsorting is good.\n\tHowever, if the category contains many items, postgres is likely to use \nthe index on price to avoid the sort. It needs to lose time fetching many \nrows from the heap which will not be in category C. In that case, I guess \nit would be a win to build a bitmap of the pages containing rows which \nbelongs to category C, and only do the heap lookup on these pages.\n\n\tI have a query like that. When category C contains cheap products, the \nindex scan on price finds them pretty quick. However if it is a category \ncontaining mostly expensive products, the index scan will have to hit most \nof the table in order to find them. The time needed for the query for \nthese two extreme varies from 1 ms to about 20 ms (and that's because the \ntable is fully cached, or else the worst case would be a lot slower). I \nwould definitely prefer a constant 2 ms. The other solution is to create \nan index on (category,price), but this path leads to lots, lots of indexes \ngiven all the combinations.\n\n\tThe bitmap trick I proposed in my previous post would be even more \ninteresting if the table is clustered on category (which seems a \nreasonable thing to do).\n\n> If the correlation on price is high, it could\n> still win. 
As the cost estimator for index scan stands right now,\n> there's no way such a plan would be chosen unless correlation was\n> extremely high, however.\n\n\tDoes the cost estimator know about this kind of correlation ?\n\n\n\n\n\n\n\n", "msg_date": "Tue, 07 Mar 2006 19:09:15 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner enhancement suggestion." }, { "msg_contents": "On Tue, Mar 07, 2006 at 07:09:15PM +0100, PFC wrote:\n> \n> >The problem is that you're now talking about doing 2 index scans instead\n> >of just one and a sort.\n> \n> \tIt depends on what you call an index scan :\n> \ta- Scanning just the index (no heap lookup) to create a bitmap\n\nSure, and then you scan the other index and read the heap at the same\ntime (b). Your plan requires doing both. The question is: in what cases\nwill it be faster to scan the extra index and build the bitmap vs. just\ndoing a sort.\n\n> \tb- Scanning the index and hitting the heap in index order to \n> \tretrieve the rows\n> \n> \t(a) should be quite fast, because indexes generally use less space \n> \tthan the main table, and have good locality of reference. (b) is OK if the \n> table fits in memory, but if it has to seek on every row from the heap...\n\nIf the table fits in memory, who cares? A sort should be damn fast at\nthat point, because you're dealing with a small set of data.\n\n> \tSo, when doing :\n> \tSELECT * FROM products WHERE category=C ORDER BY price LIMIT 20;\n> \n> \tIf the category contains few products, using the index on category \n> \tthen sorting is good.\n> \tHowever, if the category contains many items, postgres is likely to \n> \tuse the index on price to avoid the sort. It needs to lose time fetching \n\nHave you actually seen this behavior? My experience is that you have to\nhave a correlation somewhere over 80-90% before an index scan is favored\nover a seqscan + sort (which as I mentioned before appears to be\nbroken).\n\n> \tThe bitmap trick I proposed in my previous post would be even more \n> interesting if the table is clustered on category (which seems a \n> reasonable thing to do).\n\nIn which case it's highly unlikely that using the price index will buy\nyou anything.\n\n> \tDoes the cost estimator know about this kind of correlation ?\n\nYes. The problem is that the index scan cost estimator figures out a\nbest and worst case cost, and then interpolates between the two using\ncorrelation^2. IMO it should be using abs(correlation) to do this, and\nthere's some data at http://stats.distributed.net/~decibel/ that backs\nthis up. There's also been some discussions on -hackers (search the\narchives for \"index cost correlation nasby\"), but I've not had time to\nfollow up on this. If you wanted to test a new index cost formula it\nwould be a one line change to the code.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Mar 2006 12:37:15 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner enhancement suggestion." } ]
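For reference, a sketch of the (category, price) workaround PFC mentions, using the products table from his own example. With the composite index the planner can walk just the index slice for one category, already sorted by price, and stop after the LIMIT, so there is no sort and no visit to unrelated heap pages:

    CREATE INDEX products_category_price_idx ON products (category, price);

    SELECT *
    FROM products
    WHERE category = 3        -- a single category value
    ORDER BY price
    LIMIT 20;

As PFC points out, this only helps for filters on the leading column(s), so it does not generalize to arbitrary combinations of conditions the way his proposed bitmap-filtered index scan would.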
[ { "msg_contents": "Hi.\n\nHas anybody tried the new Sun \"cool-thread\" servers t1000/t2000 from\nSun? I'd love to see benchmarks with Solaris 10 and pg 8.1.\n\nregards\nClaus\n", "msg_date": "Mon, 6 Mar 2006 10:05:45 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": true, "msg_subject": "t1000/t2000 sun-servers" }, { "msg_contents": "I may be able to organize a test on a T2000 if someone could give\nadvice as to an appropriate test to run...\n\nCheers,\n\nNeil\n\nOn 3/6/06, Claus Guttesen <[email protected]> wrote:\n> Hi.\n>\n> Has anybody tried the new Sun \"cool-thread\" servers t1000/t2000 from\n> Sun? I'd love to see benchmarks with Solaris 10 and pg 8.1.\n>\n> regards\n> Claus\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Mon, 6 Mar 2006 14:30:41 +0000", "msg_from": "\"Neil Saunders\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1000/t2000 sun-servers" }, { "msg_contents": "Suggestions for benchmarks on Sun Fire T2000...\n\n* Don't try DSS or TPC-H type of test with Postgres on Sun Fire T2000\n\nSince such queries tend to have one connection, it will perform badly \nwith Postgre since it will use only one hardware virtual CPU of the \navailable 32 virtual CPU on Sun Fire T2000. (Oracle/DB2 have ways of \nbreaking the queries into multiple processes and hence use multiple \nvirtual CPUs on Sun Fire T2000, PostgreSQL cannot do the same in such cases)\n\n* Use OLTP Type of benchmark\n\nWhere you have more than 30 simultaneous users/connections doing work on \nPostgres without bottlenecking on datafiles of course :-)\n\n* Use multiple databases or instances of Postgresql\n\nLike migrate all your postgresql databases to one T2000. You might see \nthat your average response time may not be faster but it can handle \nprobably all your databases migrated to one T2000.\n\nIn essence, your single thread performance will not speed up on Sun Fire \nT2000 but you can certainly use it to replace all your individual \npostgresql servers in your organization or see higher scalability in \nterms of number of users handled with 1 server with Sun Fire T2000.\n\n\nFor your /etc/system use the parameters as mentioned in\nhttp://www.sun.com/servers/coolthreads/tnb/parameters.jsp\n\nFor hints on setting it up for Postgresql refer to other databases setup on\nhttp://www.sun.com/servers/coolthreads/tnb/applications.jsp\n\nIf you get specific performance problems send email to \[email protected]\n\nRegards,\nJignesh\n\n\n\n\n\nNeil Saunders wrote:\n\n>I may be able to organize a test on a T2000 if someone could give\n>advice as to an appropriate test to run...\n>\n>Cheers,\n>\n>Neil\n>\n>On 3/6/06, Claus Guttesen <[email protected]> wrote:\n> \n>\n>>Hi.\n>>\n>>Has anybody tried the new Sun \"cool-thread\" servers t1000/t2000 from\n>>Sun? 
I'd love to see benchmarks with Solaris 10 and pg 8.1.\n>>\n>>regards\n>>Claus\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n> \n>\n", "msg_date": "Mon, 06 Mar 2006 15:10:53 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1000/t2000 sun-servers" }, { "msg_contents": "On 06.03.2006, at 21:10 Uhr, Jignesh K. Shah wrote:\n\n> Like migrate all your postgresql databases to one T2000. You might \n> see that your average response time may not be faster but it can \n> handle probably all your databases migrated to one T2000.\n>\n> In essence, your single thread performance will not speed up on Sun \n> Fire T2000 but you can certainly use it to replace all your \n> individual postgresql servers in your organization or see higher \n> scalability in terms of number of users handled with 1 server with \n> Sun Fire T2000.\n\nHow good is a pgbench test for evaluating things like this? I have \nused it to compare several machines, operating systems and PostgreSQL \nversions - but it was more or less just out of curiosity. The real \nevaluation was made with \"real life tests\" - mostly scripts which \nalso tested the application server itself.\n\nBut as it was it's easy to compare several machines with pgbench, I \njust did the tests and they were interesting and reflected the real \nworld not as bad as I had thought from a \"benchmark\".\n\nSo, personally I'm interested in a simple pgbench test - perhaps with \nsome more ( > 50) clients simulated ...\n\ncug", "msg_date": "Mon, 6 Mar 2006 22:24:29 +0100", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1000/t2000 sun-servers" }, { "msg_contents": "\npgbench according to me is more io write intensive benchmark.\n\nT2000 with its internal drive may not perform well with pgbench with a \nhigh load. If you are using external storage, try it out.\n\nI havent tried it out yet but let me know what you see.\n\n\n-Jignesh\n\n\nGuido Neitzer wrote:\n\n> On 06.03.2006, at 21:10 Uhr, Jignesh K. Shah wrote:\n>\n>> Like migrate all your postgresql databases to one T2000. You might \n>> see that your average response time may not be faster but it can \n>> handle probably all your databases migrated to one T2000.\n>>\n>> In essence, your single thread performance will not speed up on Sun \n>> Fire T2000 but you can certainly use it to replace all your \n>> individual postgresql servers in your organization or see higher \n>> scalability in terms of number of users handled with 1 server with \n>> Sun Fire T2000.\n>\n>\n> How good is a pgbench test for evaluating things like this? I have \n> used it to compare several machines, operating systems and PostgreSQL \n> versions - but it was more or less just out of curiosity. 
The real \n> evaluation was made with \"real life tests\" - mostly scripts which \n> also tested the application server itself.\n>\n> But as it was it's easy to compare several machines with pgbench, I \n> just did the tests and they were interesting and reflected the real \n> world not as bad as I had thought from a \"benchmark\".\n>\n> So, personally I'm interested in a simple pgbench test - perhaps with \n> some more ( > 50) clients simulated ...\n>\n> cug\n\n", "msg_date": "Mon, 06 Mar 2006 17:11:29 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1000/t2000 sun-servers" }, { "msg_contents": "On Mon, Mar 06, 2006 at 10:24:29PM +0100, Guido Neitzer wrote:\n> On 06.03.2006, at 21:10 Uhr, Jignesh K. Shah wrote:\n> \n> >Like migrate all your postgresql databases to one T2000. You might \n> >see that your average response time may not be faster but it can \n> >handle probably all your databases migrated to one T2000.\n> >\n> >In essence, your single thread performance will not speed up on Sun \n> >Fire T2000 but you can certainly use it to replace all your \n> >individual postgresql servers in your organization or see higher \n> >scalability in terms of number of users handled with 1 server with \n> >Sun Fire T2000.\n> \n> How good is a pgbench test for evaluating things like this? I have \n> used it to compare several machines, operating systems and PostgreSQL \n> versions - but it was more or less just out of curiosity. The real \n> evaluation was made with \"real life tests\" - mostly scripts which \n> also tested the application server itself.\n> \n> But as it was it's easy to compare several machines with pgbench, I \n> just did the tests and they were interesting and reflected the real \n> world not as bad as I had thought from a \"benchmark\".\n> \n> So, personally I'm interested in a simple pgbench test - perhaps with \n> some more ( > 50) clients simulated ...\n\nI had the opportunity to do some dbt2 testing on Solaris and Sun\nhardware; it's probably your best bet for a test. You'll need to\nessentially fit the database into memory though, otherwise you'll be\ncompletely I/O bound. Another issue is that currently the test framework\nruns on the same machine as the database, so it's not very realistic in\nthat regard, but if you were to change that dependancy I'm pretty sure\nOSBC would gratefully accept patches.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Mar 2006 12:43:27 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1000/t2000 sun-servers" } ]
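If someone does run the pgbench comparison Guido asks for, an invocation along these lines (a sketch only; the scaling factor and client counts are arbitrary choices) would at least exercise the 30-plus concurrent connections Jignesh says the T2000 needs before it shows its strength:

    ./pgbench -i -s 100 test        # initialize with scaling factor 100
    ./pgbench -c 50 -t 1000 test    # 50 clients, 1000 transactions each
    ./pgbench -c 100 -t 1000 test   # then raise the client count further

Keep Jignesh's caveat in mind that pgbench is write-intensive, so on a T2000 with internal drives it may end up measuring the disks rather than the CPUs.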
[ { "msg_contents": "How big a VPS would I need to run a Postgres DB.\n\nI need a Postgres database with about 15 tables that will run on a\nsingle virtual private server.\n\nThe 15 tables will be spread over three tablespaces (5 tables per\ntablespace) and be accessed by three different applications running on\ndifferent machines.\n\nOne application will add about 500 orders per day\nAnother will access this data to create and send about 500 emails per day\nA third will access this data to create an after-sales survey for at\nmost 500 times per day.\n\nWhat type of VPS would I need to run a database with this type pf load?\nIs 128 MB ram enough?\nWhat percentage of a 2.8 GHz CPU would be required?\n\nIt's been a long time since I used Postgres.\n\nThanks for any help,\nNagita\n", "msg_date": "Mon, 6 Mar 2006 05:28:02 -0800", "msg_from": "\"Nagita Karunaratne\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres on VPS - how much is enough?" }, { "msg_contents": "On 3/6/06, Nagita Karunaratne <[email protected]> wrote:\n> How big a VPS would I need to run a Postgres DB.\n>\n\n> One application will add about 500 orders per day\n> Another will access this data to create and send about 500 emails per day\n> A third will access this data to create an after-sales survey for at\n> most 500 times per day.\n>\n> What type of VPS would I need to run a database with this type pf load?\n> Is 128 MB ram enough?\n> What percentage of a 2.8 GHz CPU would be required?\n\nMy problem with running PG inside of a VPS was that the VPS used a\nvirtual filesystem... basically, a single file that had been formatted\nand loop mounted so that it looked like a regular hard drive.\nUnfortunately, it was very slow. The difference between my application\nand yours is that mine well more than filled the 1GB of RAM that I had\nallocated. If your data will fit comfortably into RAM then you may be\nfine.\n\nIf you really want to know how it will work, try running it yourself.\nTwo projects that make this really easy and free is the colinux\nproject[1] which allows you to run a linux VPS in Windows and the\nlinux-vserver project[2] which is free software that works on pretty\nmuch any linux OS.\n\nTry it out, tinker with the values and that way you won't have to\nguess when making your purchase decission.\n\n[1] http://www.colinux.org/ Coperative Linux\n[2] http://linux-vserver.org/ Linux-vserver project\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Mon, 6 Mar 2006 08:56:00 -0600", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "Nagita Karunaratne wrote:\n> How big a VPS would I need to run a Postgres DB.\n>\n> I need a Postgres database with about 15 tables that will run on a\n> single virtual private server.\n>\n> The 15 tables will be spread over three tablespaces (5 tables per\n> tablespace) and be accessed by three different applications running on\n> different machines.\n>\n> One application will add about 500 orders per day\n> Another will access this data to create and send about 500 emails per day\n> A third will access this data to create an after-sales survey for at\n> most 500 times per day.\n>\n> What type of VPS would I need to run a database with this type pf load?\n> Is 128 MB ram enough?\n> What percentage of a 2.8 GHz CPU would be required?\n> \nIf the database is going to be larger then the allocated memory, disk \nI/O is very important. 
Not all VPS technologies are equal in this \nregard. (see link below) Like someone else suggested, the best way to \nknow what VPS specs you need is to do your own tests/benchamarks.\n\nhttp://www.cl.cam.ac.uk/Research/SRG/netos/xen/performance.html\n\n-Kevin\n\n", "msg_date": "Mon, 06 Mar 2006 08:20:44 -0700", "msg_from": "Kevin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "Thanks for the replies,\n\n From personal experience, would you run Postgres on a linux machine\n(NOT a vps) with 512MB of ram?\n\nAssumining I can keep all my data in memory.\n\nThanks,\nNagita\n\n> My problem with running PG inside of a VPS was that the VPS used a\n> virtual filesystem... basically, a single file that had been formatted\n> and loop mounted so that it looked like a regular hard drive.\n> Unfortunately, it was very slow. The difference between my application\n> and yours is that mine well more than filled the 1GB of RAM that I had\n> allocated. If your data will fit comfortably into RAM then you may be\n> fine.\n>\n", "msg_date": "Mon, 6 Mar 2006 08:52:44 -0800", "msg_from": "\"Nagita Karunaratne\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "On Mon, 6 Mar 2006, Matthew Nuzum wrote:\n\n> On 3/6/06, Nagita Karunaratne <[email protected]> wrote:\n>> How big a VPS would I need to run a Postgres DB.\n>>\n>\n>> One application will add about 500 orders per day\n>> Another will access this data to create and send about 500 emails per day\n>> A third will access this data to create an after-sales survey for at\n>> most 500 times per day.\n>>\n>> What type of VPS would I need to run a database with this type pf load?\n>> Is 128 MB ram enough?\n>> What percentage of a 2.8 GHz CPU would be required?\n>\n> My problem with running PG inside of a VPS was that the VPS used a\n> virtual filesystem... basically, a single file that had been formatted\n> and loop mounted so that it looked like a regular hard drive.\n> Unfortunately, it was very slow. The difference between my application\n> and yours is that mine well more than filled the 1GB of RAM that I had\n> allocated. If your data will fit comfortably into RAM then you may be\n> fine.\n\nWe host VPSs here (http://www.hub.org) and don't use the 'single file, \nvirtual file system' to put them into ... it must depend on where you \nhost?\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n", "msg_date": "Mon, 6 Mar 2006 13:14:45 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "On 3/6/06, Marc G. Fournier <[email protected]> wrote:\n> On Mon, 6 Mar 2006, Matthew Nuzum wrote:\n> > My problem with running PG inside of a VPS was that the VPS used a\n> > virtual filesystem... basically, a single file that had been formatted\n> > and loop mounted so that it looked like a regular hard drive.\n> > Unfortunately, it was very slow. The difference between my application\n> > and yours is that mine well more than filled the 1GB of RAM that I had\n> > allocated. If your data will fit comfortably into RAM then you may be\n> > fine.\n>\n> We host VPSs here (http://www.hub.org) and don't use the 'single file,\n> virtual file system' to put them into ... it must depend on where you\n> host?\n\nThat's true... 
I hope I didn't imply that I am anti-vps, I run my own\nservers and one of them is dedicated to doing VPS for different\napplications. I think they're wonderful.\n\n\nOn 3/6/06, Nagita Karunaratne <[email protected]> wrote:\n> From personal experience, would you run Postgres on a linux machine\n> (NOT a vps) with 512MB of ram?\n>\n> Assumining I can keep all my data in memory.\n\nNagita,\n\nIt all depends on performance... I have one postgres database that\nruns on a Pentium 350MHz with 128MB of RAM. It does 1 insert per\nminute 24 hours per day. Because the load is so low, I can get away\nwith minimal hardware.\n\nIf your application has a lot of inserts/updates then disk speed is\nimportant and can vary greatly from one VPS to another.\n\nIf your application is not time-critical than this may be a moot point anyway.\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Mon, 6 Mar 2006 13:45:25 -0600", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "Clustering solutions for PostgreSQL are currently pretty limited. Slony\ncould be a good option in the future, but it currently only supports\nMaster-Slave replication (not true clustering) and in my experience is a\npain to set up and administer. Bizgres MPP has a lot of promise,\nespecially for data warehouses, but it currently doesn't have the best\nOLTP database performance. \n\nSo, I had a couple of questions:\n1) I have heard bad things from people on this list regarding SANs - but\nis there a better alternative for a high performance database cluster?\n(both for redundancy and performance) I've heard internal storage\ntouted before, but then you have to do something like master-master\nreplication to get horizontal scalability and write performance will\nsuffer.\n\n2) Has anyone on this list had experience using Ingres R3 in a clustered\nenvironment? I am considering using Ingres R3's built-in clustering\nsupport with a SAN, but am interested to know other people's experiences\nbefore we start toying with this possibility. Any experience with the\nIngres support from Computer Associates? Good/bad?\n\nJeremy\n", "msg_date": "Mon, 06 Mar 2006 14:58:32 -0500", "msg_from": "\"Jeremy Haile\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres and Ingres R3 / SAN" }, { "msg_contents": "On Mon, Mar 06, 2006 at 01:14:45PM -0400, Marc G. Fournier wrote:\n> We host VPSs here (http://www.hub.org) and don't use the 'single file, \n> virtual file system' to put them into ... it must depend on where you \n> host?\n\nYeah, but aren't you also using FreeBSD jails? AFAIK linux doesn't have\nan equivalent to jail; all their VPS stuff actually brings up a\nfull-blown copy of linux, kernel and all.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Mar 2006 12:45:58 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "Please don't steal threds; post a new email rather than replying to an\nexisting thread.\n\nOn Mon, Mar 06, 2006 at 02:58:32PM -0500, Jeremy Haile wrote:\n> Clustering solutions for PostgreSQL are currently pretty limited. 
Slony\n> could be a good option in the future, but it currently only supports\n> Master-Slave replication (not true clustering) and in my experience is a\n> pain to set up and administer. Bizgres MPP has a lot of promise,\n> especially for data warehouses, but it currently doesn't have the best\n> OLTP database performance. \n> \n> So, I had a couple of questions:\n> 1) I have heard bad things from people on this list regarding SANs - but\n> is there a better alternative for a high performance database cluster?\n> (both for redundancy and performance) I've heard internal storage\n> touted before, but then you have to do something like master-master\n> replication to get horizontal scalability and write performance will\n> suffer.\n\nPostgreSQL on a SAN won't buy you what I think you think it will. It's\nessentially impossible to safely run two PostgreSQL installs off the\nsame data files without destroying your data. What a SAN can buy you is\ndisk-level replication, but I've no experience with that.\n\n> 2) Has anyone on this list had experience using Ingres R3 in a clustered\n> environment? I am considering using Ingres R3's built-in clustering\n> support with a SAN, but am interested to know other people's experiences\n> before we start toying with this possibility. Any experience with the\n> Ingres support from Computer Associates? Good/bad?\n\nCan you point us at more info about this? I can't even find a website\nfor Ingress...\n\nI'd be careful about OSS-based clusters. Everyone I've seen has some\nlimitations, some of which are pretty serious. There are some that are\ncommand-based clustering/replication, but that raises some serious\npotential issues with non-deterministic functions among other things.\nContinuent seems to have done a good job dealing with this, but there's\nstill some gotchas you need to be aware of.\n\nThen there's things like MySQL cluster, which requires that the entire\ndatabase fits in memory. Well, if the database is in memory, it's going\nto be pretty dang fast to begin with, so you're unlikely to need\nscaleability across machines.\n\nBasically, truely enterprise-class clustering (and replication) are\nextremely hard to do, which is why this is pretty much exclusively the\nrealm of the 'big 3' at this point. Slony-II could seriously change\nthings when it comes out, though it still won't give you the data\nguarantees that a true syncronous multi-master setup does. But it will\nhopefully offer multi-master syncronous type behavior with the\nperformance of an async database, which would be a huge leap forward.\n\nPerhaps if you posted your performance requirements someone could help\npoint you to a solution that would meet them.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Mar 2006 13:00:19 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and Ingres R3 / SAN" }, { "msg_contents": "On Tue, 2006-03-07 at 13:00 -0600, Jim C. Nasby wrote:\n\n...\n\n> PostgreSQL on a SAN won't buy you what I think you think it will. It's\n> essentially impossible to safely run two PostgreSQL installs off the\n> same data files without destroying your data. What a SAN can buy you is\n> disk-level replication, but I've no experience with that.\n\nIt is possible to run two instances against the same SAN using tools\nsuch as RedHat's Cluster Suite. 
We use that in-house as a cheap\nalternative for Oracle clustering, although we're not using it for our\nPostgreSQL servers yet. It's not for load balancing, just\nactive/passive fault tolerance.\n\n-- Mark Lewis\n", "msg_date": "Tue, 07 Mar 2006 11:20:50 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and Ingres R3 / SAN" }, { "msg_contents": "On Tue, Mar 07, 2006 at 11:20:50AM -0800, Mark Lewis wrote:\n> On Tue, 2006-03-07 at 13:00 -0600, Jim C. Nasby wrote:\n> \n> ...\n> \n> > PostgreSQL on a SAN won't buy you what I think you think it will. It's\n> > essentially impossible to safely run two PostgreSQL installs off the\n> > same data files without destroying your data. What a SAN can buy you is\n> > disk-level replication, but I've no experience with that.\n> \n> It is possible to run two instances against the same SAN using tools\n> such as RedHat's Cluster Suite. We use that in-house as a cheap\n> alternative for Oracle clustering, although we're not using it for our\n> PostgreSQL servers yet. It's not for load balancing, just\n> active/passive fault tolerance.\n\nTrue, but the OP was talking about scaleability, which is not something\nyou get with this setup.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Mar 2006 13:22:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and Ingres R3 / SAN" }, { "msg_contents": "\n\n\nOn 7/3/06 18:45, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> On Mon, Mar 06, 2006 at 01:14:45PM -0400, Marc G. Fournier wrote:\n>> We host VPSs here (http://www.hub.org) and don't use the 'single file,\n>> virtual file system' to put them into ... it must depend on where you\n>> host?\n> \n> Yeah, but aren't you also using FreeBSD jails? AFAIK linux doesn't have\n> an equivalent to jail; all their VPS stuff actually brings up a\n> full-blown copy of linux, kernel and all.\n\nNo, linux vserver is equivalent to a jail - and they work superbly imho.\ndeveloper.pgadmin.org is just one such VM that I run.\n\nhttp://www.linux-vserver.org/\n\nRegards, Dave.\n\n", "msg_date": "Tue, 07 Mar 2006 20:08:50 +0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" }, { "msg_contents": "On 3/7/06, Dave Page <[email protected]> wrote:\n> On 7/3/06 18:45, \"Jim C. Nasby\" <[email protected]> wrote:\n> > On Mon, Mar 06, 2006 at 01:14:45PM -0400, Marc G. Fournier wrote:\n> >> We host VPSs here (http://www.hub.org) and don't use the 'single file,\n> >> virtual file system' to put them into ... it must depend on where you\n> >> host?\n> >\n> > Yeah, but aren't you also using FreeBSD jails? AFAIK linux doesn't have\n> > an equivalent to jail; all their VPS stuff actually brings up a\n> > full-blown copy of linux, kernel and all.\n>\n> No, linux vserver is equivalent to a jail - and they work superbly imho.\n> developer.pgadmin.org is just one such VM that I run.\n>\n> http://www.linux-vserver.org/\n>\n> Regards, Dave.\n\nI can confirm this. I've been using linux-vserver for years. 
It is a\nvery up-to-date and active project that is extremely responsive and\nhelpful to users of all experience levels.\n--\nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 7 Mar 2006 20:16:00 -0600", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on VPS - how much is enough?" } ]
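One way to answer the "does it fit in RAM" question from the thread above for an existing database, using the size functions built in from 8.1 onward (nothing here is assumed beyond the stock catalogs):

    -- Total on-disk size of the current database:
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- Largest tables, indexes and TOAST included, biggest first:
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 15;

If the totals stay well under the VPS's memory allocation, the earlier advice applies: a load of roughly 500 orders and up to 1000 reads per day will barely register, and the disk I/O behaviour of the particular VPS technology becomes the main thing to test.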
[ { "msg_contents": "Hello,\n\nWhile doing performance tests on Windows Server 2003 we observed to following \ntwo problems.\n\nEnvironment: J2EE application running in JBoss application server, against \npgsql 8.1 database. Load is caused by a smallish number of (very) complex \ntransactions, typically about 5-10 concurrently.\n\nThe first one, which bothers me the most, is that after about 6-8 hours the \napplication stops processing. No errors are reported, neither by the JDBC \ndriver nor by the server, but when I kill the application server, I see that \nall my connections hang in a SQL statements (which never seem to return):\n\n2006-03-03 08:17:12 4504 6632560 LOG: duration: 45087000.000 ms statement: \nEXECUTE <unnamed> [PREPARE: SELECT objID FROM objects WHERE objID = $1 FOR \nUPDATE]\n\nI think I can reliably reproduce this by loading the app, and waiting a couple \nof hours.\n\n\n\nThe second problem is less predictable:\n\nJDBC exception:\n\nAn I/O error occured while sending to the backend.\norg.postgresql.util.PSQLException: An I/O error occured while sending to the \nbackend.\n at \norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:214)\n at \norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:430)\n at \norg.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:346)\n at \norg.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:250)\n\n\nIn my server log, I have:\n\n2006-03-02 12:31:02 5692 6436342 LOG: could not receive data from client: A \nnon-blocking socket operation could not be completed immediately.\n\nAt the time my box is fairly heavy loaded, but still responsive. Server and \nJBoss appserver live on the same dual 2Ghz Opteron.\n\nA quick Google told me that:\n\n1. More people have seen this.\n2. No solutions.\n3. The server message appears to indicate an unhandled WSAEWOULDBLOCK winsock \nerror on recv(), which MSDN said is to be expected and should be retried.\n\nIs this a known bug?\n\njan \n\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Mon, 6 Mar 2006 09:38:00 -0500", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": true, "msg_subject": "Hanging queries and I/O exceptions" }, { "msg_contents": "I have more information on this issue.\n\nFirst of, the problem now happens after about 1-2 hours, as opposed to the 6-8 \nI mentioned earlier. Yey for shorter test cycles.\n\nFurtermore, it does not happen on Linux machines, both single CPU and dual \nCPU, nor on single CPU windows machines. We can only reproduce on a dual CPU \nwindows machine, and if we take one CPU out, it does not happen.\n\nI executed the following after it hung:\n\ndb=# select l.pid, c.relname, l.mode, l.granted, l.page, l.tuple \nfrom pg_locks l, pg_class c where c.oid = l.relation order by l.pid;\n\nWhich showed me that several transactions where waiting for a particular row \nwhich was locked by another transaction. This transaction had no pending \nlocks (so no deadlock), but just does not complete and hence never \nrelinquishes the lock.\n\nWhat gives? 
has anybody ever heard of problems like this on dual CPU windows \nmachines?\n\njan\n\n\n\nOn Monday 06 March 2006 09:38, Jan de Visser wrote:\n> Hello,\n>\n> While doing performance tests on Windows Server 2003 we observed to\n> following two problems.\n>\n> Environment: J2EE application running in JBoss application server, against\n> pgsql 8.1 database. Load is caused by a smallish number of (very) complex\n> transactions, typically about 5-10 concurrently.\n>\n> The first one, which bothers me the most, is that after about 6-8 hours the\n> application stops processing. No errors are reported, neither by the JDBC\n> driver nor by the server, but when I kill the application server, I see\n> that all my connections hang in a SQL statements (which never seem to\n> return):\n>\n> 2006-03-03 08:17:12 4504 6632560 LOG:  duration: 45087000.000 ms\n>  statement: EXECUTE <unnamed>  [PREPARE:  SELECT objID FROM objects WHERE\n> objID = $1 FOR UPDATE]\n>\n> I think I can reliably reproduce this by loading the app, and waiting a\n> couple of hours.\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Thu, 9 Mar 2006 15:07:04 -0500", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hanging queries on dual CPU windows" }, { "msg_contents": "Jan de Visser <[email protected]> writes:\n> Furtermore, it does not happen on Linux machines, both single CPU and dual \n> CPU, nor on single CPU windows machines. We can only reproduce on a dual CPU \n> windows machine, and if we take one CPU out, it does not happen.\n> ...\n> Which showed me that several transactions where waiting for a particular row \n> which was locked by another transaction. This transaction had no pending \n> locks (so no deadlock), but just does not complete and hence never \n> relinquishes the lock.\n\nIs the stuck transaction still consuming CPU time, or just stopped?\n\nIs it possible to get a stack trace from the stuck process? I dunno\nif you've got anything gdb-equivalent under Windows, but that's the\nfirst thing I'd be interested in ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Mar 2006 15:10:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hanging queries on dual CPU windows " }, { "msg_contents": "On Thursday 09 March 2006 15:10, Tom Lane wrote:\n> Jan de Visser <[email protected]> writes:\n> > Furtermore, it does not happen on Linux machines, both single CPU and\n> > dual CPU, nor on single CPU windows machines. We can only reproduce on a\n> > dual CPU windows machine, and if we take one CPU out, it does not happen.\n> > ...\n> > Which showed me that several transactions where waiting for a particular\n> > row which was locked by another transaction. This transaction had no\n> > pending locks (so no deadlock), but just does not complete and hence\n> > never relinquishes the lock.\n>\n> Is the stuck transaction still consuming CPU time, or just stopped?\n\nCPU drops off. In fact, that's my main clue something's wrong ;-)\n\n>\n> Is it possible to get a stack trace from the stuck process? I dunno\n> if you've got anything gdb-equivalent under Windows, but that's the\n> first thing I'd be interested in ...\n\nI wouldn't know. I'm hardly a windows expert. Prefer not to touch the stuff, \nmyself. 
Can do some research though...\n\n>\n> \t\t\tregards, tom lane\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Thu, 9 Mar 2006 16:15:47 -0500", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hanging queries on dual CPU windows" }, { "msg_contents": "On Thursday 09 March 2006 15:10, Tom Lane wrote:\n> Is it possible to get a stack trace from the stuck process?  I dunno\n> if you've got anything gdb-equivalent under Windows, but that's the\n> first thing I'd be interested in ...\n\nHere ya go:\n\nhttp://www.devisser-siderius.com/stack1.jpg\nhttp://www.devisser-siderius.com/stack2.jpg\nhttp://www.devisser-siderius.com/stack3.jpg\n\nThere are three threads in the process. I guess thread 1 (stack1.jpg) is the \nmost interesting.\n\nI also noted that cranking up concurrency in my app reproduces the problem in \nabout 4 minutes ;-)\n\nWith thanks to Magnus Hagander for the Process Explorer hint.\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Thu, 9 Mar 2006 21:00:27 -0500", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hanging queries on dual CPU windows" } ]
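For anyone trying to reproduce the hang, Jan's pg_locks query can be joined to pg_stat_activity to see what the backend holding the contested lock is doing. A sketch using the 8.1 column names (current_query is only populated when stats_command_string is on):

    SELECT l.pid, c.relname, l.mode, l.granted,
           a.usename, a.query_start, a.current_query
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    JOIN pg_stat_activity a ON a.procpid = l.pid
    ORDER BY l.granted, l.pid;

Ungranted rows at the top are the waiters; the granted row on the same relation identifies the session that never finished.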
[ { "msg_contents": "Hi,\n\nBelow are some results of running pgbench, run on a machine that is doing nothing else than running PostgreSQL woth pgbench. The strange thing is that the results are *constantly alternating* hight (750-850 transactions)and low (50-80 transactions), no matter how many test I run. If I wait a long time (> 5 minutes) after running the test, I always get a hight score, followed by a low one, followed by a high one, low one etc. \n\nI was expecting a low(ish) score the first run (because the tables are not loaded in the cache yet), followed by continues high(ish) scores, but not an alternating pattern. I also did not expect so much difference, given the hardware I have (Dual Opteron, 4GB memory , 3Ware SATA RAID5 with 5 disks, seerate swap and pg_log disks).\n\nAnyone any idea?\n\nResults of pgbench:\n\npostgres@panoramix:/usr/lib/postgresql/8.1/bin$ ./pgbench -c 10 -t 150 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 150\nnumber of transactions actually processed: 1500/1500\ntps = 50.651705 (including connections establishing)\ntps = 50.736338 (excluding connections establishing)\npostgres@panoramix:/usr/lib/postgresql/8.1/bin$ ./pgbench -c 10 -t 150 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 150\nnumber of transactions actually processed: 1500/1500\ntps = 816.972995 (including connections establishing)\ntps = 836.951755 (excluding connections establishing)\npostgres@panoramix:/usr/lib/postgresql/8.1/bin$ ./pgbench -c 10 -t 150 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 150\nnumber of transactions actually processed: 1500/1500\ntps = 42.924294 (including connections establishing)\ntps = 42.986747 (excluding connections establishing)\npostgres@panoramix:/usr/lib/postgresql/8.1/bin$ ./pgbench -c 10 -t 150 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 150\nnumber of transactions actually processed: 1500/1500\ntps = 730.651970 (including connections establishing)\ntps = 748.538852 (excluding connections establishing)\n\n\nTIA\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n", "msg_date": "Mon, 6 Mar 2006 16:29:49 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Can anyone explain this pgbench results?" }, { "msg_contents": "On Mon, Mar 06, 2006 at 04:29:49PM +0100, Joost Kraaijeveld wrote:\n> Below are some results of running pgbench, run on a machine that\n> is doing nothing else than running PostgreSQL woth pgbench. The\n> strange thing is that the results are *constantly alternating* hight\n> (750-850 transactions)and low (50-80 transactions), no matter how\n> many test I run. If I wait a long time (> 5 minutes) after running\n> the test, I always get a hight score, followed by a low one, followed\n> by a high one, low one etc.\n\nThe default checkpoint_timeout is 300 seconds (5 minutes). Is it\ncoincidence that the \"long time\" between fast results is about the\nsame? What's your setting? 
Are your test results more consistent\nif you execute CHECKPOINT between them?\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 6 Mar 2006 10:47:42 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can anyone explain this pgbench results?" }, { "msg_contents": "On Mon, Mar 06, 2006 at 04:29:49PM +0100, Joost Kraaijeveld wrote:\n> I was expecting a low(ish) score the first run (because the tables are not loaded in the cache yet), followed by continues high(ish) scores, but not an alternating pattern. I also did not expect so much difference, given the hardware I have (Dual Opteron, 4GB memory , 3Ware SATA RAID5 with 5 disks, seerate swap and pg_log disks).\n\nOn a side-note:\nRAID5 and databases generally don't mix well.\n\nMost people find that pg_xlog will live happily with the OS; it's the\ndata files that need the most bandwidth.\n\nIf you start swapping, performance will tank to the point that it's\nunlikely that swap being on seperate disks will help at all. Better off\nto just keep it with the OS and use the disks for the database tables.\n\nSpeaking of 'disks', what's your exact layout? Do you have a 5 drive\nraid5 for the OS and the database, 1 drive for swap and 1 drive for\npg_xlog?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Mar 2006 13:09:34 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can anyone explain this pgbench results?" } ]
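A concrete form of Michael's checkpoint suggestion, runnable from psql between pgbench runs (setting names as of 8.1):

    -- Does the ~5 minute pattern line up with the checkpoint settings?
    SHOW checkpoint_timeout;
    SHOW checkpoint_segments;

    -- Force a checkpoint so the next run does not absorb one mid-test:
    CHECKPOINT;

If the high/low alternation disappears once a CHECKPOINT is issued before each run, the slow runs were simply paying for the dirty buffers left behind by the previous fast run.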