[ { "msg_contents": "Hi,\nI am having problems with my master database now. It used to work\nextremely good just two days ago, but then I started playing with\nadding/dropping schemas and added another database and performance went\ndown.\n\nI also have a replica database on the same server and when I run the\nsame query on it, it runs good.\nInterestingly the planner statistics for this query are the same on the\nmaster and replica databases.\nHowever, query execution on the master database is about 4min. and on\nthe replica database is 6 sec.!\n\nI VACUUM ANALYZED both databases and made sure they have same indexes on\nthe tables.\nI don't know where else to look, but here are the schemas I have on the\nmaster and replica database. The temp schemas must be the ones that I\ncreated and then dropped.\nmaster=# select * from pg_namespace;\nnspname | nspowner | nspacl\n------------+----------+--------\npg_catalog | 1 | {=U}\npg_toast | 1 | {=}\npublic | 1 | {=UC}\npg_temp_1 | 1 | \npg_temp_3 | 1 | \npg_temp_10 | 1 | \npg_temp_28 | 1 | \nreplica=> select * from pg_namespace;\nnspname | nspowner | nspacl \n------------+----------+--------\npg_catalog | 1 | {=U}\npg_toast | 1 | {=}\npublic | 1 | {=UC}\npg_temp_1 | 1 | \npg_temp_39 | 1 | \nindia | 105 | \n\nHere is the query:\nSELECT * FROM media m, speccharacter c \nWHERE m.mediatype IN (SELECT objectid FROM mediatype WHERE\nmedianame='Audio') \nAND m.mediachar = c.objectid \nAND (m.activity='178746' \n\tOR \n\t\t(EXISTS (SELECT ism.objectid \n\t\tFROM intsetmedia ism, set s \n\t\tWHERE ism.set = s.objectid \n\t\tAND ism.media = m.objectid AND s.activity='178746' )\n\t\t) \n\tOR \n\t\t(EXISTS (SELECT dtrm.objectid \n\t\tFROM dtrowmedia dtrm, dtrow dtr, dtcol dtc, datatable dt\n\n\t\tWHERE dtrm.dtrow = dtr.objectid \n\t\tAND dtrm.media = m.objectid \n\t\tAND dtr.dtcol = dtc.objectid \n\t\tAND dtc.datatable = dt.objectid \n\t\tAND dt.activity = '178746')\n\t\t)\n\t) \nORDER BY medianame ASC, status DESC;\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************\n\n", "msg_date": "Fri, 21 Feb 2003 09:30:53 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "slow query" }, { "msg_contents": "Oleg Lebedev <[email protected]> writes:\n> Interestingly the planner statistics for this query are the same on the\n> master and replica databases.\n> However, query execution on the master database is about 4min. and on\n> the replica database is 6 sec.!\n\nWhat does EXPLAIN ANALYZE show in each case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Feb 2003 21:20:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query " }, { "msg_contents": "Oleg,\n\n> I VACUUM ANALYZED both databases and made sure they have same indexes on\n> the tables.\n\nHave you VACUUM FULL the main database? And how about REINDEX? 
\n\n> Here is the query:\n> SELECT * FROM media m, speccharacter c\n> WHERE m.mediatype IN (SELECT objectid FROM mediatype WHERE\n> medianame='Audio')\n\nThe above should use an EXISTS clause, not IN, unless you are absolutely sure \nthat the subquery will never return more than 12 rows.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Feb 2003 12:52:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Sun, 2003-02-23 at 13:52, Josh Berkus wrote:\n> Oleg,\n> \n> > I VACUUM ANALYZED both databases and made sure they have same indexes on\n> > the tables.\n> \n> Have you VACUUM FULL the main database? And how about REINDEX? \n> \n> > Here is the query:\n> > SELECT * FROM media m, speccharacter c\n> > WHERE m.mediatype IN (SELECT objectid FROM mediatype WHERE\n> > medianame='Audio')\n> \n> The above should use an EXISTS clause, not IN, unless you are absolutely sure \n> that the subquery will never return more than 12 rows.\n\nI am assuming you said this because EXISTS is faster for > 12 rows? \nInteresting :)\n\nthanks,\n\n- Ryan\n\n\n-- \nRyan Bradetich <[email protected]>\n\n", "msg_date": "23 Feb 2003 14:05:34 -0700", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Ryan,\n\n> > The above should use an EXISTS clause, not IN, unless you are absolutely\n> > sure that the subquery will never return more than 12 rows.\n>\n> I am assuming you said this because EXISTS is faster for > 12 rows?\n> Interesting :)\n\nThat's my rule of thumb, *NOT* any kind of relational-calculus-based truth.\n\nBasically, one should only use IN for a subquery when one is *absolutely* sure \nthat the subquery will only return a handful of records, *and* the subquery \ndoesn't have to do an complex work like aggregating or custom function \nevaluation. \n\nYou're safer using EXISTS for all subqueries, really.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Feb 2003 13:47:58 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> I am assuming you said this because EXISTS is faster for > 12 rows?\n\n> That's my rule of thumb, *NOT* any kind of relational-calculus-based truth.\n\nKeep in mind also that the tradeoffs will change quite a lot when PG 7.4\nhits the streets, because the optimizer has gotten a lot smarter about\nhow to handle IN, but no smarter about EXISTS. 
Here's one rather silly\nexample using CVS tip:\n\nregression=# explain analyze select * from tenk1 a where\nregression-# unique1 in (select hundred from tenk1 b);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=486.32..504.11 rows=100 width=248) (actual time=453.19..468.86 rows=100 loops=1)\n Merge Cond: (\"outer\".unique1 = \"inner\".hundred)\n -> Index Scan using tenk1_unique1 on tenk1 a (cost=0.00..1571.87 rows=10000 width=244) (actual time=0.12..5.25 rows=101 loops=1)\n -> Sort (cost=486.32..486.57 rows=100 width=4) (actual time=452.91..453.83 rows=100 loops=1)\n Sort Key: b.hundred\n -> HashAggregate (cost=483.00..483.00 rows=100 width=4) (actual time=447.59..449.80 rows=100 loops=1)\n -> Seq Scan on tenk1 b (cost=0.00..458.00 rows=10000 width=4) (actual time=0.06..276.47 rows=10000 loops=1)\n Total runtime: 472.06 msec\n(8 rows)\n\nregression=# explain analyze select * from tenk1 a where\nregression-# exists (select 1 from tenk1 b where b.hundred = a.unique1);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tenk1 a (cost=0.00..35889.66 rows=5000 width=244) (actual time=3.69..1591.78 rows=100 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using tenk1_hundred on tenk1 b (cost=0.00..354.32 rows=100 width=0) (actual time=0.10..0.10 rows=0 loops=10000)\n Index Cond: (hundred = $0)\n Total runtime: 1593.88 msec\n(6 rows)\n\nThe EXISTS case takes about the same time in 7.3, but the IN case is off\nthe charts (I got bored of waiting after 25 minutes...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Feb 2003 21:28:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query " } ]
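As a concrete follow-up to the IN-versus-EXISTS discussion in this thread, here is a rough sketch of three equivalent ways to express the mediatype filter from the original query. Table and column names are taken from the posted query; the fragments are untested, and as Tom Lane notes the relative performance changes a lot between 7.2/7.3 and the 7.4 development tree.

    -- 1. uncorrelated IN subquery (the original form)
    SELECT * FROM media m, speccharacter c
    WHERE m.mediatype IN (SELECT objectid FROM mediatype WHERE medianame = 'Audio')
      AND m.mediachar = c.objectid;

    -- 2. correlated EXISTS, the usual pre-7.4 recommendation
    SELECT * FROM media m, speccharacter c
    WHERE EXISTS (SELECT 1 FROM mediatype mt
                  WHERE mt.objectid = m.mediatype
                    AND mt.medianame = 'Audio')
      AND m.mediachar = c.objectid;

    -- 3. plain join, which the original poster later reports as the fastest here
    SELECT * FROM media m, speccharacter c, mediatype mt
    WHERE mt.objectid = m.mediatype
      AND mt.medianame = 'Audio'
      AND m.mediachar = c.objectid;

The remaining conditions of the original query (the activity filter and the two EXISTS branches) are unchanged in every variant and are omitted here for brevity.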
[ { "msg_contents": "Hi,\n\nwe want to migrate from MS SQL Server (windows2000)\nto PostgreSQL (Linux) :-))\nand we want to use the old MSSQL Hardware.\n\nDual Pentium III 800\n1 GB RAM\n2 IDE 10 GB\n2 RAID Controller (RAID 0,1 aviable) with 2 9GB SCSI HDD\n1 RAID Controller (RAID 0,1,5 aviable) with 3 18GB SCSI HDD\n\nThe configuration for MS-SQL was this:\nOS on the 2 IDE Harddisks with Software-RAID1\nSQL-Data on RAID-Controller with RAID-5 (3 x 18GB SCSI Harddisks)\nSQL-TempDB on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\nSQL-TransactionLog on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n\nCan i make a similar configuration with PostgreSQL?\nOr what is the prefered fragmentation for\noperatingsystem, swap-partition, data, indexes, tempdb and transactionlog?\n\nWhat is pg_xlog and how important is it?\n\nWhat ist the prefered filesystem (ext2, ext3 or raiserfs)?\n\nWe want to use about 20 databases with varios size from 5 MB to 500MB per\ndatabase\nand more selects than inserts (insert/select ratio about 1/10) for fast\nwebaccess.\n\nThank you for your hints!\n\nBye,\nMario\n", "msg_date": "Mon, 24 Feb 2003 10:52:55 +0100", "msg_from": "\"Schaefer, Mario\" <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning os swap data log tempdb" }, { "msg_contents": "On 24 Feb 2003 at 10:52, Schaefer, Mario wrote:\n\n> Hi,\n> \n> we want to migrate from MS SQL Server (windows2000)\n> to PostgreSQL (Linux) :-))\n> and we want to use the old MSSQL Hardware.\n> \n> Dual Pentium III 800\n> 1 GB RAM\n> 2 IDE 10 GB\n> 2 RAID Controller (RAID 0,1 aviable) with 2 9GB SCSI HDD\n> 1 RAID Controller (RAID 0,1,5 aviable) with 3 18GB SCSI HDD\n> \n> The configuration for MS-SQL was this:\n> OS on the 2 IDE Harddisks with Software-RAID1\n> SQL-Data on RAID-Controller with RAID-5 (3 x 18GB SCSI Harddisks)\n> SQL-TempDB on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n> SQL-TransactionLog on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n> \n> Can i make a similar configuration with PostgreSQL?\n> Or what is the prefered fragmentation for\n> operatingsystem, swap-partition, data, indexes, tempdb and transactionlog?\n\nHmm.. You can put your OS on IDE and databases on 3x18GB SCSI. Postgresql can \nnot split data/indexes/tempdb etc. So they will be on one drive. You don't have \nmuch of a choice here.\n\n> What is pg_xlog and how important is it?\n\nIt is transaction log. It is hit every now and then for insert/update/deletes. \nSymlinking it to a separate drive would be a great performance boost. Put it on \nthe other SCSI disk. AFAIK, it is a single file. I suggest you put WAL and xlog \non other 2x9GB SCSI drive. You need to shut down postgresql after schema \ncreation and symlink the necessary files by hand. If postgresql ever recreates \nthese files/directories by hand, it will drop the symlinks and recreate the \nfiles. In that case you need to redo the exercise of symlinkg.\n\n \n> What ist the prefered filesystem (ext2, ext3 or raiserfs)?\n\nreiserfs or XFS.\n \n> We want to use about 20 databases with varios size from 5 MB to 500MB per\n> database\n> and more selects than inserts (insert/select ratio about 1/10) for fast\n> webaccess.\n\nshouldn't be a problem. Tune shared_buffers around 150-250MB. Beef up FSM \nentries, sort mem and vacuum regularly. 
\n\nHTH\nBye\n Shridhar\n\n--\nSlous' Contention:\tIf you do a job too well, you'll get stuck with it.\n\n", "msg_date": "Mon, 24 Feb 2003 16:02:17 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning os swap data log tempdb" }, { "msg_contents": "On Mon, Feb 24, 2003 at 10:52:55AM +0100, Schaefer, Mario wrote:\n> Can i make a similar configuration with PostgreSQL?\n\nSort of. PostgreSQL currently does not have a convenient way to\nspecify where this or that part of the database lives. As a result,\nyour best bet is to use RAID 1+0 for the data area, and get the speed\nthat way. If you must do it without more hardware, however, you can\nmanually move some files to other drives and symlink them. You\n_must_ do this while offline, and if the file grows above 1G, the\nadvantage will be lost.\n\nIt is nevertheless a good idea to put your OS, data directory, and\nwrite head log (WAL) on separate disks. Also, you should make sure\nyour PostgreSQL logs don't interfere with the database I/O (the OS\ndisk is probably a good place for them, but make sure you use some\nsort of log rotator. Syslog is helpful here).\n\n> What is pg_xlog and how important is it?\n\nIt's the write ahead log. Put it on a separate RAID.\n\n> What ist the prefered filesystem (ext2, ext3 or raiserfs)?\n\nCertainly ext2 is not crash-safe. There've been some reports of\ncorruption under reiserfs, but I've never had it happen. There have\nbeen complaints about performance with ext3. You might want to\ninvestigate XFS, as it was designed for this sort of task.\n\n> We want to use about 20 databases with varios size from 5 MB to 500MB per\n> database\n> and more selects than inserts (insert/select ratio about 1/10) for fast\n> webaccess.\n\nThe WAL is less critical in this case, because it is only extended\nwhen you change the data, not when you select.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 24 Feb 2003 07:15:00 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning os swap data log tempdb" }, { "msg_contents": "On Mon, 24 Feb 2003, Schaefer, Mario wrote:\n\n> Hi,\n> \n> we want to migrate from MS SQL Server (windows2000)\n> to PostgreSQL (Linux) :-))\n> and we want to use the old MSSQL Hardware.\n> \n> Dual Pentium III 800\n> 1 GB RAM\n> 2 IDE 10 GB\n> 2 RAID Controller (RAID 0,1 aviable) with 2 9GB SCSI HDD\n> 1 RAID Controller (RAID 0,1,5 aviable) with 3 18GB SCSI HDD\n> \n> The configuration for MS-SQL was this:\n> OS on the 2 IDE Harddisks with Software-RAID1\n> SQL-Data on RAID-Controller with RAID-5 (3 x 18GB SCSI Harddisks)\n> SQL-TempDB on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n> SQL-TransactionLog on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n> \n> Can i make a similar configuration with PostgreSQL?\n> Or what is the prefered fragmentation for\n> operatingsystem, swap-partition, data, indexes, tempdb and transactionlog?\n> \n> What is pg_xlog and how important is it?\n> \n> What ist the prefered filesystem (ext2, ext3 or raiserfs)?\n> \n> We want to use about 20 databases with varios size from 5 MB to 500MB per\n> database\n> and more selects than inserts (insert/select ratio about 1/10) for fast\n> webaccess.\n\nWith that ratio of writers to readers, you may find a big RAID5 works as \nwell as anything.\n\nAlso, you don't mention what RAID 
controllers you're using. If they're \nreal low end stuff like adaptec 133s, then you're better off just using \nthem as straight scsi cards under linux and letting the kernel do the \nwork.\n\nCan you create RAID arrays across multiple RAID cards on that setup? if \nso, a big RAID-5 with 4 9 gigs and 3 more 9 gigs from the other 3 drives \nmight be your fastest storage. That's 36 gigs of storage across 7 \nspindles, which means good parallel read access.\n\nHow many simo users are you expecting?\n\n", "msg_date": "Mon, 24 Feb 2003 11:11:33 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning os swap data log tempdb" }, { "msg_contents": "Mario,\n\n> we want to migrate from MS SQL Server (windows2000)\n> to PostgreSQL (Linux) :-))\n> and we want to use the old MSSQL Hardware.\n\nI don't blame you. I just finished an MSSQL project. BLEAH!\n\n> The configuration for MS-SQL was this:\n> OS on the 2 IDE Harddisks with Software-RAID1\n> SQL-Data on RAID-Controller with RAID-5 (3 x 18GB SCSI Harddisks)\n> SQL-TempDB on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n> SQL-TransactionLog on RAID-Controller with RAID-1 (2 x 9GB SCSI Harddisk)\n>\n> Can i make a similar configuration with PostgreSQL?\n\nYes. Many of the concerns are the same. However, 3-disk RAID 5 performs \npoorly for UPDATES for PostgreSQL. That is, while reads are often better \nthan for a single SCSI disk, UPDATES happen at half or less of the speed than \nthey would do on a SCSI disk alone.\n\nThere is no TempDB in PostgreSQL. This gives you a whole free RAID array to \nplay with.\n\n> Or what is the prefered fragmentation for\n> operatingsystem, swap-partition, data, indexes, tempdb and transactionlog?\n>\n> What is pg_xlog and how important is it?\n\nIt is analogous to the SQL Transaction Log, although it does not need to be \nbacked up to truncate it. Deal with it the way you would deal with an MSSQL \ntransaction log; i.e. on its own disk, if possible. However, you gain little \nby putting it on RAID other than failover safety; in fact, you might find \nthat the xlog peforms better on a lone SCSI disk since even the best RAID 1 \nwill slow down data writes by up to 15%.\n\nSwap is not such a concern for PostgreSQL on Linux or BSD. With proper \ndatabase tuning and a GB of memory, you will never use the swap. Or to put \nit another way, if you're seeing regular hits on swap, you need to re-tune \nyour database.\n\nFinally, you want to make absolutely sure that either the write-through cache \non each RAID controller is disabled in the BIOS, or that you have a battery \nback-up which you trust 100%. Otherwise, the caching done by the RAID \ncontrollers will cancel completely the benefit of the Xlog for database \nrecovery.\n\n> What ist the prefered filesystem (ext2, ext3 or raiserfs)?\n\nThat's a matter of open debate. I like Reiser. Ext3 has its proponents, as \ndoes XFS. Ext2 is probably faster than all of the above ... provided that \nyour machine never has an unexpected shutdown. Then Ext2 is very bad ar \nrecovering from power-outs ...\n\n> We want to use about 20 databases with varios size from 5 MB to 500MB per\n> database\n> and more selects than inserts (insert/select ratio about 1/10) for fast\n> webaccess.\n\nKeep in mind that unlike SQL Server, you cannot easily query between databases \non PostgreSQL. 
So if those databases are all related, you probably want to \nput them in the same PostgreSQL 7.3.2 database and use schema instead.\n\nIf the databases are for different users and completely seperate, you'll want \nto read up heavily on Postgres' security model, which has been significantly \nimproved for 7.3.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Feb 2003 11:00:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning os swap data log tempdb" } ]
[ { "msg_contents": "Thanks everybody for your help.\nVACUUM FULL did the job, and now the query performance is the same in\nboth databases. I am surprised that FULL option makes such a dramatic\nchange to the query performance: from 4min. to 5sec.!!! It also changed\nplanner stats from ~9 sec to ~8sec.\nI haven't tried to REINDEX yet, though.\nRegarding IN vs. EXISTS. The sub-query in the IN clause will always\nreturn fewer records than 12. I tried using EXISTS instead of IN with\nPostgres7.2.1 and it slowed down query performance. With postgres 7.3,\nwhen I use EXISTS instead of IN the planner returns the same stats and\nquery performance does not improve. However, if I use \nm.mediatype=(SELECT objectid FROM mediatype WHERE medianame='Audio')\nthe planner returns ~7 sec., which is the same as if I the query is\nchanged like this:\nSELECT * FROM media m, speccharacter c, mediatype mt\nWHERE mt.objectid=m.mediatype and mt.medianame='Audio'\n...\nSo, using JOIN and =(SELECT ...) is better than using IN and EXISTS in\nthis case.\nThanks.\n\nOleg\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Sunday, February 23, 2003 1:53 PM\nTo: Oleg Lebedev; [email protected]\nSubject: Re: [PERFORM] slow query\nImportance: Low\n\n\nOleg,\n\n> I VACUUM ANALYZED both databases and made sure they have same indexes \n> on the tables.\n\nHave you VACUUM FULL the main database? And how about REINDEX? \n\n> Here is the query:\n> SELECT * FROM media m, speccharacter c\n> WHERE m.mediatype IN (SELECT objectid FROM mediatype WHERE\n> medianame='Audio')\n\nThe above should use an EXISTS clause, not IN, unless you are absolutely\nsure \nthat the subquery will never return more than 12 rows.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************\n\n", "msg_date": "Mon, 24 Feb 2003 08:59:11 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "On Mon, 2003-02-24 at 10:59, Oleg Lebedev wrote:\n> Thanks everybody for your help.\n> VACUUM FULL did the job, and now the query performance is the same in\n> both databases. I am surprised that FULL option makes such a dramatic\n> change to the query performance: from 4min. to 5sec.!!! It also changed\n> planner stats from ~9 sec to ~8sec.\n\nIf your seeing wildly dramatic improvments from vacuum full, you might\nwant to look into running regular vacuums more often (especially for\nhigh turnover tables), increase your max_fsm_relations to 1000, and\nincreasing your max_fsm_pages. 
\n\nRobert Treat\n\n\n", "msg_date": "24 Feb 2003 11:58:33 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On 24 Feb 2003, Robert Treat wrote:\n\n> \n> If your seeing wildly dramatic improvments from vacuum full, you might\n> want to look into running regular vacuums more often (especially for\n> high turnover tables), increase your max_fsm_relations to 1000, and\n> increasing your max_fsm_pages. \n\nI don't know about the settings you mention, but a frequent vacuum\ndoes not at all obviate a vacuum full. My database is vacuumed every\nnight, but a while ago I found that a vacuum full changed a simple\nsingle-table query from well over 30 seconds to one or two. We now\ndo a vacuum full every night.\n\n\n", "msg_date": "Mon, 24 Feb 2003 09:27:56 -0800 (PST)", "msg_from": "Clarence Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Mon, 2003-02-24 at 12:27, Clarence Gardner wrote:\n> On 24 Feb 2003, Robert Treat wrote:\n> \n> > \n> > If your seeing wildly dramatic improvments from vacuum full, you might\n> > want to look into running regular vacuums more often (especially for\n> > high turnover tables), increase your max_fsm_relations to 1000, and\n> > increasing your max_fsm_pages. \n> \n> I don't know about the settings you mention, but a frequent vacuum\n> does not at all obviate a vacuum full. My database is vacuumed every\n> night, but a while ago I found that a vacuum full changed a simple\n> single-table query from well over 30 seconds to one or two. We now\n> do a vacuum full every night.\n> \n\nActually if you are vacuuming frequently enough, it can (and should*)\nobviate a vacuum full. Be aware that frequently enough might mean really\nfrequent, for instance I have several tables in my database that update\nevery row within a 15 minute timeframe, so I run a \"lazy\" vacuum on\nthese tables every 10 minutes. This allows postgresql to reuse the space\nfor these tables almost continuously so I never have to vacuum full\nthem. \n\n(* this assumes your max_fsm_pages/relations settings are set correctly)\n\nRobert Treat\n\n\n", "msg_date": "24 Feb 2003 13:30:34 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Mon, 2003-02-24 at 12:27, Clarence Gardner wrote:\n> On 24 Feb 2003, Robert Treat wrote:\n> \n> > \n> > If your seeing wildly dramatic improvments from vacuum full, you might\n> > want to look into running regular vacuums more often (especially for\n> > high turnover tables), increase your max_fsm_relations to 1000, and\n> > increasing your max_fsm_pages. \n> \n> I don't know about the settings you mention, but a frequent vacuum\n> does not at all obviate a vacuum full. My database is vacuumed every\n> night, but a while ago I found that a vacuum full changed a simple\n> single-table query from well over 30 seconds to one or two. We now\n> do a vacuum full every night.\n\nSure is does. If your free-space-map (FSM) is up to date, tuples are\nnot appended to the end of the table, so table growth will not occur\n(beyond a 'settling' point.\n\nUnless you remove more data from the table than you ever expect to have\nin the table again, this is enough. 
That said, the instant the FSM is\nempty, you're table starts to grow again and a VACUUM FULL will be\nrequired to re-shrink it.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "24 Feb 2003 13:39:12 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Robert,\n\n> Actually if you are vacuuming frequently enough, it can (and should*)\n> obviate a vacuum full. Be aware that frequently enough might mean really\n> frequent, for instance I have several tables in my database that update\n> every row within a 15 minute timeframe, so I run a \"lazy\" vacuum on\n> these tables every 10 minutes. This allows postgresql to reuse the space\n> for these tables almost continuously so I never have to vacuum full\n> them.\n\nThis would assume absolutely perfect FSM settings, and that the DB never gets \nthrown off by unexpected loads. I have never been so fortunate as to work \nwith such a database. However, I agree that good FSM tuning and frequent \nregular VACUUMs can greatly extend the period required for running FULL.\n\nI have not found, though, that this does anything to prevent the need for \nREINDEX on frequently-updated tables. How about you, Robert?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Feb 2003 10:45:20 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Mon, Feb 24, 2003 at 09:27:56AM -0800, Clarence Gardner wrote:\n> \n> I don't know about the settings you mention, but a frequent vacuum\n> does not at all obviate a vacuum full. My database is vacuumed every\n> night, but a while ago I found that a vacuum full changed a simple\n> single-table query from well over 30 seconds to one or two. We now\n> do a vacuum full every night.\n\nThis probably means that either some of your FSM settings should be\ndifferent, or that you have long-running queries, or both. _Some_\nadvantage to vacuum full is expected, but 30 seconds to one or two is\na pretty big deal.\n\nExcept in cases where a large percentage of the table is a vacuum\ncandidate, the standard vacuum should be more than adequate. But\nthere are a couple of gotchas.\n\nYou need to have room in your free space map to hold information\nabout the bulk of the to-be-freed tables. So perhaps your FSM\nsettings are not big enough, even though you tried to set them\nhigher. (Of course, if you're replacing, say, more than half the\ntable, setting the FSM high enough isn't practical.)\n\nAnother possibility is that you have multiple long-running\ntransactions that are keeping non-blocking vacuum from being very\neffective. Since those transactions are possibly referencing old\nversions of a row, when the non-blocking vacuum comes around, it just\nskips the \"dead\" tuples which are nevertheless alive to someone. \n(You can see the effect of this by using the contrib/pgstattuple\nfunction). Blocking vacuum doesn't have this problem, because it\njust waits on the table until everything ahead of it has committed or\nrolled back. So you pay in wait time for all transactions during the\nvacuum. (I have encountered this very problem on a table which gets\na lot of update activity on just on just one row. 
Even vacuuming\nevery minute, the table grew and grew, because of another misbehaving\napplication which was keeping a transaction open when it shouldn't\nhave.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 24 Feb 2003 13:51:25 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "On Mon, 2003-02-24 at 13:45, Josh Berkus wrote:\n> Robert,\n> \n> > Actually if you are vacuuming frequently enough, it can (and should*)\n> > obviate a vacuum full. Be aware that frequently enough might mean really\n> > frequent, for instance I have several tables in my database that update\n> > every row within a 15 minute timeframe, so I run a \"lazy\" vacuum on\n> > these tables every 10 minutes. This allows postgresql to reuse the space\n> > for these tables almost continuously so I never have to vacuum full\n> > them.\n> \n> This would assume absolutely perfect FSM settings, and that the DB never gets \n> thrown off by unexpected loads. I have never been so fortunate as to work \n> with such a database. However, I agree that good FSM tuning and frequent \n> regular VACUUMs can greatly extend the period required for running FULL.\n> \n\nIt's somewhat relative. On one of my tables, it has about 600 rows, each\nrow gets updated within 15 minutes. I vacuum it every 10 minutes, which\nshould leave me with around 1000 tuples (dead and alive) for that table,\nEven if something overload the updates on that table, chances are that I\nwouldn't see enough of a performance drop to warrant a vacuum full. Of\ncourse, its a small table, so YMMV. I think the point is though that if\nyour running nightly vacuum fulls just to stay ahead of the game, your\nnot maintaining the database optimally.\n\n> I have not found, though, that this does anything to prevent the need for \n> REINDEX on frequently-updated tables. How about you, Robert?\n> \n\nWell, this touches on a different topic. On those tables where I get\n\"index bloat\", I do have to do REINDEX's. But that's not currently\nsolvable with vacuum (remember indexes dont even use FSM) though IIRC\nTom & Co. have done some work toward this for 7.4\n\nRobert Treat\n\n", "msg_date": "24 Feb 2003 14:58:57 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> ... However, I agree that good FSM tuning and frequent \n> regular VACUUMs can greatly extend the period required for running FULL.\n\n> I have not found, though, that this does anything to prevent the need for \n> REINDEX on frequently-updated tables. How about you, Robert?\n\nAs of 7.3, FSM doesn't have anything to do with indexes. If you have\nindex bloat, it's because of the inherent inability of btree indexes to\nreuse space when the data distribution changes over time. (Portions of\nthe btree may become empty, but they aren't recycled.) You'll\nparticularly get burnt by indexes that are on OIDs or sequentially\nassigned ID numbers, since the set of IDs in use just naturally tends to\nmigrate higher over time. I don't think that the update rate per se has\nmuch to do with this, it's the insertion of new IDs and deletion of old\nones that causes the statistical shift. The tree grows at the right\nedge, but doesn't shrink at the left.\n\nAs of CVS tip, however, the situation is different ;-). 
Btree indexes\nwill recycle space using FSM in 7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Feb 2003 15:50:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query " }, { "msg_contents": "On Mon, 2003-02-24 at 15:50, Tom Lane wrote:\n> You'll\n> particularly get burnt by indexes that are on OIDs or sequentially\n> assigned ID numbers, since the set of IDs in use just naturally tends to\n> migrate higher over time. I don't think that the update rate per se has\n> much to do with this, it's the insertion of new IDs and deletion of old\n> ones that causes the statistical shift. \n\nWould it be safe to say that tables with high update rates where the\nupdates do not change the indexed value would not suffer from index\nbloat? For example updates to non-index columns or updates that\noverwrite, but don't change the value of indexed columns; do these even\nneed to touch the index? \n\nRobert Treat\n\n", "msg_date": "25 Feb 2003 10:28:02 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> Would it be safe to say that tables with high update rates where the\n> updates do not change the indexed value would not suffer from index\n> bloat?\n\nI would expect not. If you vacuum often enough to keep the main table\nsize under control, the index should stay under control too.\n\n> For example updates to non-index columns or updates that\n> overwrite, but don't change the value of indexed columns; do these even\n> need to touch the index? \n\nYes, they do. Think MVCC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Feb 2003 11:07:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query " }, { "msg_contents": "On Tue, 25 Feb 2003, Tom Lane wrote:\n\n> Robert Treat <[email protected]> writes:\n> > Would it be safe to say that tables with high update rates where the\n> > updates do not change the indexed value would not suffer from index\n> > bloat?\n> \n> I would expect not. If you vacuum often enough to keep the main table\n> size under control, the index should stay under control too.\n\n\tYes and No, From what I can work out the index is a tree and if \nmany updates occus the tree can become unbalanced (eventually it get so \nunbalanced its no better than a seq scan.... Its not big its just all the \ndata is all on one side of the tree. Which is why Reindexing is a good \nplan. What is really needed is a quicker way of rebalancing the tree. So \nthe database notices when the index is unbalanced and picks a new root \nnode and hangs the old root to that. (Makes for a very intresting \nalgorithim if I remeber my University lectures....)\n\tNow I'm trying to sort out a very large static table that I've \njust finished updating. I am beginning to think that the quickest way of \nsorting it out is to dump and reload it. But I'm trying a do it in place \nmethod. (Of Reindex it, vaccum full analyse) but what is the correct order \nto do this in?\n\nReindex, Vaccum\n\nor \n\nVaccum, Reindex.\n\nPeter Childs\n\n> \n> > For example updates to non-index columns or updates that\n> > overwrite, but don't change the value of indexed columns; do these even\n> > need to touch the index? \n> \n> Yes, they do. 
Think MVCC.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n", "msg_date": "Tue, 25 Feb 2003 16:36:11 +0000 (GMT)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query " } ]
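To make the maintenance advice in this thread concrete, here is a minimal sketch of the kind of routine that keeps a high-turnover table from ever needing VACUUM FULL. The table name is hypothetical, and pgstattuple is the contrib function mentioned above, whose exact output format depends on the contrib version installed.

    -- frequent, non-blocking vacuum of a high-turnover table (e.g. from cron)
    VACUUM ANALYZE hot_table;

    -- check how much dead space the table is carrying (contrib/pgstattuple)
    SELECT pgstattuple('hot_table');

    -- occasional verbose run: the reported page counts help size max_fsm_pages
    VACUUM VERBOSE hot_table;

If the dead-tuple figure keeps climbing despite frequent vacuums, the usual suspects are a free space map that is too small or, as Andrew Sullivan describes, a long-running transaction keeping old row versions visible.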
[ { "msg_contents": "I run VACUUM (not FULL though) every night, which I thought would be\nenough for good query performance. Moreover, the data in table does not\nreally change significantly since database usage is pretty low right\nnow, therefore I thought that VACUUM FULL was an overkill.\nI think that creating, populating and dropping schemas in the master\ndatabase could have affected the query performance and required VACUUM\nFULL.\nI will definitely look at max_fsm_relations and max_fsm_pages parameter\nsettings.\nThank you.\n\nOleg Lebedev\n \n\n\n-----Original Message-----\nFrom: Robert Treat [mailto:[email protected]] \nSent: Monday, February 24, 2003 9:59 AM\nTo: Oleg Lebedev\nCc: Josh Berkus; [email protected]\nSubject: Re: [PERFORM] slow query\n\n\nOn Mon, 2003-02-24 at 10:59, Oleg Lebedev wrote:\n> Thanks everybody for your help.\n> VACUUM FULL did the job, and now the query performance is the same in \n> both databases. I am surprised that FULL option makes such a dramatic \n> change to the query performance: from 4min. to 5sec.!!! It also \n> changed planner stats from ~9 sec to ~8sec.\n\nIf your seeing wildly dramatic improvments from vacuum full, you might\nwant to look into running regular vacuums more often (especially for\nhigh turnover tables), increase your max_fsm_relations to 1000, and\nincreasing your max_fsm_pages. \n\nRobert Treat\n\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************\n\n", "msg_date": "Mon, 24 Feb 2003 10:07:03 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" } ]
[ { "msg_contents": "What about REINDEXing and other DB tricks?\nDo you use any of these regularly?\nThanks.\n\nOleg\n\n-----Original Message-----\nFrom: Clarence Gardner [mailto:[email protected]] \nSent: Monday, February 24, 2003 10:28 AM\nTo: Robert Treat\nCc: Oleg Lebedev; Josh Berkus; [email protected]\nSubject: Re: [PERFORM] slow query\n\n\nOn 24 Feb 2003, Robert Treat wrote:\n\n> \n> If your seeing wildly dramatic improvments from vacuum full, you might\n\n> want to look into running regular vacuums more often (especially for \n> high turnover tables), increase your max_fsm_relations to 1000, and \n> increasing your max_fsm_pages.\n\nI don't know about the settings you mention, but a frequent vacuum does\nnot at all obviate a vacuum full. My database is vacuumed every night,\nbut a while ago I found that a vacuum full changed a simple single-table\nquery from well over 30 seconds to one or two. We now do a vacuum full\nevery night.\n\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************\n\n", "msg_date": "Mon, 24 Feb 2003 10:44:57 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" } ]
[ { "msg_contents": "Folks:\n\nI'm not clear on where the memory needed by max_fsm_pages comes from. Is it \ntaken out of the shared_buffers, or in addition to them?\n\nFurther, Joe Conway gave me a guesstimate of 6k per max_fsm_pages which seems \nrather high ... in fact, the default settings for this value (10000) would \nswamp the memory used by the rest of Postgres. Does anyone have a good \nmeasurment of the memory load imposed by higher FSM settings?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Feb 2003 10:28:00 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Memory taken by FSM_relations" }, { "msg_contents": "Josh Berkus wrote:\n> Further, Joe Conway gave me a guesstimate of 6k per max_fsm_pages which seems \n> rather high ... in fact, the default settings for this value (10000) would \n> swamp the memory used by the rest of Postgres.\n\nI don't recall (and cannot find in my sent mail) ever making that \nguesstimate. Can you provide some context?\n\nJoe\n\n\n\n", "msg_date": "Mon, 24 Feb 2003 11:11:34 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory taken by FSM_relations" }, { "msg_contents": "Joe,\n\n> I don't recall (and cannot find in my sent mail) ever making that\n> guesstimate. Can you provide some context?\n\nYeah, hold on ... hmmm ... no, your e-mail did not provide a figure. Sorry!\n\nMaybe I got it from Neil?\n\nIn any case, it can't be the right figure ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Feb 2003 11:25:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory taken by FSM_relations" }, { "msg_contents": "On Mon, 24 Feb 2003, Joe Conway wrote:\n\n> Josh Berkus wrote:\n> > Further, Joe Conway gave me a guesstimate of 6k per max_fsm_pages which seems \n> > rather high ... in fact, the default settings for this value (10000) would \n> > swamp the memory used by the rest of Postgres.\n> \n> I don't recall (and cannot find in my sent mail) ever making that \n> guesstimate. Can you provide some context?\n\nIf I remember right, it was 6 BYTES per max fsm pages... not kbytes. \nThat sounds about right anyway.\n\n", "msg_date": "Mon, 24 Feb 2003 12:34:00 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory taken by FSM_relations" }, { "msg_contents": "\n\n--On Monday, February 24, 2003 11:11:34 -0800 Joe Conway \n<[email protected]> wrote:\n\n> Josh Berkus wrote:\n>> Further, Joe Conway gave me a guesstimate of 6k per max_fsm_pages which\n>> seems rather high ... in fact, the default settings for this value\n>> (10000) would swamp the memory used by the rest of Postgres.\n>\n> I don't recall (and cannot find in my sent mail) ever making that\n> guesstimate. 
Can you provide some context?\nIt may be 6 **BYTES** per max_fsm_pages.\n\n\n>\n> Joe\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "Mon, 24 Feb 2003 13:35:34 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory taken by FSM_relations" }, { "msg_contents": "Scott,\n\n> If I remember right, it was 6 BYTES per max fsm pages... not kbytes.\n> That sounds about right anyway.\n\nSo, does it come out of Shared_buffers or add to it?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Feb 2003 11:55:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory taken by FSM_relations" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> I'm not clear on where the memory needed by max_fsm_pages comes from.\n> Is it taken out of the shared_buffers, or in addition to them?\n\nIn addition to. Basically max_fsm_pages and max_fsm_relations are used\nto ratchet up the postmaster's initial shared memory request to the kernel.\n\n> Further, Joe Conway gave me a guesstimate of 6k per max_fsm_pages\n> which seems rather high ...\n\nQuite ;-). The correct figure is six bytes per fsm_page slot, and\nI think about forty bytes per fsm_relation slot (recent versions of\npostgresql.conf mention the multipliers, although for some reason\nthe Admin Guide does not).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Feb 2003 15:17:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory taken by FSM_relations " } ]
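Putting Tom Lane's figures into numbers: at roughly six bytes per FSM page slot and forty bytes per FSM relation slot, the default max_fsm_pages = 10000 costs about 60 kB of shared memory, and even a generous max_fsm_relations = 1000 adds only about 40 kB, both allocated on top of shared_buffers when the postmaster starts. Scaling max_fsm_pages up to 1,000,000 still costs only around 6 MB, so the free space map is cheap compared with the buffer cache it sits beside.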
[ { "msg_contents": "\nQuestion in brief: does the planner/optimizer take into account the\nforeign key constraints?\n\nIf the answer is \"no\", please stop reading here.\n\nHere comes the details. Let me give a simple example to illustrate the\nsituation.\n\n1. We have two tables t1 and t2.\n\n create table t1 (\n id integer primary key,\n dummy integer\n );\n\n create table t2 (\n id integer,\n dummy integer\n );\n\n2. We create indexes on all the non-pkey fields.\n\n create index t1_dummy_idx on t1(dummy);\n create index t2_id_idx on t2(id);\n create index t2_dummy_idx on t2(dummy);\n\n3. We make t2(id) a foreign key of t1(id).\n\n alter table t2 add constraint t2_fkey foreign key (id) references t1(id);\n\n4. Populate \"t1\" with unique \"id\"s from 0 to 19999 with a dummy value.\n\n copy \"t1\" from stdin;\n 0 654\n 1 86097\n ...\n 19998 93716\n 19999 9106\n \\.\n\n5. Populate \"t2\" with 50000 \"id\"s with a normal distribution.\n\n copy \"t2\" from stdin;\n 8017 98659\n 11825 5946\n ...\n 8202 35994\n 8436 19729\n \\.\n\nNow we are ready to go ...\n\n\n- First query is to find the \"dummy\" values with highest frequency.\n\n => explain select dummy from t2 group by dummy order by count(*) desc limit 10;\n NOTICE: QUERY PLAN:\n\n Limit (cost=2303.19..2303.19 rows=10 width=4)\n -> Sort (cost=2303.19..2303.19 rows=5000 width=4)\n -> Aggregate (cost=0.00..1996.00 rows=5000 width=4)\n -> Group (cost=0.00..1871.00 rows=50000 width=4)\n -> Index Scan using t2_dummy_idx on t2 (cost=0.00..1746.00 rows=50000 width=4)\n\n EXPLAIN\n\n\n- Second query is esseitially the same, but we do a merge with \"t1\" on\n the foreign key (just for the sake of illustrating the point).\n\n => explain select t2.dummy from t1, t2 where t1.id = t2.id group by t2.dummy order by count(*) desc limit 10;\n NOTICE: QUERY PLAN:\n\n Limit (cost=7643.60..7643.60 rows=10 width=12)\n -> Sort (cost=7643.60..7643.60 rows=5000 width=12)\n -> Aggregate (cost=7086.41..7336.41 rows=5000 width=12)\n -> Group (cost=7086.41..7211.41 rows=50000 width=12)\n -> Sort (cost=7086.41..7086.41 rows=50000 width=12)\n -> Merge Join (cost=0.00..3184.00 rows=50000 width=12)\n -> Index Scan using t1_pkey on t1 (cost=0.00..638.00 rows=20000 width=4)\n -> Index Scan using t2_id_idx on t2 (cost=0.00..1746.00 rows=50000 width=8)\n\n EXPLAIN\n\n\nDoes this mean that the planner/optimizer doesn't take into account the\nforeign key constraint?\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.21-pre4)\n\nDon't worry about the world coming to an end today. It's already tomorrow\nin Australia.\n\t\t-- Charles Schulz\n\n", "msg_date": "Tue, 25 Feb 2003 10:11:43 +0600", "msg_from": "Anuradha Ratnaweera <[email protected]>", "msg_from_op": true, "msg_subject": "Superfluous merge/sort" }, { "msg_contents": "\nOn Tue, 25 Feb 2003, Anuradha Ratnaweera wrote:\n\n> Question in brief: does the planner/optimizer take into account the\n> foreign key constraints?\n>\n> If the answer is \"no\", please stop reading here.\n\nNot really. 
However, as a note, from t1,t2 where t1.id=t2.id is not\nnecessarily an identity even with the foreign key due to NULLs.\n\n", "msg_date": "Tue, 25 Feb 2003 07:56:14 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Superfluous merge/sort" }, { "msg_contents": "On Tue, Feb 25, 2003 at 07:56:14AM -0800, Stephan Szabo wrote:\n> \n> On Tue, 25 Feb 2003, Anuradha Ratnaweera wrote:\n> \n> > Question in brief: does the planner/optimizer take into account the\n> > foreign key constraints?\n> >\n> > If the answer is \"no\", please stop reading here.\n> \n> Not really. However, as a note, from t1,t2 where t1.id=t2.id is not\n> necessarily an identity even with the foreign key due to NULLs.\n\n\"not null\" doesn't make a difference, either :-(\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.21-pre4)\n\nIt's not Camelot, but it's not Cleveland, either.\n\t\t-- Kevin White, Mayor of Boston\n\n", "msg_date": "Wed, 26 Feb 2003 10:57:28 +0600", "msg_from": "Anuradha Ratnaweera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Superfluous merge/sort" }, { "msg_contents": "\nOn Wed, 26 Feb 2003, Anuradha Ratnaweera wrote:\n\n> On Tue, Feb 25, 2003 at 07:56:14AM -0800, Stephan Szabo wrote:\n> >\n> > On Tue, 25 Feb 2003, Anuradha Ratnaweera wrote:\n> >\n> > > Question in brief: does the planner/optimizer take into account the\n> > > foreign key constraints?\n> > >\n> > > If the answer is \"no\", please stop reading here.\n> >\n> > Not really. However, as a note, from t1,t2 where t1.id=t2.id is not\n> > necessarily an identity even with the foreign key due to NULLs.\n>\n> \"not null\" doesn't make a difference, either :-(\n\nNo, but the two queries you gave aren't equivalent without a not null\nconstraint and as such treating the second as the first is simply wrong\nwithout it. ;)\n\nThe big thing is that checking this would be a cost to all queries (or at\nleast any queries with joins). You'd probably have to come up with a\nconsistent set of rules on when the optimization applies (*) and then show\nthat there's a reasonable way to check for the case that's significantly\nnot expensive (right now I think it'd involve looking at the constraint\ntable, making sure that all columns of the constraint are referenced and\nonly in simple ways).\n\n(*) - I haven't done enough checking to say that the following is\n sufficient, but it'll give an idea:\n Given t1 and t2 where t2 is the foreign key table and t1 is the\n primary key table in a foreign key constraint, a select that has no\n column references to t1 other than to the key fields of the foreign key\n directly in the where clause where the condition is simply\n t1.pcol = t2.fcol (or reversed) and all key fields of the constraint\n are so referenced then there exist two possible optimizations\n if all of the foreign key constraint columns in t2 are marked as\n not null, the join to t1 is redundant and it and the conditions\n that reference it can be simply removed\n otherwise, the join to t1 and the conditions that reference may be\n replaced with a set of conditions (t2.fcol1 is not null [and t2.fcol2\n is not null ...]) anded to any other where clause elements\n\n", "msg_date": "Tue, 25 Feb 2003 21:39:56 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Superfluous merge/sort" } ]
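Since the planner will not remove the join on its own, the manual rewrite that Stephan Szabo's rule describes looks like this for the example tables in this thread. With the foreign key in place and t2.id declared NOT NULL, the join form and the single-table form return the same result, and the single-table form plans far more cheaply; without the NOT NULL constraint, the null filter has to be kept explicitly.

    -- join form from the original post
    SELECT t2.dummy FROM t1, t2 WHERE t1.id = t2.id
    GROUP BY t2.dummy ORDER BY count(*) DESC LIMIT 10;

    -- equivalent when t2.id is NOT NULL: the join to t1 is redundant
    SELECT dummy FROM t2
    GROUP BY dummy ORDER BY count(*) DESC LIMIT 10;

    -- equivalent when t2.id may be NULL
    SELECT dummy FROM t2 WHERE id IS NOT NULL
    GROUP BY dummy ORDER BY count(*) DESC LIMIT 10;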
[ { "msg_contents": "Hello Everyone,\n\nI am having some problems getting my queries to use the correct index, and as \na result they use a sequential scan that takes a very long time.\n\nThe table in question is used for replicating data between computers and \ncontains over 7 million records. The table is on a few different linux \ncomputers, some running postgresql version 7.3 and some 7.3.2; my problem is \nthe same on all. The relevent fields are:\n Table \"public.replicate\"\n Column | Type | Modifiers\n--------------+-----------------------------+-----------\n computer | integer | not null\n sequence | integer | not null\n\nThe majority of records (about 6.8 million) have computer = 8 with sequence \nstarting at 2200000 and incrementing by 1.\nThere are about 497000 records with computer = 3 with the sequence starting at \n1 and also incrementing by 1.\nThere are only a few records with other computer numbers.\nRecords are inserted (they are never deleted but sometimes updated) in \nnumerical order by the sequence field for each computer and together these \nfields (computer, sequence) are unique.\n\nI have a few queries that attempt to find recently inserted records for a \nparticular computer. Most of my queries include other terms in the where \nclause and sort the results (usually by sequence), however this simple query \ncan be used as an example of my problem:\nselect * from replicate where computer = 3 and sequence >= 490000;\n\nI have created several different indexes (always doing a vacuum analyse \nafterwards etc), but the explain always reports a sequential scan. If I \nforce an index scan, it runs very quickly - as it should. Also, it appears \nthat if a specify an upper limit for sequence (a value which I cannot always \neasily predict), it also uses the index.\n\nIf my query is for those records with computer = 8, I believe it chooses the \ncorrect index every time.\n\nHere are some examples of the indexes I created and the explains:\n=======\nThis is the original index which works fine until there are lots (several \nthousand) of records for a particular computer number:\ncreate unique index replicate_id on replicate (computer, sequence);\n\nexplain analyse select * from replicate where computer = 3 and sequence >= \n490000;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on replicate (cost=0.00..400259.20 rows=300970 width=386) (actual \ntime=80280.18..80974.41 rows=7459 loops=1)\n Filter: ((computer = 3) AND (\"sequence\" >= 490000))\n Total runtime: 80978.67 msec\n(3 rows)\n\nBut if we put in an upper limit for the sequence we get:\nexplain analyse select * from replicate where computer = 3 and sequence >= \n490000 and sequence < 600000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using replicate_id on replicate (cost=0.00..625.99 rows=182 \nwidth=388) (actual time=45.00..446.31 rows=7789 loops=1)\n Index Cond: ((computer = 3) AND (\"sequence\" >= 490000) AND (\"sequence\" < \n600000))\n Total runtime: 451.18 msec\n(3 rows)\n\n\nset enable_seqscan=off;\nexplain analyse select * from replicate where computer = 3 and sequence >= \n490000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using replicate_id on replicate (cost=0.00..991401.16 
rows=289949 \nwidth=388) (actual time=0.06..47.84 rows=7788 loops=1)\n Index Cond: ((computer = 3) AND (\"sequence\" >= 490000))\n Total runtime: 52.48 msec\n(3 rows)\n\n======\nI tried adding this index, and it seemed to work until (I think) there were \nabout 400000 records for computer = 3.\ncreate index replicate_computer_3 on replicate (sequence) WHERE computer = 3;\n\nexplain analyse select * from replicate where computer = 3 and sequence >= \n490000;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on replicate (cost=0.00..400262.91 rows=287148 width=386) (actual \ntime=74371.70..74664.84 rows=7637 loops=1)\n Filter: ((computer = 3) AND (\"sequence\" >= 490000))\n Total runtime: 74669.22 msec\n(3 rows)\n\nBut if we put an upper limit for the sequence we get:\nexplain analyse select * from replicate where computer = 3 and sequence >= \n490000 and sequence < 600000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using replicate_computer_3 on replicate (cost=0.00..417.29 \nrows=180 width=386) (actual time=0.06..54.21 rows=7657 loops=1)\n Index Cond: ((\"sequence\" >= 490000) AND (\"sequence\" < 600000))\n Filter: (computer = 3)\n Total runtime: 58.86 msec\n(4 rows)\n\n\nAnd, forcing the index:\nset enable_seqscan=off;\nexplain analyse select * from replicate where computer = 3 and sequence >= \n490000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using replicate_computer_3 on replicate (cost=0.00..660538.28 \nrows=287148 width=386) (actual time=0.06..53.05 rows=7657 loops=1)\n Index Cond: (\"sequence\" >= 490000)\n Filter: (computer = 3)\n Total runtime: 57.66 msec\n(4 rows)\n\n\nI have tried quiet a few different indexes and they all seem to do the same \nsort of thing. Is there anything that I should be doing to make these \nqueries use the index? I am unable to easily predict what the upper limit \nfor the sequence should be in all cases, so I would rather a solution that \ndidn't require specifying it. Am I missing something?\n\nIs there some other information that I should have provided?\n\nThanks\nMark Halliwell\n\n\n", "msg_date": "Tue, 25 Feb 2003 19:03:40 +1100", "msg_from": "Mark Halliwell <[email protected]>", "msg_from_op": true, "msg_subject": "Query not using the index" }, { "msg_contents": "On Tue, Feb 25, 2003 at 07:03:40PM +1100, Mark Halliwell wrote:\n\n> The majority of records (about 6.8 million) have computer = 8 with sequence \n> starting at 2200000 and incrementing by 1.\n> There are about 497000 records with computer = 3 with the sequence starting at \n> 1 and also incrementing by 1.\n> There are only a few records with other computer numbers.\n\n> select * from replicate where computer = 3 and sequence >= 490000;\n> \n> I have created several different indexes (always doing a vacuum analyse \n> afterwards etc), but the explain always reports a sequential scan. If I \n\nTry setting the statistics on computer to a much wider value -- say\n\n\tALTER TABLE computer ALTER COLUMN computer SET STATISTICS 1000\n\nand see if it helps. You can poke around in the pg_stats view to see\nwhy this might help, and perhaps to get a more realistic idea of what\nyou need to set the statistics to. 
The problem is likely the\noverwhelming commonality of computer=8.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 25 Feb 2003 07:44:50 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using the index" }, { "msg_contents": "On Tue, Feb 25, 2003 at 19:03:40 +1100,\n Mark Halliwell <[email protected]> wrote:\n> \n> The majority of records (about 6.8 million) have computer = 8 with sequence \n> starting at 2200000 and incrementing by 1.\n> There are about 497000 records with computer = 3 with the sequence starting at \n> 1 and also incrementing by 1.\n> There are only a few records with other computer numbers.\n\nYou might get some benefit using a partial index that just covers the\nrows where computer = 3.\n", "msg_date": "Tue, 25 Feb 2003 07:56:29 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using the index" }, { "msg_contents": "Mark Halliwell <[email protected]> writes:\n> The majority of records (about 6.8 million) have computer = 8 with sequence \n> starting at 2200000 and incrementing by 1.\n> There are about 497000 records with computer = 3 with the sequence starting at \n> 1 and also incrementing by 1.\n> There are only a few records with other computer numbers.\n\nYou aren't going to find any non-kluge solution, because Postgres keeps\nno cross-column statistics and thus is quite unaware that there's any\ncorrelation between the computer and sequence fields. So in a query\nlike\n\n> select * from replicate where computer = 3 and sequence >= 490000;\n\nthe sequence constraint looks extremely unselective to the planner, and\nyou get a seqscan, even though *in the domain of computer = 3* it's a\nreasonably selective constraint.\n\n> that if a specify an upper limit for sequence (a value which I cannot always \n> easily predict), it also uses the index.\n\nI would think that it'd be sufficient to say\n\n select * from replicate where computer = 3 and sequence >= 490000\n and sequence < 2200000;\n\nIf it's not, try increasing the statistics target for the sequence\ncolumn so that ANALYZE gathers a finer-grain histogram for that column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Feb 2003 10:13:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query not using the index " } ]
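A minimal sketch of the fixes suggested in this thread, using the replicate schema shown above (note the ALTER TABLE example in the first reply names the table as "computer"; for this schema the target table is replicate). The statistics target of 1000 and the 2200000 upper bound are illustrative values taken from the discussion, not recommendations:

-- Widen the per-column histograms so ANALYZE captures the skew between
-- computer = 8 and computer = 3 (1000 is an illustrative target).
ALTER TABLE replicate ALTER COLUMN computer SET STATISTICS 1000;
ALTER TABLE replicate ALTER COLUMN "sequence" SET STATISTICS 1000;
ANALYZE replicate;

-- Workaround from the thread: bound the range so the condition on "sequence"
-- looks selective to the planner even without cross-column statistics.
SELECT * FROM replicate
WHERE computer = 3 AND "sequence" >= 490000 AND "sequence" < 2200000;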
[ { "msg_contents": "\n\nHi !!\n\n We have \"PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\"\ninstalled on Linux (RedHat 7.2)\nOur database size is 15 GB.\nSince the database size was increasing and was about to cross the actual \nHard Disk parttion Size, we moved the datafiles (also the index files) to \nanother partition and created link to them from the data directory.\nThis was working fine.\nBut what we found was , the index files(2 files) were not getting updated \nin the new partition, instead postgres had created another index file with \nname\n\"tableID\".1 in the original data directory. The size of this file was \n356MB, \nThe actual size of the data table is 1GB. and there were 2 indexes for the \ntable. which were of size approximately=150MB.\n\nBut after we created link, those 2 index files were not getting updated, \ninstead the new file with \".1\" extension got created in the data directory\n(old parttion) and the same is getting updated everyday.\n\nWe dropped the table but the file with \".1\" extension was not getting \nremoved from data directory. We manually had to remove it.\n\nCan U please suggest some way to avoid the file getting created when we \nmove the data file (along with the index files) to another partition.\n\n\nThanks in Advance.\n\n\n\nRegards,\nPragati.\n\n\n\n\n", "msg_date": "Wed, 26 Feb 2003 11:56:48 +0530 (IST)", "msg_from": "PRAGATI SAVAIKAR <[email protected]>", "msg_from_op": true, "msg_subject": "Index File growing big." }, { "msg_contents": "\nI remember this is your second posting for the same problem.\n\nI hope you are aware that postgres can manage multiple databases that \nlie in different partitions.\n\nIf its feasible for you you may think of moving the *big* tables in another\ndatabase which can be initialised in another partition. it depends on ur app\ndesign as you will have to create a new db connection.\n\nanother possibility is to buy a new bigger hdd ofcourse ;-)\nand migrate the data.\n\nalso i think the <tableid>.1 or <tableid>.2 are extensions of the \ndatafile not index files. index files have seperate id of their own.\npg_class have that id in relfilenode.\n\n\nin case i am getting the problem wrong , my query is have you relocated the\nindex files also and created the symlinks ? (using ln -s)\n\nAlso 7.2.1 is too old an version to use in 7.2.x series 7.2.4 is latest\nand in 7.3.x 7.3.2 is latest.\n\nregds\nmallah.\n\n\n\nOn Wednesday 26 February 2003 11:56 am, PRAGATI SAVAIKAR wrote:\n> Hi !!\n>\n> We have \"PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\"\n> installed on Linux (RedHat 7.2)\n> Our database size is 15 GB.\n> Since the database size was increasing and was about to cross the actual\n> Hard Disk parttion Size, we moved the datafiles (also the index files) to\n> another partition and created link to them from the data directory.\n> This was working fine.\n> But what we found was , the index files(2 files) were not getting updated\n> in the new partition, instead postgres had created another index file with\n> name\n> \"tableID\".1 in the original data directory. The size of this file was\n> 356MB,\n> The actual size of the data table is 1GB. and there were 2 indexes for the\n> table. 
which were of size approximately=150MB.\n>\n> But after we created link, those 2 index files were not getting updated,\n> instead the new file with \".1\" extension got created in the data directory\n> (old parttion) and the same is getting updated everyday.\n>\n> We dropped the table but the file with \".1\" extension was not getting\n> removed from data directory. We manually had to remove it.\n>\n> Can U please suggest some way to avoid the file getting created when we\n> move the data file (along with the index files) to another partition.\n>\n>\n> Thanks in Advance.\n>\n>\n>\n> Regards,\n> Pragati.\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\nRegds\nMallah\n\n----------------------------------------\nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n", "msg_date": "Wed, 26 Feb 2003 12:59:23 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big." }, { "msg_contents": "On Wed, Feb 26, 2003 at 11:56:48AM +0530, PRAGATI SAVAIKAR wrote:\n> \n> Can U please suggest some way to avoid the file getting created when we \n> move the data file (along with the index files) to another partition.\n\nYes. Submit a patch which implements tablespaces ;-)\n\nSeriously, there is no way to avoid this in the case where you are\nmoving the files by hand. The suggestions for how to move files\naround note this.\n\nIf this is merely a disk-size problem, why not move the entire\npostgres installation to another disk, and make a link to it. If you\nstill need to spread things across disks, you can move things which\ndon't change in size very much. A good candidate here is the WAL\n(pg_xlog), since it grows to a predictable size. You even get a\nperformance benefit.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 26 Feb 2003 07:00:40 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big." }, { "msg_contents": "> > Can U please suggest some way to avoid the file getting created when we\n> > move the data file (along with the index files) to another partition.\n>\n> Yes. Submit a patch which implements tablespaces ;-)\n\nYou should note that someone already has sent in a patch for tablespaces, it\nhasn't been acted on though - can't quite remember why. Maybe we should\nresurrect it...\n\nChris\n\n\n", "msg_date": "Thu, 27 Feb 2003 09:19:32 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big." }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n>> Yes. Submit a patch which implements tablespaces ;-)\n\n> You should note that someone already has sent in a patch for tablespaces, it\n> hasn't been acted on though - can't quite remember why. Maybe we should\n> resurrect it...\n\nIt's been awhile, but my recollection is that the patch had restricted\nfunctionality (which would be okay for a first cut) and it invented SQL\nsyntax that seemed to lock us into that restricted functionality\npermanently (not so okay). 
Details are fuzzy though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Feb 2003 01:56:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big. " }, { "msg_contents": "> > You should note that someone already has sent in a patch for\ntablespaces, it\n> > hasn't been acted on though - can't quite remember why. Maybe we should\n> > resurrect it...\n>\n> It's been awhile, but my recollection is that the patch had restricted\n> functionality (which would be okay for a first cut) and it invented SQL\n> syntax that seemed to lock us into that restricted functionality\n> permanently (not so okay). Details are fuzzy though...\n\nWell, I'll resurrect it and see if it can be improved. Tablespaces seem to\nbe a requested feature these days...\n\nChris\n\n\n", "msg_date": "Thu, 27 Feb 2003 17:03:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big. " }, { "msg_contents": "All,\n\nI was the person who submitted the patch. I tried to a generic syntax. I also tried to keep the tablespace concept\nsimple. See my posting on HACKERS/GENERAL from a week or 2 ago about the syntax. I am still interested in working on\nthis patch with others. I have many system here that are 500+ gigabytes and growing. It is a real pain to add more\ndisk space (I have to backup, drop database(s), rebuild raid set (I am using raid 10) and reload data).\n\nJim\n\n\n\n> > > You should note that someone already has sent in a patch for\n> tablespaces, it\n> > > hasn't been acted on though - can't quite remember why. Maybe we should\n> > > resurrect it...\n> >\n> > It's been awhile, but my recollection is that the patch had restricted\n> > functionality (which would be okay for a first cut) and it invented SQL\n> > syntax that seemed to lock us into that restricted functionality\n> > permanently (not so okay). Details are fuzzy though...\n> \n> Well, I'll resurrect it and see if it can be improved. Tablespaces seem to\n> be a requested feature these days...\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n\n\n", "msg_date": "Thu, 27 Feb 2003 07:34:12 -0500", "msg_from": "\"Jim Buttafuoco\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big. " }, { "msg_contents": "Hi Jim,\n\nDo you have a version of the patch that's synced against CVS HEAD?\n\nChris\n\n> All,\n>\n> I was the person who submitted the patch. I tried to a generic syntax. I\nalso tried to keep the tablespace concept\n> simple. See my posting on HACKERS/GENERAL from a week or 2 ago about the\nsyntax. I am still interested in working on\n> this patch with others. I have many system here that are 500+ gigabytes\nand growing. It is a real pain to add more\n> disk space (I have to backup, drop database(s), rebuild raid set (I am\nusing raid 10) and reload data).\n>\n> Jim\n\n\n", "msg_date": "Fri, 28 Feb 2003 09:57:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Index File growing big. " }, { "msg_contents": "\nYes, the issue was that it only had places for heap and index location,\nnot more generic.\n\nI can work with a few folks to get this done. 
I think it can be done in\na few stages:\n\n\tDecide on syntax/functionality\n\tUpdate grammer to support it\n\tUpdate system catalogs to hold information\n\tUpdate storage manager to handle storage locations\n\nIf folks can decide on the first item, I can do the second and third\nones.\n\n---------------------------------------------------------------------------\n\nJim Buttafuoco wrote:\n> All,\n> \n> I was the person who submitted the patch. I tried to a generic syntax. I also tried to keep the tablespace concept\n> simple. See my posting on HACKERS/GENERAL from a week or 2 ago about the syntax. I am still interested in working on\n> this patch with others. I have many system here that are 500+ gigabytes and growing. It is a real pain to add more\n> disk space (I have to backup, drop database(s), rebuild raid set (I am using raid 10) and reload data).\n> \n> Jim\n> \n> \n> \n> > > > You should note that someone already has sent in a patch for\n> > tablespaces, it\n> > > > hasn't been acted on though - can't quite remember why. Maybe we should\n> > > > resurrect it...\n> > >\n> > > It's been awhile, but my recollection is that the patch had restricted\n> > > functionality (which would be okay for a first cut) and it invented SQL\n> > > syntax that seemed to lock us into that restricted functionality\n> > > permanently (not so okay). Details are fuzzy though...\n> > \n> > Well, I'll resurrect it and see if it can be improved. Tablespaces seem to\n> > be a requested feature these days...\n> > \n> > Chris\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 6 Mar 2003 15:18:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index File growing big." } ]
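One practical note on the symlink approach discussed above: the "<tableid>.1" file is not an index but the second segment of the table's data file, which PostgreSQL creates automatically once a relation file exceeds 1 GB, so any .1/.2 segment files have to be moved and symlinked along with the base file. A small query (the table name is a placeholder) to look up which on-disk file names belong to a table and its indexes before relocating them:

-- relfilenode is the file name under the database directory;
-- relkind 'r' = table, 'i' = index.
SELECT c.relname, c.relfilenode, c.relkind
FROM pg_class c
WHERE c.relname = 'mytable'
   OR c.oid IN (SELECT i.indexrelid
                FROM pg_index i, pg_class t
                WHERE i.indrelid = t.oid AND t.relname = 'mytable');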
[ { "msg_contents": "\nIn some older mails in the archive I found rumors about difficulcies that\nmight\noccur when OIDs are used as an integral part of a data model.\n\nI am considering the option of placing an index on the already existing oid\nand using\nit as the primary key for all tables (saves some space and a sequence\nlookup). This\nincludes saving the oid in foreign keys (virtual ones, not actually declared\nreferences).\nI read that using OID in keys is generally a bad idea. Is it really? Why\nexactly?\n\nAre there any disadvantages as to reliability or performance apart from\naccidentally\nforgetting to use the -o option with pg_dump? If so, please give details.\n\nI felt especially worried by a postgres developer's statement in another\narchived mail:\n\"As far as I know, there is no reason oid's have to be unique, especially if\nthey are in different tables.\"\n(http://archives.postgresql.org/pgsql-hackers/1998-12/msg00570.php)\n\nHow unique are oids as of version 7.3 of postgres ?\n\nIs it planned to keep oids semantically the same in future releases of\npostgres?\nWill the oid type be extended so that oids can be larger than 4 bytes (if\nthis is still\ncorrect for 7.3) and do not rotate in large systems?\n\n\nThanks for your time and advice.\n\nDaniel Alvarez <[email protected]>\n \n \n\n-- \n+++ GMX - Mail, Messaging & more http://www.gmx.net +++\nBitte lächeln! Fotogalerie online mit GMX ohne eigene Homepage!\n\n", "msg_date": "Wed, 26 Feb 2003 14:04:39 +0100 (MET)", "msg_from": "daniel alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "On Wednesday 26 Feb 2003 1:04 pm, daniel alvarez wrote:\n> In some older mails in the archive I found rumors about difficulcies that\n> might\n> occur when OIDs are used as an integral part of a data model.\n>\n> I am considering the option of placing an index on the already existing oid\n> and using\n> it as the primary key for all tables (saves some space and a sequence\n> lookup). This\n> includes saving the oid in foreign keys (virtual ones, not actually\n> declared references).\n> I read that using OID in keys is generally a bad idea. Is it really? Why\n> exactly?\n\nOIDs are not even guaranteed to be there any more - you can create a table \nWITHOUT OIDs if you want to save some space. If you want a numeric primary \nkey, I'd recommend int4/int8 attached to a sequence - it's much clearer \nwhat's going on then.\n\n> Are there any disadvantages as to reliability or performance apart from\n> accidentally\n> forgetting to use the -o option with pg_dump? If so, please give details.\n>\n> I felt especially worried by a postgres developer's statement in another\n> archived mail:\n> \"As far as I know, there is no reason oid's have to be unique, especially\n> if they are in different tables.\"\n> (http://archives.postgresql.org/pgsql-hackers/1998-12/msg00570.php)\n>\n> How unique are oids as of version 7.3 of postgres ?\n\nOIDs are unique per object (table) I believe, no more so. See chapter 5.10 of \nthe user guide for details. They are used to identify system objects and so \nthe fact that a function and a table could both have the same OID should \ncause no problems.\n\n> Is it planned to keep oids semantically the same in future releases of\n> postgres?\n\nCouldn't say - don't see why not.\n\n> Will the oid type be extended so that oids can be larger than 4 bytes (if\n> this is still\n> correct for 7.3) and do not rotate in large systems?\n\nStrikes me as unlikely, though I'm not a developer. 
Look into 8-byte \nserial/sequences.\n\n-- \n Richard Huxton\n", "msg_date": "Wed, 26 Feb 2003 13:58:53 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "\n> > I am considering the option of placing an index on the already existing\noid\n> > and using it as the primary key for all tables (saves some space and a\nsequence\n> > lookup). This includes saving the oid in foreign keys (virtual ones, not\nactually\n> > declared references). I read that using OID in keys is generally a bad\nidea.\n> > Is it really? Why exactly?\n\n> OIDs are not even guaranteed to be there any more - you can create a table\n> WITHOUT OIDs if you want to save some space. If you want a numeric primary\n> key, I'd recommend int4/int8 attached to a sequence - it's much clearer\nwhat's\n> going on then.\n\nOf course this is a cleaner solution. I did not know that oids can be\nsupressed and\nwas looking for a way to make space usage more efficient. Trying to get rid\nof user-\ndefined surrogate primary keys and substitute them by the already existing\nOID is\nobviously the wrong approch, as postgres already defines a cleaner option.\n\nThere can also be some problems when using replication, because one needs to\nmake\nsure that OIDs are the same on all machines in the cluster.\n\nWhy should user-defined tables have OIDs by default? Other DBMS use ROWIDs\nas the physical storage location used for pointers in index leafs, but this\nis equivalent\nto Postgres TIDs. To the user an OID column is not different than any other\ncolumn\nhe can define himself. I'd find it more natural if the column wasn't there\nat all.\n\nDaniel Alvarez\n\n-- \n+++ GMX - Mail, Messaging & more http://www.gmx.net +++\nBitte lächeln! Fotogalerie online mit GMX ohne eigene Homepage!\n\n", "msg_date": "Wed, 26 Feb 2003 15:59:35 +0100 (MET)", "msg_from": "daniel alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "daniel alvarez <[email protected]> writes:\n> Why should user-defined tables have OIDs by default?\n\nAt this point it's just for historical reasons. There actually is a\nproposal on the table to flip the default to WITHOUT OIDS, but so far\nit's not been accepted because of worries about compatibility. See\nthe pghackers archives a few weeks back.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Feb 2003 10:56:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n>> How unique are oids as of version 7.3 of postgres ?\n\n> OIDs are unique per object (table) I believe, no more so.\n\nEven then, you should only assume uniqueness if you put a unique index\non the table's OID column to enforce it. (The system catalogs that use\nOID all have such indexes.) Without that, you might have duplicates\nafter the OID counter wraps around.\n\n>> Will the oid type be extended so that oids can be larger than 4 bytes (if\n>> this is still correct for 7.3) and do not rotate in large systems?\n\n> Strikes me as unlikely, though I'm not a developer.\n\nI tend to agree. At one point that was a live possibility, but now\nwe're more likely to change the default for user tables to WITHOUT OIDS\nand declare the problem solved. Making OIDs 8 bytes looks like too much\nof a performance hit for non-64-bit machines. 
(Not to mention machines\nthat haven't got \"long long\" at all; right now the only thing that\ndoesn't work for them is type int8, and I'd like it to stay that way,\nat least for a few more years.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Feb 2003 11:02:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "On Wednesday 26 Feb 2003 2:59 pm, daniel alvarez wrote:\n>\n> Why should user-defined tables have OIDs by default? Other DBMS use ROWIDs\n> as the physical storage location used for pointers in index leafs, but this\n> is equivalent\n> to Postgres TIDs. To the user an OID column is not different than any other\n> column\n> he can define himself. I'd find it more natural if the column wasn't there\n> at all.\n\nI believe the plan is to phase them out, but some people are using them, so \nthe default is still to create them. Imagine if you were using OIDs as keys \nand after a dump/restore they were all gone...\n-- \n Richard Huxton\n", "msg_date": "Wed, 26 Feb 2003 16:10:13 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "Daniel,\n\n> There can also be some problems when using replication, because one needs\n> to make\n> sure that OIDs are the same on all machines in the cluster.\n\nSee the \"uniqueidentifier\" contrib package if you need a universally unique id \nfor replication.\n\n> I'd find it more natural if the column wasn't there\n> at all.\n\nProbably by Postgres 8.0, it won't be.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 26 Feb 2003 09:17:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "\n> daniel alvarez <[email protected]> writes:\n> > Why should user-defined tables have OIDs by default?\n>\n> At this point it's just for historical reasons. There actually is a\n> proposal on the table to flip the default to WITHOUT OIDS, but so far\n> it's not been accepted because of worries about compatibility. 
See\n> the pghackers archives a few weeks back.\n\nShall I include a patch to pg_dump that will explicitly set WITH OIDS when I\nsubmit this SET STORAGE dumping patch?\n\nChris\n\n\n", "msg_date": "Thu, 27 Feb 2003 12:00:31 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Shall I include a patch to pg_dump that will explicitly set WITH OIDS when I\n> submit this SET STORAGE dumping patch?\n\nNot if you want it to be accepted ;-)\n\nWe pretty much agreed we did not want that in the prior thread.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Feb 2003 01:35:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> > Shall I include a patch to pg_dump that will explicitly set WITH OIDS\nwhen I\n> > submit this SET STORAGE dumping patch?\n>\n> Not if you want it to be accepted ;-)\n>\n> We pretty much agreed we did not want that in the prior thread.\n\nThe patch I submitted did not include OID stuff, I decided that it's better\nto submit orthogonal patches :)\n\nChris\n\n\n", "msg_date": "Thu, 27 Feb 2003 15:00:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> The patch I submitted did not include OID stuff, I decided that it's better\n> to submit orthogonal patches :)\n\nRight. But the problem with switching the OID default is not a matter\nof code --- it's of working out what the compatibility issues are.\nAs I recall, one thing people did not want was for pg_dump to plaster\nWITH OIDS or WITHOUT OIDS on every single CREATE TABLE, as this would\npretty much destroy any shot at loading PG dumps into any other\ndatabase. What we need is an agreement on the behavior we want (making\nthe best possible compromise between this and other compatibility\ndesires). After that, the actual patch is probably trivial, while in\nadvance of some consensus on the behavior, offering a patch is a waste\nof time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Feb 2003 02:06:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "> Right. But the problem with switching the OID default is not a matter\n> of code --- it's of working out what the compatibility issues are.\n> As I recall, one thing people did not want was for pg_dump to plaster\n> WITH OIDS or WITHOUT OIDS on every single CREATE TABLE, as this would\n> pretty much destroy any shot at loading PG dumps into any other\n> database.\n\nUmmm...what about SERIAL columns, ALTER TABLE / SET STATS, SET STORAGE,\ncustom types, 'btree' in CREATE INDEX, SET SEARCH_PATH, '::\" cast operator,\nstored procedures, rules, etc. - how is adding WITH OIDS going to change\nthat?!\n\n> What we need is an agreement on the behavior we want (making\n> the best possible compromise between this and other compatibility\n> desires). 
After that, the actual patch is probably trivial, while in\n> advance of some consensus on the behavior, offering a patch is a waste\n> of time.\n\nSure.\n\nChris\n\n\n", "msg_date": "Thu, 27 Feb 2003 15:19:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n>> As I recall, one thing people did not want was for pg_dump to plaster\n>> WITH OIDS or WITHOUT OIDS on every single CREATE TABLE, as this would\n>> pretty much destroy any shot at loading PG dumps into any other\n>> database.\n\n> Ummm...what about SERIAL columns, ALTER TABLE / SET STATS, SET STORAGE,\n> custom types, 'btree' in CREATE INDEX, SET SEARCH_PATH, '::\" cast operator,\n> stored procedures, rules, etc. - how is adding WITH OIDS going to change\n> that?!\n\nIt's moving in the wrong direction. We've been slowly eliminating\nunnecessary nonstandardisms in pg_dump output; this puts in a new one\nin a quite fundamental place. You could perhaps expect another DB\nto drop commands it didn't understand like SET SEARCH_PATH ... but if\nit drops all your CREATE TABLEs, you ain't got much dump left to load.\n\nI'm not necessarily wedded to the above argument myself, mind you;\nbut it is a valid point that needs to be weighed in the balance of\nwhat we're trying to accomplish.\n\nThe bottom line is that \"code first, design later\" is no way to\napproach this problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Feb 2003 02:30:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "Tom wrote:\n>\n>As I recall, one thing people did not want was for pg_dump to plaster\n>WITH OIDS or WITHOUT OIDS on every single CREATE TABLE, as this would\n>pretty much destroy any shot at loading PG dumps into any other\n>database. \n\n Has there been any talk about adding a flag to pg_dump to explicitly\nask for a standard format? (sql3? sql99? etc?) Ideally for me, such\na flag would produce a \"portable\" dump file of the subset that did \nfollow the standard, and also produce a separate log file that could \ncontain any constructs that could not be standardly dumped.\n\n If such a flag existed, it might be the easiest way to load to other\ndatabases, and people might be less opposed to plastering more postgres\nspecific stuff to the default format.\n\n If I were going to try to write portable dumps for other databases, I'd \nwant as few postgresisms in the big file as possible. The separate log file\nwould make it easier for me to make hand-ported separate files to set up \nfunctions, views, etc.\n\n Yes, I know that would restrict the functionality I could\ndepend on, including types, sequences, etc. However if I were in an\nenvironment where developers did prototyping on postgres / mssql /etc\nand migrated functionality to whatever the company's official\nstandard system was, I think developers would want to constrain\nthemselves to standards as much as possible, and this option may help\nthem do so.\n\n Ron\n\nPS: I'm not sure if I'm volunteering or not, unless someone tells\n me how easy/hard it would be. 
Last thing I tried was harder\n than it first appeared.\n\n", "msg_date": "Thu, 27 Feb 2003 17:51:13 -0800", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "On Thu, 2003-02-27 at 02:30, Tom Lane wrote:\n> It's moving in the wrong direction. We've been slowly eliminating\n> unnecessary nonstandardisms in pg_dump output; this puts in a new one\n> in a quite fundamental place. You could perhaps expect another DB\n> to drop commands it didn't understand like SET SEARCH_PATH ... but if\n> it drops all your CREATE TABLEs, you ain't got much dump left to load.\n\nRather than specifying the use of OIDs by WITH OIDS clauses for each\nCREATE TABLE in a dump, couldn't we do it by adding a SET command that\ntoggles the 'use_oids' GUC option prior to every CREATE TABLE? That way,\na user concerned with portability could fairly easily strip out (or just\nignore) the SET commands.\n\nCheers,\n\nNeil\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "05 Mar 2003 08:54:58 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "On Wed, 2003-03-05 at 08:54, Neil Conway wrote:\n> On Thu, 2003-02-27 at 02:30, Tom Lane wrote:\n> > It's moving in the wrong direction. We've been slowly eliminating\n> > unnecessary nonstandardisms in pg_dump output; this puts in a new one\n> > in a quite fundamental place. You could perhaps expect another DB\n> > to drop commands it didn't understand like SET SEARCH_PATH ... but if\n> > it drops all your CREATE TABLEs, you ain't got much dump left to load.\n> \n> Rather than specifying the use of OIDs by WITH OIDS clauses for each\n> CREATE TABLE in a dump, couldn't we do it by adding a SET command that\n> toggles the 'use_oids' GUC option prior to every CREATE TABLE? That way,\n> a user concerned with portability could fairly easily strip out (or just\n> ignore) the SET commands.\n\nToggling the SET command prior to each table creation? Thats an\nexcellent idea. It should also allow us to easily transition to the\ndefault being off after a release or two.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "05 Mar 2003 09:05:56 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Rather than specifying the use of OIDs by WITH OIDS clauses for each\n> CREATE TABLE in a dump, couldn't we do it by adding a SET command that\n> toggles the 'use_oids' GUC option prior to every CREATE TABLE?\n\nSeems better than cluttering the CREATE TABLE itself with them, I guess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Mar 2003 10:20:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> >> As I recall, one thing people did not want was for pg_dump to plaster\n> >> WITH OIDS or WITHOUT OIDS on every single CREATE TABLE, as this would\n> >> pretty much destroy any shot at loading PG dumps into any other\n> >> database.\n> \n> > Ummm...what about SERIAL columns, ALTER TABLE / SET STATS, SET STORAGE,\n> > custom types, 'btree' in CREATE INDEX, SET SEARCH_PATH, '::\" cast operator,\n> > stored procedures, rules, etc. 
- how is adding WITH OIDS going to change\n> > that?!\n> \n> It's moving in the wrong direction. We've been slowly eliminating\n> unnecessary nonstandardisms in pg_dump output; this puts in a new one\n> in a quite fundamental place. You could perhaps expect another DB\n> to drop commands it didn't understand like SET SEARCH_PATH ... but if\n> it drops all your CREATE TABLEs, you ain't got much dump left to load.\n\nWhy was the schema path called search_path rather than schema_path? \nStandards?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 6 Mar 2003 16:11:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <[email protected]> writes:\n> > Rather than specifying the use of OIDs by WITH OIDS clauses for each\n> > CREATE TABLE in a dump, couldn't we do it by adding a SET command that\n> > toggles the 'use_oids' GUC option prior to every CREATE TABLE?\n> \n> Seems better than cluttering the CREATE TABLE itself with them, I guess.\n\nIt would be good to somehow SET the use_oids GUC value on restore start,\nand just use SET when the table is different than the default, but then\nthere is no mechanism to do that when you restore a single table.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 6 Mar 2003 16:13:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Why was the schema path called search_path rather than schema_path? \n\nNobody suggested anything different ... it's a bit late now ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Mar 2003 16:14:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Why was the schema path called search_path rather than schema_path? \n> \n> Nobody suggested anything different ... it's a bit late now ...\n\nI started to think about it when we were talking about a config_path\nvariable. Search path then looked confusing. :-(\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 6 Mar 2003 16:16:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "On Thu, 2003-03-06 at 16:13, Bruce Momjian wrote:\n> It would be good to somehow SET the use_oids GUC value on restore start,\n> and just use SET when the table is different than the default, but then\n> there is no mechanism to do that when you restore a single table.\n\nWhat if the default value changes?\n\nIMHO, running a SET per CREATE TABLE isn't too ugly...\n\nCheers,\n\nNeil\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "06 Mar 2003 16:22:54 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "Neil Conway wrote:\n> On Thu, 2003-03-06 at 16:13, Bruce Momjian wrote:\n> > It would be good to somehow SET the use_oids GUC value on restore start,\n> > and just use SET when the table is different than the default, but then\n> > there is no mechanism to do that when you restore a single table.\n> \n> What if the default value changes?\n> \n> IMHO, running a SET per CREATE TABLE isn't too ugly...\n\nNot ugly, but a little noisy. However, my idea of having a single SET\nat the top is never going to work, so I don't have a better idea.\n\nThe killer for me is that you are never going to know the GUC default\nwhen you are loading, so we are _always_ going to have that SET for each\ntable.\n\nI suppose we could set the default to off, and set it ON in the dump\nonly when we want OID. If they set GUC to on, they will get oid's from\nthe load, but it will cut down on the cruft and over time, they will\nonly have the SET for cases where they really want an oid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 6 Mar 2003 20:50:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "> Neil Conway wrote:\n> > On Thu, 2003-03-06 at 16:13, Bruce Momjian wrote:\n> > > It would be good to somehow SET the use_oids GUC value on restore\n> start,\n> > > and just use SET when the table is different than the default, but\n> then\n> > > there is no mechanism to do that when you restore a single table.\n> > \n> > What if the default value changes?\n> > \n> > IMHO, running a SET per CREATE TABLE isn't too ugly...\n> \n> Not ugly, but a little noisy. However, my idea of having a single SET\n> at the top is never going to work, so I don't have a better idea.\n\nWhy isn't this done on a per-session basis? Having a session setting for the\ncommon case and a CREATE-TABLE clause for the specifics sounds natural.\n\nWhen a single table needs to be restored all one needs to do is change the\nsession setting before running the CREATE command. The alternative clause\nin CREATE-TABLE statements would be used as a cleaner way of expressing\nthe same thing without affecting the session, when the statement's text can\nbe entered manually (as opposed to loading it from an existing dumpfile).\n\nThe default for the session setting could be set in the configuration file\nthen.\n\nregards, Daniel Alvarez Arribas <[email protected]>\n\n\n-- \n+++ GMX - Mail, Messaging & more http://www.gmx.net +++\nBitte lächeln! 
Fotogalerie online mit GMX ohne eigene Homepage!\n\n", "msg_date": "Sat, 8 Mar 2003 21:01:32 +0100 (MET)", "msg_from": "daniel alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "daniel alvarez <[email protected]> writes:\n>> Not ugly, but a little noisy. However, my idea of having a single SET\n>> at the top is never going to work, so I don't have a better idea.\n\n> Why isn't this done on a per-session basis?\n\nBecause pg_dump can't know what the session default will be when the\ndump is reloaded. The scheme you are proposing will only succeed in\nmaking pg_dump unreliable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Mar 2003 15:09:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys " }, { "msg_contents": "> daniel alvarez <[email protected]> writes:\n> >> Not ugly, but a little noisy. However, my idea of having a single SET\n> >> at the top is never going to work, so I don't have a better idea.\n> \n> > Why isn't this done on a per-session basis?\n> \n> Because pg_dump can't know what the session default will be when the\n> dump is reloaded. The scheme you are proposing will only succeed in\n> making pg_dump unreliable.\n\nOuch. Why is this? Doesn't it read the config because of portability\nreasons?\n\n-- \n+++ GMX - Mail, Messaging & more http://www.gmx.net +++\nBitte lächeln! Fotogalerie online mit GMX ohne eigene Homepage!\n\n", "msg_date": "Sat, 8 Mar 2003 21:16:32 +0100 (MET)", "msg_from": "daniel alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OIDs as keys" }, { "msg_contents": "daniel alvarez wrote:\n> > daniel alvarez <[email protected]> writes:\n> > >> Not ugly, but a little noisy. However, my idea of having a single SET\n> > >> at the top is never going to work, so I don't have a better idea.\n> > \n> > > Why isn't this done on a per-session basis?\n> > \n> > Because pg_dump can't know what the session default will be when the\n> > dump is reloaded. The scheme you are proposing will only succeed in\n> > making pg_dump unreliable.\n> \n> Ouch. Why is this? Doesn't it read the config because of portability\n> reasons?\n\nRemember the dump output is just an SQL script, so there is no 'logic'\nin the script, and it can be loaded right into psql.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 10 Mar 2003 11:05:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OIDs as keys" } ]
[ { "msg_contents": "Hi there,\n\nIs there any performance tunning that i can make to the postgresql server to\nmake the server more stable.\nIn my case the main usage of the DB is from an website which has quite lots\nof visitors.\nIn the last weeks the SQL server crashes every day !\n\nBefore complete crash the transaction start to work slower.\n\nI would appreciate any suggestion regarding this issue !\n\nThank you !\n\nCatalin\nwww.xclub.ro\n\n\n", "msg_date": "Thu, 27 Feb 2003 14:19:23 +0200", "msg_from": "\"Catalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Daily crash" }, { "msg_contents": "On 27 Feb 2003 at 14:19, Catalin wrote:\n> Is there any performance tunning that i can make to the postgresql server to\n> make the server more stable.\n> In my case the main usage of the DB is from an website which has quite lots\n> of visitors.\n> In the last weeks the SQL server crashes every day !\n\nCould you please post the database logs? This is weird as far as I can guess.\n\nBye\n Shridhar\n\n--\nI'm frequently appalled by the low regard you Earthmen have for life.\t\t-- \nSpock, \"The Galileo Seven\", stardate 2822.3\n\n", "msg_date": "Thu, 27 Feb 2003 17:54:00 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Daily crash" }, { "msg_contents": "i'm afraid i don't have any logs.\ni have the default redhat instalation of postgres 7.0.2 which comes with no\nlogging enabled.\n\ni will try to enable logging and post the logs to the list !\n\nanyway in PHP when trying to connect to the crashed SQL server i get the\nerror message:\nToo many connections...\n\nCatalin\n\n----- Original Message -----\nFrom: Shridhar Daithankar\nTo: [email protected]\nSent: Thursday, February 27, 2003 2:24 PM\nSubject: Re: [PERFORM] Daily crash\n\n\nOn 27 Feb 2003 at 14:19, Catalin wrote:\n> Is there any performance tunning that i can make to the postgresql server\nto\n> make the server more stable.\n> In my case the main usage of the DB is from an website which has quite\nlots\n> of visitors.\n> In the last weeks the SQL server crashes every day !\n\nCould you please post the database logs? This is weird as far as I can\nguess.\n\nBye\n Shridhar\n\n--\nI'm frequently appalled by the low regard you Earthmen have for life. --\nSpock, \"The Galileo Seven\", stardate 2822.3\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Thu, 27 Feb 2003 14:51:59 +0200", "msg_from": "\"Catalin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Daily crash" }, { "msg_contents": "On 27 Feb 2003 at 14:51, Catalin wrote:\n\n> i'm afraid i don't have any logs.\n> i have the default redhat instalation of postgres 7.0.2 which comes with no\n> logging enabled.\n> \n> i will try to enable logging and post the logs to the list !\n> \n> anyway in PHP when trying to connect to the crashed SQL server i get the\n> error message:\n> Too many connections...\n\nTell me. Does that sound like a crash? To me the server is well alive.\n\nAnd if you are using default configuration, you must be experiencing a real \npathetic performance for a real world load.\n\nTry tuning the database. There are too many tips to put in one place. but \nediting /var/lib/data/postgresql/postgresql.conf ( I hope I am right, I am too \nused to do pg_ctl by hand. Never used services provided by disro.s) is first \nstep. 
You need to read the admin guide as well.\n\nHTH\n\nBye\n Shridhar\n\n--\nGlib's Fourth Law of Unreliability:\tInvestment in reliability will increase \nuntil it exceeds the\tprobable cost of errors, or until someone insists on \ngetting\tsome useful work done.\n\n", "msg_date": "Thu, 27 Feb 2003 18:33:15 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Daily crash" }, { "msg_contents": "On Thu, Feb 27, 2003 at 14:51:59 +0200,\n Catalin <[email protected]> wrote:\n> i'm afraid i don't have any logs.\n> i have the default redhat instalation of postgres 7.0.2 which comes with no\n> logging enabled.\n\nYou should upgrade to 7.2.4 or 7.3.2. (7.3 has schemas and that may make\nupgrading harder which is why you might consider just going to 7.2.4.)\n\n> i will try to enable logging and post the logs to the list !\n> \n> anyway in PHP when trying to connect to the crashed SQL server i get the\n> error message:\n> Too many connections...\n\nYou are going to want the number of allowed connections to match the number\nof simultaneous requests possible from the web server. Typically this is\nthe maximum number of allowed apache processes which defaults to something\nlike 150. The default maximum number of connections to postgres is about\n32. You will also want to raise the number of shared buffers to about\n1000 (assuming you have at least a couple hundred of megabytes of memory),\nnot just to 2 times the new maximum number of connections. This may require\nto change the maximum amount of shared memory allowed by your operating\nsystem. Also take a look at increasing sort mem as well. You don't want\nthis too high because each sort gets this much memory and in your situation\nit may be that you could have a lot of sorts runing at the same time\n(dpending on the types of queries being done).\n", "msg_date": "Thu, 27 Feb 2003 07:30:11 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Daily crash" } ]
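The parameters mentioned in the last two replies live in postgresql.conf (the path quoted above is approximate; on a Red Hat install it is typically /var/lib/pgsql/data/postgresql.conf). An illustrative 7.2/7.3-era starting point, not a recommendation: max_connections should match the web server's maximum number of Apache processes, and raising shared_buffers may also require raising the kernel's SHMMAX.

# postgresql.conf -- restart the postmaster after editing
max_connections = 150   # at least as many as Apache can open at once
shared_buffers = 1000   # 1000 x 8 kB pages, roughly 8 MB of buffer cache
sort_mem = 4096         # kB per sort; keep modest when many sorts can run concurrently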
[ { "msg_contents": "hi all,\n\nwe have a tsearch index on the following table:\n\n Table \"public.sentences\"\n Column | Type | Modifiers\n---------------+---------+--------------------\n sentence_id | bigint | not null\n puid | integer |\n py | integer |\n journal_id | integer |\n sentence_pos | integer | not null\n sentence_type | integer | not null default 0\n sentence | text | not null\n sentenceidx | txtidx | not null\nIndexes: sentences_pkey primary key btree (sentence_id),\n sentence_uni unique btree (puid, sentence_pos, sentence),\n sentenceidx_i gist (sentenceidx),\n sentences_puid_i btree (puid),\n sentences_py_i btree (py)\n\nthe table contains 50.554.768 rows and is vacuum full analyzed.\n\nThe sentenceidx has been filled NOT USING txt2txtidx, but a custom \nimplementation that should have had the same effect (parsing into \nwords/phrases, deleting stop words). Nevertheless, might this be the \nreason for the very bad performance of the index, or is the table \"just\" \nto big (I hope not!)?\n\nNote that the index on sentenceidx has not been clustered, yet. I wanted \nto ask first whether I might need to refill the column sentenceidx using \ntxt2txtidx. (with so many rows every action has to be reconsidered ;-) )\n\nEXPLAIN ANALYZE\nSELECT sentence FROM sentences WHERE sentenceidx @@ 'amino\\\\ acid';\n\n\n QUERY PLAN\n-------------------------------------------------------------------\n Index Scan using sentenceidx_i on sentences\n (cost=0.00..201327.85 rows=50555 width=148)\n (actual time=973940.41..973940.41 rows=0 loops=1)\n\n Index Cond: (sentenceidx @@ '\\'amino acid\\''::query_txt)\n Filter: (sentenceidx @@ '\\'amino acid\\''::query_txt)\n\n Total runtime: 973941.09 msec\n(4 rows)\n\nthank you for any thoughts, hints, tips!\nChantal\n\n", "msg_date": "Thu, 27 Feb 2003 14:21:54 +0100", "msg_from": "Chantal Ackermann <[email protected]>", "msg_from_op": true, "msg_subject": "tsearch performance" } ]
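No reply to this message appears in the archive. One plausible culprit, given the description, is that the custom loader produced lexemes that differ from what tsearch's own parser generates at query time (casing, stop-word handling, phrase splitting), which would leave the lossy GiST index returning far more candidate rows than the @@ filter finally accepts. A sketch of rebuilding the column with txt2txtidx, the contrib/tsearch function named in the message; on 50 million rows this is a long, I/O-heavy operation:

-- Refill the txtidx column with tsearch's own parser, then rebuild the index.
UPDATE sentences SET sentenceidx = txt2txtidx(sentence);
DROP INDEX sentenceidx_i;
CREATE INDEX sentenceidx_i ON sentences USING gist (sentenceidx);
VACUUM ANALYZE sentences;
-- CLUSTER sentenceidx_i ON sentences;   -- the clustering step the poster was still considering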
[ { "msg_contents": "Dear Catalin,\n\n\nWhat I understand from ur post is,\n1. U are using PHP for connection. what version ??? assuming u are on PHP\nversion >4.2.X\n2. Ur Postgresql server is not tuned to handle loads.\n\nWhat U can try is ......\n\n1.Make sure that ur PHP scripts close the connection when transaction is\ncomplete (HOW??? see after pg_connect\n\nU see a pg_close function of PHP)\n a. Use pg_pconnect and forget about pg_connect and pg_close\n2. Increse the limit of connection to be made to PostgreSQL this can be\ndone as said by Shridhar the default is 32\n3. For God sake Upgrade to PostgreSQL 7.3.2 and PHP 4.3.1 you are missing a\nlot with that old versions.\n\n\n\nRegards,\nV Kashyap\n\n================================\nSome people think it's holding on that makes one strong;\n sometimes it's letting go.\n================================\n\n\n", "msg_date": "Thu, 27 Feb 2003 20:14:54 +0530", "msg_from": "Aspire Something <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Performance] Daily Crash" }, { "msg_contents": "i have php 4.2.3\ni use pg_pConnect and no pg_Close.\n\nthanks for yours advices.\ni will upgrade to postgresql 7.2.3 asap \nand see if there are improvments !\n\nthanks again !\n\nCatalin\n\n----- Original Message ----- \nFrom: Aspire Something \nTo: [email protected] \nSent: Thursday, February 27, 2003 4:44 PM\nSubject: Re: [PERFORM] [Performance] Daily Crash\n\n\nDear Catalin,\n\n\nWhat I understand from ur post is,\n1. U are using PHP for connection. what version ??? assuming u are on PHP\nversion >4.2.X\n2. Ur Postgresql server is not tuned to handle loads.\n\nWhat U can try is ......\n\n1.Make sure that ur PHP scripts close the connection when transaction is\ncomplete (HOW??? see after pg_connect\n\nU see a pg_close function of PHP)\n a. Use pg_pconnect and forget about pg_connect and pg_close\n2. Increse the limit of connection to be made to PostgreSQL this can be\ndone as said by Shridhar the default is 32\n3. For God sake Upgrade to PostgreSQL 7.3.2 and PHP 4.3.1 you are missing a\nlot with that old versions.\n\n\n\nRegards,\nV Kashyap\n\n================================\nSome people think it's holding on that makes one strong;\n sometimes it's letting go.\n================================\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Thu, 27 Feb 2003 16:53:22 +0200", "msg_from": "\"Catalin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Performance] Daily Crash" }, { "msg_contents": "Catalin,\n\n> thanks for yours advices.\n> i will upgrade to postgresql 7.2.3 asap\n> and see if there are improvments !\n\nUm, that's 7.2.4. 
7.2.3 has a couple of bugs in it.\n\nAlso, you are going to have to edit your postgresql.conf file per the \nsuggestions already made on this list.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 27 Feb 2003 08:53:08 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Performance] Daily Crash" }, { "msg_contents": "[email protected] wrote:\n> i have php 4.2.3\n> i use pg_pConnect and no pg_Close.\n> \n> thanks for yours advices.\n> i will upgrade to postgresql 7.2.3 asap\n> and see if there are improvments !\n> \n> thanks again !\n> \n> Catalin\n> \n> ----- Original Message -----\n> From: Aspire Something\n> To: [email protected]\n> Sent: Thursday, February 27, 2003 4:44 PM\n> Subject: Re: [PERFORM] [Performance] Daily Crash\n> \n> \n> Dear Catalin,\n> \n> \n> What I understand from ur post is,\n> 1. U are using PHP for connection. what version ??? assuming u are on\n> PHP version >4.2.X\n> 2. Ur Postgresql server is not tuned to handle loads.\n> \n> What U can try is ......\n> \n> 1.Make sure that ur PHP scripts close the connection when\n> transaction is complete (HOW??? see after pg_connect\n> \n> U see a pg_close function of PHP)\n> a. Use pg_pconnect and forget about pg_connect and pg_close\n> 2. Increse the limit of connection to be made to PostgreSQL this can\n> be done as said by Shridhar the default is 32\n> 3. For God sake Upgrade to PostgreSQL 7.3.2 and PHP 4.3.1 you are\n> missing a lot with that old versions.\n> \n> \n> \n> Regards,\n> V Kashyap\n> \n> ================================\n> Some people think it's holding on that makes one strong;\n> sometimes it's letting go.\n> ================================\n> \n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 2: you can get off all\n> lists at once with the unregister command (send \"unregister\n> YourEmailAddressHere\" to [email protected]) \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: Have you checked our\n> extensive FAQ? \n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nyou can use no persistent link to database\nsee in php.ini\n\npgsql.allow_persistent = Off\n", "msg_date": "Thu, 27 Feb 2003 19:51:01 +0100", "msg_from": "\"philip johnson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Performance] Daily Crash" }, { "msg_contents": "how often should i run VACUUM to keep the thing tunned ?\n\nCatalin\n\n----- Original Message ----- \nFrom: philip johnson \nTo: Catalin ; [email protected] \nSent: Thursday, February 27, 2003 8:51 PM\nSubject: RE: [PERFORM] [Performance] Daily Crash\n\n\[email protected] wrote:\n> i have php 4.2.3\n> i use pg_pConnect and no pg_Close.\n> \n> thanks for yours advices.\n> i will upgrade to postgresql 7.2.3 asap\n> and see if there are improvments !\n> \n> thanks again !\n> \n> Catalin\n> \n> ----- Original Message -----\n> From: Aspire Something\n> To: [email protected]\n> Sent: Thursday, February 27, 2003 4:44 PM\n> Subject: Re: [PERFORM] [Performance] Daily Crash\n> \n> \n> Dear Catalin,\n> \n> \n> What I understand from ur post is,\n> 1. U are using PHP for connection. what version ??? assuming u are on\n> PHP version >4.2.X\n> 2. Ur Postgresql server is not tuned to handle loads.\n> \n> What U can try is ......\n> \n> 1.Make sure that ur PHP scripts close the connection when\n> transaction is complete (HOW??? see after pg_connect\n> \n> U see a pg_close function of PHP)\n> a. 
Use pg_pconnect and forget about pg_connect and pg_close\n> 2. Increse the limit of connection to be made to PostgreSQL this can\n> be done as said by Shridhar the default is 32\n> 3. For God sake Upgrade to PostgreSQL 7.3.2 and PHP 4.3.1 you are\n> missing a lot with that old versions.\n> \n> \n> \n> Regards,\n> V Kashyap\n> \n> ================================\n> Some people think it's holding on that makes one strong;\n> sometimes it's letting go.\n> ================================\n> \n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 2: you can get off all\n> lists at once with the unregister command (send \"unregister\n> YourEmailAddressHere\" to [email protected]) \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: Have you checked our\n> extensive FAQ? \n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nyou can use no persistent link to database\nsee in php.ini\n\npgsql.allow_persistent = Off\n", "msg_date": "Thu, 27 Feb 2003 22:19:42 +0200", "msg_from": "\"Catalin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Performance] Daily Crash" }, { "msg_contents": "so i did the server upgrade to postgresql 7.3.2\nand tunned it like you ppl said.\n\nit works great.\nplus my DB on postgresql 7.0 had 640 MB, including indexes, etc (files)\nnow it has only 140MB.\nthis is much better !\n\nthank you for your help !\n\nCatalin\n\n----- Original Message ----- \nFrom: Catalin \nTo: [email protected] \nSent: Thursday, February 27, 2003 10:19 PM\nSubject: Re: [PERFORM] [Performance] Daily Crash\n\n\nhow often should i run VACUUM to keep the thing tunned ?\n\nCatalin\n\n----- Original Message ----- \nFrom: philip johnson \nTo: Catalin ; [email protected] \nSent: Thursday, February 27, 2003 8:51 PM\nSubject: RE: [PERFORM] [Performance] Daily Crash\n\n\[email protected] wrote:\n> i have php 4.2.3\n> i use pg_pConnect and no pg_Close.\n> \n> thanks for yours advices.\n> i will upgrade to postgresql 7.2.3 asap\n> and see if there are improvments !\n> \n> thanks again !\n> \n> Catalin\n> \n> ----- Original Message -----\n> From: Aspire Something\n> To: [email protected]\n> Sent: Thursday, February 27, 2003 4:44 PM\n> Subject: Re: [PERFORM] [Performance] Daily Crash\n> \n> \n> Dear Catalin,\n> \n> \n> What I understand from ur post is,\n> 1. U are using PHP for connection. what version ??? assuming u are on\n> PHP version >4.2.X\n> 2. Ur Postgresql server is not tuned to handle loads.\n> \n> What U can try is ......\n> \n> 1.Make sure that ur PHP scripts close the connection when\n> transaction is complete (HOW??? see after pg_connect\n> \n> U see a pg_close function of PHP)\n> a. Use pg_pconnect and forget about pg_connect and pg_close\n> 2. Increse the limit of connection to be made to PostgreSQL this can\n> be done as said by Shridhar the default is 32\n> 3. 
For God sake Upgrade to PostgreSQL 7.3.2 and PHP 4.3.1 you are\n> missing a lot with that old versions.\n> \n> \n> \n> Regards,\n> V Kashyap\n> \n> ================================\n> Some people think it's holding on that makes one strong;\n> sometimes it's letting go.\n> ================================\n> \n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 2: you can get off all\n> lists at once with the unregister command (send \"unregister\n> YourEmailAddressHere\" to [email protected]) \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: Have you checked our\n> extensive FAQ? \n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nyou can use no persistent link to database\nsee in php.ini\n\npgsql.allow_persistent = Off\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Fri, 28 Feb 2003 15:22:18 +0200", "msg_from": "\"Catalin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Performance] Daily Crash" } ]
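A quick way to check whether the persistent PHP connections discussed in this thread are running into the backend limit is to count live backends and compare that with the configured ceiling. A minimal sketch, assuming a 7.3-era server with the statistics collector enabled so that pg_stat_activity is populated:

SELECT count(*) AS open_backends FROM pg_stat_activity;

SHOW max_connections;   -- the ceiling the count above is bumping into

On the VACUUM question asked above, a nightly plain VACUUM ANALYZE from cron is a common baseline for this kind of load; heavily updated tables may want it more often.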
[ { "msg_contents": "Murthy,\n\n> You could get mondo (http://www.microwerks.net/~hugo/), then backup your\n> system to CDs and restore it with the new filesystem layout. You might want\n> to do these backups as a matter of course?\n\nThanks for the suggestion. The problem isn't backup media ... we have a DLT \ndrive ... the problem is time. This particular application is already about \n4 weeks behind schedule because of various hardware problems. At some point, \nKevin Brown and I will take a weekend to swap the postgres files to a spare \ndisk, and re-format the data array as pass-through Linux RAID.\n\nAnd this is the last time I leave it up to the company sysadmin to buy \nhardware for a database server, even with explicit instructions ... \"Yes, I \nsaw which one you wanted, but the 2200S was on sale!\"\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 27 Feb 2003 08:57:32 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data write speed" }, { "msg_contents": "On Thu, 27 Feb 2003, Josh Berkus wrote:\n\n> Murthy,\n> \n> > You could get mondo (http://www.microwerks.net/~hugo/), then backup your\n> > system to CDs and restore it with the new filesystem layout. You might want\n> > to do these backups as a matter of course?\n> \n> Thanks for the suggestion. The problem isn't backup media ... we have a DLT \n> drive ... the problem is time. This particular application is already about \n> 4 weeks behind schedule because of various hardware problems. At some point, \n> Kevin Brown and I will take a weekend to swap the postgres files to a spare \n> disk, and re-format the data array as pass-through Linux RAID.\n> \n> And this is the last time I leave it up to the company sysadmin to buy \n> hardware for a database server, even with explicit instructions ... \"Yes, I \n> saw which one you wanted, but the 2200S was on sale!\"\n\nI still remember going round and round with a hardware engineer who was \nextolling the adaptec AIC 133 controller as a great raid controller. I \nfinally made him test it instead of just reading the pamphlet that came \nwith it... Needless to say, it couldn't hold it's own against a straight \nsymbios UW card running linux software RAID.\n\nHe's the same guy who speced my workstation with no AGP slot in it. The \nweek before he was laid off. Talk about bad timing... :-(\n\n", "msg_date": "Thu, 27 Feb 2003 10:54:49 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data write speed" } ]
[ { "msg_contents": "\nIs there any way to see all the columns in the database? I have a column \nthat needs to be changed in all the tables that use it, without having to \ncheck each table manually. In Sybase you would link syscolumns with \nsysobjects, I can only find info on pg_tables in Postgres but none on \ncolumns. I would like to write some sort of dynamic sql to complete my \ntask. Is there any way of doing this?\n\nThanks in advance\nJeandre\n\n", "msg_date": "Fri, 28 Feb 2003 13:03:49 +0200 (SAST)", "msg_from": "Jeandre du Toit <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "\nWe can fix your credit. We are very successful at getting \nbankruptcies, judgments, tax liens, foreclosures, late payments, charge-offs, \nrepossessions, and even student loans removed from a persons credit report. To find out more go to\nhttp://www.cjlinc.net.\nIf you no longer want to receive information from us just go to \[email protected].\n \n\njppqcwvxbucxknjykmaickbeaprregekfrvwt\n", "msg_date": "Sat, 1 Mar 2003 17:36:46 -0600", "msg_from": "Orito <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ABOUT YOUR CREDIT........... eyr" } ]
[ { "msg_contents": "Hello-\n\nI'm working with a MS Access database that a client wants to turn into \na multi-user database. I'm evaluating PostgreSQL for that purpose (I'd \n_really_ like to be able to recommend and open-source solution to \nthem). However, I'm running into a performance-related issue that I was \nhoping this list could help me with.\n\nI have three tables: tbl_samples (~2000 rows), tbl_tests (~4000 rows), \nand tbl_results (~20,000 rows), with one-to-many relationships between \nthem (cascading, in the order given - table definitions are attached \nbelow). I'm looking at the following query that joins these three \ntables:\n\nSELECT\n\ttbl_samples.station_id,\n\ttbl_samples.samp_date,\n\ttbl_samples.matrix,\n\ttbl_samples.samp_type_code,\n\ttbl_samples.samp_no,\n\ttbl_samples.field_samp_id,\n\ttbl_tests.method,\n\ttbl_tests.lab,\n\ttbl_results.par_code,\n\ttbl_results.val_qualifier,\n\ttbl_results.value,\n\ttbl_results.units,\n\ttbl_results.mdl,\n\ttbl_results.date_anal\nFROM\n (tbl_samples\n INNER JOIN tbl_tests USING\n\t\t(station_id,\n\t\tsamp_date,\n\t\tmatrix,\n\t\tsamp_type_code,\n\t\tsamp_no,\n\t\tsamp_bdepth,\n\t\tsamp_edepth)\n )\n INNER JOIN tbl_results USING\n\t\t(station_id,\n\t\tsamp_date,\n\t\tmatrix,\n\t\tsamp_type_code,\n\t\tsamp_no,\n\t\tsamp_bdepth,\n\t\tsamp_edepth,\n\t\tmethod);\n\nIn Access, this query runs in about a second. In PostgreSQL on the same \nmachine, it takes about 12-15 seconds for the initial rows to be \nreturned, and about 45 seconds to returns all rows. (This is consistent \nwhether I use psql, use the pgAdminII SQL window, or use Access with \nthe ODBC driver.)\n\nThis is the output from EXPLAIN:\n\nNested Loop (cost=437.73..1216.02 rows=1 width=245)\n Join Filter: (\"outer\".method = \"inner\".method)\n -> Merge Join (cost=437.73..461.38 rows=125 width=131)\n Merge Cond: ((\"outer\".matrix = \"inner\".matrix) AND \n(\"outer\".samp_edepth = \"inner\".samp_edepth) AND (\"outer\".samp_bdepth = \n\"inner\".samp_bdepth) AND (\"outer\".samp_no = \"inner\".samp_no) AND \n(\"outer\".samp_type_code = \"inner\".samp_type_code) AND \n(\"outer\".samp_date = \"inner\".samp_date) AND (\"outer\".station_id = \n\"inner\".station_id))\n -> Sort (cost=117.51..120.77 rows=1304 width=63)\n Sort Key: tbl_samples.matrix, tbl_samples.samp_edepth, \ntbl_samples.samp_bdepth, tbl_samples.samp_no, \ntbl_samples.samp_type_code, tbl_samples.samp_date, \ntbl_samples.station_id\n -> Seq Scan on tbl_samples (cost=0.00..50.04 rows=1304 \nwidth=63)\n -> Sort (cost=320.22..328.68 rows=3384 width=68)\n Sort Key: tbl_tests.matrix, tbl_tests.samp_edepth, \ntbl_tests.samp_bdepth, tbl_tests.samp_no, tbl_tests.samp_type_code, \ntbl_tests.samp_date, tbl_tests.station_id\n -> Seq Scan on tbl_tests (cost=0.00..121.84 rows=3384 \nwidth=68)\n -> Index Scan using tbl_results_pkey on tbl_results \n(cost=0.00..5.99 rows=1 width=114)\n Index Cond: ((\"outer\".station_id = tbl_results.station_id) AND \n(\"outer\".samp_date = tbl_results.samp_date) AND (\"outer\".matrix = \ntbl_results.matrix) AND (\"outer\".samp_type_code = \ntbl_results.samp_type_code) AND (\"outer\".samp_no = tbl_results.samp_no) \nAND (\"outer\".samp_bdepth = tbl_results.samp_bdepth) AND \n(\"outer\".samp_edepth = tbl_results.samp_edepth))\n\nI've done the following to try to improve performance:\n\n-postgresql.conf:\n\tincreased shared_buffers to 384\n\tincreased sort_mem to 2048\n-clustered all tables on the pkey index\n-made sure the joined fields are indexed (they are through the pkeys)\n\nAs a note, 
vm_stat shows no paging while the query is run. Also, I \nrealize that these keys are large (as in the number of fields). I'll be \ncondensing these down to sequential IDs (e.g. a SERIAL type) for a \nfurther test, but I'm curious why Access seems to outperform Postgres \nin this instance.\n\nMy question is, am I missing anything? PostgreSQL will be a hard sell \nif they have to take a performance hit.\n\nThanks for any suggestions you can provide. Sorry for the long e-mail, \nbut I wanted to provide enough info to diagnose the issue.\n\nAlex Johnson\n________________________________\nTable defs:\n\nCREATE TABLE tbl_Samples (\n Station_ID VARCHAR (25) NOT NULL,\n Samp_Date TIMESTAMP WITH TIME ZONE NOT NULL,\n Matrix VARCHAR (10) NOT NULL,\n Samp_Type_Code VARCHAR (5) NOT NULL,\n Samp_No INTEGER NOT NULL,\n Samp_BDepth DOUBLE PRECISION NOT NULL,\n Samp_EDepth DOUBLE PRECISION NOT NULL,\n Depth_units VARCHAR (3),\n Samp_start_time TIME,\n Samp_end_time TIME,\n Field_Samp_ID VARCHAR (20),\n Lab_Samp_ID VARCHAR (20),\n Samp_Meth VARCHAR (20),\n ...snip...\n PRIMARY KEY \n(Station_ID,Samp_Date,Matrix,Samp_Type_Code,Samp_No,Samp_BDepth,Samp_EDe \npth)\n);\n\nCREATE TABLE tbl_Tests (\n Station_ID VARCHAR (25) NOT NULL,\n Samp_Date TIMESTAMP WITH TIME ZONE NOT NULL,\n Matrix VARCHAR (10) NOT NULL,\n Samp_Type_Code VARCHAR (5) NOT NULL,\n Samp_No INTEGER NOT NULL,\n Samp_BDepth DOUBLE PRECISION NOT NULL,\n Samp_EDepth DOUBLE PRECISION NOT NULL,\n Method VARCHAR (50) NOT NULL,\n Lab VARCHAR (10) NOT NULL,\n Date_Rec TIMESTAMP WITH TIME ZONE,\n...snip...\n PRIMARY KEY \n(Station_ID,Samp_Date,Matrix,Samp_Type_Code,Samp_No,Samp_BDepth,Samp_EDe \npth,Method)\n);\n\nCREATE TABLE tbl_Results (\n Station_ID VARCHAR (25) NOT NULL,\n Samp_Date TIMESTAMP WITH TIME ZONE NOT NULL,\n Matrix VARCHAR (10) NOT NULL,\n Samp_Type_Code VARCHAR (5) NOT NULL,\n Samp_No INTEGER NOT NULL,\n Samp_BDepth DOUBLE PRECISION NOT NULL,\n Samp_EDepth DOUBLE PRECISION NOT NULL,\n Method VARCHAR (50) NOT NULL,\n Par_code VARCHAR (50) NOT NULL,\n Val_Qualifier VARCHAR (50) NOT NULL,\n Value DECIMAL (20,9) NOT NULL,\n...snip...\n PRIMARY KEY \n(Station_ID,Samp_Date,Matrix,Samp_Type_Code,Samp_No,Samp_BDepth,Samp_EDe \npth,Method,Par_code)\n);\n\nALTER TABLE tbl_Tests ADD CONSTRAINT REL_1 FOREIGN KEY \n(Station_ID,Samp_Date,Matrix,Samp_Type_Code,Samp_No,Samp_BDepth,Samp_EDe \npth)\n REFERENCES tbl_Samples ON DELETE CASCADE ON UPDATE CASCADE;\nALTER TABLE tbl_Results ADD CONSTRAINT REL_2 FOREIGN KEY \n(Station_ID,Samp_Date,Matrix,Samp_Type_Code,Samp_No,Samp_BDepth,Samp_EDe \npth,Method)\n REFERENCES tbl_Tests ON DELETE CASCADE ON UPDATE CASCADE;\n\n________________________________________________________________________ \n______\nA r e t e S y s t e m s\nAlexander M. Johnson, P.E.\n\n", "msg_date": "Mon, 3 Mar 2003 21:48:49 -0800", "msg_from": "Alex Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "Slow performance with join on many fields" }, { "msg_contents": "Alex Johnson <[email protected]> writes:\n> I'm looking at the following query that joins these three \n> tables:\n\n> SELECT ...\n> FROM\n> (tbl_samples\n> INNER JOIN tbl_tests USING ...\n> )\n> INNER JOIN tbl_results USING ...\n\nYou're forcing the join order; perhaps another order is preferable? 
See\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n\n> This is the output from EXPLAIN:\n\nEXPLAIN ANALYZE output would've been more useful (it would have shown\nwhether a different join order would be better, for one thing).\n\n> I've done the following to try to improve performance:\n> \tincreased shared_buffers to 384\n\nThat's on the picayune side yet. 1000 buffers or so is where you want\nto be, I think. Also, have you run ANALYZE or VACUUM ANALYZE lately?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Mar 2003 01:27:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow performance with join on many fields " }, { "msg_contents": "Tom-\nThanks for the speedy reply.\n\n> That's on the picayune side yet. 1000 buffers or so is where you want\n> to be, I think. Also, have you run ANALYZE or VACUUM ANALYZE lately?\n\nVACUUM ANALYSE did it.... (doh!...now I feel stupid). I had run VACUUM \nand VACUUM ANALYZE from pgAdmin, yesterday. After running it from the \ncommand line now, It's much improved (~ 2-3 secs). I'm now looking \ninto getting my kernel to increase the SHMAX parameter so I can bump up \nthe shared buffers some more.\n\nThanks again for the speedy help, and sorry for the obvious goof.\n\nAlex Johnson\n________________________________________________________________________ \n______\nA r e t e S y s t e m s\nAlexander Johnson, P.E.\n\n", "msg_date": "Mon, 3 Mar 2003 23:46:38 -0800", "msg_from": "Alex Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow performance with join on many fields " } ]
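The fix that resolved this thread, condensed into plain commands against the posted schema; run them after bulk loads and then re-check the plan with EXPLAIN ANALYZE:

VACUUM ANALYZE tbl_samples;
VACUUM ANALYZE tbl_tests;
VACUUM ANALYZE tbl_results;
-- and in postgresql.conf: shared_buffers of roughly 1000, which may need
-- the kernel's SHMMAX raised before the postmaster will start with it.

With fresh statistics the planner's row estimates are much closer to reality, which is what brought the query down to a couple of seconds.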
[ { "msg_contents": "Hi,\n\n \n\nI am executing a query on a table:\n\n \n\n Table \"public.measurement\"\n\n Column | Type | Modifiers\n\n------------+-----------------------+-----------\n\n assessment | integer |\n\n time | integer |\n\n value | character varying(50) |\n\nIndexes: idx_measurement_assessment btree (assessment),\n\n idx_measurement_time btree (\"time\")\n\n \n\nThe primary key of the table is a combination of assessment and time,\nand there are indexes on both assessment and time.\n\n \n\nThe query I am executing is\n\n \n\nSelect time,value\n\n>From measurement\n\nWhere assessment = ?\n\nAnd time between ? and ?\n\n \n\nThis used to run like a rocket before my database got a little larger.\nThere are now around 15 million rows in the table and it is taking a\nlong time to execute queries that get a fair number of rows back (c.300)\n\n \n\nThe database is 'VACUUM ANALYZED' regularly, and I've upped the shared\nbuffers to a significant amount.\n\n \n\nI've tried it on various machine configurations now. A dual processor\nLinux/Intel Machine with 1G of Memory, (0.5G shared buffers). A single\nprocessor Linux/Intel Machine (0.25G shared buffers) , and a Solaris\nmachine (0.25G shared buffers). I'm getting similar performance on all\nof them.\n\n \n\nAnybody see anything I've obviously done wrong? Any ways of improving\nthe performance of this query?\n\n \n\nThanks in advance.\n\n \n\nPaul McKay.\n\n \n\n \n\n======================================\n\nPaul Mckay\n\nConsultant Partner\n\nServicing Division\n\nClearwater-IT\n\ne:[email protected]\n\nt:0161 877 6090\n\nm: 07713 510946\n\n======================================\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nI am executing a query on a table:\n \n          \nTable \"public.measurement\"\n   Column  \n|        \nType          | Modifiers\n------------+-----------------------+-----------\n assessment |\ninteger              \n|\n time       |\ninteger              \n|\n value      | character\nvarying(50) |\nIndexes: idx_measurement_assessment btree (assessment),\n         idx_measurement_time\nbtree (\"time\")\n \nThe primary key of the table is a combination of assessment\nand time, and there are indexes on both assessment and time.\n \nThe query I am executing is\n \nSelect time,value\nFrom measurement\nWhere assessment = ?\nAnd time between ? and ?\n \nThis used to run like a rocket before my database got a\nlittle larger.  There are now around 15 million rows in the table and it\nis taking a long time to execute queries that get a fair number of rows back\n(c.300)\n \nThe database is  ‘VACUUM ANALYZED’ regularly,\nand I’ve upped the shared buffers to a significant amount.\n \nI’ve tried it on various machine configurations now. A\ndual processor Linux/Intel Machine with 1G of Memory, (0.5G shared\nbuffers).  A single processor Linux/Intel Machine (0.25G shared buffers) ,\nand a Solaris machine (0.25G shared buffers).  I’m getting similar\nperformance on all of them.\n \nAnybody see anything I’ve obviously done wrong?  
Any\nways of improving the performance of this query?\n \nThanks in advance.\n \nPaul McKay.\n \n \n======================================\nPaul Mckay\nConsultant Partner\nServicing Division\nClearwater-IT\ne:[email protected]\nt:0161 877 6090\nm: 07713 510946\n======================================", "msg_date": "Tue, 4 Mar 2003 14:45:18 -0000", "msg_from": "\"Paul McKay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query performance on large table" }, { "msg_contents": "Paul McKay wrote:\n> Hi,\n> \n> \n> \n> I am executing a query on a table:\n> \n> \n> \n> Table \"public.measurement\"\n> \n> Column | Type | Modifiers\n> \n> ------------+-----------------------+-----------\n> \n> assessment | integer |\n> \n> time | integer |\n> \n> value | character varying(50) |\n> \n> Indexes: idx_measurement_assessment btree (assessment),\n> \n> idx_measurement_time btree (\"time\")\n> \n> \n> \n> The primary key of the table is a combination of assessment and time, \n> and there are indexes on both assessment and time.\n> \n> \n> \n> The query I am executing is\n> \n> \n> \n> Select time,value\n> \n> From measurement\n> \n> Where assessment = ?\n> \n> And time between ? and ?\nChanging 2 indexes into one both-fields index should improve \nperformance much.\n\ncreate index ind_meas on measurement (assessment,time).\n\nRegards,\nTomasz Myrta\n\n\n", "msg_date": "Tue, 04 Mar 2003 16:09:51 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "\"Paul McKay\" <[email protected]> writes:\n> The query I am executing is\n> Select time,value\n> From measurement\n> Where assessment = ?\n> And time between ? and ?\n\nEXPLAIN ANALYZE would help you investigate this. Is it using an\nindexscan? On which index? Does forcing use of the other index\n(by temporarily dropping the preferred one) improve matters?\n\nPossibly a two-column index on both assessment and time would be\nan improvement, but it's hard to guess without knowing anything\nabout the selectivity of the two WHERE clauses.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Mar 2003 10:13:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table " }, { "msg_contents": "On Tue, Mar 04, 2003 at 02:45:18PM -0000, Paul McKay wrote:\n> \n> Select time,value\n> \n> >From measurement\n> \n> Where assessment = ?\n> \n> And time between ? and ?\n> \n\nPlease run this with EXPLAIN ANALYSE with values that slow the query\ndown. 
By bet is that you have an index which needs wider statistics\nsetting on the column to be useful, but without the output from\nEXAPLIN ANALYSE it'll be hard to tell.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 4 Mar 2003 10:15:29 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "The results were\n\nclearview=# explain analyse\nclearview-# select assessment,time\nclearview-# from measurement\nclearview-# where assessment = 53661\nclearview-# and time between 1046184261 and 1046335461;\n\nNOTICE: QUERY PLAN:\n\nIndex Scan using idx_measurement_assessment on measurement\n(cost=0.00..34668.61 rows=261 width=8) (actual time=26128.07..220584.69\nrows=503 loops=1)\nTotal runtime: 220587.06 msec\n\nEXPLAIN\n\nAfter adding the index kindly suggested by yourself and Tomasz I get,\n\nclearview=# explain analyse\nclearview-# select assessment,time\nclearview-# from measurement\nclearview-# where assessment = 53661\nclearview-# and time between 1046184261 and 1046335461;\nNOTICE: QUERY PLAN:\n\nIndex Scan using ind_measurement_ass_time on measurement\n(cost=0.00..1026.92 rows=261 width=8) (actual time=15.37..350.46\nrows=503 loops=1)\nTotal runtime: 350.82 msec\n\nEXPLAIN\n\n\nI vaguely recall doing a bit of a reorganize on this database a bit back\nand it looks like I lost the primary Key index. No wonder it was going\nslow.\n\nThanks a lot for your help.\n\nPaul Mckay.\n\n======================================\nPaul Mckay\nConsultant Partner\nServicing Division\nClearwater-IT\ne:[email protected]\nt:0161 877 6090\nm: 07713 510946\n======================================\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: 04 March 2003 15:13\nTo: Paul McKay\nCc: [email protected]\nSubject: Re: [PERFORM] Slow query performance on large table \n\n\"Paul McKay\" <[email protected]> writes:\n> The query I am executing is\n> Select time,value\n> From measurement\n> Where assessment = ?\n> And time between ? and ?\n\nEXPLAIN ANALYZE would help you investigate this. Is it using an\nindexscan? On which index? Does forcing use of the other index\n(by temporarily dropping the preferred one) improve matters?\n\nPossibly a two-column index on both assessment and time would be\nan improvement, but it's hard to guess without knowing anything\nabout the selectivity of the two WHERE clauses.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 4 Mar 2003 16:11:20 -0000", "msg_from": "\"Paul McKay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query performance on large table " }, { "msg_contents": "Tom Lane wrote:\n\n>\"Paul McKay\" <[email protected]> writes:\n> \n>\n>>The query I am executing is\n>>Select time,value\n>>From measurement\n>>Where assessment = ?\n>>And time between ? and ?\n>> \n>>\n>\n>EXPLAIN ANALYZE would help you investigate this. Is it using an\n>indexscan? On which index? 
Does forcing use of the other index\n>(by temporarily dropping the preferred one) improve matters?\n>\n>Possibly a two-column index on both assessment and time would be\n>an improvement, but it's hard to guess without knowing anything\n>about the selectivity of the two WHERE clauses.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n> \n>\n\nTom,\n\ndoes this mean that a primary key alone might not be enough? As far as I \nunderstood Paul, the PK looks quite as the newly created index does, so \n\"create index ind_meas on measurement (assessment,time)\" should perform \nthe same as \"... primary key(assessment,time)\".\nDo possibly non-optimal indices (only assessment, only time as Paul \ndescribed earlier) screw up the optimizer, igoring the better option \nusiing the PK? Obviously, the index used should be combined of \n(assessment,time) but IMHO a PK should be enough.\n\nregards,\n\nAndreas\n\n", "msg_date": "Tue, 04 Mar 2003 17:38:44 +0100", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "On Tue, 2003-03-04 at 11:11, Paul McKay wrote:\n> The results were\n> \n> clearview=# explain analyse\n> clearview-# select assessment,time\n> clearview-# from measurement\n> clearview-# where assessment = 53661\n> clearview-# and time between 1046184261 and 1046335461;\n> \n> NOTICE: QUERY PLAN:\n> \n> Index Scan using idx_measurement_assessment on measurement\n> (cost=0.00..34668.61 rows=261 width=8) (actual time=26128.07..220584.69\n> rows=503 loops=1)\n> Total runtime: 220587.06 msec\n> \n> EXPLAIN\n> \n> After adding the index kindly suggested by yourself and Tomasz I get,\n> \n> clearview=# explain analyse\n> clearview-# select assessment,time\n> clearview-# from measurement\n> clearview-# where assessment = 53661\n> clearview-# and time between 1046184261 and 1046335461;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using ind_measurement_ass_time on measurement\n> (cost=0.00..1026.92 rows=261 width=8) (actual time=15.37..350.46\n> rows=503 loops=1)\n> Total runtime: 350.82 msec\n> \n> EXPLAIN\n> \n> \n> I vaguely recall doing a bit of a reorganize on this database a bit back\n> and it looks like I lost the primary Key index. No wonder it was going\n> slow.\n> \n\nMaybe it's just me, but I get the feeling you need to work some regular\nreindexing into your maintenance schedule. Given your query is using\nbetween, I don't think it would use the index on the time field anyway\n(and explain analyze seems to be supporting this). Rewrite it so that\nyou have a and time > foo and time < bar and I think you'll see a\ndifference. With that in mind, I think your speedier query results are \ndue more to having a non-bloated index freshly created than the fact\nthat it being a dual column index.\n\nRobert Treat \n\n\n\n", "msg_date": "04 Mar 2003 12:02:29 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "Paul,\n\n> Index Scan using idx_measurement_assessment on measurement\n> (cost=0.00..34668.61 rows=261 width=8) (actual time=26128.07..220584.69\n> rows=503 loops=1)\n> Total runtime: 220587.06 msec\n\nThese query results say to me that you need to do both a VACUUM FULL and a \nREINDEX on this table. 
The 26-second delay before returning the first row \nsays \"table/index with lots of dead pages\" to me.\n\nFor the future, you should consider dramatically increasing your FSM settings \nand working a regular VACUUM FULL and REINDEX into your maintainence jobs.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 4 Mar 2003 09:14:44 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "I used the between .. and in a vain attempt to improve performance!\nRunning with < and > improves the performance again by about 10 times.\n\nThe explain's below were ran on a test server I was using (not the live\nserver) where I had recreated the database in order to investigate\nmatters, so all the indexes were newly created anyway. The dual column\nindex was the key (literally).\n\n\n======================================\nPaul Mckay\nConsultant Partner\nServicing Division\nClearwater-IT\ne:[email protected]\nt:0161 877 6090\nm: 07713 510946\n======================================\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Robert\nTreat\nSent: 04 March 2003 17:02\nTo: Paul McKay\nCc: 'Tom Lane'; [email protected]\nSubject: Re: [PERFORM] Slow query performance on large table\n\nOn Tue, 2003-03-04 at 11:11, Paul McKay wrote:\n> The results were\n> \n> clearview=# explain analyse\n> clearview-# select assessment,time\n> clearview-# from measurement\n> clearview-# where assessment = 53661\n> clearview-# and time between 1046184261 and 1046335461;\n> \n> NOTICE: QUERY PLAN:\n> \n> Index Scan using idx_measurement_assessment on measurement\n> (cost=0.00..34668.61 rows=261 width=8) (actual\ntime=26128.07..220584.69\n> rows=503 loops=1)\n> Total runtime: 220587.06 msec\n> \n> EXPLAIN\n> \n> After adding the index kindly suggested by yourself and Tomasz I get,\n> \n> clearview=# explain analyse\n> clearview-# select assessment,time\n> clearview-# from measurement\n> clearview-# where assessment = 53661\n> clearview-# and time between 1046184261 and 1046335461;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using ind_measurement_ass_time on measurement\n> (cost=0.00..1026.92 rows=261 width=8) (actual time=15.37..350.46\n> rows=503 loops=1)\n> Total runtime: 350.82 msec\n> \n> EXPLAIN\n> \n> \n> I vaguely recall doing a bit of a reorganize on this database a bit\nback\n> and it looks like I lost the primary Key index. No wonder it was going\n> slow.\n> \n\nMaybe it's just me, but I get the feeling you need to work some regular\nreindexing into your maintenance schedule. Given your query is using\nbetween, I don't think it would use the index on the time field anyway\n(and explain analyze seems to be supporting this). Rewrite it so that\nyou have a and time > foo and time < bar and I think you'll see a\ndifference. 
With that in mind, I think your speedier query results are \ndue more to having a non-bloated index freshly created than the fact\nthat it being a dual column index.\n\nRobert Treat \n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Tue, 4 Mar 2003 17:19:03 -0000", "msg_from": "\"Paul McKay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "Robert Treat <[email protected]> writes:\n> Maybe it's just me, but I get the feeling you need to work some regular\n> reindexing into your maintenance schedule.\n\nOr at least, more vacuuming...\n\n> Given your query is using\n> between, I don't think it would use the index on the time field anyway\n> (and explain analyze seems to be supporting this). Rewrite it so that\n> you have a and time > foo and time < bar and I think you'll see a\n> difference.\n\nNo, you won't, because that's exactly what BETWEEN is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Mar 2003 12:20:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table " }, { "msg_contents": "Andreas Pflug wrote:\n\n> Tom,\n> \n> does this mean that a primary key alone might not be enough? As far as I \n> understood Paul, the PK looks quite as the newly created index does, so \n> \"create index ind_meas on measurement (assessment,time)\" should perform \n> the same as \"... primary key(assessment,time)\".\n> Do possibly non-optimal indices (only assessment, only time as Paul \n> described earlier) screw up the optimizer, igoring the better option \n> usiing the PK? Obviously, the index used should be combined of \n> (assessment,time) but IMHO a PK should be enough.\n> \n> regards,\n> \n> Andreas\nYou are right - primary key should be ok, but Paul lost it. psql \\d \nshows primary key indexes, but in this case there was no such primary key.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Tue, 04 Mar 2003 18:20:57 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "Robert Treat wrote:\n\n> Maybe it's just me, but I get the feeling you need to work some regular\n> reindexing into your maintenance schedule. Given your query is using\n> between, I don't think it would use the index on the time field anyway\n> (and explain analyze seems to be supporting this). Rewrite it so that\n> you have a and time > foo and time < bar and I think you'll see a\n> difference. 
With that in mind, I think your speedier query results are \n> due more to having a non-bloated index freshly created than the fact\n> that it being a dual column index.\n> \n> Robert Treat \nDo you know anything about between, what should we know?\nI made some tests, and there was no noticable difference between them:\n\npvwatch=# EXPLAIN analyze * from stats where hostid=1 and stp between 1 \nand 2;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------\n Index Scan using ind_stats on stats (cost=0.00..6.01 rows=1 width=28) \n(actual time=0.00..0.00 rows=0 loops=1)\n Index Cond: ((hostid = 1) AND (stp >= 1) AND (stp <= 2))\n Total runtime: 0.00 msec\n(3 rows)\n\npvwatch=# EXPLAIN analyze SELECT * from stats where hostid=1 and stp> 1 \nand stp<2;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------\n Index Scan using ind_stats on stats (cost=0.00..6.01 rows=1 width=28) \n(actual time=0.00..0.00 rows=0 loops=1)\n Index Cond: ((hostid = 1) AND (stp > 1) AND (stp < 2))\n Total runtime: 0.00 msec\n(3 rows)\n\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Tue, 04 Mar 2003 18:29:30 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "Andreas Pflug <[email protected]> writes:\n> \"create index ind_meas on measurement (assessment,time)\" should perform \n> the same as \"... primary key(assessment,time)\".\n\nSure.\n\n> Do possibly non-optimal indices (only assessment, only time as Paul \n> described earlier) screw up the optimizer, igoring the better option \n> usiing the PK?\n\nOne would like to think the optimizer will make the right choice. But\nusing a two-column index just because it's there isn't necessarily the\nright choice. The two-column index will certainly be bulkier and more\nexpensive to scan, so if there's a one-column index that's nearly as\nselective, it might be a better choice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Mar 2003 12:53:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table " }, { "msg_contents": "Tomasz Myrta wrote:\n\n> You are right - primary key should be ok, but Paul lost it. psql \\d \n> shows primary key indexes, but in this case there was no such primary \n> key.\n>\n> Regards,\n> Tomasz Myrta\n>\nOk,\n\nthen my view of the world is all right again.\n\nRe Tom Lane\n\n> One would like to think the optimizer will make the right choice. But\n> using a two-column index just because it's there isn't necessarily the\n> right choice. The two-column index will certainly be bulkier and more\n> expensive to scan, so if there's a one-column index that's nearly as\n> selective, it might be a better choice.\n\n\nIf I know that the access pattern of my app looks as if it will need a \nmultipart index I should create it. If the optimizer finds out, a \nsimpler one will fit better, all right, it knows better (if properly \nVACUUMed :-). But it's still good practice to offer complete indices. \nWill pgsql use a multipart index as efficiently for simpler queries as a \nshorter one covering only the first columns? In this example, the \n(assessment, time) index could replace the (accessment) index, but \ncertainly not the (time) index. 
I tend to design longer indices with \nhopefully valuable columns.\n\nIn this context:\n From MSSQL, I know \"covering indices\". Imagine a table t with many \ncolumns, and an index on (a,b,c).\nin MSSQL, SELECT c from t where (a ... AND b...) will use that index to \nretrieve the c column value also without touching the row data. In a \nsense, the index is used as an alternative table. Does pgsql profit from \nthis kind of indices also?\n\nRegards,\n\nAndreas\n\n", "msg_date": "Tue, 04 Mar 2003 22:45:36 +0100", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "\nHopefully you guys can help me with another query I've got that's\nrunning slow.\n\nThis time it's across two tables I have\n\nclearview=# \\d panconversation\n Table \"panconversation\"\n Column | Type | Modifiers\n-------------+---------+-----------\n assessment | integer | not null\n interface | integer |\n source | integer |\n destination | integer |\n protocol | integer |\nIndexes: idx_panconversation_destination,\n idx_panconversation_interface,\n idx_panconversation_protocol,\n idx_panconversation_source\nPrimary key: panconversation_pkey\nUnique keys: unq_panconversation\nTriggers: RI_ConstraintTrigger_52186648,\n RI_ConstraintTrigger_52186654,\n RI_ConstraintTrigger_52186660,\n RI_ConstraintTrigger_52186666\n\nPrimary key is assessment\n\nAlong with the table I was dealing with before, with the index I'd\nmislaid put back in\n\nclearview=# \\d measurement\n Table \"measurement\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\n assessment | integer |\n time | integer |\n value | character varying(50) |\nIndexes: idx_measurement_assessment,\n idx_measurement_time,\n ind_measurement_ass_time\n\nThe 'explain analyse' of the query I am running is rather evil.\n\nclearview=# explain analyse select source,value\nclearview-# from measurement, PANConversation\nclearview-# where PANConversation.assessment =\nmeasurement.assessment\nclearview-# and Interface = 11\nclearview-# and Time > 1046184261 and Time < 1046335461\nclearview-# ;\nNOTICE: QUERY PLAN:\n\nHash Join (cost=1532.83..345460.73 rows=75115 width=23) (actual\ntime=1769.84..66687.11 rows=16094 loops=1)\n -> Seq Scan on measurement (cost=0.00..336706.07 rows=418859\nwidth=15) (actual time=1280.11..59985.47 rows=455788 loops=1)\n -> Hash (cost=1498.21..1498.21 rows=13848 width=8) (actual\ntime=253.49..253.49 rows=0 loops=1)\n -> Seq Scan on panconversation (cost=0.00..1498.21 rows=13848\nwidth=8) (actual time=15.64..223.18 rows=13475 loops=1)\nTotal runtime: 66694.82 msec\n\nEXPLAIN\n\nAnybody shed any light on why the indexes I created aren't being used,\nand I have these nasty sequential scans?\n\nThanks in advance,\n\nPaul.\n======================================\nPaul Mckay\nConsultant Partner\nServicing Division\nClearwater-IT\ne:[email protected]\nt:0161 877 6090\nm: 07713 510946\n======================================\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomasz\nMyrta\nSent: 04 March 2003 17:21\nTo: Andreas Pflug\nCc: [email protected]\nSubject: Re: [PERFORM] Slow query performance on large table\n\nAndreas Pflug wrote:\n\n> Tom,\n> \n> does this mean that a primary key alone might not be enough? 
As far as\nI \n> understood Paul, the PK looks quite as the newly created index does,\nso \n> \"create index ind_meas on measurement (assessment,time)\" should\nperform \n> the same as \"... primary key(assessment,time)\".\n> Do possibly non-optimal indices (only assessment, only time as Paul \n> described earlier) screw up the optimizer, igoring the better option \n> usiing the PK? Obviously, the index used should be combined of \n> (assessment,time) but IMHO a PK should be enough.\n> \n> regards,\n> \n> Andreas\nYou are right - primary key should be ok, but Paul lost it. psql \\d \nshows primary key indexes, but in this case there was no such primary\nkey.\n\nRegards,\nTomasz Myrta\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n", "msg_date": "Wed, 5 Mar 2003 09:47:51 -0000", "msg_from": "\"Paul McKay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "Paul McKay wrote:\n> Hopefully you guys can help me with another query I've got that's\n> running slow.\n> \n> This time it's across two tables I have\n> \n> clearview=# \\d panconversation\n> Table \"panconversation\"\n> Column | Type | Modifiers\n> -------------+---------+-----------\n> assessment | integer | not null\n> interface | integer |\n> source | integer |\n> destination | integer |\n> protocol | integer |\n> Indexes: idx_panconversation_destination,\n> idx_panconversation_interface,\n> idx_panconversation_protocol,\n> idx_panconversation_source\n> Primary key: panconversation_pkey\n> Unique keys: unq_panconversation\n> Triggers: RI_ConstraintTrigger_52186648,\n> RI_ConstraintTrigger_52186654,\n> RI_ConstraintTrigger_52186660,\n> RI_ConstraintTrigger_52186666\n> \n> Primary key is assessment\n> \n> Along with the table I was dealing with before, with the index I'd\n> mislaid put back in\n> \n> clearview=# \\d measurement\n> Table \"measurement\"\n> Column | Type | Modifiers\n> ------------+-----------------------+-----------\n> assessment | integer |\n> time | integer |\n> value | character varying(50) |\n> Indexes: idx_measurement_assessment,\n> idx_measurement_time,\n> ind_measurement_ass_time\n> \n> The 'explain analyse' of the query I am running is rather evil.\n> \n> clearview=# explain analyse select source,value\n> clearview-# from measurement, PANConversation\n> clearview-# where PANConversation.assessment =\n> measurement.assessment\n> clearview-# and Interface = 11\n> clearview-# and Time > 1046184261 and Time < 1046335461\n> clearview-# ;\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=1532.83..345460.73 rows=75115 width=23) (actual\n> time=1769.84..66687.11 rows=16094 loops=1)\n> -> Seq Scan on measurement (cost=0.00..336706.07 rows=418859\n> width=15) (actual time=1280.11..59985.47 rows=455788 loops=1)\n> -> Hash (cost=1498.21..1498.21 rows=13848 width=8) (actual\n> time=253.49..253.49 rows=0 loops=1)\n> -> Seq Scan on panconversation (cost=0.00..1498.21 rows=13848\n> width=8) (actual time=15.64..223.18 rows=13475 loops=1)\n> Total runtime: 66694.82 msec\n> \n> EXPLAIN\n> \n> Anybody shed any light on why the indexes I created aren't being used,\n> and I have these nasty sequential scans?\n\nMeasurement is sequentially scaned, because probably \"interface=12\" \nresults in lot of records.\n\nPlease, check how many rows you have\n- all rows in measurement/panconversation,\n- rows in measurement with \"Interface\"=12\n- rows in 
panconversation between your time.\n\nRegards,\nTomasz Myrta\n\n\n", "msg_date": "Wed, 05 Mar 2003 11:04:46 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "\n\nclearview=# select count(*) from measurement;\n count\n----------\n 15302138\n(1 row)\n\nclearview=# select count(*) from panconversation;\n count\n-------\n 77217\n(1 row)\n\nclearview=# select count(*) from panconversation where interface = 11;\n count\n-------\n 13475\n(1 row)\n\nclearview=# select count(*) from measurement where time > 1046184261 and\ntime < 1046335461;\n count\n--------\n 455788\n(1 row)\n\n======================================\nPaul Mckay\nConsultant Partner\nServicing Division\nClearwater-IT\ne:[email protected]\nt:0161 877 6090\nm: 07713 510946\n======================================\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomasz\nMyrta\nSent: 05 March 2003 10:05\nTo: Paul McKay\nCc: [email protected]\nSubject: Re: [PERFORM] Slow query performance on large table\n\nPaul McKay wrote:\n> Hopefully you guys can help me with another query I've got that's\n> running slow.\n> \n> This time it's across two tables I have\n> \n> clearview=# \\d panconversation\n> Table \"panconversation\"\n> Column | Type | Modifiers\n> -------------+---------+-----------\n> assessment | integer | not null\n> interface | integer |\n> source | integer |\n> destination | integer |\n> protocol | integer |\n> Indexes: idx_panconversation_destination,\n> idx_panconversation_interface,\n> idx_panconversation_protocol,\n> idx_panconversation_source\n> Primary key: panconversation_pkey\n> Unique keys: unq_panconversation\n> Triggers: RI_ConstraintTrigger_52186648,\n> RI_ConstraintTrigger_52186654,\n> RI_ConstraintTrigger_52186660,\n> RI_ConstraintTrigger_52186666\n> \n> Primary key is assessment\n> \n> Along with the table I was dealing with before, with the index I'd\n> mislaid put back in\n> \n> clearview=# \\d measurement\n> Table \"measurement\"\n> Column | Type | Modifiers\n> ------------+-----------------------+-----------\n> assessment | integer |\n> time | integer |\n> value | character varying(50) |\n> Indexes: idx_measurement_assessment,\n> idx_measurement_time,\n> ind_measurement_ass_time\n> \n> The 'explain analyse' of the query I am running is rather evil.\n> \n> clearview=# explain analyse select source,value\n> clearview-# from measurement, PANConversation\n> clearview-# where PANConversation.assessment =\n> measurement.assessment\n> clearview-# and Interface = 11\n> clearview-# and Time > 1046184261 and Time < 1046335461\n> clearview-# ;\n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=1532.83..345460.73 rows=75115 width=23) (actual\n> time=1769.84..66687.11 rows=16094 loops=1)\n> -> Seq Scan on measurement (cost=0.00..336706.07 rows=418859\n> width=15) (actual time=1280.11..59985.47 rows=455788 loops=1)\n> -> Hash (cost=1498.21..1498.21 rows=13848 width=8) (actual\n> time=253.49..253.49 rows=0 loops=1)\n> -> Seq Scan on panconversation (cost=0.00..1498.21\nrows=13848\n> width=8) (actual time=15.64..223.18 rows=13475 loops=1)\n> Total runtime: 66694.82 msec\n> \n> EXPLAIN\n> \n> Anybody shed any light on why the indexes I created aren't being used,\n> and I have these nasty sequential scans?\n\nMeasurement is sequentially scaned, because probably \"interface=12\" \nresults in lot of records.\n\nPlease, check how many rows you have\n- all rows in 
measurement/panconversation,\n- rows in measurement with \"Interface\"=12\n- rows in panconversation between your time.\n\nRegards,\nTomasz Myrta\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 5 Mar 2003 10:27:27 -0000", "msg_from": "\"Paul McKay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query performance on large table" }, { "msg_contents": "On Wed, 5 Mar 2003 09:47:51 -0000, \"Paul McKay\"\n<[email protected]> wrote:\n>Hash Join (cost=1532.83..345460.73 rows=75115 width=23) (actual\n>time=1769.84..66687.11 rows=16094 loops=1)\n> -> Seq Scan on measurement (cost=0.00..336706.07 rows=418859\n>width=15) (actual time=1280.11..59985.47 rows=455788 loops=1)\n> -> Hash (cost=1498.21..1498.21 rows=13848 width=8) (actual\n>time=253.49..253.49 rows=0 loops=1)\n> -> Seq Scan on panconversation (cost=0.00..1498.21 rows=13848\n>width=8) (actual time=15.64..223.18 rows=13475 loops=1)\n>Total runtime: 66694.82 msec\n\n|clearview=# select count(*) from measurement;\n| 15302138\n|clearview=# select count(*) from panconversation;\n| 77217\nPaul,\n\nyou seem to have a lot of dead tuples in your tables.\n\n\tVACUUM FULL VERBOSE ANALYZE panconversation;\n\tVACUUM FULL VERBOSE ANALYZE measurement;\n\nThis should cut your query time to ca. one third. If you could\nmigrate to 7.3 and create your tables WITHOUT OIDS, I'd expect a\nfurther speed increase of ~ 15%.\n\nServus\n Manfred\n", "msg_date": "Sat, 08 Mar 2003 11:15:56 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query performance on large table" } ]
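Condensing the fixes that worked in this thread into runnable form (object names as posted above):

CREATE INDEX ind_measurement_ass_time
    ON measurement (assessment, "time");     -- composite index matching the WHERE clause

VACUUM FULL VERBOSE ANALYZE measurement;      -- reclaim the dead tuples
VACUUM FULL VERBOSE ANALYZE panconversation;

As the tests quoted above show, BETWEEN and the explicit >/< pair produce the same plan, so the gains come from the composite index and the vacuum rather than from rewriting the time test.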
[ { "msg_contents": "Mr. Peddermors,\n\n\"We have a postgres backend to our Mail Server product, and encountering \nperformance issues. Simple selects are taking 7-10 seconds.. \nWe have of course applied all the suggested performance settings for Postgres, \n(We are running on Debian Stable/Linux BTW)\nWe moved the database to a standalone server, but still having the problems.\nWith app 100,000 users authenticating pop mail, plus all of the smtp \nverfications, the server is expected to perform snappy queries, else mail \ndelivery/pickup is inordintaely long, or can't occur, and loads snowball..\"\n\nI've cc'd your question to PGSQL-Performance list. Can you give us a few \nexamples of EXPLAIN ANALYZE output for the queries which are running slow? \n(as well as the queries themeselves)? It's possible that you have a \nplatform issue on Debian, but far more likely that this is a garden-variety \nperformance tuning issue.\n\nIf this is a business-critical issue, I suggest that you retain a PostgreSQL \nconsultant, such as PostgreSQL Inc., myself, or Justin Clift.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 4 Mar 2003 09:29:28 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Performance Issue on Mail Server" } ]
[ { "msg_contents": "I know this has been covered on one of the lists in the past, but I'm damned \nif I can find the keywords to locate it.\n\nIf I join two tables with a comparison to a constant on one, why can't the \nplanner see that the comparison applies to both tables:\n\nSELECT a.id FROM a JOIN b ON a.id=b.id WHERE a.id=1;\n\nruns much slower than\n\nSELECT a.id FROM a JOIN b ON a.id=b.id WHERE a.id=1 AND b.id=1;\n\nIt's not a real problem since it's easy to work around, but I was wondering \nwhat the difficulties are for the planner in seeing that query 1 is the same \nas query 2. Note that it doesn't seem related to JOIN forcing the planner's \nhand, the same applies just using WHERE a.id=b.id\n\n-- \n Richard Huxton\n", "msg_date": "Wed, 5 Mar 2003 11:13:14 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Planner matching constants across tables in a join" }, { "msg_contents": "\nRichard Huxton <[email protected]> writes:\n\n> I know this has been covered on one of the lists in the past, but I'm damned \n> if I can find the keywords to locate it.\n> \n> If I join two tables with a comparison to a constant on one, why can't the \n> planner see that the comparison applies to both tables:\n\nIt sure does. Postgres does an impressive job of tracing equality clauses\naround for just this purpose.\n\n> SELECT a.id FROM a JOIN b ON a.id=b.id WHERE a.id=1;\n> \n> runs much slower than\n> \n> SELECT a.id FROM a JOIN b ON a.id=b.id WHERE a.id=1 AND b.id=1;\n\nReally? They produce virtually the same plan for me.\n\nWhy do you think it'll run slower?\nWhat query are you actually finding slow?\n\n-- \ngreg\n\n", "msg_date": "05 Mar 2003 07:42:18 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner matching constants across tables in a join" }, { "msg_contents": "On Wednesday 05 Mar 2003 12:42 pm, Greg Stark wrote:\n> Really? They produce virtually the same plan for me.\n>\n> Why do you think it'll run slower?\n> What query are you actually finding slow?\n\nThe actual query uses three tables, but isn't very complicated. 
Apologies for \nthe wrapping on the explain.\n\nEXPLAIN ANALYSE SELECT a.line_id, a.start_time, a.call_dur, i.cam_id,\ni.prod_id, i.chg_per_min, i.rev_per_min\nFROM campaign_items i, campaign c, activity a\nWHERE\ni.cam_id=c.id AND a.line_id=i.line_id\nAND a.start_time BETWEEN c.cam_from AND c.cam_to\nAND a.line_id='0912345 0004' AND i.line_id='0912345 0004';\n\n \nQUERY PLAN\n----------\n Merge Join (cost=348.01..348.72 rows=1 width=72) (actual time=115.43..116.27 \nrows=21 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".cam_id)\n Join Filter: ((\"outer\".line_id)::text = (\"inner\".line_id)::text)\n -> Sort (cost=245.45..245.75 rows=118 width=40) (actual time=83.98..84.10 \nrows=94 loops=1)\n Sort Key: c.id\n -> Nested Loop (cost=0.00..241.40 rows=118 width=40) (actual \ntime=3.83..83.27 rows=94 loops=1)\n Join Filter: ((\"outer\".start_time >= \n(\"inner\".cam_from)::timestamp without time zone) AND (\"outer\".start_time <= \n(\"inner\".cam_to)::timestamp without time zone))\n -> Seq Scan on activity a (cost=0.00..199.00 rows=11 \nwidth=28) (actual time=3.06..54.14 rows=19 loops=1)\n Filter: ((line_id)::text = '0912345 0004'::text)\n -> Seq Scan on campaign c (cost=0.00..2.00 rows=100 width=12) \n(actual time=0.02..0.84 rows=100 loops=19)\n -> Sort (cost=102.56..102.57 rows=5 width=32) (actual time=31.36..31.39 \nrows=20 loops=1)\n Sort Key: i.cam_id\n -> Seq Scan on campaign_items i (cost=0.00..102.50 rows=5 width=32) \n(actual time=17.16..31.11 rows=6 loops=1)\n Filter: ((line_id)::text = '0912345 0004'::text)\n Total runtime: 117.08 msec\n(15 rows)\n\n\nand this is the plan where I just check the one line_id:\n\n\nEXPLAIN ANALYSE SELECT a.line_id, a.start_time, a.call_dur, i.cam_id, \ni.prod_id, i.chg_per_min, i.rev_per_min\nFROM campaign_items i, campaign c, activity a\nWHERE\ni.cam_id=c.id AND a.line_id=i.line_id\nAND a.start_time BETWEEN c.cam_from AND c.cam_to\nAND i.line_id='0912345 0004';\n \nQUERY PLAN\n---------------------------------------\n Hash Join (cost=2.25..1623.70 rows=6 width=72) (actual time=48.27..974.30 \nrows=21 loops=1)\n Hash Cond: (\"outer\".cam_id = \"inner\".id)\n Join Filter: ((\"outer\".start_time >= (\"inner\".cam_from)::timestamp without \ntime zone) AND (\"outer\".start_time <= (\"inner\".cam_to)::timestamp without \ntime zone))\n -> Nested Loop (cost=0.00..1619.87 rows=53 width=60) (actual \ntime=24.49..969.33 rows=114 loops=1)\n Join Filter: ((\"inner\".line_id)::text = (\"outer\".line_id)::text)\n -> Seq Scan on campaign_items i (cost=0.00..102.50 rows=5 width=32) \n(actual time=15.72..28.52 rows=6 loops=1)\n Filter: ((line_id)::text = '0912345 0004'::text)\n -> Seq Scan on activity a (cost=0.00..174.00 rows=10000 width=28) \n(actual time=0.03..101.95 rows=10000 loops=6)\n -> Hash (cost=2.00..2.00 rows=100 width=12) (actual time=1.54..1.54 \nrows=0 loops=1)\n -> Seq Scan on campaign c (cost=0.00..2.00 rows=100 width=12) \n(actual time=0.06..0.94 rows=100 loops=1)\n Total runtime: 975.13 msec\n(11 rows)\n\nTable campaign has 100 rows, campaign_items 5000, activity 10000. My guess is \nthat the planner starts with \"campaign\" because of the low number of rows, \nbut it still looks like filtering on \"activity\" would help things. 
Indeed, \ntesting a.line_id instead of i.line_id does make a difference.\n\n \nQUERY PLAN\n-------------------\n Hash Join (cost=241.70..457.54 rows=6 width=72) (actual time=161.20..225.68 \nrows=21 loops=1)\n Hash Cond: (\"outer\".cam_id = \"inner\".id)\n Join Filter: ((\"inner\".line_id)::text = (\"outer\".line_id)::text)\n -> Seq Scan on campaign_items i (cost=0.00..90.00 rows=5000 width=32) \n(actual time=0.03..72.00 rows=5000 loops=1)\n -> Hash (cost=241.40..241.40 rows=118 width=40) (actual time=85.46..85.46 \nrows=0 loops=1)\n -> Nested Loop (cost=0.00..241.40 rows=118 width=40) (actual \ntime=3.80..84.66 rows=94 loops=1)\n Join Filter: ((\"outer\".start_time >= \n(\"inner\".cam_from)::timestamp without time zone) AND (\"outer\".start_time <= \n(\"inner\".cam_to)::timestamp without time zone))\n -> Seq Scan on activity a (cost=0.00..199.00 rows=11 \nwidth=28) (actual time=3.03..54.48 rows=19 loops=1)\n Filter: ((line_id)::text = '0912345 0004'::text)\n -> Seq Scan on campaign c (cost=0.00..2.00 rows=100 width=12) \n(actual time=0.03..0.89 rows=100 loops=19)\n Total runtime: 226.51 msec\n(11 rows)\n\n-- \n Richard Huxton\n", "msg_date": "Wed, 5 Mar 2003 14:24:12 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner matching constants across tables in a join" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n\n> Filter: ((line_id)::text = '0912345 0004'::text)\n\nSo I think this means that line_id is being casted to \"text\". Though I'm not\nclear why it would be choosing \"text\" for the constant if line_id wasn't text\nto begin with. \n\nIn any case my plans here look like:\n> Filter: (aa = 'x'::text)\n\nso it looks like there's something extra going on in your plan.\n\nwhat does your table definition look like?\n\n-- \ngreg\n\n", "msg_date": "05 Mar 2003 10:02:17 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner matching constants across tables in a join" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Richard Huxton <[email protected]> writes:\n>> If I join two tables with a comparison to a constant on one, why can't the \n>> planner see that the comparison applies to both tables:\n\n> It sure does. Postgres does an impressive job of tracing equality clauses\n> around for just this purpose.\n\nCVS tip does. Existing releases don't...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Mar 2003 10:08:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner matching constants across tables in a join " }, { "msg_contents": "On Wednesday 05 Mar 2003 3:02 pm, Greg Stark wrote:\n> Richard Huxton <[email protected]> writes:\n> > Filter: ((line_id)::text = '0912345 0004'::text)\n>\n> So I think this means that line_id is being casted to \"text\". 
Though I'm\n> not clear why it would be choosing \"text\" for the constant if line_id\n> wasn't text to begin with.\n\nA domain defined as varchar() actually - which is why it's not using an index, \nbut that's neither here nor there regarding the constant issue.\n\n> In any case my plans here look like:\n> > Filter: (aa = 'x'::text)\n>\n> so it looks like there's something extra going on in your plan.\n>\n> what does your table definition look like?\n\nrms=> \\d campaign\n Table \"rms.campaign\"\n Column | Type | Modifiers\n----------+-----------+-----------\n id | integer | not null\n title | item_name |\n cam_from | date |\n cam_to | date |\n owner | integer |\nIndexes: campaign_pkey primary key btree (id),\n campaign_from_idx btree (cam_from),\n campaign_to_idx btree (cam_to)\n\nrms=> \\d campaign_items\n Table \"rms.campaign_items\"\n Column | Type | Modifiers\n-------------+---------+-----------\n cam_id | integer | not null\n line_id | tel_num | not null\n prod_id | integer | not null\n chg_per_min | integer |\n rev_per_min | integer |\nIndexes: campaign_items_pkey primary key btree (cam_id, line_id, prod_id),\n cam_item_line_idx btree (line_id)\nForeign Key constraints: $1 FOREIGN KEY (cam_id) REFERENCES campaign(id) ON \nUPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (line_id) REFERENCES line(telno) ON \nUPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (prod_id) REFERENCES product(id) ON \nUPDATE NO ACTION ON DELETE NO ACTION\n\nrms=> \\d activity\n Table \"rms.activity\"\n Column | Type | Modifiers\n------------+-----------------------------+-----------\n line_id | tel_num | not null\n start_time | timestamp without time zone | not null\n call_dur | integer |\nIndexes: activity_pkey primary key btree (line_id, start_time),\n activity_start_idx btree (start_time)\nForeign Key constraints: $1 FOREIGN KEY (line_id) REFERENCES line(telno) ON \nUPDATE NO ACTION ON DELETE NO ACTION\n\n\n-- \n Richard Huxton\n", "msg_date": "Wed, 5 Mar 2003 16:12:09 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner matching constants across tables in a join" }, { "msg_contents": "Richard,\n\n> A domain defined as varchar() actually - which is why it's not using\n> an index, \n> but that's neither here nor there regarding the constant issue.\n\nYou might improve your performance overall if you cast the constant to\ntel_num before doing the comparison in the query. Right now, the\nparser is casting the whole column to text instead, because it can't\ntell that the constant you supply is a valid tel_num.\n\n-Josh\n", "msg_date": "Wed, 05 Mar 2003 11:00:23 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner matching constants across tables in a" }, { "msg_contents": "On Wednesday 05 Mar 2003 7:00 pm, Josh Berkus wrote:\n> Richard,\n>\n> > A domain defined as varchar() actually - which is why it's not using\n> > an index,\n> > but that's neither here nor there regarding the constant issue.\n>\n> You might improve your performance overall if you cast the constant to\n> tel_num before doing the comparison in the query. 
Right now, the\n> parser is casting the whole column to text instead, because it can't\n> tell that the constant you supply is a valid tel_num.\n\nThat's what I thought, but...\n\nrms=> EXPLAIN ANALYSE SELECT * FROM line WHERE telno='0912345 0004'::tel_num;\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Seq Scan on line (cost=0.00..20.50 rows=1 width=28) (actual time=0.10..5.28 \nrows=1 loops=1)\n Filter: ((telno)::text = ('0912345 0004'::character varying)::text)\n Total runtime: 5.43 msec\n\nrms=> EXPLAIN ANALYSE SELECT * FROM line WHERE telno='0912345 0004'::varchar;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Index Scan using line_pkey on line (cost=0.00..5.78 rows=1 width=28) (actual \ntime=14.03..14.03 rows=1 loops=1)\n Index Cond: ((telno)::character varying = '0912345 0004'::character \nvarying)\n Total runtime: 14.28 msec\n\nIgnoring the times (fake data on my test box) it seems like there's an issue \nin comparing against DOMAIN defined types. Or maybe it's in the index \ndefinition, although I don't know how to find out the type of an index.\n-- \n Richard Huxton\n", "msg_date": "Wed, 5 Mar 2003 19:25:43 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner matching constants across tables in a" }, { "msg_contents": "On Wednesday 05 Mar 2003 7:00 pm, Josh Berkus wrote:\n> You might improve your performance overall if you cast the constant to\n> tel_num before doing the comparison in the query. \n\nStranger and stranger...\n\nrichardh=# CREATE DOMAIN intdom int4;\nrichardh=# CREATE DOMAIN textdom text;\nrichardh=# CREATE TABLE domtest (a intdom, b textdom);\nrichardh=# CREATE INDEX domtest_a_idx ON domtest (a);\nrichardh=# CREATE INDEX domtest_b_idx ON domtest (b);\nrichardh=# INSERT INTO domtest VALUES (1,'aaa');\nrichardh=# INSERT INTO domtest VALUES (2,'bbb');\nrichardh=# INSERT INTO domtest VALUES (3,'ccc');\n\nrichardh=# EXPLAIN ANALYSE SELECT * FROM domtest WHERE a=1::intdom;\n-------------------------------------------------------------------------------------------------\n Seq Scan on domtest (cost=0.00..22.50 rows=5 width=36) (actual \ntime=0.08..0.11 rows=1 loops=1)\n Filter: ((a)::oid = 1::oid)\n\nrichardh=# EXPLAIN ANALYSE SELECT * FROM domtest WHERE a=1::int4;\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using domtest_a_idx on domtest (cost=0.00..17.07 rows=5 width=36) \n(actual time=0.09..0.11 rows=1 loops=1)\n Index Cond: ((a)::integer = 1)\n\nrichardh=# EXPLAIN ANALYSE SELECT * FROM domtest WHERE b='aaa'::textdom;\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using domtest_b_idx on domtest (cost=0.00..17.07 rows=5 width=36) \n(actual time=0.09..0.11 rows=1 loops=1)\n Index Cond: ((b)::text = 'aaa'::text)\n\nrichardh=# EXPLAIN ANALYSE SELECT * FROM domtest WHERE b='aaa'::text;\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using domtest_b_idx on domtest (cost=0.00..17.07 rows=5 width=36) \n(actual time=0.10..0.12 rows=1 loops=1)\n Index Cond: ((b)::text = 'aaa'::text)\n\nCan't think why we're getting casts to type \"oid\" in the first example - I'd \nhave thought int4 would be the default. 
I'm guessing the text domain always \nworks because that's the default cast.\n\n-- \n Richard Huxton\n", "msg_date": "Wed, 5 Mar 2003 19:31:44 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner matching constants across tables in a" } ]
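A compact recap of the two workarounds that come out of the thread above, reusing the tables and the tel_num domain from Richard's schema; the telephone number is just the sample value used in the thread, and this sketch only describes the 7.3-era behaviour (on releases whose planner propagates the constant across the join by itself, the duplicated predicate is unnecessary):

-- 1) Until the planner does it for you, spell the constant out against
--    both join inputs:
SELECT a.line_id, a.start_time, a.call_dur,
       i.cam_id, i.prod_id, i.chg_per_min, i.rev_per_min
FROM   campaign_items i, campaign c, activity a
WHERE  i.cam_id = c.id
  AND  a.line_id = i.line_id
  AND  a.start_time BETWEEN c.cam_from AND c.cam_to
  AND  a.line_id = '0912345 0004'
  AND  i.line_id = '0912345 0004';

-- 2) When comparing a DOMAIN column (tel_num over varchar) with a
--    constant, cast the constant to the domain's base type so the
--    btree index on the column can be used:
SELECT * FROM line WHERE telno = '0912345 0004'::varchar;

The first statement corresponds to the ~117 ms merge-join plan shown at the top of the thread; the cast in the second is what turned Richard's sequential scan into the index scan on line_pkey.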
[ { "msg_contents": "Tim,\n\n> I'm new to Postgres, and am not even the DBA for the system. I'm just a\n> sysadmin trying to make things run faster. Every month, we copy over a 25\n> million row table from the production server to the reporting server. Total\n> size is something like 40 gigabytes.\n\nAre you doing this through COPY files, or some other means?\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 5 Mar 2003 11:37:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch copying of databases" }, { "msg_contents": "Hi all,\n\nI'm new to Postgres, and am not even the DBA for the system. I'm just a\nsysadmin trying to make things run faster. Every month, we copy over a 25\nmillion row table from the production server to the reporting server. Total\nsize is something like 40 gigabytes.\n\nThe copy in takes close to 24 hours, and I see the disks being hammered by\nhundreds of small writes every second. The system is mostly waiting on I/O.\nIs there any facility in Postgres to force batching of the I/O transactions\nto something more reasonable than 8K?\n\nThanks for any advice,\nTim\n\n", "msg_date": "Wed, 5 Mar 2003 15:26:27 -0500", "msg_from": "\"Tim Mohler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Batch copying of databases" }, { "msg_contents": "On 5 Mar 2003 at 15:26, Tim Mohler wrote:\n\n> Hi all,\n> \n> I'm new to Postgres, and am not even the DBA for the system. I'm just a\n> sysadmin trying to make things run faster. Every month, we copy over a 25\n> million row table from the production server to the reporting server. Total\n> size is something like 40 gigabytes.\n> \n> The copy in takes close to 24 hours, and I see the disks being hammered by\n> hundreds of small writes every second. The system is mostly waiting on I/O.\n> Is there any facility in Postgres to force batching of the I/O transactions\n> to something more reasonable than 8K?\n\nWell, 8K has nothing to with transactions in postgresql.\n\nYou need to make sure at least two things.\n\n1. You are using copy. By default postgresql writes each inserts in it's own \ntransaction which is seriously slow for bulk load. Copy bunches the rwos in a \nsingle transaction and is quite fast.\n\nif you need to preprocess the data, batch something like 1K-10K records in a \nsingle transaction.\n\n2. Postgresql bulk load is not as fast as many of us would like, especially \nwhen compared to oracle. So if you know you are going to bulk load using say \ncopy, don't load the data from a single connection. Split the data file in say \n5-10 parts and start loading all of them simaltaneously. It does speed up the \nthings. At least it certainly saturates the disk bandwidth which single load \ndoes not do many times.\n\nOn a side note, for such a bulk load consider dropping any indexes and foreign \nkey contraints. \n\nHTH\n\n\nBye\n Shridhar\n\n--\nTurnaucka's Law:\tThe attention span of a computer is only as long as its\t\nelectrical cord.\n\n", "msg_date": "Thu, 06 Mar 2003 12:23:42 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch copying of databases" } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\[email protected]\t03/03/05 22:16:56\n\nModified files:\n\t. : configure configure.in \n\tsrc/include : pg_config.h.in \n\tsrc/interfaces/libpq: fe-misc.c \n\nLog message:\n\tUse poll(2) in preference to select(2), if available. This solves\n\tproblems in applications that may have a large number of files open,\n\tsuch that libpq's socket number exceeds the range supported by fd_set.\n\tFrom Chris Brown.\n\n", "msg_date": "Wed, 5 Mar 2003 22:16:56 -0500 (EST)", "msg_from": "[email protected] (Tom Lane)", "msg_from_op": true, "msg_subject": "pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "Has anyone ever thought about adding kqueue (for *BSD) support to Postgres,\ninstead of using select?\n\nLIBRARY\n Standard C Library (libc, -lc)\n\nSYNOPSIS\n #include <sys/types.h>\n #include <sys/event.h>\n #include <sys/time.h>\n\n int\n kqueue(void);\n\n int\n kevent(int kq, const struct kevent *changelist, int nchanges,\n struct kevent *eventlist, int nevents,\n const struct timespec *timeout);\n\n EV_SET(&kev, ident, filter, flags, fflags, data, udata);\n\nDESCRIPTION\n kqueue() provides a generic method of notifying the user when an event\n happens or a condition holds, based on the results of small pieces of\n kernel code termed filters. A kevent is identified by the (ident, fil-\n ter) pair; there may only be one unique kevent per kqueue.\n\n The filter is executed upon the initial registration of a kevent in\norder\n to detect whether a preexisting condition is present, and is also exe-\n cuted whenever an event is passed to the filter for evaluation. If the\n filter determines that the condition should be reported, then the\nkevent\n is placed on the kqueue for the user to retrieve.\n\n The filter is also run when the user attempts to retrieve the kevent\nfrom\n the kqueue. If the filter indicates that the condition that triggered\n the event no longer holds, the kevent is removed from the kqueue and is\n not returned.\n\n\nChris\n\n> CVSROOT: /cvsroot\n> Module name: pgsql-server\n> Changes by: [email protected] 03/03/05 22:16:56\n>\n> Modified files:\n> . : configure configure.in\n> src/include : pg_config.h.in\n> src/interfaces/libpq: fe-misc.c\n>\n> Log message:\n> Use poll(2) in preference to select(2), if available. This solves\n> problems in applications that may have a large number of files open,\n> such that libpq's socket number exceeds the range supported by fd_set.\n> From Chris Brown.\n\n\n", "msg_date": "Thu, 6 Mar 2003 11:30:11 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> Has anyone ever thought about adding kqueue (for *BSD) support to Postgres,\n> instead of using select?\n\nWhy? poll() is standard. kqueue isn't, AFAIK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Mar 2003 22:34:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> > Has anyone ever thought about adding kqueue (for *BSD) support to\nPostgres,\n> > instead of using select?\n>\n> Why? poll() is standard. 
kqueue isn't, AFAIK.\n\nIt's supposed be a whole heap faster - there is no polling involved...\n\nChris\n\n", "msg_date": "Thu, 6 Mar 2003 11:42:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n>>> Has anyone ever thought about adding kqueue (for *BSD) support to\n>>> Postgres, instead of using select?\n>> \n>> Why? poll() is standard. kqueue isn't, AFAIK.\n\n> It's supposed be a whole heap faster - there is no polling involved...\n\nSupposed by whom? Faster than what? And how would it not poll?\n\nThe way libpq uses this call, it's either probing for current status\n(timeout=0) or it's willing to block, possibly indefinitely, until the\ndesired condition arises. It does not sit there in a busy-wait loop.\nI can't see any reason to think that an OS-specific API would give\nany marked difference in performance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Mar 2003 22:47:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> It's supposed be a whole heap faster - there is no polling involved...\n\nI looked into this more. AFAICT, the scenario in which kqueue is\nsaid to be faster involves watching a large number of file\ndescriptors simultaneously. Since libpq is only watching one\ndescriptor, I don't see the benefit of adopting kqueue ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Mar 2003 23:19:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "\nI assume he just assumed poll() actually polls. I doesn't. It is just\nlike select().\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> >>> Has anyone ever thought about adding kqueue (for *BSD) support to\n> >>> Postgres, instead of using select?\n> >> \n> >> Why? poll() is standard. kqueue isn't, AFAIK.\n> \n> > It's supposed be a whole heap faster - there is no polling involved...\n> \n> Supposed by whom? Faster than what? And how would it not poll?\n> \n> The way libpq uses this call, it's either probing for current status\n> (timeout=0) or it's willing to block, possibly indefinitely, until the\n> desired condition arises. It does not sit there in a busy-wait loop.\n> I can't see any reason to think that an OS-specific API would give\n> any marked difference in performance.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 5 Mar 2003 23:33:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." 
}, { "msg_contents": "> >>> Has anyone ever thought about adding kqueue (for *BSD) support to\n> >>> Postgres, instead of using select?\n> >> \n> >> Why? poll() is standard. kqueue isn't, AFAIK.\n> \n> > It's supposed be a whole heap faster - there is no polling involved...\n> \n> Supposed by whom? Faster than what? And how would it not poll?\n> \n> The way libpq uses this call, it's either probing for current status\n> (timeout=0) or it's willing to block, possibly indefinitely, until the\n> desired condition arises. It does not sit there in a busy-wait loop.\n> I can't see any reason to think that an OS-specific API would give\n> any marked difference in performance.\n\nHeh, kqueue is _the_ reason to use FreeBSD.\n\nhttp://www.kegel.com/dkftpbench/Poller_bench.html#results\n\nI've toyed with the idea of adding this because it is monstrously more\nefficient than select()/poll() in basically every way, shape, and\nform.\n\nThat said, in terms of performance perks, I'd think migrating the\nbackend to using mmap() would yield a bigger performance benefit (see\nStevens) to a larger group of people than adding FreeBSD's kqueue\ninterface (something I plan on doing at some point if no one beats me\nto it). mmap() + write() for FreeBSD is a zero-copy socket operation\nand likely is on other platforms. Reducing the number of pages that\nhave to be copied around would be a big win in terms of sending data\nto clients as well as scanning through data. Files are also only\nmmap()'ed in the kernel once with BSD's VM system which could reduce\nthe RAM consumed by backends considerably.\n\nmmap() would also be an interesting way of providing some kind of\natomicity for MVCC (re: WAL, use msync() to have the mapped region hit\nthe disk before the change). I was actually quite surprised when I\ngrep'ed through the code and found that mmap() wasn't in use\n_anywhere_. The TODO seems to be full of messages, but not much in\nthe way of authoritative statements. Is this one of the areas of\nPostgreSQL that just needs to get slowly migrated to use mmap() or are\nthere any gaping reasons why to not use the family of system calls?\n\n-sc\n\n-- \nSean Chittenden", "msg_date": "Thu, 6 Mar 2003 01:41:17 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> I've toyed with the idea of adding this because it is monstrously more\n> efficient than select()/poll() in basically every way, shape, and\n> form.\n\n From what I've looked at, kqueue only wins when you are watching a large\nnumber of file descriptors at the same time; which is an operation done\nnowhere in Postgres. I think the above would be a complete waste of\neffort.\n\n> Is this one of the areas of\n> PostgreSQL that just needs to get slowly migrated to use mmap() or are\n> there any gaping reasons why to not use the family of system calls?\n\nThere has been much speculation on this, and no proof that it actually\nbuys us anything to justify the portability hit. There would be some\nnontrivial problems to solve, such as the mechanics of accessing a\nlarge number of files from a large number of backends without running\nout of virtual memory. Also, is it guaranteed that multiple backends\nmmap'ing the same block will access the very same physical buffer, and\nnot multiple copies? Multiple copies would be fatal. 
See the acrhives\nfor more discussion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Mar 2003 10:25:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "[moving to -performance, please drop -committers from replies]\n\n> > I've toyed with the idea of adding this because it is monstrously more\n> > efficient than select()/poll() in basically every way, shape, and\n> > form.\n> \n> From what I've looked at, kqueue only wins when you are watching a\n> large number of file descriptors at the same time; which is an\n> operation done nowhere in Postgres. I think the above would be a\n> complete waste of effort.\n\nIt scales very well to many thousands of descriptors, but it also\nworks well on small numbers as well. kqueue is about 5x faster than\nselect() or poll() on the low end of number of fd's. As I said\nearlier, I don't think there is _much_ to gain in this regard, but I\ndo think that it would be a speed improvement but only to one OS\nsupported by PostgreSQL. I think that there are bigger speed\nimprovements to be had elsewhere in the code.\n\n> > Is this one of the areas of PostgreSQL that just needs to get\n> > slowly migrated to use mmap() or are there any gaping reasons why\n> > to not use the family of system calls?\n> \n> There has been much speculation on this, and no proof that it\n> actually buys us anything to justify the portability hit.\n\nActually, I think that it wouldn't be that big of a portability hit\nbecause you still would read() and write() as always, but in\nperformance sensitive areas, an #ifdef HAVE_MMAP section would have\nthe appropriate mmap() calls. If the system doesn't have mmap(),\nthere isn't much to loose and we're in the same position we're in now.\n\n> There would be some nontrivial problems to solve, such as the\n> mechanics of accessing a large number of files from a large number\n> of backends without running out of virtual memory. Also, is it\n> guaranteed that multiple backends mmap'ing the same block will\n> access the very same physical buffer, and not multiple copies?\n> Multiple copies would be fatal. See the acrhives for more\n> discussion.\n\nHave read through the archives. Making a call to madvise() will speed\nup access to the pages as it gives hints to the VM about what order\nthe pages are accessed/used. Here are a few bits from the BSD mmap()\nand madvise() man pages:\n\nmmap(2):\n MAP_NOSYNC Causes data dirtied via this VM map to be flushed to\n physical media only when necessary (usually by the\n pager) rather then gratuitously. Typically this pre-\n vents the update daemons from flushing pages dirtied\n through such maps and thus allows efficient sharing of\n memory across unassociated processes using a file-\n backed shared memory map. Without this option any VM\n pages you dirty may be flushed to disk every so often\n (every 30-60 seconds usually) which can create perfor-\n mance problems if you do not need that to occur (such\n as when you are using shared file-backed mmap regions\n for IPC purposes). Note that VM/filesystem coherency\n is maintained whether you use MAP_NOSYNC or not. This\n option is not portable across UNIX platforms (yet),\n though some may implement the same behavior by default.\n\n WARNING! Extending a file with ftruncate(2), thus cre-\n ating a big hole, and then filling the hole by modify-\n ing a shared mmap() can lead to severe file fragmenta-\n tion. 
In order to avoid such fragmentation you should\n always pre-allocate the file's backing store by\n write()ing zero's into the newly extended area prior to\n modifying the area via your mmap(). The fragmentation\n problem is especially sensitive to MAP_NOSYNC pages,\n because pages may be flushed to disk in a totally ran-\n dom order.\n\n The same applies when using MAP_NOSYNC to implement a\n file-based shared memory store. It is recommended that\n you create the backing store by write()ing zero's to\n the backing file rather then ftruncate()ing it. You\n can test file fragmentation by observing the KB/t\n (kilobytes per transfer) results from an ``iostat 1''\n while reading a large file sequentially, e.g. using\n ``dd if=filename of=/dev/null bs=32k''.\n\n The fsync(2) function will flush all dirty data and\n metadata associated with a file, including dirty NOSYNC\n VM data, to physical media. The sync(8) command and\n sync(2) system call generally do not flush dirty NOSYNC\n VM data. The msync(2) system call is obsolete since\n BSD implements a coherent filesystem buffer cache.\n However, it may be used to associate dirty VM pages\n with filesystem buffers and thus cause them to be\n flushed to physical media sooner rather then later.\n\nmadvise(2):\n MADV_NORMAL Tells the system to revert to the default paging behav-\n ior.\n\n MADV_RANDOM Is a hint that pages will be accessed randomly, and\n prefetching is likely not advantageous.\n\n MADV_SEQUENTIAL Causes the VM system to depress the priority of pages\n immediately preceding a given page when it is faulted\n in.\n\nmprotect(2):\n The mprotect() system call changes the specified pages to have protection\n prot. Not all implementations will guarantee protection on a page basis;\n the granularity of protection changes may be as large as an entire\n region. A region is the virtual address space defined by the start and\n end addresses of a struct vm_map_entry.\n\n Currently these protection bits are known, which can be combined, OR'd\n together:\n\n PROT_NONE No permissions at all.\n\n PROT_READ The pages can be read.\n\n PROT_WRITE The pages can be written.\n\n PROT_EXEC The pages can be executed.\n\nmsync(2):\n The msync() system call writes any modified pages back to the filesystem\n and updates the file modification time. If len is 0, all modified pages\n within the region containing addr will be flushed; if len is non-zero,\n only those pages containing addr and len-1 succeeding locations will be\n examined. The flags argument may be specified as follows:\n\n MS_ASYNC Return immediately\n MS_SYNC Perform synchronous writes\n MS_INVALIDATE Invalidate all cached data\n\n\nA few thoughts come to mind:\n\n1) backends could share buffers by mmap()'ing shared regions of data.\n While I haven't seen any numbers to reflect this, I'd wager that\n mmap() is a faster interface than ipc.\n\n2) It looks like while there are various file IO schemes scattered all\n over the place, the bulk of the critical routines that would need\n to be updated are in backend/storage/file/fd.c, more specifically:\n\n *) fileNameOpenFile() would need the appropriate mmap() call made\n to it.\n\n *) FileTruncate() would need some attention to avoid fragmentation.\n\n *) a new \"sync\" GUC would have to be introduced to handle msync\n (affects only pg_fsync() and pg_fdatasync()).\n\n3) There's a bit of code in pgsql/src/backend/storage/smgr that could\n be gutted/removed. Which of those storage types are even used any\n more? 
There's a reference in the code to PostgreSQL 3.0. :)\n\nAnd I think that'd be it. The LRU code could be used if necessary to\nhelp manage the amount of mmap()'ed in the VM at any one time, at the\nvery least that could be a handled by a shm var that various backends\nwould increment/decrement as files are open()'ed/close()'ed.\n\nI didn't spend too long looking at this, but I _think_ that'd cover\n80% of PostgreSQL's disk access needs. The next bit to possibly add\nwould be passing a flag on FileOpen operations that'd act as a hint to\nmadvise() that way the VM could proactively react to PostgreSQL's\nneeds.\n\nI don't have my copy of Steven's handy (it's some 700mi away atm\notherwise I'd cite it), but if Tom or someone else has it handy, look\nup the example re: the performance gain from read()'ing an mmap()'ed\nfile versus a non-mmap()'ed file. The difference is non-trivial and\n_WELL_ worth the time given the speed increase. The same speed\nbenefit held true for writes as well, iirc. It's been a while, but I\nthink it was around page 330. The index has it listed and it's not\nthat hard of an example to find. -sc\n\n-- \nSean Chittenden", "msg_date": "Thu, 6 Mar 2003 16:36:40 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "On Thu, 2003-03-06 at 19:36, Sean Chittenden wrote:\n> I don't have my copy of Steven's handy (it's some 700mi away atm\n> otherwise I'd cite it), but if Tom or someone else has it handy, look\n> up the example re: the performance gain from read()'ing an mmap()'ed\n> file versus a non-mmap()'ed file. The difference is non-trivial and\n> _WELL_ worth the time given the speed increase.\n\nCan anyone confirm this? If so, one easy step we could take in this\ndirection would be adapting COPY FROM to use mmap().\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "06 Mar 2003 19:47:52 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "> > I don't have my copy of Steven's handy (it's some 700mi away atm\n> > otherwise I'd cite it), but if Tom or someone else has it handy, look\n> > up the example re: the performance gain from read()'ing an mmap()'ed\n> > file versus a non-mmap()'ed file. The difference is non-trivial and\n> > _WELL_ worth the time given the speed increase.\n> \n> Can anyone confirm this? If so, one easy step we could take in this\n> direction would be adapting COPY FROM to use mmap().\n\nWeeee! Alright, so I got to have some fun writing out some simple\ntests with mmap() and friends tonight. Are the results interesting?\nAbsolutely! Is this a simple benchmark? Yup. Do I think it\nsimulates PostgreSQL? Eh, not particularly. Does it demonstrate that\nmmap() is a win and something worth implementing? I sure hope so. Is\nthis a test program to demonstrate the ideal use of mmap() in\nPostgreSQL? No. Is it a place to start a factual discussion? 
I hope\nso.\n\nI have here four tests that are conditionalized by cpp.\n\n# The first one uses read() and write() but with the buffer size set\n# to the same size as the file.\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -o test-mmap test-mmap.c\n/usr/bin/time ./test-mmap > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047013002.412516\nTime: 82.88178\n\nCompleted tests\n 82.09 real 2.13 user 68.98 sys\n\n# The second one uses read() and write() with the default buffer size:\n# 65536\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -DDEFAULT_READSIZE=1 -o test-mmap test-mmap.c\n/usr/bin/time ./test-mmap > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is default read size: 65536\nNumber of iterations: 100000\nStart time: 1047013085.16204\nTime: 18.155511\n\nCompleted tests\n 18.16 real 0.90 user 14.79 sys\n# Please note this is significantly faster, but that's expected\n\n# The third test uses mmap() + madvise() + write()\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -DDEFAULT_READSIZE=1 -DDO_MMAP=1 -o test-mmap test-mmap.c\n/usr/bin/time ./test-mmap > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047013103.859818\nTime: 8.4294203644\n\nCompleted tests\n 7.24 real 0.41 user 5.92 sys\n# Faster still, and twice as fast as the normal read() case\n\n# The last test only calls mmap()'s once when the file is opened and\n# only msync()'s, munmap()'s, close()'s the file once at exit.\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -DDEFAULT_READSIZE=1 -DDO_MMAP=1 -DDO_MMAP_ONCE=1 -o test-mmap test-mmap.c\n/usr/bin/time ./test-mmap > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047013111.623712\nTime: 1.174076\n\nCompleted tests\n 1.18 real 0.09 user 0.92 sys\n# Substantially faster\n\n\nObviously this isn't perfect, but reading and writing data is faster\n(specifically moving pages through the VM/OS). Doing partial writes\nfrom mmap()'ed data should be faster along with scanning through\nmmap()'ed portions of - or completely mmap()'ed - files because the\npages are already loaded in the VM. PostgreSQL's LRU file descriptor\ncache could easily be adjusted to add mmap()'ing of frequently\naccessed files (specifically, system catalogs come to mind). It's not\nhard to figure out how often particular files are accessed and to\neither _avoid_ mmap()'ing a file that isn't accessed often, or to\nmmap() files that _are_ accessed often. mmap() does have a cost, but\nI'd wager that mmap()'ing the same file a second or third time from a\ndifferent process would be more efficient. The speedup of searching\nthrough an mmap()'ed file may be worth it, however, to mmap() all\nfiles if the system is under a tunable resource limit\n(max_mmaped_bytes?).\n\nIf someone is so inclined or there's enough interest, I can reverse\nthis test case so that data is written to an mmap()'ed file, but the\nsame performance difference should hold true (assuming this isn't a\nwrite to a tape drive ::grin::).\n\nThe URL for the program used to generate the above tests is at:\n\nhttp://people.freebsd.org/~seanc/mmap_test/\n\n\nPlease ask if you have questions. 
-sc\n\n-- \nSean Chittenden", "msg_date": "Thu, 6 Mar 2003 22:04:12 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> Absolutely! Is this a simple benchmark? Yup. Do I think it\n> simulates PostgreSQL? Eh, not particularly.\n\nThis would be on what OS? What hardware? What size test file?\nDo the \"iterations\" mean so many reads of the entire file, or\nso many buffer-sized read requests? Did the mmap case actually\n*read* anything, or just map and unmap the file?\n\nAlso, what did you do to normalize for the effects of the test file\nbeing already in kernel disk cache after the first test?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2003 09:29:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "> > Absolutely! Is this a simple benchmark? Yup. Do I think it\n> > simulates PostgreSQL? Eh, not particularly.\n\nI think quite a few of these Q's would have been answered by reading\nthe code/Makefile....\n\n> This would be on what OS?\n\nFreeBSD, but it shouldn't matter. Any reasonably written VM should\nhave similar numbers (though BSD is generally regarded as having the\nbest VM, which, I think Linux poached not that long ago, iirc\n::grimace::).\n\n> What hardware?\n\nMy ultra-pathetic laptop with some fine - overly-noisy and can hardly\nbuildworld - IDE drives.\n\n> What size test file?\n\nIn this case, only 72K. I've just updated the test program to use an\narray of files though.\n\n> Do the \"iterations\" mean so many reads of the entire file, or so\n> many buffer-sized read requests?\n\nIn some cases, yes. With the file mmap()'ed, sorta. One of the test\ncases (the one that did it in ~8s), mmap()'ed and munmap()'ed the file\nevery iteration and was twice as fast as the vanilla read() call.\n\n> Did the mmap case actually *read* anything, or just map and unmap\n> the file?\n\nNope, read it and wrote it out to stdout (which was redirected to\n/dev/null).\n\n> Also, what did you do to normalize for the effects of the test file\n> being already in kernel disk cache after the first test?\n\nThat honestly doesn't matter too much since I wasn't testing the rate\nof reading in files from my hard drive, only the OS's ability to\nread/write pages of data around. In any case, I've updated my test\ncase to iterate through an array of files instead of just reading in a\ncopy of /etc/services. My laptop is generally a poor benchmark for\ndisk read performance given it takes 8hrs to buildworld, over 12hrs to\nbuild mozilla, 18 for KDE, and about 48hrs for Open Office. :)\nSomeone with faster disks may want to try this and report back, but it\ndoesn't matter much in terms of relevancy for considering the benefits\nof mmap(). The point is that there are calls that can be used that\nsubstantially speed up read()'s and write()'s by allowing the VM to\nalign pages of data and give hints about its usage. 
For the sake of\nargument re: the previously done tests, I'll reverse the order in\nwhich I ran them and I bet dime to dollar that the times will be\nidentical.\n\n% make ~/open_source/mmap_test\ncp -f /etc/services ./services\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -DDEFAULT_READSIZE=1 -DDO_MMAP=1 -DDO_MMAP_ONCE=1 -o mmap-test mmap-test.c\n/usr/bin/time ./mmap-test > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047064672.276544\nTime: 1.281477\n\nCompleted tests\n 1.29 real 0.10 user 0.92 sys\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -DDEFAULT_READSIZE=1 -DDO_MMAP=1 -o mmap-test mmap-test.c\n/usr/bin/time ./mmap-test > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047064674.266191\nTime: 7.486622\n\nCompleted tests\n 7.49 real 0.41 user 6.01 sys\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -DDEFAULT_READSIZE=1 -o mmap-test mmap-test.c\n/usr/bin/time ./mmap-test > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is default read size: 65536\nNumber of iterations: 100000\nStart time: 1047064682.288637\nTime: 19.35214\n\nCompleted tests\n 19.04 real 0.88 user 15.43 sys\ngcc -O3 -finline-functions -fkeep-inline-functions -funroll-loops -o mmap-test mmap-test.c\n/usr/bin/time ./mmap-test > /dev/null\nBeginning tests with file: services\n\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047064701.867031\nTime: 82.4294540875\n\nCompleted tests\n 81.57 real 2.10 user 69.55 sys\n\n\nHere's the updated test that iterates through. Ooh! One better, the\nfiles I've used are actual data files from ~pgsql. The new benchmark\niterates through the list of files and and calls bench() once for each\nfile and restarts at the first file after reaching the end of its\nlist (ARGV).\n\nWhoa, if these tests are even close to real world, then we at the very\nleast should be mmap()'ing the file every time we read it (assuming\nwe're reading more than just a handful of bytes):\n\nfind /usr/local/pgsql/data -type f | /usr/bin/xargs /usr/bin/time ./mmap-test > /dev/null\nPage size: 4096\nFile read size is the same as the file size\nNumber of iterations: 100000\nStart time: 1047071143.463360\nTime: 12.109530\n\nCompleted tests\n 12.11 real 0.36 user 6.80 sys\n\nfind /usr/local/pgsql/data -type f | /usr/bin/xargs /usr/bin/time ./mmap-test > /dev/null\nPage size: 4096\nFile read size is default read size: 65536\nNumber of iterations: 100000\n.... [been waiting here for >40min now....]\n\n\nAh well, if these tests finish this century, I'll post the results in\na bit, but it's pretty clearly a win. In terms of the data that I'm\ncopying, I'm copying ~700MB of data from my test DB on my laptop. I\nonly have 256MB of RAM so I can pretty much promise you that the data\nisn't in my system buffers. If anyone else would like to run the\ntests or look at the results, please check it out:\n\no1 and o2 should be the only targets used if FILES is bigger than the\nRAM on the system. o3's by far and away the fastest, but only in rare\ncases will a DBA have more RAM than data. 
But, as mentioned earlier,\nthe LRU cache could easily be modified to munmap() infrequently\naccessed files to keep the size of mmap()'ed data down to a reasonable\nlevel.\n\nThe updated test programs are at:\n\nhttp://people.FreeBSD.org/~seanc/mmap_test/\n\n-sc\n\n-- \nSean Chittenden", "msg_date": "Fri, 7 Mar 2003 13:46:30 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "yOn Thu, 6 Mar 2003, Sean Chittenden wrote:\n\n> > >>> Has anyone ever thought about adding kqueue (for *BSD) support to\n> > >>> Postgres, instead of using select?\n> > >>\n> > >> Why? poll() is standard. kqueue isn't, AFAIK.\n> >\n> > > It's supposed be a whole heap faster - there is no polling involved...\n> >\n> > Supposed by whom? Faster than what? And how would it not poll?\n> >\n> > The way libpq uses this call, it's either probing for current status\n> > (timeout=0) or it's willing to block, possibly indefinitely, until the\n> > desired condition arises. It does not sit there in a busy-wait loop.\n> > I can't see any reason to think that an OS-specific API would give\n> > any marked difference in performance.\n>\n> Heh, kqueue is _the_ reason to use FreeBSD.\n>\n> http://www.kegel.com/dkftpbench/Poller_bench.html#results\n>\n> I've toyed with the idea of adding this because it is monstrously more\n> efficient than select()/poll() in basically every way, shape, and\n> form.\n\nI would personally be interested in seeing patches ... what would be\ninvolved?\n\n", "msg_date": "Mon, 10 Mar 2003 22:50:59 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "> > Heh, kqueue is _the_ reason to use FreeBSD.\n> >\n> > http://www.kegel.com/dkftpbench/Poller_bench.html#results\n> >\n> > I've toyed with the idea of adding this because it is monstrously more\n> > efficient than select()/poll() in basically every way, shape, and\n> > form.\n> \n> I would personally be interested in seeing patches ... what would be\n> involved?\n\nWhoa! Surprisingly, much less than I expected!!! A small shim would\nhave to be put in place to abstract away returning valid file\ndescriptors that are ready to be read()/write(). What's really cool,\nis there are only a handful of places that'd have to be updated (as\nfar as I can tell):\n\nsrc/backend/access/transam/xact.c\nsrc/backend/postmaster/pgstat.c\nsrc/backend/postmaster/postmaster.c\nsrc/backend/storage/lmgr/s_lock.c\nsrc/bin/pg_dump/pg_dump.c\nsrc/interfaces/libpq/fe-misc.c\n\nThen it'd be possible to have clients/servers switch between kqueue,\npoll, select, or whatever the new flavor of alerting from available IO\nfd's. I've added it to my personal TODO list of things to work on.\nIf someone beats me to it, cool, it's just something that one day I'll\nget to (hopefully). -sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 10 Mar 2003 20:11:33 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "> > I would personally be interested in seeing patches ... what would be\n> > involved?\n>\n> Whoa! Surprisingly, much less than I expected!!! A small shim would\n> have to be put in place to abstract away returning valid file\n> descriptors that are ready to be read()/write(). 
What's really cool,\n> is there are only a handful of places that'd have to be updated (as\n> far as I can tell):\n\nIt would be nice to have this support there, however Tom was correct in\nsaying it really only applies to network apps that are handling thousands of\nconnections, all really, really fast. Postgres doesn't. I say you'd have\nto do the work, then do the benchmarking to see if it makes a difference.\n\nChris\n\n", "msg_date": "Tue, 11 Mar 2003 12:17:46 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "On Mon, 2003-03-10 at 23:17, Christopher Kings-Lynne wrote:\n> It would be nice to have this support there, however Tom was correct in\n> saying it really only applies to network apps that are handling thousands of\n> connections, all really, really fast. Postgres doesn't. I say you'd have\n> to do the work, then do the benchmarking to see if it makes a difference.\n\n... and if it doesn't make a significant difference, I'd oppose\nincluding it in the mainline source. Performance optimization is one\nthing; performance \"optimization\" that doesn't actually improve\nperformance is another :-)\n\nCheers,\n\nNeil\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "10 Mar 2003 23:42:35 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "> > It would be nice to have this support there, however Tom was correct in\n> > saying it really only applies to network apps that are handling\nthousands of\n> > connections, all really, really fast. Postgres doesn't. I say you'd\nhave\n> > to do the work, then do the benchmarking to see if it makes a\ndifference.\n>\n> ... and if it doesn't make a significant difference, I'd oppose\n> including it in the mainline source. Performance optimization is one\n> thing; performance \"optimization\" that doesn't actually improve\n> performance is another :-)\n\nThat was the unsaid implication... :)\n\nChris\n\n", "msg_date": "Tue, 11 Mar 2003 12:53:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "> > It would be nice to have this support there, however Tom was\n> > correct in saying it really only applies to network apps that are\n> > handling thousands of connections, all really, really fast.\n> > Postgres doesn't. I say you'd have to do the work, then do the\n> > benchmarking to see if it makes a difference.\n> \n> ... and if it doesn't make a significant difference, I'd oppose\n> including it in the mainline source. Performance optimization is one\n> thing; performance \"optimization\" that doesn't actually improve\n> performance is another :-)\n\n::sigh:: Well, I'm not about to argue one way or another on this\nbeyond saying: kqueue is better than select/poll, but there are much\nbigger, much lower, and much easier pieces of fruit to pick off the\noptimization tree given the cost/benefit for the amount of network IO\nPostgreSQL does. That said, what was the performance gain of moving\nfrom select() to poll()? It wasn't the biggest optimization in\nPostgreSQL history, nor the smallest, but it was a step forward. 
-sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 10 Mar 2003 20:56:10 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> That said, what was the performance gain of moving\n> from select() to poll()? It wasn't the biggest optimization in\n> PostgreSQL history, nor the smallest, but it was a step forward. -sc\n\nThat change was not sold as a performance improvement; I doubt that it\nis one. It was sold as not failing when libpq runs inside an\napplication that has thousands of open files (i.e., more than select()\ncan cope with). \"Faster\" is debatable, \"fails\" is not...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Mar 2003 00:06:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ... " }, { "msg_contents": "> > That said, what was the performance gain of moving from select()\n> > to poll()? It wasn't the biggest optimization in PostgreSQL\n> > history, nor the smallest, but it was a step forward. -sc\n> \n> That change was not sold as a performance improvement; I doubt that\n> it is one. It was sold as not failing when libpq runs inside an\n> application that has thousands of open files (i.e., more than\n> select() can cope with). \"Faster\" is debatable, \"fails\" is not...\n\nWell, I've only heard through 2nd hand sources (dillion) the kind of\nhellish conditions that Mark has on his boxen, but \"faster and more\nefficient in the kernel\" is \"faster and more efficient in the kernel\"\nno matter how 'ya slice it and I know that every last bit helps a\nloaded system.\n\nI'm not stating that most people, or even 90% of people, will notice.\nHopefully 100% of the universe runs their boxen under ideal conditions\n(like most databases should, right? ::wink wink, nudge nudge:: For\nthose that don't, however, and get to watch things run in the red with\na load average over 20, the use of kqueue or more efficient system\ncalls is likely very appreciated. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 10 Mar 2003 21:30:33 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ /configure /configure.in rc/incl ..." } ]
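For reference, a minimal sketch (not taken from the PostgreSQL sources) of what a single-descriptor wait looks like under the two interfaces discussed above; the function names are invented, the kqueue(2) variant is BSD-specific, and, as the thread concludes, with only one descriptor to watch the practical difference between them is marginal:

#include <sys/types.h>
#include <sys/event.h>   /* kqueue/kevent, BSD only */
#include <sys/time.h>
#include <poll.h>
#include <unistd.h>

/* poll(2) version: >0 when `sock` is readable, 0 on timeout, -1 on error.
 * A negative timeout blocks indefinitely. */
static int
wait_readable_poll(int sock, int timeout_ms)
{
	struct pollfd pfd;

	pfd.fd = sock;
	pfd.events = POLLIN;
	pfd.revents = 0;
	return poll(&pfd, 1, timeout_ms);
}

/* kqueue(2) version with the same contract.  For a single descriptor the
 * kernel does roughly the same work either way; kqueue pays off when
 * thousands of descriptors are watched at once. */
static int
wait_readable_kqueue(int sock, int timeout_ms)
{
	struct kevent change, event;
	struct timespec ts;
	int kq, n;

	if ((kq = kqueue()) < 0)
		return -1;
	EV_SET(&change, sock, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, NULL);
	ts.tv_sec = timeout_ms / 1000;
	ts.tv_nsec = (long) (timeout_ms % 1000) * 1000000L;
	n = kevent(kq, &change, 1, &event, 1, timeout_ms < 0 ? NULL : &ts);
	close(kq);
	return n;
}

A real shim along the lines Sean describes would presumably keep one kqueue open per connection rather than creating and closing it on every wait; this version trades that away for brevity.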
[ { "msg_contents": "Hi all,\n \nAfter two month we have been migrating from Ms-Sql Server 7 to PosgreSQL\n7.3, we also build new interface to connect from client in Win base to\nPosgresql and it call “PDAdmin”. PDAdmin is a Posgresql tools to help\nDatabase Administrator (DBA) for a make a Trigger, Function, or Rule\nquickly because the User just could input the parameters that important\nonly and then the program will perform frame program automatically and\ncan generate script Trigger/Function/Rule from posgresql database just\nclick in table or schema.\n \nPDAdmin be make by concept and method difference by data tools for same\nPosgreSQL like PgAdmin, because first concept this program to help DBA\nfor beginner or advance (in my team) to make transactional script to be\nuse in PosgreSQL Version 7.3 like Trigger, Function or Rule easily,\nquick and flexible with show capability the editor. \n \nNow, we wishful to share “PDAdmin version 1.0.5” in this milist “FREE”\n \nOther features:\n-. Connection to PosgreSQL server without ODBC\n-. Update condition of trigger with choice checkbox or radio button.\n-. Available Database Explorer\n-. Shortcut to general function PostreSQL\n-. User define shortcut\n-. Block Execute Command\n-. Block Increase/Decrease Indent\n-. Export Trigger/Function/Rule from database to file\n-. Import data from Ms-Sql Server\n-. Freeware, No Limit, No Ads.\n \nRequirements:\n-. Windows 95/98/Me/NT/2000/XP\n-. File Size 1150Kb\n-. Uninstaller Included: Yes\n-. Recommended: PosgreSQL 7.3.x\n \nDownload: \nhttp://www.csahome.com/download/PDAdmin/PDASetup.exe\n \nScreenshot:\nhttp://www.csahome.com/download/PDAdmin/pdadmin1.jpg\n \nRegards,\nFadjar Hamidi\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nAfter two month we have been migrating from Ms-Sql Server 7\nto PosgreSQL 7.3, we also build new interface to connect from client in Win\nbase to Posgresql and it call “PDAdmin”. PDAdmin is a Posgresql\ntools to help Database Administrator (DBA) for a make a Trigger, Function, or\nRule quickly because the User just could input the parameters that important\nonly and then the program will perform frame program automatically and can\ngenerate script Trigger/Function/Rule from posgresql database just click in\ntable or schema.\n \nPDAdmin be make by concept and method difference by data\ntools for same PosgreSQL like PgAdmin, because first concept this program to\nhelp DBA for beginner or advance (in my team) to make transactional script to\nbe use in PosgreSQL Version 7.3 like Trigger, Function or Rule easily, quick\nand flexible with show capability the editor. \n \nNow, we wishful to share “PDAdmin version 1.0.5”\nin this milist “FREE”\n \nOther features:\n-. Connection to PosgreSQL server without ODBC\n-. Update condition of trigger with choice checkbox or radio\nbutton.\n-. Available Database Explorer\n-. Shortcut to general function PostreSQL\n-. User define shortcut\n-. Block Execute Command\n-. Block Increase/Decrease Indent\n-. Export Trigger/Function/Rule from database to file\n-. Import data from Ms-Sql Server\n-. Freeware, No Limit, No Ads.\n \nRequirements:\n-. Windows\n95/98/Me/NT/2000/XP\n-. File Size 1150Kb\n-. Uninstaller Included: Yes\n-. 
Recommended: PosgreSQL 7.3.x\n \nDownload: \nhttp://www.csahome.com/download/PDAdmin/PDASetup.exe\n \nScreenshot:\nhttp://www.csahome.com/download/PDAdmin/pdadmin1.jpg\n \nRegards,\nFadjar Hamidi", "msg_date": "Thu, 6 Mar 2003 17:51:28 +0700", "msg_from": "\"Mr.F\" <[email protected]>", "msg_from_op": true, "msg_subject": "New Interface for Win" }, { "msg_contents": "Can U please pass on the Ip of your server my ISP's DNS do not have your websites entry\n\n\n\n ----- Original Message ----- \n From: Mr.F \n To: [email protected] \n Sent: Thursday, March 06, 2003 4:21 PM\n Subject: [ADMIN] New Interface for Win\n\n\n Hi all,\n\n \n\n After two month we have been migrating from Ms-Sql Server 7 to PosgreSQL 7.3, we also build new interface to connect from client in Win base to Posgresql and it call \"PDAdmin\". PDAdmin is a Posgresql tools to help Database Administrator (DBA) for a make a Trigger, Function, or Rule quickly because the User just could input the parameters that important only and then the program will perform frame program automatically and can generate script Trigger/Function/Rule from posgresql database just click in table or schema.\n\n \n\n PDAdmin be make by concept and method difference by data tools for same PosgreSQL like PgAdmin, because first concept this program to help DBA for beginner or advance (in my team) to make transactional script to be use in PosgreSQL Version 7.3 like Trigger, Function or Rule easily, quick and flexible with show capability the editor. \n\n \n\n Now, we wishful to share \"PDAdmin version 1.0.5\" in this milist \"FREE\"\n\n \n\n Other features:\n\n -. Connection to PosgreSQL server without ODBC\n\n -. Update condition of trigger with choice checkbox or radio button.\n\n -. Available Database Explorer\n\n -. Shortcut to general function PostreSQL\n\n -. User define shortcut\n\n -. Block Execute Command\n\n -. Block Increase/Decrease Indent\n\n -. Export Trigger/Function/Rule from database to file\n\n -. Import data from Ms-Sql Server\n\n -. Freeware, No Limit, No Ads.\n\n \n\n Requirements:\n\n -. Windows 95/98/Me/NT/2000/XP\n\n -. File Size 1150Kb\n\n -. Uninstaller Included: Yes\n\n -. Recommended: PosgreSQL 7.3.x\n\n \n\n Download: \n\n http://www.csahome.com/download/PDAdmin/PDASetup.exe\n\n \n\n Screenshot:\n\n http://www.csahome.com/download/PDAdmin/pdadmin1.jpg\n\n \n\n Regards,\n\n Fadjar Hamidi\n\n\n\n\n\n\n\n\n\n\nCan U please pass on the Ip of your \nserver my ISP's DNS do not have your websites entry\n \n \n \n\n----- Original Message ----- \nFrom:\nMr.F \nTo: [email protected] \nSent: Thursday, March 06, 2003 4:21 \n PM\nSubject: [ADMIN] New Interface for \n Win\n\n\nHi \n all,\n \nAfter two month we have been \n migrating from Ms-Sql Server 7 to PosgreSQL 7.3, we also build new interface \n to connect from client in Win base to Posgresql and it call “PDAdmin”. 
PDAdmin \n is a Posgresql tools to help Database Administrator (DBA) for a make a \n Trigger, Function, or Rule quickly because the User just could input the \n parameters that important only and then the program will perform frame program \n automatically and can generate script Trigger/Function/Rule from posgresql \n database just click in table or schema.\n \nPDAdmin be make by concept and \n method difference by data tools for same PosgreSQL like PgAdmin, because first \n concept this program to help DBA for beginner or advance (in my team) to make \n transactional script to be use in PosgreSQL Version 7.3 like Trigger, Function \n or Rule easily, quick and flexible with show capability the editor. \n \n \nNow, we wishful to share “PDAdmin \n version 1.0.5” in this milist “FREE”\n \nOther \n features:\n-. Connection to PosgreSQL server \n without ODBC\n-. Update condition of trigger \n with choice checkbox or radio button.\n-. Available Database \n Explorer\n-. Shortcut to general function \n PostreSQL\n-. User define \n shortcut\n-. Block Execute \n Command\n-. Block Increase/Decrease \n Indent\n-. Export Trigger/Function/Rule \n from database to file\n-. Import data from Ms-Sql \n Server\n-. Freeware, No Limit, No \n Ads.\n \nRequirements:\n-. \n Windows \n 95/98/Me/NT/2000/XP\n-. File Size \n 1150Kb\n-. Uninstaller Included: \n Yes\n-. Recommended: PosgreSQL \n 7.3.x\n \nDownload: \n \nhttp://www.csahome.com/download/PDAdmin/PDASetup.exe\n \nScreenshot:\nhttp://www.csahome.com/download/PDAdmin/pdadmin1.jpg\n \nRegards,\nFadjar Hamidi", "msg_date": "Thu, 06 Mar 2003 20:18:13 +0530", "msg_from": "Aspire Something <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Interface for Win" }, { "msg_contents": "Hi all PostgreSQL users,\n\nI would recomend PDAdmin. It's a great tool for windows user it has,\n1. Good interface\n2. Support for Creating functions and Trgers and MORE >>>\nWhat it avoid is the dirty interface of PGadmin2\nIt's nearly equivilant to the EMS Postgresql Tool.\n\nGive it a try \n\nV Kashyap\n \n ----- Original Message ----- \n From: Mr.F \n To: [email protected] \n Sent: Thursday, March 06, 2003 4:21 PM\n Subject: [ADMIN] New Interface for Win\n\n\n Hi all,\n\n \n\n After two month we have been migrating from Ms-Sql Server 7 to PosgreSQL 7.3, we also build new interface to connect from client in Win base to Posgresql and it call \"PDAdmin\". PDAdmin is a Posgresql tools to help Database Administrator (DBA) for a make a Trigger, Function, or Rule quickly because the User just could input the parameters that important only and then the program will perform frame program automatically and can generate script Trigger/Function/Rule from posgresql database just click in table or schema.\n\n \n\n PDAdmin be make by concept and method difference by data tools for same PosgreSQL like PgAdmin, because first concept this program to help DBA for beginner or advance (in my team) to make transactional script to be use in PosgreSQL Version 7.3 like Trigger, Function or Rule easily, quick and flexible with show capability the editor. \n\n \n\n Now, we wishful to share \"PDAdmin version 1.0.5\" in this milist \"FREE\"\n\n \n\n Other features:\n\n -. Connection to PosgreSQL server without ODBC\n\n -. Update condition of trigger with choice checkbox or radio button.\n\n -. Available Database Explorer\n\n -. Shortcut to general function PostreSQL\n\n -. User define shortcut\n\n -. Block Execute Command\n\n -. Block Increase/Decrease Indent\n\n -. 
Export Trigger/Function/Rule from database to file\n\n -. Import data from Ms-Sql Server\n\n -. Freeware, No Limit, No Ads.\n\n \n\n Requirements:\n\n -. Windows 95/98/Me/NT/2000/XP\n\n -. File Size 1150Kb\n\n -. Uninstaller Included: Yes\n\n -. Recommended: PosgreSQL 7.3.x\n\n \n\n Download: \n\n http://www.csahome.com/download/PDAdmin/PDASetup.exe\n\n \n\n Screenshot:\n\n http://www.csahome.com/download/PDAdmin/pdadmin1.jpg\n\n \n\n Regards,\n\n Fadjar Hamidi\n\n\n\n\n\n\n\n\n\n\nHi all PostgreSQL users,\n \nI would recomend PDAdmin. It's a great \ntool for windows user it has,\n1. Good interface\n2. Support for Creating functions and \nTrgers and MORE >>>\nWhat it avoid is  the dirty interface of \nPGadmin2\nIt's nearly equivilant to the EMS \nPostgresql Tool.\n \nGive it a try \n \nV Kashyap\n \n\n----- Original Message ----- \nFrom:\nMr.F \nTo: [email protected] \nSent: Thursday, March 06, 2003 4:21 \n PM\nSubject: [ADMIN] New Interface for \n Win\n\n\nHi \n all,\n \nAfter two month we have been \n migrating from Ms-Sql Server 7 to PosgreSQL 7.3, we also build new interface \n to connect from client in Win base to Posgresql and it call “PDAdmin”. PDAdmin \n is a Posgresql tools to help Database Administrator (DBA) for a make a \n Trigger, Function, or Rule quickly because the User just could input the \n parameters that important only and then the program will perform frame program \n automatically and can generate script Trigger/Function/Rule from posgresql \n database just click in table or schema.\n \nPDAdmin be make by concept and \n method difference by data tools for same PosgreSQL like PgAdmin, because first \n concept this program to help DBA for beginner or advance (in my team) to make \n transactional script to be use in PosgreSQL Version 7.3 like Trigger, Function \n or Rule easily, quick and flexible with show capability the editor. \n \n \nNow, we wishful to share “PDAdmin \n version 1.0.5” in this milist “FREE”\n \nOther \n features:\n-. Connection to PosgreSQL server \n without ODBC\n-. Update condition of trigger \n with choice checkbox or radio button.\n-. Available Database \n Explorer\n-. Shortcut to general function \n PostreSQL\n-. User define \n shortcut\n-. Block Execute \n Command\n-. Block Increase/Decrease \n Indent\n-. Export Trigger/Function/Rule \n from database to file\n-. Import data from Ms-Sql \n Server\n-. Freeware, No Limit, No \n Ads.\n \nRequirements:\n-. \n Windows \n 95/98/Me/NT/2000/XP\n-. File Size \n 1150Kb\n-. Uninstaller Included: \n Yes\n-. Recommended: PosgreSQL \n 7.3.x\n \nDownload: \n \nhttp://www.csahome.com/download/PDAdmin/PDASetup.exe\n \nScreenshot:\nhttp://www.csahome.com/download/PDAdmin/pdadmin1.jpg\n \nRegards,\nFadjar Hamidi", "msg_date": "Fri, 07 Mar 2003 11:03:14 +0530", "msg_from": "Aspire Something <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Interface for Win" }, { "msg_contents": "Thanks for your support specially for publishing PDAdmin to the\nposgresql users.\nPlease send me email if you have any idea or advice to make it better.\n \nRegards,\nFadjar Hamidi\n \nAnother URL to download: http://www.geocities.com/fadjarh\n \n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Aspire Something\nSent: 07 Maret 2003 12:33\nTo: php-db; PG Performance; pgsql-novice; Pg Admin\nSubject: Re: [ADMIN] New Interface for Win\n \nHi all PostgreSQL users,\n \nI would recomend PDAdmin. It's a great tool for windows user it has,\n1. Good interface\n2. 
Support for Creating functions and Trgers and MORE >>>\nWhat it avoid is the dirty interface of PGadmin2\nIt's nearly equivilant to the EMS Postgresql Tool.\n \nGive it a try \n \nV Kashyap\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThanks for your support specially for publishing\nPDAdmin to the posgresql users.\nPlease send me email if you have any idea\nor advice to make it better.\n \nRegards,\nFadjar Hamidi\n \nAnother URL to download: http://www.geocities.com/fadjarh\n \n \n-----Original Message-----\nFrom:\[email protected] [mailto:[email protected]] On Behalf Of Aspire Something\nSent: 07 Maret 2003 12:33\nTo: php-db; PG Performance;\npgsql-novice; Pg Admin\nSubject: Re: [ADMIN] New Interface\nfor Win\n \n\nHi all PostgreSQL users,\n\n\n \n\n\nI would recomend PDAdmin.\nIt's a great tool for windows user it has,\n\n\n1. Good interface\n\n\n2. Support for Creating\nfunctions and Trgers and MORE >>>\n\n\nWhat it avoid is  the\ndirty interface of PGadmin2\n\n\nIt's nearly equivilant to\nthe EMS Postgresql Tool.\n\n\n \n\n\nGive it a try \n\n\n \n\n\nV Kashyap", "msg_date": "Fri, 7 Mar 2003 13:47:32 +0700", "msg_from": "\"Mr.F\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New Interface for Win" } ]
[ { "msg_contents": "How can I detect whether a column was changed by an update command \ninside a trigger?\n\ncreate table test(a int, b int, c int, primary key(a))\n\nb and c should be updated inside an update trigger if not modified by \nthe statement itself\n\n1) update test set a=0 -> trigger does its work\n2) update test set a=0, b=1, c=2 -> trigger does nothing\n3) update test set a=0, b=b, c=c -> trigger does nothing, but content of \na and b dont change either although touched\n\nWhat I'm looking for is something like\nIF NOT COLUMN_TOUCHED(b) THEN ...\nFor MSSQL, this would be coded as IF NOT UPDATE(b) ..\n\nAny hints?\n\nAndreas\n\n\n\n", "msg_date": "Thu, 06 Mar 2003 16:00:58 +0100", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": true, "msg_subject": "How to notice column changes in trigger" }, { "msg_contents": "On Thu, 2003-03-06 at 15:00, Andreas Pflug wrote:\n> How can I detect whether a column was changed by an update command \n> inside a trigger?\n> \n> create table test(a int, b int, c int, primary key(a))\n> \n> b and c should be updated inside an update trigger if not modified by \n> the statement itself\n> \n> 1) update test set a=0 -> trigger does its work\n> 2) update test set a=0, b=1, c=2 -> trigger does nothing\n> 3) update test set a=0, b=b, c=c -> trigger does nothing, but content of \n> a and b dont change either although touched\n> \n> What I'm looking for is something like\n> IF NOT COLUMN_TOUCHED(b) THEN ...\n> For MSSQL, this would be coded as IF NOT UPDATE(b) ..\n\n IF NEW.b = OLD.b OR (NEW.b IS NULL AND OLD.b IS NULL) THEN\n -- b has not changed\n ...\n END IF;\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The LORD is my light and my salvation; whom shall I \n fear? the LORD is the strength of my life; of whom \n shall I be afraid?\" Psalms 27:1 \n\n", "msg_date": "06 Mar 2003 16:01:24 +0000", "msg_from": "Oliver Elphick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to notice column changes in trigger" }, { "msg_contents": "Oliver Elphick wrote:\n\n> IF NEW.b = OLD.b OR (NEW.b IS NULL AND OLD.b IS NULL) THEN\n> -- b has not changed\n> ...\n> END IF;\n> \n>\nThis doesn't cover case 3, since UPDATE ... SET b=b will lead to NEW.b=OLD.b\n\n\n", "msg_date": "Thu, 06 Mar 2003 17:09:40 +0100", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to notice column changes in trigger" }, { "msg_contents": "Andreas,\n\n> This doesn't cover case 3, since UPDATE ... SET b=b will lead to\n> NEW.b=OLD.b\n\nWhy do you care about SET b = b?\n\nAnd shouldn't this discussion be on the PGSQL-SQL list?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 6 Mar 2003 09:05:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to notice column changes in trigger" } ]
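A minimal plpgsql sketch of the NEW/OLD comparison suggested in this thread, using the example table from the original question; the derived values written into b and c (here constants) are purely illustrative placeholders. Note that, as discussed above, a plpgsql trigger only sees the OLD and NEW row values, so "UPDATE test SET b = b" cannot be told apart from an UPDATE that never mentions b at all; there is no direct equivalent of MSSQL's UPDATE(column) test.

CREATE TABLE test (a int PRIMARY KEY, b int, c int);

CREATE FUNCTION test_fill_bc() RETURNS trigger AS '
BEGIN
    -- treat b as untouched when old and new values match,
    -- counting two NULLs as a match
    IF NEW.b = OLD.b OR (NEW.b IS NULL AND OLD.b IS NULL) THEN
        NEW.b := 1;   -- placeholder for the real derived value
    END IF;
    IF NEW.c = OLD.c OR (NEW.c IS NULL AND OLD.c IS NULL) THEN
        NEW.c := 2;   -- placeholder for the real derived value
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER test_fill_bc_trig BEFORE UPDATE ON test
    FOR EACH ROW EXECUTE PROCEDURE test_fill_bc();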
[ { "msg_contents": "I have some existing code running in a production environment with \nembedded SELECTs whose WHERE's use ISNULL tests on indexed foreign key \nfields. This is obviously very slow.\n\nMy ISNULL *queries *take anywhere from 6 to 40 seconds. These queries \nare used to generate reports to a device which times out at 20 seconds, \nso half the time these devices don't get their reports, which makes my \ncustomers VERY angry.\n\nI recall seeing an email (I believe on this list) about how to improve \nperformance of ISNULL's with some sort of tweak or trick. However, I \ncan't find that email anywhere, and couldn't find it searching the \nmaillist archives.\n\nSo, until I have the time to code the fixes I need to prevent the use of \nISNULL, does anybody know how I can speed up this existing system?\n\nMan, I wish PG indexed nulls! Is there any plan on adding these in the \nfuture?\n\nThanks for any help you can give!\n\n-- \nMatt Mello\n\n", "msg_date": "Thu, 06 Mar 2003 12:51:11 -0600", "msg_from": "Matt Mello <[email protected]>", "msg_from_op": true, "msg_subject": "ISNULL performance tweaks" }, { "msg_contents": "\nMatt,\n\n> I recall seeing an email (I believe on this list) about how to improve \n> performance of ISNULL's with some sort of tweak or trick. However, I \n> can't find that email anywhere, and couldn't find it searching the \n> maillist archives.\n\nEasy. Create a partial index on NULLs:\n\nCREATE INDEX idx_tablename_nulls ON tablename(columname)\nWHERE columname IS NULL;\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 6 Mar 2003 11:04:33 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISNULL performance tweaks" }, { "msg_contents": "\nMatt,\n\n> Man, I wish PG indexed nulls! Is there any plan on adding these in the \n> future?\n\nBTW, this is a design argument. As far as a lot of SQL-geeks are concerned \n(and I'm one of them) use of NULLs should be minimized or eliminiated \nentirely from well-normalized database designs. In such designs, IS NULL \nqueries are used only for outer joins (where indexes don't matter) or for \ndata integrity maintainence (where query speed doesn't matter). \n\nAs a result, the existing core team doesn't see this issue as a priority. \nWhat fixing it requires is a new programmer who cares enough about it to hack \nit. What would be really nice is the ability to create an index WITH NULLS, \nas follows:\n\nCREATE INDEX idx_tablename_one ON tablename(column_one) WITH NULLS;\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 6 Mar 2003 11:12:07 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ISNULL performance tweaks" } ]
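A short sketch of the partial-index trick described above, applied to the kind of indexed foreign-key column the original poster mentions; the table and column names are invented for illustration. The query has to repeat the same IS NULL condition so the planner can match it against the index predicate.

-- index only the rows where the column is NULL; the index stays tiny even on a big table
CREATE INDEX idx_orders_customer_null ON orders (customer_id)
    WHERE customer_id IS NULL;

-- a report query of this shape can then consider the small partial index
SELECT count(*) FROM orders WHERE customer_id IS NULL;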
[ { "msg_contents": "Hello.\n\nI am curious if there is any planned support for full stored procedure \ncompiling? I've seen that PostgreSQL does not compile the SQL code inside \nplpgsql code. It is merely interpreted when the procedure gets called. This \nis also documented in the main html documentation.\n\nWhat I am wondering specifically is if stored procedure compiling will work \nsimilar to Oracle's stored procedure compilation in the future?\n\nThanks\n", "msg_date": "Fri, 07 Mar 2003 13:21:09 -0800", "msg_from": "Daniel Bruce Lynes <[email protected]>", "msg_from_op": true, "msg_subject": "Stored Procedures and compiling" }, { "msg_contents": "Daniel Bruce Lynes <[email protected]> writes:\n> I am curious if there is any planned support for full stored procedure \n> compiling? I've seen that PostgreSQL does not compile the SQL code inside \n> plpgsql code. It is merely interpreted when the procedure gets called. This\n> is also documented in the main html documentation.\n> What I am wondering specifically is if stored procedure compiling will work \n> similar to Oracle's stored procedure compilation in the future?\n\nWhat exactly do you consider \"compiling\", and why do you think that\nwhatever Oracle does (which you didn't bother to explain) is superior\nto what plpgsql does?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2003 22:13:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stored Procedures and compiling " }, { "msg_contents": "On Friday 07 March 2003 19:13, Tom Lane wrote:\n\n> What exactly do you consider \"compiling\", and why do you think that\n> whatever Oracle does (which you didn't bother to explain) is superior\n> to what plpgsql does?\n\nWhen you run a script to place a stored procedure into Oracle, it checks the \nentire script to ensure that there are no syntax errors in both the \nprocedural code and the SQL code. However, with PostgreSQL, if there are \nerrors in the code, I usually don't find out about it until I reach that \nbranch in the logic upon execution of the stored procedure from client code.\n\nAs I understand it, Oracle also compiles the stored procedure into pcode \n(internally), the first time it is called so that it runs faster. You can \ncompile stored procedures into pcode manually also, and store the pcode in \nthe database, rather than the pl/sql code.\n", "msg_date": "Sat, 08 Mar 2003 10:34:01 -0800", "msg_from": "Daniel Bruce Lynes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stored Procedures and compiling" } ]
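A small illustration of the behaviour discussed in this thread: the SQL statements inside a plpgsql body are only prepared the first time each statement actually executes, so a reference to a missing table is not reported when the function is created, only when that branch is reached at call time. The function and table names here are invented.

CREATE FUNCTION branch_demo(integer) RETURNS integer AS '
DECLARE
    n integer;
BEGIN
    IF $1 > 0 THEN
        -- this table does not exist, yet CREATE FUNCTION succeeds
        SELECT INTO n count(*) FROM missing_table;
        RETURN n;
    END IF;
    RETURN 0;
END;
' LANGUAGE 'plpgsql';

SELECT branch_demo(0);   -- runs fine, the bad branch is never reached
SELECT branch_demo(1);   -- the error about missing_table only shows up here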
[ { "msg_contents": "Hi all,\n \nI've been using pgsql heavily for about 2 years now, and I keep running into\nsome index-related wierdness that's rather puzzling. This is for release\n7.2.1, so if a more recent release has solved these, great! Never the less:\n \nI have a table with about 170,000 rows, each of them a network event. I\nalso have a serial 8 primary key set up, with a corresponding (unique) btree\nindex. The primary key is basically sequential, being incremented\ndynamically at insert time. The problems I've had revolve around selecting\nan individual entry, or trying to figure out the current maximum ID in the\ntable. In both cases, the results are rather counter-intuitive. Example\nbelow, with my comments in bold.\nI've had this problem using functions such as max(), etc. For example:\n\nObvious way, using max():\n\n# explain analyze select max(my_e_id) from my_events;\nAggregate (cost=68132.85..68132.85 rows=1 width=8) (actual\ntime=16103.03..16103.03 rows=1 loops=1)\n -> Seq Scan on my_events (cost=0.00..67699.28 rows=173428 width=8)\n(actual time=0.09..15932.27 rows=173480 loops=1)\nTotal runtime: 16103.11 msec\n\nObtuse way, using ORDER BY DESC/LIMIT\n\n# explain analyze select my_e_id from sn_events ORDER BY my_e_id DESC LIMIT\n1;\nLimit (cost=0.00..1.48 rows=1 width=8) (actual time=36.02..36.03 rows=1\nloops=1)\n -> Index Scan Backward using my_events_pkey on my_events\n(cost=0.00..256931.94 rows=173428 width=8) (actual time=36.02..36.02 rows=2\nloops=\n1)\nTotal runtime: 36.09 msec\n\nIn this case, the obtuse way is faster... 446 times faster, in fact. I'd\nunderstand if this was a corner cases, but this has been the situation with\never PGSQL db I've built.\n\nHere's another example, just trying to pick out a single random entry out of\na 170,000. \nFirst, the simple approach (status quo):<?xml:namespace prefix = o ns =\n\"urn:schemas-microsoft-com:office:office\" />\n\n# explain analyze select * from my_events WHERE my_e_id = 10800000;\n\nSeq Scan on my_events (cost=0.00..68132.85 rows=1 width=771) (actual\ntime=15916.75..16337.31 rows=1 loops=1)\nTotal runtime: 16337.42 msec\n\nPretty darned slow.. (16 secs in fact, ouch). So now lets try our idea with\nlimiting the query by order it in reverse order, and limiting to 1 result\n(even though the limit is unnecessary, but performance is identical without\nit)\n\n# explain analyze select * from my_events WHERE my_e_id = 10800000 ORDER BY\nmy_e_id DESC LIMIT 1;\nLimit (cost=68132.86..68132.86 rows=1 width=771) (actual\ntime=16442.42..16442.43 rows=1 loops=1)\n -> Sort (cost=68132.86..68132.86 rows=1 width=771) (actual\ntime=16442.42..16442.42 rows=1 loops=1)\n -> Seq Scan on my_events (cost=0.00..68132.85 rows=1 width=771)\n(actual time=16009.50..16441.91 rows=1 loops=1)\nTotal runtime: 16442.70 msec\n\nWell, that's not any better... over a few runs, sometimes this was even\nslower that the status quo. Well, at this point there was only one thing\nleft to try... put in a <= in place of =, and see if it made a difference.\n\n# explain analyze select * from my_events WHERE my_e_id <= 10800000 ORDER BY\nmy_e_id DESC LIMIT 1;\nLimit (cost=0.00..5.52 rows=1 width=771) (actual time=474.40..474.42 rows=1\nloops=1)\n -> Index Scan Backward using my_events_pkey on my_events\n(cost=0.00..257365.51 rows=46663 width=771) (actual time=474.39..474.41\nrows=2 loo\nps=1)\nTotal runtime: 474.55 msec\n\nOddly enough, it did... note the \"Index Scan Backward\"... finally! 
So for\nwhatever reason, the DB decides not to use an index scan unless there's a\ngreater or less than comparison operator in conjunction with an ORDER\nBY/LIMIT. Now it takes half a second, instead of 16.\n\n# explain analyze select * from my_events WHERE my_e_id >= 10800000 ORDER BY\nmy_e_id LIMIT 1;\nLimit (cost=0.00..2.03 rows=1 width=771) (actual time=1379.74..1379.76\nrows=1 loops=1)\n -> Index Scan using my_events_pkey on my_events (cost=0.00..257365.51\nrows=126765 width=771) (actual time=1379.73..1379.75 rows=2 loops=1)\nTotal runtime: 1380.10 msec\n\nJust for fun, run it in regular order (front to back, versus back to front,\nlooking for >=). Sure enough, still far better than the scan... 1.4 seconds\nvs 16. So even the worst case index scan is still far better than the\ndefault approach. Note that I tried using \"set enable_seqscan=off\", and it\nSTILL insisted on scanning the table, but even slower this time.\n\nAm I missing something really obvious? Is there a proven way to\nconsistantly encourage it to use indexes for these sorts of (rather obvious)\nqueries?\n\nSeveral runs of the above resulted in some variations in run time, but the\ncorresponding orders of difference performance stayed pretty consistant.\nI'm just confused as to why I have to go through such convoluted methods to\nforce it to use the index when its obviously a FAR more efficient route to\ngo regardless of which order it scans it in (forwards or backwards). Any\nthoughts are appreciated. Thanks!\n\n Lucas.", "msg_date": "Fri, 7 Mar 2003 16:15:42 -0800 ", "msg_from": "Lucas Adamski <[email protected]>", "msg_from_op": true, "msg_subject": "Index / Performance issues" }, { "msg_contents": "Hi Lucas, you are running into two fairly common postgresql tuning issues. 
\n\nWhen you run max(), you are literally asking the database to look at every \nvalue and find the highest one. while 'select max(field) from table' \nseems like a simple one to optimize, how about 'select max(field) from \ntable where id<=800000 and size='m' isn't so obivious anymore. As the \nmax() queries get more complex, the ability to optimize them quickly \ndisappears.\n\nThings get more complex in a multi-user environment, where different folks \ncan see different things. While the limit offset solution seems like a \nhack, it is actually asking the question in a more easily optimized way.\n\nThe second problem you're running into is that postgresql doesn't \nautomatically match int8 to int4, and it assumes ints without '' around \nthem are int4. the easy solution is to enclose your id number inside '' \nmarks, so you have :\n\nselect * from table where 8bitintfield='123456789';\n\nand that will force the planner to convert your number to int8.\n\nOn Fri, 7 Mar 2003, Lucas Adamski wrote:\n\n> Hi all,\n> \n> I've been using pgsql heavily for about 2 years now, and I keep running into\n> some index-related wierdness that's rather puzzling. This is for release\n> 7.2.1, so if a more recent release has solved these, great! Never the less:\n> \n> I have a table with about 170,000 rows, each of them a network event. I\n> also have a serial 8 primary key set up, with a corresponding (unique) btree\n> index. The primary key is basically sequential, being incremented\n> dynamically at insert time. The problems I've had revolve around selecting\n> an individual entry, or trying to figure out the current maximum ID in the\n> table. In both cases, the results are rather counter-intuitive. Example\n> below, with my comments in bold.\n> I've had this problem using functions such as max(), etc. For example:\n> \n> Obvious way, using max():\n> \n> # explain analyze select max(my_e_id) from my_events;\n> Aggregate (cost=68132.85..68132.85 rows=1 width=8) (actual\n> time=16103.03..16103.03 rows=1 loops=1)\n> -> Seq Scan on my_events (cost=0.00..67699.28 rows=173428 width=8)\n> (actual time=0.09..15932.27 rows=173480 loops=1)\n> Total runtime: 16103.11 msec\n> \n> Obtuse way, using ORDER BY DESC/LIMIT\n> \n> # explain analyze select my_e_id from sn_events ORDER BY my_e_id DESC LIMIT\n> 1;\n> Limit (cost=0.00..1.48 rows=1 width=8) (actual time=36.02..36.03 rows=1\n> loops=1)\n> -> Index Scan Backward using my_events_pkey on my_events\n> (cost=0.00..256931.94 rows=173428 width=8) (actual time=36.02..36.02 rows=2\n> loops=\n> 1)\n> Total runtime: 36.09 msec\n> \n> In this case, the obtuse way is faster... 446 times faster, in fact. I'd\n> understand if this was a corner cases, but this has been the situation with\n> ever PGSQL db I've built.\n> \n> Here's another example, just trying to pick out a single random entry out of\n> a 170,000. \n> First, the simple approach (status quo):<?xml:namespace prefix = o ns =\n> \"urn:schemas-microsoft-com:office:office\" />\n> \n> # explain analyze select * from my_events WHERE my_e_id = 10800000;\n> \n> Seq Scan on my_events (cost=0.00..68132.85 rows=1 width=771) (actual\n> time=15916.75..16337.31 rows=1 loops=1)\n> Total runtime: 16337.42 msec\n> \n> Pretty darned slow.. (16 secs in fact, ouch). 
So now lets try our idea with\n> limiting the query by order it in reverse order, and limiting to 1 result\n> (even though the limit is unnecessary, but performance is identical without\n> it)\n> \n> # explain analyze select * from my_events WHERE my_e_id = 10800000 ORDER BY\n> my_e_id DESC LIMIT 1;\n> Limit (cost=68132.86..68132.86 rows=1 width=771) (actual\n> time=16442.42..16442.43 rows=1 loops=1)\n> -> Sort (cost=68132.86..68132.86 rows=1 width=771) (actual\n> time=16442.42..16442.42 rows=1 loops=1)\n> -> Seq Scan on my_events (cost=0.00..68132.85 rows=1 width=771)\n> (actual time=16009.50..16441.91 rows=1 loops=1)\n> Total runtime: 16442.70 msec\n> \n> Well, that's not any better... over a few runs, sometimes this was even\n> slower that the status quo. Well, at this point there was only one thing\n> left to try... put in a <= in place of =, and see if it made a difference.\n> \n> # explain analyze select * from my_events WHERE my_e_id <= 10800000 ORDER BY\n> my_e_id DESC LIMIT 1;\n> Limit (cost=0.00..5.52 rows=1 width=771) (actual time=474.40..474.42 rows=1\n> loops=1)\n> -> Index Scan Backward using my_events_pkey on my_events\n> (cost=0.00..257365.51 rows=46663 width=771) (actual time=474.39..474.41\n> rows=2 loo\n> ps=1)\n> Total runtime: 474.55 msec\n> \n> Oddly enough, it did... note the \"Index Scan Backward\"... finally! So for\n> whatever reason, the DB decides not to use an index scan unless there's a\n> greater or less than comparison operator in conjunction with an ORDER\n> BY/LIMIT. Now it takes half a second, instead of 16.\n> \n> # explain analyze select * from my_events WHERE my_e_id >= 10800000 ORDER BY\n> my_e_id LIMIT 1;\n> Limit (cost=0.00..2.03 rows=1 width=771) (actual time=1379.74..1379.76\n> rows=1 loops=1)\n> -> Index Scan using my_events_pkey on my_events (cost=0.00..257365.51\n> rows=126765 width=771) (actual time=1379.73..1379.75 rows=2 loops=1)\n> Total runtime: 1380.10 msec\n> \n> Just for fun, run it in regular order (front to back, versus back to front,\n> looking for >=). Sure enough, still far better than the scan... 1.4 seconds\n> vs 16. So even the worst case index scan is still far better than the\n> default approach. Note that I tried using \"set enable_seqscan=off\", and it\n> STILL insisted on scanning the table, but even slower this time.\n> \n> Am I missing something really obvious? Is there a proven way to\n> consistantly encourage it to use indexes for these sorts of (rather obvious)\n> queries?\n> \n> Several runs of the above resulted in some variations in run time, but the\n> corresponding orders of difference performance stayed pretty consistant.\n> I'm just confused as to why I have to go through such convoluted methods to\n> force it to use the index when its obviously a FAR more efficient route to\n> go regardless of which order it scans it in (forwards or backwards). Any\n> thoughts are appreciated. 
Thanks!\n> \n> Lucas.\n> \n> \n\n", "msg_date": "Fri, 7 Mar 2003 17:33:26 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index / Performance issues" }, { "msg_contents": "\n\n\"scott.marlowe\" <[email protected]> writes:\n\n> select * from table where 8bitintfield='123456789';\n\nOr:\n\nselect * from table where 8bitintfield=123456789::int8\n\n\nI'm not sure which is aesthetically more pleasing.\n\n-- \ngreg\n\n", "msg_date": "07 Mar 2003 21:39:09 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index / Performance issues" }, { "msg_contents": "On 7 Mar 2003, Greg Stark wrote:\n\n> \n> \n> \"scott.marlowe\" <[email protected]> writes:\n> \n> > select * from table where 8bitintfield='123456789';\n> \n> Or:\n> \n> select * from table where 8bitintfield=123456789::int8\n> \n> \n> I'm not sure which is aesthetically more pleasing.\n\nThe cast is self documenting, so it's probably a better choice for most \nsetups. On the other hand, it's not as likely to be portable.\n\n", "msg_date": "Mon, 10 Mar 2003 09:55:53 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index / Performance issues" }, { "msg_contents": "scott.marlowe wrote:\n> > select * from table where 8bitintfield=123456789::int8\n> > \n> > \n> > I'm not sure which is aesthetically more pleasing.\n> \n> The cast is self documenting, so it's probably a better choice for most \n> setups. On the other hand, it's not as likely to be portable.\n\nMay as well make it as portable as possible, though:\n\nselect * from table where 8bitintfield = CAST(123456789 AS bigint)\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Mon, 10 Mar 2003 19:44:36 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index / Performance issues" } ]
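To summarise the two fixes that come up in this thread, in query form; the table and column names are the ones from the original post. The first query avoids max() by walking the primary-key index backwards, and the remaining ones quote or cast the literal so it is typed int8 (the serial8 column's type) rather than int4, which is what keeps the planner from using the index on a plain "= 10800000" comparison in these releases.

-- fast replacement for SELECT max(my_e_id) FROM my_events
SELECT my_e_id FROM my_events ORDER BY my_e_id DESC LIMIT 1;

-- single-row lookups against the int8 primary key: quote or cast the literal
SELECT * FROM my_events WHERE my_e_id = '10800000';
SELECT * FROM my_events WHERE my_e_id = 10800000::int8;
SELECT * FROM my_events WHERE my_e_id = CAST(10800000 AS bigint);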
[ { "msg_contents": "Hello,\n\n \n\nWhile running benchmarks for my database, I am seeing a large difference in\nthe elapsed time (from stats collected in the logs) and run time (running\nexplain analyze on the query using ./psql <database>) for each of my\nqueries. The database is being ran on a sunfire 880 with 4 750mhz\nprocessors with 8 G RAM running solaris 8\n\n \n\nI am simulating 200 user connections each running 6 select queries on 1\nindexed table with 50,000 records. The elapsed time for the queries average\naround 2.5 seconds while if I run the query using explain analyze while the\ntest is running, the run time is around 300 ms although it takes much longer\n(few seconds) to display the results. If I reduce the number of concurrent\nconnections to 100 then the run time and elapsed time for the queries are\nthe same.\n\n \n\nI have tried numerous configurations in the postgresql.conf file. I have\nset the shared_buffers with values ranging from 75 MB to 4000MB with no\nluck. I have also tried increasing the sort_mem with no luck.\n\n \n\nWhen the test is running, the cpu is well over 50% idle and iostat shows\nthat the processes are not waiting for i/o and disk usage percentage is low.\n\n \n\nAny help would be appreciated.\n\n \n\nThanks.", "msg_date": "Mon, 10 Mar 2003 13:58:26 -0500", "msg_from": "\"Scott Buchan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large difference between elapsed time and run time for queries" }, { "msg_contents": "\"Scott Buchan\" <[email protected]> writes:\n> I am simulating 200 user connections each running 6 select queries on 1\n> indexed table with 50,000 records. The elapsed time for the queries average\n> around 2.5 seconds while if I run the query using explain analyze while the\n> test is running, the run time is around 300 ms although it takes much longer\n> (few seconds) to display the results.\n\nHow many rows are these queries returning? AFAICS the differential must\nbe the cost of transmitting the data to the frontend, which of course\ndoes not happen when you use explain analyze. (I think, but am not\ncompletely sure, that explain analyze also suppresses the CPU effort of\nconverting the data to text form, as would normally be done before\ntransmitting it. 
But given that you don't see a problem at 100\nconnections, that's probably not where the issue lies.)\n\n> The database is being ran on a sunfire 880 with 4 750mhz\n> processors with 8 G RAM running solaris 8\n\nWe have seen some other weird performance problems on Solaris (their\nstandard qsort apparently is very bad, for example). Might be that you\nneed to be looking at kernel behavior, not at Postgres.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Mar 2003 15:07:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large difference between elapsed time and run time for queries " } ]
[ { "msg_contents": "Hi, \n\nI have noted similar issues in the past - and seemed then that most of the overhead bottleneck was due to establishing a new connection in the front end. As soon as I started using connection pooling, with connections made when the app initialises, and then recycled for each request (i.e. the connections never close) then the execution time was far quicker. \nI have also noticed that sparc processor speed, num processors, disk space and memory seems to makes little difference with postgres (for us anyway!) performance - e.g. performance no better with dual sparc 450mhz, 2 scsi disks, 1Gb mem - than on a single processor 400 mhz Netra, 256Mb ram with a single IDE disk!\n\nNikk\n\n-----Original Message-----\nFrom: Scott Buchan [mailto:[email protected]]\nSent: 10 March 2003 18:58\nTo: [email protected]\nSubject: [PERFORM] Large difference between elapsed time and run time for queries\n\n\nHello,\n\nWhile running benchmarks for my database, I am seeing a large difference in the elapsed time (from stats collected in the logs) and run time (running explain analyze on the query using ./psql <database>) for each of my queries. The database is being ran on a sunfire 880 with 4 750mhz processors with 8 G RAM running solaris 8\n\nI am simulating 200 user connections each running 6 select queries on 1 indexed table with 50,000 records. The elapsed time for the queries average around 2.5 seconds while if I run the query using explain analyze while the test is running, the run time is around 300 ms although it takes much longer (few seconds) to display the results. If I reduce the number of concurrent connections to 100 then the run time and elapsed time for the queries are the same.\n\nI have tried numerous configurations in the postgresql.conf file. I have set the shared_buffers with values ranging from 75 MB to 4000MB with no luck. I have also tried increasing the sort_mem with no luck.\n\n\nWhen the test is running, the cpu is well over 50% idle and iostat shows that the processes are not waiting for i/o and disk usage percentage is low.\n\nAny help would be appreciated.\n\nThanks.\n\n\n\n\n\nRE: [PERFORM] Large difference between elapsed time and run time for queries\n\n\nHi, \n\nI have noted similar issues in the past - and seemed then that most of the overhead bottleneck was due to establishing a new connection in the front end.  As soon as I started using connection pooling, with connections made when the app initialises, and then recycled for each request (i.e. the connections never close) then the execution time was far quicker. \nI have also noticed that sparc processor speed, num processors, disk space and memory seems to makes little difference with postgres (for us anyway!) performance - e.g. performance no better with dual sparc 450mhz, 2 scsi disks, 1Gb mem - than on a single processor 400 mhz Netra, 256Mb ram with a single IDE disk!\nNikk\n\n-----Original Message-----\nFrom: Scott Buchan [mailto:[email protected]]\nSent: 10 March 2003 18:58\nTo: [email protected]\nSubject: [PERFORM] Large difference between elapsed time and run time for queries\n\n\nHello,\n\nWhile running benchmarks for my database, I am seeing a large difference in the elapsed time (from stats collected in the logs) and run time (running explain analyze on the query using ./psql <database>) for each of my queries.  
The database is being ran on a sunfire 880 with 4 750mhz processors with 8 G RAM running solaris 8\nI am simulating 200 user connections each running 6 select queries on 1 indexed table with 50,000 records. The elapsed time for the queries average around 2.5 seconds while if I run the query using explain analyze while the test is running, the run time is around 300 ms although it takes much longer (few seconds) to display the results.  If I reduce the number of concurrent connections to 100 then the run time and elapsed time for the queries are the same.\nI have tried numerous configurations in the postgresql.conf file.  I have set the shared_buffers with values ranging from 75 MB to 4000MB with no luck.  I have also tried increasing the sort_mem with no luck.\n\nWhen the test is running, the cpu is well over 50% idle and iostat shows that the processes are not waiting for i/o and disk usage percentage is low.\nAny help would be appreciated.\n\nThanks.", "msg_date": "Tue, 11 Mar 2003 08:46:32 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large difference between elapsed time and run time " }, { "msg_contents": "RE: [PERFORM] Large difference between elapsed time and run time for queriesExcuse me for butting into this conversation but I would LOVE to know exactly how you manage that pooling because I have this same issue. When I run a test selection using psql I get sub-second response time and when I use the online (a separate machine dedicated to http) and do a pg_connect to the database using PHP4 I hit 45-50 second response times. I even tried changing the connection to a persistent connection with pg_pconnect and I get the same thing. I installed the database on the http machine and the responses are much quicker, but still not quite ideal.\n\nMy question is how are you accomplishing the connection pooling?\n\n Jeff \n ----- Original Message ----- \n From: Nikk Anderson \n To: 'Scott Buchan' ; [email protected] \n Sent: Tuesday, March 11, 2003 3:46 AM\n Subject: Re: [PERFORM] Large difference between elapsed time and run time \n\n\n Hi, \n\n I have noted similar issues in the past - and seemed then that most of the overhead bottleneck was due to establishing a new connection in the front end. As soon as I started using connection pooling, with connections made when the app initialises, and then recycled for each request (i.e. the connections never close) then the execution time was far quicker. \n\n I have also noticed that sparc processor speed, num processors, disk space and memory seems to makes little difference with postgres (for us anyway!) performance - e.g. performance no better with dual sparc 450mhz, 2 scsi disks, 1Gb mem - than on a single processor 400 mhz Netra, 256Mb ram with a single IDE disk!\n\n Nikk \n\n -----Original Message----- \n From: Scott Buchan [mailto:[email protected]] \n Sent: 10 March 2003 18:58 \n To: [email protected] \n Subject: [PERFORM] Large difference between elapsed time and run time for queries \n\n\n\n Hello, \n\n While running benchmarks for my database, I am seeing a large difference in the elapsed time (from stats collected in the logs) and run time (running explain analyze on the query using ./psql <database>) for each of my queries. The database is being ran on a sunfire 880 with 4 750mhz processors with 8 G RAM running solaris 8\n\n I am simulating 200 user connections each running 6 select queries on 1 indexed table with 50,000 records. 
The elapsed time for the queries average around 2.5 seconds while if I run the query using explain analyze while the test is running, the run time is around 300 ms although it takes much longer (few seconds) to display the results. If I reduce the number of concurrent connections to 100 then the run time and elapsed time for the queries are the same.\n\n I have tried numerous configurations in the postgresql.conf file. I have set the shared_buffers with values ranging from 75 MB to 4000MB with no luck. I have also tried increasing the sort_mem with no luck.\n\n\n\n When the test is running, the cpu is well over 50% idle and iostat shows that the processes are not waiting for i/o and disk usage percentage is low.\n\n Any help would be appreciated. \n\n Thanks. \n\n\nRE: [PERFORM] Large difference between elapsed time and run time for queries\n\n\n\n\n\nExcuse me for butting into this conversation but I \nwould LOVE to know exactly how you manage that pooling because I have this same \nissue.  When I run a test selection using psql I get sub-second response \ntime and when I use the online (a separate machine dedicated to http) and \ndo a pg_connect to the database using PHP4 I hit 45-50 second response \ntimes.  I even tried changing the connection to a persistent connection \nwith pg_pconnect and I get the same thing.  I installed the database \non the http machine and the responses are much quicker, but still not quite \nideal.\n \nMy question is how are you accomplishing the connection \npooling?\n \n     Jeff \n\n----- Original Message ----- \nFrom:\nNikk Anderson \nTo: 'Scott Buchan' ; [email protected]\n\nSent: Tuesday, March 11, 2003 3:46 \n AM\nSubject: Re: [PERFORM] Large difference \n between elapsed time and run time \n\nHi, \nI have noted similar issues in the past - and seemed then that \n most of the overhead bottleneck was due to establishing a new connection in \n the front end.  As soon as I started using connection pooling, with \n connections made when the app initialises, and then recycled for each request \n (i.e. the connections never close) then the execution time was far quicker. \n \nI have also noticed that sparc processor speed, num \n processors, disk space and memory seems to makes little difference with \n postgres (for us anyway!) performance - e.g. performance no better with dual \n sparc 450mhz, 2 scsi disks, 1Gb mem - than on a single processor 400 mhz \n Netra, 256Mb ram with a single IDE disk!\nNikk \n-----Original Message----- From: Scott \n Buchan [mailto:[email protected]]\nSent: 10 March 2003 18:58 To: \n [email protected] Subject: [PERFORM] \n Large difference between elapsed time and run time for queries \nHello, \nWhile running benchmarks for my database, I am seeing a large \n difference in the elapsed time (from stats collected in the logs) and run time \n (running explain analyze on the query using ./psql <database>) for each \n of my queries.  The database is being ran on a sunfire 880 with 4 750mhz \n processors with 8 G RAM running solaris 8\nI am simulating 200 user connections each running 6 select \n queries on 1 indexed table with 50,000 records. The elapsed time for the \n queries average around 2.5 seconds while if I run the query using explain \n analyze while the test is running, the run time is around 300 ms although it \n takes much longer (few seconds) to display the results.  
If I reduce the \n number of concurrent connections to 100 then the run time and elapsed time for \n the queries are the same.\nI have tried numerous configurations in the postgresql.conf \n file.  I have set the shared_buffers with values ranging from 75 MB to \n 4000MB with no luck.  I have also tried increasing the sort_mem with no \n luck.\nWhen the test is running, the cpu is well over 50% idle and \n iostat shows that the processes are not waiting for i/o and disk usage \n percentage is low.\nAny help would be appreciated. \nThanks.", "msg_date": "Tue, 11 Mar 2003 09:24:20 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large difference between elapsed time and run time " }, { "msg_contents": "On Tue, 11 Mar 2003, Jeffrey D. Brower wrote:\n\n> RE: [PERFORM] Large difference between elapsed time and run time for \n> queriesExcuse me for butting into this conversation but I would LOVE to \n> know exactly how you manage that pooling because I have this same issue. \n> When I run a test selection using psql I get sub-second response time \n> and when I use the online (a separate machine dedicated to http) and do \n> a pg_connect to the database using PHP4 I hit 45-50 second response \n> times. I even tried changing the connection to a persistent connection \n> with pg_pconnect and I get the same thing. I installed the database on \n> the http machine and the responses are much quicker, but still not quite \n> ideal.\n> \n> My question is how are you accomplishing the connection pooling?\n\nIn PHP, you do NOT have the elegant connection pooling that jdbc and \nAOLServer have. It's easy to build an apache/php/postgresql server that \ncollapses under load if you don't know how to configure it to make sure \napache runs out of children before postgresql runs out of resources.\n\nYou have a connection for each apache child, and they are \nper database and per users, so if you connect as frank to db1, then the \nnext page connects as jenny to db2, it can't reuse that connection. The \nsetting in php.ini that says max persistant connects is PER PROCESS, not \ntotal, so if you have that set to 5, and max apache children to 150, you \ncould theoretically wind up with 749 idle connections after a while. Not \ngood.\n\nIf your machine is taking more than a few milliseconds to connect to \npostgresql, something is very wrong with it. It could be you're running \nout of memory and having a swap storm, or that postgresql front ends are \ncrashing, or any other problem. What does top or free show when you are \nconnecting? i.e. how much memory is used by swap, how much is cache, how \nmuch is shared, all that jazz.\n\n", "msg_date": "Tue, 11 Mar 2003 13:50:30 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large difference between elapsed time and run time " } ]
[ { "msg_contents": "Hi, \n\n-----Original Message-----\nFrom: Jeffrey D. Brower [mailto:[email protected]]\nSent: 11 March 2003 14:24\nTo: Nikk Anderson; 'Scott Buchan'; [email protected]\nSubject: Re: [PERFORM] Large difference between elapsed time and run time \n\n>My question is how are you accomplishing the connection pooling?\n\n\nI have programmed a connection pool in Java - I am sure something similar is possible in most other languages. \n\nVery basically, the concept is as follows:\n\n> Application initialisation\n\t>>> 1) Create X number of connections to the database\n\t>>> 2) Store connections in an Object\n\t>>> 3) Create an array of free and busy connections - put all new connections in free connection array\n\t>>> 4) Object is visible to all components of web application\n\n> Request for a connection\n\t>>> 4) Code asks for a connection from the pool object (3). \n\t>>> 5) Pool object moves connection from free array, to the busy array.\n\t>>> 5) Connection is used to do queries\n\t>>> 6) Connection is sent back to pool object (3). \n\t>>> 7) Pool object moves the connection from the busy array, back to the free array\n\n\nI hope that helps!\n\nNikk\n\n\n\n\n\nRE: [PERFORM] Large difference between elapsed time and run time \n\n\nHi, \n\n-----Original Message-----\nFrom: Jeffrey D. Brower [mailto:[email protected]]\nSent: 11 March 2003 14:24\nTo: Nikk Anderson; 'Scott Buchan'; [email protected]\nSubject: Re: [PERFORM] Large difference between elapsed time and run time \n\n>My question is how are you accomplishing the connection pooling?\n\n\nI have programmed a connection pool in Java - I am sure something similar is possible in most other languages. \n\nVery basically, the concept is as follows:\n\n> Application initialisation\n        >>> 1) Create X number of connections to the database\n        >>> 2) Store connections in an Object\n        >>> 3) Create an array of free and busy connections - put all new connections in free connection array\n        >>> 4) Object is visible to all components of web application\n\n> Request for a connection\n        >>> 4) Code asks for a connection from the pool object (3).  \n        >>> 5) Pool object moves connection from free array, to the busy array.\n        >>> 5) Connection is used to do queries\n        >>> 6) Connection is sent back to pool object (3).  \n        >>> 7) Pool object moves the connection from the busy array, back to the free array\n\n\nI hope that helps!\n\nNikk", "msg_date": "Tue, 11 Mar 2003 14:30:45 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large difference between elapsed time and run time " }, { "msg_contents": "RE: [PERFORM] Large difference between elapsed time and run timeYes, this helps a lot. Obviously this is far more complicated than just a simple pg_pconnect to accomplish a speedy reply. I really thought that a persistent connection was supposed to eliminate the overhead time in a connection (well, other than the first time through). But even if I am the only person on the machine it still takes forever to get a response every time I use http. I wondered if I was supposed to have a php program that had its own pconnect and every http call to the PostgreSQL database went to that php program rather than handling it by itself, but I found no indication of that while RTFM. I will give this a try and see if I can get the speed to anything reasonable. 
Thanks for the quick reply!\n\n Jeff\n ----- Original Message ----- \n From: Nikk Anderson \n To: 'Jeffrey D. Brower' ; [email protected] \n Sent: Tuesday, March 11, 2003 9:30 AM\n Subject: Re: [PERFORM] Large difference between elapsed time and run time \n\n\n Hi, \n\n -----Original Message----- \n From: Jeffrey D. Brower [mailto:[email protected]] \n Sent: 11 March 2003 14:24 \n To: Nikk Anderson; 'Scott Buchan'; [email protected] \n Subject: Re: [PERFORM] Large difference between elapsed time and run time \n\n >My question is how are you accomplishing the connection pooling? \n\n\n\n I have programmed a connection pool in Java - I am sure something similar is possible in most other languages. \n\n Very basically, the concept is as follows: \n\n > Application initialisation \n >>> 1) Create X number of connections to the database \n >>> 2) Store connections in an Object \n >>> 3) Create an array of free and busy connections - put all new connections in free connection array \n >>> 4) Object is visible to all components of web application \n\n > Request for a connection \n >>> 4) Code asks for a connection from the pool object (3). \n >>> 5) Pool object moves connection from free array, to the busy array. \n >>> 5) Connection is used to do queries \n >>> 6) Connection is sent back to pool object (3). \n >>> 7) Pool object moves the connection from the busy array, back to the free array \n\n\n\n I hope that helps! \n\n Nikk \n\n\nRE: [PERFORM] Large difference between elapsed time and run time\n\n\n\n\n\nYes, this helps a lot.  Obviously this is far more \ncomplicated than just a simple pg_pconnect to accomplish a speedy reply.  I \nreally thought that a persistent connection was supposed to eliminate the \noverhead time in a connection (well, other than the first time through).  \nBut even if I am the only person on the machine it still takes forever to get a \nresponse every time I use http.  I wondered if I was supposed to have a php \nprogram that had its own pconnect and every http call to the PostgreSQL \ndatabase went to that php program rather than handling it by itself, but I \nfound no indication of that while RTFM.  I will give this a try and see if \nI can get the speed to anything reasonable.  Thanks for the quick \nreply!\n \n     Jeff\n\n----- Original Message ----- \nFrom:\nNikk Anderson \nTo: 'Jeffrey D. Brower' ; [email protected]\n\nSent: Tuesday, March 11, 2003 9:30 \n AM\nSubject: Re: [PERFORM] Large difference \n between elapsed time and run time \n\nHi, \n-----Original Message----- From: \n Jeffrey D. Brower [mailto:[email protected]]\nSent: 11 March 2003 14:24 To: Nikk \n Anderson; 'Scott Buchan'; [email protected]\nSubject: Re: [PERFORM] Large difference between elapsed time \n and run time \n>My question is how are you accomplishing the connection \n pooling? \nI have programmed a connection pool in Java - I am sure \n something similar is possible in most other languages. \nVery basically, the concept is as follows: \n> Application initialisation\n        >>> 1) \n Create X number of connections to the database\n        >>> 2) \n Store connections in an Object\n        >>> 3) \n Create an array of free and busy connections - put all new connections in free \n connection array         >>> 4) Object is visible to all components of web \n application \n> Request for a connection\n        >>> 4) \n Code asks for a connection from the pool object (3).  \n         >>> 5) Pool object moves connection from free array, to the \n busy array.         
>>> 5) Connection is used to do queries\n        >>> 6) \n Connection is sent back to pool object (3).  \n         >>> 7) Pool object moves the connection from the busy array, \n back to the free array \nI hope that helps! \nNikk", "msg_date": "Tue, 11 Mar 2003 09:53:45 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large difference between elapsed time and run time " } ]
[ { "msg_contents": "Thanks for the quick reply.\n\nI just upgraded from 7.2 to 7.3 since 7.3 uses a different qsort\n(BSD-licensed). After running a few tests, I have noticed some performance\ngains.\n\nI think another problem that I was having was due to the way I was\nperforming the tests. I was using the tool \"The Grinder\" to simulate 300\nconnections (through JDBC) to the database each running 6 queries without\nany connection pooling. Once I figure out how to use connection pooling\nwith the Grinder, I will try running the tests again.\n\nDo you know of any other performance issues with using Solaris?\n\nThanks for the help,\n\nScott\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, March 10, 2003 3:08 PM\nTo: Scott Buchan\nCc: [email protected]\nSubject: Re: [PERFORM] Large difference between elapsed time and run time\nfor queries \n\n\"Scott Buchan\" <[email protected]> writes:\n> I am simulating 200 user connections each running 6 select queries on 1\n> indexed table with 50,000 records. The elapsed time for the queries\naverage\n> around 2.5 seconds while if I run the query using explain analyze while\nthe\n> test is running, the run time is around 300 ms although it takes much\nlonger\n> (few seconds) to display the results.\n\nHow many rows are these queries returning? AFAICS the differential must\nbe the cost of transmitting the data to the frontend, which of course\ndoes not happen when you use explain analyze. (I think, but am not\ncompletely sure, that explain analyze also suppresses the CPU effort of\nconverting the data to text form, as would normally be done before\ntransmitting it. But given that you don't see a problem at 100\nconnections, that's probably not where the issue lies.)\n\n> The database is being ran on a sunfire 880 with 4 750mhz\n> processors with 8 G RAM running solaris 8\n\nWe have seen some other weird performance problems on Solaris (their\nstandard qsort apparently is very bad, for example). Might be that you\nneed to be looking at kernel behavior, not at Postgres.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Mar 2003 13:23:21 -0500", "msg_from": "\"Scott Buchan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large difference between elapsed time and run time for queries " } ]
[ { "msg_contents": "Hi everybody.\n\nI am a newbie to Postgresql, trying to migrate an application from MSAccess.\n\nI am quite dissapointed with the problems I am facing with some queries\ncontaining multiple joins. I confess it has been hard for someone that is\nnot a DBA to figure out which are the problems. Just to ilustrate, I have\nsome queries that provide a reasonable query plan (at least from my point of\nview), but that return no result: keep running on and on.\n\nMy system description is: Postgresql 7.1.3, Linux RedHat 7.1 (all patches\napplied), 160Mb RAM.\n\nIs the performance of the mentioned Postgresql version much slower than the\n7.3.1?\n\nAll advice will be more than appeciated.\n\nRegards.\n\nFabio\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.461 / Virus Database: 260 - Release Date: 10/03/2003\n\n", "msg_date": "Wed, 12 Mar 2003 17:47:23 -0300", "msg_from": "=?iso-8859-1?Q?Enix_Empreendimentos_e_Constru=E7=F5es_Ltda.?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql performance" }, { "msg_contents": "There are a lot of things that can be done to speed up your queries.\nEspecially if you are using the kludge that Access puts out. Post some of\nthem and we can help.\n\nAs far as the difference between 7.1.3 and 7.3.2 there are a lot of\noptimizations as well as bug fixes, its always a good idea to upgrade.\n\nHTH\nChad\n----- Original Message -----\nFrom: \"Enix Empreendimentos e Constru��es Ltda.\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, March 12, 2003 1:47 PM\nSubject: [PERFORM] Postgresql performance\n\n\n> Hi everybody.\n>\n> I am a newbie to Postgresql, trying to migrate an application from\nMSAccess.\n>\n> I am quite dissapointed with the problems I am facing with some queries\n> containing multiple joins. I confess it has been hard for someone that is\n> not a DBA to figure out which are the problems. 
Just to illustrate, I have\n> some queries that provide a reasonable query plan (at least from my point\nof\n> view), but that return no result: they keep running on and on.\n>\n> My system description is: Postgresql 7.1.3, Linux RedHat 7.1 (all patches\n> applied), 160Mb RAM.\n>\n> Is the performance of the mentioned Postgresql version much slower than\nthe\n> 7.3.1?\n>\n> All advice will be more than appreciated.\n>\n> Regards.\n>\n> Fabio\n>\n>\n> ---\n> Outgoing mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.461 / Virus Database: 260 - Release Date: 10/03/2003\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n", "msg_date": "Wed, 12 Mar 2003 14:38:16 -0700", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance" }, { "msg_contents": "> Is the performance of the mentioned Postgresql version much slower than the\n> 7.3.1?\n\nSomewhat, but not significantly.\n\nStandard questions:\n\nHave you run VACUUM?\nHave you run ANALYZE?\nWhat does EXPLAIN ANALYZE <query> output for the slow queries?\n\nIf performance is still poor after the first 2, send the results of\nEXPLAIN here and we'll tell you which index you're missing ;)\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "12 Mar 2003 16:45:52 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance" }, { "msg_contents": "Enix Empreendimentos e Construções Ltda. wrote on Wed, 12.03.2003 at\n22:47:\n> Hi everybody.\n> \n> I am a newbie to Postgresql, trying to migrate an application from MSAccess.\n> \n> I am quite disappointed with the problems I am facing with some queries\n> containing multiple joins.\n\nPostgres currently does *not* optimize join order for explicit joins\n(this is currently left as a kludge for users to hand-optimize query\nplans).\n\nTo get the benefits from the optimiser you have to rewrite\n\nFROM TA JOIN TB ON TA.CB=TB.CB\n\nto \n\nFROM TA, TB\nWHERE TA.CB=TB.CB\n\n> I confess it has been hard for someone that is\n> not a DBA to figure out which are the problems. Just to illustrate, I have\n> some queries that provide a reasonable query plan (at least from my point of\n> view), but that return no result: they keep running on and on.\n\nCould you try to explain it in other words (or give an example)? I am\nnot a native English speaker and I can read your text in at least 5\ndifferent ways ;(\n\n> Is the performance of the mentioned Postgresql version much slower than the\n> 7.3.1?\n\nIt can be slower. It may also be a little faster in some very specific\ncases ;)\n\n--------------\nHannu\n\n", "msg_date": "14 Mar 2003 00:26:20 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance" } ]
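
To make Hannu's rewrite concrete for the multi-join case — a sketch only, with made-up table and column names — the same three-way join can be written both ways. On the versions discussed here (7.1–7.3), the first form fixes the join order to exactly what was written, while the second leaves the planner free to choose it:

-- Explicit JOIN syntax: joined in exactly the order written.
SELECT *
FROM ta
  JOIN tb ON ta.cb = tb.cb
  JOIN tc ON tb.cc = tc.cc
WHERE tc.flag = 't';

-- Comma-separated FROM list: the optimizer may reorder the joins.
SELECT *
FROM ta, tb, tc
WHERE ta.cb = tb.cb
  AND tb.cc = tc.cc
  AND tc.flag = 't';
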
[ { "msg_contents": "I'm looking for a general method to \nspeed up DISTINCT and COUNT queries. \n\n\nmydatabase=> EXPLAIN ANALYZE select distinct(mac) from node;\nNOTICE: QUERY PLAN:\n\nUnique (cost=110425.67..110514.57 rows=3556 width=6) (actual\ntime=45289.78..45598.62 rows=25334 loops=1)\n -> Sort (cost=110425.67..110425.67 rows=35561 width=6) (actual\n time=45289.77..45411.53 rows=34597 loops=1)\n -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n\n Total runtime: 45673.19 msec\n ouch. \n\nI run VACCUUM ANALYZE once a day. \n\nThanks,\nmax\n", "msg_date": "Wed, 12 Mar 2003 14:38:11 -0800", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Wed, 2003-03-12 at 17:38, Max Baker wrote:\n> I'm looking for a general method to \n> speed up DISTINCT and COUNT queries. \n> \n> \n> mydatabase=> EXPLAIN ANALYZE select distinct(mac) from node;\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=110425.67..110514.57 rows=3556 width=6) (actual\n> time=45289.78..45598.62 rows=25334 loops=1)\n> -> Sort (cost=110425.67..110425.67 rows=35561 width=6) (actual\n> time=45289.77..45411.53 rows=34597 loops=1)\n> -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n> width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n> \n> Total runtime: 45673.19 msec\n> ouch. \n> \n> I run VACCUUM ANALYZE once a day. \n\nThats not going to do anything for that query, as there only is one\npossible plan at the moment :)\n\nI don't think you can do much about that query, other than buy a faster\nharddisk or more ram. Nearly all the time seems to be used pulling the\ndata off the disk (in the Seq Scan).\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "12 Mar 2003 17:52:19 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "Ive found that group by works faster than distinct.\n\nTry\nEXPLAIN ANALYZE select mac from node group by mac;\n\nHTH\nChad\n\n----- Original Message -----\nFrom: \"Max Baker\" <[email protected]>\nTo: \"PostgreSQL Performance Mailing List\" <[email protected]>\nSent: Wednesday, March 12, 2003 3:38 PM\nSubject: [PERFORM] speeding up COUNT and DISTINCT queries\n\n\n> I'm looking for a general method to\n> speed up DISTINCT and COUNT queries.\n>\n>\n> mydatabase=> EXPLAIN ANALYZE select distinct(mac) from node;\n> NOTICE: QUERY PLAN:\n>\n> Unique (cost=110425.67..110514.57 rows=3556 width=6) (actual\n> time=45289.78..45598.62 rows=25334 loops=1)\n> -> Sort (cost=110425.67..110425.67 rows=35561 width=6) (actual\n> time=45289.77..45411.53 rows=34597 loops=1)\n> -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n> width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n>\n> Total runtime: 45673.19 msec\n> ouch.\n>\n> I run VACCUUM ANALYZE once a day.\n>\n> Thanks,\n> max\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 12 Mar 2003 15:55:09 -0700", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "Do you have an index on mac?\n\n\nMax Baker wrote:\n> \n> I'm 
looking for a general method to\n> speed up DISTINCT and COUNT queries.\n> \n> mydatabase=> EXPLAIN ANALYZE select distinct(mac) from node;\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=110425.67..110514.57 rows=3556 width=6) (actual\n> time=45289.78..45598.62 rows=25334 loops=1)\n> -> Sort (cost=110425.67..110425.67 rows=35561 width=6) (actual\n> time=45289.77..45411.53 rows=34597 loops=1)\n> -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n> width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n> \n> Total runtime: 45673.19 msec\n> ouch.\n> \n> I run VACCUUM ANALYZE once a day.\n> \n> Thanks,\n> max\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Wed, 12 Mar 2003 18:40:36 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Wed, 12 Mar 2003 14:38:11 -0800, Max Baker <[email protected]> wrote:\n> -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n> width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n\n35000 tuples in 100000 pages?\n\n>I run VACCUUM ANALYZE once a day. \n\nTry VACUUM FULL VERBOSE ANALAYZE; this should bring back your table\nto a reasonable size. If the table starts growing again, VACUUM more\noften.\n\nServus\n Manfred\n", "msg_date": "Thu, 13 Mar 2003 00:48:27 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Wed, Mar 12, 2003 at 03:55:09PM -0700, Chad Thompson wrote:\n> Ive found that group by works faster than distinct.\n> \n> Try\n> EXPLAIN ANALYZE select mac from node group by mac;\n\nThis was about 25% faster, thanks!\n\nThat will work for distinct() only calls, but I still am looking for a\nway to speed up the count() command. Maybe an internal counter of rows,\nand triggers?\n\n-m\n", "msg_date": "Wed, 12 Mar 2003 17:00:19 -0800", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Thu, Mar 13, 2003 at 12:48:27AM +0100, Manfred Koizar wrote:\n> On Wed, 12 Mar 2003 14:38:11 -0800, Max Baker <[email protected]> wrote:\n> > -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n> > width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n> \n> 35000 tuples in 100000 pages?\n> \n> >I run VACCUUM ANALYZE once a day. \n> \n> Try VACUUM FULL VERBOSE ANALAYZE; this should bring back your table\n> to a reasonable size. If the table starts growing again, VACUUM more\n> often.\n\nManfred,\n\nThanks for the help. I guess i'm not clear on why there is so much\nextra cruft. Does postgres leave a little bit behind every time it does\nan update? Because this table is updated constantly.\n\nCheck out the results, 1.5 seconds compared to 46 seconds :\n\nmydb=> vacuum full verbose analyze node;\nNOTICE: --Relation node--\nNOTICE: Pages 107589: Changed 0, reaped 107588, Empty 0, New 0; Tup 34846: Vac 186847, Keep/VTL 0/0, UnUsed 9450103, MinLen 88, MaxLen 104; Re-using: Free/Avail. Space 837449444/837449368; EndEmpty/Avail. 
Pages 0/107588.\n CPU 15.32s/0.51u sec elapsed 30.89 sec.\nNOTICE: Index node_pkey: Pages 10412; Tuples 34846: Deleted 186847.\n CPU 3.67s/2.48u sec elapsed 77.06 sec.\nNOTICE: Index idx_node_switch_port: Pages 54588; Tuples 34846: Deleted 186847.\n CPU 9.59s/2.42u sec elapsed 273.50 sec.\nNOTICE: Index idx_node_switch: Pages 50069; Tuples 34846: Deleted 186847.\n CPU 8.46s/2.08u sec elapsed 258.62 sec.\nNOTICE: Index idx_node_mac: Pages 6749; Tuples 34846: Deleted 186847.\n CPU 2.19s/1.59u sec elapsed 56.05 sec.\nNOTICE: Index idx_node_switch_port_active: Pages 51138; Tuples 34846: Deleted 186847.\n CPU 8.58s/2.99u sec elapsed 273.03 sec.\nNOTICE: Index idx_node_mac_active: Pages 6526; Tuples 34846: Deleted 186847.\n CPU 1.75s/1.90u sec elapsed 46.70 sec.\n NOTICE: Rel node: Pages: 107589 --> 399; Tuple(s) moved: 34303.\n CPU 83.49s/51.73u sec elapsed 1252.35 sec.\nNOTICE: Index node_pkey: Pages 10412; Tuples 34846: Deleted 34303.\n CPU 3.65s/1.64u sec elapsed 72.99 sec.\nNOTICE: Index idx_node_switch_port: Pages 54650; Tuples 34846: Deleted 34303.\n CPU 10.77s/2.05u sec elapsed 278.46 sec.\nNOTICE: Index idx_node_switch: Pages 50114; Tuples 34846: Deleted 34303.\n CPU 9.95s/1.65u sec elapsed 266.55 sec.\nNOTICE: Index idx_node_mac: Pages 6749; Tuples 34846: Deleted 34303.\n CPU 1.75s/1.13u sec elapsed 52.78 sec.\nNOTICE: Index idx_node_switch_port_active: Pages 51197; Tuples 34846: Deleted 34303.\n CPU 10.48s/1.89u sec elapsed 287.46 sec.\nNOTICE: Index idx_node_mac_active: Pages 6526; Tuples 34846: Deleted 34303.\n CPU 2.16s/0.96u sec elapsed 48.67 sec.\nNOTICE: --Relation pg_toast_64458--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_64458_idx: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing node\nVACUUM\n\nmydb=> EXPLAIN ANALYZE select distinct(mac) from node;\nNOTICE: QUERY PLAN:\n\nUnique (cost=3376.37..3463.48 rows=3485 width=6) (actual time=1049.09..1400.45 rows=25340 loops=1)\n -> Sort (cost=3376.37..3376.37 rows=34846 width=6) (actual time=1049.07..1190.58 rows=34846 loops=1)\n -> Seq Scan on node (cost=0.00..747.46 rows=34846 width=6) (actual time=0.14..221.18 rows=34846 loops=1)\nTotal runtime: 1491.56 msec\n\nEXPLAIN\n\nnow that's results =]\n-m\n", "msg_date": "Wed, 12 Mar 2003 17:55:40 -0800", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "Max Baker wrote:\n> Thanks for the help. I guess i'm not clear on why there is so much\n> extra cruft. Does postgres leave a little bit behind every time it does\n> an update? Because this table is updated constantly.\n> \n\nYes. See:\nhttp://www.us.postgresql.org/users-lounge/docs/7.3/postgres/routine-vacuuming.html\n\nJoe\n\n", "msg_date": "Wed, 12 Mar 2003 17:57:50 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Wed, Mar 12, 2003 at 05:57:50PM -0800, Joe Conway wrote:\n> Max Baker wrote:\n> >Thanks for the help. I guess i'm not clear on why there is so much\n> >extra cruft. Does postgres leave a little bit behind every time it does\n> >an update? Because this table is updated constantly.\n> >\n> \n> Yes. 
See:\n> http://www.us.postgresql.org/users-lounge/docs/7.3/postgres/routine-vacuuming.html\n\nThat would explain why once a night isn't enough. Thanks. \nThe contents of this table get refreshed every 4 hours. I'll add a\nvacuum after every refresh and comapre the results in a couple days.\n\n-m\n", "msg_date": "Wed, 12 Mar 2003 18:05:47 -0800", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "Try setting up a trigger to maintain a separate table containing only the\ndistinct values...\n\nChris\n\n----- Original Message -----\nFrom: \"Max Baker\" <[email protected]>\nTo: \"PostgreSQL Performance Mailing List\" <[email protected]>\nSent: Thursday, March 13, 2003 6:38 AM\nSubject: [PERFORM] speeding up COUNT and DISTINCT queries\n\n\n> I'm looking for a general method to\n> speed up DISTINCT and COUNT queries.\n>\n>\n> mydatabase=> EXPLAIN ANALYZE select distinct(mac) from node;\n> NOTICE: QUERY PLAN:\n>\n> Unique (cost=110425.67..110514.57 rows=3556 width=6) (actual\n> time=45289.78..45598.62 rows=25334 loops=1)\n> -> Sort (cost=110425.67..110425.67 rows=35561 width=6) (actual\n> time=45289.77..45411.53 rows=34597 loops=1)\n> -> Seq Scan on node (cost=0.00..107737.61 rows=35561\n> width=6) (actual time=6.73..44383.57 rows=34597 loops=1)\n>\n> Total runtime: 45673.19 msec\n> ouch.\n>\n> I run VACCUUM ANALYZE once a day.\n>\n> Thanks,\n> max\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Thu, 13 Mar 2003 10:20:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "\nMax Baker <[email protected]> writes:\n\n> On Wed, Mar 12, 2003 at 05:57:50PM -0800, Joe Conway wrote:\n> > Max Baker wrote:\n> > >Thanks for the help. I guess i'm not clear on why there is so much\n> > >extra cruft. Does postgres leave a little bit behind every time it does\n> > >an update? Because this table is updated constantly.\n> > >\n> > \n> > Yes. See:\n> > http://www.us.postgresql.org/users-lounge/docs/7.3/postgres/routine-vacuuming.html\n> \n> That would explain why once a night isn't enough. Thanks. \n> The contents of this table get refreshed every 4 hours. I'll add a\n> vacuum after every refresh and comapre the results in a couple days.\n\nIf it gets completely refreshed, ie, every tuple is updated or deleted and\nre-inserted in a big batch job then VACUUM might never be enough without\nboosting some config values a lot. You might need to do a VACUUM FULL after\nthe refresh. VACUUM FULL locks the table though which might be unfortunate.\n\nVACUUM FULL should be sufficient but you might want to consider instead\nTRUNCATE-ing the table and then reinserting records rather than deleting if\nthat's what you're doing. Or alternatively building the new data in a new\ntable and then doing a switcheroo with ALTER TABLE RENAME. However ALTER TABLE\n(and possible TRUNCATE as well?) will invalidate functions and other objects\nthat refer to the table.\n\nRegarding the original question:\n\n. 7.4 will probably be faster than 7.3 at least if you stick with GROUP BY.\n\n. You could try building an index on mac, but I suspect even then it'll choose\n the sequential scan. 
But try it with an index and enable_seqscan = off to\n see if it's even worth trying to get it to use the index. If so you'll have\n to lower random_page_cost and/or play with cpu_tuple_cost and other\n variables to get it to do so.\n\n. You might also want to cluster the table on that index. You would have to\n recluster it every time you do your refresh and it's not clear how much it\n would help if any. But it might be worth trying.\n\n--\ngreg\n\n", "msg_date": "13 Mar 2003 10:42:55 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Thu, 2003-03-13 at 10:42, Greg Stark wrote:\n> Max Baker <[email protected]> writes:\n> > On Wed, Mar 12, 2003 at 05:57:50PM -0800, Joe Conway wrote:\n> > That would explain why once a night isn't enough. Thanks. \n> > The contents of this table get refreshed every 4 hours. I'll add a\n> > vacuum after every refresh and comapre the results in a couple days.\n> \n> If it gets completely refreshed, ie, every tuple is updated or deleted and\n> re-inserted in a big batch job then VACUUM might never be enough without\n> boosting some config values a lot. You might need to do a VACUUM FULL after\n> the refresh. VACUUM FULL locks the table though which might be unfortunate.\n> \n\nhmm... approx 35,000 records, getting updated every 4 hours. so..\n\n35000 / (4*60) =~ 145 tuples per minute. \n\nLets assume we want to keep any overhead at 10% or less, so we need to\nlazy vacuum every 3500 updates. so...\n\n3500 tuples / 145 tpm =~ 25 minutes. \n\nSo, set up a cron job to lazy vacuum every 20 minutes and see how that\nworks for you.\n\nRobert Treat \n\n\n", "msg_date": "13 Mar 2003 15:05:30 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "On Thu, Mar 13, 2003 at 03:05:30PM -0500, Robert Treat wrote:\n> On Thu, 2003-03-13 at 10:42, Greg Stark wrote:\n> > Max Baker <[email protected]> writes:\n> > > On Wed, Mar 12, 2003 at 05:57:50PM -0800, Joe Conway wrote:\n> > > That would explain why once a night isn't enough. Thanks. \n> > > The contents of this table get refreshed every 4 hours. I'll add a\n> > > vacuum after every refresh and comapre the results in a couple days.\n> > \n> > If it gets completely refreshed, ie, every tuple is updated or deleted and\n> > re-inserted in a big batch job then VACUUM might never be enough without\n> > boosting some config values a lot. You might need to do a VACUUM FULL after\n> > the refresh. VACUUM FULL locks the table though which might be unfortunate.\n\nI'm not starting with fresh data every time, I'm usually checking for\nan existing record, then setting a timestamp and a boolean flag. \n\nI've run some profiling and it's about 8000-10,000 UPDATEs every 4\nhours. These are accompanied by about 800-1000 INSERTs. \n \n> hmm... approx 35,000 records, getting updated every 4 hours. so..\n> \n> 35000 / (4*60) =~ 145 tuples per minute. \n> \n> Lets assume we want to keep any overhead at 10% or less, so we need to\n> lazy vacuum every 3500 updates. so...\n> \n> 3500 tuples / 145 tpm =~ 25 minutes. \n> \n> So, set up a cron job to lazy vacuum every 20 minutes and see how that\n> works for you.\n\nI'm now having VACUUM ANALYZE run after each of these updates. The data\ncomes in in spurts -- a 90 minute batch job that runs every 4 hours. 
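
A sketch of the index test Greg suggests above: the VACUUM output earlier in the thread already shows an idx_node_mac index on node(mac), so the question is mainly whether the planner will use it and at what estimated cost. The statements and settings below are standard, but the session itself is hypothetical:

SET enable_seqscan = off;          -- force the planner to consider the index
EXPLAIN ANALYZE SELECT DISTINCT mac FROM node;
SET enable_seqscan = on;

-- If the index plan turns out cheaper, lowering random_page_cost (and
-- possibly cpu_tuple_cost) in postgresql.conf is the way to get it chosen
-- without the enable_seqscan switch, as Greg notes.

-- Clustering the table on that index keeps the heap ordered by mac; it
-- has to be redone after each batch refresh:
CLUSTER idx_node_mac ON node;
ANALYZE node;
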
\n\n\nthanks folks,\n-m\n", "msg_date": "Thu, 13 Mar 2003 12:22:54 -0800", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" }, { "msg_contents": "Max,\n\n> I'm not starting with fresh data every time, I'm usually checking for\n> an existing record, then setting a timestamp and a boolean flag.\n>\n> I've run some profiling and it's about 8000-10,000 UPDATEs every 4\n> hours. These are accompanied by about 800-1000 INSERTs.\n\nIf these are wide records (i.e. large text fields or lots of columns ) you may \nwant to consider raising your max_fsm_relation in postgresql.conf slightly, \nto about 15,000.\n\nYou can get a better idea of a good FSM setting by running VACUUM FULL VERBOSE \nafter your next batch (this will lock the database temporarily) and seeing \nhow many data pages are \"reclaimed\", in total, by the vacuum. Then set your \nFSM to at least that level.\n\nAnd has anyone mentioned REINDEX on this thread?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 14 Mar 2003 09:10:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speeding up COUNT and DISTINCT queries" } ]
[ { "msg_contents": "Hi all.\n\nAs the topic suggests, I am having fairly critical troubles with\npostgresql on PlanetMath.org (a site which I run). You can go there and\ntry to pull up some entries and you will see the problem: everything is\nincredibly slow.\n\nIt is hard to pinpoint when this began happening, but I've tried a\nvariety of optimizations to fix it, all of which have failed.\n\nFirst: the machine. The machine is not too spectactular, but it is not\nso bad that the performance currently witnessed should be happening. It\nis a dual PIII-650 with 512MB of RAM and a 20gb IDE drive (yes, DMA is\non). There is plenty of free space on the drive.\n\nNow, the optimisations I have tried:\n\n- Using hash indices everywhere. A few months ago, I did this, and\n there was a dramatic and instant speed up. However, this began\n degenerating. I also noticed in the logs that there was deadlock\n happening all over the place. The server response time was\n intolerable so I figured the deadlock might have something to do with\n this, and eliminated all hash indices (replaced with normal BTree\n indices). \n\n- Going back to BTrees yielded a temporary respite, but soon enough the\n server was back to half a minute to pull up an already-cached entry,\n which is of course crazy. \n\n- I then tried increasing the machines shared memory max to 75% of the\n physical memory, and scaled postgresql's buffers accordingly. This\n also sped things up for a while, but again resulted in eventual\n degeneration. Even worse, there were occasional crashes due to\n running out of memory that (according to my calculations) shouldn't\n have been happening.\n\n- Lastly, I tried reducing the shared memory max and limiting postgresql\n to more conservative values, although still not to the out-of-box\n values. Right now shared memory max on the system is 128mb,\n postgres's shared buffers are at 64mb, sort_mem is at 16mb, and\n effective cache size is at 10mb.\n\nFor perspective, the size of the PlanetMath database dump is 24mb. It\nshould be able to fit in memory easily, so I'm not sure what I'm doing\nwrong regarding the caching.\n\nFor the most trivial request, Postgresql takes up basically all the CPU\nfor the duration of the request. The load average of the machine is\nover-unity at all times, sometimes as bad as being the 30's. None of\nthis happens without postgres running, so it is definitely the culprit.\n\nThe site averages about one hit every twenty seconds. This should not\nbe an overwhelming load, especially for what is just pulling up cached\ninformation 99% of the time.\n\nGiven this scenario, can anyone advise? I am particularly puzzled as to\nwhy everything I tried initially helped, but always degenerated rather\nrapidly to a near standstill. It seems to me that everything should be\nable to be cached in memory with no problem, perhaps I need to force\nthis more explicitly.\n\nMy next step, if I cannot fix this, is to try mysql =(\n\nAnyway, whoever helps would be doing a great service to many who use\nPlanetMath =) It'd be much appreciated. \n\nAaron Krowne\n\n\n", "msg_date": "Sun, 16 Mar 2003 01:01:25 -0500", "msg_from": "Aaron Krowne <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> As the topic suggests, I am having fairly critical troubles with\n> postgresql on PlanetMath.org (a site which I run). 
You can go there and\n> try to pull up some entries and you will see the problem: everything is\n> incredibly slow.\n\nHave you read the following?\n\nhttp://developer.postgresql.org/docs/postgres/performance-tips.html\n\n> First: the machine. The machine is not too spectactular, but it is not\n> so bad that the performance currently witnessed should be happening. It\n> is a dual PIII-650 with 512MB of RAM and a 20gb IDE drive (yes, DMA is\n> on). There is plenty of free space on the drive.\n\nThis shouldn't be an issue for the load you describe. A p-100 should\nbe okay, but it depends on your queries that you're performing.\n\n> Now, the optimisations I have tried:\n\n*) Stick with btree's.\n\n> - I then tried increasing the machines shared memory max to 75% of the\n> physical memory, and scaled postgresql's buffers accordingly. This\n> also sped things up for a while, but again resulted in eventual\n> degeneration. Even worse, there were occasional crashes due to\n> running out of memory that (according to my calculations) shouldn't\n> have been happening.\n\n*) Don't do this, go back to near default levels. I bet this is\n hurting your setup.\n\n> - Lastly, I tried reducing the shared memory max and limiting postgresql\n> to more conservative values, although still not to the out-of-box\n> values. Right now shared memory max on the system is 128mb,\n> postgres's shared buffers are at 64mb, sort_mem is at 16mb, and\n> effective cache size is at 10mb.\n\n*) You shouldn't have to do this either.\n\n> For perspective, the size of the PlanetMath database dump is 24mb.\n> It should be able to fit in memory easily, so I'm not sure what I'm\n> doing wrong regarding the caching.\n\nI hate to say this, but this sounds like a config error. :-/\n\n> For the most trivial request, Postgresql takes up basically all the\n> CPU for the duration of the request. The load average of the\n> machine is over-unity at all times, sometimes as bad as being the\n> 30's. None of this happens without postgres running, so it is\n> definitely the culprit.\n\n*) Send an EXPLAIN statement as specified here:\n\nhttp://developer.postgresql.org/docs/postgres/performance-tips.html#USING-EXPLAIN\n\n> The site averages about one hit every twenty seconds. This should not\n> be an overwhelming load, especially for what is just pulling up cached\n> information 99% of the time.\n\n*) Have you done a vacuum analyze?\n\nhttp://developer.postgresql.org/docs/postgres/populate.html#POPULATE-ANALYZE\n\n> Given this scenario, can anyone advise? I am particularly puzzled\n> as to why everything I tried initially helped, but always\n> degenerated rather rapidly to a near standstill. It seems to me\n> that everything should be able to be cached in memory with no\n> problem, perhaps I need to force this more explicitly.\n\n*) Send the EXPLAIN output and we can work from there.\n\n> My next step, if I cannot fix this, is to try mysql =(\n\nBah, don't throw down the gauntlet, it's pretty clear this is a local\nissue and not a problem with the DB. :)\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 15 Mar 2003 22:12:08 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Aaron Krowne <[email protected]> writes:\n> As the topic suggests, I am having fairly critical troubles with\n> postgresql on PlanetMath.org (a site which I run).\n\nUm ... not meaning to insult your intelligence, but how often do you\nvacuum? 
Also, exactly what Postgres version are you running? Can\nyou show us EXPLAIN ANALYZE results for some of the slow queries?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Mar 2003 01:26:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "Aaron Krowne wrote:\n> Given this scenario, can anyone advise? I am particularly puzzled as to\n> why everything I tried initially helped, but always degenerated rather\n> rapidly to a near standstill. It seems to me that everything should be\n> able to be cached in memory with no problem, perhaps I need to force\n> this more explicitly.\n\nBasic guidance:\n- Keep shared memory use reasonable; your final settings of 64M shared\n buffers and 16M sort_mem sound OK. In any case, be sure you're not\n disk-swapping.\n- If you don't already, run VACUUM ANALYZE on some regular schedule\n (how often depends on your data turnover rate)\n- Possibly consider running REINDEX periodically\n- Post the SQL and EXPLAIN ANALYZE output for the queries causing the\n worst of your woes to the list\n\nExplanations of these can be found by searching the list archives and \nreading the related sections of the manual.\n\nA few questions:\n- What version of Postgres?\n- Have you run VACUUM FULL ANALYZE lately (or at least VACUUM ANALYZE)?\n- Does the database see mostly SELECTs and INSERTs, or are there many\n UPDATEs and/or DELETEs too?\n- Are all queries slow, or particular ones?\n\nHTH,\nJoe\n\n\n", "msg_date": "Sat, 15 Mar 2003 22:37:07 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> - Keep shared memory use reasonable; your final settings of 64M shared\n> buffers and 16M sort_mem sound OK. In any case, be sure you're not\n> disk-swapping.\n\nYeah, those seem like reasonable values to me. But I am not sure I'm\nnot disk-swapping, in fact it is almost certainly going on here bigtime.\n\n> - If you don't already, run VACUUM ANALYZE on some regular schedule\n> (how often depends on your data turnover rate)\n\nI've done it here and there, especially when things seem slow. Never\nseems to help much; the data turnover isn't high.\n\n> - Possibly consider running REINDEX periodically\n\nOk thats a new one, I'll try that out.\n\n> - Post the SQL and EXPLAIN ANALYZE output for the queries causing the\n> worst of your woes to the list\n> - Are all queries slow, or particular ones?\n\nI'm grouping two separate things together to reply to, because the\nsecond point answers the first: there's really no single culprit. Every\nSELECT has a lag on the scale of a second; resolving all of the foreign\nkeys in various tables to construct a typical data-rich page piles up\nmany of these. I'm assuming the badness of this depends on how much\nswapping is going on.\n\n> Explanations of these can be found by searching the list archives and \n> reading the related sections of the manual.\n\nWill check that out, thanks.\n\n> A few questions:\n> - What version of Postgres?\n\n7.2.1\n\n> - Have you run VACUUM FULL ANALYZE lately (or at least VACUUM ANALYZE)?\n\nYes, after a particularly bad slowdown... it didn't seem to fix things. 
\n\n> - Does the database see mostly SELECTs and INSERTs, or are there many\n> UPDATEs and/or DELETEs too?\n\nAlmost exclusively SELECTs.\n\nOK, I have just run a VACUUM FULL ANALYZE and things seem much better...\nwhich would be the first time its really made a difference =) I tried\ncomparing an EXPLAIN ANALYZE of a single row select on the main objects\ntable before and after the vacuum, and the plan didn't change\n(sequential scan still), but the response time went from ~1 second to\n~5msec! I'm not really sure what could have happened here\nbehind-the-scenes since it didn't start using the index, and there\nprobably weren't more than 10% updated/added rows since the last VACUUM.\n\nI actually thought I had a task scheduled which was running a VACUUM\nperiodically, but maybe it broke for some reason or another. Still, I\nhave not been getting consistent results from running VACUUMs, so I'm\nnot entirely confident that the book is closed on the problem. \n\nThanks for your help.\n\napk\n", "msg_date": "Sun, 16 Mar 2003 02:52:06 -0500", "msg_from": "Aaron Krowne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> Have you read the following?\n> http://developer.postgresql.org/docs/postgres/performance-tips.html\n\nYup. I would never go and bother real people without first checking the\nmanual, but I bet you get a lot of that =)\n\n> This shouldn't be an issue for the load you describe. A p-100 should\n> be okay, but it depends on your queries that you're performing.\n\nMostly just gather-retrieval based on unique identifier keys in a bunch\nof tables. Really mundane stuff.\n\n> *) Stick with btree's.\n\nYeah, that saddens me, though =) When I initially switched to hashes,\nthings were blazing. This application makes heavy use of keys and equal\ncomparisons on indices, so hashes are really the optimal index\nstructure. I'd like to be able to go back to using them some day... if\nnot for the concurrency issue, which seems like it should be fixable\n(even having mutually exclusive locking on the entire index would\nprobably be fine for this application and would prevent deadlock).\n\n> > - I then tried increasing the machines shared memory max to 75% of the\n> > physical memory, and scaled postgresql's buffers accordingly. This\n> *) Don't do this, go back to near default levels. I bet this is\n> hurting your setup.\n> > - Lastly, I tried reducing the shared memory max and limiting postgresql\n> > to more conservative values, although still not to the out-of-box\n> > values. Right now shared memory max on the system is 128mb,\n> > postgres's shared buffers are at 64mb, sort_mem is at 16mb, and\n> > effective cache size is at 10mb.\n> *) You shouldn't have to do this either.\n\nWell, I've now been advised that the best way is all 3 that I have tried\n(among aggressive buffering, moderate buffering, and default\nconservative buffering). \n\nPerhaps you could explain to me why the system shouldn't be ok with the\nmoderate set of buffer sizes on a 512mb machine? I don't really know\nenough about the internals of postgres to be doing anything but voodoo\nwhen I change the values. \n\n> I hate to say this, but this sounds like a config error. :-/\n\nThats better than a hardware error! 
This is what I wanted to hear =)\n\n> *) Have you done a vacuum analyze?\n\nSee previous message to list (summary: it worked this time, but usually\nit does not help.)\n\nThanks,\n\nAaron Krowne\n", "msg_date": "Sun, 16 Mar 2003 03:06:01 -0500", "msg_from": "Aaron Krowne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> > > - Lastly, I tried reducing the shared memory max and limiting postgresql\n> > > to more conservative values, although still not to the out-of-box\n> > > values. Right now shared memory max on the system is 128mb,\n> > > postgres's shared buffers are at 64mb, sort_mem is at 16mb, and\n> > > effective cache size is at 10mb.\n> > *) You shouldn't have to do this either.\n> \n> Well, I've now been advised that the best way is all 3 that I have\n> tried (among aggressive buffering, moderate buffering, and default\n> conservative buffering).\n>\n> Perhaps you could explain to me why the system shouldn't be ok with\n> the moderate set of buffer sizes on a 512mb machine? I don't really\n> know enough about the internals of postgres to be doing anything but\n> voodoo when I change the values.\n\nHonestly? The defaults are small, but they're not small enough to\ngive you the lousy performance you were describing. If your buffers\nare too high or there are enough things that are using up KVM/system\nmemory... contention can cause thashing/swapping which it wasn't clear\nthat you weren't having happen. Defaults shouldn't, under any\nnon-embedded circumstance cause problems with machines >233Mhz,\nthey're just too conservative to do any harm. :)\n\n> > *) Have you done a vacuum analyze?\n> \n> See previous message to list (summary: it worked this time, but\n> usually it does not help.)\n\nHrmm... ENOTFREEBSD, eh?\n\nhttp://www.freebsd.org/cgi/cvsweb.cgi/ports/databases/postgresql7/files/502.pgsql?rev=1.5&content-type=text/x-cvsweb-markup\n\nYou may want to setup a nightly vacuum/backup procedure. Palle\nGirgensohn <[email protected]> has written a really nice and simple\nscript that's been in use for ages on FreeBSD PostgreSQL installations\nfor making sure that you don't have this problem.\n\nActually, it'd be really cool to lobby to get this script added to the\nbase PostgreSQL installation that way you wouldn't have this\nproblem... it'd also dramatically increase the number of nightly\nbackups performed for folks if a default script does this along with\nvacuuming. -sc\n\n\n-- \nSean Chittenden\n", "msg_date": "Sun, 16 Mar 2003 00:20:24 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> You may want to setup a nightly vacuum/backup procedure. Palle\n> Girgensohn <[email protected]> has written a really nice and simple\n> script that's been in use for ages on FreeBSD PostgreSQL installations\n> for making sure that you don't have this problem.\n> \n> Actually, it'd be really cool to lobby to get this script added to the\n> base PostgreSQL installation that way you wouldn't have this\n> problem... it'd also dramatically increase the number of nightly\n> backups performed for folks if a default script does this along with\n> vacuuming. -sc\n\n*Actually*, I just double checked, and I was not hallucinating: I *do*\nhave a nightly vacuum script... 
because Debian postgres comes with it =)\n\nSo, either it is broken, or doing a VACUUM FULL ANALYZE rather than just\nVACUUM ANALYZE made all the difference. Is this possible (the latter,\nwe know the former is possible...)?\n\napk\n", "msg_date": "Sun, 16 Mar 2003 03:30:11 -0500", "msg_from": "Aaron Krowne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> > You may want to setup a nightly vacuum/backup procedure. Palle\n> > Girgensohn <[email protected]> has written a really nice and simple\n> > script that's been in use for ages on FreeBSD PostgreSQL installations\n> > for making sure that you don't have this problem.\n> > \n> > Actually, it'd be really cool to lobby to get this script added to the\n> > base PostgreSQL installation that way you wouldn't have this\n> > problem... it'd also dramatically increase the number of nightly\n> > backups performed for folks if a default script does this along with\n> > vacuuming. -sc\n> \n> *Actually*, I just double checked, and I was not hallucinating: I *do*\n> have a nightly vacuum script... because Debian postgres comes with it =)\n\nCool, glad to hear other installations are picking up doing this.\n\n> So, either it is broken, or doing a VACUUM FULL ANALYZE rather than just\n> VACUUM ANALYZE made all the difference. Is this possible (the latter,\n> we know the former is possible...)?\n\nYou shouldn't have to do a VACUUM FULL. Upgrade your PostgreSQL\ninstallation if you can (most recent if possible), there have been\nmany performance updates and VACUUM fixes worth noting. Check the\nrelease notes starting with your version and read through them up to\nthe current release... you'll be amazed at all the work that's been\ndone, some of which it looks like may affect your installation.\n\nhttp://developer.postgresql.org/docs/postgres/release.html\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Sun, 16 Mar 2003 00:35:37 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Aaron Krowne <[email protected]> writes:\n> So, either it is broken, or doing a VACUUM FULL ANALYZE rather than just\n> VACUUM ANALYZE made all the difference. Is this possible (the latter,\n> we know the former is possible...)?\n\nIf your FSM parameters in postgresql.conf are too small, then plain\nvacuums might have failed to keep up with the available free space,\nleading to a situation where vacuum full is essential. Did you happen\nto notice whether the vacuum full shrunk the database's disk footprint\nnoticeably?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Mar 2003 03:37:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "Aaron Krowne wrote:\n>>- What version of Postgres?\n> 7.2.1\n\nYou should definitely look at upgrading, at least to 7.2.4 (which you \ncan do without requiring a dump/reload cycle), but better yet to 7.3.2 \n(which will require a dump/reload cycle). 
I don't know that will fix you \nspecific issue, but there were some critical bug fixes between 7.2.1 and \n7.2.4.\n\n>>- Does the database see mostly SELECTs and INSERTs, or are there many\n>> UPDATEs and/or DELETEs too?\n> \n> Almost exclusively SELECTs.\n> \n> OK, I have just run a VACUUM FULL ANALYZE and things seem much better...\n\nHmmm, do you periodically do large updates or otherwise turn over rows \nin batches?\n\n> which would be the first time its really made a difference =) I tried\n> comparing an EXPLAIN ANALYZE of a single row select on the main objects\n> table before and after the vacuum, and the plan didn't change\n> (sequential scan still), but the response time went from ~1 second to\n> ~5msec! I'm not really sure what could have happened here\n> behind-the-scenes since it didn't start using the index, and there\n> probably weren't more than 10% updated/added rows since the last VACUUM.\n\nIf your app is mostly doing equi-lookups by primary key, and indexes \naren't being used (I think I saw you mention that on another post), then \nsomething else is still wrong. Please pick one or two typical queries \nthat are doing seq scans and post the related table definitions, \nindexes, SQL, and EXPLAIN ANALYZE. I'd bet you are getting bitten by a \ndatatype mismatch or something.\n\nJoe\n\n", "msg_date": "Sun, 16 Mar 2003 03:29:14 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> - Lastly, I tried reducing the shared memory max and limiting postgresql\n> to more conservative values, although still not to the out-of-box\n> values. Right now shared memory max on the system is 128mb,\n> postgres's shared buffers are at 64mb, sort_mem is at 16mb, and\n> effective cache size is at 10mb.\n\nI found that 5000 shared buffers was best performance on my system.\nHowever, your problems are probably due to maybe not running vacuum,\nanalyze, reindex, etc. Your queries may not be effectively indexed -\nEXPLAIN ANALYZE them all.\n\nChris\n\n", "msg_date": "Mon, 17 Mar 2003 10:08:10 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> I don't know what your definition of \"high\" is, but I do find that\n> turnover can degrade performance over time. Perhaps one of the devs\n> can enlighten me, but I have a database that turns over ~100,000\n> rows/day that does appear to slowly get worse. The updates are done\n> in batches and I \"VACUUM\" and \"VACUUM ANALYZE\" after each batch\n> (three/day) but I found that over time simple queries would start to\n> hit the disk more and more.\n\nCreeping index syndrome. Tom recently fixed this in HEAD. Try the\nlatest copy from the repo and see if this solves your problems.\n\n> A \"select count(*) FROM tblwordidx\" initially took about 1 second to\n> return a count of 2 million but after a few months it took several\n> minutes of really hard HDD grinding.\n\nThat's because there are dead entries in the index that weren't being\nreused or cleaned up. As I said, this has been fixed.\n\n-sc\n\n\nPS It's good to see you around again. 
:)\n\n-- \nSean Chittenden\n", "msg_date": "Sun, 16 Mar 2003 22:10:11 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "I don't know what your definition of \"high\" is, but I do find that\nturnover can degrade performance over time. Perhaps one of the devs can\nenlighten me, but I have a database that turns over ~100,000 rows/day that\ndoes appear to slowly get worse. The updates are done in batches and I\n\"VACUUM\" and \"VACUUM ANALYZE\" after each batch (three/day) but I found\nthat over time simple queries would start to hit the disk more and more.\n\nA \"select count(*) FROM tblwordidx\" initially took about 1 second to\nreturn a count of 2 million but after a few months it took several minutes\nof really hard HDD grinding. Also, the database only had a couple hundred\nmegs of data in it, but the db was taking up 8-9 GB of disk space. I'm\nthinking data fragmentation is ruining cache performance? When I did a\ndump restore and updated from 7.2.1 to 7.3.1 queries were zippy again. \nBut, now it is starting to slow... I have yet to measure the effects of a\nVACUUM FULL, however. I'll try it an report back...\n\n\nLogan Bowers\n\nOn Sun, 16 Mar 2003, Aaron Krowne wrote:\n\n<snip>\n> I've done it here and there, especially when things seem slow. Never\n> seems to help much; the data turnover isn't high.\n> \n<snip>\n", "msg_date": "Mon, 17 Mar 2003 01:12:34 -0500 (EST)", "msg_from": "Logan Bowers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Sean Chittenden said:\n>> A \"select count(*) FROM tblwordidx\" initially took about 1 second to\n>> return a count of 2 million but after a few months it took several\n>> minutes of really hard HDD grinding.\n>\n> That's because there are dead entries in the index that weren't being\n> reused or cleaned up. As I said, this has been fixed.\n\nThat's doubtful: \"select count(*) FROM foo\" won't use an index. There are\na bunch of other factors (e.g. dead heap tuples, changes in the pages\ncached in the buffer, disk fragmentation, etc.) that could effect\nperformance in that situation, however.\n\nCheers,\n\nNeil\n\n\n", "msg_date": "Mon, 17 Mar 2003 01:18:59 -0500 (EST)", "msg_from": "\"Neil Conway\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "> >> A \"select count(*) FROM tblwordidx\" initially took about 1 second to\n> >> return a count of 2 million but after a few months it took several\n> >> minutes of really hard HDD grinding.\n> >\n> > That's because there are dead entries in the index that weren't being\n> > reused or cleaned up. As I said, this has been fixed.\n> \n> That's doubtful: \"select count(*) FROM foo\" won't use an\n> index. There are a bunch of other factors (e.g. dead heap tuples,\n> changes in the pages cached in the buffer, disk fragmentation, etc.)\n> that could effect performance in that situation, however.\n\n*blush* Yeah, jumped the gun on that when I read that queries were\ngetting slower (churn of an index == slow creaping death for\nperformance). A SELECT COUNT(*), however, wouldn't be affected by the\nindex growth problem. Is the COUNT() on a view that uses an index? I\nhaven't had any real problems with this kind of degredation outside of\nindexes. 
:-/ -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sun, 16 Mar 2003 22:29:29 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "\"Neil Conway\" <[email protected]> writes:\n> Sean Chittenden said:\n> A \"select count(*) FROM tblwordidx\" initially took about 1 second to\n> return a count of 2 million but after a few months it took several\n> minutes of really hard HDD grinding.\n>> \n>> That's because there are dead entries in the index that weren't being\n>> reused or cleaned up. As I said, this has been fixed.\n\n> That's doubtful: \"select count(*) FROM foo\" won't use an index.\n\nTo know what's going on, as opposed to guessing about it, we'd need to\nknow something about the physical sizes of the table and its indexes.\n\"vacuum verbose\" output would be instructive...\n\nBut my best theorizing-in-advance-of-the-data guess is that Logan's\nFSM settings are too small, causing free space to be leaked over time.\nIf a vacuum full restores the original performance then that's probably\nthe right answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 01:34:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Aaron Krowne <[email protected]> writes:\n> > So, either it is broken, or doing a VACUUM FULL ANALYZE rather than just\n> > VACUUM ANALYZE made all the difference. Is this possible (the latter,\n> > we know the former is possible...)?\n> \n> If your FSM parameters in postgresql.conf are too small, then plain\n> vacuums might have failed to keep up with the available free space,\n> leading to a situation where vacuum full is essential. Did you happen\n> to notice whether the vacuum full shrunk the database's disk footprint\n> noticeably?\n\nThis seems to be a frequent problem. \n\nIs there any easy way to check an existing table for lost free space?\n\nIs there any way vauum could do this check and print a warning suggesting\nusing vaccuum full and/or increasing fsm parameters if it finds such?\n\n--\ngreg\n\n", "msg_date": "17 Mar 2003 10:58:39 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Is there any easy way to check an existing table for lost free space?\n\ncontrib/pgstattuple gives a pretty good set of statistics. (I thought\nVACUUM VERBOSE printed something about total free space in a table,\nbut apparently only VACUUM FULL VERBOSE does. Maybe should change\nthat.)\n\n> Is there any way vauum could do this check and print a warning suggesting\n> using vaccuum full and/or increasing fsm parameters if it finds such?\n\nIn CVS tip, a whole-database VACUUM VERBOSE gives info about the free\nspace map occupancy, eg\n\nINFO: Free space map: 224 relations, 450 pages stored; 3776 total pages needed.\n Allocated FSM size: 1000 relations + 20000 pages = 178 KB shared mem.\n\nIf the \"pages needed\" number is drastically larger than the allocated\nFSM size, you've got a problem. (I don't think you need to panic if\nit's just a little larger, though. 
10X bigger would be time to do\nsomething, 2X bigger maybe not.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 11:11:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "On Mon, 17 Mar 2003, Tom Lane wrote:\n\n> In CVS tip, a whole-database VACUUM VERBOSE gives info about the free\n> space map occupancy, eg\n> \n> INFO: Free space map: 224 relations, 450 pages stored; 3776 total pages needed.\n> Allocated FSM size: 1000 relations + 20000 pages = 178 KB shared mem.\n> \n\nHow do you get this information?\n\nI just ran VACUUM VERBOSE and it spit out a bunch of information per \nrelation, but nothing about total relations and FSM space. We are running \n7.3.2.\n\nChris\n\n", "msg_date": "Mon, 17 Mar 2003 09:12:32 -0800 (PST)", "msg_from": "Chris Sutton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "On Sun, Mar 16, 2003 at 03:37:32AM -0500, Tom Lane wrote:\n> Aaron Krowne <[email protected]> writes:\n> > So, either it is broken, or doing a VACUUM FULL ANALYZE rather than just\n> > VACUUM ANALYZE made all the difference. Is this possible (the latter,\n> > we know the former is possible...)?\n> \n> If your FSM parameters in postgresql.conf are too small, then plain\n> vacuums might have failed to keep up with the available free space,\n> leading to a situation where vacuum full is essential. Did you happen\n> to notice whether the vacuum full shrunk the database's disk footprint\n> noticeably?\n\nI was having a similar problem a couple threads ago, and a VACUUM FULL\nreduced my database from 3.9 gigs to 2.1 gigs ! \n\nSo my question is how to (smartly) choose an FSM size?\n\nthanks,\nmax`\n", "msg_date": "Mon, 17 Mar 2003 10:33:27 -0800", "msg_from": "Max Baker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Chris Sutton said:\n> On Mon, 17 Mar 2003, Tom Lane wrote:\n>> In CVS tip, a whole-database VACUUM VERBOSE gives info about the free\n>> space map occupancy, eg\n\n> How do you get this information?\n>\n> I just ran VACUUM VERBOSE and it spit out a bunch of information per\n> relation, but nothing about total relations and FSM space. We are\n> running 7.3.2.\n\nAs Tom mentioned, that information is printed by a database-wide VACUUM\nVERBOSE \"in CVS tip\" -- i.e. in the development code that will eventually\nbecome PostgreSQL 7.4\n\nCheers,\n\nNeil\n\n\n", "msg_date": "Mon, 17 Mar 2003 14:20:10 -0500 (EST)", "msg_from": "\"Neil Conway\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Chris Sutton <[email protected]> writes:\n> On Mon, 17 Mar 2003, Tom Lane wrote:\n>> In CVS tip, a whole-database VACUUM VERBOSE gives info about the free\n>> space map occupancy, eg\n>> INFO: Free space map: 224 relations, 450 pages stored; 3776 total pages needed.\n>> Allocated FSM size: 1000 relations + 20000 pages = 178 KB shared mem.\n\n> How do you get this information?\n\nBefore CVS tip, you don't.\n\n[ thinks...] Perhaps we could back-port the FSM changes into 7.3 ...\nit would be a larger change than I'd usually consider reasonable for a\nstable branch, though. Particularly considering that it would be hard\nto call it a bug fix. 
By any sane definition this is a new feature,\nand we have a policy against putting new features in stable branches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 14:26:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "On Mon, Mar 17, 2003 at 02:26:00PM -0500, Tom Lane wrote:\n> [ thinks...] Perhaps we could back-port the FSM changes into 7.3 ...\n\nFor what it's worth, I think that'd be a terrible precedent. Perhaps\nmaking a patch file akin to what the Postgres-R folks do, for people\nwho really want it. But there is just no way it's a bug fix, and one\nof the things I _really really_ like about Postgres is the way\n\"stable\" means stable. Introducing such a new feature to 7.3.x now\nsmacks to me of the direction the Linux kernal has gone, where major\nnew funcitonality gets \"merged\"[1] in dot-releases of the so-called\nstable version.\n\n[1] This is the meaning of \"merge\" also used in Toronto on the 401 at\nrush hour. 8 lanes of traffic jam and growing.\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 17 Mar 2003 14:47:38 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Mon, Mar 17, 2003 at 02:26:00PM -0500, Tom Lane wrote:\n>> [ thinks...] Perhaps we could back-port the FSM changes into 7.3 ...\n\n> For what it's worth, I think that'd be a terrible precedent.\n\nOh, I quite agree. I was just throwing up the option to see if anyone\nthought the issue was important enough to take risks for. I do not...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 15:46:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "I should have paid more attention to the disk space before... but it\nlooks like somewhere between half a gig and a gig was freed! The disk\nfootprint is about a gig now.\n\nAaron Krowne\n\nOn Sun, Mar 16, 2003 at 03:37:32AM -0500, Tom Lane wrote:\n> Aaron Krowne <[email protected]> writes:\n> > So, either it is broken, or doing a VACUUM FULL ANALYZE rather than just\n> > VACUUM ANALYZE made all the difference. Is this possible (the latter,\n> > we know the former is possible...)?\n> \n> If your FSM parameters in postgresql.conf are too small, then plain\n> vacuums might have failed to keep up with the available free space,\n> leading to a situation where vacuum full is essential. Did you happen\n> to notice whether the vacuum full shrunk the database's disk footprint\n> noticeably?\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 21:31:22 -0500", "msg_from": "Aaron Krowne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "All right, I performed a VACUUM FULL last night and after about 3 hours I\ntried running a select count(*) FROM tblwordidx and that did help things\nconsiderably (it runs in ~20 seconds instead of 1-2 minutes). Not as good \nas originally, but close. \n\nBut, here's the breakdown of the db: \n\nI'm using the database as a keyword based file search engine (not the most \nefficient method, I know, but it works well for my purposes). 
The biggest \nand most relevant tables are a table of files and of words. The basic \noperation that each file has a set of keywords associated with it, I do a \nwhole word search on tblwordidx and join with tblfiles (I know, the naming \nscheme sucks, sorry!). \n\nThree times a day I scan the network and update the database. I insert\nabout 180,000 rows into a temporary table and then use it to update\ntemporary table (tbltmp). With the aid of a few other tables, I clean up\ntblFiles so that existing rows have an updated timestamp in tblseen and\nfiles with a timestamp older than 1 day are removed. Then, I take the new\nrows in tblfiles and use a perl script to add more words to tblwordidx. \nAfter each update a do a VACUUM and VACUUM ANALYZE which usually grinds\nfor 10 to 15 minutes.\n\nI'm running this db on a celeron 450Mhz with 256MB RAM and a 60GB HDD\n(7200 rpm). For the most part I have the db running \"well enough.\" Over\ntime however, I find that performance degrades, the count(*) above is an\nexample of a command that does worse over time. It gets run once an hour\nfor stats collection. When I first migrated the db to v7.3.1 it would\ntake about 5-10 seconds (which it is close to now after a VACUUM FULL) but \nafter a few weeks it would take over a minute of really intense HDD\nactivity. Also of note is that when I first loaded the data it would\ncache very well with the query taking maybe taking 15 seconds if I had \njust started the db after reboot, but when it was in its \"slow\" state \nrepeating the query didn't noticably use the disk less (nor did it take \nless time). \n\nI've attached a VACUUM VERBOSE and my conf file (which is pretty vanilla,\nI haven't tweaked it since updating). If you have any suggestions on how\nI can correct this situation through config changes that would be ideal\nand thanks for your help, if is just a case of doing lots of VACUUM FULLs,\nI can definitely see it as a performance bottleneck for postgres. \nFortunately I can afford the huge peroformance penalty of a VACUUM FULL,\nbut I can certainly think of apps that can't.\n\n\nLogan Bowers \n\n\\d tblfiles: (219,248 rows)\n Column | Type | Modifiers\n----------+-----------------------------+-------------------------------------------\n fid | integer | not null default \nnextval('fileids'::text)\n hid | integer | not null\n pid | integer | not null\n name | character varying(256) | not null\n size | bigint | not null\nIndexes: temp_fid_key unique btree (fid),\n filediridx btree (hid, pid, name, size, fid),\n fileidx btree (name, hid, pid, fid),\n fileidxfid btree (fid, name, pid)\n\n\\d tblwordidx: (1,739,481 rows)\n Table \"public.tblwordidx\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n fid | integer | not null\n word | character varying(128) | not null\n todel | boolean |\nIndexes: wordidxfid btree (fid, word),\n wordidxfidonly btree (fid),\n wordidxw btree (word, fid)\n\n\n\nOn Mon, 17 Mar 2003, Tom Lane wrote:\n\n> \"Neil Conway\" <[email protected]> writes:\n> > Sean Chittenden said:\n> > A \"select count(*) FROM tblwordidx\" initially took about 1 second to\n> > return a count of 2 million but after a few months it took several\n> > minutes of really hard HDD grinding.\n> >> \n> >> That's because there are dead entries in the index that weren't being\n> >> reused or cleaned up. 
As I said, this has been fixed.\n> \n> > That's doubtful: \"select count(*) FROM foo\" won't use an index.\n> \n> To know what's going on, as opposed to guessing about it, we'd need to\n> know something about the physical sizes of the table and its indexes.\n> \"vacuum verbose\" output would be instructive...\n> \n> But my best theorizing-in-advance-of-the-data guess is that Logan's\n> FSM settings are too small, causing free space to be leaked over time.\n> If a vacuum full restores the original performance then that's probably\n> the right answer.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>", "msg_date": "Mon, 17 Mar 2003 21:41:07 -0500 (EST)", "msg_from": "Logan Bowers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "> I'm running this db on a celeron 450Mhz with 256MB RAM and a 60GB HDD\n> (7200 rpm). For the most part I have the db running \"well enough.\" Over\n> time however, I find that performance degrades, the count(*) above is an\n> example of a command that does worse over time. It gets run once an hour\n> for stats collection. When I first migrated the db to v7.3.1 it would\n> take about 5-10 seconds (which it is close to now after a VACUUM FULL) but\n> after a few weeks it would take over a minute of really intense HDD\n> activity. Also of note is that when I first loaded the data it would\n> cache very well with the query taking maybe taking 15 seconds if I had\n> just started the db after reboot, but when it was in its \"slow\" state\n> repeating the query didn't noticably use the disk less (nor did it take\n> less time).\n\nTo speed up your COUNT(*), how about doing this:\n\nCreate a separate table to hold a single integer.\n\nAdd a trigger after insert on your table to increment the counter in the\nother table\nAdd a trigger after delete on your table to decrement the counter in the\nother table.\n\nThat way you always have an O(1) count...\n\nChris\n\n", "msg_date": "Tue, 18 Mar 2003 10:44:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "Logan Bowers <[email protected]> writes:\n> I've attached a VACUUM VERBOSE and my conf file (which is pretty vanilla,\n> I haven't tweaked it since updating).\n\nYou definitely need to increase the fsm shared memory parameters. The\ndefault max_fsm_relations is just plain too small (try 1000) and the\ndefault_max_fsm_pages is really only enough for perhaps a 100Mb\ndatabase. I'd try bumping it to 100,000. 
Note you need a postmaster\nrestart to make these changes take effect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2003 21:51:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "On Mon, 17 Mar 2003, Logan Bowers wrote:\n\n> Logan Bowers \n> \n> \\d tblfiles: (219,248 rows)\n> Column | Type | Modifiers\n> ----------+-----------------------------+-------------------------------------------\n> fid | integer | not null default \n> nextval('fileids'::text)\n> hid | integer | not null\n> pid | integer | not null\n> name | character varying(256) | not null\n> size | bigint | not null\n> Indexes: temp_fid_key unique btree (fid),\n> filediridx btree (hid, pid, name, size, fid),\n> fileidx btree (name, hid, pid, fid),\n> fileidxfid btree (fid, name, pid)\n\nI'm no expert on indexes, but I seem to remember reading that creating \nmulticolumn indexes on more than 2 or 3 columns gets sort of pointless:\n\nhttp://www.us.postgresql.org/users-lounge/docs/7.3/postgres/indexes-multicolumn.html\n\nThere is probably a ton of disk space and CPU used to keep all these multi \ncolumn indexes. Might be part of the problem.\n\n> \\d tblwordidx: (1,739,481 rows)\n> Table \"public.tblwordidx\"\n> Column | Type | Modifiers\n> --------+------------------------+-----------\n> fid | integer | not null\n> word | character varying(128) | not null\n> todel | boolean |\n> Indexes: wordidxfid btree (fid, word),\n> wordidxfidonly btree (fid),\n> wordidxw btree (word, fid)\n> \n\nAnother index question for the pros. When creating a multi-column index \ndo you need to do it both ways:\n\nwordidxfid btree (fid, word)\nwordidxw btree (word, fid\n\nWe have a very similar \"dictonary\" table here for searching. It's about \n1.7 million rows, takes about 80mb of disk space. There is one multi \ncolumn index on the table which uses about 50mb of disk space.\n\nTo find out how much disk space you are using, the hard way is:\n\nselect relfilenode from pg_class where relname='tblwordidx';\nselect relfilenode from pg_class where relname='wordidxw';\n\nrelfilenode is the name of the file in your data directory.\n\nI'm pretty sure there is an easier way to do this with a function I saw in \ncontrib.\n\nJust some thoughts.\n\nChris\n\n", "msg_date": "Tue, 18 Mar 2003 06:59:56 -0800 (PST)", "msg_from": "Chris Sutton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "Logan,\n\n> I'm running this db on a celeron 450Mhz with 256MB RAM and a 60GB HDD\n> (7200 rpm). For the most part I have the db running \"well enough.\" Over\n\nHmmm ... actually, considering your hardware, I'd say the database performance \nyou're getting is excellent. You're facing 3 bottlenecks:\n\n1) The Celeron II's lack of on-chip cache will slow down even moderately \ncomplex queries as much as 50% over a comparably-clocked pentium or AMD chip, \nin my experience. \n\n2) 256mb RAM is small enough that if you are running Apache on the same \nmachine, Apache & Postgres could be contesting for RAM during busy periods.\n\n3) (most noticable) You have pretty much the bare minimum of disk. For a \none-gb database, a Linux RAID array or mirror would be a lot better ...\n\nOf course, that's all relative. What I'm saying is, if you want your \ndatabase to \"scream\" you're going to have to put some money into hardware. 
\nIf you're just looking for adequate performance, then that can be had with a \nlittle tweaking and maintainence.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 18 Mar 2003 08:26:10 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> You definitely need to increase the fsm shared memory parameters. The\n> default max_fsm_relations is just plain too small (try 1000) and the\n> default_max_fsm_pages is really only enough for perhaps a 100Mb\n> database. I'd try bumping it to 100,000. Note you need a postmaster\n> restart to make these changes take effect.\n\nHmm, are there any guidelines for choosing these values?\n\nWe have a database with a table into which we insert about 4,000,000\nrows each day, and delete another 4,000,000 rows. The total row count\nis around 40 million, I guess, and the rows are about 150 bytes long.\n(VACUUM FULL is running at the moment, so I can't check.)\n\nThe database is used as a research tool, and we run moderately complex\nad-hoc queries on it. As a consequence, I don't see much room for\noptimization.\n\nOne of the columns is time-based and indexed, so we suffer from the\ncreeping index syndrome. A nightly index rebuild followed by a VACUUM\nANALYZE isn't a problem (it takes less than six ours), but this\ndoesn't seem to be enough (we seem to lose disk space nevertheless).\n\nI can't afford a regular VACUUM FULL because it takes down the\ndatabase for over ten hours, and this starts to cut into the working\nhours no matter when it starts.\n\nCan you suggest some tweaks to the FSM values so that we can avoid the\nfull VACUUM? The database runs 7.3.2 and resides on a 4-way Xeon box\nwith 4 GB of RAM and a severely underpowered disk subsystem (Linux\nsoftware RAID1 on two 10k 36 GB SCSI drives -- don't ask, this\ndatabase application is nothing but an accident which happened after\npurchase of the box).\n\n-- \nFlorian Weimer \t [email protected]\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n", "msg_date": "Fri, 21 Mar 2003 00:01:18 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "On Friday 21 Mar 2003 4:31 am, Florian Weimer wrote:\n> Tom Lane <[email protected]> writes:\n> > You definitely need to increase the fsm shared memory parameters. The\n> > default max_fsm_relations is just plain too small (try 1000) and the\n> > default_max_fsm_pages is really only enough for perhaps a 100Mb\n> > database. I'd try bumping it to 100,000. Note you need a postmaster\n> > restart to make these changes take effect.\n>\n> Hmm, are there any guidelines for choosing these values?\n>\n> We have a database with a table into which we insert about 4,000,000\n> rows each day, and delete another 4,000,000 rows. The total row count\n> is around 40 million, I guess, and the rows are about 150 bytes long.\n> (VACUUM FULL is running at the moment, so I can't check.)\n\nI suggest you split your tables into exactly similar tables using inheritance. 
\nYour queries won't be affected as you can make them on parent table and get \nsame result.\n\nBut as far as vacuuming goes, you can probably dump a child table entirely and \nrecreate it as a fast alternative to vacuum.\n\nOnly catch is, I don't know if inherited tables would use their respective \nindxes other wise your queries might be slow as anything.\n\n> One of the columns is time-based and indexed, so we suffer from the\n> creeping index syndrome. A nightly index rebuild followed by a VACUUM\n> ANALYZE isn't a problem (it takes less than six ours), but this\n> doesn't seem to be enough (we seem to lose disk space nevertheless).\n\nI am sure a select * from table into another table; drop table; renamre temp \ntable kind of hack would be faster than vacuuming in this case..\n\nThis is just a suggestion. Good if this works for you..\n\n Shridhar\n", "msg_date": "Fri, 21 Mar 2003 09:18:54 +0530", "msg_from": "\"Shridhar Daithankar<[email protected]>\"\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "Florian Weimer <[email protected]> writes:\n> Hmm, are there any guidelines for choosing these values?\n\n> We have a database with a table into which we insert about 4,000,000\n> rows each day, and delete another 4,000,000 rows. The total row count\n> is around 40 million, I guess, and the rows are about 150 bytes long.\n\nIf you are replacing 10% of the rows in the table every day, then it's\na pretty good bet that every single page of the table contains free\nspace. Accordingly, you'd better set max_fsm_pages large enough to\nhave a FSM slot for every page of the table. (1 page = 8Kb normally)\n\nYou could possibly get away with a smaller FSM if you do (non-FULL)\nvacuums more often than once a day. Some people find they can run\nbackground vacuums without hurting performance too much, some don't\n--- I suspect it depends on how much spare disk bandwidth you have.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Mar 2003 01:18:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " } ]
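The trigger-maintained row counter suggested in this thread can be sketched as follows. This is only an illustrative sketch: it assumes the tblwordidx table described above, a database with the plpgsql language installed, and made-up names for the counter table, function and trigger.

  CREATE TABLE tblwordidx_count (n bigint NOT NULL);
  INSERT INTO tblwordidx_count VALUES (0);

  CREATE FUNCTION tblwordidx_count_trig() RETURNS trigger AS '
  BEGIN
      IF TG_OP = ''INSERT'' THEN
          UPDATE tblwordidx_count SET n = n + 1;
          RETURN NEW;
      ELSE  -- the trigger only fires on INSERT or DELETE
          UPDATE tblwordidx_count SET n = n - 1;
          RETURN OLD;
      END IF;
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER tblwordidx_count_trig
      AFTER INSERT OR DELETE ON tblwordidx
      FOR EACH ROW EXECUTE PROCEDURE tblwordidx_count_trig();

  -- O(1) replacement for SELECT count(*) FROM tblwordidx:
  SELECT n FROM tblwordidx_count;

The price is that every writer updates the single counter row, which serializes concurrent inserts and deletes and creates dead row versions of its own, so it suits tables that are bulk-loaded a few times a day better than heavily concurrent ones.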
[ { "msg_contents": "Folks,\n\nOn one database, I have an overnight data transformation procedure that goes \nlike:\n\nTableA has about 125,000 records.\n\nBegin Transaction:\n1) Update 80% of records in TableA\n2) Update 10% of records in TableA\n3) Update 65% of records in TableA\n4) Update 55% of records in TableA\n5) Update 15% or records in TableA with references to other records in TableA\n6) Flag what hasn't been updated.\nCommit\n\nI've found that, no matter what my FSM settings (I've gone as high as \n1,000,000) by the time I reach step 4 execution has slowed down considerably, \nand for step 5 it takes the server more than 10 minutes to complete the \nupdate statement. During this period, CPU, RAM and disk I/O are almost idle \n... the system seems to spend all of its time doing lengthy seeks. There is, \nfor that matter, no kernel swap activity, but I'm not sure how to measure \nPostgres temp file activity.\n\n(FYI: Dual Athalon 1600mhz/1gb/Hardware Raid 1 with xlog on seperate SCSI \ndrive/Red Hat Linux 8.0/PostgreSQL 7.2.4)\n\nThe only way around this I've found is to break up the above into seperate \ntransactions with VACUUMs in between, and \"simulate\" a transaction by making \na back-up copy of the table and restoring from it if something goes wrong. \nI've tried enough different methods to be reasonably certain that there is no \nway around this in 7.2.4.\n\nThe reason I bring this up is that PostgreSQL's dramatic plunge in performance \nin large serial updates is really problematic for us in the OLAP database \nmarket, where large data transformations, as well as extensive use of \ncalculated temporary tables, is common. I was particularly distressed when \nI had to tell a client considering switching from MSSQL to Postgres for an \nOLAP database that they might just be trading one set of problems for \nanother.\n\nIs there any way we can improve on this kind of operation in future versions \nof PostgreSQL?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 17 Mar 2003 09:38:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on large data transformations" }, { "msg_contents": "Josh Berkus wrote:\n>\n>There is, for that matter, no kernel swap activity, but I'm not \n>sure how to measure Postgres temp file activity.\n\nOf course you could:\n\n mv /wherever/data/base/16556/pgsql_tmp /some_other_disk/\n ln -s /some_other_disk/pgsql_tmp /wherever/data/base/16556\n\nand use \"iostat\" from the \"systat\" package to watch how much you're\nusing the disk the temp directory's on.\n\n\nIn fact, for OLAP stuff I've had this help performance because\nquite a few data warehousing operations look like:\n First read from main database, \n do a big hash-or-sort, (which gets written to pgsql_tmp),\n then read from this temporary table and write result to main database\n\nPS: Yes, I know this doesn't help the FSM stuff you asked about.\n\n\n Ron\n\n\n", "msg_date": "Mon, 17 Mar 2003 12:06:39 -0800", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on large data transformations" } ]
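A minimal sketch of the workaround described above (separate transactions with VACUUMs in between, plus a manual backup copy standing in for a real transaction). The table, columns and steps are invented placeholders; run these outside an explicit transaction block so each statement commits on its own and VACUUM is allowed to run.

  CREATE TABLE tablea (id int PRIMARY KEY, status text, amount numeric);

  -- Keep a copy to restore from by hand if a later step fails:
  CREATE TABLE tablea_backup AS SELECT * FROM tablea;

  UPDATE tablea SET status = 'step1' WHERE amount > 0;
  VACUUM ANALYZE tablea;

  UPDATE tablea SET status = 'step2' WHERE amount <= 0;
  VACUUM ANALYZE tablea;

  -- On failure: DELETE FROM tablea; INSERT INTO tablea SELECT * FROM tablea_backup;
  DROP TABLE tablea_backup;

The intermediate VACUUMs let the later steps reuse the space left behind by the earlier mass updates, which is exactly what a single big transaction prevents.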
[ { "msg_contents": "What is the structure of you table?\nIs the data types in the table the same as in the SQL....\n\nDid you create the index after the loading the table?\ncluster the table around the most used index....\n\nIs you web site on the same box you database is on?\n\ntelnet www.planetmath.org 5432\noh, $hit...\n\nnever mind........\n\nIf you have another box, please put the database on it. The web server maybe \nkilling the database but this depends on the amount of traffic.\nand block the port.........\n\n\nHow fast is you hard drive? 5400rpm :S,\n\nk=n^r/ck, SCJP\n\n_________________________________________________________________\nMSN 8 with e-mail virus protection service: 2 months FREE* \nhttp://join.msn.com/?page=features/virus\n\n", "msg_date": "Mon, 17 Mar 2003 16:46:30 -0600", "msg_from": "\"Kendrick C. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "Or at least restrict TCP/IP connections from localhost only, and use SSH\ntunnels if you must have direct external access (for pgAdmin, etc.) to the\nDB.\n Lucas.\n\n-----Original Message-----\nFrom: Kendrick C. Wilson [mailto:[email protected]]\nSent: Monday, March 17, 2003 2:47 PM\nTo: [email protected]\nSubject: Re: [PERFORM] postgresql meltdown on PlanetMath.org\n\n\nWhat is the structure of you table?\nIs the data types in the table the same as in the SQL....\n\nDid you create the index after the loading the table?\ncluster the table around the most used index....\n\nIs you web site on the same box you database is on?\n\ntelnet www.planetmath.org 5432\noh, $hit...\n\nnever mind........\n\nIf you have another box, please put the database on it. The web server maybe\nkilling the database but this depends on the amount of traffic.\nand block the port.........\n\n\nHow fast is you hard drive? 5400rpm :S,\n\nk=n^r/ck, SCJP\n\n_________________________________________________________________\nMSN 8 with e-mail virus protection service: 2 months FREE*\nhttp://join.msn.com/?page=features/virus\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/docs/faqs/FAQ.html\n\n", "msg_date": "Mon, 17 Mar 2003 15:15:04 -0800", "msg_from": "\"Lucas Adamski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " }, { "msg_contents": "\n\n> What is the structure of you table?\n> Is the data types in the table the same as in the SQL....\n>\n> Did you create the index after the loading the table?\n> cluster the table around the most used index....\n\nThere is no point clustering a table around the most used index, unless\naccess to the index is non-random. eg. you are picking up more than one\nconsecutive entry from the index at a time. eg. Indexes on foreign keys are\nexcellent for clustering.\n\nChris\n\n", "msg_date": "Tue, 18 Mar 2003 09:34:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org " } ]
[ { "msg_contents": "Clustering is good for queries that return multiple values.\n\nselect this, that\nfrom tableA\nwhere this = 'whatever';\n\nIf there are multiple values, the location of the first record is found in \nthe indexFile.\n\nThen dataFile is scanned until this != 'whatever';\n\nThis will decrease disk activity, which is the bottle neck in database \nperformance.\n\nk=n^r/ck, SCJP\n\n\n>From: \"Christopher Kings-Lynne\" <[email protected]>\n>Subject: Re: [PERFORM] postgresql meltdown on PlanetMath.org\n>\n> > What is the structure of you table?\n> > Is the data types in the table the same as in the SQL....\n> >\n> > Did you create the index after the loading the table?\n> > cluster the table around the most used index....\n>\n>There is no point clustering a table around the most used index, unless\n>access to the index is non-random. eg. you are picking up more than one\n>consecutive entry from the index at a time. eg. Indexes on foreign keys \n>are\n>excellent for clustering.\n>\n>Chris\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n\n_________________________________________________________________\nMSN 8 helps eliminate e-mail viruses. Get 2 months FREE*. \nhttp://join.msn.com/?page=features/virus\n\n", "msg_date": "Tue, 18 Mar 2003 09:19:44 -0600", "msg_from": "\"Kendrick C. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" }, { "msg_contents": "On Tue, 18 Mar 2003 09:19:44 -0600, \"Kendrick C. Wilson\"\n<[email protected]> wrote:\n>If there are multiple values, the location of the first record is found in \n>the indexFile.\n>\n>Then dataFile is scanned until this != 'whatever';\n\nNice, but unfortunately not true for Postgres. 
When you do the first\nUPDATE after CLUSTER the new version of the changed row(s) are written\nto the end of the dataFile (heap relation in Postgres speech). So the\n*index* has to be scanned until this != 'whatever'.\n\n>Clustering is good for queries that return multiple [rows with the same search] values.\n\nYes. With clustering you can expect that most of the tuples you want\nare near to each other and you find several of them in the same page.\n\nServus\n Manfred\n", "msg_date": "Wed, 19 Mar 2003 20:52:21 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql meltdown on PlanetMath.org" } ]
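A small hypothetical example of clustering on a foreign-key index, using the 7.3-era CLUSTER syntax; the caveat above still applies, since rows updated after the CLUSTER are written wherever there is free space and the physical ordering decays until the next CLUSTER.

  CREATE TABLE orders (order_id int PRIMARY KEY, customer_id int);
  CREATE TABLE order_lines (
      order_id int NOT NULL REFERENCES orders,
      item     text
  );
  CREATE INDEX order_lines_order_idx ON order_lines (order_id);

  -- Rewrite the table in index order, so the lines of one order share pages:
  CLUSTER order_lines_order_idx ON order_lines;
  ANALYZE order_lines;

  -- This kind of query now finds several wanted rows per page it reads:
  SELECT item FROM order_lines WHERE order_id = 42;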
[ { "msg_contents": "Hi,\n\nwe have a great Database with Postgres. It is a Community.\n\nWe have a Dual-CPU-System with 1 GB RAM\n\nIt works on Apache with PHP. But we hadn't enough Performance.\n\nWhat's the optimized configuration with many Database-actions on great \ntables in a lapp-system?\n\nGreetings\nTorsten\n\n", "msg_date": "Thu, 20 Mar 2003 19:53:27 +0100", "msg_from": "Torsten Schulz <[email protected]>", "msg_from_op": true, "msg_subject": "Make PGSQL faster" }, { "msg_contents": "On Fri, 2003-03-21 at 06:53, Torsten Schulz wrote:\n> Hi,\n> \n> we have a great Database with Postgres. It is a Community.\n> \n> We have a Dual-CPU-System with 1 GB RAM\n> \n> It works on Apache with PHP. But we hadn't enough Performance.\n> \n> What's the optimized configuration with many Database-actions on great \n> tables in a lapp-system?\n\nIt is hard to say without more information, but it may be that you\nshould increase the buffers used by postgres - 1000 is a good starting\npoint.\n\nMy experience suggests that performance is not a 'general' thing\napplying to the whole application but in most cases the bad performance\nwill be one query out of a hundred.\n\nIn my applications I wrap my calls to PostgreSQL so I can log the amount\nof time each query took (in microseconds). Then when I have a query\nthat takes 10mS, I know I can ignore it and concentrate on the one that\ntakes 20000mS instead.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n---------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Survey for nothing with http://survey.net.nz/ \n---------------------------------------------------------------------\n\n", "msg_date": "22 Mar 2003 08:06:11 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make PGSQL faster" } ]
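As a concrete form of the "measure each query" advice above, EXPLAIN ANALYZE run from psql shows where the time of one suspect statement actually goes; the table and query below are hypothetical stand-ins.

  CREATE TABLE members (id serial PRIMARY KEY, name text, last_login timestamp);
  CREATE INDEX members_name_idx ON members (name);

  EXPLAIN ANALYZE
  SELECT * FROM members WHERE name = 'alice';

The statement whose actual total time dominates a page load is the one worth tuning; a 10 ms query can usually be ignored, a 20000 ms one cannot.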
[ { "msg_contents": "Will a increase in the size of a data page increase performance of a \ndatabase with large records?\n\nI have records about 881 byte + 40 byte (header) = 921.\n\n8k page size / 921 bytes per record is ONLY 8 records...........\n\nComments are welcome.........\n\nk=n^r/ck, SCJP\n\n_________________________________________________________________\nMSN 8 with e-mail virus protection service: 2 months FREE* \nhttp://join.msn.com/?page=features/virus\n\n", "msg_date": "Thu, 20 Mar 2003 14:45:24 -0600", "msg_from": "\"Kendrick C. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Page Size in Future Releases" }, { "msg_contents": "On Friday 21 Mar 2003 2:15 am, Kendrick C. Wilson wrote:\n> Will a increase in the size of a data page increase performance of a\n> database with large records?\n>\n> I have records about 881 byte + 40 byte (header) = 921.\n>\n> 8k page size / 921 bytes per record is ONLY 8 records...........\n\nYou can tweak it yourself at compile time in some header file and that should \nwork but that is a point of diminising results as far as hackers are \nconcerned.\n\nOne reason I know where it would help is getting postgresql to use tons of \nshaerd memory. Right now postgresql can not use much beyond 250MB(??) because \nnumber of shared buffer are int or something. So if you know your reconrds \nare large, are often manipulated and your OS is not so good at file caching, \nthen increasing page size might help.\n\nGiven how good unices are in general in terms of file and memory handling, I \nwoudl say you should not do it unless your average record size is greater \nthan 8K, something like a large genome sequence or so.\n\nYMMV..\n\n Shridhar\n", "msg_date": "Fri, 21 Mar 2003 09:09:56 +0530", "msg_from": "\"Shridhar Daithankar<[email protected]>\"\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Page Size in Future Releases" }, { "msg_contents": "\n> > I have records about 881 byte + 40 byte (header) = 921.\n> >\n> > 8k page size / 921 bytes per record is ONLY 8 records...........\n>\n> You can tweak it yourself at compile time in some header file and that\nshould\n> work but that is a point of diminising results as far as hackers are\n> concerned.\n\nAs far as I'm aware the 8k page size has nothing to do with speed and\neverything to do with atomic writes. You can't be guaranteed that the O/S\nand hard drive controller will write anything more than 8K in an atomic\nblock...\n\nChris\n\n", "msg_date": "Fri, 21 Mar 2003 11:51:51 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Page Size in Future Releases" }, { "msg_contents": "Shridar,\n\n> One reason I know where it would help is getting postgresql to use tons of\n> shaerd memory. Right now postgresql can not use much beyond 250MB(??)\n> because number of shared buffer are int or something. So if you know your\n> reconrds are large, are often manipulated and your OS is not so good at\n> file caching, then increasing page size might help.\n\nUm, two fallacies:\n1) You can allocate as much shared buffer ram as you want. The maxium I've \ntested is 300mb, personally, but I know of no hard limit. \n\n2) However, allocating more shared buffer ram ... in fact anything beyond \nabout 40mb ... 
has never been shown by anyone on this list to be helpful for \nany size database, and sometimes the contrary.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 20 Mar 2003 20:03:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Page Size in Future Releases" }, { "msg_contents": "\"Kendrick C. Wilson\" <[email protected]> writes:\n> Will a increase in the size of a data page increase performance of a \n> database with large records?\n\nProbably not; in fact the increased WAL overhead could make it a net\nloss. But feel free to try changing BLCKSZ to see how it works for you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Mar 2003 19:15:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Page Size in Future Releases " }, { "msg_contents": "Am Samstag, 22. März 2003 01:15 schrieb Tom Lane:\n> \"Kendrick C. Wilson\" <[email protected]> writes:\n> > Will a increase in the size of a data page increase performance of a\n> > database with large records?\n>\n> Probably not; in fact the increased WAL overhead could make it a net\n> loss. But feel free to try changing BLCKSZ to see how it works for you.\n\nI've several database with 32KB and 8KB, and though the results are not really comparable due to slight different hardware, I've the feeling that 8KB buffers work best in most cases. The only difference I noticed are large objects which seem to work slightly better with larger sizes.\n\nRegards,\n\tMario Weilguni\n\n", "msg_date": "Sun, 23 Mar 2003 09:46:41 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Page Size in Future Releases" } ]
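For reference, the rows-per-page arithmetic behind the original question, ignoring alignment padding and assuming roughly 24 bytes of page header plus a 4-byte line pointer per row:

  8 KB page:  (8192 - 24) / (921 + 4)  = ~8.8   ->  8 rows per page
  32 KB page: (32768 - 24) / (921 + 4) = ~35.4  -> 35 rows per page

So a larger BLCKSZ does pack proportionally more of these rows into each page, but as the replies note, that by itself has not been seen to help much in practice, and the extra WAL overhead can cancel it out.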
[ { "msg_contents": "I am setting up a project using Apache, PHP and Postgresql.\nThis application will be used by about 30 users.\n\nThe database looks roughly like this:\n\nbetween 12GB and 15GB\n4 tables will have 1M rows and 1000 columns, with 90% INT2 and the rest float (20% of all the data will be 0)\nthe other tables have less than 10,000 rows\n\nMost of the queries will be SELECTs that are not very complicated (I think, at this time)\n\nI have 1 question regarding the hardware configuration:\n\nDELL \ndual-processor 2.8GHz\n4GB RAM\n76GB HD using RAID 5\nLinux version to be defined (Red Hat?)\n\nDo you think this configuration is enough to have good performance after setting up the database properly?\n\nDo you think the big tables should be split in order to have fewer columns? This could mean that I would have some queries with JOIN.\n\nThank you for your help !\n", "msg_date": "Thu, 20 Mar 2003 22:26:40 +0100", "msg_from": "\"Guillaume Houssay\" <[email protected]>", "msg_from_op": true, "msg_subject": "just to get some opinion on my configuration" } ]
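On the question of splitting the wide tables: one common approach is a vertical split, keeping the handful of frequently-queried columns in a narrow table and moving the rest into a companion table joined on the key. The table and column names below are invented for illustration only.

  CREATE TABLE results_core (
      id       serial PRIMARY KEY,
      batch_id int    NOT NULL,
      value_a  int2,
      value_b  float4
  );

  CREATE TABLE results_extra (
      id int PRIMARY KEY REFERENCES results_core (id)
      -- ...the hundreds of rarely-used int2/float columns would go here
  );

  -- Queries on the common columns never touch the wide table; the
  -- occasional full row costs one join on the primary key:
  SELECT c.value_a, c.value_b, e.*
  FROM results_core c JOIN results_extra e ON e.id = c.id
  WHERE c.batch_id = 17;

Whether the extra JOIN is worth it depends on how often the rarely-used columns are actually requested.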
[ { "msg_contents": "I have a table with 8,628,633 rows that I'd LIKE to search (ha ha).\n\nI have a very simple query:\n SELECT * FROM tableA WHERE column1 LIKE '%something%';\n\ntableA.column1 has an index on it and the database has been vacuumed recently. My problem is with the output of EXPLAIN:\n\n+----------------------------------------------------------------+\n| QUERY PLAN |\n+----------------------------------------------------------------+\n| Seq Scan on tableA (cost=0.00..212651.61 rows=13802 width=46) |\n| Filter: (column1 ~~ '%something%'::text) |\n+----------------------------------------------------------------+\n\nI don't like that cost (2,12,651) at all! Is there anyway I can optimize this query? Make a different kind of index (it's currently btree)? Use substr or indexof or something instead of LIKE?\n\nThoughts?\n\n--------------------------\nDavid Olbersen \niGuard Engineer\n11415 West Bernardo Court \nSan Diego, CA 92127 \n1-858-676-2277 x2152\n", "msg_date": "Thu, 20 Mar 2003 13:41:25 -0800", "msg_from": "\"David Olbersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with LIKE" }, { "msg_contents": "David,\n\n> I have a table with 8,628,633 rows that I'd LIKE to search (ha ha).\n> \n> I have a very simple query:\n> SELECT * FROM tableA WHERE column1 LIKE '%something%';\n\nThat's what's called an \"unanchored text search\". That kind of query cannot \nbe indexed using a regular index.\n\nWhat you need is called \"Full Text Indexing\" or \"Full Text Search\". Check \nout two resources:\n\n1) contrib/tsearch in your PostgreSQL source code;\n2) OpenFTS (www.openfts.org).\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 20 Mar 2003 13:55:32 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with LIKE" } ]
[ { "msg_contents": "Josh,\n\n> That's what's called an \"unanchored text search\". That kind \n> of query cannot be indexed using a regular index.\n\nDuh, should have tried the anchors to get what I wanted...\n\n> What you need is called \"Full Text Indexing\" or \"Full Text \n> Search\". Check \n> out two resources:\n\nThis isn't actually what I was looking for, the anchor works better (only 5.87 now!)\n\nThanks for the reminder!\n\n--------------------------\nDavid Olbersen \niGuard Engineer\n11415 West Bernardo Court \nSan Diego, CA 92127 \n1-858-676-2277 x2152\n", "msg_date": "Thu, 20 Mar 2003 15:19:13 -0800", "msg_from": "\"David Olbersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with LIKE" } ]
[ { "msg_contents": "My mistake, things don't get much better.\n\nI'm selecting URLs out of a database like this:\n\n SELECT * FROM table WHERE url ~ '^http://.*something.*$';\n\nThis still uses a sequential scan but cuts the time down to 76,351 from 212,651 using\n\n WHERE url LIKE '%something%';\n\nThe full text indexing doesn't look quite right as there are no spaces in this data.\n\nAlso, using something like:\n \n WHERE position( 'something', url ) > 0\n\nis a bit worse, giving 84,259.\n\n--------------------------\nDavid Olbersen \niGuard Engineer\n11415 West Bernardo Court \nSan Diego, CA 92127 \n1-858-676-2277 x2152\n\n\n> -----Original Message-----\n> From: David Olbersen \n> Sent: Thursday, March 20, 2003 3:19 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Help with LIKE\n> \n> \n> Josh,\n> \n> > That's what's called an \"unanchored text search\". That kind \n> > of query cannot be indexed using a regular index.\n> \n> Duh, should have tried the anchors to get what I wanted...\n> \n> > What you need is called \"Full Text Indexing\" or \"Full Text \n> > Search\". Check \n> > out two resources:\n> \n> This isn't actually what I was looking for, the anchor works \n> better (only 5.87 now!)\n> \n> Thanks for the reminder!\n> \n> --------------------------\n> David Olbersen \n> iGuard Engineer\n> 11415 West Bernardo Court \n> San Diego, CA 92127 \n> 1-858-676-2277 x2152\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n> \n", "msg_date": "Thu, 20 Mar 2003 15:35:39 -0800", "msg_from": "\"David Olbersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with LIKE" }, { "msg_contents": "David,\n\n> My mistake, things don't get much better.\n> \n> I'm selecting URLs out of a database like this:\n> \n> SELECT * FROM table WHERE url ~ '^http://.*something.*$';\n\nThat search still requires a seq scan, since it has \"gaps\" in the seqence of \ncharacters. That is,\n\nurl ~ '^http://www.something.*' could use an index, but your search above \ncannot.\n\nYou may be right that the standard OpenFTS indexing won't help you in this \ncase, since you're really searching for fragments of a continuous text \nstring.\n\nOne thing I might suggest is that you look for ways that you might be able to \nbreak out the text you're searching for with a function. For example, if you \nwere searching strictly on the domain SLD name, then you could create an \n\"immutable\" function called if_split_sld(TEXT) and index on that.\n\nIf you are really searching for \"floating\" text within the string, I believe \nthat there are some options in tseach to help you, but they may not end up \nimproving performance much.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 20 Mar 2003 17:27:21 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with LIKE" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> SELECT * FROM table WHERE url ~ '^http://.*something.*$';\n\n> That search still requires a seq scan, since it has \"gaps\" in the seqence of \n> characters. That is,\n\n> url ~ '^http://www.something.*' could use an index, but your search above \n> cannot.\n\nActually, it *can* use an index ... but the index condition will only\nuse the characters before the \".*\", ie, \"http://\". 
Which is just about\nuseless if you're searching a column of URLs :-(\n\nI agree that tsearch or OpenFTS are the tools to be looking at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Mar 2003 21:06:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with LIKE " } ]
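To put the indexing point in runnable form (placeholder table and data; in this era a plain btree index can only help LIKE or regex patterns that have a usable fixed prefix, and generally only when the database was initialized with the C locale):

  CREATE TABLE urls (url text);
  CREATE INDEX urls_url_idx ON urls (url);

  -- Indexable: the fixed prefix 'http://www.example.com/' bounds the scan.
  SELECT * FROM urls WHERE url LIKE 'http://www.example.com/%';

  -- Barely indexable: the only fixed prefix is 'http://', which matches
  -- essentially every row, so a sequential scan is just as good.
  SELECT * FROM urls WHERE url ~ '^http://.*something.*$';

  -- Not indexable at all: no anchored prefix.
  SELECT * FROM urls WHERE url LIKE '%something%';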
[ { "msg_contents": "Hi everybody,\n\nI'm having a performance problem, PostgreSQL (7.3.2) is skipping some\noptimisation options that it shouldn't IMO. It can be fully reproduced as\nfollows:\n\ncreate table foo(\nbar char(100),\nbaz integer\n);\n\nNow create a file with 1.2 million empty lines and do a \\copy foo (bar)\nfrom 'thatfile'. This should fill the table with 1.2 million rows. Now do:\n\ninsert into foo (baz) values (28);\ncreate index foo_idx on foo(baz);\nvacuum full analyze foo;\n\nNow, we would expect that PostgreSQL is fully aware that there are not\nmany rows in foo that have \"baz is not null\". However:\n\nbsamwel=> explain update foo set baz=null where baz is not null;\n QUERY PLAN\n---------------------------------------------------------------\n Seq Scan on foo (cost=0.00..34470.09 rows=1286146 width=110)\n Filter: (baz IS NOT NULL)\n(2 rows)\n\n\nSo, it thinks it must do a sequential scan on foo, even though it should\nknow by now that foo.baz is really mostly null. Even if I disable\nsequential scan it still chooses this option! Why doesn't it use the\nindex? It doesn't use the index either when I try to select all rows that\nare not null.\n\nJust for completeness' sake I'll give you the explain analyze:\n\nbsamwel=> explain analyze update foo set baz=null where baz is not null;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..34470.09 rows=1286146 width=110) (actual\ntime=19678.82..19678.84 rows=1 loops=1)\n Filter: (baz IS NOT NULL)\n Total runtime: 19750.21 msec\n(3 rows)\n\nDo you guys have any idea?\n\nRegards,\nBart\n\n", "msg_date": "Sun, 23 Mar 2003 18:38:58 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Slow update of indexed column with many nulls" }, { "msg_contents": "Bart,\n\n> insert into foo (baz) values (28);\n> create index foo_idx on foo(baz);\n> vacuum full analyze foo;\n>\n> Now, we would expect that PostgreSQL is fully aware that there are not\n> many rows in foo that have \"baz is not null\". However:\n\nThis is a known issue discussed several times on this list. Try re-creating \nyour index as:\n\ncreate index foo_idx on foo(baz) where foo is not null;\n\nSee the list archives for the reasons why. This may improve in future \nreleases of PostgreSQL.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Sun, 23 Mar 2003 13:55:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update of indexed column with many nulls" } ]
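A concrete version of the partial-index suggestion above, reusing the foo table from the example; note the predicate belongs on the column baz, not on the table as a whole. This is a sketch only, and whether the planner actually picks the index still depends on the release and the statistics.

  CREATE INDEX foo_baz_notnull_idx ON foo (baz) WHERE baz IS NOT NULL;
  VACUUM ANALYZE foo;

  -- The query has to repeat the predicate so the partial index is applicable:
  EXPLAIN UPDATE foo SET baz = NULL WHERE baz IS NOT NULL;

Since only a handful of rows satisfy the predicate, the index stays tiny no matter how large foo grows.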
[ { "msg_contents": "Hi guys,\n\nI'm having another performance problem as well. I have two tables called\n\"wwwlog\" (about 100 bytes per row, 1.2 million records) and table called\n\"triples\" (about 20 bytes per row, 0.9 million records). Triples contains\nan integer foreign key to wwwlog, but it was not marked as a foreign key\nat the point of table creation. Now, when I do:\n\nalter table triples add foreign key(id1) references wwwlog(id);\n\nPostgreSQL starts doing heavy work for at least one and a half hour, and I\nbroke it off at that. It is not possible to \"explain\" a statement like\nthis! Probably what it does is that it will check the foreign key\nconstraint for every field in the table. This will make it completely\nimpossible to load my data, because:\n\n(1) I cannot set the foreign key constraints BEFORE loading the 0.9\nmillion records, because that would cause the checks to take place during\nloading.\n(2) I cannot set the foreign key constraints AFTER loading the 0.9 million\nrecords because I've got no clue at all how long this operation is going\nto take.\n(3) Table \"triples\" contains two more foreign keys to the same wwwlog key.\nThis means I've got to do the same thing two more times after the first\none is finished.\n\nI find this behaviour very annoying, because it is possible to optimize a\ncheck like this very well, for instance by creating a temporary data set\ncontaining the union of all foreign keys and all primary keys of the\noriginal table, augmented with an extra field \"pri\" which is 1 if the\nrecord comes from the primary keys and 0 otherwise. Say this data is\ncontained in a temporary table called \"t\" with columns \"key\" and \"pri\" for\nthe data. One would then be able to do the check like this:\n\nNOT EXISTS(\n SELECT key,sum(pri)\n FROM t\n GROUP BY key\n HAVING sum(pri) = 0\n);\n\nThis means that there must not exist a group of \"key\" values that does not\nhave a primary key somewhere in the set. This query is extremely easy to\nexecute and would be done in a few seconds.\n\nDoes anyone know of a way of adding foreign key constraints faster in\nPostgreSQL? Or, if there is no solution, do you guys know of any reasons\nwhy a solution like the one I described above would or would not work, and\ncould or could not be built into PostgreSQL at some point?\n\nRegards,\nBart\n\n", "msg_date": "Sun, 23 Mar 2003 18:58:24 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Adding a foreign key constraint is extremely slow" }, { "msg_contents": "\nOn Sun, 23 Mar 2003 [email protected] wrote:\n\n> Hi guys,\n>\n> I'm having another performance problem as well. I have two tables called\n> \"wwwlog\" (about 100 bytes per row, 1.2 million records) and table called\n> \"triples\" (about 20 bytes per row, 0.9 million records). Triples contains\n> an integer foreign key to wwwlog, but it was not marked as a foreign key\n> at the point of table creation. Now, when I do:\n>\n> alter table triples add foreign key(id1) references wwwlog(id);\n>\n> PostgreSQL starts doing heavy work for at least one and a half hour, and I\n> broke it off at that. It is not possible to \"explain\" a statement like\n> this! Probably what it does is that it will check the foreign key\n> constraint for every field in the table. This will make it completely\n\nIn fact it does exactly this. It could be done using\nselect * from fk where not exists (select * from pk where ...)\nor another optimized method, but noone's gotten to changing it. 
I didn't\ndo it in the start becase I didn't want to duplicate the check logic if it\ncould be helped.\n\nAs a temporary workaround until something is done(assuming you know the\ndata is valid), set the constraints before loading then turn off triggers\non the tables (see pg_dump's data only output for an example), load the\ndata and turn them back on.\n\n", "msg_date": "Sun, 23 Mar 2003 11:30:04 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding a foreign key constraint is extremely slow" }, { "msg_contents": "[email protected] writes:\n\n> alter table triples add foreign key(id1) references wwwlog(id);\n> \n> PostgreSQL starts doing heavy work for at least one and a half hour, and I\n> broke it off at that. It is not possible to \"explain\" a statement like\n> this! Probably what it does is that it will check the foreign key\n> constraint for every field in the table. This will make it completely\n> impossible to load my data, because:\n> \n> (2) I cannot set the foreign key constraints AFTER loading the 0.9 million\n> records because I've got no clue at all how long this operation is going\n> to take.\n\nTry adding an index on wwwlog(id) so that it can check the constraint without\ndoing a full table scan for each value being checked.\n\n-- \ngreg\n\n", "msg_date": "26 Mar 2003 09:17:47 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding a foreign key constraint is extremely slow" }, { "msg_contents": "Greg Stark wrote:\n> [email protected] writes:\n> \n> \n>>alter table triples add foreign key(id1) references wwwlog(id);\n>>\n>>PostgreSQL starts doing heavy work for at least one and a half hour, and I\n>>broke it off at that. It is not possible to \"explain\" a statement like\n>>this! Probably what it does is that it will check the foreign key\n>>constraint for every field in the table. This will make it completely\n>>impossible to load my data, because:\n>>\n>>(2) I cannot set the foreign key constraints AFTER loading the 0.9 million\n>>records because I've got no clue at all how long this operation is going\n>>to take.\n> \n> \n> Try adding an index on wwwlog(id) so that it can check the constraint without\n> doing a full table scan for each value being checked.\n\nAFAIK, because wwwlog(id) is the primary key, this index already exists \nimplicitly. Still, 0.9 million separate index lookups are too slow for \nmy purposes, if for example it takes something as low as 1 ms per lookup \nit will still take 900 seconds (= 15 minutes) to complete. As the \ncomplete adding of the foreign key constraint took about an hour, that \nwould suggest an average of 4 ms per lookup, which suggests that the \nindex is, in fact, present. :)\n\nAnyway, I've actually waited for the operation to complete. The problem \nis out of my way for now.\n\nBart\n\n\n-- \n\nLeiden Institute of Advanced Computer Science (http://www.liacs.nl)\nE-mail: [email protected] Telephone: +31-71-5277037\nHomepage: http://www.liacs.nl/~bsamwel\nOpinions stated in this e-mail are mine and not necessarily my employer's.\n\n", "msg_date": "Wed, 26 Mar 2003 18:08:51 +0100", "msg_from": "Bart Samwel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding a foreign key constraint is extremely slow" } ]
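When the data needs to be checked by hand before (or instead of) letting ALTER TABLE verify it row by row, an outer-join query over the same tables gives the orphan count in one pass and lets the planner use a hash or merge join instead of 0.9 million individual probes. This sketch assumes the triples/wwwlog layout described in the thread.

  -- Rows in triples whose id1 has no match in wwwlog:
  SELECT count(*) AS orphans
  FROM triples t LEFT JOIN wwwlog w ON w.id = t.id1
  WHERE w.id IS NULL;

  -- If orphans is 0, the constraint is known to be satisfiable:
  ALTER TABLE triples ADD FOREIGN KEY (id1) REFERENCES wwwlog (id);

The ALTER itself still performs its own per-row check in this release, so for bulk loads the trigger-disabling workaround mentioned above remains the fast path.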
[ { "msg_contents": "Please help me speed up the following query. It used to run in 2-5 sec.,\nbut now it takes 2-3 mins!\nI ran VACUUM FULL ANALYZE and REINDEX.\nSELECT * FROM media m\nWHERE m.mediatype = (SELECT objectid FROM mediatype WHERE\nmedianame='Audio') \nAND EXISTS \n (SELECT * FROM \n (SELECT objectid AS mediaid \n FROM media \n WHERE activity='347667' \n UNION \n SELECT ism.media AS mediaid \n FROM intsetmedia ism, set s \n WHERE ism.set = s.objectid \n AND s.activity='347667' ) AS a1 \n WHERE a1.mediaid = m.objectid \n LIMIT 1) \nORDER BY medianame ASC, status DESC \n \nBasically it tries to find all Audios that are either explicitly\nattached to the given activity, or attached to the given activity via a\nmany-to-many relationship intsetmedia which links records in table\nInteraction, Set, and Media.\nI attached the output of EXPLAIN and schemas and indexes on the tables\ninvolved. Most of the fields are not relevant to the query, but I listed\nthem anyways. I discarded trigger information, though.\nThanks for your help.\n \nOleg\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************", "msg_date": "Mon, 24 Mar 2003 10:48:52 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query" }, { "msg_contents": "Oleg,\n\n> Please help me speed up the following query. It used to run in 2-5 sec.,\n> but now it takes 2-3 mins!\n> I ran VACUUM FULL ANALYZE and REINDEX.\n> SELECT * FROM media m\n> WHERE m.mediatype = (SELECT objectid FROM mediatype WHERE\n\nThis is a repost, isn't it?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Mon, 24 Mar 2003 10:54:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" }, { "msg_contents": "\nOn Mon, 24 Mar 2003, Oleg Lebedev wrote:\n\n> Please help me speed up the following query. It used to run in 2-5 sec.,\n> but now it takes 2-3 mins!\n\nEXPLAIN ANALYZE output would be useful to see where the time is actually\ntaking place (rather than an estimate thereof).\n\n", "msg_date": "Mon, 24 Mar 2003 11:03:42 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" }, { "msg_contents": "Stephan,\n\nHmmm ... I'm a bit confused by the new EXPLAIN output. Stefan, does Oleg's \noutput show the time for *one* subplan execution, executed for 44,000 loops, \nor does it show the total time? 
The former would make more sense given his \nquery, but I'm just not sure ....\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 24 Mar 2003 11:47:09 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" }, { "msg_contents": "Oleg Lebedev <[email protected]> writes:\n> SELECT * FROM media m\n> WHERE m.mediatype =3D (SELECT objectid FROM mediatype WHERE\n> medianame=3D'Audio')=20\n> AND EXISTS=20\n> (SELECT * FROM=20\n> (SELECT objectid AS mediaid=20\n> FROM media=20\n> WHERE activity=3D'347667'=20\n> UNION=20\n> SELECT ism.media AS mediaid=20\n> FROM intsetmedia ism, set s=20\n> WHERE ism.set =3D s.objectid=20\n> AND s.activity=3D'347667' ) AS a1=20\n> WHERE a1.mediaid =3D m.objectid=20\n> LIMIT 1)=20\n> ORDER BY medianame ASC, status DESC=20\n\nWell, one observation is that the LIMIT clause is useless and probably\ncounterproductive; EXISTS takes only one row from the subselect anyway.\nAnother is that the UNION is doing it the hard way; UNION implies doing\na duplicate-elimination step, which you don't need here. UNION ALL\nwould be a little quicker. But what I would do is split it into two\nEXISTS:\n\nSELECT * FROM media m\nWHERE m.mediatype = (SELECT objectid FROM mediatype WHERE\nmedianame='Audio') \nAND ( EXISTS(SELECT 1\n FROM media \n WHERE activity='347667' \n AND objectid = m.objectid)\n OR EXISTS(SELECT 1\n FROM intsetmedia ism, set s \n WHERE ism.set = s.objectid \n AND s.activity='347667'\n AND ism.media = m.objectid))\nORDER BY medianame ASC, status DESC \n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 24 Mar 2003 15:48:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query " } ]
[ { "msg_contents": "No, I don't believe so.\nMy previous question regarding performance was solved by VACUUM FULL and\nREINDEX.\nThe current one, I believe, is more related to query structure and\nplanner stats.\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Monday, March 24, 2003 11:55 AM\nTo: Oleg Lebedev; [email protected]\nSubject: Re: [PERFORM] Slow query\n\n\nOleg,\n\n> Please help me speed up the following query. It used to run in 2-5 \n> sec., but now it takes 2-3 mins! I ran VACUUM FULL ANALYZE and \n> REINDEX. SELECT * FROM media m\n> WHERE m.mediatype = (SELECT objectid FROM mediatype WHERE\n\nThis is a repost, isn't it?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************\n\n", "msg_date": "Mon, 24 Mar 2003 12:02:51 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" } ]
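For anyone retracing this, the maintenance pass Oleg refers to is just the stock commands run against the tables involved; the table name below is only the largest one from the query:

    VACUUM FULL ANALYZE media;
    REINDEX TABLE media;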
[ { "msg_contents": "EXPLAIN ANALYZE plan is shown below.\nI also attached it as a file.\n\nOne thing that might help is that the query produces 27 rows, which is\nmuch less than predicted 1963.\n\nQUERY PLAN \n Sort (cost=553657.66..553662.57 rows=1963 width=218) (actual\ntime=133036.73..133036.75 rows=27 loops=1) \n Sort Key: medianame, status \n InitPlan \n -> Seq Scan on mediatype (cost=0.00..1.29 rows=1 width=8) (actual\ntime=0.12..0.14 rows=1 loops=1) \n Filter: (medianame = 'Audio'::character varying) \n -> Index Scan using media_mtype_index on media m (cost=0.00..553550.28\nrows=1963 width=218) (actual time=5153.36..133036.00 rows=27 loops=1) \n Index Cond: (mediatype = $0) \n Filter: (subplan) \n SubPlan \n -> Limit (cost=138.92..138.93 rows=1 width=24) (actual time=2.92..2.92\nrows=0 loops=44876) \n -> Subquery Scan a1 (cost=138.92..138.93 rows=1 width=24) (actual\ntime=2.92..2.92 rows=0 loops=44876) \n -> Unique (cost=138.92..138.93 rows=1 width=24) (actual\ntime=2.91..2.91 rows=0 loops=44876) \n -> Sort (cost=138.92..138.93 rows=2 width=24) (actual time=2.91..2.91\nrows=0 loops=44876) \n Sort Key: mediaid \n -> Append (cost=0.00..138.91 rows=2 width=24) (actual time=2.80..2.81\nrows=0 loops=44876) \n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..5.11 rows=1 width=8) (actual\ntime=0.06..0.06 rows=0 loops=44876) \n -> Index Scan using media_pkey on media (cost=0.00..5.11 rows=1\nwidth=8) (actual time=0.05..0.05 rows=0 loops=44876) \n Index Cond: (objectid = $1) \n Filter: (activity = 347667::bigint) \n -> Subquery Scan \"*SELECT* 2\" (cost=24.25..133.80 rows=1 width=24)\n(actual time=2.73..2.73 rows=0 loops=44876) \n -> Hash Join (cost=24.25..133.80 rows=1 width=24) (actual\ntime=2.72..2.72 rows=0 loops=44876) \n Hash Cond: (\"outer\".\"set\" = \"inner\".objectid) \n -> Index Scan using intsetmedia_media_index on intsetmedia ism\n(cost=0.00..109.26 rows=38 width=16) (actual time=0.04..0.04 rows=1\nloops=44876) \n Index Cond: (media = $1) \n -> Hash (cost=24.24..24.24 rows=6 width=8) (actual time=0.14..0.14\nrows=0 loops=44876) \n -> Index Scan using set_act_index on \"set\" s (cost=0.00..24.24 rows=6\nwidth=8) (actual time=0.11..0.13 rows=2 loops=44876) \n Index Cond: (activity = 347667::bigint) \n Total runtime: 133037.49 msec \n\n\n-----Original Message-----\nFrom: Stephan Szabo [mailto:[email protected]] \nSent: Monday, March 24, 2003 12:04 PM\nTo: Oleg Lebedev\nCc: [email protected]\nSubject: Re: [PERFORM] Slow query\n\n\n\nOn Mon, 24 Mar 2003, Oleg Lebedev wrote:\n\n> Please help me speed up the following query. It used to run in 2-5 \n> sec., but now it takes 2-3 mins!\n\nEXPLAIN ANALYZE output would be useful to see where the time is actually\ntaking place (rather than an estimate thereof).\n\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************", "msg_date": "Mon, 24 Mar 2003 12:20:13 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" } ]
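One detail worth spelling out, since it answers the per-loop question: EXPLAIN ANALYZE reports the actual time of a repeatedly executed node per iteration, so the subplan above really accounts for roughly

    2.92 ms per loop * 44876 loops = ~131,000 ms

which is essentially the whole 133,037 ms runtime. In other words, re-running the EXISTS subplan for each of the ~45,000 candidate media rows is where the time goes.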
[ { "msg_contents": "I decided that it might help to list the cardinalities of the pertinent\ntables:\nIntsetmedia: 90,000 rows\nInteraction: 26,000 rows\nSet: 7,000 rows\nMedia: 80,000 rows\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Monday, March 24, 2003 12:47 PM\nTo: Stephan Szabo; [email protected]\nSubject: Re: [PERFORM] Slow query\nImportance: Low\n\n\nStephan,\n\nHmmm ... I'm a bit confused by the new EXPLAIN output. Stefan, does\nOleg's \noutput show the time for *one* subplan execution, executed for 44,000\nloops, \nor does it show the total time? The former would make more sense given\nhis \nquery, but I'm just not sure ....\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************\n\n", "msg_date": "Mon, 24 Mar 2003 13:28:47 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" } ]
[ { "msg_contents": "I just ran the query you sent me and attached the output of EXPLAIN\nANALYZE as TOMs_plan.txt\nIt did not speed up the query significantly.\n\nIt always seemed to me that UNION is faster than OR, so I tried your\nsuggestion to use UNION ALL with the original query without\ncounter-productive LIMIT 1 in EXISTS clause. This reduced the cost of\nthe plan by 50%, but slowed down the query. Weird ... The plan is shown\nin UNION_ALL_plan.txt\n\nAFAIK, the only change I've done since the time when the query took 3\nsec. to run was adding more indexes and increasing the size of data by\nabout 25%. It sounds kind of stupid, but I remember that adding indexes\nsometimes slowed down my queries. I will try to drop all the indexes and\nadd them back again one by one.\n\nAny other ideas?\n\nThanks.\nOleg\n\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, March 24, 2003 1:48 PM\nTo: Oleg Lebedev\nCc: [email protected]\nSubject: Re: [PERFORM] Slow query\n\n\nOleg Lebedev <[email protected]> writes:\n> SELECT * FROM media m\n> WHERE m.mediatype =3D (SELECT objectid FROM mediatype WHERE \n> medianame=3D'Audio')=20 AND EXISTS=20\n> (SELECT * FROM=20\n> (SELECT objectid AS mediaid=20\n> FROM media=20\n> WHERE activity=3D'347667'=20\n> UNION=20\n> SELECT ism.media AS mediaid=20\n> FROM intsetmedia ism, set s=20\n> WHERE ism.set =3D s.objectid=20\n> AND s.activity=3D'347667' ) AS a1=20\n> WHERE a1.mediaid =3D m.objectid=20\n> LIMIT 1)=20\n> ORDER BY medianame ASC, status DESC=20\n\nWell, one observation is that the LIMIT clause is useless and probably\ncounterproductive; EXISTS takes only one row from the subselect anyway.\nAnother is that the UNION is doing it the hard way; UNION implies doing\na duplicate-elimination step, which you don't need here. UNION ALL\nwould be a little quicker. But what I would do is split it into two\nEXISTS:\n\nSELECT * FROM media m\nWHERE m.mediatype = (SELECT objectid FROM mediatype WHERE\nmedianame='Audio') \nAND ( EXISTS(SELECT 1\n FROM media \n WHERE activity='347667' \n AND objectid = m.objectid)\n OR EXISTS(SELECT 1\n FROM intsetmedia ism, set s \n WHERE ism.set = s.objectid \n AND s.activity='347667'\n AND ism.media = m.objectid))\nORDER BY medianame ASC, status DESC \n\n\t\t\tregards, tom lane\n\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************", "msg_date": "Mon, 24 Mar 2003 14:46:09 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" }, { "msg_contents": "Oleg Lebedev <[email protected]> writes:\n> I just ran the query you sent me and attached the output of EXPLAIN\n> ANALYZE as TOMs_plan.txt\n> It did not speed up the query significantly.\n\nNope. I was hoping to see a faster-start plan, but given the number of\nrows involved I guess it won't change its mind. You're going to have to\nthink about a more intelligent approach, rather than minor tweaks.\n\nOne question: since objectid is evidently a primary key, why are you\ndoing a subselect for the first part? 
Wouldn't it give the same result\njust to say \"m.activity = '347667'\" in the top-level WHERE?\n\nAs for the second part, I think you'll have to try to rewrite it as a\njoin with the media table.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 24 Mar 2003 17:08:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query " } ]
[ { "msg_contents": "You are right. I rewrote the query using JOINs and it increased\nperformance from 123 sec. to 20msec. I betcha I screwed smth up, but I\nlist the rewritten query below anyways. I also attached the new plan.\nThank you.\n\nSELECT * FROM media m \nJOIN \n((SELECT objectid AS mediaid \nFROM media \nWHERE activity='347667') \nUNION \n(SELECT ism.media AS mediaid \nFROM intsetmedia ism, set s \nWHERE ism.set = s.objectid \nAND s.activity='347667' )) a1 \nON \nm.mediatype = (SELECT objectid FROM mediatype WHERE medianame='Audio') \nAND m.objectid=mediaid \nORDER BY medianame ASC, status DESC \n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, March 24, 2003 3:09 PM\nTo: Oleg Lebedev\nCc: [email protected]\nSubject: Re: [PERFORM] Slow query\n\n\nOleg Lebedev <[email protected]> writes:\n> I just ran the query you sent me and attached the output of EXPLAIN \n> ANALYZE as TOMs_plan.txt It did not speed up the query significantly.\n\nNope. I was hoping to see a faster-start plan, but given the number of\nrows involved I guess it won't change its mind. You're going to have to\nthink about a more intelligent approach, rather than minor tweaks.\n\nOne question: since objectid is evidently a primary key, why are you\ndoing a subselect for the first part? Wouldn't it give the same result\njust to say \"m.activity = '347667'\" in the top-level WHERE?\n\nAs for the second part, I think you'll have to try to rewrite it as a\njoin with the media table.\n\n\t\t\tregards, tom lane\n\n\n\n*************************************\n\nThis email may contain privileged or confidential material intended for the named recipient only.\nIf you are not the named recipient, delete this message and all attachments. \nAny review, copying, printing, disclosure or other use is prohibited.\nWe reserve the right to monitor email sent through our network.\n\n*************************************", "msg_date": "Mon, 24 Mar 2003 15:37:10 -0700", "msg_from": "Oleg Lebedev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" } ]
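The rewrite works, though the mediatype comparison is really a filter rather than a join condition; for an inner join the two are equivalent, so the same query can be written a little more conventionally (only the media columns are selected here, which is presumably what was wanted anyway):

    SELECT m.*
      FROM media m
      JOIN (SELECT objectid AS mediaid
              FROM media
             WHERE activity = '347667'
            UNION
            SELECT ism.media
              FROM intsetmedia ism, set s
             WHERE ism.set = s.objectid
               AND s.activity = '347667') AS a1
        ON a1.mediaid = m.objectid
     WHERE m.mediatype = (SELECT objectid FROM mediatype
                           WHERE medianame = 'Audio')
     ORDER BY medianame ASC, status DESC;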
[ { "msg_contents": "Oleg,\n\nMy guess is that the query runs slow because by adding\ndata you exceeded what your database can do in memory\nand you need to do some kind of disk sort.\n\nHow about rewriting your query without the UNION and\nthe EXISTS to something like\n\nSELECT * FROM media m\nWHERE m.mediatype = (SELECT objectid FROM mediatype \n WHERE medianame='Audio')\nAND ( m.activity='347667'\n OR m.objectid IN (\n SELECT s.objectid\n FROM intsetmedia ism, set s\n WHERE ism.set = s.objectid\n AND s.activity='347667'))\nORDER BY medianame ASC, status DESC\n\nRegards,\nNikolaus Dilger\n\nOn Mon, 24 Mar 2003, Oleg Lebedev wrote:\n\n\nMessage\n\n\n\nPlease help me speed \nup the following query. It used to run in 2-5 sec., but\nnow it takes 2-3 \nmins!\nI ran VACUUM FULL \nANALYZE and REINDEX.\nSELECT * FROM media \nm\nWHERE m.mediatype = \n(SELECT objectid FROM mediatype WHERE\nmedianame='Audio') \nAND EXISTS \n\n        (SELECT * FROM \n\n                \n(SELECT objectid AS mediaid \n                \nFROM media \n                \nWHERE activity='347667' \n                \nUNION \n                \nSELECT ism.media AS mediaid \n                \nFROM intsetmedia ism, set s \n                \nWHERE ism.set = s.objectid \n                \nAND s.activity='347667' ) AS a1 \n        WHERE a1.mediaid = m.objectid \n\n        LIMIT 1) \nORDER BY medianame ASC, status DESC \n \nBasically it tries \nto find all Audios that are either explicitly attached\nto the given activity, or \nattached to the given activity via a many-to-many\nrelationship intsetmedia which \nlinks records in table Interaction, Set, and Media.\nI attached the \noutput of EXPLAIN and schemas and indexes on the tables\ninvolved. Most of the \nfields are not relevant to the query, but I listed them\nanyways. I discarded \ntrigger information, though.\nThanks for your \nhelp.\n \nOleg\n\n*************************************\n\nThis email may contain privileged or confidential\nmaterial intended for the named recipient only.\nIf you are not the named recipient, delete this message\nand all attachments. \nAny review, copying, printing, disclosure or other use\nis prohibited.\nWe reserve the right to monitor email sent through our\nnetwork.\n\n*************************************\n\n", "msg_date": "Mon, 24 Mar 2003 19:26:57 -0800 (PST)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" } ]
[ { "msg_contents": "Hi,\n\nUsing 7.2.3 and 7.2.4 (the last .3 is being retired this weekend).\n\nI'm struggling with an application which is keeping open a\ntransaction (or, likely from the results, more than one) against a\npair of frequently-updated tables. Unfortunately, the\nfrequently-updated tables are also a performance bottleneck.\n\nThese tables are small, but their physical size is very large,\nbecause of all the updates.\n\nThe problem is, of course, that vacuum isn't working because\n_something_ is holding open the transaction. But I can't tell what.\n\nWe connect to the database via JDBC; we have a pool which recycles\nits connections. In the next version of the pool, the autocommit\nfoolishness (end transaction and issue immediate BEGIN) is gone, but\nthat won't help me in the case at hand.\n\nWhat I'm trying to figure out is whether there is a way to learn\nwhich pids are responsible for the long-running transaction(s) that\ntouch(es) the candidate tables. Then I can find a way of paring those\nprocesses back, so that I can get vacuum to succeed.\n\nI think there must be a way with gdb, but I'm stumped. Any\nsuggestions? The time a process has been living is not a guide,\nbecause the connections (and hence processes) get recycled in the\npool.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 25 Mar 2003 08:12:34 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": true, "msg_subject": "Finding the PID keeping a transaction open" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> What I'm trying to figure out is whether there is a way to learn\n> which pids are responsible for the long-running transaction(s) that\n> touch(es) the candidate tables.\n\nIn 7.3 you could look at the pg_locks system view, but I can't think\nof any reasonable way to do it in 7.2 :-(\n\n> I think there must be a way with gdb, but I'm stumped.\n\nThe lock structures are arcane enough that manual examination with gdb\nwould take many minutes --- which you'd have to do with the LockMgr lock\nheld to keep them from changing underneath you. This seems quite\nunworkable for a production database ...\n\nIt's conceivable that some version of the pg_locks code could be\nback-ported to 7.2 --- you'd have to settle for dumping the info to\nthe log, probably, due to lack of table-function support, but it\ncould be done.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 25 Mar 2003 09:37:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding the PID keeping a transaction open " }, { "msg_contents": "On Tue, Mar 25, 2003 at 09:37:41AM -0500, Tom Lane wrote:\n> In 7.3 you could look at the pg_locks system view, but I can't think\n> of any reasonable way to do it in 7.2 :-(\n\nThanks. I was afraid you'd say that. Rats.\n\n> would take many minutes --- which you'd have to do with the LockMgr lock\n> held to keep them from changing underneath you. This seems quite\n\nWell, then, _that's_ a non-starter. Ugh. \n\n> It's conceivable that some version of the pg_locks code could be\n> back-ported to 7.2 --- you'd have to settle for dumping the info to\n> the log, probably, due to lack of table-function support, but it\n> could be done.\n\nI think it's probably better just to work on making the whole thing\nwork correctly with 7.3, instead. 
I'm keen to move it, and 7.3 seems\nstable enough, so I'm inclined just to move that up in priority. \n\nThanks,\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 25 Mar 2003 10:30:49 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding the PID keeping a transaction open" } ]
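For the record, the 7.3-era query this ends up needing looks something like the sketch below; pg_locks first appears in 7.3, and the left join to pg_class is there because transaction-lock entries carry a null relation:

    SELECT l.pid, c.relname, l.mode, l.granted
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
     ORDER BY l.pid;

The pid column then identifies which pooled backend is sitting on the long-running transaction.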
[ { "msg_contents": "I'm planning to develop a website network with n network nodes. There \nwill be a central node (website) wich will summarize information from \nall the network nodes. It will be also possible to use data between \nnodes (node A showing own data + data from node B). Table structures \nbetween nodes will be identical. So my question is: what should i do, \nput all the data in one huge database or spread it in several nearly \nidentical databases?\n\nData generated will grow at a rate of ~ 250Mb/year; 10000 rows per table \n(size is physical space of /var/lib/postgres/data after vacuum analyze, \nthis dir contains only one database).\n\nThank you in advance,\n\ndharana\n\n", "msg_date": "Tue, 25 Mar 2003 21:31:58 +0100", "msg_from": "dharana <[email protected]>", "msg_from_op": true, "msg_subject": "What's better: one huge database or several smaller ones?" } ]
[ { "msg_contents": "I have not been able to find any documentation on how to determine the \nproper settings for the max_fsm_relations and the max_fsm_pages config \noptions. Any help would be appreciated.\n\nThanks\n\nDoug Oden\n\n", "msg_date": "Thu, 27 Mar 2003 10:58:08 -0600", "msg_from": "Robert D Oden <[email protected]>", "msg_from_op": true, "msg_subject": "max_fsm settings" }, { "msg_contents": "The mail archives are a wonderful place, check out this thread and the\ndiscussion that followed. \nhttp://fts.postgresql.org/db/mw/msg.html?mid=1360953\n\nRobert Treat\n\nOn Thu, 2003-03-27 at 11:58, Robert D Oden wrote:\n> I have not been able to find any documentation on how to determine the \n> proper settings for the max_fsm_relations and the max_fsm_pages config \n> options. Any help would be appreciated.\n> \n> Thanks\n> \n> Doug Oden\n\n", "msg_date": "27 Mar 2003 16:10:21 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: max_fsm settings" } ]
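As a rule of thumb (the usual guidance of this era, not a quote from the linked thread): max_fsm_relations should comfortably exceed the number of tables in the cluster, and max_fsm_pages should be large enough to cover the pages that acquire free space between vacuums; VACUUM VERBOSE gives per-table page counts to base the estimate on. The postgresql.conf values below are purely illustrative:

    max_fsm_relations = 1000      # comfortably more than the number of tables
    max_fsm_pages = 100000        # at least the pages freed between regular vacuums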
[ { "msg_contents": "here is the query that is killing me:\n\nselect shoporder from sodetailtabletrans where shoporder not in(select \nshoporder from soheadertable)\n\nThis is just an example query. Any time I use 'where not in(' it takes several \nhours to return a resultset. The postgres version is 7.2.3 although I have \ntried it on my test server which has 7.3 on it and it runs just as slow. The \nserver is a fast server 2GHz with a gig of ram. I have tried several \ndifferant index setups but nothing seems to help.\n\nsoheadertable - 5104 rows\nCREATE TABLE \"soheadertable\" (\n \"shoporder\" numeric(10,0) NOT NULL,\n \"initrundate\" date,\n \"actualrundate\" date,\n \"processedminutes\" numeric(10,0),\n \"starttime\" timestamptz,\n \"endtime\" timestamptz,\n \"line\" int4,\n \"personcount\" numeric(10,0),\n \"product\" varchar(15),\n \"qtytomake\" numeric(10,3),\n \"linestatus\" numeric(2,0) DEFAULT 1,\n \"finishlinestatus\" numeric(2,0) DEFAULT 1,\n \"qtyinqueue\" numeric(10,3),\n \"lastcartonprinted\" numeric(10,0),\n \"qtydone\" int8,\n \"warehouse\" text,\n \"rescheduledate\" date,\n \"calculateddatetorun\" date\n);\nCREATE UNIQUE INDEX \"shoporder_soheadertable_ukey\" ON \"soheadertable\" \n(\"shoporder\");\n\nsodetailtabletrans - 31494 rows\nCREATE TABLE \"sodetailtabletrans\" (\n \"shoporder\" numeric(10,0) NOT NULL,\n \"soseq\" numeric(5,0) NOT NULL,\n \"product\" char(15) NOT NULL,\n \"qtyqueued\" numeric(17,2),\n \"qtyneeded\" numeric(17,2),\n \"qtyallocated\" numeric(17,2),\n \"qtyused\" numeric(17,2),\n \"linestatus\" numeric(2,0) DEFAULT 1,\n \"unitsperenditem\" numeric(10,1),\n CONSTRAINT \"sodetailtrans_pk\" PRIMARY KEY (\"shoporder\", \"soseq\")\n\n-Jeremiah Elliott\[email protected]\n);\n\n", "msg_date": "Fri, 28 Mar 2003 09:38:50 -0600", "msg_from": "Jeremiah Elliott <[email protected]>", "msg_from_op": true, "msg_subject": "slow query - where not in" }, { "msg_contents": "On Fri, Mar 28, 2003 at 09:38:50 -0600,\n Jeremiah Elliott <[email protected]> wrote:\n> here is the query that is killing me:\n> \n> select shoporder from sodetailtabletrans where shoporder not in(select \n> shoporder from soheadertable)\n\nThis will probably work better in 7.4.\n\nFor now there are several ways to rewrite this query.\n\nIf there are no null values for shoporder in soheadertable or\nsodetailtabletrans you can use not exists instead of not in:\nselect shoporder from sodetailtabletrans where shoporder not exists(select \nshoporder from soheadertable)\n\nYou can use set difference to calculate the result:\nselect shoporder from sodetailtabletrans except all select \nshoporder from soheadertable\n\nIf there are no null values for shoporder in one of sodetailtabletrans\nor soheadertable you can user an outer join with a restriction that limits\nthe rows of interest to those that don't match:\nselect sodetailtabletrans.shoporder from sodetailtabletrans left join\nsoheadertable using (shoporder) where soheadertable.shoporder is null\n\n", "msg_date": "Fri, 28 Mar 2003 09:59:33 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - where not in" }, { "msg_contents": "Jeremiah Elliott <[email protected]> writes:\n\n> here is the query that is killing me:\n> \n> select shoporder from sodetailtabletrans where shoporder not in(select \n> shoporder from soheadertable)\n> \n> This is just an example query. Any time I use 'where not in(' it takes several \n> hours to return a resultset. 
The postgres version is 7.2.3 although I have \n> tried it on my test server which has 7.3 on it and it runs just as slow. The \n> server is a fast server 2GHz with a gig of ram. I have tried several \n> differant index setups but nothing seems to help.\n\nThis should be improved with 7.4, however there are some other things you can\ntry now.\n\ntry\n\nSELECT shoporder \n FROM sodetailtabletrans \n WHERE NOT EXISTS (\n SELECT 1\n FROM soheadertable\n WHERE shoporder = sodetailtabletrans.shoporder\n )\n\nor else try something like\n\n SELECT a.shoporder\n FROM sodetailtabletrans as a\nLEFT OUTER JOIN soheadertable as b ON (a.shoporder = b.shoporder)\n WHERE b.shoporder IS NULL\n\n\n--\ngreg\n\n", "msg_date": "28 Mar 2003 11:20:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - where not in" }, { "msg_contents": "Bruno Wolff III <[email protected]> wrote:\n\n> Jeremiah Elliott <[email protected]> wrote:\n> > here is the query that is killing me:\n> >\n> > select shoporder from sodetailtabletrans where shoporder not in(select\n> > shoporder from soheadertable)\n>\n\n> If there are no null values for shoporder in soheadertable or\n> sodetailtabletrans you can use not exists instead of not in:\n> select shoporder from sodetailtabletrans where shoporder not exists(select\n> shoporder from soheadertable)\n\nI think this should rather be:\n\nSELECT shoporder FROM sodetailtabletrans\n WHERE NOT EXISTS (\n SELECT 1 FROM soheadertable\n WHERE soheadertable.shoporder = sodetailtabletrans.shoporder\n )\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Fri, 28 Mar 2003 17:53:46 +0100", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - where not in" }, { "msg_contents": "On Fri, Mar 28, 2003 at 17:53:46 +0100,\n Michael Paesold <[email protected]> wrote:\n> Bruno Wolff III <[email protected]> wrote:\n> \n> I think this should rather be:\n> \n> SELECT shoporder FROM sodetailtabletrans\n> WHERE NOT EXISTS (\n> SELECT 1 FROM soheadertable\n> WHERE soheadertable.shoporder = sodetailtabletrans.shoporder\n> )\n\nThanks for catching my mistake.\n\n", "msg_date": "Fri, 28 Mar 2003 11:53:56 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - where not in" }, { "msg_contents": "thanks guys - Greg, Bruno and Michael. That made a world of diferance. \n\nthx \n-Jeremiah\n\n", "msg_date": "Fri, 28 Mar 2003 14:58:48 -0600", "msg_from": "Jeremiah Elliott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query - where not in" } ]
[ { "msg_contents": "Hi everybody!\n\nI have to insert a lot of data (more than 1,000,000 rows) in various \ntables. I use stored procedures in C to insert the data. It is necessary \nto run ANALYZE after inserting a few thousand rows into a table. Can someone \ntell me how to call ANALYZE (or analyze_rel(Oid relid, VacuumStmt *vacstmt)) \nfrom a C stored procedure? Any help would be appreciated. \n\nThanks\n\nUlli Mueckstein\n\n-- \n\n", "msg_date": "Fri, 28 Mar 2003 18:04:47 +0100 (CET)", "msg_from": "Ulli Mueckstein <[email protected]>", "msg_from_op": true, "msg_subject": "calling analyze from a stored procedure in C" } ]
[ { "msg_contents": "Hi!\n\nI've got the following problem:\nPostgreSQL 7.2.1-2 (Debian) on Duron/700MHz, 512MB, IDE hdd (laptop).\n\nI've got a table that has 6400 rows, an index on the deleted, nachname,\nvorname and hvvsnummer attributes, and my O-R wrapper generate queries\nlike this:\n\nSELECT patient.id, patient.vorname, patient.nachname, patient.titel,\npatient.geburtsdatum, patient.hvvsnummer, patient.geschlecht,\npatient.adresse_id, patient.beruf, patient.kommentar, patient.cave,\npatient.zusatzversicherung, patient.deleted FROM patient WHERE\n((((patient.deleted = 'f') AND (patient.nachname LIKE 'K%')) AND\n(patient.vorname LIKE '%')) AND (patient.hvvsnummer LIKE '%'))\n\nThis results in a SeqScan von patient. Even more curious is that simpler\nqueries like \n\nselect * from patient where deleted='f'; OR:\nselect * from patient where nachname LIKE 'K%';\n\nall result in SeqScan on patient.\n\nI've \"analyzed\" and \"reindex\" the table already multiple times, and\nstill PostgreSQL insists upon not using any index.\n\nTIA for any pointers,\n\nAndreas\n\nmpp2=# \\d patient\n Table \"patient\"\n Column | Type | Modifiers\n--------------------+--------------+-------------\n id | integer | not null\n vorname | text | not null\n nachname | text | not null\n titel | text |\n geburtsdatum | date |\n hvvsnummer | text |\n geschlecht | character(1) |\n adresse_id | integer |\n beruf | text |\n kommentar | text |\n cave | text |\n zusatzversicherung | text |\n deleted | boolean | default 'f'\nIndexes: patient_deleted,\n patient_hvvsnummer,\n patient_nachname,\n patient_vorname\nPrimary key: patient_pkey\nCheck constraints: \"patient_geschlecht\" (((geschlecht = 'm'::bpchar) OR\n(geschlecht = 'w'::bpchar)) OR (geschlecht = '?'::bpchar))\nTriggers: RI_ConstraintTrigger_352787,\n RI_ConstraintTrigger_352789,\n RI_ConstraintTrigger_352801,\n RI_ConstraintTrigger_352803,\n RI_ConstraintTrigger_352815\n\nmpp2=# select count(*) from patient;\n count\n-------\n 6406\n(1 row)\n\nmpp2=# explain SELECT * FROM patient WHERE (patient.nachname LIKE 'K%');\nNOTICE: QUERY PLAN:\n\nSeq Scan on patient (cost=0.00..173.07 rows=272 width=70)\n\nEXPLAIN\nmpp2=# explain SELECT * FROM patient WHERE NOT deleted;\nNOTICE: QUERY PLAN:\n\nSeq Scan on patient (cost=0.00..157.06 rows=6406 width=70)\n\nEXPLAIN\nmpp2=# explain SELECT * FROM patient WHERE deleted='f';\nNOTICE: QUERY PLAN:\n\nSeq Scan on patient (cost=0.00..173.07 rows=6406 width=70)\n\nEXPLAIN", "msg_date": "29 Mar 2003 10:49:19 +0100", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": true, "msg_subject": "Index not used, performance problem" }, { "msg_contents": "Hi Andreas,\n\nA few points:\n\nPostgreSQL is rarely going to use an index for a boolean column. The\nreason is that since almost by definition true will occupy 50% of the rows\nand false will occupy 50% (say). In this case, a sequential scan is\nalways faster. You would say that the 'selectivity' isn't good enough.\n\nAs for the LIKE searches, the only ones that PostgreSQL can index are of\nthe form 'FOO%', which is what you are doing. However, I believe that\nPostgreSQL cannot do this if your database encoding is anything other than\n'C'. 
So, if you are using an Austrian encoding, it might not be able to\nuse the index.\n\nSome things to try:\n\nIf you are always seeking over all four columns, then drop the 4\nindividual indexes and create one like this:\n\ncreate index my_key on patient(nachname, vorname, hvvsnummer);\n\nThat would be more efficient, in the C locale.\n\nAlso, what is the point of searching for LIKE '%'? Why not just leave that\nout?\n\nChris\n\nOn 29 Mar 2003, Andreas Kostyrka wrote:\n\n> Hi!\n>\n> I've got the following problem:\n> PostgreSQL 7.2.1-2 (Debian) on Duron/700MHz, 512MB, IDE hdd (laptop).\n>\n> I've got a table that has 6400 rows, an index on the deleted, nachname,\n> vorname and hvvsnummer attributes, and my O-R wrapper generate queries\n> like this:\n>\n> SELECT patient.id, patient.vorname, patient.nachname, patient.titel,\n> patient.geburtsdatum, patient.hvvsnummer, patient.geschlecht,\n> patient.adresse_id, patient.beruf, patient.kommentar, patient.cave,\n> patient.zusatzversicherung, patient.deleted FROM patient WHERE\n> ((((patient.deleted = 'f') AND (patient.nachname LIKE 'K%')) AND\n> (patient.vorname LIKE '%')) AND (patient.hvvsnummer LIKE '%'))\n>\n> This results in a SeqScan von patient. Even more curious is that simpler\n> queries like\n>\n> select * from patient where deleted='f'; OR:\n> select * from patient where nachname LIKE 'K%';\n>\n> all result in SeqScan on patient.\n>\n> I've \"analyzed\" and \"reindex\" the table already multiple times, and\n> still PostgreSQL insists upon not using any index.\n>\n> TIA for any pointers,\n>\n> Andreas\n>\n> mpp2=# \\d patient\n> Table \"patient\"\n> Column | Type | Modifiers\n> --------------------+--------------+-------------\n> id | integer | not null\n> vorname | text | not null\n> nachname | text | not null\n> titel | text |\n> geburtsdatum | date |\n> hvvsnummer | text |\n> geschlecht | character(1) |\n> adresse_id | integer |\n> beruf | text |\n> kommentar | text |\n> cave | text |\n> zusatzversicherung | text |\n> deleted | boolean | default 'f'\n> Indexes: patient_deleted,\n> patient_hvvsnummer,\n> patient_nachname,\n> patient_vorname\n> Primary key: patient_pkey\n> Check constraints: \"patient_geschlecht\" (((geschlecht = 'm'::bpchar) OR\n> (geschlecht = 'w'::bpchar)) OR (geschlecht = '?'::bpchar))\n> Triggers: RI_ConstraintTrigger_352787,\n> RI_ConstraintTrigger_352789,\n> RI_ConstraintTrigger_352801,\n> RI_ConstraintTrigger_352803,\n> RI_ConstraintTrigger_352815\n>\n> mpp2=# select count(*) from patient;\n> count\n> -------\n> 6406\n> (1 row)\n>\n> mpp2=# explain SELECT * FROM patient WHERE (patient.nachname LIKE 'K%');\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on patient (cost=0.00..173.07 rows=272 width=70)\n>\n> EXPLAIN\n> mpp2=# explain SELECT * FROM patient WHERE NOT deleted;\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on patient (cost=0.00..157.06 rows=6406 width=70)\n>\n> EXPLAIN\n> mpp2=# explain SELECT * FROM patient WHERE deleted='f';\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on patient (cost=0.00..173.07 rows=6406 width=70)\n>\n> EXPLAIN\n>\n>\n>\n\n", "msg_date": "Sat, 29 Mar 2003 21:47:51 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "On Sat, Mar 29, 2003 at 09:47:51PM +0800, Christopher Kings-Lynne wrote:\n> the form 'FOO%', which is what you are doing. However, I believe that\n> PostgreSQL cannot do this if your database encoding is anything other than\n> 'C'. 
So, if you are using an Austrian encoding, it might not be able to\n\nThat is, you need to have had the LOCALE set to 'C' when you did\ninitdb. It's not enough to change it afterwards.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 29 Mar 2003 10:34:52 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "In Linux (Redhat) where, exactly, does one set the LOCALE to C?\n\nTIA :-)\n\nJohn.\n\nOn Saturday 29 March 2003 10:34 am, Andrew Sullivan wrote:\n> On Sat, Mar 29, 2003 at 09:47:51PM +0800, Christopher Kings-Lynne wrote:\n> > the form 'FOO%', which is what you are doing. However, I believe that\n> > PostgreSQL cannot do this if your database encoding is anything other\n> > than 'C'. So, if you are using an Austrian encoding, it might not be\n> > able to\n>\n> That is, you need to have had the LOCALE set to 'C' when you did\n> initdb. It's not enough to change it afterwards.\n>\n> A\n\n", "msg_date": "Sat, 29 Mar 2003 11:49:00 -0500", "msg_from": "\"John K. Herreshoff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "On Sat, 2003-03-29 at 14:47, Christopher Kings-Lynne wrote:\n> As for the LIKE searches, the only ones that PostgreSQL can index are of\n> the form 'FOO%', which is what you are doing. However, I believe that\n> PostgreSQL cannot do this if your database encoding is anything other than\n> 'C'. So, if you are using an Austrian encoding, it might not be able to\n> use the index.\nWell, I use LATIN1. How do I store 8-bit chars else? And if so,\nPostgreSQL seems quite strongly broken, because a relational database\nrelies by design heavily on indexes.\n\n> Also, what is the point of searching for LIKE '%'? Why not just leave that\n> out?\nWell, it's about generating the SQL query.\nActually it's just a border case for searching for a given prefix.\n\nAndreas", "msg_date": "29 Mar 2003 17:57:58 +0100", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "On Sat, 2003-03-29 at 14:47, Christopher Kings-Lynne wrote:\n> Hi Andreas,\n> \n> A few points:\n> \n> PostgreSQL is rarely going to use an index for a boolean column. The\n> reason is that since almost by definition true will occupy 50% of the rows\n> and false will occupy 50% (say). In this case, a sequential scan is\n> always faster. You would say that the 'selectivity' isn't good enough.\nWell, perhaps it should collect statistics, because a \"deleted\" column\nis a prime candidate for a strongly skewed population.\n\nAndreas", "msg_date": "29 Mar 2003 17:59:11 +0100", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "# su postgres\n% export LANG=C\n% /usr/local/pgsql/bin/initdb blah blah\n\nThat always works for me!\n\nOn Sat, 2003-03-29 at 08:49, John K. Herreshoff wrote:\n> In Linux (Redhat) where, exactly, does one set the LOCALE to C?\n> \n> TIA :-)\n> \n> John.\n> \n> On Saturday 29 March 2003 10:34 am, Andrew Sullivan wrote:\n> > On Sat, Mar 29, 2003 at 09:47:51PM +0800, Christopher Kings-Lynne wrote:\n> > > the form 'FOO%', which is what you are doing. 
However, I believe that\n> > > PostgreSQL cannot do this if your database encoding is anything other\n> > > than 'C'. So, if you are using an Austrian encoding, it might not be\n> > > able to\n> >\n> > That is, you need to have had the LOCALE set to 'C' when you did\n> > initdb. It's not enough to change it afterwards.\n> >\n> > A\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n-- \nJord Tanner <[email protected]>\n\n", "msg_date": "29 Mar 2003 08:59:29 -0800", "msg_from": "Jord Tanner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "I have many boolean columns, and my queries almost always use indexes. \nJust because a column can have only 2 values does not mean that 50% of \nthem will be true and 50% will be false. The ratio of T|F depends on \nthe content. I have some boolean columns with less than 1% true. \nObviously, an index will help with these ... and it does, tremendously.\n\nIf you only have 6400 rows, it is *possible* that the planner will \nchoose not to use an index, as using an index might be slower than just \nseqscanning.\n\nIf you do lots of updates on that table, you might need to do a vacuum \nfull occasionally, although I'm not certain how much that benefits a \nboolean field.\n\nAlso, if possible, I would consider upgrading to a more recent version. \n I have seen many of the experts here post news about significant bug \nfixes between 7.2 and 7.3. (My experience with boolean fields is using \n7.3.)\n\nIn addition, when posting to the list, it is helpful to post an \"explain \nanalyze\" for a query, as it gives more & better details (for those same \nexperts, of which I am not).\n\n\nAndreas Kostyrka wrote:\n> On Sat, 2003-03-29 at 14:47, Christopher Kings-Lynne wrote:\n> \n>>Hi Andreas,\n>>\n>>A few points:\n>>\n>>PostgreSQL is rarely going to use an index for a boolean column. The\n>>reason is that since almost by definition true will occupy 50% of the rows\n>>and false will occupy 50% (say). In this case, a sequential scan is\n>>always faster. You would say that the 'selectivity' isn't good enough.\n> \n> Well, perhaps it should collect statistics, because a \"deleted\" column\n> is a prime candidate for a strongly skewed population.\n> \n> Andreas\n\n-- \nMatt Mello\n512-350-6900\n\n", "msg_date": "Sat, 29 Mar 2003 11:55:00 -0600", "msg_from": "Matt Mello <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "Andreas Kostyrka <[email protected]> writes:\n> On Sat, 2003-03-29 at 14:47, Christopher Kings-Lynne wrote:\n>> As for the LIKE searches, the only ones that PostgreSQL can index are of\n>> the form 'FOO%', which is what you are doing. However, I believe that\n>> PostgreSQL cannot do this if your database encoding is anything other than\n>> 'C'. So, if you are using an Austrian encoding, it might not be able to\n>> use the index.\n\n> Well, I use LATIN1. How do I store 8-bit chars else?\n\nYou are both confusing locale with encoding. 
The LIKE optimization\nrequires 'C' locale, but it should work with any encoding (or at least\nany single-byte encoding; not sure about multibyte).\n\n> And if so, PostgreSQL seems quite strongly broken, because a\n> relational database relies by design heavily on indexes.\n\nSome of us would reply that the locales are broken ;-). The bizarre\nsorting rules demanded by so many locales are what make it impossible\nto optimize a LIKE prefix into an indexscan. See the archives for\nthe reasons why our many tries at this have failed.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 29 Mar 2003 18:13:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem " }, { "msg_contents": "On 29 Mar 2003, Andreas Kostyrka wrote:\n\n> On Sat, 2003-03-29 at 14:47, Christopher Kings-Lynne wrote:\n> > Hi Andreas,\n> > \n> > A few points:\n> > \n> > PostgreSQL is rarely going to use an index for a boolean column. The\n> > reason is that since almost by definition true will occupy 50% of the rows\n> > and false will occupy 50% (say). In this case, a sequential scan is\n> > always faster. You would say that the 'selectivity' isn't good enough.\n> Well, perhaps it should collect statistics, because a \"deleted\" column\n> is a prime candidate for a strongly skewed population.\n\nIt does. When you run analyze. You have vacuumed and analyzed the \ndatabase right?\n\nAssuming you have, it's often better to make a partial index for your \nbooleans. I'll assume that patient.deleted being true is a more rare \ncondition than false, since false is the default.\n\nSo, create your index this way to make it smaller and faster:\n\ncreate index dxname on sometable (bool_field) where bool_field IS TRUE;\n\nNow you have a tiny little index that gets scanned ultra fast and is easy \nto maintain. You have to, however, access it the same way. the proper \nway to reference a bool field is with IS [NOT] {TRUE|FALSE}\n\nselect * from some_table where bool_field IS TRUE would match the index I \ncreated aboce.\n\nselect * from some_table where bool_field = 't' would not.\n\n", "msg_date": "Mon, 31 Mar 2003 11:21:45 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> So, create your index this way to make it smaller and faster:\n> create index dxname on sometable (bool_field) where bool_field IS TRUE;\n\nAlso note that the index itself could be on some other column; for\nexample if you do\n\n\tcreate index fooi on foo (intcol) where boolcol;\n\nthen a query like\n\n\tselect ... from foo where intcol >= 42 and boolcol;\n\ncould use the index to exploit both WHERE conditions.\n\n> You have to, however, access it the same way. the proper \n> way to reference a bool field is with IS [NOT] {TRUE|FALSE}\n\nThis strikes me as pedantry. \"WHERE bool\" (resp. \"WHERE NOT bool\") has\nthe same semantics and is easier to read, at least to me. (Of course,\nif you think differently, then by all means write the form that seems\nclearest to you.)\n\nBut yeah, the condition appearing in the actual queries had best match\nwhat's used in the partial-index CREATE command exactly. 
The planner is\nnot real smart about deducing \"this implies that\".\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 31 Mar 2003 13:53:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used, performance problem " } ]
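Pulling the suggestions in this thread together for the patient table, one plausible combination (illustrative only; the right columns depend on which searches actually matter) is a partial index covering the common "not deleted" case:

    CREATE INDEX patient_live_nachname ON patient (nachname) WHERE deleted IS FALSE;

    -- queries must spell the condition the same way, e.g.
    SELECT * FROM patient
     WHERE nachname LIKE 'K%'
       AND deleted IS FALSE;

with the caveat already raised above that the LIKE 'K%' prefix can only use the index if the database was initdb'd in the C locale.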
[ { "msg_contents": "Hi,\n\nI've written a pl/pgsql-function called 'f_matchstr' to support a \nsearch-module on several websites. In short, the function scans the \ncontent of a field and counts the occurances of a given search-string.\n\nThe complete function is listed below.\n\nOn a database-server that runs SuSE-linux 7.1 and PostgreSQL 7.2 the \nfunction perfoms fine. Even when text-fields are accessed with large \nvolumes of text inside the response is OK. This is also very important, \nbecause the search-module is used to scan articles that are stored in a \ndatabasetable.\n\nRecently the database-server is upgraded. It now runs SuSE 8.1 and \nPostgreSQL 7.2. I copied the databases to the new server using \npg_dumpall etc.\n\nOn the new server - although this server has far better specs! - the \nfunction does NOT perfom as well as on the old server. Searches take \nseveral minutes, where on the old server a few SECONDS where needed.\n\nAs far as I can see the settings of PostgreSQL on both servers are the same.\n\nCan someone help me with this problem??\n\nThanx,\n\nWil Peters\nwww.ldits.nl\n\n\n\n\n-- Name: \"f_matchstr\" (text,text,integer,integer)\n-- Type: FUNCTION\n-- Owner: postgres\n\nCREATE FUNCTION \"f_matchstr\" (text,text,integer,integer) RETURNS integer \nAS 'DECLARE\n\tfld text; -- Field\n\tsstr text; -- Searchstring\n\tscptn ALIAS FOR $3;\t-- Case-sensitivity\n\tsxmtch integer; \t-- Exact-matching\n\tmatch integer; -- Number of matches\n\ti integer;\n\tlenfld integer;\n\tlensstr integer;\n\tsrchstr text;\n\tmiddle text;\n\tlenmiddle integer;\nBEGIN\n\tfld := $1;\n\tsstr := $2;\n\tsxmtch := $4;\n\tlenfld := length(fld);\n\tlensstr := length(sstr);\n\ti := 1;\n\tmatch\t:= 0;\n\n\t-- Work case insensitive\n\tIF scptn = 0 THEN\n\t fld := lower(fld); -- Set fieldcontent to lowercase\n\t sstr := lower(sstr); -- Set searchstring to lowercase\n\tEND IF;\n\n\tIF lenfld = lensstr THEN\n\t sxmtch := 0; -- Setting of sxmtch does not matter\n\tEND IF;\n\n\t-- Set searchstring\n\tsrchstr := '''' || sstr || '''';\n\n\tIF fld ~ srchstr THEN\n\t IF lensstr <= lenfld AND sxmtch = 0 THEN\n\t\t-- Walk trough fieldcontent\n\t\tWHILE i <= lenfld LOOP\n\t\t IF substring(fld,i,lensstr) = sstr THEN\n\t\t\tmatch := match + 1;\n\t\t END IF;\n\t\t i := i + 1;\n\t\t -- Escape from loop if 10 matches are reached\n\t\t IF match >= 10 THEN\n\t\t\ti := lenfld + 1;\n\t\t END IF;\n\t\tEND LOOP;\n\t ELSIF lensstr < lenfld AND sxmtch = 1 THEN\n\t\t-- Set searchstring for begin of fieldcontent\n\t\tsrchstr := ''^'' || sstr || ''[ ,:?!]+'';\n\t\tIF substring(fld,1,lensstr+1) ~ srchstr THEN\n\t\t match := match + 1;\n\t\tEND IF;\n\t\t-- Set searchstring for end of fieldcontent\n\t\tsrchstr := '' '' || sstr || ''[.?!]?$'';\n\t\tIF substring(fld,lenfld-lensstr-1,lensstr+2) ~ srchstr \t THEN\n\t\t match := match + 1;\n\t\tEND IF;\n\t\t-- Extract middle part of fieldcontent\n\t\tmiddle := substring(fld,lensstr+1,lenfld-(2*lensstr));\n\t\t-- Store length of middle part\n\t\tlenmiddle := length(middle);\n\t\t-- Set searchstring for end of fieldcontent\n\t\t-- See below for regular expression thas is needed\n\t\tsrchstr := ''[ >(\"\\\\'' || '''''' || '']+'' || sstr || ''[ ,.:?!)<\"\\\\'' \n|| '''''' || '']+'';\n\t\t-- Walk trough middle part of fieldcontent\n\t\tWHILE i <= lenmiddle LOOP\n\t\t IF substring(middle,i,lensstr+2) ~ srchstr THEN\n\t\t\tmatch := match + 1;\n\t\t END IF;\n\t\t i := i + 1;\n\t\t -- Escape from loop if 10 matches are reached\n\t\t IF match >= 10 THEN\n\t\t\ti := lenmiddle + 1;\n\t\t END 
IF;\n\t\tEND LOOP;\n\t END IF;\n\tEND IF;\n\tRETURN match;\nEND;' LANGUAGE 'plpgsql';\n\n", "msg_date": "Sat, 29 Mar 2003 22:17:26 +0100", "msg_from": "Wil Peters <[email protected]>", "msg_from_op": true, "msg_subject": "Bad perfomance of pl/pgsql-function on new server" }, { "msg_contents": "Wil Peters <[email protected]> writes:\n> On the new server - although this server has far better specs! - the \n> function does NOT perfom as well as on the old server. Searches take \n> several minutes, where on the old server a few SECONDS where needed.\n\nIs the new installation really equivalent to the old? I'd wonder about\ndifferences in multibyte compilation option, database locale and\nencoding, etc. Any of these could result in a huge hit in text-pushing\nperformance.\n\nAnother traditional post-upgrade problem is forgetting to VACUUM\nANALYZE; but that probably shouldn't affect this function, since it's\nnot issuing any database queries.\n\n(Personally I'd have written this sort of function in plperl or pltcl,\neither of which are far more appropriate for text-string-mashing than\nplpgsql. But that's not really answering your question.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 29 Mar 2003 18:25:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad perfomance of pl/pgsql-function on new server " } ]
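If all the function needs is a count of occurrences, it may be worth comparing the character-by-character loop against a single SQL expression. This is a sketch only: it assumes replace() is available in the server version in use (it is in 7.3), that the search string is non-empty, and that non-overlapping matches are acceptable; the original function also counts overlapping matches and caps at 10, so the two are not exactly equivalent. The sample values in the FROM clause are just stand-ins for the field and the search string:

    SELECT (length(lower(fld)) - length(replace(lower(fld), lower(sstr), '')))
           / length(sstr) AS matches
      FROM (SELECT 'the quick brown fox and the lazy dog'::text AS fld,
                   'the'::text AS sstr) AS t;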
[ { "msg_contents": "Hi, i'm running 7.2.3 (on RHL7.3). I've read the \"WAL Configuration\"\nsection of the manual:\n\nhttp://www.postgresql.org/docs/view.php?version=7.2&idoc=0&file=wal-configuration.html\n\nI've set wal_debug = 1 in postgresql.conf, but there's no example\nof how LogInsert and LogFlush are logged. I can find many\n\nDEBUG: XLogFlush: request 6/6D8F54BC; write 6/6E13ECB8; flush 6/6E13ECB8\n\nlines in my log, but no XLogInsert. There are lot of\n\nDEBUG: INSERT @ 6/70DC8744: prev 6/70DC8564; xprev 6/70DC8564; xid 372353616; bkpb 1: Btree - insert: node 9468978/12901623;\n\nlines, but it's not clear if they are calls to LogInsert or something\ndifferent. They also come in different kinds (Btree - insert,\nHeap - update, Transaction - commit, XLOG - checkpoint:, maybe others)\nand I don't know which ones I should be looking for.\n\nI've got 7365 'XLogFlush:' lines and 23275 'INSERT @' lines in the\nlast 9 hours. Should I increase the number of WAL buffers?\n\nTIA,\n.TM.\n-- \n ____/ ____/ /\n / / /\t\t\tMarco Colombo\n ___/ ___ / /\t\t Technical Manager\n / / /\t\t\t ESI s.r.l.\n _____/ _____/ _/\t\t [email protected]\n\n", "msg_date": "Mon, 31 Mar 2003 14:06:50 +0200 (CEST)", "msg_from": "Marco Colombo <[email protected]>", "msg_from_op": true, "msg_subject": "WAL monitoring and optimizing" }, { "msg_contents": "Marco Colombo <[email protected]> writes:\n> I've got 7365 'XLogFlush:' lines and 23275 'INSERT @' lines in the\n> last 9 hours. Should I increase the number of WAL buffers?\n\nWith a transaction rate as low as that, I wouldn't think you need to.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 31 Mar 2003 12:54:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL monitoring and optimizing " } ]
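For completeness, the settings that would matter if the WAL rate ever did become a bottleneck are wal_buffers and checkpoint_segments in postgresql.conf; at roughly 23k inserts and 7k flushes in nine hours the defaults are plenty, so the values below are purely illustrative. (As far as I can tell from the source of that era, the "INSERT @" lines are indeed the XLogInsert calls the documentation refers to.)

    wal_buffers = 16            # default is 8 in 7.2/7.3
    checkpoint_segments = 8     # default is 3; raise it if checkpoints come too often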
[ { "msg_contents": "hi there,\n\nI was reading bruce's 'postgresql hardware performance\ntuning' article and he has suggested ext3 filesystem\nwith data mode = writeback for high performance. \n\nI would really appreciate if anyone could share your\nexperiences with ext3 from a production stand point or\nany other suggestions for best read/write performance.\n\nOur applications is an hybrid of heavy inserts/updates\nand DSS queries.\n\nversion - postgres 7.3.2\nhardware - raid 5 (5 x 73 g hardware raid), 4g ram, 2\n* 2.8 GHz cpu, redhat 7.3\n\nNote : we don't have the luxury of raid 1+0 (dedicated\ndisks) for xlog and clog files to start with but may\nbe down the line we might look into those options, but\nfor now i've planned on having them on local drives\nrather than raid 5.\n\nthanks for any inputs,\nShankar\n\n\n\n\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Platinum - Watch CBS' NCAA March Madness, live on your desktop!\nhttp://platinum.yahoo.com\n\n", "msg_date": "Mon, 31 Mar 2003 12:55:44 -0800 (PST)", "msg_from": "Shankar K <[email protected]>", "msg_from_op": true, "msg_subject": "ext3 filesystem / linux 7.3" }, { "msg_contents": "What is the URL of that article? I understood that ext2 was faster with PG\nand so I went to a lot of trouble of creating an ext2 partition just for PG\nand gave up the journalling to do that. Something about double effort since\nPG already does a lot of that.\n\nBruce, is there a final determination of which is faster/safer?\n\n Jeff\n\n----- Original Message -----\nFrom: \"Shankar K\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, March 31, 2003 3:55 PM\nSubject: [PERFORM] ext3 filesystem / linux 7.3\n\n\n> hi there,\n>\n> I was reading bruce's 'postgresql hardware performance\n> tuning' article and he has suggested ext3 filesystem\n> with data mode = writeback for high performance.\n>\n> I would really appreciate if anyone could share your\n> experiences with ext3 from a production stand point or\n> any other suggestions for best read/write performance.\n>\n> Our applications is an hybrid of heavy inserts/updates\n> and DSS queries.\n>\n> version - postgres 7.3.2\n> hardware - raid 5 (5 x 73 g hardware raid), 4g ram, 2\n> * 2.8 GHz cpu, redhat 7.3\n>\n> Note : we don't have the luxury of raid 1+0 (dedicated\n> disks) for xlog and clog files to start with but may\n> be down the line we might look into those options, but\n> for now i've planned on having them on local drives\n> rather than raid 5.\n>\n> thanks for any inputs,\n> Shankar\n>\n>\n>\n>\n>\n>\n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Platinum - Watch CBS' NCAA March Madness, live on your desktop!\n> http://platinum.yahoo.com\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Tue, 1 Apr 2003 12:33:15 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "hi jeff,\n\ngo to\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/\nunder 'filesystems' slide.\n\nsnip\n\nFile system choice is particularly difficult on Linux\nbecause there are so many file system choices, and\nnone of them are optimal: ext2 is not entirely\ncrash-safe, ext3, XFS, and JFS are journal-based, and\nReiser is optimized for small files and does\njournalling. 
The journalling file systems can be\nsignificantly slower than ext2 but when crash recovery\nis required, ext2 isn't an option. If ext2 must be\nused, mount it with sync enabled. Some people\nrecommend XFS or an ext3 filesystem mounted with\ndata=writeback.\n\n/snip\n\n--- \"Jeffrey D. Brower\" <[email protected]> wrote:\n> What is the URL of that article? I understood that\n> ext2 was faster with PG\n> and so I went to a lot of trouble of creating an\n> ext2 partition just for PG\n> and gave up the journalling to do that. Something\n> about double effort since\n> PG already does a lot of that.\n> \n> Bruce, is there a final determination of which is\n> faster/safer?\n> \n> Jeff\n> \n> ----- Original Message -----\n> From: \"Shankar K\" <[email protected]>\n> To: <[email protected]>\n> Sent: Monday, March 31, 2003 3:55 PM\n> Subject: [PERFORM] ext3 filesystem / linux 7.3\n> \n> \n> > hi there,\n> >\n> > I was reading bruce's 'postgresql hardware\n> performance\n> > tuning' article and he has suggested ext3\n> filesystem\n> > with data mode = writeback for high performance.\n> >\n> > I would really appreciate if anyone could share\n> your\n> > experiences with ext3 from a production stand\n> point or\n> > any other suggestions for best read/write\n> performance.\n> >\n> > Our applications is an hybrid of heavy\n> inserts/updates\n> > and DSS queries.\n> >\n> > version - postgres 7.3.2\n> > hardware - raid 5 (5 x 73 g hardware raid), 4g\n> ram, 2\n> > * 2.8 GHz cpu, redhat 7.3\n> >\n> > Note : we don't have the luxury of raid 1+0\n> (dedicated\n> > disks) for xlog and clog files to start with but\n> may\n> > be down the line we might look into those options,\n> but\n> > for now i've planned on having them on local\n> drives\n> > rather than raid 5.\n> >\n> > thanks for any inputs,\n> > Shankar\n> >\n> >\n> >\n> >\n> >\n> >\n> > __________________________________________________\n> > Do you Yahoo!?\n> > Yahoo! Platinum - Watch CBS' NCAA March Madness,\n> live on your desktop!\n> > http://platinum.yahoo.com\n> >\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to\n> [email protected]\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Tax Center - File online, calculators, forms, and more\nhttp://platinum.yahoo.com\n\n", "msg_date": "Tue, 1 Apr 2003 09:39:17 -0800 (PST)", "msg_from": "Shankar K <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Tue, 1 Apr 2003 09:39:17 -0800 (PST) in message <[email protected]>, Shankar K <[email protected]> wrote:\n> hi jeff,\n> \n> go to\n> http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> under 'filesystems' slide.\n> \n\nI suspect that is what he's seen. \n\n From my experience, ext3 is only a percent or two slower than ext2 under pg_bench. It saves an amazing amount of time on startup after a failure by not having to fsck to confirm that the filesystem is in a consistent state. \n\nI believe that ext3 is a metadata journaling system, and not a data journaling system. This would indicate that the PG transactioning is complimentary to the filesystem journaling, not duplication. 
\n\neric\n\n", "msg_date": "Tue, 01 Apr 2003 09:53:48 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "\nI have heard XFS with the mount option is fastest.\n\n---------------------------------------------------------------------------\n\nShankar K wrote:\n> hi jeff,\n> \n> go to\n> http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> under 'filesystems' slide.\n> \n> snip\n> \n> File system choice is particularly difficult on Linux\n> because there are so many file system choices, and\n> none of them are optimal: ext2 is not entirely\n> crash-safe, ext3, XFS, and JFS are journal-based, and\n> Reiser is optimized for small files and does\n> journalling. The journalling file systems can be\n> significantly slower than ext2 but when crash recovery\n> is required, ext2 isn't an option. If ext2 must be\n> used, mount it with sync enabled. Some people\n> recommend XFS or an ext3 filesystem mounted with\n> data=writeback.\n> \n> /snip\n> \n> --- \"Jeffrey D. Brower\" <[email protected]> wrote:\n> > What is the URL of that article? I understood that\n> > ext2 was faster with PG\n> > and so I went to a lot of trouble of creating an\n> > ext2 partition just for PG\n> > and gave up the journalling to do that. Something\n> > about double effort since\n> > PG already does a lot of that.\n> > \n> > Bruce, is there a final determination of which is\n> > faster/safer?\n> > \n> > Jeff\n> > \n> > ----- Original Message -----\n> > From: \"Shankar K\" <[email protected]>\n> > To: <[email protected]>\n> > Sent: Monday, March 31, 2003 3:55 PM\n> > Subject: [PERFORM] ext3 filesystem / linux 7.3\n> > \n> > \n> > > hi there,\n> > >\n> > > I was reading bruce's 'postgresql hardware\n> > performance\n> > > tuning' article and he has suggested ext3\n> > filesystem\n> > > with data mode = writeback for high performance.\n> > >\n> > > I would really appreciate if anyone could share\n> > your\n> > > experiences with ext3 from a production stand\n> > point or\n> > > any other suggestions for best read/write\n> > performance.\n> > >\n> > > Our applications is an hybrid of heavy\n> > inserts/updates\n> > > and DSS queries.\n> > >\n> > > version - postgres 7.3.2\n> > > hardware - raid 5 (5 x 73 g hardware raid), 4g\n> > ram, 2\n> > > * 2.8 GHz cpu, redhat 7.3\n> > >\n> > > Note : we don't have the luxury of raid 1+0\n> > (dedicated\n> > > disks) for xlog and clog files to start with but\n> > may\n> > > be down the line we might look into those options,\n> > but\n> > > for now i've planned on having them on local\n> > drives\n> > > rather than raid 5.\n> > >\n> > > thanks for any inputs,\n> > > Shankar\n> > >\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > __________________________________________________\n> > > Do you Yahoo!?\n> > > Yahoo! Platinum - Watch CBS' NCAA March Madness,\n> > live on your desktop!\n> > > http://platinum.yahoo.com\n> > >\n> > >\n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to\n> > [email protected]\n> > \n> > \n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! 
Tax Center - File online, calculators, forms, and more\n> http://platinum.yahoo.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Tue, 1 Apr 2003 12:54:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Tue, Apr 01, 2003 at 12:33:15PM -0500, Jeffrey D. Brower wrote:\n> What is the URL of that article? I understood that ext2 was faster with PG\n> and so I went to a lot of trouble of creating an ext2 partition just for PG\n> and gave up the journalling to do that. Something about double effort since\n> PG already does a lot of that.\n\nI don't know how ext3 could be faster than ext2, since it has to do\nmore work. \n\nBut ext2 is not crash-safe. So your data could well be hosed if you\ncome back from a crash on ext2.\n\nActually, I have my doubts about _any_ of the journaling filesystems\nfor Linux: ext3 has a reputation for being slow if you journal in the\nreal-safe mode, and there have been so many unrepeatable reiserfs\nproblem reports that I'm loathe to use it for real systems. I had\nexceptionally good experiences with xfs when I was admining SGI\nboxes, but that's not part of the standard Linux kernel distribution,\nand with no idea why, I think my managers would get grumpy with me\nfor using it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 1 Apr 2003 12:55:19 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "eric soroos wrote:\n> On Tue, 1 Apr 2003 09:39:17 -0800 (PST) in message\n> <[email protected]>, Shankar\n> K <[email protected]> wrote:\n> > hi jeff,\n> >\n> > go to\n> > http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> > under 'filesystems' slide.\n> >\n> \n> I suspect that is what he's seen.\n> \n> >From my experience, ext3 is only a percent or two slower than ext2 under pg_bench. It saves an amazing amount of time on startup after a failure by not having to fsck to confirm that the filesystem is in a consistent state.\n> \n> I believe that ext3 is a metadata journaling system, and not a\n> data journaling system. This would indicate that the PG\n> transactioning is complimentary to the filesystem journaling,\n> not duplication.\n\nExt3 is only metadata journaling if you set the mount flags as\ndescribed. I also don't think pgbench is the best test for testing file\nsystem performance.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Tue, 1 Apr 2003 13:26:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "OK so am I hearing:\n\nXFS is the fastest (but is it the safest?) 
but does not come on Linux.\n\nExt2 does less work than Ext3 so is fastest among what DOES come with\nLinux - but if you have a crash that fsck can't fix you're hosed.\n\nExt3 is quite a bit slower if set to be real safe, a wee bit slower if run\nwith standard options which makes it more crash-safe, and much slower if the\nmount flags are set to metadata journaling but that is much safer as a file\nsystem because the metadata journaling is complementary to the PG\ntransactioning.\n\nTo determine which you want you must choose which one feels to you like the\nright balance of speed and the setup work you are willing to perform and\nmaintain.\n\nDo I have it right?\n\n Jeff\n\n", "msg_date": "Tue, 1 Apr 2003 15:42:54 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "FYI, I believe that XFS will be included in the 2.6 kernel.\n\nKeith Bottner\[email protected]\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andrew\nSullivan\nSent: Tuesday, April 01, 2003 11:55 AM\nTo: [email protected]\nSubject: Re: [PERFORM] ext3 filesystem / linux 7.3\n\n\nOn Tue, Apr 01, 2003 at 12:33:15PM -0500, Jeffrey D. Brower wrote:\n> What is the URL of that article? I understood that ext2 was faster \n> with PG and so I went to a lot of trouble of creating an ext2 \n> partition just for PG and gave up the journalling to do that. \n> Something about double effort since PG already does a lot of that.\n\nI don't know how ext3 could be faster than ext2, since it has to do more\nwork. \n\nBut ext2 is not crash-safe. So your data could well be hosed if you\ncome back from a crash on ext2.\n\nActually, I have my doubts about _any_ of the journaling filesystems for\nLinux: ext3 has a reputation for being slow if you journal in the\nreal-safe mode, and there have been so many unrepeatable reiserfs\nproblem reports that I'm loathe to use it for real systems. I had\nexceptionally good experiences with xfs when I was admining SGI boxes,\nbut that's not part of the standard Linux kernel distribution, and with\nno idea why, I think my managers would get grumpy with me for using it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Tue, 1 Apr 2003 14:43:35 -0600", "msg_from": "\"Keith Bottner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Just switch to FreeBSD and use UFS ;)\r\n\r\nChris\r\n\r\n----- Original Message ----- \r\nFrom: \"Jeffrey D. Brower\" <[email protected]>\r\nTo: \"Bruce Momjian\" <[email protected]>; \"eric soroos\" <[email protected]>\r\nCc: \"Shankar K\" <[email protected]>; <[email protected]>\r\nSent: Wednesday, April 02, 2003 4:42 AM\r\nSubject: Re: [PERFORM] ext3 filesystem / linux 7.3\r\n\r\n\r\n> OK so am I hearing:\r\n> \r\n> XFS is the fastest (but is it the safest?) 
but does not come on Linux.\r\n> \r\n> Ext2 does less work than Ext3 so is fastest among what DOES come with\r\n> Linux - but if you have a crash that fsck can't fix you're hosed.\r\n> \r\n> Ext3 is quite a bit slower if set to be real safe, a wee bit slower if run\r\n> with standard options which makes it more crash-safe, and much slower if the\r\n> mount flags are set to metadata journaling but that is much safer as a file\r\n> system because the metadata journaling is complementary to the PG\r\n> transactioning.\r\n> \r\n> To determine which you want you must choose which one feels to you like the\r\n> right balance of speed and the setup work you are willing to perform and\r\n> maintain.\r\n> \r\n> Do I have it right?\r\n> \r\n> Jeff\r\n> \r\n> \r\n> ---------------------------(end of broadcast)---------------------------\r\n> TIP 3: if posting/reading through Usenet, please send an appropriate\r\n> subscribe-nomail command to [email protected] so that your\r\n> message can get through to the mailing list cleanly\r\n> ", "msg_date": "Wed, 2 Apr 2003 09:33:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "> Just switch to FreeBSD and use UFS ;)\n\nI must say, I found this whole discussion rather amusing on the\nsidelines given it's largely a non-problem for non-Linux users. :)\n\n\"Better performance through engineering elegance.\"\n\n-sc\n\n-- \nSean Chittenden\[email protected]\n\n", "msg_date": "Tue, 1 Apr 2003 17:49:06 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Wednesday 02 April 2003 07:19, you wrote:\n> > Just switch to FreeBSD and use UFS ;)\n>\n> I must say, I found this whole discussion rather amusing on the\n> sidelines given it's largely a non-problem for non-Linux users. :)\n>\n> \"Better performance through engineering elegance.\"\n\nWell, this may sound like a troll, but I have said this before and will say \nthat again. I found reiserfs to be faster than ext2, upto 40% at times when \nwe tried a quasi closed source benchmark on a quad xeon machine with SCSI \nRAID.\n\nEverything else being same and defaults used out of box, reiserfs on mandrake9 \nwas far faster in every respect than ext2.\n\nI personally find freeBSD UFS to be a better combo based on my workstation \ntests. I believe freeBSD has a better IO scheuler that utilises disk \nbandwidth in optimal manner. Scratching (my poor IDE) disk like mad does not \nhappen with freeBSD but linux does it plenty. But I didn't benchmark it for \nthroughput..\n\n Shridhar\n\n", "msg_date": "Wed, 2 Apr 2003 09:42:18 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Tue, 2003-04-01 at 19:53, eric soroos wrote:\n> On Tue, 1 Apr 2003 09:39:17 -0800 (PST) in message <[email protected]>, Shankar K <[email protected]> wrote:\n> > hi jeff,\n> > \n> > go to\n> > http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> > under 'filesystems' slide.\n> > \n> \n> I suspect that is what he's seen. \n> \n> >From my experience, ext3 is only a percent or two slower than ext2 under pg_bench. It saves an amazing amount of time on startup after a failure by not having to fsck to confirm that the filesystem is in a consistent state. 
\n> \n> I believe that ext3 is a metadata journaling system, and not a data journaling system. This would indicate that the PG transactioning is complimentary to the filesystem journaling, not duplication. \nIt's both. See the -o data=journal|data=ordered|data=writeback mount\ntime option.\n\nAndreas", "msg_date": "02 Apr 2003 17:13:19 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Tue, 2003-04-01 at 19:55, Andrew Sullivan wrote:\n> I don't know how ext3 could be faster than ext2, since it has to do\n> more work. \nDepending upon certain parameters, it can be faster, because it writes\nthe data to the journal serially without head movement. The kernel might\nbe able to write that data in it spot later when the hdd would be idle.\n\nSo yes, in certain cases, ext3 might be faster than ext2.\n\n> \n> Actually, I have my doubts about _any_ of the journaling filesystems\n> for Linux: ext3 has a reputation for being slow if you journal in the\nWell, journaled filesystem usually means only meta-data journaling. ext3\nis the only LinuxFS (AFAIK) that offers a fully journaled fs.\n> real-safe mode, and there have been so many unrepeatable reiserfs\n> problem reports that I'm loathe to use it for real systems. I had\nWell, I've been using ReiserFS now for years, and never had any problems\nwith it. \n\nAndreas\n-- \nAndreas Kostyrka\nJosef-Mayer-Strasse 5\n83043 Bad Aibling", "msg_date": "02 Apr 2003 17:18:26 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "... and what *exactly* is the difference?\n\n", "msg_date": "Wed, 2 Apr 2003 10:37:36 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Wed, 2003-04-02 at 17:37, Jeffrey D. Brower wrote:\n> ... and what *exactly* is the difference?\nBetween what? (how about a bit more context?)\n\nAndreas\n-- \nAndreas Kostyrka\nJosef-Mayer-Strasse 5\n83043 Bad Aibling", "msg_date": "02 Apr 2003 17:47:36 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Wed, Apr 02, 2003 at 05:18:26PM +0200, Andreas Kostyrka wrote:\n\n> Well, I've been using ReiserFS now for years, and never had any problems\n> with it. \n\nMe too. But the \"known failure modes\" that people keep reporting\nabout have to do with completely trashing, say, a whole page of data. \nYour directories are fine, but the data is all hosed.\n\nI've never had it happen. I've never seen anyone who can\nconsistently reproduce it. 
But I've certainly read about it often\nenough to have pretty serious reservations about relying on the\nfilesystem for data I can't afford to lose.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 2 Apr 2003 10:56:49 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Are there any comments on JFS regarding real-life safety and speed?\n\n", "msg_date": "Wed, 02 Apr 2003 18:45:46 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": ">> This would indicate that the PG transactioning is complimentary to the\nfilesystem journaling, not duplication.\n\n>It's both. See the -o data=journal|data=ordered|data=writeback mount\n>time option.\n\nI did a RTFM on that but I am now confused again.\n\nI am wondering what the *best* setting is with ext3. When I RTFM the man\npage for mount, the data=writeback option says plainly that it is fastest\nbut in a crash old data is quite possibly on the dataset. The safest\n*looks* to be data=journal since the journaling happens before writes are\ncommitted to the file (and presumably the journal is used to update the file\non the disk to apply the journal entry to the disk file?) and the default is\ndata=ordered which says write to the disk AND THEN to the journal (which\nseems bizarre to me).\n\nHow all of that works WITH and/or AGAINST PostgreSQL and what metadata\nREALLY means is my bottom line quandary. Obviously that is where finding\nthe warm and fuzzy place between speed and safety is found.\n\n Jeff\n\n", "msg_date": "Wed, 2 Apr 2003 12:07:57 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Jeff,\n\n> How all of that works WITH and/or AGAINST PostgreSQL and what metadata\n> REALLY means is my bottom line quandary. Obviously that is where finding\n> the warm and fuzzy place between speed and safety is found.\n\nFor your $PGDATA directory, your only need for filesystem journaling is to \nprevent a painful fsck process on an unexpected power-out. You are not, as a \nrule, terribly concerned with journaling the data as PostgreSQL already \nprovides some data recovery protection through WAL.\n\nAs a result, on my one server where I have to use Ext3 (I use Reiser on most \nmachines, and have never had a problem except for one disaster when upgrading \nReiser versions), the $PGDATA is mounted \"noatime,data=writeback\"\n\n(BTW, I found that combining \"data=writeback\" with Linux LVM on RedHat 8.0 \nresulted in system-fatal mounting errors. Anyone else have this problem?)\n\nOf course, if you have a machine with a $60,000 disk array and disk I/O is \nunlimited, then maybe you want to enable data=journal just for the protection \nagainst corruption of the WAL and clog files. 
\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 2 Apr 2003 12:05:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Thanks for that Josh.\n\nI had previously understood that ext3 was a bad thing with PostgreSQL and I\nwent way above and beyond to create it on an Ext2 filesystem (the only one\non the server) and mount that.\n\nShould I undo that work and go back to Ext3?\n\n Jeff\n\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Jeffrey D. Brower\" <[email protected]>; \"Andreas Kostyrka\"\n<[email protected]>\nCc: \"Bruce Momjian\" <[email protected]>;\n<[email protected]>; \"Shankar K\" <[email protected]>; \"eric\nsoroos\" <[email protected]>\nSent: Wednesday, April 02, 2003 3:05 PM\nSubject: Re: [PERFORM] ext3 filesystem / linux 7.3\n\n\n> Jeff,\n>\n> > How all of that works WITH and/or AGAINST PostgreSQL and what metadata\n> > REALLY means is my bottom line quandary. Obviously that is where\nfinding\n> > the warm and fuzzy place between speed and safety is found.\n>\n> For your $PGDATA directory, your only need for filesystem journaling is to\n> prevent a painful fsck process on an unexpected power-out. You are not,\nas a\n> rule, terribly concerned with journaling the data as PostgreSQL already\n> provides some data recovery protection through WAL.\n>\n> As a result, on my one server where I have to use Ext3 (I use Reiser on\nmost\n> machines, and have never had a problem except for one disaster when\nupgrading\n> Reiser versions), the $PGDATA is mounted \"noatime,data=writeback\"\n>\n> (BTW, I found that combining \"data=writeback\" with Linux LVM on RedHat 8.0\n> resulted in system-fatal mounting errors. Anyone else have this\nproblem?)\n>\n> Of course, if you have a machine with a $60,000 disk array and disk I/O is\n> unlimited, then maybe you want to enable data=journal just for the\nprotection\n> against corruption of the WAL and clog files.\n>\n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n", "msg_date": "Wed, 2 Apr 2003 19:29:57 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Jeff,\n\n> Thanks for that Josh.\n\nWelcome\n\n> I had previously understood that ext3 was a bad thing with PostgreSQL and I\n> went way above and beyond to create it on an Ext2 filesystem (the only one\n> on the server) and mount that.\n> \n> Should I undo that work and go back to Ext3?\n\nI would. Not necessarily Ext3, mind you; you might want to consider Reiser or \nJFS, too. My experience has been better with Reiser than Ext3 with Postgres, \nbut I can't back that up with any statistics.\n\n(DISCLAIMER: This is not professional advice, and comes with no warranty. If \nyou want professional advice, pay me.)\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 2 Apr 2003 16:46:36 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Tuesday, April 1, 2003, at 03:42 PM, Jeffrey D. Brower wrote:\n\n> OK so am I hearing:\n\nEnough...\n\n...there is waaay too much hearsay going on in this thread. Let's come \nup with an acceptable test battery and actually settle it once and for \nall with good hard numbers. 
It would be worth my while to spend some \ntime on this since the developers I support currently hate pgsql due to \nperformance complaints (on servers that predate my employment there). \nSo if I am going to move them to better servers it would be worth my \nwhile to do some homework on what OS and FS is best.\n\nI'm not qualified at all to define the tests. I am willing to try it \non any OS that will run on a Sun Ultra 5, which would include Linux, \nseveral BSD's and Solaris to name a few. It also runs the gammut of \nfilesystems that have been talked about here. The machine isn't a \nbarnstormer but I'm willing to put in an 18GB SCSI drive and try this \nwith many different OS's and FS's if someone qualified will put \ntogether an acceptable test suite and it doesn't meet with too much \nopposition by the gurus here.\n\nThe test machine:\n\n\tSun UltraSPARC 5\n\t333MHz UltraSPARC CPU, 2MB cache\n\t256MB RAM\n\twhatever SCSI card I can find most quickly\n\teither a 9GB or 18GB SCSI drive (whichever I can find most quickly)\n\nThe test client would likely be an Apple Powerbook G4 800MHz, 512MB, \nrunning OS X 10.2.4. Yes the client runs rings around the server but I \ncan afford to abuse the server.\n\nWhile the server is admittedly an older machine, for the purpose of \nthis test it should not matter as long as the hardware configuration is \nequal for all tests. If we agree on a test suite there is nothing to \nstop someone from running the same suite on their own hardware and \nreporting their own results.\n\nAnyone game to give a go at this?\n\n--\n\n\"What difference does it make to the dead, the orphans and the \nhomeless, whether the mad destruction is wrought under the name of \ntotalitarianism or the holy name of liberty or democracy?\" - Mahatma \nGandhi", "msg_date": "Wed, 2 Apr 2003 21:44:31 -0500", "msg_from": "Chris Hedemark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Wed, Apr 02, 2003 at 09:44:31PM -0500, Chris Hedemark wrote:\n\n> While the server is admittedly an older machine, for the purpose of \n> this test it should not matter as long as the hardware configuration is \n> equal for all tests. If we agree on a test suite there is nothing to \n\nThat's false. \n\nOne of the big problems with a lot of tuning info is that it tends\nnot to take int consideration hardware, &c. I can tell you for sure\nthat if you have a giant-cache array connected by fibre channel, _it\nmakes no difference_ what the filesystem is. The array is so fast\nthat you can't really fill the cache under normal load anyway. \nSimilarly, if you have enough memory, every read test is going to be\nas fast as any other: you'll get 100% cache hits, and the same memory\nconfigured the same way will always respond at about the same speed.\n\nThat said, I think you're right to demand some tests, and to say that\nholding the machine constant and changing filesystems is a good\nfilesystem test.\n\nSo here are some suggested things, in no real order:\n\n1.\tMake sure you run out of buffers before you start to read\n(for read filesystem speed tests).\n2.\tPull the power plug repeatedly while the server is under\nload. Judge robustness.\n3.\tPut WAL and data area on different filesystems (to be fair,\nthis should probably be different spindles, but I'll take what I can\nget) and configure the filesystems in various ways (including, say,\nwriteback for data and full journalling for WAL). 
See tests above.\n4.\tMake sure your controller doesn't lie about fsync.\n5.\tTest under different loads. 10% writes vs. 90% reads;\n20% writes; &c. Compare simple INSERT write with UPDATE write. \nCompare UPDATE writes where the UPDATEd row is the same one over and\nover. Make sure you do (2) several times.\n\nLots of these are artificial. But it seems they might reveal\nsomething. I'd be particularly keen to hear about what _really_ is\nup with reiserfs.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 2 Apr 2003 21:58:16 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Chris,\n\n> ...there is waaay too much hearsay going on in this thread. Let's come \n> up with an acceptable test battery and actually settle it once and for \n> all with good hard numbers. It would be worth my while to spend some \n> time on this since the developers I support currently hate pgsql due to \n> performance complaints (on servers that predate my employment there). \n> So if I am going to move them to better servers it would be worth my \n> while to do some homework on what OS and FS is best.\n\nYou're not going to be able to determine this for certain, but at least you \nshould be able to debunk some myths. Here's my suggested tests:\n\n1) Read-only test -- numerous small rapidfire queries in the fashion of a PHP \nweb application. PGBench already does this one test ok, maybe you could use \nthat.\n\n2) Complex query test -- run a few 12-table queries with CASE statements, \ncustom functions and subselects and/or UNIONs. \n\n3) Transaction Test -- hit the database with numerous rapid-fire single row \nupdates to a few tables.\n\n4) OLAP Test -- do a few massive updates to thousands of rows based on related \ndata and/or cascading updates to multiple tables and dozens-hundreds of rows. \nCreate large temp tables based on Joe Conway's Crosstab.\n\n5) Mixed use test: combine 1, 2, & 3 in a ratio of 70% 10% 20% on several \nsimultaneous connections.\n\nOf course this requires us to have a sample database with at least 100,000 \nrows of data in one or two tables plus at least 5-10 additional tables with \nrealistically complex relationships. Donor, anyone?\n\nAlso, we'll have to talk about .conf files ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 2 Apr 2003 21:33:44 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "\nIn message <[email protected]>, Josh Berkus writes:\n\n Chris,\n \n > ...there is waaay too much hearsay going on in this thread. Let's come \n > up with an acceptable test battery and actually settle it once and for \n > all with good hard numbers. It would be worth my while to spend some \n > time on this since the developers I support currently hate pgsql due to \n > performance complaints (on servers that predate my employment there). \n > So if I am going to move them to better servers it would be worth my \n > while to do some homework on what OS and FS is best.\n \n You're not going to be able to determine this for certain, but at\n least you should be able to debunk some myths. 
Here's my\n suggested tests:\n \n [...]\n \n Also, we'll have to talk about .conf files ...\n \nWhen I installed my postgres, I tried a test program I wrote with all\nfour values of wal_sync, and for my RedHat Linux 8.0 ext3 filesystem\n(default mount options), and my toy test; open_sync performed the best\nfor me. Thus, I would suggest adding the wal_sync_method as another\naxis for your testing.\n\n -Seth Robertson\n [email protected]\n\n", "msg_date": "Thu, 03 Apr 2003 00:46:48 -0500", "msg_from": "Seth Robertson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3 " }, { "msg_contents": "\nOn Thursday, April 3, 2003, at 12:33 AM, Josh Berkus wrote:\n\n> You're not going to be able to determine this for certain, but at \n> least you\n> should be able to debunk some myths. Here's my suggested tests:\n[snip]\n\nBeing a mere sysadmin, it is creation of the test cases (perl script, \nmaybe?) that I'll have to ask someone else with more of a development \nbent to help with. My talent is more along the lines of system \nadministration. Plus I am willing to take the time to go through these \ntests over & over with a different OS or different tuning parameters on \nthe same OS, different FS's, etc. Someone else needs to come up with \nthe test code. The client machine has pgsql on it also if the results \nare going into a db that won't go away after every test. :)\n\n--\n\n\"What difference does it make to the dead, the orphans and the \nhomeless, whether the mad destruction is wrought under the name of \ntotalitarianism or the holy name of liberty or democracy?\" - Mahatma \nGandhi\n\n", "msg_date": "Thu, 3 Apr 2003 05:58:01 -0500", "msg_from": "Chris Hedemark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Wed, 2 Apr 2003, Josh Berkus wrote:\n\n> > I had previously understood that ext3 was a bad thing with PostgreSQL and I\n> > went way above and beyond to create it on an Ext2 filesystem (the only one\n> > on the server) and mount that.\n\nWe recently started using Postgres on a new database server running RH 7.3 \nand ext3. Due to some kernel problems the machine would crash at random \ntimes. Each time it crashed it came back up extremly easily with no data \nloss. If we were on ext2 coming back up after a crash probably wouldn't \nhave been quite as easy.\n\nWe have since given up on RH 7.3 and gone with RH Enterprise ES. Just an \nFIY for any of you out there thinking about moving to RH 7.3 or those that \nare having problems with 7.3 and ext3.\n\nChris\n\n", "msg_date": "Thu, 3 Apr 2003 08:08:46 -0800 (PST)", "msg_from": "Chris Sutton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Chris,\n\n> Being a mere sysadmin, it is creation of the test cases (perl script,\n> maybe?) that I'll have to ask someone else with more of a development\n> bent to help with. \n\nI'll write the test queries and perl scripts if someone else can supply the \ndatabase. Unfortunately, while I have a few databases that meet the \ncriteria, they are all NDA. 
\n\nCriteria again:\nMust have at least 100,000 rows with 12+ columns in \"main\" table.\nMust have at least 10-12 additional tables, some with FK relationships to the \nmain table and each other.\nMust be OK to make contents public.\nMore is better up to 500MB.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Thu, 3 Apr 2003 08:52:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "\nOn Thursday, April 3, 2003, at 11:52 AM, Josh Berkus wrote:\n\n> Unfortunately, while I have a few databases that meet the\n> criteria, they are all NDA.\n\nI'm in the same boat.\n\n--\n\n\"What difference does it make to the dead, the orphans and the \nhomeless, whether the mad destruction is wrought under the name of \ntotalitarianism or the holy name of liberty or democracy?\" - Mahatma \nGandhi\n\n", "msg_date": "Thu, 3 Apr 2003 11:59:44 -0500", "msg_from": "Chris Hedemark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Can't we generate data? Random data stored in random formats at random\nsizes would stress the file system wouldn't it?\n\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Chris Hedemark\" <[email protected]>; <[email protected]>\nSent: Thursday, April 03, 2003 11:52 AM\nSubject: Re: [PERFORM] ext3 filesystem / linux 7.3\n\n\n> Chris,\n>\n> > Being a mere sysadmin, it is creation of the test cases (perl script,\n> > maybe?) that I'll have to ask someone else with more of a development\n> > bent to help with.\n>\n> I'll write the test queries and perl scripts if someone else can supply\nthe\n> database. Unfortunately, while I have a few databases that meet the\n> criteria, they are all NDA.\n>\n> Criteria again:\n> Must have at least 100,000 rows with 12+ columns in \"main\" table.\n> Must have at least 10-12 additional tables, some with FK relationships to\nthe\n> main table and each other.\n> Must be OK to make contents public.\n> More is better up to 500MB.\n>\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Thu, 3 Apr 2003 12:12:16 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Jeffery,\n\n> Can't we generate data? Random data stored in random formats at random\n> sizes would stress the file system wouldn't it?\n\nIn my experience, randomly generated data tends to resemble real data very \nlittle in distribution patterns and data types. 
This is one of the \nlimitations of PGBench.\n\nSurely there must be an OSS project out there with a medium-large PG database \nwhich is OSS-licensed?\n\nI'll post on GENERAL\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 3 Apr 2003 09:43:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Thu, 3 Apr 2003, Chris Sutton wrote:\n\n> On Wed, 2 Apr 2003, Josh Berkus wrote:\n> \n> > > I had previously understood that ext3 was a bad thing with PostgreSQL and I\n> > > went way above and beyond to create it on an Ext2 filesystem (the only one\n> > > on the server) and mount that.\n> \n> We recently started using Postgres on a new database server running RH 7.3 \n> and ext3. Due to some kernel problems the machine would crash at random \n> times. Each time it crashed it came back up extremly easily with no data \n> loss. If we were on ext2 coming back up after a crash probably wouldn't \n> have been quite as easy.\n> \n> We have since given up on RH 7.3 and gone with RH Enterprise ES. Just an \n> FIY for any of you out there thinking about moving to RH 7.3 or those that \n> are having problems with 7.3 and ext3.\n\nWe're still running RH 7.2 due to issues we had with 7.3 as well.\n\n", "msg_date": "Thu, 3 Apr 2003 10:45:50 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Hi Scott,\n\nCould you please share with us the problems you had\nwith linux 7.3\n\nwould be really interested to know the kernel configs\nand ext3 filesystem modes\n\nShankar\n\n--- \"scott.marlowe\" <[email protected]> wrote:\n> On Thu, 3 Apr 2003, Chris Sutton wrote:\n> \n> > On Wed, 2 Apr 2003, Josh Berkus wrote:\n> > \n> > > > I had previously understood that ext3 was a\n> bad thing with PostgreSQL and I\n> > > > went way above and beyond to create it on an\n> Ext2 filesystem (the only one\n> > > > on the server) and mount that.\n> > \n> > We recently started using Postgres on a new\n> database server running RH 7.3 \n> > and ext3. Due to some kernel problems the machine\n> would crash at random \n> > times. Each time it crashed it came back up\n> extremly easily with no data \n> > loss. If we were on ext2 coming back up after a\n> crash probably wouldn't \n> > have been quite as easy.\n> > \n> > We have since given up on RH 7.3 and gone with RH\n> Enterprise ES. Just an \n> > FIY for any of you out there thinking about moving\n> to RH 7.3 or those that \n> > are having problems with 7.3 and ext3.\n> \n> We're still running RH 7.2 due to issues we had with\n> 7.3 as well.\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to\[email protected])\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! 
Tax Center - File online, calculators, forms, and more\nhttp://tax.yahoo.com\n\n", "msg_date": "Thu, 3 Apr 2003 11:45:52 -0800 (PST)", "msg_from": "Shankar K <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Thu, 3 Apr 2003, Shankar K wrote:\n\n> Hi Scott,\n> \n> Could you please share with us the problems you had\n> with linux 7.3\n> \n> would be really interested to know the kernel configs\n> and ext3 filesystem modes\n\nActually, I had a couple of problems with it, one of which was that I \ncouldn't get it to book with ext3 file systems properly. I think it was \nsomething to do with ext3 on linux kernel RAID sets that wouldn't work \nright. There's probably a fix for it, but 7.2 is pretty stable, and we \ncan wait for 8.0 or maybe look at another distro.\n\nI remember there being some other issues I had with configuration stuff \nlike this, but now that it's been many months since I played with it I \ncan't remember them all.\n\nMy personal problem was that redhat stopped including linuxconf as an rpm \npackage, and the only configuration programs they include don't seem to \nwork well from a command line, but seemed to prefer to be used in X11.\n\n", "msg_date": "Thu, 3 Apr 2003 13:19:10 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Hey guys,\n\nOn Thu, 2003-04-03 at 13:19, scott.marlowe wrote:\n> On Thu, 3 Apr 2003, Shankar K wrote:\n> \n> > Hi Scott,\n> > \n> > Could you please share with us the problems you had\n> > with linux 7.3\n> > \n> > would be really interested to know the kernel configs\n> > and ext3 filesystem modes\n> \n> Actually, I had a couple of problems with it, one of which was that I \n> couldn't get it to book with ext3 file systems properly. I think it was \n> something to do with ext3 on linux kernel RAID sets that wouldn't work \n> right. There's probably a fix for it, but 7.2 is pretty stable, and we \n> can wait for 8.0 or maybe look at another distro.\n> \n\nNormally I stay far far away from the distro wars / filesystem\ndiscussions. However I'd like to offer information about the systems we\nuse here at OFS. The 2 core database servers are a matched pair of\nsystem with the following statistics. \nDual AMD MP 1800's\nTyan Thunder K7x motherboard\nLSI Megaraid Elite 1650 controller w/ battery pack & 128 Mb cache\n5 Seagate Cheetak 10k 36 Gig drives Configured in a raid 1+0 w/ hot\nspare.\n\nBoth are using the stock redhat 7.3 kernel w/ the latest LSI megaraid\ndrivers and firmware.\n\nThe postgresql cluster itself contains the records and information\nnecessary to process loans and loan applications. \n\nWe are using rserv ( from contrib ) to replicate data from three\ndatabases in the cluster between the two servers. ( Hahah, I think we\nmay be the only people using this in production or something. )\n\nAt any rate we use ext3 on the filesystems and we've had no problems at\nall with the systems. Everything is stable and runs. 
We keep the\nmachines running and available 24/7 with scheduled downtime transitions\nto the redundant servers as we need to for whatever kind of\nenhancements.\n\nThe largest table in the cluster btw, has 4.2 million tuples in it and\nits the rserv log table.\n\nHope this gives you some additional information to base your decisions\non.\n\nSincerely,\nWill LaShell\n\n<snip>", "msg_date": "03 Apr 2003 15:04:19 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Will,\n\n\n> At any rate we use ext3 on the filesystems and we've had no problems at\n> all with the systems. Everything is stable and runs. We keep the\n> machines running and available 24/7 with scheduled downtime transitions\n> to the redundant servers as we need to for whatever kind of\n> enhancements.\n\nHey, can we use you as a case study for advocacy.openoffice.org?\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Thu, 3 Apr 2003 15:12:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "On Wed, 2003-04-02 at 17:56, Andrew Sullivan wrote:\n> On Wed, Apr 02, 2003 at 05:18:26PM +0200, Andreas Kostyrka wrote:\n> \n> > Well, I've been using ReiserFS now for years, and never had any problems\n> > with it. \n> \n> Me too. But the \"known failure modes\" that people keep reporting\n> about have to do with completely trashing, say, a whole page of data. \n> Your directories are fine, but the data is all hosed.\n> \n> I've never had it happen. I've never seen anyone who can\n> consistently reproduce it. But I've certainly read about it often\n> enough to have pretty serious reservations about relying on the\n> filesystem for data I can't afford to lose.\nWell, than backups and statistics are your only solution.\nOnly way to know if something works is to test it for some time. (You\nnever know if something in your use doesn't trigger some border case of\nmalfunction in the kernel.)\n\nAndreas\n-- \nAndreas Kostyrka\nJosef-Mayer-Strasse 5\n83043 Bad Aibling", "msg_date": "04 Apr 2003 13:28:50 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Yes, I think we'd be willing to do that.\n\n( 480 967 7530 ) is the phone contact for the company,\nIT manager is Trevor Mantle\nand you can ask for me as well.\n\[email protected] is my work email you can feel free to\nuse.\n\nSincerely,\n\nWill LaShell\n\nOn Thu, 2003-04-03 at 16:12, Josh Berkus wrote:\n> Will,\n> \n> \n> > At any rate we use ext3 on the filesystems and we've had no problems at\n> > all with the systems. Everything is stable and runs. 
We keep the\n> > machines running and available 24/7 with scheduled downtime transitions\n> > to the redundant servers as we need to for whatever kind of\n> > enhancements.\n> \n> Hey, can we use you as a case study for advocacy.openoffice.org?\n> \n> -- \n> -Josh Berkus\n> \n> ______AGLIO DATABASE SOLUTIONS___________________________\n> Josh Berkus\n> Complete information technology \[email protected]\n> and data management solutions \t(415) 565-7293\n> for law firms, small businesses \t fax 621-2533\n> and non-profit organizations. \tSan Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster", "msg_date": "04 Apr 2003 15:57:19 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "We've had 2 crashes on red hat 7.3 in about 9 months of running. Both\ninstances required manual power off/on of the server, but everything\ncame up nice and ready to go. The problems seemed to stem from i/o load\nwith the kernel (not postgresql specific), but should be resolved with\nthe latest Red Hat kernel. If you search on buffer_jdirty in bugzilla\nyou'll see a couple of reports. \n\nRobert Treat\n\nOn Thu, 2003-04-03 at 14:45, Shankar K wrote:\n> Hi Scott,\n> \n> Could you please share with us the problems you had\n> with linux 7.3\n> \n> would be really interested to know the kernel configs\n> and ext3 filesystem modes\n> \n> Shankar\n> \n> --- \"scott.marlowe\" <[email protected]> wrote:\n> > On Thu, 3 Apr 2003, Chris Sutton wrote:\n> > \n> > > On Wed, 2 Apr 2003, Josh Berkus wrote:\n> > > \n> > > > > I had previously understood that ext3 was a\n> > bad thing with PostgreSQL and I\n> > > > > went way above and beyond to create it on an\n> > Ext2 filesystem (the only one\n> > > > > on the server) and mount that.\n> > > \n> > > We recently started using Postgres on a new\n> > database server running RH 7.3 \n> > > and ext3. Due to some kernel problems the machine\n> > would crash at random \n> > > times. Each time it crashed it came back up\n> > extremly easily with no data \n> > > loss. If we were on ext2 coming back up after a\n> > crash probably wouldn't \n> > > have been quite as easy.\n> > > \n> > > We have since given up on RH 7.3 and gone with RH\n> > Enterprise ES. Just an \n> > > FIY for any of you out there thinking about moving\n> > to RH 7.3 or those that \n> > > are having problems with 7.3 and ext3.\n> > \n> > We're still running RH 7.2 due to issues we had with\n> > 7.3 as well.\n> >\n\n", "msg_date": "07 Apr 2003 13:17:35 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Josh Berkus wrote:\n> Jeffery,\n> \n> > Can't we generate data? Random data stored in random formats at random\n> > sizes would stress the file system wouldn't it?\n> \n> In my experience, randomly generated data tends to resemble real data very \n> little in distribution patterns and data types. 
This is one of the \n> limitations of PGBench.\n\nOkay, from this it sounds like what we need is information on the data\ntypes typically used for real world applications and information on\nthe the distribution patterns for each type (the latter could get\nquite complex and varied, I'm sure, but since we're after something\nthat's typical, we only need a few examples).\n\nSo perhaps the first step in this is to write something that will show\nwhat the distribution pattern for data in a table is? With that\ninformation, we *could* randomly generate data that would conform to\nthe statistical patterns seen in the real world.\n\nIn fact, even though the databases you have access to are all\nproprietary, I'm pretty sure their owners would agree to let you run a\nprogram that would gather statistical distribution about it. Then (as\nlong as they agree) you could copy the schema itself, recreate it on\nthe test system, and randomly generate the data.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Tue, 8 Apr 2003 16:22:47 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" }, { "msg_contents": "Kevin,\n\n> So perhaps the first step in this is to write something that will show\n> what the distribution pattern for data in a table is? With that\n> information, we *could* randomly generate data that would conform to\n> the statistical patterns seen in the real world.\n\nSure. But I think it'll be *much* easier just to use portions of the FCC \ndatabase. You want to start working on converting it to PostgreSQL?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 09:10:29 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 filesystem / linux 7.3" } ]
[ { "msg_contents": "Hi all,\n\nI use a case tool and we generate the querys automatically.\nThe query explained is a part of an Report and takes a long time\nto complete (30 ~ 70 seconds). My machine is a Dual Xeon 2 Ghz, 1 Mb DDR,\n3 SCSI HW RAID 5.\nThe tables involved in query have 500.000 rows.\n\nThank´s for any help...\n\nAlexandre\n\n\nexplain analyze SELECT T2.fi08ufemp, T4.es10almtra, T3.fi08MovEst,\nT1.es10qtdgra, T1.es10Tamanh, T1.es10item, T1.es10numdoc, T1.fi08codigo,\nT1.es10tipdoc, T1.es10codemp, T4.es10codalm, T4.es10empa, T1.es10datlan,\nT4.co13CodPro, T4.co13Emp06, T1.es10EmpTam FROM (((ES10T2 T1 LEFT JOIN\nES10T T2 ON T2.es10codemp = T1.es10codemp AND T2.es10datlan =\nT1.es10datlan AND T2.es10tipdoc = T1.es10tipdoc AND T2.fi08codigo =\nT1.fi08codigo AND T2.es10numdoc = T1.es10numdoc) LEFT JOIN FI08T T3 ON\nT3.fi08ufemp = T2.fi08ufemp AND T3.fi08codigo =T1.fi08codigo) LEFT JOIN\nES10T1 T4 ON T4.es10codemp = T1.es10codemp AND T4.es10datlan =\nT1.es10datlan AND T4.es10tipdoc = T1.es10tipdoc AND T4.fi08codigo =\nT1.fi08codigo AND T4.es10numdoc = T1.es10numdoc AND T4.es10item =\nT1.es10item) WHERE ( T4.co13Emp06 = '1' AND T4.co13CodPro = '16998' AND\nT1.es10datlan >= '2003-02-01'::date ) AND ( T1.es10datlan >=\n'2003-02-01'::date) AND ( T3.fi08MovEst = 'S' ) AND ( T4.es10empa = '1' OR\n( '1' = 0 ) ) AND ( T4.es10codalm = '0' OR T4.es10almtra = '0' OR ( '0'\n= 0 ) ) AND ( T1.es10datlan <= '2003-02-28'::date ) ORDER BY\nT4.co13Emp06, T4.co13CodPro, T1.es10datlan, T4.es10empa, T4.es10codalm,\nT4.es10almtra, T1.es10codemp, T1.es10tipdoc, T1.fi08codigo,\nT1.es10numdoc, T1.es10item;\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=379749.51..379833.81 rows=33722 width=142) (actual\ntime=74031.72..74031.72 rows=0 loops=1)\n Sort Key: t4.co13emp06, t4.co13codpro, t1.es10datlan, t4.es10empa,\nt4.es10codalm, t4.es10almtra, t1.es10codemp, t1.es10tipdoc,\nt1.fi08codigo, t1.es10numdoc, t1.es10item\n -> Nested Loop (cost=1160.89..377213.38 rows=33722 width=142) (actual\ntime=74031.18..74031.18 rows=0 loops=1)\n Filter: ((\"inner\".co13emp06 = 1::smallint) AND\n(\"inner\".co13codpro = 16998) AND (\"inner\".es10empa =\n1::smallint))\n -> Hash Join (cost=1160.89..173492.20 rows=33722 width=99)\n(actual time=35.98..27046.08 rows=33660 loops=1)\n Hash Cond: (\"outer\".fi08codigo = \"inner\".fi08codigo)\n Join Filter: (\"inner\".fi08ufemp = \"outer\".fi08ufemp)\n Filter: (\"inner\".fi08movest = 'S'::bpchar)\n -> Hash Join (cost=1120.19..172524.13 rows=33722\nwidth=86) (actual time=33.64..26566.83 rows=33660 loops=1)\n Hash Cond: (\"outer\".es10datlan = \"inner\".es10datlan)\n Join Filter: ((\"inner\".es10codemp =\n\"outer\".es10codemp) AND (\"inner\".es10tipdoc =\n\"outer\".es10tipdoc) AND (\"inner\".fi08codigo =\n\"outer\".fi08codigo) AND (\"inner\".es10numdoc =\n\"outer\".es10numdoc))\n -> Index Scan using es10t2_ad1 on es10t2 t1 \n(cost=0.00..1148.09 rows=33722 width=51) (actual\ntime=0.08..1885.06 rows=33660 loops=1)\n Index Cond: ((es10datlan >= '2003-02-01'::date)\nAND (es10datlan <= '2003-02-28'::date))\n -> Hash (cost=1109.15..1109.15 rows=4415 width=35)\n(actual time=33.23..33.23 rows=0 loops=1)\n -> Seq Scan on es10t t2 (cost=0.00..1109.15\nrows=4415 width=35) (actual time=0.03..24.63\nrows=4395 loops=1)\n -> Hash (cost=40.16..40.16 
rows=216 width=13) (actual\ntime=1.91..1.91 rows=0 loops=1)\n -> Seq Scan on fi08t t3 (cost=0.00..40.16 rows=216\nwidth=13) (actual time=0.03..1.46 rows=216 loops=1)\n -> Index Scan using es10t1_pkey on es10t1 t4 (cost=0.00..6.01\nrows=1 width=43) (actual time=1.38..1.39 rows=1 loops=33660)\n Index Cond: ((t4.es10codemp = \"outer\".es10codemp) AND\n(t4.es10datlan = \"outer\".es10datlan) AND (t4.es10tipdoc =\n\"outer\".es10tipdoc) AND (t4.fi08codigo =\n\"outer\".fi08codigo) AND (t4.es10numdoc =\n\"outer\".es10numdoc) AND (t4.es10item = \"outer\".es10item))\n Total runtime: 74032.60 msec\n(20 rows)\n\n", "msg_date": "Mon, 31 Mar 2003 18:13:27 -0300 (BRT)", "msg_from": "\"alexandre :: aldeia digital\" <[email protected]>", "msg_from_op": true, "msg_subject": "30-70 seconds query..." }, { "msg_contents": "Uz.ytkownik alexandre :: aldeia digital napisa?:\n> Hi all,\n> \n> I use a case tool and we generate the querys automatically.\n> The query explained is a part of an Report and takes a long time\n> to complete (30 ~ 70 seconds). My machine is a Dual Xeon 2 Ghz, 1 Mb DDR,\n> 3 SCSI HW RAID 5.\n> The tables involved in query have 500.000 rows.\n> \n> Thank´s for any help...\n> \n> Alexandre\n> \n> \n> explain analyze SELECT T2.fi08ufemp, T4.es10almtra, T3.fi08MovEst,\n> T1.es10qtdgra, T1.es10Tamanh, T1.es10item, T1.es10numdoc, T1.fi08codigo,\n> T1.es10tipdoc, T1.es10codemp, T4.es10codalm, T4.es10empa, T1.es10datlan,\n> T4.co13CodPro, T4.co13Emp06, T1.es10EmpTam FROM (((ES10T2 T1 LEFT JOIN\n> ES10T T2 ON T2.es10codemp = T1.es10codemp AND T2.es10datlan =\n> T1.es10datlan AND T2.es10tipdoc = T1.es10tipdoc AND T2.fi08codigo =\n> T1.fi08codigo AND T2.es10numdoc = T1.es10numdoc) LEFT JOIN FI08T T3 ON\n> T3.fi08ufemp = T2.fi08ufemp AND T3.fi08codigo =T1.fi08codigo) LEFT JOIN\n> ES10T1 T4 ON T4.es10codemp = T1.es10codemp AND T4.es10datlan =\n> T1.es10datlan AND T4.es10tipdoc = T1.es10tipdoc AND T4.fi08codigo =\n> T1.fi08codigo AND T4.es10numdoc = T1.es10numdoc AND T4.es10item =\n> T1.es10item) WHERE ( T4.co13Emp06 = '1' AND T4.co13CodPro = '16998' AND\n> T1.es10datlan >= '2003-02-01'::date ) AND ( T1.es10datlan >=\n> '2003-02-01'::date) AND ( T3.fi08MovEst = 'S' ) AND ( T4.es10empa = '1' OR\n> ( '1' = 0 ) ) AND ( T4.es10codalm = '0' OR T4.es10almtra = '0' OR ( '0'\n> = 0 ) ) AND ( T1.es10datlan <= '2003-02-28'::date ) ORDER BY\n> T4.co13Emp06, T4.co13CodPro, T1.es10datlan, T4.es10empa, T4.es10codalm,\n> T4.es10almtra, T1.es10codemp, T1.es10tipdoc, T1.fi08codigo,\n> T1.es10numdoc, T1.es10item;\n> \n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=379749.51..379833.81 rows=33722 width=142) (actual\n> time=74031.72..74031.72 rows=0 loops=1)\n> Sort Key: t4.co13emp06, t4.co13codpro, t1.es10datlan, t4.es10empa,\n> t4.es10codalm, t4.es10almtra, t1.es10codemp, t1.es10tipdoc,\n> t1.fi08codigo, t1.es10numdoc, t1.es10item\n> -> Nested Loop (cost=1160.89..377213.38 rows=33722 width=142) (actual\n> time=74031.18..74031.18 rows=0 loops=1)\n> Filter: ((\"inner\".co13emp06 = 1::smallint) AND\n> (\"inner\".co13codpro = 16998) AND (\"inner\".es10empa =\n> 1::smallint))\n> -> Hash Join (cost=1160.89..173492.20 rows=33722 width=99)\n> (actual time=35.98..27046.08 rows=33660 loops=1)\n> Hash Cond: (\"outer\".fi08codigo = \"inner\".fi08codigo)\n> Join Filter: 
(\"inner\".fi08ufemp = \"outer\".fi08ufemp)\n> Filter: (\"inner\".fi08movest = 'S'::bpchar)\n> -> Hash Join (cost=1120.19..172524.13 rows=33722\n> width=86) (actual time=33.64..26566.83 rows=33660 loops=1)\n> Hash Cond: (\"outer\".es10datlan = \"inner\".es10datlan)\n> Join Filter: ((\"inner\".es10codemp =\n> \"outer\".es10codemp) AND (\"inner\".es10tipdoc =\n> \"outer\".es10tipdoc) AND (\"inner\".fi08codigo =\n> \"outer\".fi08codigo) AND (\"inner\".es10numdoc =\n> \"outer\".es10numdoc))\n> -> Index Scan using es10t2_ad1 on es10t2 t1 \n> (cost=0.00..1148.09 rows=33722 width=51) (actual\n> time=0.08..1885.06 rows=33660 loops=1)\n> Index Cond: ((es10datlan >= '2003-02-01'::date)\n> AND (es10datlan <= '2003-02-28'::date))\n> -> Hash (cost=1109.15..1109.15 rows=4415 width=35)\n> (actual time=33.23..33.23 rows=0 loops=1)\n> -> Seq Scan on es10t t2 (cost=0.00..1109.15\n> rows=4415 width=35) (actual time=0.03..24.63\n> rows=4395 loops=1)\n> -> Hash (cost=40.16..40.16 rows=216 width=13) (actual\n> time=1.91..1.91 rows=0 loops=1)\n> -> Seq Scan on fi08t t3 (cost=0.00..40.16 rows=216\n> width=13) (actual time=0.03..1.46 rows=216 loops=1)\n> -> Index Scan using es10t1_pkey on es10t1 t4 (cost=0.00..6.01\n> rows=1 width=43) (actual time=1.38..1.39 rows=1 loops=33660)\n> Index Cond: ((t4.es10codemp = \"outer\".es10codemp) AND\n> (t4.es10datlan = \"outer\".es10datlan) AND (t4.es10tipdoc =\n> \"outer\".es10tipdoc) AND (t4.fi08codigo =\n> \"outer\".fi08codigo) AND (t4.es10numdoc =\n> \"outer\".es10numdoc) AND (t4.es10item = \"outer\".es10item))\n> Total runtime: 74032.60 msec\n> (20 rows)\n\nIs the query below the same to yours?\n\nexplain analyze\nSELECT T2.fi08ufemp, T4.es10almtra, T3.fi08MovEst,\n T1.es10qtdgra, T1.es10Tamanh, T1.es10item, T1.es10numdoc, T1.fi08codigo,\n T1.es10tipdoc, T1.es10codemp, T4.es10codalm, T4.es10empa, T1.es10datlan,\n T4.co13CodPro, T4.co13Emp06, T1.es10EmpTam\nFROM\n ES10T2 T1\n LEFT JOIN T2 using \n(es10codemp,es10datlan,es10tipdoc,fi08codigo,es10numdoc)\n LEFT JOIN FI08T T3 using (fi08ufemp,fi08codigo)\n LEFT JOIN ES10T1 T4 using \n(es10codemp,es10datlan,es10tipdoc,fi08codigo,es10numdoc,es10item)\nWHERE ( T4.co13Emp06 = '1' AND T4.co13CodPro = '16998' AND\n T1.es10datlan >= '2003-02-01'::date ) AND ( T1.es10datlan >=\n '2003-02-01'::date) AND ( T3.fi08MovEst = 'S' ) AND ( T4.es10empa = '1' OR\n ( '1' = 0 ) ) AND ( T4.es10codalm = '0' OR T4.es10almtra = '0' OR ( '0'\n = 0 ) ) AND ( T1.es10datlan <= '2003-02-28'::date )\nORDER BY\n T4.co13Emp06, T4.co13CodPro, T1.es10datlan, T4.es10empa, T4.es10codalm,\n T4.es10almtra, T1.es10codemp, T1.es10tipdoc, T1.fi08codigo,\n T1.es10numdoc, T1.es10item;\n\nI have some ideas for your query:\n- you can probably change outer joins into inner ones because of your \nwhere clauses\n- it looks like the most selective where clause is on t4. Maybe you \nshould rewrite your query to have T4 first after \"from\"?\nCheck how selective is each your where condition and reorder \"from \n...tables....\" to use your where selectivity.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Tue, 01 Apr 2003 00:15:30 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 30-70 seconds query..." 
}, { "msg_contents": "\"alexandre :: aldeia digital\" <[email protected]> writes:\n> I use a case tool and we generate the querys automatically.\n\n> explain analyze SELECT T2.fi08ufemp, T4.es10almtra, T3.fi08MovEst,\n> T1.es10qtdgra, T1.es10Tamanh, T1.es10item, T1.es10numdoc, T1.fi08codigo,\n> T1.es10tipdoc, T1.es10codemp, T4.es10codalm, T4.es10empa, T1.es10datlan,\n> T4.co13CodPro, T4.co13Emp06, T1.es10EmpTam FROM (((ES10T2 T1 LEFT JOIN\n> ES10T T2 ON T2.es10codemp = T1.es10codemp AND T2.es10datlan =\n> T1.es10datlan AND T2.es10tipdoc = T1.es10tipdoc AND T2.fi08codigo =\n> T1.fi08codigo AND T2.es10numdoc = T1.es10numdoc) LEFT JOIN FI08T T3 ON\n> T3.fi08ufemp = T2.fi08ufemp AND T3.fi08codigo =T1.fi08codigo) LEFT JOIN\n> ES10T1 T4 ON T4.es10codemp = T1.es10codemp AND T4.es10datlan =\n> T1.es10datlan AND T4.es10tipdoc = T1.es10tipdoc AND T4.fi08codigo =\n> T1.fi08codigo AND T4.es10numdoc = T1.es10numdoc AND T4.es10item =\n> T1.es10item) WHERE ( T4.co13Emp06 = '1' AND T4.co13CodPro = '16998' AND\n> T1.es10datlan >= '2003-02-01'::date ) AND ( T1.es10datlan >=\n> '2003-02-01'::date) AND ( T3.fi08MovEst = 'S' ) AND ( T4.es10empa = '1' OR\n> ( '1' = 0 ) ) AND ( T4.es10codalm = '0' OR T4.es10almtra = '0' OR ( '0'\n> = 0 ) ) AND ( T1.es10datlan <= '2003-02-28'::date ) ORDER BY\n> T4.co13Emp06, T4.co13CodPro, T1.es10datlan, T4.es10empa, T4.es10codalm,\n> T4.es10almtra, T1.es10codemp, T1.es10tipdoc, T1.fi08codigo,\n> T1.es10numdoc, T1.es10item;\n\nYour CASE tool isn't doing you any favors, is it :-(.\n\nMostly you need to rearrange the JOIN order into something more efficient.\nI'd guess that joining T1 to T4, then to T3, then to T2 would be the\nway to go here. Also, some study of the WHERE conditions proves that\nall the LEFT JOINs could be reduced to plain joins, because any\nnull-extended row will get discarded by WHERE anyway. That would be a\ngood thing to do to give the planner more flexibility.\n\nPG 7.4 will be better prepared to handle this sort of query, but I don't\nthink it will realize that the T1/T2 left join could be reduced to a\nplain join given these conditions (that requires observing that null T2\nwill lead to null T3 because of the join condition... hmmm, I wonder how\npractical that would be...). Without that deduction, the key step of\ndeciding to join T1/T4 first isn't reachable.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 31 Mar 2003 17:17:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 30-70 seconds query... " }, { "msg_contents": "I said:\n> PG 7.4 will be better prepared to handle this sort of query, but I don't\n> think it will realize that the T1/T2 left join could be reduced to a\n> plain join given these conditions\n\nI take that back --- actually, the algorithm used in CVS tip *does*\ndeduce that all these left joins can be plain joins.\n\nDon't suppose you'd like to experiment with a current snapshot to see\nhow well it does for you?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 31 Mar 2003 17:58:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 30-70 seconds query... 
" }, { "msg_contents": "On Mon, 31 Mar 2003, Tom Lane wrote:\n\n> I said:\n> > PG 7.4 will be better prepared to handle this sort of query, but I don't\n> > think it will realize that the T1/T2 left join could be reduced to a\n> > plain join given these conditions\n> \n> I take that back --- actually, the algorithm used in CVS tip *does*\n> deduce that all these left joins can be plain joins.\n> \n> Don't suppose you'd like to experiment with a current snapshot to see\n> how well it does for you?\n\nThink we can get the authors of the case tool that started this to include \nit? :-)\n\n", "msg_date": "Mon, 31 Mar 2003 16:44:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 30-70 seconds query... " }, { "msg_contents": "Tom,\n\nI will try the current snapshot and I will report in the list.\n\nThanks to Tomasz Myrta too for the help.\n\n\nAlexandre\n\n\n> I said:\n>> PG 7.4 will be better prepared to handle this sort of query, but I\n>> don't think it will realize that the T1/T2 left join could be reduced\n>> to a plain join given these conditions\n>\n> I take that back --- actually, the algorithm used in CVS tip *does*\n> deduce that all these left joins can be plain joins.\n>\n> Don't suppose you'd like to experiment with a current snapshot to see\n> how well it does for you?\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Tue, 1 Apr 2003 09:23:42 -0300 (BRT)", "msg_from": "\"alexandre :: aldeia digital\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 30-70 seconds query..." } ]
[ { "msg_contents": "Hi,\n\nI am getting poor performance from my postgresql (version 7.3.2 compiled\nwith gcc 2.95.2) database when running load tests on my web application.\nThe database works great until I get above 200 concurrent users.  The\nfollowing query runtime will vary from:\n\nexplain analyze select TIMESTAMP, SPM_CONVSRCADDR,\nSPM_CONVDSTADDR, SPM_CONVSRCPORT, SPM_CONVDSTPORT,\nSPM_CONVPROTO, ACTION from FirewallLogs where TimeStamp>=1044939600000\nand TimeStamp<=1047391020000 and spm_subid='462' and fqdn ='bs2@pcp' order by\ntimestamp desc;\n\n                                                              QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=3525.27..3527.49 rows=888 width=47) (actual time=55.65..56.16\nrows=913 loops=1)\n   Sort Key: \"timestamp\"\n   ->  Index Scan using firewalllogsindex on firewalllogs\n(cost=0.00..3481.77 rows=888 width=47) (actual time=0.40..50.33 rows=913\nloops=1)\n         Index Cond: ((fqdn = 'bs2@pcp'::character varying) AND (\"timestamp\"\n>= 1044939600000::bigint) AND (\"timestamp\" <= 1047391020000::bigint))\n         Filter: (spm_subid = 462)\n Total runtime: 57.36 msec\n(6 rows)\n\nto\n\n                                                              QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=3525.27..3527.49 rows=888 width=47) (actual\ntime=2323.79..2324.31 rows=913 loops=1)\n   Sort Key: \"timestamp\"\n   ->  Index Scan using firewalllogsindex on firewalllogs\n(cost=0.00..3481.77 rows=888 width=47) (actual time=0.26..2318.11 rows=913\nloops=1)\n         Index Cond: ((fqdn = 'bs2@pcp'::character varying) AND (\"timestamp\"\n>= 1044939600000::bigint) AND (\"timestamp\" <= 1047391020000::bigint))\n         Filter: (spm_subid = 462)\n Total runtime: 2325.62 msec\n(6 rows)\n\nNOTE: I am only performing select queries - no inserts, deletes or updates.\n\nI am running postgresql on a SunFire v880 with 4 750MHz sparcv9 processors\nwith 8 Gig of RAM running solaris 8. I have 2 tables with 500,000 records\nin each and both tables are indexed. I am connecting to the database\nthrough JDBC using a pool of connections (tried pools of 50, 100, and 200\nwith similar results). When running the load tests, the cpu of the box is\nalways above 60% idle. I have run iostat and I am not seeing any problems\nwith io.\n\nI have tried different size shared_buffers from 4100 to 64000, and I have\nadded the following to the /etc/system file:\n\nset shmsys:shminfo_shmmax=0xffffffff\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=256\nset shmsys:shminfo_shmseg=256\nset semsys:seminfo_semmap=256\nset semsys:seminfo_semmni=512\nset semsys:seminfo_semmns=512\nset semsys:seminfo_semmsl=32\n\nI understand that this could be a problem with the kernel and not postgresql\nbut I am at a loss at what to change to get better performance out of the\ndatabase or the kernel.\n\nAny help would be appreciated.\n", "msg_date": "Tue, 1 Apr 2003 14:31:13 -0500 ", "msg_from": "\"Scott Buchan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql performance on Solaris" },
{ "msg_contents": "\"Scott Buchan\" <[email protected]> writes:\n> I am getting poor performance from my postgresql (version 7.3.2 compiled\n> with gcc 2.95.2) database when running load tests on my web application.\n> The database works great until I get above 200 concurrent users.\n\nHmm ... 
that sounds kinda familiar; you might check the archives for\nsimilar reports from Solaris users. AFAIR we didn't figure out the\nproblem yet, but there's some raw data available.\n\n> When running the load tests, the cpu of the box is always above 60%\n> idle. I have run iostat and I am not seeing any problems with io.\n\n[ scratches head ... ] If the bottleneck isn't CPU, and it isn't I/O,\nthen what could it be? You sure about the above observations? (Does\niostat include swap activity on that platform?)\n\nThe only other idea I can think of is that there's some weird effect in\nthe locking code (which only shows up with lots of concurrent backends)\nsuch that would-be lockers repeatedly fail and sleep when they should\nhave gotten the lock. If you can figure out how to tell the difference\nbetween a backend waiting for disk I/O and one waiting for a semaphore\nor sleeping, it'd be interesting to see what the majority of the\nbackends are doing.\n\nAnother way to try to gather some data is to attach to one of the\nbackend processes with a debugger, and just stop it to get a stack trace\nevery so often. If the stack traces tend to point to the same place\nthat would give some info about the bottleneck.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 01 Apr 2003 15:21:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance on Solaris " }, { "msg_contents": "On Tue, 1 Apr 2003, Tom Lane wrote:\n\n> The only other idea I can think of is that there's some weird effect in\n> the locking code (which only shows up with lots of concurrent backends)\n> such that would-be lockers repeatedly fail and sleep when they should\n> have gotten the lock. If you can figure out how to tell the difference\n> between a backend waiting for disk I/O and one waiting for a semaphore\n> or sleeping, it'd be interesting to see what the majority of the\n> backends are doing.\n\nI was thinking along the lines of it being something like the old Linux \nkernel had with apache and other programs with waking all the processes. \nIt could be that something about Solaris is meaning that every backend \nprocess, no matter how idle they are, get \"touched\" every time something \nis done. Just guessing.\n\n", "msg_date": "Tue, 1 Apr 2003 14:42:59 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql performance on Solaris " } ]
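One thing that can be checked from plain SQL while this is happening, at least on 7.3 where the pg_locks system view is available, is whether the idle-looking backends are actually queued up behind heavyweight locks. This is only a partial answer to the question above: spinlock, LWLock and semaphore waits will not show up here, only ungranted regular locks will.

-- Minimal sketch: backends currently waiting for a heavyweight lock.
SELECT * FROM pg_locks WHERE NOT granted;

If this comes back empty while throughput is collapsing, that points back toward the lower-level locking or scheduling effects discussed above.
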
[ { "msg_contents": "Folks,\n\nPlease pardon the cross-posting.\n\nA small group of us on the Performance list were discussing the first steps \ntoward constructing a comprehensive Postgresql installation benchmarking \ntool, mostly to compare different operating systems and file systemsm but \nlater to be used as a foundation for a tuning wizard. \n\nTo do this, we need one or more real (not randomly generated*) medium-large \ndatabase which is or can be BSD-licensed (data AND schema). This database \nmust have:\n\n1) At least one \"main\" table with 12+ columns and 100,000+ rows (each).\n2) At least 10-12 additional tables of assorted sizes, at least half of which \nshould have Foriegn Key relationships to the main table(s) or each other.\n3) At least one large text or varchar field among the various tables.\n\nIn addition, the following items would be helpful, but are not required:\n4) Views, triggers, and functions built on the database\n5) A query log of database activity to give us sample queries to work with.\n6) Some complex data types, such as geometric, network, and/or custom data \ntypes.\n\nThanks for any leads you can give me!\n\n(* To forestall knee-jerk responses: Randomly generated data does not look or \nperform the same as real data in my professional opinion, and I'm the one \nwriting the test scripts.)\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 3 Apr 2003 09:55:16 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "OSS database needed for testing" }, { "msg_contents": "I don't know that it meets your criteria, but.....\n\nI have a set of scripts and a program that will load the US Census TigerUA\ndatabase into PostgreSQL. The thing is absolutely freak'n huge. I forget\nwhich, but it is either 30g or 60g of data excluding indexes.\n\nAlso, if that is too much, I have a similar setup to load the FreeDB music\ndatabase, from www.freedb.org. It has roughly 670,000 entries in \"cdtitles\"\nand 8 million entries in \"cdsongs.\"\n\nEither one of which, I would be willing to send you the actual DB on cd(s)\nif you pay for postage and media. \n \n\n> Folks,\n> \n> Please pardon the cross-posting.\n> \n> A small group of us on the Performance list were discussing the first\n> steps toward constructing a comprehensive Postgresql installation\n> benchmarking tool, mostly to compare different operating systems and\n> file systemsm but later to be used as a foundation for a tuning\n> wizard. \n> \n> To do this, we need one or more real (not randomly generated*)\n> medium-large database which is or can be BSD-licensed (data AND\n> schema). This database must have:\n> \n> 1) At least one \"main\" table with 12+ columns and 100,000+ rows (each).\n> 2) At least 10-12 additional tables of assorted sizes, at least half of\n> which should have Foriegn Key relationships to the main table(s) or\n> each other. 3) At least one large text or varchar field among the\n> various tables.\n> \n> In addition, the following items would be helpful, but are not\n> required: 4) Views, triggers, and functions built on the database\n> 5) A query log of database activity to give us sample queries to work\n> with. 
6) Some complex data types, such as geometric, network, and/or\n> custom data types.\n> \n> Thanks for any leads you can give me!\n> \n> (* To forestall knee-jerk responses: Randomly generated data does not\n> look or perform the same as real data in my professional opinion, and\n> I'm the one writing the test scripts.)\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 1: subscribe and unsubscribe\n> commands go to [email protected]\n\n", "msg_date": "Thu, 3 Apr 2003 13:26:01 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OSS database needed for testing" }, { "msg_contents": "On Thu, Apr 03, 2003 at 13:26:01 -0500,\n [email protected] wrote:\n> I don't know that it meets your criteria, but.....\n> \n> I have a set of scripts and a program that will load the US Census TigerUA\n> database into PostgreSQL. The thing is absolutely freak'n huge. I forget\n> which, but it is either 30g or 60g of data excluding indexes.\n\nAre the data model or the loading scripts available publicly?\nI have the tiger data and a program that uses it to convert addresses\nto latitude and longitude, but I don't really like the program and\nwas thinking about trying to load the data into a database and do\nqueries against the database to find location.\n\n", "msg_date": "Thu, 3 Apr 2003 15:01:47 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [HACKERS] OSS database needed for testing" }, { "msg_contents": "\n\nBruno Wolff III wrote:\n\n>On Thu, Apr 03, 2003 at 13:26:01 -0500,\n> [email protected] wrote:\n> \n>\n>>I don't know that it meets your criteria, but.....\n>>\n>>I have a set of scripts and a program that will load the US Census TigerUA\n>>database into PostgreSQL. The thing is absolutely freak'n huge. I forget\n>>which, but it is either 30g or 60g of data excluding indexes.\n>> \n>>\n>\n>Are the data model or the loading scripts available publicly?\n>I have the tiger data and a program that uses it to convert addresses\n>to latitude and longitude, but I don't really like the program and\n>was thinking about trying to load the data into a database and do\n>queries against the database to find location.\n>\n> \n>\nI have a set of scripts, SQL table defs, a small C program, along with a \nset of field with files that loads it into PGSQL using the \"copy from \nstdin\" It works fairly well, but takes a good long time to load it all.\n\nShould I put it in the download section of my website?\n\n", "msg_date": "Thu, 03 Apr 2003 17:19:13 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [HACKERS] OSS database needed for testing" }, { "msg_contents": "On Thu, Apr 03, 2003 at 17:19:13 -0500,\n mlw <[email protected]> wrote:\n> \n> I have a set of scripts, SQL table defs, a small C program, along with a \n> set of field with files that loads it into PGSQL using the \"copy from \n> stdin\" It works fairly well, but takes a good long time to load it all.\n> \n> Should I put it in the download section of my website?\n\nYes. I would be interested in looking at it even if I don't use exactly\nthe same way to do stuff. Taking a logn time to load the data into the\ndatabase isn't a big deal for me. 
reading through the tiger (and FIPS) data\ndocumentation it seemed like there might be some gotchas in unusual cases\nand I am not sure the google contest program really handled things right\nso I would like to see another implementation. I am also interested in the\ndata model as that will save me some time.\n\n", "msg_date": "Thu, 3 Apr 2003 16:20:37 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [HACKERS] OSS database needed for testing" }, { "msg_contents": "Hi Josh,\n\nLet me vote on the Tiger data. I used to use this database. It is public,\nupdated by the government, VERY useful in own right, it works well with the\nearthdistance contribution, a real world database a lot of us use and I\nthink you can put together some killer scripts on it.\n\nCan I vote twice? <g>\n\n Jeff\n\n----- Original Message -----\nFrom: <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>; <[email protected]>;\n<[email protected]>\nSent: Thursday, April 03, 2003 1:26 PM\nSubject: Re: [PERFORM] [HACKERS] OSS database needed for testing\n\n\n> I don't know that it meets your criteria, but.....\n>\n> I have a set of scripts and a program that will load the US Census TigerUA\n> database into PostgreSQL. The thing is absolutely freak'n huge. I forget\n> which, but it is either 30g or 60g of data excluding indexes.\n>\n> Also, if that is too much, I have a similar setup to load the FreeDB music\n> database, from www.freedb.org. It has roughly 670,000 entries in\n\"cdtitles\"\n> and 8 million entries in \"cdsongs.\"\n>\n> Either one of which, I would be willing to send you the actual DB on cd(s)\n> if you pay for postage and media.\n>\n>\n> > Folks,\n> >\n> > Please pardon the cross-posting.\n> >\n> > A small group of us on the Performance list were discussing the first\n> > steps toward constructing a comprehensive Postgresql installation\n> > benchmarking tool, mostly to compare different operating systems and\n> > file systemsm but later to be used as a foundation for a tuning\n> > wizard.\n> >\n> > To do this, we need one or more real (not randomly generated*)\n> > medium-large database which is or can be BSD-licensed (data AND\n> > schema). This database must have:\n> >\n> > 1) At least one \"main\" table with 12+ columns and 100,000+ rows (each).\n> > 2) At least 10-12 additional tables of assorted sizes, at least half of\n> > which should have Foriegn Key relationships to the main table(s) or\n> > each other. 3) At least one large text or varchar field among the\n> > various tables.\n> >\n> > In addition, the following items would be helpful, but are not\n> > required: 4) Views, triggers, and functions built on the database\n> > 5) A query log of database activity to give us sample queries to work\n> > with. 
6) Some complex data types, such as geometric, network, and/or\n> > custom data types.\n> >\n> > Thanks for any leads you can give me!\n> >\n> > (* To forestall knee-jerk responses: Randomly generated data does not\n> > look or perform the same as real data in my professional opinion, and\n> > I'm the one writing the test scripts.)\n> >\n> > --\n> > -Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> >\n> >\n> > ---------------------------(end of\n> > broadcast)--------------------------- TIP 1: subscribe and unsubscribe\n> > commands go to [email protected]\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Thu, 3 Apr 2003 21:43:27 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OSS database needed for testing" }, { "msg_contents": "Jeff,\n\n> Let me vote on the Tiger data. I used to use this database. It is public,\n> updated by the government, VERY useful in own right, it works well with the\n> earthdistance contribution, a real world database a lot of us use and I\n> think you can put together some killer scripts on it.\n\nWe'd have to use a subset of it. 30G is a little larger than anything we \nwant people to download as a test package.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Thu, 3 Apr 2003 20:29:12 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] OSS database needed for testing" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> We'd have to use a subset of it. 30G is a little larger than anything we \n> want people to download as a test package.\n\nYeah, it seems a bit over the top ...\n\nThe FCC database sounded like an interesting alternative to me.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 03 Apr 2003 23:54:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OSS database needed for testing " }, { "msg_contents": "\n\nJosh Berkus wrote:\n\n>Jeff,\n>\n> \n>\n>>Let me vote on the Tiger data. I used to use this database. It is public,\n>>updated by the government, VERY useful in own right, it works well with the\n>>earthdistance contribution, a real world database a lot of us use and I\n>>think you can put together some killer scripts on it.\n>> \n>>\n>\n>We'd have to use a subset of it. 30G is a little larger than anything we \n>want people to download as a test package.\n>\n> \n>\nActually, come to think of it, the TigerUA DB is in chunks. You can use \nas much or as little as you want. I'll put the loader scripts on my \ndownload page tonight.\n\nHere is the home page for the data:\nhttp://www.census.gov/geo/www/tiger/tigerua/ua_tgr2k.html\n \n\n", "msg_date": "Fri, 04 Apr 2003 07:21:49 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OSS database needed for testing" }, { "msg_contents": "Absolutely. We could just use one large state or several small ones and let\nfolks download the whole thing if they wanted. Using that technique you\ncould control the size of the test quite closely and still make something\npotentially quite valuable as a contribution beyond the bench.\n\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Jeffrey D. 
Brower\" <[email protected]>; <[email protected]>\nCc: <[email protected]>\nSent: Thursday, April 03, 2003 11:29 PM\nSubject: Re: [PERFORM] [HACKERS] OSS database needed for testing\n\n\n> Jeff,\n>\n> > Let me vote on the Tiger data. I used to use this database. It is\npublic,\n> > updated by the government, VERY useful in own right, it works well with\nthe\n> > earthdistance contribution, a real world database a lot of us use and I\n> > think you can put together some killer scripts on it.\n>\n> We'd have to use a subset of it. 30G is a little larger than anything we\n> want people to download as a test package.\n>\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 07:45:29 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] OSS database needed for testing" }, { "msg_contents": "Jeff, Mlw, \n\n> Absolutely. We could just use one large state or several small ones and\n> let folks download the whole thing if they wanted. Using that technique\n> you could control the size of the test quite closely and still make\n> something potentially quite valuable as a contribution beyond the bench.\n\nHold on a second. The FCC database is still a better choice because it is \nmore complex with a carefully defined schema. The Tiger database would be \ngood for doing tests of type 1 and 3, but not for tests of types 2 and 4.\n\nIt would certainly be interesting to use the Tiger database as the basis for \nan additional type of test:\n\n6) Very Large Data Set: querying, then updating, 300+ selected rows from a \n2,000,000 + row table.\n\n... but I still see the FCC database as our best candidate for the battery of \ntests 1-5.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 08:09:22 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] OSS database needed for testing" }, { "msg_contents": "I think you got me there. I have to agree with both points.\n\n(Besides, you are the one coding this thing and I think you understand it\nbetter than I do.)\n\nLet me know if I can help.\n\n Jeff\n\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Jeffrey D. Brower\" <[email protected]>; <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Friday, April 04, 2003 11:09 AM\nSubject: Re: [PERFORM] [HACKERS] OSS database needed for testing\n\n\n> Jeff, Mlw,\n>\n> > Absolutely. We could just use one large state or several small ones and\n> > let folks download the whole thing if they wanted. Using that technique\n> > you could control the size of the test quite closely and still make\n> > something potentially quite valuable as a contribution beyond the bench.\n>\n> Hold on a second. The FCC database is still a better choice because it is\n> more complex with a carefully defined schema. The Tiger database would\nbe\n> good for doing tests of type 1 and 3, but not for tests of types 2 and 4.\n>\n> It would certainly be interesting to use the Tiger database as the basis\nfor\n> an additional type of test:\n>\n> 6) Very Large Data Set: querying, then updating, 300+ selected rows from a\n> 2,000,000 + row table.\n>\n> ... 
but I still see the FCC database as our best candidate for the battery\nof\n> tests 1-5.\n>\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Fri, 4 Apr 2003 11:42:33 -0500", "msg_from": "\"Jeffrey D. Brower\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OSS database needed for testing" } ]
[ { "msg_contents": "Josh Berkus wrote:\n> 1) At least one \"main\" table with 12+ columns and 100,000+ rows\n(each).\n> 2) At least 10-12 additional tables of assorted sizes, at least half\nof\n> which\n> should have Foriegn Key relationships to the main table(s) or each\nother.\n> 3) At least one large text or varchar field among the various tables.\n> \n> In addition, the following items would be helpful, but are not\nrequired:\n> 4) Views, triggers, and functions built on the database\n> 5) A query log of database activity to give us sample queries to work\n> with.\n> 6) Some complex data types, such as geometric, network, and/or custom\ndata\n> types.\n> \nMight I recommend the FCC database of transmitters? It's publicly\navailable via anonymous FTP, medium-largish with tables running 100k ->\n1m+ records, and demonstrates many interesting test cases. For example,\nlat/lon spatial queries (RTree vs. GIST) can be tested with a decent\nvolume. It is also a good example of the use of schemas.\nEmail me if you want info.\n\nFormat is pipe delimited (non quoted), and data turnover is < 1% a week.\n\nMerlin\n\n", "msg_date": "Thu, 3 Apr 2003 13:12:12 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] OSS database needed for testing" } ]
[ { "msg_contents": "\ncreate table baz (event text, level int);\n\ninsert into baz values ('x',1);\ninsert into baz values ('x',2);\ninsert into baz values ('x',3);\ninsert into baz values ('y',2);\ninsert into baz values ('y',3);\ninsert into baz values ('y',3);\n\nselect * from baz;\n\n event | level \n-------+-------\n x | 1\n x | 2\n x | 3\n y | 2\n y | 3\n y | 3\n(6 rows)\n\n\nI want to know how many ones, twos, and threes there are for each event:\n\nselect \n\tevent, \n\t(select count(*) from baz a \n\t\twhere level = 1 and a.event=baz.event) as ones, \n\t(select count(*) from baz a \n\t\twhere level = 2 and a.event=baz.event) as twos, \n\t(select count(*) from baz a \n\t\twhere level = 3 and a.event=baz.event) as threes\nfrom\n\t baz\ngroup by \n\tevent;\n\nwhich gives me:\n\n event | ones | twos | threes \n-------+------+------+--------\n x | 1 | 1 | 1\n y | 0 | 1 | 2\n(2 rows)\n\n\nwhich is fine, but I am wondering if there is a better way to do this?\nI'd mainly like to reduce the number of subqueries involved. Another\nimprovement would be to not have to explicitly query for each level,\nthough this isn't as big since I know the range of levels in advance\n(famous last words for a dba :-) \n\nThanks in advance,\n\nRobert Treat\n\n", "msg_date": "03 Apr 2003 16:02:04 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "can i make this sql query more efficiant?" }, { "msg_contents": "if you're allowed to change the resultset structure, you could do:\nSELECT\n event,\n level, \n count(*)\nFROM\n baz\nGROUP BY\n event,\n level; \n\n event | level | count\n-------+-------+-------\n x | 1 | 1\n x | 2 | 1\n x | 3 | 1\n y | 2 | 1\n y | 3 | 2\n(5 rows)\n\nof course it doesn't show you the rows where the count is zero.\nif you need the zeros, do this\n\nSELECT \n EL.event,\n EL.level, \n count(baz.*)\nFROM\n (\n SELECT DISTINCT\n B1.event, B2.level \n FROM \n baz B1 \n CROSS JOIN baz B2\n ) EL\n LEFT JOIN baz ON (baz.event=EL.event AND baz.level=EL.level) \nGROUP BY\n EL.event,\n EL.level; \n\n event | level | count\n-------+-------+-------\n x | 1 | 1\n x | 2 | 1\n x | 3 | 1\n y | 1 | 0\n y | 2 | 1\n y | 3 | 2\n(6 rows)\n\nhope it helps.\n\nOn Thursday 03 April 2003 18:02, Robert Treat wrote:\n> create table baz (event text, level int);\n>\n> insert into baz values ('x',1);\n> insert into baz values ('x',2);\n> insert into baz values ('x',3);\n> insert into baz values ('y',2);\n> insert into baz values ('y',3);\n> insert into baz values ('y',3);\n>\n> select * from baz;\n>\n> event | level\n> -------+-------\n> x | 1\n> x | 2\n> x | 3\n> y | 2\n> y | 3\n> y | 3\n> (6 rows)\n>\n>\n> I want to know how many ones, twos, and threes there are for each event:\n>\n> select\n> \tevent,\n> \t(select count(*) from baz a\n> \t\twhere level = 1 and a.event=baz.event) as ones,\n> \t(select count(*) from baz a\n> \t\twhere level = 2 and a.event=baz.event) as twos,\n> \t(select count(*) from baz a\n> \t\twhere level = 3 and a.event=baz.event) as threes\n> from\n> \t baz\n> group by\n> \tevent;\n>\n> which gives me:\n>\n> event | ones | twos | threes\n> -------+------+------+--------\n> x | 1 | 1 | 1\n> y | 0 | 1 | 2\n> (2 rows)\n>\n>\n> which is fine, but I am wondering if there is a better way to do this?\n> I'd mainly like to reduce the number of subqueries involved. 
Another\n> improvement would be to not have to explicitly query for each level,\n> though this isn't as big since I know the range of levels in advance\n> (famous last words for a dba :-)\n>\n> Thanks in advance,\n>\n> Robert Treat\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly", "msg_date": "Thu, 3 Apr 2003 19:15:15 -0300", "msg_from": "Franco Bruno Borghesi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can i make this sql query more efficiant?" }, { "msg_contents": "<cut>\n> select \n> \tevent, \n> \t(select count(*) from baz a \n> \t\twhere level = 1 and a.event=baz.event) as ones, \n> \t(select count(*) from baz a \n> \t\twhere level = 2 and a.event=baz.event) as twos, \n> \t(select count(*) from baz a \n> \t\twhere level = 3 and a.event=baz.event) as threes\n> from\n> \t baz\n> group by \n> \tevent;\n> \n> which gives me:\n> \n> event | ones | twos | threes \n> -------+------+------+--------\n> x | 1 | 1 | 1\n> y | 0 | 1 | 2\n> (2 rows)\n<cut>\nWhat about this:\nselect\n event,\n sum(case when level=1 then 1 else 0 end) as ones,\n sum(case when level=2 then 1 else 0 end) as twos,\n sum(case when level=3 then 1 else 0 end) as threes\nfrom baz\ngroup by event;\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Fri, 4 Apr 2003 08:02:09 +0900", "msg_from": "\"Tomasz Myrta\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can i make this sql query more efficiant?" }, { "msg_contents": "On 03 Apr 2003 16:02:04 -0500, Robert Treat\n<[email protected]> wrote:\n>select \n>\tevent, \n>\t(select count(*) from baz a \n>\t\twhere level = 1 and a.event=baz.event) as ones, \n>\t(select count(*) from baz a \n>\t\twhere level = 2 and a.event=baz.event) as twos, \n>\t(select count(*) from baz a \n>\t\twhere level = 3 and a.event=baz.event) as threes\n>from\n>\t baz\n>group by \n>\tevent;\n\n>which is fine, but I am wondering if there is a better way to do this?\n>I'd mainly like to reduce the number of subqueries involved.\n\nSELECT event,\n SUM (CASE level WHEN 1 THEN 1 ELSE 0 END) AS ones,\n SUM (CASE level WHEN 2 THEN 1 ELSE 0 END) AS twos,\n SUM (CASE level WHEN 3 THEN 1 ELSE 0 END) AS threes\n FROM baz\n GROUP BY event;\n\n> Another\n>improvement would be to not have to explicitly query for each level,\n\nThis might be a case for a clever set returning function, but that's\nnot my realm. Wait for Joe to jump in ;-)\n\nServus\n Manfred\n\n", "msg_date": "Fri, 04 Apr 2003 01:13:18 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can i make this sql query more efficiant?" }, { "msg_contents": "Tomasz,\n\n> What about this:\n> select\n> event,\n> sum(case when level=1 then 1 else 0 end) as ones,\n> sum(case when level=2 then 1 else 0 end) as twos,\n> sum(case when level=3 then 1 else 0 end) as threes\n> from baz\n> group by event;\n\nThat version is only more efficient for small data sets. I've generally \nfound that case statements are slower than subselects for large data sets. 
\nYMMV.\n\nBTW, while it won't be faster, Joe Conway's crosstab function in /tablefunc \ndoes this kind of transformation.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 08:16:01 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can i make this sql query more efficiant?" }, { "msg_contents": "On Fri, 4 Apr 2003 08:16:01 -0800, Josh Berkus <[email protected]>\nwrote:\n>That version is only more efficient for small data sets. I've generally \n>found that case statements are slower than subselects for large data sets. \n\nI'd be honestly interested in the circumstances where you made that\nobservation.\n\n>YMMV.\n\nYes, it does :-) Out of curiosity I did a few tests with PG 7.2 on my\nold notebook:\n\nCREATE TABLE baz (event int, level int);\nINSERT INTO baz SELECT (100*random()+0.5), (3*random()+0.5);\nINSERT INTO baz SELECT (100*random()+0.5), (3*random()+0.5) FROM baz;\n...\nINSERT INTO baz SELECT (100*random()+0.5), (3*random()+0.5) FROM baz;\nCREATE INDEX baz_event ON baz(event);\nANALYSE baz;\n\nSELECT event,\n SUM (CASE level WHEN 1 THEN 1 ELSE 0 END) AS ones,\n SUM (CASE level WHEN 2 THEN 1 ELSE 0 END) AS twos,\n SUM (CASE level WHEN 3 THEN 1 ELSE 0 END) AS threes\n FROM baz GROUP BY event;\n\nSELECT event,\n (SELECT count(*) FROM baz a\n WHERE level = 1 AND a.event=baz.event) AS ones,\n (SELECT count(*) FROM baz a\n WHERE level = 2 and a.event=baz.event) AS twos,\n (SELECT count(*) FROM baz a\n WHERE level = 3 and a.event=baz.event) AS threes\n FROM baz GROUP BY event;\n\ntuples case subselect\n 8K 718.48 msec 16199.88 msec\n 32K 6168.18 msec 74742.85 msec\n128K 25072.34 msec 304585.61 msec\n\nCLUSTER baz_event ON baz; ANALYSE baz;\nThis changes the subselect plan from seq scan to index scan.\n\n128K 12116.07 msec 17530.85 msec\n\nAdd 128K more tuples, so that only the first half of the relation is\nclustered.\n\n256K 45663.35 msec 117748.23 msec\n\nCLUSTER baz_event ON baz; ANALYSE baz;\n\n256K 23691.81 msec 35138.26 msec\n\nMaybe it is just the data distribution (100 events, 3 levels,\nthousands of tuples) that makes CASE look faster than subselects ...\n \nServus\n Manfred\n\n", "msg_date": "Fri, 04 Apr 2003 21:03:08 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] can i make this sql query more efficiant?" }, { "msg_contents": "Manfred,\n\n> I'd be honestly interested in the circumstances where you made that\n> observation.\n\nHmmmm ... one of my database involves a \"crosstab\" converstion where there \nwere 13 possible values, and the converted table is heavily indexed. For \nthat case, I found using CASE statements to be slower.\n\nFor your example, how do the statistics change if you increase the number of \nlevels to 15 and put an index on them?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 11:26:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] can i make this sql query more efficiant?" 
}, { "msg_contents": "On Fri, 4 Apr 2003 11:26:14 -0800, Josh Berkus <[email protected]>\nwrote:\n>For your example, how do the statistics change if you increase the number of \n>levels to 15 and put an index on them?\n\nCREATE TABLE baz (event int, level int);\n\nINSERT INTO baz SELECT (100*random()+0.5), (15*random()+0.5);\nINSERT INTO baz SELECT (100*random()+0.5), (15*random()+0.5) FROM baz;\n...\nINSERT INTO baz SELECT (100*random()+0.5), (15*random()+0.5) FROM baz;\nANALYSE baz;\nCREATE INDEX baz_e ON baz(event);\nCREATE INDEX baz_l ON baz(level);\nCREATE INDEX baz_el ON baz(event, level);\nCREATE INDEX baz_le ON baz(level, event);\n\ntup cluster case subsel\n 8K - 1219.90 msec 70605.93 msec (seq scan)\n 8K - 3087.30 msec (seq scan off)\n\n 16K - 3861.87 msec 161902.36 msec (seq scan)\n 16K - 31498.76 msec (seq scan off)\n 16K event 2407.72 msec 5773.12 msec\n 16K level 2298.08 msec 32752.43 msec\n 16K l, e 2318.60 msec 3184.84 msec\n\n 32K - 6571.57 msec 7381.22 msec\n 32K e, l 4584.97 msec 3429.94 msec\n 32K l, e 4552.00 msec 64782.59 msec\n 32K l, e 4552.98 msec 3544.32 msec (baz_l dropped)\n\n 64K - 17275.73 msec 26525.24 msec\n 64K - 17150.16 msec 26195.87 msec (baz_le dropped)\n 64K - 17286.29 msec 656046.24 msec (baz_el dropped)\n 64K e, l 9137.88 msec 21809.52 msec\n 64K e, l 9183.25 msec 6412.97 msec (baz_e dropped)\n 64K e, l 11690.28 msec 10022.44 msec (baz_el dropped)\n 64K e, l 11740.54 msec 643046.39 msec (baz_le dropped)\n 64K l, e 9437.65 msec 133368.20 msec\n 64K l, e 9119.48 msec 6722.00 msec (baz_l dropped)\n 64K l, e 9294.68 msec 6663.15 msec (baz_le dropped)\n 64K l, e 9259.35 msec 639754.27 msec (baz_el dropped)\n\n256K - 59809.69 msec 120755.78 msec\n256K - 59809.69 msec 114133.34 msec (baz_le dropped)\n256K e, l 38506.41 msec 88531.54 msec\n256K e, l 49427.43 msec 43544.03 msec (baz_e dropped)\n256K l, e 56821.23 msec 575850.14 msec\n256K l, e 57462.78 msec 67911.41 msec (baz_l dropped)\n\nSo yes, there are cases where subselect is faster than case, but case\nis much more robust regarding correlation and indices.\n \nServus\n Manfred\n\n", "msg_date": "Sat, 05 Apr 2003 04:08:22 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] can i make this sql query more efficiant?" } ]
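Since contrib/tablefunc is mentioned above without an example, here is a minimal sketch of the crosstab approach against the same baz table. It assumes the tablefunc module has been installed in the database and uses the generic crosstab(text) form, which fills the output columns positionally: the inner query must be ordered, and an event that is missing one of the levels will have its remaining counts shifted left rather than reported as zero, which is one reason the SUM/CASE form above is often the more robust choice.

-- Minimal sketch: pivot baz with contrib/tablefunc's crosstab(text).
SELECT *
FROM crosstab(
    'SELECT event, level, count(*)::int
       FROM baz
      GROUP BY event, level
      ORDER BY event, level'
) AS ct(event text, ones int, twos int, threes int);
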
[ { "msg_contents": "Up until a few days ago I have been running Postgresl 7.2.3 with Tsearch\nfrom the contrib dir, but at various times the performance of the\ndatabase would suddenly and rapidly deteriate so that queries which\npreviously took 500ms then took 8 or 9 seconds.\n\nThe only cure is a backup and restore of the database, vacuuming and\nanalysing does nothing. I even tried rebuilding all indexes once which\ndidn't seem to help.\n\nThis was an annoying but intermittent thing, which happened the last\ntime this Wednesday. Since I was doing a backup and restore anyway, I\ndecided to upgrade to 7.3.2 in the hope this might fix the annoying\nproblem, however it has made it WAY worse.\n\nRather than going a few weeks (and sometimes months) in between having\nto use this fix, I am now having to do it almost every single day. I'm\nnow lucky if it lasts 24 hours before it brings my website to a total\ncrawl.\n\nThere is nothing special about my database other than the fact that I\nuse the Tsearch addon. Now if I go and do a bit update to the Tsearch\nindexes on a table, with for example:\n\n\tUPDATE tblmessages SET strmessageidx=txt2txtidx(strheading || '\n' || strmessage);\n\nThen that instantly brings the whole database to a crawl, which no\namount of index rebuilding, vacuuming and analysing helps.\n\nHelp! (And sorry if this is the wrong list)\n\n\n\nYours Unwhettedly,\nRobert John Shepherd.\n\nEditor\nDVD REVIEWER\nThe UK's BIGGEST Online DVD Magazine\nhttp://www.dvd.reviewer.co.uk\n\nFor a copy of my Public PGP key, email: [email protected] \n\n", "msg_date": "Fri, 4 Apr 2003 11:49:25 +0100", "msg_from": "\"Robert John Shepherd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Rapid deteriation of performance (might be caused by tsearch?) in\n\t7.3.2" }, { "msg_contents": "\nOn Fri, 4 Apr 2003, Robert John Shepherd wrote:\n\n> Up until a few days ago I have been running Postgresl 7.2.3 with Tsearch\n> from the contrib dir, but at various times the performance of the\n> database would suddenly and rapidly deteriate so that queries which\n> previously took 500ms then took 8 or 9 seconds.\n\nHmm, what are the before and after explain analyze results? Also, what\nare your conf settings for shared buffers, sort memory and the fsm\nparameters?\n\n> The only cure is a backup and restore of the database, vacuuming and\n> analysing does nothing. I even tried rebuilding all indexes once which\n> didn't seem to help.\n\nDid you do a regular vacuum or vacuum full? If only the former, it's\npossible that you need to either vacuum more frequently and/or raise the\nfree space map settings in your configuration file.\n\nWhat does vacuum full verbose <table>; give you for the tables involved?\n\n> Help! (And sorry if this is the wrong list)\npgsql-performance is a better list, so I've replied to there. You'll\nprobably need to join in order to reply to list.\n\n", "msg_date": "Fri, 4 Apr 2003 06:50:41 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Rapid deteriation of performance (might be caused by" }, { "msg_contents": "\"Robert John Shepherd\" <[email protected]> writes:\n> Help! (And sorry if this is the wrong list)\n\nYes, it's the wrong list. pgsql-performance would be the place to\ndiscuss this. We can't help you anyway without more details: show us\nthe EXPLAIN ANALYZE results for some of the slow queries. 
(Ideally\nI'd like to see EXPLAIN ANALYZE for the same queries in both fast\nand slow states ...)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 04 Apr 2003 09:53:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rapid deteriation of performance (might be caused by tsearch?) in\n\t7.3.2" }, { "msg_contents": "> > Up until a few days ago I have been running Postgresl 7.2.3 with\nTsearch\n> > from the contrib dir, but at various times the performance of the\n> > database would suddenly and rapidly deteriate so that queries which\n> > previously took 500ms then took 8 or 9 seconds.\n\n> Hmm, what are the before and after explain analyze results? Also,\nwhat\n> are your conf settings for shared buffers, sort memory and the fsm\n> parameters?\n\nshared_buffers = 40960\nsort_mem = 20480\n#max_fsm_relations = 1000\n#max_fsm_pages = 10000\n\nAs you can see I've not uncommented or touched the fsm parameters, I\nhave no idea what they do. Optimisation wise I have only played with\nshared_buffers, sort_mem and max_connections.\n\n\n> > The only cure is a backup and restore of the database\n\n> Did you do a regular vacuum or vacuum full? If only the former, it's\n> possible that you need to either vacuum more frequently and/or raise\nthe\n> free space map settings in your configuration file.\n\nI've been running this daily:\n\n\tvacuumdb -h localhost -a -z\n\nShould I be using the full switch then?\n\nI'll get back to you on the other questions if you think they are still\nneeded.\n\n\n> pgsql-performance is a better list, so I've replied to there. You'll\n> probably need to join in order to reply to list.\n\nThanks, especially for not shouting at me heh, this is stressful enough\nas it is.\n\n\nYours Unwhettedly,\nRobert John Shepherd.\n\nEditor\nDVD REVIEWER\nThe UK's BIGGEST Online DVD Magazine\nhttp://www.dvd.reviewer.co.uk\n\nFor a copy of my Public PGP key, email: [email protected] \n\n", "msg_date": "Fri, 4 Apr 2003 16:19:34 +0100", "msg_from": "\"Robert John Shepherd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Rapid deteriation of performance (might be caused by\n\ttsearch?) in 7.3.2" }, { "msg_contents": "\nOn Fri, 4 Apr 2003, Robert John Shepherd wrote:\n\n> > > Up until a few days ago I have been running Postgresl 7.2.3 with\n> Tsearch\n> > > from the contrib dir, but at various times the performance of the\n> > > database would suddenly and rapidly deteriate so that queries which\n> > > previously took 500ms then took 8 or 9 seconds.\n>\n> > Hmm, what are the before and after explain analyze results? Also,\n> what\n> > are your conf settings for shared buffers, sort memory and the fsm\n> > parameters?\n>\n> shared_buffers = 40960\n> sort_mem = 20480\n> #max_fsm_relations = 1000\n> #max_fsm_pages = 10000\n>\n> As you can see I've not uncommented or touched the fsm parameters, I\n> have no idea what they do. Optimisation wise I have only played with\n> shared_buffers, sort_mem and max_connections.\n>\n>\n> > > The only cure is a backup and restore of the database\n>\n> > Did you do a regular vacuum or vacuum full? 
If only the former, it's\n> > possible that you need to either vacuum more frequently and/or raise\n> the\n> > free space map settings in your configuration file.\n>\n> I've been running this daily:\n>\n> \tvacuumdb -h localhost -a -z\n>\n> Should I be using the full switch then?\n\nWell, you generally shouldn't need to if the fsm settings are high enough.\nIf you're doing really big updates like update each row of a 1 billion\nrow table, you may end up having to do one immediately following that.\nOf course, if you're doing that, performance is probably not your biggest\nconcern. ;)\n\nExplain analyze'll tell us if the system is changing plans (presumably to\na worse one) - for example, deciding to move to a sequence scan because it\nthinks that the index scan is now to expensive, or conversely moving to an\nindex scan because it thinks that there'll be too many reads, while those\npage reads actually are fairly localized. The vacuum full verbose should\nget some idea of how much empty space is there.\n\n> > pgsql-performance is a better list, so I've replied to there. You'll\n> > probably need to join in order to reply to list.\n>\n> Thanks, especially for not shouting at me heh, this is stressful enough\n> as it is.\n\n:)\n\n", "msg_date": "Fri, 4 Apr 2003 07:29:43 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Rapid deteriation of performance (might be" }, { "msg_contents": "> > I've been running this daily:\n> > \tvacuumdb -h localhost -a -z\n> > Should I be using the full switch then?\n> \n> Well, you generally shouldn't need to if the fsm settings are high\nenough.\n> If you're doing really big updates like update each row of a 1 billion\n> row table, you may end up having to do one immediately following that.\n> Of course, if you're doing that, performance is probably not your\nbiggest\n> concern. ;)\n\nNot doing that, no. ;)\n\n\n> Explain analyze'll tell us if the system is changing plans (presumably\nto\n> a worse one)\n\nIt wasn't, oddly enough.\n\nI've added a new table that cuts down 85% of the work this query has to\ndo, and it seems to have helped an awful lot at the moment. Of course\nonly time will tell. :)\n\nThanks for the suggestions.\n\n\nYours Unwhettedly,\nRobert John Shepherd.\n\nEditor\nDVD REVIEWER\nThe UK's BIGGEST Online DVD Magazine\nhttp://www.dvd.reviewer.co.uk\n\nFor a copy of my Public PGP key, email: [email protected] \n\n", "msg_date": "Mon, 7 Apr 2003 15:25:38 +0100", "msg_from": "\"Robert John Shepherd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] Rapid deteriation of performance (might be caused by\n\ttsearch?) in 7.3.2" } ]
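A minimal sketch of the maintenance cycle this thread points at, using the table name from the original post, is below. The free space map figures are placeholders only; they should be sized from what VACUUM VERBOSE reports rather than copied, and changing them takes effect only after a postmaster restart.

# postgresql.conf: let the free space map track the dead rows left behind
# by whole-table UPDATEs (illustrative values, not recommendations)
max_fsm_relations = 1000
max_fsm_pages = 200000

-- One-off cleanup after a bulk UPDATE that rewrites most of the table:
VACUUM FULL VERBOSE ANALYZE tblmessages;

-- Routine, non-blocking maintenance thereafter:
VACUUM ANALYZE tblmessages;
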
[ { "msg_contents": "Hello all,\n\nI am very glad to announce the first public release of Open Application Server.\n\nThis is an application framework built in C++ to make use of existing APIs in\ninternet applications.\n\nIt provides:\n\n* A thread-based request delivery architecture\n* Support for request handlers loaded from external libraries\n* Support for additional application APIs such as HTTP, SMTP etc.\n\nThe features:\n\n* Written in C++\n* Multithreaded application\n* High performance and scalable\n* Provides object-packing technology\n* Native interface with Apache to act as a web application server\n\nThe project is available from http://oasserver.sourceforge.net. The mailing\nlist is not active yet but it should be up shortly (by tomorrow, hopefully).\n\nA complete web application built with OAS+PostgreSQL+Apache is also available\nfrom CVS. This is an issue tracking and resource booking system.\n\nThere are no packages/tarballs available right now. Please use anonymous CVS.\nI plan to release packages shortly.\n\nThe CVS modules are oasserver and phd respectively.\n\nThis is done so that I can update the install documentation to cater for a variety\nof build platforms. Right now, I can test the build only on Slackware/Mandrake/FreeBSD.\n\nShridhar\n\n", "msg_date": "Fri, 4 Apr 2003 16:31:49 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": true, "msg_subject": "[OT][ANNOUNCEMENT]Announcing first public release of Open Application\n\tServer" } ]
[ { "msg_contents": "The fcc FTP site is ftp.fcc.gov\n\nThe location of the data of interest is at\n/pub/Bureaus/Wireless/Databases/uls/.\n\nThere are zip files (pipe delimited) in complete and the daily changed\nfiles in daily. There's lots of info in the documentation, which includes\nExcel spreadsheets of the schema. These will have to be converted to\nSQL statements.\n\nThe ULS is the database system that holds the data for Fixed and Mobile\nwireless services. This includes most two way systems and point to\nmultipoint (microwave) but not broadcast (AM, FM, TV) and not advanced\nradio.\n\nThe database is really a database of applications. It contains\napplication data submitted by wireless applicants. \n\nThere are two families of tables, prefixed with 'a' and 'l'. The 'a'\ntables stand for application records that are pending grant by\nthe fcc. The 'l' tables have received licenses and may or may not be\noperating.\n\nCombined, the 'a' and 'l' zipfiles represent a specific service. For\nexample, 'a_micro' and 'l_micro' contain the applications and licensed\ndata for microwave systems. The different services have slightly\ndifferent layouts because they have different requirements.\n\nI strongly suggest looking at LMcomm and LMpriv first. These are the\nfixed land mobile systems, and 90% of the entire database. They also\nhave identical layouts.\n\nThere are a great many files in each zipfile, but here are the most\ninteresting:\n\nhd: header data\nad: application detail\nan: antenna data\nlo: location data\nfr: frequency data\nem: emission data\n\nThere are others. I can help you write meaningful queries that are\nquite complex and will require optimization techniques.\n\nMerlin\n\n", "msg_date": "Fri, 4 Apr 2003 11:47:56 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] OSS database needed for testing" }, { "msg_contents": "Merlin,\n\n> The fcc FTP site is ftp.fcc.gov\n> \n> The location of the data of interest is at\n> /pub/Bureaus/Wireless/Databases/uls/.\n\nCool. I'll tackle this in a week or two. Right now, I'm being paid to \nconvert a client's data and that'll keep me busy through the weekend ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 09:37:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSS database needed for testing" }, { "msg_contents": "On Friday 04 April 2003 11:47, Merlin Moncure wrote:\n> The location of the data of interest is at\n> /pub/Bureaus/Wireless/Databases/uls/.\n\n> wireless services. 
This includes most two way systems and point to\n> multipoint (microwave) but not broadcast (AM, FM, TV) and not advanced\n> radio.\n\nAlso check out the cdbs files (which contain the broadcast stuff as well as \nmore) at /pub/Bureaus/Mass_Media/Databases/cdbs/ (which I would be more \ninterested in doing, since I am a broadcast engineer by profession....)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Fri, 4 Apr 2003 14:08:31 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSS database needed for testing" }, { "msg_contents": "Lamar,\n\n> Also check out the cdbs files (which contain the broadcast stuff as well as \n> more) at /pub/Bureaus/Mass_Media/Databases/cdbs/ (which I would be more \n> interested in doing, since I am a broadcast engineer by profession....)\n\nHey, if you're willing to do the text --> postgres conversions, I'll use \nwhichever tables you want ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 11:27:16 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSS database needed for testing" } ]
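For anyone who wants to try loading one of the pipe-delimited ULS files listed above, a
minimal sketch follows; the column names are invented placeholders (the real layouts come
from the FCC's schema spreadsheets), and only the pipe delimiter and the general COPY form
are taken from the thread:

    -- hypothetical cut-down layout for the 'hd' (header data) file
    CREATE TABLE hd (
        record_type    varchar(2),
        unique_id      integer,
        call_sign      varchar(10),
        license_status varchar(1)
    );

    -- server-side bulk load of one extracted file (the path is an assumption)
    COPY hd FROM '/tmp/uls/HD.dat' WITH DELIMITER '|';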
[ { "msg_contents": "Josh Berkus wrote:\n> Cool. I'll tackle this in a week or two. Right now, I'm being paid\nto\n> convert a client's data and that'll keep me busy through the weekend\n...\n\nI would suggest downloading the data now. I can help get you started\nwith the create table statements and the import scripts. There are not\nvery many ways to get the data in a reasonable timeframe: the spi\nfunctions or the copy command are a good place to start. Do not bother\nwith running stuff through insert queries: take my word for it, it just\nwon't work. Of course, if you use copy, you have to pre-format. Be\naware that you will have many gigabytes (like more than 20) of data\nbefore you are done.\n\nWhatever you decide to do, document the process: the difficulty of\ngetting large amounts of data into postgres quickly and easily has been\na historical complaint of mine. Using mysql, it was a snap to get the\ndata in but using *that* database I really felt it couldn't handle this\nmuch data.\n \nI can also get you started with some example queries that should be\nquite a challenge to set up to run quickly. After that, it's your\nballgame.\n\nMerlin\n\n", "msg_date": "Fri, 4 Apr 2003 13:00:26 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] OSS database needed for testing" }, { "msg_contents": "Merlin, \n\n> I would suggest downloading the data now. I can help get you started\n\nOK, downloading now.\n\n> with the create table statements and the import scripts. There are not\n> very many ways to get the data in a reasonable timeframe: the spi\n> functions or the copy command are a good place to start. Do not bother\n> with running stuff through insert queries: take my word for it, it just\n> won't work. Of course, if you use copy, you have to pre-format. Be\n> aware that you will have many gigabytes (like more than 20) of data\n> before you are done.\n\n From my perspective, the easiest and fastest way to do this is to create the \ntable definitions in PostgreSQL, and then to use Perl to convert the data \nformat to something COPY will recognize. If you can do the create table \nstatements for the LM* data, I can do the Perl scripts.\n\nGiven that the *total* data is 20G, we'll want to use a subset of it. Per \nyour suggestion, I am downloading the *LM* tables. I may truncate them \nfurther if the resulting database is too large. If some of the other tables \nare reference lists or child tables, please tell me and I will download them \nas well.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 4 Apr 2003 10:07:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OSS database needed for testing" } ]
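A rough illustration of the "pre-format, then COPY" approach discussed above, reusing the
hypothetical hd table from the previous sketch; the data values are invented:

    -- Slow: one INSERT per record means one statement (and one commit) per row
    INSERT INTO hd VALUES ('HD', 123456, 'KAB1234', 'A');

    -- Fast: stream the whole converted file through a single COPY
    COPY hd FROM stdin WITH DELIMITER '|';
    HD|123456|KAB1234|A
    HD|123457|KCD5678|A
    \.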
[ { "msg_contents": "We have a perl program that loads data into Postgres version 7.3.1\nusing deletes and inserts.\n\nWe find that the load times are about 50% slower when we use a\nRAID5 disk system as compared to when we use RAID0\n\nAre there any postgres configuration parameters that we can set to improve\nthe performance on RAID5?\n\n", "msg_date": "Mon, 7 Apr 2003 16:23:03 +0200 ", "msg_from": "Howard Oblowitz <[email protected]>", "msg_from_op": true, "msg_subject": "Load times on RAID0 compared to RAID5" }, { "msg_contents": "On Mon, Apr 07, 2003 at 04:23:03PM +0200, Howard Oblowitz wrote:\n> We find that the load times are about 50% slower when we use a\n> RAID5 disk system as compared to when we use RAID0\n\nWell, given that RAID 5 has to do some calculation and RAID 0\ndoesn't, I shouldn't think this is very surprising.\n\nYou should let us know what disk subsystem you have, &c. It's\nimpossible to give any advice on the basis of the little that's here. \nRAID performance is linked to its configuration and the hardware and\nsoftware in use.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 7 Apr 2003 10:36:17 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load times on RAID0 compared to RAID5" }, { "msg_contents": "On Mon, 7 Apr 2003, Howard Oblowitz wrote:\n\n> We have a perl program that loads data into Postgres version 7.3.1\n> using deletes and inserts.\n> \n> We find that the load times are about 50% slower when we use a\n> RAID5 disk system as compared to when we use RAID0\n> \n> Are there any postgres configuration parameters that we can set to improve\n> the performance on RAID5?\n\nWell, RAID0 SHOULD be about twice as fast as RAID5 for most applications, \nmaybe even faster for others. \n\nOf course, RAID0 offers no redundancy, so if any single drive fails your \ndata disappears in a large puff of smoke. RAID5 can survive a single \ndrive failure, and that doesn't come for free.\n\nRAID1 may offer a better compromise of performance and reliability for \nmany apps than RAID5. Generally RAID0 is fastest, RAID1 is fast but can't \ngrow to be as big as RAID5, RAID5 handles large parallel access better \nthan RAID1, RAID1 handles batch processing better than RAID5.\n\nMixing them together sometimes helps, sometimes not. RAID1 on top of \nRAID0 works pretty well but costs the most per meg stored than most plain \nRAID5 or RAID1 setups.\n\n", "msg_date": "Mon, 7 Apr 2003 09:29:48 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load times on RAID0 compared to RAID5" }, { "msg_contents": "Howard,\n\n> We have a perl program that loads data into Postgres version 7.3.1\n> using deletes and inserts.\n\nMight I suggest COPY instead?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Mon, 7 Apr 2003 08:31:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load times on RAID0 compared to RAID5" } ]
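Since the load described in this thread is driven by row-at-a-time deletes and inserts,
one cheap experiment before touching the RAID layout is to batch the statements into
explicit transactions so each row does not pay for its own commit; the table and column
names below are invented for illustration:

    BEGIN;
    DELETE FROM prices WHERE item_id = 1001;
    INSERT INTO prices (item_id, price) VALUES (1001, 9.95);
    DELETE FROM prices WHERE item_id = 1002;
    INSERT INTO prices (item_id, price) VALUES (1002, 12.50);
    -- ... rest of the batch ...
    COMMIT;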
[ { "msg_contents": "i've an oracle query:\n\nSelect tb_users.ds_description, tb_users.cd_role, tb_users.cd_user, \ntb_users.ds_login, tb_monitores.cod_equipa from tb_users, tb_monitores \nwhere ds_login like 'varLogin' and ds_password like 'varPassword' and \ntb_users.cd_user = tb_monitores.cd_user(+)\n\nhow can i transform it to an postgresql query?\n\n\nbest regards\n\netur\n\n", "msg_date": "Tue, 08 Apr 2003 15:58:57 +0100", "msg_from": "rute solipa <[email protected]>", "msg_from_op": true, "msg_subject": "help need it" }, { "msg_contents": "\nOn Tue, 8 Apr 2003, rute solipa wrote:\n\n> i've an oracle query:\n>\n> Select tb_users.ds_description, tb_users.cd_role, tb_users.cd_user,\n> tb_users.ds_login, tb_monitores.cod_equipa from tb_users, tb_monitores\n> where ds_login like 'varLogin' and ds_password like 'varPassword' and\n> tb_users.cd_user = tb_monitores.cd_user(+)\n>\n> how can i transform it to an postgresql query?\n\nShould be something like:\n\nSelect tb_users.ds_description, tb_users.cd_role, tb_users.cd_user,\ntb_users.ds_login, tb_monitores.cod_equipa from tb_users left\nouter join tb_monitores using (cd_user)\nwhere ds_login like 'varLogin' and ds_password like 'varPassword';\n\n", "msg_date": "Tue, 8 Apr 2003 08:16:34 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help need it" }, { "msg_contents": "On Tue, 8 Apr 2003, rute solipa wrote:\n\n> i've an oracle query:\n> \n> Select tb_users.ds_description, tb_users.cd_role, tb_users.cd_user, \n> tb_users.ds_login, tb_monitores.cod_equipa from tb_users, tb_monitores \n> where ds_login like 'varLogin' and ds_password like 'varPassword' and \n> tb_users.cd_user = tb_monitores.cd_user(+)\n> \n> how can i transform it to an postgresql query?\n\nCan you check the postgresql manual if the (+) operator\nmeans something related to OUTER joins?\nIf yes then use [LEFT|RIGHT] OUTER JOIN of postgresql.\n\n> \n> \n> best regards\n> \n> etur\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Tue, 8 Apr 2003 17:40:39 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help need it" }, { "msg_contents": "Achilleus,\n\n> i think i have an issue regarding the statistics that\n> a) (plain) ANALYZE status and\n> b) VACUUM ANALYZE status\n> produce.\n\nIt's perfectly normal for a query to run faster after a VACUUM ANALYZE than \nafter an ANALYZE ... 
after all, you just vacuumed it, didn't you?\n\nIf you're demonstrating some other kind of behavioural difference, then please \npost the results of EXPLAIN ANALYZE for the two examples.\n\nOh, and we should probably shift this discussion to the PGSQL-PERFORMANCE \nlist.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 30 Apr 2003 09:03:38 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem" }, { "msg_contents": "Achilleus,\n\n> I am afraid it is not so simple.\n> What i (unsuccessfully) implied is that \n> dynacom=# VACUUM ANALYZE status ;\n> VACUUM\n> dynacom=# ANALYZE status ;\n> ANALYZE\n> dynacom=#\n\nYou're right, that is mysterious. If you don't get a response from one of \nthe major developers on this forum, I suggest that you post those EXPLAIN \nresults to PGSQL-BUGS.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 30 Apr 2003 11:48:18 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem" }, { "msg_contents": "Hi,\ni think i have an issue regarding the statistics that \na) (plain) ANALYZE status and \nb) VACUUM ANALYZE status\nproduce.\n\nI have a table status:\ndynacom=# \\d status\n Table \"public.status\"\n Column | Type | Modifiers\n \n-------------+--------------------------+---------------------------------------------------\n id | integer | not null default \nnextval('\"status_id_seq\"'::text)\n checkdate | timestamp with time zone |\n assettable | character varying(50) |\n assetidval | integer |\n appname | character varying(100) |\n apptblname | character varying(50) |\n apptblidval | integer |\n colname | character varying(50) |\n colval | double precision |\n status | character varying(5) |\n isvalid | boolean |\n username | character varying(50) |\nIndexes: status_id_key unique btree (id),\n status_all btree (assettable, assetidval, appname, apptblname, \nstatus, isvalid),\n status_all_wo_astidval btree (assettable, appname, apptblname, \nstatus, isvalid),\n status_appname btree (appname),\n status_apptblidval btree (apptblidval),\n status_apptblname btree (apptblname),\n status_assetidval btree (assetidval),\n status_assettable btree (assettable),\n status_checkdate btree (checkdate),\n status_colname btree (colname),\n status_isvalid btree (isvalid),\n status_status btree (status)\n \ndynacom=#\ndynacom=# SELECT count(*) from status ;\n count\n-------\n 33565\n(1 row)\n \ndynacom=#\n\nI very often perform queries of the form:\n\n select count(*) from status where assettable='vessels' and \nappname='ISM PMS' and apptblname='items' and status='warn' \nand isvalid and assetidval=<SOME ID>;\n\nAltho i dont understand exactly why the stats created by\nVACUUM ANALYZE are more accurate (meaning producing faster plans)\nthan the ones created by\nplain ANALYZE, (altho for some attributes they are false for sure)\nthe performance is much much better when\nVACUUM ANALYZE is run than plain ANALYZE.\n\nIn the former case, some times the status_all index is used,\nand sometimes (when the selectivity is small)\na sequential scan is performed.\n\nIn the latter case, no index is ever used even \nfor crazy statements (assetidval is always >0) like:\n\nselect count(*) from status where assettable='vessels' and\nappname='ISM PMS' and apptblname='items' and status='warn'\nand isvalid and assetidval=-10000000;\n\nI attach the statistics of 
either case.\n\nMy app just performs the above query for most of the assetidval values\n(And for all most popular assetidval values)\nSo the elapsed time of the app i think is a good\nmeasure of the overall performance of these queries.\n\nIn the \"VACUUM ANALYZE\" case it takes 1.2 - 1.5 secs, while\nin the \"ANALYZE\" case it takes >=3+\n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]", "msg_date": "Wed, 30 Apr 2003 18:57:31 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "7.3 analyze & vacuum analyze problem" }, { "msg_contents": "On Wed, 30 Apr 2003, Josh Berkus wrote:\n\n> Achilleus,\n> \n> > i think i have an issue regarding the statistics that\n> > a) (plain) ANALYZE status and\n> > b) VACUUM ANALYZE status\n> > produce.\n> \n> It's perfectly normal for a query to run faster after a VACUUM ANALYZE than \n> after an ANALYZE ... after all, you just vacuumed it, didn't you?\n\nI am afraid it is not so simple.\nWhat i (unsuccessfully) implied is that \ndynacom=# VACUUM ANALYZE status ;\nVACUUM\ndynacom=# ANALYZE status ;\nANALYZE\ndynacom=#\n\nis enuf to damage the performance.\n\n> \n> If you're demonstrating some other kind of behavioural difference, then please \n> post the results of EXPLAIN ANALYZE for the two examples.\n> \ndynacom=# ANALYZE status ;\nANALYZE\ndynacom=# EXPLAIN ANALYZE select count(*) from status where \nassettable='vessels' and appname='ISM PMS' and apptblname='items' and \nstatus='warn' and isvalid and assetidval=49;\n \n QUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=4309.53..4309.53 rows=1 width=0) (actual \ntime=242.60..242.60 rows=1 loops=1)\n -> Seq Scan on status (cost=0.00..4306.08 rows=1378 width=0) (actual \ntime=15.75..242.51 rows=50 loops=1)\n Filter: ((assettable = 'vessels'::character varying) AND (appname \n= 'ISM PMS'::character varying) AND (apptblname = 'items'::character \nvarying) AND (status = 'warn'::character varying) AND isvalid AND \n(assetidval = 49))\n Total runtime: 242.74 msec\n(4 rows)\n \ndynacom=#\ndynacom=# VACUUM ANALYZE status ;\nVACUUM\ndynacom=# EXPLAIN ANALYZE select count(*) from status where \nassettable='vessels' and appname='ISM PMS' and apptblname='items' and \nstatus='warn' and isvalid and assetidval=49;\n \n QUERY PLAN\n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2274.90..2274.90 rows=1 width=0) (actual time=8.89..8.89 \nrows=1 loops=1) -> Index Scan using status_all on status \n(cost=0.00..2274.34 rows=223 width=0) (actual time=8.31..8.83 rows=50 \nloops=1)\n Index Cond: ((assettable = 'vessels'::character varying) AND \n(assetidval = 49) AND (appname = 'ISM PMS'::character varying) AND \n(apptblname = 'items'::character varying) AND (status = 'warn'::character \nvarying))\n Filter: isvalid\n Total runtime: 8.98 msec\n(5 rows)\n \ndynacom=#\n\n> Oh, and we should probably shift this discussion to the PGSQL-PERFORMANCE \n> list.\n> 
\n\nOK.\n\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Wed, 30 Apr 2003 19:13:40 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem" }, { "msg_contents": "\nJosh wrote...\n> Achilleus,\n> \n> > I am afraid it is not so simple.\n> > What i (unsuccessfully) implied is that \n> > dynacom=# VACUUM ANALYZE status ;\n> > VACUUM\n> > dynacom=# ANALYZE status ;\n> > ANALYZE\n> > dynacom=#\n> >\n> > [is enuf to damage the performance.]\n> \n> You're right, that is mysterious. If you don't get a response from one of \n> the major developers on this forum, I suggest that you post those EXPLAIN \n> results to PGSQL-BUGS.\n\nI had the same problem a while back.\n\nhttp://archives.postgresql.org/pgsql-bugs/2002-08/msg00015.php\nhttp://archives.postgresql.org/pgsql-bugs/2002-08/msg00018.php\nhttp://archives.postgresql.org/pgsql-bugs/2002-08/msg00018.php\n\nShort summary: Later in the thread Tom explained my problem as free \nspace not being evenly distributed across the table so ANALYZE's \nsampling gave skewed results. In my case, \"pgstatuple\" was a \ngood tool for diagnosing the problem, \"vacuum full\" fixed my table\nand a much larger fsm_* would have probably prevented it.\n\n", "msg_date": "Wed, 30 Apr 2003 15:16:58 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem" }, { "msg_contents": "\"Ron Mayer\" <[email protected]> writes:\n> Short summary: Later in the thread Tom explained my problem as free \n> space not being evenly distributed across the table so ANALYZE's \n> sampling gave skewed results. In my case, \"pgstatuple\" was a \n> good tool for diagnosing the problem, \"vacuum full\" fixed my table\n> and a much larger fsm_* would have probably prevented it.\n\nNot sure if that is Achilleus' problem or not. IIRC, there should be\nno difference at all in what VACUUM ANALYZE and ANALYZE put into\npg_statistic (modulo random sampling variations of course). The only\ndifference is that VACUUM ANALYZE puts an exact tuple count into\npg_class.reltuples (since the VACUUM part groveled over every tuple,\nthis info is available) whereas ANALYZE does not scan the entire table\nand so has to put an estimate into pg_class.reltuples.\n\nIt would be interesting to see the pg_class and pg_stats rows for this\ntable after VACUUM ANALYZE and after ANALYZE --- but I suspect the main\ndifference will be the reltuples values.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 30 Apr 2003 20:10:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem " }, { "msg_contents": "> > I am afraid it is not so simple.\n> > What i (unsuccessfully) implied is that\n> > dynacom=# VACUUM ANALYZE status ;\n> > VACUUM\n> > dynacom=# ANALYZE status ;\n> > ANALYZE\n> > dynacom=#\n>\n> You're right, that is mysterious. If you don't get a response from one\nof\n> the major developers on this forum, I suggest that you post those EXPLAIN\n> results to PGSQL-BUGS.\n\nIs it mysterious? 
The ANALYZE histogram algorithm does do random sampling\ndoesn't it?\n\nChris\n\n", "msg_date": "Thu, 1 May 2003 09:51:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem" }, { "msg_contents": "On Wed, 30 Apr 2003, Tom Lane wrote:\n\n> \"Ron Mayer\" <[email protected]> writes:\n> > Short summary: Later in the thread Tom explained my problem as free \n> > space not being evenly distributed across the table so ANALYZE's \n> > sampling gave skewed results. In my case, \"pgstatuple\" was a \n> > good tool for diagnosing the problem, \"vacuum full\" fixed my table\n> > and a much larger fsm_* would have probably prevented it.\n> \n> Not sure if that is Achilleus' problem or not. IIRC, there should be\n> no difference at all in what VACUUM ANALYZE and ANALYZE put into\n> pg_statistic (modulo random sampling variations of course). The only\n> difference is that VACUUM ANALYZE puts an exact tuple count into\n> pg_class.reltuples (since the VACUUM part groveled over every tuple,\n> this info is available) whereas ANALYZE does not scan the entire table\n> and so has to put an estimate into pg_class.reltuples.\n> \n> It would be interesting to see the pg_class and pg_stats rows for this\n> table after VACUUM ANALYZE and after ANALYZE --- but I suspect the main\n> difference will be the reltuples values.\n\nUnfortunately i did a VACUUM FULL, and later a dump/reload\nwhich eliminated (vanished) the problem regarding the difference between\nplain ANALYZE and VACUUM ANALYZE.\n\nHowever, now the condition is much more wierd, in the sense\nthat after the reload, some planner costs seem too low (~ 6)\nthe expected number of rows is very often 1,\nand the correct index is used, resulting in a \nultra speed situation (that i never had expected!).\n\nAfter vacuum full analyze, or vacuum analyze\nthings get slow again.\n\nI surely must generate a reproducable scenario,\ndescribing the exact steps made, so i'll focus\non that.\n\nIn the meantime if Tom or some other hacker\nhas any ideas that would be great.\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 2 May 2003 14:29:34 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem " }, { "msg_contents": "On Fri, 2 May 2003, Achilleus Mantzios wrote:\n\n> On Wed, 30 Apr 2003, Tom Lane wrote:\n> \n> > \n> > It would be interesting to see the pg_class and pg_stats rows for this\n> > table after VACUUM ANALYZE and after ANALYZE --- but I suspect the main\n> > difference will be the reltuples values.\n> \n> I surely must generate a reproducable scenario,\n> describing the exact steps made, so i'll focus\n> on that.\n\nI use a freebsd-current (hereafter called FBSD) as a test environment,\nwith a freshly reloaded db and NO VACUUM or ANALYZE ever run, and i \nEXPLAIN ANALYZE some queries against a linux 2.4.18SMP (hereafter called \nLNX) which is the production environment, and on which a recent VACUUM \nFULL ANALYZE is run.\n\nSome queries run *very* fast on FBSD and very slow on LNX,\nwhere others run very slow on FBSD and very fast on LNX.\n(Here the oper system is not an issue, i 
just use these\n2 acronyms as aliases for the 2 situations/environments.\n\nSo i have:\n\n================= FBSD ===================\n========= QueryA (A VERY FAST PLAN) =====\ndynacom=# EXPLAIN ANALYZE select count(*) from status where \nassettable='vessels' and appname='ISM PMS' and apptblname='items' and \nstatus='warn' and isvalid and assetidval=57;\n \nQUERY PLAN\n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6.02..6.02 rows=1 width=0) (actual time=14.16..14.16 \nrows=1 loops=1)\n -> Index Scan using status_all on status (cost=0.00..6.02 rows=1 \nwidth=0) (actual time=13.09..13.95 rows=75 loops=1)\n Index Cond: ((assettable = 'vessels'::character varying) AND \n(assetidval = 57) AND (appname = 'ISM PMS'::character varying) AND \n(apptblname = 'items'::character\nvarying) AND (status = 'warn'::character varying))\n Filter: isvalid\n Total runtime: 14.40 msec\n(5 rows)\n \ndynacom=#\n===============QueryB A VERY SLOW PLAN =====\ndynacom=# EXPLAIN ANALYZE select it.id from items it,machdefs md where \nit.defid = md.defid and first(md.parents)=16492 and it.vslwhid = 53 and \nit.machtypecount = 1 order\nby md.description,md.partno;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=457.76..457.77 rows=1 width=68) (actual time=150.31..150.31 \nrows=0 loops=1)\n Sort Key: md.description, md.partno\n -> Nested Loop (cost=0.00..457.75 rows=1 width=68) (actual \ntime=150.16..150.16 rows=0 loops=1)\n -> Index Scan using items_machtypecount on items it \n(cost=0.00..451.73 rows=1 width=8) (actual time=0.99..89.30 rows=2245 \nloops=1)\n Index Cond: (machtypecount = 1)\n Filter: (vslwhid = 53)\n -> Index Scan using machdefs_pkey on machdefs md \n(cost=0.00..6.01 rows=1 width=60) (actual time=0.02..0.02 rows=0 \nloops=2245)\n Index Cond: (\"outer\".defid = md.defid)\n Filter: (first(parents) = 16492)\n Total runtime: 150.58 msec\n(10 rows)\n \ndynacom=# \n=================END FBSD=================\n\n=================LNX =====================\n========= QueryA (A VERY SLOW PLAN) =====\ndynacom=# EXPLAIN ANALYZE select count(*) from status where \nassettable='vessels' and appname='ISM PMS' and apptblname='items' and \nstatus='warn' and isvalid and assetidval=57;\n \nQUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1346.56..1346.56 rows=1 width=0) (actual \ntime=244.05..244.05 rows=1 loops=1)\n -> Seq Scan on status (cost=0.00..1345.81 rows=300 width=0) (actual \ntime=0.63..243.93 rows=75 loops=1)\n Filter: ((assettable = 'vessels'::character varying) AND (appname \n= 'ISM PMS'::character varying) AND (apptblname = 'items'::character \nvarying) AND (status = 'warn'::character varying) AND isvalid AND \n(assetidval = 57))\n Total runtime: 244.12 msec\n(4 rows)\n \ndynacom=#\n=========== QueryB (A VERY FAST PLAN)=======\ndynacom=# EXPLAIN ANALYZE select it.id from items it,machdefs md where \nit.defid = md.defid and first(md.parents)=16492 and it.vslwhid = 53 and \nit.machtypecount = 1 order by md.description,md.partno;\n QUERY 
PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=631.23..631.26 rows=11 width=42) (actual time=0.08..0.08 \nrows=0 loops=1)\n Sort Key: md.description, md.partno\n -> Nested Loop (cost=0.00..631.05 rows=11 width=42) (actual \ntime=0.03..0.03 rows=0 loops=1)\n -> Index Scan using machdefs_dad on machdefs md \n(cost=0.00..228.38 rows=67 width=34) (actual time=0.02..0.02 rows=0 \nloops=1)\n Index Cond: (first(parents) = 16492)\n -> Index Scan using items_defid_vslid_mtcnt on items it \n(cost=0.00..5.99 rows=1 width=8) (never executed)\n Index Cond: ((it.defid = \"outer\".defid) AND (it.vslwhid = \n53) AND (it.machtypecount = 1))\n Total runtime: 0.15 msec\n(8 rows)\n \ndynacom=#\n\n======= END LNX =====================================\n\n* first is a function:\n integer first(integer[]),\nthat returns the first element of a [1xN] array.\n\nNow i run a VACUUM FULL ANALYZE; on the FBSD system\nand after taht,i get *identical* plans as on the LNX system.\nSo, the VACUUM FULL ANALYZE command helps QueryB, but screws\nQueryA.\n\nHere i paste pg_stats,pg_class data for the 3 tables (status, \nmachdefs, items) on the FBSD system\n\n====BEFORE the VACUUM FULL ANALYZE=====\ndynacom=# SELECT * from pg_class where relname='status';\n-[ RECORD 1 ]--+--------\nrelname | status\nrelnamespace | 2200\nreltype | 3470164\nrelowner | 1\nrelam | 0\nrelfilenode | 3470163\nrelpages | 562\nreltuples | 33565\nreltoastrelid | 0\nreltoastidxid | 0\nrelhasindex | t\nrelisshared | f\nrelkind | r\nrelnatts | 12\nrelchecks | 0\nreltriggers | 0\nrelukeys | 0\nrelfkeys | 0\nrelrefs | 0\nrelhasoids | t\nrelhaspkey | f\nrelhasrules | f\nrelhassubclass | f\nrelacl |\n \ndynacom=#\ndynacom=# SELECT * from pg_class where relname='machdefs';\n-[ RECORD 1 ]--+---------\nrelname | machdefs\nrelnamespace | 2200\nreltype | 3470079\nrelowner | 1\nrelam | 0\nrelfilenode | 3470078\nrelpages | 175\nreltuples | 13516\nreltoastrelid | 3470081\nreltoastidxid | 0\nrelhasindex | t\nrelisshared | f\nrelkind | r\nrelnatts | 20\nrelchecks | 0\nreltriggers | 7\nrelukeys | 0\nrelfkeys | 0\nrelrefs | 0\nrelhasoids | t\nrelhaspkey | t\nrelhasrules | f\nrelhassubclass | f\nrelacl |\n\ndynacom=# SELECT * from pg_class where relname='items';\n-[ RECORD 1 ]--+--------\nrelname | items\nrelnamespace | 2200\nreltype | 3470149\nrelowner | 1\nrelam | 0\nrelfilenode | 3470148\nrelpages | 233\nreltuples | 29433\nreltoastrelid | 3470153\nreltoastidxid | 0\nrelhasindex | t\nrelisshared | f\nrelkind | r\nrelnatts | 25\nrelchecks | 0\nreltriggers | 10\nrelukeys | 0\nrelfkeys | 0\nrelrefs | 0\nrelhasoids | t\nrelhaspkey | t\nrelhasrules | f\nrelhassubclass | f\nrelacl |\n \ndynacom=#\n\nBefore the VACUUM [FULL] ANALYZE No statistics are produced\n\n\n====AFTER the VACUUM FULL ANALYZE=====\n\n===========================================================\ndynacom=# SELECT * from pg_class where relname='status';\n-[ RECORD 1 ]--+--------\nrelname | status\nrelnamespace | 2200\nreltype | 3191663\nrelowner | 1\nrelam | 0\nrelfilenode | 3191662\nrelpages | 562\nreltuples | 33565\nreltoastrelid | 0\nreltoastidxid | 0\nrelhasindex | t\nrelisshared | f\nrelkind | r\nrelnatts | 12\nrelchecks | 0\nreltriggers | 0\nrelukeys | 0\nrelfkeys | 0\nrelrefs | 0\nrelhasoids | t\nrelhaspkey | f\nrelhasrules | f\nrelhassubclass | f\nrelacl |\n \ndynacom=#\n\ndynacom=# SELECT * from pg_class where relname='machdefs';\n-[ RECORD 1 ]--+---------\nrelname | machdefs\nrelnamespace | 
2200\nreltype | 3191578\nrelowner | 1\nrelam | 0\nrelfilenode | 3191577\nrelpages | 175\nreltuples | 13516\nreltoastrelid | 3191580\nreltoastidxid | 0\nrelhasindex | t\nrelisshared | f\nrelkind | r\nrelnatts | 20\nrelchecks | 0\nreltriggers | 7\nrelukeys | 0\nrelfkeys | 0\nrelrefs | 0\nrelhasoids | t\nrelhaspkey | t\nrelhasrules | f\nrelhassubclass | f\nrelacl |\n \ndynacom=#\n\ndynacom=# SELECT * from pg_class where relname='items';\n-[ RECORD 1 ]--+--------\nrelname | items\nrelnamespace | 2200\nreltype | 3191648\nrelowner | 1\nrelam | 0\nrelfilenode | 3191647\nrelpages | 232\nreltuples | 29433\nreltoastrelid | 3191652\nreltoastidxid | 0\nrelhasindex | t\nrelisshared | f\nrelkind | r\nrelnatts | 25\nrelchecks | 0\nreltriggers | 10\nrelukeys | 0\nrelfkeys | 0\nrelrefs | 0\nrelhasoids | t\nrelhaspkey | t\nrelhasrules | f\nrelhassubclass | f\nrelacl |\n \ndynacom=# SELECT \ntablename,attname,null_frac,avg_width,n_distinct,most_common_vals,most_common_freqs,histogram_bounds,correlation \nfrom pg_stats where tablename='status';\n\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n-----------+-------------+-----------+-----------+------------+--------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n status | id | 0 | 4 | -1 | | | {8,3677,6977,10159,13753,17012,20228,23620,26864,30311,33859} | 0.795126\n status | checkdate | 0 | 8 | -1 | | | {\"2002-10-19 10:54:53.764+03\",\"2003-03-01 05:00:22.691+02\",\"2003-03-03 05:00:23.876+02\",\"2003-03-04 05:00:28.912+02\",\"2003-03-29 05:00:28.099+02\",\"2003-03-30 05:00:24.009+03\",\"2003-04-02 12:14:34.221+03\",\"2003-04-26 05:02:53.133+03\",\"2003-04-29 05:01:43.716+03\",\"2003-04-30 05:01:05.727+03\",\"2003-04-30 05:01:46.749+03\"} | 0.844914\n status | assettable | 0 | 11 | 1 | {vessels} | {1} | | 1\n status | assetidval | 0 | 4 | 21 | {53,57,48,65,33,61,49} | {0.11,0.108667,0.0916667,0.079,0.073,0.0693333,0.0626667} | {20,24,26,29,32,35,36,43,44,47,79} | 0.15861\n status | appname | 0 | 11 | 6 | {\"ISM PMS\",Class.Certificates,Class.Surveys,Repairs,Class.CMS,Class.Recommendations} | {0.975333,0.01,0.00633333,0.004,0.003,0.00133333} | | 0.963033\n status | apptblname | 0 | 9 | 5 | {items,certificates,surveys,repdat,recommendations} | {0.978333,0.01,0.00633333,0.004,0.00133333} | | 0.96127\n status | apptblidval | 0 | 4 | -0.165914 | {18799,2750,9025,12364,12491,20331,20546,20558,21665,22913} | {0.00166667,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333} | {1,4996,8117,12367,14441,16488,19586,21155,22762,24026,32802} | 0.104023\n status | colname | 0 | 14 | 6 | {lastrepdate,lastinspdate,rh,N/A,status,classsurvey} | {0.685,0.241333,0.049,0.0176667,0.004,0.003} | | 0.487112\n status | colval | 0 | 8 | -0.56769 | {0,1,2991,27,146,1102,412,784,136,1126} | {0.0206667,0.004,0.002,0.00166667,0.00166667,0.00166667,0.00133333,0.00133333,0.001,0.001} | 
{21,14442.908,14506.476,18028.868,18038.256,18045.821,18053.101,18062.404,18076.057,150212.049,96805423.065} | 0.197915\n status | status | 0 | 8 | 2 | {warn,alarm} | {0.524333,0.475667} | | 0.514211\n status | isvalid | 0 | 1 | 2 | {f,t} | {0.789333,0.210667} | | 0.967602\n status | username | 0 | 12 | 7 | {periodic,amantzio,ckaklaman,secretuser,mitsios,birtsia,lignos} | {0.856333,0.053,0.0433333,0.029,0.013,0.00266667,0.00266667} | | 0.769222\n(12 rows)\n\ndynacom=# SELECT \ntablename,attname,null_frac,avg_width,n_distinct,most_common_vals,most_common_freqs,histogram_bounds,correlation \nfrom pg_stats where tablename='machdefs'; \n\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n-----------+-------------+-----------+-----------+------------+---------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n machdefs | defid | 0 | 4 | -1 | | | {2482,4607,6556,7957,9339,10662,12006,13822,15082,16533,18224} | 0.315706\n machdefs | parents | 0.124667 | 29 | -0.345266 | {\"{8673}\",\"{4456}\",\"{9338}\",\"{11565}\",\"{6865}\",\"{11183}\",\"{10810}\",\"{9852}\",\"{7016}\",\"{7636}\"} | {0.0166667,0.016,0.016,0.0156667,0.013,0.0126667,0.0106667,0.01,0.01,0.00966667} | | \n machdefs | description | 0.281333 | 20 | -0.101338 | {Inspection,Rings,Overhaul,Greasing/Lubrication,Bearings,Oil,\"Safety devices\",Motor,Cleaning,Crankcase} | {0.0296667,0.01,0.008,0.00733333,0.00633333,0.00633333,0.006,0.00533333,0.005,0.00433333} | {\"1T11 Vortex Pump\",\"Camshaft drive\",\"Cylinder Lubricator Pump body\",\"Ejector pump\",\"Fuel injection pump No5\",\"Inlet valve\",\"Main bearing No6\",\"Piston & Connecting rod No6\",\"Safety cut out device No7\",\"Stuffing box\",\"dP/I Transmitter flow meter kit\"} | 0.04711\n machdefs | partno | 0.840667 | 10 | 327 | | | {0137,151623-54101,302,51.04101-0479,90401-48-296,\"G 21401\",\"Z 11918\",\"Z 23165\",\"Z 27242\",\"Z 27533\",ZK34402} | 0.394772\n machdefs | machtypeid | 0 | 4 | 739 | {358,632,207,364,16,633,1006,31,533,723} | {0.0853333,0.0326667,0.0226667,0.0223333,0.0203333,0.0203333,0.0203333,0.0196667,0.0196667,0.0196667} | {19,64,129,330,456,631,809,932,1048,1242,1575} | 0.128535\n machdefs | rhbec | 0.782667 | 4 | 20 | {6000} | {0.073} | {375,750,1500,1500,3000,3750,3750,7500,9000,12000,37500} | 0.300707\n machdefs | rhdue | 0.782667 | 4 | 20 | {8000} | {0.073} | {500,1000,2000,2000,4000,5000,5000,10000,12000,16000,50000} | 0.300707\n machdefs | periodbec | 0.458667 | 4 | 11 | {22} | {0.262333} | {5,67,67,67,135,135,270,270,675,1350} | 0.415895\n machdefs | perioddue | 0.458667 | 4 | 10 | {30,90,180,360,1800,7,900,720,120,60} | {0.262333,0.0833333,0.053,0.0456667,0.0233333,0.021,0.021,0.0156667,0.0153333,0.000666667} | | 0.419195\n machdefs | action | 0.474333 | 13 | 56 | {Inspection,Overhaul,Cleaning,Clearances,\"Megger Report\"} | {0.151333,0.0966667,0.0746667,0.0273333,0.0236667} | {\"Actuation test\",Check,\"Check Position\",Greasing/Lubrication,Landing,\"Pressure Test\",Renewal,Renewal,\"Report Receipt\",Test,\"Water Washing\"} | 0.180053\n machdefs | application | 
0.973333 | 18 | 2 | {\"Megger Report\",\"CrankShaft Deflection Report\"} | {0.0236667,0.003} | | 0.999508\n(11 rows)\n\ndynacom=# SELECT \ntablename,attname,null_frac,avg_width,n_distinct,most_common_vals,most_common_freqs,histogram_bounds,correlation \nfrom pg_stats where tablename='items'; \n\n tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n-----------+-----------------+-----------+-----------+------------+-------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n items | id | 0 | 4 | -1 | | | {2315,7279,12104,15875,19170,22170,25511,28420,32582,35753,38322} | 0.427626\n items | vslwhid | 0 | 4 | 19 | {57,53,65,74} | {0.130333,0.125,0.116667,0.0746667} | {24,29,31,33,43,44,48,49,61,76,79} | 0.0679692\n items | serialno | 0.952 | 10 | 149 | | | {014-3255,120092,1294207,20081,318216,56678,80-51,A1-0548,BV54654,KC60525,XL5334} | -0.0161482\n items | rh | 0.863667 | 4 | 191 | {0} | {0.008} | {1,172,400,855,1292,2322,3191,4328,4906,6421,37679} | 0.0437569\n items | lastinspdate | 0.885 | 4 | 120 | | | {1999-05-28,2002-04-23,2002-12-06,2003-01-15,2003-02-01,2003-02-22,2003-03-04,2003-03-15,2003-03-21,2003-03-28,2003-10-09} | 0.101498\n items | classused | 0 | 4 | 2 | {0,1} | {0.985333,0.0146667} | | 0.979994\n items | classaa | 0.985333 | 4 | 43 | | | {5,24,50,69,93,104,132,178,686,1072,1241} | -0.114588\n items | classsurvey | 0.985333 | 31 | 44 | | | {\"Aux Boiler Feed Inner Pump (No.1)\",\"Ballast Inner Pump (No.1)\",\"Emergency Fire Pump\",\"M/E Cylinder Relief valve No2\",\"M/E Piston No4\",\"No.1 Cooling S.W.Pump for G/E\",\"No.2 Cargo Oil Pump\",\"No.2 Main Generator Diesel Engine\",\"No.4 Connecting rod, top end and guides\",\"No.6 Safety valve of M/E\",\"Sea Water Service Pump\"} | -0.0264975\n items | classsurveydate | 0.987333 | 4 | 20 | | | {1998-05-31,1998-05-31,2000-01-31,2000-05-31,2001-03-31,2001-09-30,2002-02-28,2002-07-31,2002-12-31,2003-02-16,2003-04-23} | 0.305832\n items | classduedate | 0.985333 | 4 | 22 | | | {2003-05-31,2003-07-31,2004-07-31,2005-01-31,2005-10-18,2006-07-31,2006-09-30,2007-07-31,2007-12-31,2008-02-28,2008-04-30} | 0.0222692\n items | classcomment | 0.997333 | 26 | 1 | {\"Main Propulsion System\"} | {0.00266667} | | 1\n items | defid | 0 | 4 | -0.243872 | {15856,15859,15851,13801,14179,14181,15860,15865,2771,2775} | {0.00333333,0.00233333,0.002,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00133333,0.00133333} | {2319,3192,5182,7387,9296,11020,12862,14001,15190,16852,18221} | 0.321816\n items | machtypecount | 0 | 4 | 8 | {1,2,3,4,6,5,7,8} | {0.62,0.22,0.139667,0.0113333,0.00466667,0.003,0.000666667,0.000666667} | | 0.489828\n items | totalrh | 0 | 4 | 2 | {0} | {0.999667} | | 0.999829\n items | comment | 0.928667 | 7 | 34 | | | {1,3,\"90KVA-General service\",No1,No1,No1,No2,No2,No2,No3,Stbd} | 0.384123\n items | lastrepdate | 0.742667 | 4 | 10 | {2003-03-31} | {0.187333} | {2002-06-30,2003-02-28,2003-02-28,2003-02-28,2003-04-01,2003-04-04,2003-04-04,2003-04-04,2003-04-08} | 0.887771\n(16 
rows)\n\n\n================================================================================\nIt seems that the presence of Statistics really hurt status table.\nIn the other cases (machdefs,items) VACUUM ANALYZE does\na pretty good job. (or at least compared to the \"no stats at all\" case).\n\nAlso Tom, i could give you access, if you want, to the test environment :)\n\n > > \n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 2 May 2003 17:03:21 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3 analyze & vacuum analyze problem " } ]
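A small follow-up experiment, not part of the original thread, is to raise the statistics
target for the column whose estimate looks off, re-analyze, and then compare what the
planner expects; the target value of 100 is just an example:

    -- sample assetidval more thoroughly than the default target of 10
    ALTER TABLE status ALTER COLUMN assetidval SET STATISTICS 100;
    ANALYZE status;

    -- check the table-level estimate and the per-column statistics afterwards
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'status';
    SELECT n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'status' AND attname = 'assetidval';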
[ { "msg_contents": "Hi there,\nI'm running into a quite puzzling simple example where the index I've\ncreated on a fairly big table (465K entries) is not used, against all common\nsense expectations:\nThe query I am trying to do (fast) is:\n\nselect count(*) from addresses;\n\nThis takes more than a second to complete, because, as the 'explain' command\nshows me,\nthe index created on 'addresses' is not used, and a seq scan is being used.\nOne would assume that the creation of an index would allow the counting of\nthe number of entries in a table to be instantaneous?\n\nHere are the details:\n\n* Using the latest postgresql 7.3.2 release, built and installed from\nsources on a Linux box, under Red Hat 8.0\n\n* I have an 'addresses' table defined as:\nColumn | Type\n-------------------------------\naddress | text\ncity | char var (20)\nzip | char var (5)\nstate | char var (2)\nUnique keys: addresses_idx\n\n* I have created a unique index 'addresses_idx' on (address, city, zip,\nstate):\n\\d addresses_idx;\nIndex \"addresses_idx\"\nColumn | Type\n-------------------------------\naddress | text\ncity | char var (20)\nzip | char var (5)\nstate | char var (2)\nunique btree\n\n* I did (re)create the index several times\n* I did run the vacuum analyse command several times\n* I forced enable_indexscan to true\n* I forced enable_seqscan to false\n\nDespite all of this, each time I try:\n===> explain select count(*) from addresses;\nI get the following:\n===> NOTICE: QUERY PLAN:\n===>\n===> Aggregate (cost=100012799.89..100012799.89 rows=1 width=0)\n===> -> Seq Scan on addresses (cost=100000000.00..100011635.11 rows=465911\nwidth=0)\n\nQuite puzzling, isn't it?\nI've searched a bunch of mailing lists and websites, and found many reports\nof special cases where it could be argued that the planner may have had a\ncase for choosing seq scanning over idx scanning, but unless I am missing\nsome fundamental concept, there's something wrong here.\nAny suggestion anyone?\nThanks,\n\nDenis\[email protected]\n\n", "msg_date": "Tue, 8 Apr 2003 12:57:16 -0700", "msg_from": "\"Denis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Yet Another (Simple) Case of Index not used" }, { "msg_contents": "as I remember, mysql keeps the record count in a variable and is instantaneous \nwith that kind of query. 
Recent posts suggest the Postgres does not keep that \nvariable and has to do the seq scan.\n\nDenis wrote:\n> Hi there,\n> I'm running into a quite puzzling simple example where the index I've\n> created on a fairly big table (465K entries) is not used, against all common\n> sense expectations:\n> The query I am trying to do (fast) is:\n> \n> select count(*) from addresses;\n> \n> This takes more than a second to complete, because, as the 'explain' command\n> shows me,\n> the index created on 'addresses' is not used, and a seq scan is being used.\n> One would assume that the creation of an index would allow the counting of\n> the number of entries in a table to be instantanous?\n> \n> Here are the details:\n> \n> * Using the latest postgresql 7.3.2 release, built and installed from\n> sources on a Linux box, under Red Hat 8.0\n> \n> * I have an 'addresses' table defined as:\n> Columm | Type\n> -------------------------------\n> address | text\n> city | char var (20)\n> zip | char var (5)\n> state | char var (2)\n> Unique keys: addresses_idx\n> \n> * I have created a unique index 'addresses_idx' on (address, city, zip,\n> state):\n> \\d addresses_idx;\n> Index \"addresses_idx\"\n> Columm | Type\n> -------------------------------\n> address | text\n> city | char var (20)\n> zip | char var (5)\n> state | char var (2)\n> unique btree\n> \n> * I did (re)create the index several times\n> * I did run the vacuum analyse command several times\n> * I forced enable_indexscan to true\n> * I forced enable_seqscan to false\n> \n> Despite of all of this, each time I try:\n> ===> explain select count(*) from addresses;\n> I get the following:\n> ===> NOTICE: QUERY PLAN:\n> ===>\n> ===> Aggregate (cost=100012799.89..100012799.89 rows=1 width=0)\n> ===> -> Seq Scan on addresses (cost=100000000.00..100011635.11 rows=465911\n> width=0)\n> \n> Quite puzzling, isn't it?\n> I've searched a bunch of mailing lists and websites, and found many reports\n> of special cases where it could be argued that the planner may have had a\n> case for choosing seq scanning over idx scanning, but unless I am missing\n> some fundamental concept, there's something wrong here.\n> Any suggestion anyone?\n> Thanks,\n> \n> Denis\n> [email protected]\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n", "msg_date": "Tue, 08 Apr 2003 13:25:44 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Dennis,\n\n> I'm running into a quite puzzling simple example where the index I've\n> created on a fairly big table (465K entries) is not used, against all common\n> sense expectations:\n> The query I am trying to do (fast) is:\n> \n> select count(*) from addresses;\n\nPostgreSQL is currently unable to use indexes on aggregate queries. 
This is \nbecause of two factors:\n1) MVCC means that the number of rows must be recalculated for each \nconnection's current transaction, and cannot be \"cached\" anywhere by the \ndatabase system;\n2) Our extensible model of user-defined aggregates means that each aggregate \nis a \"black box\" whose internal operations are invisible to the planner.\n\nThis is a known performance issue for Postgres, and I believe that a couple of \npeople on Hackers are looking at modifying aggregate implementation for 8.0 \nto use appropriate available indexes, at least for MIN, MAX and COUNT. Until \nthen, you will need to either put up with the delay, or create a \ntrigger-driven aggregates caching table.\n\nIf you are trying to do a correlated count, like \"SELECT type, count(*) from \naggregates GROUP BY type\", Tom Lane has already added a hash-aggregates \nstructure in the 7.4 source that will speed this type of query up \nconsiderably for systems with lots of RAM.\n\n(PS: in the future, please stick to posting questions to one list at a time, \nthanks)\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 8 Apr 2003 14:52:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "On Tue, Apr 08, 2003 at 12:57:16PM -0700, Denis wrote:\n> The query I am trying to do (fast) is:\n> \n> select count(*) from addresses;\n> \n> This takes more than a second to complete, because, as the 'explain' command\n> shows me,\n> the index created on 'addresses' is not used, and a seq scan is being used.\n> One would assume that the creation of an index would allow the counting of\n> the number of entries in a table to be instantanous?\n\nIncorrect assumption. select count(*) can produce different results in\ndifferent backends depending on the current state of the active\ntransactions.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> \"the West won the world not by the superiority of its ideas or values or\n> religion but rather by its superiority in applying organized violence.\n> Westerners often forget this fact, non-Westerners never do.\"\n> - Samuel P. Huntington", "msg_date": "Wed, 9 Apr 2003 09:46:23 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Interesting generic response. 
In other words, \"it all depends\".\nWell, a de facto observation is: \"In my case, it's always much slower with, say, mysql\".\nUnderstand me, I don't mean to be starting a performance comparaison mysql vs postgresql,\nwhich is probably an old subject, I am just looking for a solution to solve this type\nof performance issues, ie the generic cases:\nselect count(*) from addresses where address is like 'pattern%';\nWhich are very fast on mysql, and very slow on postgresql.\nUnderstood, it will always depend on some parameters, but the real question is: how\nmuch control does one have over those parameters, and how does one tweak them to reach\noptimal performance?\n\nD.\n\n\n\n\n\n > -----Original Message-----\n > From: [email protected]\n > [mailto:[email protected]]On Behalf Of Martijn van\n > Oosterhout\n > Sent: Tuesday, April 08, 2003 4:46 PM\n > To: Denis\n > Cc: [email protected]; [email protected];\n > [email protected]\n > Subject: Re: [PERFORM] [GENERAL] Yet Another (Simple) Case of Index not\n > used\n > \n > \n > On Tue, Apr 08, 2003 at 12:57:16PM -0700, Denis wrote:\n > > The query I am trying to do (fast) is:\n > > \n > > select count(*) from addresses;\n > > \n > > This takes more than a second to complete, because, as the 'explain' command\n > > shows me,\n > > the index created on 'addresses' is not used, and a seq scan is being used.\n > > One would assume that the creation of an index would allow the counting of\n > > the number of entries in a table to be instantanous?\n > \n > Incorrect assumption. select count(*) can produce different results in\n > different backends depending on the current state of the active\n > transactions.\n > -- \n > Martijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n > > \"the West won the world not by the superiority of its ideas or values or\n > > religion but rather by its superiority in applying organized violence.\n > > Westerners often forget this fact, non-Westerners never do.\"\n > > - Samuel P. Huntington\n > \n\n", "msg_date": "Tue, 8 Apr 2003 17:10:01 -0700", "msg_from": "\"Denis @ Next2Me\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Josh,\n\nI am no database expert, and even less knowledgeable about the internals\nof postgresql, so I'll trust you on the 2 points you make below.\n\nAre you saying the 7.4 'group by' trick would be faster than the simple select count(*)?\nThat seems hard to believe, being that the request now has to fetch / sort the data.\nI must be missing something.\n\nThe kind of requests that I am really interested in are:\nselect count(*) from table where table.column like 'pattern%'\nThese seems to go much master on mysql (which I guess it not a MVCC database? 
or wasn't \nthe Innobase supposed to make it so?), than on postgresql.\n\nSo, in the meantime, I've decided to split up my data into two sets,\nthe static big tables which are handled by mysql, and the rest of it handled \nby postgresql....\n\n\n\nps: apologies for the cross-posting.\n\n > -----Original Message-----\n > From: Josh Berkus [mailto:[email protected]]\n > Sent: Tuesday, April 08, 2003 2:53 PM\n > To: Denis; [email protected]\n > Subject: Re: [SQL] Yet Another (Simple) Case of Index not used\n > \n > \n > Dennis,\n > \n > > I'm running into a quite puzzling simple example where the index I've\n > > created on a fairly big table (465K entries) is not used, against all common\n > > sense expectations:\n > > The query I am trying to do (fast) is:\n > > \n > > select count(*) from addresses;\n > \n > PostgreSQL is currently unable to use indexes on aggregate queries. This is \n > because of two factors:\n > 1) MVCC means that the number of rows must be recalculated for each \n > connection's current transaction, and cannot be \"cached\" anywhere by the \n > database system;\n > 2) Our extensible model of user-defined aggregates means that each aggregate \n > is a \"black box\" whose internal operations are invisible to the planner.\n > \n > This is a known performance issue for Postgres, and I believe that a couple of \n > people on Hackers are looking at modifying aggregate implementation for 8.0 \n > to use appropriate available indexes, at least for MIN, MAX and COUNT. Until \n > then, you will need to either put up with the delay, or create a \n > trigger-driven aggregates caching table.\n > \n > If you are trying to do a correlated count, like \"SELECT type, count(*) from \n > aggregates GROUP BY type\", Tom Lane has already added a hash-aggregates \n > structure in the 7.4 source that will speed this type of query up \n > considerably for systems with lots of RAM.\n > \n > (PS: in the future, please stick to posting questions to one list at a time, \n > thanks)\n > \n > -- \n > -Josh Berkus\n > Aglio Database Solutions\n > San Francisco\n > \n\n", "msg_date": "Tue, 8 Apr 2003 17:21:16 -0700", "msg_from": "\"Denis @ Next2Me\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "\n\nOn Wed, 9 Apr 2003, Martijn van Oosterhout wrote:\n\n> On Tue, Apr 08, 2003 at 12:57:16PM -0700, Denis wrote:\n> > The query I am trying to do (fast) is:\n> >\n> > select count(*) from addresses;\n> >\n> > This takes more than a second to complete, because, as the 'explain' command\n> > shows me,\n> > the index created on 'addresses' is not used, and a seq scan is being used.\n> > One would assume that the creation of an index would allow the counting of\n> > the number of entries in a table to be instantanous?\n>\n> Incorrect assumption. select count(*) can produce different results in\n> different backends depending on the current state of the active\n> transactions.\n\nSome thoughts:\n\nSelect count(*) is often applied to views, and may take some time\ndepending on the underlying query.\n\nHowever, for a single table, I would have thought that if there are no\nwrite locks or open transactions for the table, the index would return a\nfaster result than a scan? 
Is there room for some optimisation here?\n\nDoes count(<primary_key>) work faster, poss using the unique index on the\nkey (for non-composite keys)?\n\n\nCheers\n Brent Wood\n\n", "msg_date": "Wed, 9 Apr 2003 12:44:48 +1200 (NZST)", "msg_from": "Brent Wood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet Another (Simple) Case of Index not used" }, { "msg_contents": "\nOn Tue, 8 Apr 2003, Denis @ Next2Me wrote:\n\n> The kind of requests that I am really interested in are:\n> select count(*) from table where table.column like 'pattern%'\n\nIf you think an index scan should be faster, you can try\nset enable_seqscan=off;\nand see if that changes the plan generated by explain and with analyze\nyou can compare the time used. Without information on the estimated\nselectivity it's hard to say what's right.\n\nIf it doesn't use the index (ie, it's still using a sequential scan)\nafter the enable_seqscan=off it's likely that you didn't initdb in \"C\"\nlocale in which case like won't use indexes currently (you can see the\narchives for long description, but the short one is that some of the\nlocale rules can cause problems with using the index).\n\n", "msg_date": "Tue, 8 Apr 2003 18:59:37 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Hi Denis,\r\n\r\n> The kind of requests that I am really interested in are:\r\n> select count(*) from table where table.column like 'pattern%'\r\n> These seems to go much master on mysql (which I guess it not a MVCC database? or wasn't \r\n> the Innobase supposed to make it so?), than on postgresql.\r\n\r\nA few things.\r\n\r\n* MVCC in PostgreSQL allows us to be way faster than MySQL when you have heaps of concurrent readers and writers. The tradeoff is that count(*) is slow since PostgreSQL needs to check that each tuple is actually visible to your query (eg. you start a transaction, somone else inserts a row, you do a count(*) - should the result include that new row or not? Answer: no.)\r\n\r\n* Just avoid doing count(*) over the entire table with no where clause!!! It's as easy as that\r\n\r\n* The LIKE 'pattern%' is indexable in Postgresql. You will need to create a normal btree index over table.column. So long as the index is returning a small portion of the table (eg. say only 5-10% of the fields begin with pattern), then the index will be used and it will be fast.\r\n\r\n* If you want really fast full text indexing, check out contrib/tsearch - it's really, really, really fast.\r\n\r\nChris\r\n", "msg_date": "Wed, 9 Apr 2003 10:35:22 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "On Tue, Apr 08, 2003 at 05:10:01PM -0700, Denis @ Next2Me wrote:\n> Interesting generic response. In other words, \"it all depends\".\n> Well, a de facto observation is: \"In my case, it's always much slower with, say, mysql\".\n\nCurious, is mysql still so fast when you have transactions enabled? 
How does\nit deal with the following:\n\nbegin;\ndelete from bigtable;\nselect count(*) from bigtable; -- Should return 0\nabort;\nselect count(*) from bigtable; -- Should give original size\n\n> Understand me, I don't mean to be starting a performance comparison mysql\n> vs postgresql, which is probably an old subject, I am just looking for a\n> solution to solve this type of performance issues, ie the generic cases:\n> select count(*) from addresses where address like 'pattern%';\n> Which are very fast on mysql, and very slow on postgresql.\n\nAh, but that may be caused by something else altogether. LIKE is only\nindexable in the C locale so if you have en_US as your locale, your LIKE\nwon't be indexable. See the discussion threads on this mailing list in the past.\n\n> Understood, it will always depend on some parameters, but the real\n> question is: how much control does one have over those parameters, and how\n> does one tweak them to reach optimal performance?\n\nHmm, it depends. One person put it that mysql goes for performance first,\nthen correctness, whereas postgresql goes for correctness first, then\nperformance.\n\nMaybe fti (full text indexing) would work better?\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> \"the West won the world not by the superiority of its ideas or values or\n> religion but rather by its superiority in applying organized violence.\n> Westerners often forget this fact, non-Westerners never do.\"\n> - Samuel P. Huntington", "msg_date": "Wed, 9 Apr 2003 13:18:39 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Stephan, Martijn,\ngood call, that was it: the C locale.\n\nI had used all the default settings when installing/creating the database,\nand apparently it used my default locale (en_US).\nI recreated (initdb) the database with --no-locale, and recreated the database,\nand sure enough, the query:\nselect count(*) from table where table.column like 'fol%'\nwas a zillion (well almost) times faster than it used to be,\nand on par with mysql's performance.\nAnd as expected, the EXPLAIN on that query does indeed show \nthe use of the index I had created on the table.\n\nSweet, I can now nuke mysql out of my system.\n\nFolks, thank you all for the help and other suggestions.\n\nDenis Amselem\nNext2Me Inc.\n\n\n\n\nStephan said:\n > If it doesn't use the index (ie, it's still using a sequential scan)\n > after the enable_seqscan=off it's likely that you didn't initdb in \"C\"\n > locale in which case like won't use indexes currently (you can see the\n > archives for long description, but the short one is that some of the\n > locale rules can cause problems with using the index).\n\nMartijn said:\n\n > Ah, but that may be caused by something else altogether. LIKE is only\n > indexable in the C locale so if you have en_US as your locale, your LIKE\n > won't be indexable. See the discussion threads on this mailing list in the past.\n > \n > \n\n", "msg_date": "Tue, 8 Apr 2003 22:54:46 -0700", "msg_from": "\"Denis @ Next2Me\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Denis,\n\n> Are you saying the 7.4 'group by' trick would be faster than the simple\n> select count(*)? That seems hard to believe, being that the request now has\n> to fetch / sort the data. 
I must be missing something.\n\nNo, I'm saying that the 7.4 hash-aggregate is faster than the same query was \nunder 7.2 or 7.3. Much faster. But it does little to speed up a raw \ncount(*).\n\n> The kind of requests that I am really interested in are:\n> select count(*) from table where table.column like 'pattern%'\n\nHash-aggregates may, in fact, help with this. Care to try downloading the \nthe source from CVS?\n\n> These seems to go much master on mysql (which I guess it not a MVCC\n> database? or wasn't the Innobase supposed to make it so?),\n\nThey did incorporate a lot of MVCC logic into InnoDB tables, yes. Which means \nthat if SELECT count(*) on an InnoDB table is just as fast as a MyISAM table, \nthen it is not accurate. This would be in keeping with MySQL's design \nphilosophy, which values performance and simplicity over accuracy and \nprecision -- the opposite of our philosophy.\n\n> So, in the meantime, I've decided to split up my data into two sets,\n> the static big tables which are handled by mysql, and the rest of it\n> handled by postgresql....\n\nHey, if it works for you, it's probably easier than dealing with the \nPostgreSQL workarounds to this performance issue. I'll ask you to give \nPostgreSQL a try for those tables again when 7.4 comes out.\n\n> ps: apologies for the cross-posting.\n\nDe nada. The Performance list is the right place for this sort of question \nin the future.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 09:18:45 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Josh Berkus wrote:\n> Denis,\n> \n> > Are you saying the 7.4 'group by' trick would be faster than the simple\n> > select count(*)? That seems hard to believe, being that the request now has\n> > to fetch / sort the data. I must be missing something.\n> \n> No, I'm saying that the 7.4 hash-aggregate is faster than the same query was \n> under 7.2 or 7.3. Much faster. But it does little to speed up a raw \n> count(*).\n> \n> > The kind of requests that I am really interested in are:\n> > select count(*) from table where table.column like 'pattern%'\n> \n> > These seems to go much master on mysql (which I guess it not a MVCC\n> > database? or wasn't the Innobase supposed to make it so?),\n> \n> They did incorporate a lot of MVCC logic into InnoDB tables, yes.\n> Which means that if SELECT count(*) on an InnoDB table is just as\n> fast as a MyISAM table, then it is not accurate.\n\nThis is not necessarily true. The trigger-based approach to tracking\nthe current number of rows in a table might well be implemented\ninternally, and that may actually be much faster than doing it using\ntriggers (the performance losses you saw may well have been the result\nof PG's somewhat poor trigger performance, and not the result of the\napproach itself. 
It would be interesting to know how triggers affect\nthe performance of other databases).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sat, 19 Apr 2003 06:01:46 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Josh Berkus wrote:\n>> They did incorporate a lot of MVCC logic into InnoDB tables, yes.\n>> Which means that if SELECT count(*) on an InnoDB table is just as\n>> fast as a MyISAM table, then it is not accurate.\n\n> This is not necessarily true. The trigger-based approach to tracking\n> the current number of rows in a table might well be implemented\n> internally, and that may actually be much faster than doing it using\n> triggers\n\nYou missed the point of Josh's comment: in an MVCC system, the correct\nCOUNT() varies depending on which transaction is asking. Therefore it\nis not possible for a centrally maintained row counter to give accurate\nresults to everybody, no matter how cheap it is to maintain.\n\n(The cheapness can be disputed as well, since it creates a single point\nof contention for all inserts and deletes on the table. But that's a\ndifferent topic.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 19 Apr 2003 11:58:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used " }, { "msg_contents": "Kevin, Tom:\n\n> (The cheapness can be disputed as well, since it creates a single point\n> of contention for all inserts and deletes on the table. But that's a\n> different topic.)\n\nActually, this was the problem with the trigger method of maintaining COUNT \ninformation in PostgreSQL. The statistics table itself becomes a \nsignificant source of delay, since if a table_A gets 10,000 rows updated then \ntable_count_A must necessarily be updated 10,000 times ... creating a lot of \ndead tuples and severely attenuating the table on disk until the next vacuum \n... resulting in Update #10,000 to table_count_A taking 100+ times as long as \nUpdate #1 does, due to the required random seek time on disk.\n\nI can personally think of two ways around this:\n\nIn MySQL: store table_count_A as a non-MVCC table or global variable. \nDrawback: the count would not be accurate, as you would see changes due to \nincomplete transactions and eventually the count would be knocked off \ncompletely by an overload of multi-user activity. However, this does fit \nwith MySQL's design philosophy of \"Speed over accuracy\", so I suspect that \nthat's what they're doing.\n\nIn PostgreSQL:\na) Put table_count_A on superfast media like a RAM card so that random seeks \nafter 10,000 updates do not become a significant delay;\nb) create an asynchronous table aggregates collector which would collect \nprogrammed statistics (like count(*) from table A) much in the same way that \nthe planner statistics collector does. 
This would have the disadvantage of \non being up to date when the database is idle, but the advantage of not \nimposing any significant overhead on Updates.\n\t(Incidentally, I proposed this to one of my clients who complained about \nPostgres' slow aggregate performance, but they declined to fund the effort)\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Sat, 19 Apr 2003 12:03:18 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> In PostgreSQL:\n> a) Put table_count_A on superfast media like a RAM card so that random seeks \n> after 10,000 updates do not become a significant delay;\n\nAs long as we're talking ugly, here ;-)\n\nYou could use a sequence to hold the aggregate counter. A sequence\nisn't transactional and so does not accumulate dead tuples. \"setval()\"\nand \"select last_value\" should have constant-time performance.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 19 Apr 2003 16:26:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > Josh Berkus wrote:\n> >> They did incorporate a lot of MVCC logic into InnoDB tables, yes.\n> >> Which means that if SELECT count(*) on an InnoDB table is just as\n> >> fast as a MyISAM table, then it is not accurate.\n> \n> > This is not necessarily true. The trigger-based approach to tracking\n> > the current number of rows in a table might well be implemented\n> > internally, and that may actually be much faster than doing it using\n> > triggers\n> \n> You missed the point of Josh's comment: in an MVCC system, the correct\n> COUNT() varies depending on which transaction is asking. Therefore it\n> is not possible for a centrally maintained row counter to give accurate\n> results to everybody, no matter how cheap it is to maintain.\n\nHmm...true...but only if you really implement it as a faithful copy of\nthe trigger-based method. Implementing it on the backend brings some\nadvantages to the table, to wit:\n\n* The individual transactions don't need to update the\n externally-visible count on every insert or delete, they only need\n to update it at commit time.\n\n* The transaction can keep a count of the number of inserted and\n deleted tuples it generates (on a per-table basis) during the life\n of the transaction. The count value it returns to a client is the\n count value it reads from the table that stores the count value plus\n any differences that have been applied during the transaction. This\n is fast, because the backend handling the transaction can keep this\n difference value in its own private memory.\n\n* When a transaction commits, it only needs to apply the \"diff value\"\n it stores internally to the external count value.\n\nContention on the count value is only an issue if the external count\nvalue is currently being written to by a transaction in the commit\nphase. But the only time a transaction will be interested in reading\nthat value is when it's performing a count(*) operation or when it's\ncommitting inserts/deletes that happened on the table in question (and\nthen only if the number of tuples inserted differs from the number\ndeleted). 
So the total amount of contention should be relatively low.\n\n\n> (The cheapness can be disputed as well, since it creates a single point\n> of contention for all inserts and deletes on the table. But that's a\n> different topic.)\n\nThat's true, but the single point of contention is only an issue at\ntransaction commit time (unless you're implementing READ UNCOMMITTED),\nat least if you do something like what I described above.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sat, 19 Apr 2003 18:13:37 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Tom Lane wrote:\n>> You missed the point of Josh's comment: in an MVCC system, the correct\n>> COUNT() varies depending on which transaction is asking. Therefore it\n>> is not possible for a centrally maintained row counter to give accurate\n>> results to everybody, no matter how cheap it is to maintain.\n\n> Hmm...true...but only if you really implement it as a faithful copy of\n> the trigger-based method.\n> [ instead have transactions save up net deltas to apply at commit ]\n\nGood try, but it doesn't solve the problem. SERIALIZABLE transactions\nshould not see deltas applied by any transaction that commits after\nthey start. READ COMMITTED transactions can see such deltas --- but not\ndeltas applied since the start of their current statement. (And there\ncould be several different \"current statements\" with different snapshots\nin progress in a single READ COMMITTED transaction.)\n\nAFAICS, central-counter techniques could only work in an MVCC system\nif each transaction copies every counter in the system at each snapshot\nfreeze point, in case it finds itself needing that counter value later\non. This is a huge amount of mostly-useless overhead, and it makes the\nproblem of lock contention for access to the counters several orders of\nmagnitude worse than you'd first think.\n\nOf course you can dodge lots of this overhead if you're willing to\naccept approximate answers. But I don't believe that central counters\nare useful in an exact-MVCC-semantics system.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 19 Apr 2003 23:34:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > Tom Lane wrote:\n> >> You missed the point of Josh's comment: in an MVCC system, the correct\n> >> COUNT() varies depending on which transaction is asking. Therefore it\n> >> is not possible for a centrally maintained row counter to give accurate\n> >> results to everybody, no matter how cheap it is to maintain.\n> \n> > Hmm...true...but only if you really implement it as a faithful copy of\n> > the trigger-based method.\n> > [ instead have transactions save up net deltas to apply at commit ]\n> \n> Good try, but it doesn't solve the problem. SERIALIZABLE transactions\n> should not see deltas applied by any transaction that commits after\n> they start. READ COMMITTED transactions can see such deltas --- but not\n> deltas applied since the start of their current statement. 
(And there\n> could be several different \"current statements\" with different snapshots\n> in progress in a single READ COMMITTED transaction.)\n\nThis is why I suspect the best way to manage this would be to manage\nthe counter itself using the MVCC mechanism (that is, you treat the\nshared counter as a row in a table just like any other and, in fact,\nit might be most beneficial for it to actually be exactly that), which\nhandles the visibility problem automatically. But I don't know how\nmuch contention there would be as a result.\n\n> Of course you can dodge lots of this overhead if you're willing to\n> accept approximate answers. But I don't believe that central counters\n> are useful in an exact-MVCC-semantics system.\n\nNo, but an MVCC-managed counter would be useful in such a system,\nwouldn't it? Or am I missing something there, too (the deltas\nthemselves would be managed as described, and would be applied as\ndescribed)?\n\nSo: how much contention would there be if the counter were managed in\nexactly the same way as any row of a table is managed? Because I'm\nnot terribly familiar with how PG manages MVCC (pointers to\ndocumentation on it welcomed) I can't answer that question myself.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sat, 19 Apr 2003 23:28:52 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Kevin Brown wrote:\n\n>\n>No, but an MVCC-managed counter would be useful in such a system,\n>wouldn't it? Or am I missing something there, too (the deltas\n>themselves would be managed as described, and would be applied as\n>described)?\n>\n>So: how much contention would there be if the counter were managed in\n>exactly the same way as any row of a table is managed? Because I'm\n>not terribly familiar with how PG manages MVCC (pointers to\n>documentation on it welcomed) I can't answer that question myself.\n>\n> \n>\nIt looks to me that a \"row number -1\" really would solve this problem.\n\n I think a row counter on each table would be even useful for some kind \nof auto-vacuum mechanism, that could be triggered if pg_class.reltuples \ndeviates too far from the real row count. Observing this mailing list, \nmissing or outdated statistics still seem to be a major source of \nperformance degradation. We all know these 1000 row estimates from \nEXPLAIN, don't we? A default vacuum strategy for pgsql newbies should \nsolve a lot of those problems, preventing a lot of \"pgsql is slow\" threads.\n\nRegards,\nAndreas\n\n", "msg_date": "Sun, 20 Apr 2003 12:07:53 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> This is why I suspect the best way to manage this would be to manage\n> the counter itself using the MVCC mechanism (that is, you treat the\n> shared counter as a row in a table just like any other and, in fact,\n> it might be most beneficial for it to actually be exactly that), which\n> handles the visibility problem automatically. But I don't know how\n> much contention there would be as a result.\n\nHm. Contention probably wouldn't be the killer, since if transactions\ndon't try to update the count until they are about to commit, they won't\nbe holding the row lock for long. 
(You'd have to beware of deadlocks\nbetween transactions that need to update multiple counters, but that\nseems soluble.) What *would* be a problem is that such counter tables\nwould accumulate huge numbers of dead rows very quickly, making it\ninefficient to find the live row. Josh already mentioned this as a\nproblem with user-trigger-based counting. You could stanch the bleeding\nwith sufficiently frequent vacuums, perhaps, but it just doesn't look\nvery appealing.\n\nUltimately what this comes down to is \"how much overhead are we willing\nto load onto all other operations in order to make SELECT-COUNT(*)-with-\nno-WHERE-clause fast\"? Postgres has made a set of design choices that\nfavor the other operations. If you've designed an application that\nlives or dies by fast COUNT(*), perhaps you should choose another\ndatabase.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 20 Apr 2003 11:21:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used " }, { "msg_contents": "Andreas Pflug <[email protected]> writes:\n> I think a row counter on each table would be even useful for some kind \n> of auto-vacuum mechanism, that could be triggered if pg_class.reltuples \n> deviates too far from the real row count.\n\nIt would be counting the wrong thing. auto-vacuum needs to know how\nmany dead tuples are in a table, not how many live ones. Example:\nUPDATE doesn't change the live-tuple count (without this property,\nI don't think the sort of count maintenance Kevin is proposing could\npossibly be efficient enough to be interesting). But it does create\na dead tuple that vacuum wants to know about.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 20 Apr 2003 11:25:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used " }, { "msg_contents": "Tom Lane wrote:\n\n>It would be counting the wrong thing. auto-vacuum needs to know how\n>many dead tuples are in a table, not how many live ones. Example:\n>UPDATE doesn't change the live-tuple count (without this property,\n>I don't think the sort of count maintenance Kevin is proposing could\n>possibly be efficient enough to be interesting). But it does create\n>a dead tuple that vacuum wants to know about.\n>\n> \n>\nI understand your point, but is this about VACUUM only or VACUUM ANALYZE \ntoo? People wouldn't bother about big databases if it's still fast \n(until the disk is full :-)\n\nDo dead tuples affect query planning? I thought the plan only cares \nabout existing rows and their data patterns.\nSo count(*), pg_stat_all_tables.n_tup_ins, .n_tup_upd and .n_tup_del all \ntogether can make a VACUUM ANALYZE necessary, right?\n\nRegards,\nAndreas\n\n", "msg_date": "Sun, 20 Apr 2003 17:37:57 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "On Sun, Apr 20, 2003 at 11:21:32AM -0400, Tom Lane wrote:\n\n> favor the other operations. If you've designed an application that\n> lives or dies by fast COUNT(*), perhaps you should choose another\n> database.\n\nOr consider redesigning the application. The \"no where clause\"\nrestriction sure smacks of poor database normalisation to me. 
\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sun, 20 Apr 2003 12:46:09 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > This is why I suspect the best way to manage this would be to manage\n> > the counter itself using the MVCC mechanism (that is, you treat the\n> > shared counter as a row in a table just like any other and, in fact,\n> > it might be most beneficial for it to actually be exactly that), which\n> > handles the visibility problem automatically. But I don't know how\n> > much contention there would be as a result.\n> \n> Hm. Contention probably wouldn't be the killer, since if transactions\n> don't try to update the count until they are about to commit, they won't\n> be holding the row lock for long. (You'd have to beware of deadlocks\n> between transactions that need to update multiple counters, but that\n> seems soluble.) What *would* be a problem is that such counter tables\n> would accumulate huge numbers of dead rows very quickly, making it\n> inefficient to find the live row. \n\nBut that inefficiency is a problem for *all* oft-updated tables, is it\nnot? I know that you'll end up with an additional n tuples per\ntransaction (where n is the average number of tables inserted into or\ndeleted from per transaction), so this isn't an insignificant problem,\nbut it's one faced by any application that often updates a small\ntable.\n\nCausing a transaction which is already doing inserts/deletes to take\nthe hit of doing one additional update doesn't seem to me to be a\nparticularly large sacrifice, especially since the table it's updating\n(the one that contains the counts) is likely to be cached in its\nentirety. The chances are reasonable that the other activity the\ntransaction is performing will dwarf the additional effort that\nmaintaining the count demands.\n\n> Josh already mentioned this as a problem with user-trigger-based\n> counting.\n\nRight, but the trigger based mechanism probably magnifies the issue by\norders of magnitude, and thus can't necessarily be used as an argument\nagainst an internally-implemented method.\n\n> You could stanch the bleeding with sufficiently frequent vacuums,\n> perhaps, but it just doesn't look very appealing.\n\nI would say this is more a strong argument for automatic VACUUM\nmanagement than against count management, because what you say here is\ntrue of any oft-updated, oft-referenced table.\n\n> Ultimately what this comes down to is \"how much overhead are we willing\n> to load onto all other operations in order to make SELECT-COUNT(*)-with-\n> no-WHERE-clause fast\"? Postgres has made a set of design choices that\n> favor the other operations. If you've designed an application that\n> lives or dies by fast COUNT(*), perhaps you should choose another\n> database.\n\nOr perhaps a mechanism similar to the one being discussed should be\nimplemented and controlled with a GUC variable, so instead of forcing\nsomeone to choose another database you force them to choose between\nthe performance tradeoffs involved. We already give DBAs such choices\nelsewhere, e.g. 
pg_stat_activity.\n\nThe real question in all this is whether or not fast COUNT(*)\noperations are needed often enough to even justify implementing a\nmechanism to make them possible in PG. The question of getting fast\nanswers from COUNT(*) comes up often enough to be a FAQ, and that\nsuggests that there's enough demand for the feature that it may be\nworth implementing just to shut those asking for it up. :-)\n\nPersonally, I'd rather see such development effort go towards more\nbeneficial improvements, such as replication, 2PC, SQL/MED, etc. (or\neven improving the efficiency of MVCC, since it was mentioned here as\na problem! :-). I consider COUNT(*) without a WHERE clause to be a\ncorner case, despite the frequency of questions about it. But I don't\nthink we should reject a patch to implement fast COUNT(*) just because\nit represents a performance tradeoff, at least if it's GUC-controlled.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sun, 20 Apr 2003 17:46:30 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Personally, I'd rather see such development effort go towards more\n> beneficial improvements, such as replication, 2PC, SQL/MED, etc. (or\n> even improving the efficiency of MVCC, since it was mentioned here as\n> a problem! :-). I consider COUNT(*) without a WHERE clause to be a\n> corner case, despite the frequency of questions about it.\n\nExactly.\n\n> But I don't\n> think we should reject a patch to implement fast COUNT(*) just because\n> it represents a performance tradeoff, at least if it's GUC-controlled.\n\nWell, this is moot since I see no one offering to provide such a patch.\nBut performance tradeoffs are only one of the costs involved. I suspect\nany such mechanism would be invasive enough to represent a nontrivial\nongoing maintenance cost, whether anyone uses it or not. The extent\nto which it munges core functionality would have to be a factor in\ndeciding whether to accept it. It'd take lots more thought than we've\nexpended in this thread to get an accurate handle on just what would\nbe involved...\n\n(BTW, if anyone actually is thinking about this, please make it a\nper-table option not a global GUC option.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 20 Apr 2003 21:53:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used " }, { "msg_contents": "Kevin,\n\n> > Josh already mentioned this as a problem with user-trigger-based\n> > counting.\n>\n> Right, but the trigger based mechanism probably magnifies the issue by\n> orders of magnitude, and thus can't necessarily be used as an argument\n> against an internally-implemented method.\n\nI'm not sure about that, Kevin. The production trigger test was written in C \n(by Joe Conway), using some of the best memory/efficiency management he could \ndevise. I could buy that the trigger mechanism adds a certain fixed \noverhead to the process, but not the contention that we were seeing ... \nespecially not the geometric progression of inefficiency as the transaction \ncount went up. 
We'll talk about this offlist; I may be able to get the \nclient to authorize letting you examine the database.\n\nFor further detail, our setup was sort of a \"destruction test\"; including:\n1) a slightly underpowered server running too many processes;\n2) a very high disk contention environment, with multiple applications \nfighting for I/O.\n3) running COUNT(*), GROUP BY x on a table with 1.4 million rows, which was \nbeing updated in batches of 10,000 rows to 40,000 rows every few minutes.\n\nAs I said before, the overhead for c-trigger based accounting, within the MVCC \nframework, was quite tolerable with small update batches, only 9-11% penalty \nto the updates overall for batches of 100-300 updates. However, as we \nincreased the application activity, the update penalty increased, up to \n40-45% with the full production load.\n\nIt's not hard to figure out why; like most user's servers, the aggregate \ncaching table was on the same disk as the table(s) being updated. The \nresut was a huge amount of disk-head-skipping between the updated table and \nthe aggregate caching table every time a commit hit the database, with random \nseek times increasing the longer the time since the last VACUUM.\n\nNow, on a better server with these tables on fast RAID or on different \nspindles, I expect the result would be somewhat better. However, I also \nsuspect that many of the users who complain the loudest about slow count(*) \nare operating in single-spindle environments.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Mon, 21 Apr 2003 09:14:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "On Sat, Apr 19, 2003 at 12:03:18PM -0700, Josh Berkus wrote:\n> Kevin, Tom:\n> \n> > (The cheapness can be disputed as well, since it creates a single point\n> > of contention for all inserts and deletes on the table. But that's a\n> > different topic.)\n> \n> Actually, this was the problem with the trigger method of maintaining COUNT \n> information in PostgreSQL. The statistics table itself becomes a \n> significant souce of delay, since if a table_A gets 10,000 rows updated than \n> table_count_A must necessarily be updated 10,000 times ... creating a lot of \n> dead tuples and severely attenuating the table on disk until the next vacuum \n> ... resulting in Update #10,000 to table_count_A taking 100+ times as long as \n> Update #1 does, due to the required random seek time on disk.\n \nOnce statement level triggers are implimented, the performance would\nprobably be fine, assuming your update was a single statement.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 22 Apr 2003 03:23:39 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "> Once statement level triggers are implimented, the performance would\n> probably be fine, assuming your update was a single statement.\n\nStatement triggers _are_ implemented in CVS.\n\nChris\n\n", "msg_date": "Tue, 22 Apr 2003 16:33:27 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> AFAICS, central-counter techniques could only work in an MVCC system\n> if each transaction copies every counter in the system at each snapshot\n> freeze point, in case it finds itself needing that counter value later\n> on. This is a huge amount of mostly-useless overhead, and it makes the\n> problem of lock contention for access to the counters several orders of\n> magnitude worse than you'd first think.\n\nWell, one option would be to do it in a lazy way. If you do an update on a\ntable with cached aggregate data just throw the data out. This way you get to\ncache data on infrequently updated tables and get only a very small penalty on\nfrequently updated tables.\n\n--\ngreg\n\n", "msg_date": "23 Apr 2003 12:32:09 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Yet Another (Simple) Case of Index not used" } ]
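A small sketch of the lazy caching idea Greg Stark raises above, combined with the statement-level triggers that Jim Nasby asks about and Christopher notes are already in CVS (7.4): instead of keeping a counter exactly current, any write to the table simply discards the cached value, and the application recomputes count(*) only on a cache miss. This is only an illustration, not code from the thread; the count_cache table, the function and trigger names, and the addresses example table are made up here, and it assumes plpgsql is installed in the database.

CREATE TABLE count_cache (
    table_name text PRIMARY KEY,
    n_rows     bigint NOT NULL
);

CREATE OR REPLACE FUNCTION count_cache_invalidate() RETURNS trigger AS '
BEGIN
    -- any insert/update/delete makes the cached count stale, so drop it
    DELETE FROM count_cache WHERE table_name = TG_RELNAME;
    RETURN NULL;   -- return value is ignored for AFTER triggers
END;
' LANGUAGE plpgsql;

CREATE TRIGGER addresses_count_inval
    AFTER INSERT OR UPDATE OR DELETE ON addresses
    FOR EACH STATEMENT EXECUTE PROCEDURE count_cache_invalidate();

-- Reader side: try the cache first; on a miss, recompute once and store it.
SELECT n_rows FROM count_cache WHERE table_name = 'addresses';
INSERT INTO count_cache SELECT 'addresses', count(*) FROM addresses;

As Greg suggests, frequently updated tables pay only one extra DELETE per modifying statement, while rarely updated tables get count(*) answered from the one-row cache.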
[ { "msg_contents": "> -----Original Message-----\n> From: Denis [mailto:[email protected]] \n> Sent: Tuesday, April 08, 2003 12:57 PM\n> To: [email protected]; \n> [email protected]; [email protected]\n> Subject: [GENERAL] Yet Another (Simple) Case of Index not used\n> \n> \n> Hi there,\n> I'm running into a quite puzzling simple example where the \n> index I've created on a fairly big table (465K entries) is \n> not used, against all common sense expectations: The query I \n> am trying to do (fast) is:\n> \n> select count(*) from addresses;\n> \n> This takes more than a second to complete, because, as the \n> 'explain' command shows me, the index created on 'addresses' \n> is not used, and a seq scan is being used. \n\nAs well it should be.\n\n> One would assume \n> that the creation of an index would allow the counting of the \n> number of entries in a table to be instantanous?\n\nTraversing the index to perform the count will definitely make the query\nmany times slower.\n\nA general rule of thumb (not sure if it is true with PostgreSQL) is that\nif you have to traverse more than 10% of the data with an index then a\nfull table scan will be faster. This is especially true when there is\nhighly redundant data in the index fields. If there were an index on\nbit data type, and you have half and half 1 and 0, an index scan of the\ntable will be disastrous.\n\nTo simply scan the table, we will just sequentially read pages until the\ndata is exhausted. If we follow the index, we will randomly jump from\npage to page, defeating the read buffering.\n[snip]\n\n", "msg_date": "Tue, 8 Apr 2003 13:26:21 -0700", "msg_from": "\"Dann Corbit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Yet Another (Simple) Case of Index not used" }, { "msg_contents": "from mysql manual:\n-------------------------------------------------------------\n\"COUNT(*) is optimized to return very quickly if the SELECT retrieves from one \ntable, no other columns are retrieved, and there is no WHERE clause. For example:\n\nmysql> select COUNT(*) from student;\"\n-------------------------------------------------------------\n\nA nice little optimization, maybe not possible in a MVCC system.\n\nDann Corbit wrote:\n>>-----Original Message-----\n>>From: Denis [mailto:[email protected]] \n>>Sent: Tuesday, April 08, 2003 12:57 PM\n>>To: [email protected]; \n>>[email protected]; [email protected]\n>>Subject: [GENERAL] Yet Another (Simple) Case of Index not used\n>>\n>>\n>>Hi there,\n>>I'm running into a quite puzzling simple example where the \n>>index I've created on a fairly big table (465K entries) is \n>>not used, against all common sense expectations: The query I \n>>am trying to do (fast) is:\n>>\n>>select count(*) from addresses;\n>>\n>>This takes more than a second to complete, because, as the \n>>'explain' command shows me, the index created on 'addresses' \n>>is not used, and a seq scan is being used. \n> \n> \n> As well it should be.\n> \n> \n>>One would assume \n>>that the creation of an index would allow the counting of the \n>>number of entries in a table to be instantanous?\n> \n> \n> Traversing the index to perform the count will definitely make the query\n> many times slower.\n> \n> A general rule of thumb (not sure if it is true with PostgreSQL) is that\n> if you have to traverse more than 10% of the data with an index then a\n> full table scan will be faster. This is especially true when there is\n> highly redundant data in the index fields. 
If there were an index on\n> bit data type, and you have half and half 1 and 0, an index scan of the\n> table will be disastrous.\n> \n> To simply scan the table, we will just sequentially read pages until the\n> data is exhausted. If we follow the index, we will randomly jump from\n> page to page, defeating the read buffering.\n> [snip]\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Tue, 08 Apr 2003 13:43:56 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet Another (Simple) Case of Index not used" }, { "msg_contents": "Dennis Gearon wrote:\n> from mysql manual:\n> -------------------------------------------------------------\n> \"COUNT(*) is optimized to return very quickly if the SELECT retrieves from one \n> table, no other columns are retrieved, and there is no WHERE clause. For example:\n> \n> mysql> select COUNT(*) from student;\"\n> -------------------------------------------------------------\n> \n> A nice little optimization, maybe not possible in a MVCC system.\n\nI think the only thing you can do with MVCC is to cache the value and\ntranaction id for \"SELECT AGG(*) FROM tab\" and make the cached value\nvisible to transaction id's greater than the one that executed the\nquery, and invalidate the cache every time the table is modified.\n\nIn fact, don't clear the cache, just record the transaction id of the\ntable modification command so we can use standard visibility routines to\nmake the cache usable as long as possiible.\n\nThe cleanest way would probably be to create an aggregate cache system\ntable, and to insert into it when someone does an unqualified aggregate,\nand to delete from it when someone modifies the table --- the MVCC tuple\nvisibility rules are handled automatically. Queries can look in there\nto see if a visible cached value already exists. Of course, the big\nquestion is whether this would be a big win, and whether the cost of\nupkeep would justify it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n", "msg_date": "Tue, 15 Apr 2003 10:23:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "On Tuesday 15 Apr 2003 3:23 pm, Bruce Momjian wrote:\n> Dennis Gearon wrote:\n> > from mysql manual:\n> > -------------------------------------------------------------\n> > \"COUNT(*) is optimized to return very quickly if the SELECT retrieves\n> > from one table, no other columns are retrieved, and there is no WHERE\n> > clause. For example:\n> >\n> > mysql> select COUNT(*) from student;\"\n> > -------------------------------------------------------------\n\n> The cleanest way would probably be to create an aggregate cache system\n> table, and to insert into it when someone does an unqualified aggregate,\n> and to delete from it when someone modifies the table --- the MVCC tuple\n> visibility rules are handled automatically. Queries can look in there\n> to see if a visible cached value already exists. 
Of course, the big\n> question is whether this would be a big win, and whether the cost of\n> upkeep would justify it.\n\nIf the rule system could handle something like:\n\nCREATE RULE quick_foo_count AS ON SELECT count(*) FROM foo \nDO INSTEAD\nSELECT quick_count FROM agg_cache WHERE tbl_name='foo';\n\nThe whole thing could be handled by user-space triggers/rules and still \ninvisible to the end-user.\n\n-- \n Richard Huxton\n\n", "msg_date": "Tue, 15 Apr 2003 17:29:45 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Yet Another (Simple) Case of Index not used" }, { "msg_contents": "\nAdded to TODO:\n\n\t* Consider using MVCC to cache count(*) queries with no WHERE\n\t clause\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Dennis Gearon wrote:\n> > from mysql manual:\n> > -------------------------------------------------------------\n> > \"COUNT(*) is optimized to return very quickly if the SELECT retrieves from one \n> > table, no other columns are retrieved, and there is no WHERE clause. For example:\n> > \n> > mysql> select COUNT(*) from student;\"\n> > -------------------------------------------------------------\n> > \n> > A nice little optimization, maybe not possible in a MVCC system.\n> \n> I think the only thing you can do with MVCC is to cache the value and\n> tranaction id for \"SELECT AGG(*) FROM tab\" and make the cached value\n> visible to transaction id's greater than the one that executed the\n> query, and invalidate the cache every time the table is modified.\n> \n> In fact, don't clear the cache, just record the transaction id of the\n> table modification command so we can use standard visibility routines to\n> make the cache usable as long as possiible.\n> \n> The cleanest way would probably be to create an aggregate cache system\n> table, and to insert into it when someone does an unqualified aggregate,\n> and to delete from it when someone modifies the table --- the MVCC tuple\n> visibility rules are handled automatically. Queries can look in there\n> to see if a visible cached value already exists. Of course, the big\n> question is whether this would be a big win, and whether the cost of\n> upkeep would justify it.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 May 2003 22:31:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Yet Another (Simple) Case of Index not used" } ]
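Following up the user-space approach Richard Huxton sketches above (and the trigger experiment Josh Berkus describes in the other thread), a minimal row-level-trigger version of the agg_cache idea might look like the following. It is a hedged sketch, not a tested implementation: foo, agg_cache, tbl_name and quick_count are Richard's illustrative names, the function and trigger names are invented here, plpgsql is assumed to be installed, and, as Josh and Tom point out, the single counter row turns over on every insert or delete, so it accumulates dead tuples and needs very frequent vacuuming.

CREATE TABLE agg_cache (
    tbl_name    text PRIMARY KEY,
    quick_count bigint NOT NULL
);

CREATE OR REPLACE FUNCTION maintain_quick_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE agg_cache SET quick_count = quick_count + 1
         WHERE tbl_name = TG_RELNAME;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE agg_cache SET quick_count = quick_count - 1
         WHERE tbl_name = TG_RELNAME;
    END IF;
    RETURN NULL;  -- ignored for AFTER triggers
END;
' LANGUAGE plpgsql;

-- UPDATEs are deliberately not covered: they do not change the live row count.
CREATE TRIGGER foo_quick_count
    AFTER INSERT OR DELETE ON foo
    FOR EACH ROW EXECUTE PROCEDURE maintain_quick_count();

-- Seed the counter once, then read it instead of scanning foo:
INSERT INTO agg_cache SELECT 'foo', count(*) FROM foo;
SELECT quick_count FROM agg_cache WHERE tbl_name = 'foo';

Tom Lane's sequence suggestion from the earlier thread (setval() to store the value, select last_value to read it) trades exact MVCC visibility for a counter that never accumulates dead tuples; which trade-off is acceptable depends on the application.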
[ { "msg_contents": "Hello all,\n\nI've been the lead developer of a successful (so-far) web application.\nCurrently we run the database and the web application on the same server,\nbut it will soon be necessary to split them and add some more web servers.\n\nI'm not counting on using replication any time soon, so I want to choose the\nhardware platform that will meet my needs for some time. Maybe you can give\nsome suggestions...\n\nMy application is written in PHP and relies on the apache web server. We\nuse persistent db connections, so it appears (correct me if I'm wrong) that\nevery apache child process gets one connection to the database. If my\nMaxClients is 150, each web server will be able to make up to 150 db\nconnections. I'd like to play with that number a little bit once I get the\nwebserver off of the db server. I feel that I could handle a greater number\nof Clients, so let us say that I have up to 200 connections per server.\n\nI'd like to have room to grow, so let's also say that I go to 20 web servers\nfor a total of 4000 connections. (I'd probably like to go even higher, so\nconsider this our starting point)\n\nWith a couple dozen active accounts and a lot of test data, my current\ndatabase is equiv to about 100 active accounts. Its current disk space\nconsumption is:\ndata # du --max-depth=2\n3656 ./base/1\n3656 ./base/16975\n4292 ./base/95378\n177824 ./base/200371\n189432 ./base\n144 ./global\n82024 ./pg_xlog\n2192 ./pg_clog\n273836 .\n\nThis is not optimized and there is a lot of old data, but to be safe, maybe\nwe should assume that each account uses 4 MB of disk space in the db,\ncounting indexes, tables and etc. I'd like to scale to 15,000 - 25,000\naccounts, but I doubt that will be feasible at my budget level. (Also,\nthere is a lot of optimizing to do, so it won't surprise me if this 4MB\nnumber is more like 2MB or even less)\n\nI'm not as concerned with disk subsystem or layout at the moment. I've seen\na lot of good documentation (especially from Bruce Momjian, thanks!) on this\nsubject. I'm mostly concerned with choosing the platform that's going to\nallow the scalability I need.\n\nCurrently I'm most experienced in Linux, especially RedHat. I'm \"certified\"\non SCO Openserver (5.x) and I've played with Irix, OSF/1 (I don't think it's\ncalled that anymore), Free BSD (3.x) and Solaris (2.x). I'm most\ncomfortable with Linux, but I'm willing to use a different platform if it\nwill be beneficial. I've heard that Solaris on the Sparc platform is\ncapable of addressing larger amounts of RAM than Linux on Intel does. I\ndon't know if that's true or if that has bearing, but I'd like to hear your\nopinions.\n\nMy budget is going to be between (US) $5,000 and $10,000 and I'd like to\nstay under $7,000. 
I'm a major bargain hunter, so I shop e-bay a lot and\nhere are some samplings that I think may be relevant for discussion:\n\nSUN (I'm not an expert in this, advice is requested)\n----------------------------------------------------\nSUN ENTERPRISE 4500 8x400 Mhz 4MB Cache CPUs 8GB RAM no hard drives ~$6,000\nSun E3500 - 8 x 336MHz 4MB Cache CPUs 4GB RAM 8 x 9.1GB FC disks ~$600.00\nAny other suggestions?\n\nINTEL (I'm much more familiar with this area)\n----------------------------------------------------\nCompaq DL580 4x700 MHz 2MB Cache CPUs 4GB RAM (16GB Max) HW Raid w/ 64MB\nCache ~$6000\nIBM Netfinity 7100 4x500 MHz 1MB Cache CPUs up to (16GB Max) HW Raid\nDell PowerEdge 8450 8x550 2M Cache CPUS 4GB (32GB Max) HS RAID w/ 16MB Cache\n~$4,500\nAny other suggestions?\n\nAny other hardware platforms I should consider?\n\nFinally, and I know this sounds silly, but I don't have my own data center,\nso size is something I need to take into consideration. I pay for data\ncenter space by the physical size of my servers. My priorities are\nPerformance, Reasonable amount of scalability (as outlined above) and\nfinally physical size.\n\nThanks for taking the time to read this and for any assistance you can give,\n\nMatthew Nuzum\nwww.bearfruit.org\n\n", "msg_date": "Tue, 8 Apr 2003 23:38:44 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": true, "msg_subject": "choosing the right platform" }, { "msg_contents": "Matthew,\n\n> Currently I'm most experienced in Linux, especially RedHat. I'm\n> \"certified\" on SCO Openserver (5.x) and I've played with Irix, OSF/1 (I\n> don't think it's called that anymore), Free BSD (3.x) and Solaris (2.x). \n> I'm most\n> comfortable with Linux, but I'm willing to use a different platform if it\n> will be beneficial. I've heard that Solaris on the Sparc platform is\n> capable of addressing larger amounts of RAM than Linux on Intel does. I\n> don't know if that's true or if that has bearing, but I'd like to hear your\n> opinions.\n\nPlease browse through the list archives. We have numerous posts on the \nplatform subject. In fact, several of us are trying to put together a \nPostgreSQL performance test package to answer this question difinitively \nrather than anecdotally.\n\nAnecdotal responses are:\n\nSolaris is *bad* for PostgreSQL, due to a high per-process overhead. \nUniversal opinion on this list has been that Solaris is optimized for \nmulti-threaded applications, not multi-process applications, and postgres is \nthe latter.\n\n*BSD has a *lot* of fans on the PGSQL lists, many of whom claim significantly \nbetter performance than Linux, mostly due to better filesystem I/O.\n\nLinux is used heavily by a lot of PostgreSQL users. I have yet to see anyone \nprovide actual Linux vs. BSD statistics, though ... something we hope to do.\n\nNobody has come forward and reported on PostgreSQL on SCO Unix.\n\nIrix is widely regarded as a \"dead\" platform, though PostgreSQL does run on it \n...\n\nGood luck, and keep watching this space!\n\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 09:28:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "I would say up front that both Linux and BSD are probably your two best \nchoices. 
If you're familiar with one more than the other, that \nfamiliarity may be more important than the underlying differences in the \ntwo OSes, as they are both good platforms to run postgresql on top of.\n\nSecondly, look carefully at using persistant connections in large numbers. \n\nWhile persistant connections DO represent a big savings in connect time, \nthe savings are lost in the noise of many PHP applications.\n\ni.e. my dual PIII swiss army knife server can initiate single persistant \nconnections at 1,000,000 a second (reusing the old ones of course). \nnon-persistant connects happen at 1,000 times a second. Most of my \nscripts run in 1/10th of a second or so, so the 1/1000th used to connect \nis noise to me.\n\nIf you are going to use persistant connections, it might work better to \nlet apache have only 20 or 40 children, which will force the apache \nchildren to \"round robin\" serve the requests coming in.\n\nThis will usually work fine, since keeping the number of apache children \ndown keeps the number of postgresql backends down, which keeps the system \nfaster in terms of response time. Turn keep alive down to something short \nlike 10 seconds, or just turn it off, as keep alive doesn't really save \nall that much time in apache.\n\nNote that machine testing with 100 simo connections doesn't translate \ndirectly to 100 users. Generally, x simos usually represents about 10 to \n20 x users, since users don't click buttons all that fast. so an apache \nconfigured by 40 max children should handle 100 to 200 users with no \nproblem.\n\nOn Tue, 8 Apr 2003, Matthew Nuzum wrote:\n\n> Hello all,\n> \n> I've been the lead developer of a successful (so-far) web application.\n> Currently we run the database and the web application on the same server,\n> but it will soon be necessary to split them and add some more web servers.\n> \n> I'm not counting on using replication any time soon, so I want to choose the\n> hardware platform that will meet my needs for some time. Maybe you can give\n> some suggestions...\n> \n> My application is written in PHP and relies on the apache web server. We\n> use persistent db connections, so it appears (correct me if I'm wrong) that\n> every apache child process gets one connection to the database. If my\n> MaxClients is 150, each web server will be able to make up to 150 db\n> connections. I'd like to play with that number a little bit once I get the\n> webserver off of the db server. I feel that I could handle a greater number\n> of Clients, so let us say that I have up to 200 connections per server.\n> \n> I'd like to have room to grow, so let's also say that I go to 20 web servers\n> for a total of 4000 connections. (I'd probably like to go even higher, so\n> consider this our starting point)\n> \n> With a couple dozen active accounts and a lot of test data, my current\n> database is equiv to about 100 active accounts. Its current disk space\n> consumption is:\n> data # du --max-depth=2\n> 3656 ./base/1\n> 3656 ./base/16975\n> 4292 ./base/95378\n> 177824 ./base/200371\n> 189432 ./base\n> 144 ./global\n> 82024 ./pg_xlog\n> 2192 ./pg_clog\n> 273836 .\n> \n> This is not optimized and there is a lot of old data, but to be safe, maybe\n> we should assume that each account uses 4 MB of disk space in the db,\n> counting indexes, tables and etc. I'd like to scale to 15,000 - 25,000\n> accounts, but I doubt that will be feasible at my budget level. 
(Also,\n> there is a lot of optimizing to do, so it won't surprise me if this 4MB\n> number is more like 2MB or even less)\n> \n> I'm not as concerned with disk subsystem or layout at the moment. I've seen\n> a lot of good documentation (especially from Bruce Momjian, thanks!) on this\n> subject. I'm mostly concerned with choosing the platform that's going to\n> allow the scalability I need.\n> \n> Currently I'm most experienced in Linux, especially RedHat. I'm \"certified\"\n> on SCO Openserver (5.x) and I've played with Irix, OSF/1 (I don't think it's\n> called that anymore), Free BSD (3.x) and Solaris (2.x). I'm most\n> comfortable with Linux, but I'm willing to use a different platform if it\n> will be beneficial. I've heard that Solaris on the Sparc platform is\n> capable of addressing larger amounts of RAM than Linux on Intel does. I\n> don't know if that's true or if that has bearing, but I'd like to hear your\n> opinions.\n> \n> My budget is going to be between (US) $5,000 and $10,000 and I'd like to\n> stay under $7,000. I'm a major bargain hunter, so I shop e-bay a lot and\n> here are some samplings that I think may be relevant for discussion:\n> \n> SUN (I'm not an expert in this, advice is requested)\n> ----------------------------------------------------\n> SUN ENTERPRISE 4500 8x400 Mhz 4MB Cache CPUs 8GB RAM no hard drives ~$6,000\n> Sun E3500 - 8 x 336MHz 4MB Cache CPUs 4GB RAM 8 x 9.1GB FC disks ~$600.00\n> Any other suggestions?\n> \n> INTEL (I'm much more familiar with this area)\n> ----------------------------------------------------\n> Compaq DL580 4x700 MHz 2MB Cache CPUs 4GB RAM (16GB Max) HW Raid w/ 64MB\n> Cache ~$6000\n> IBM Netfinity 7100 4x500 MHz 1MB Cache CPUs up to (16GB Max) HW Raid\n> Dell PowerEdge 8450 8x550 2M Cache CPUS 4GB (32GB Max) HS RAID w/ 16MB Cache\n> ~$4,500\n> Any other suggestions?\n> \n> Any other hardware platforms I should consider?\n> \n> Finally, and I know this sounds silly, but I don't have my own data center,\n> so size is something I need to take into consideration. I pay for data\n> center space by the physical size of my servers. My priorities are\n> Performance, Reasonable amount of scalability (as outlined above) and\n> finally physical size.\n> \n> Thanks for taking the time to read this and for any assistance you can give,\n> \n> Matthew Nuzum\n> www.bearfruit.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n", "msg_date": "Wed, 9 Apr 2003 10:51:34 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "> Anecdotal responses are:\n> \n> Solaris is *bad* for PostgreSQL, due to a high per-process overhead.\n> Universal opinion on this list has been that Solaris is optimized for\n> multi-threaded applications, not multi-process applications, and postgres\n> is\n> the latter.\n> \n> *BSD has a *lot* of fans on the PGSQL lists, many of whom claim\n> significantly\n> better performance than Linux, mostly due to better filesystem I/O.\n> \n> Linux is used heavily by a lot of PostgreSQL users. I have yet to see\n> anyone\n> provide actual Linux vs. BSD statistics, though ... something we hope to\n> do.\n...\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\nThanks for the reply. 
Three things come to mind:\n\nAbout the list archives...\n\nI read through the entire archive at\nhttp://archives.postgresql.org/pgsql-performance/ and didn't see much talk\non the subject. It only goes back 8 months though, so I don't know if there\nis another archive that is more comprehensive...\n\nAlso,\n\nI'm glad to hear your comments about Solaris, I'm really most comfortable\nwith Linux and I think I can pick up BSD pretty easily.\n\nAbout the Intel platform though,\n\nIt's only been pretty recently (relatively speaking) that servers based on\nIA32 architecture have had support for greater than 2GB of RAM. I've heard\ntalk about problems with applications that require more than 2GB. I do\nbelieve that my tables will become larger than this, and the way I\nunderstand it, sort mem works best when the tables can be loaded completely\nin RAM.\n\nI don't suspect that individual tables will be 2GB, but that the size of all\ntables combined will be. If there is a limitation on the largest chunk of\nRAM allocated to a program, will I have problems?\n\nFinally, can someone suggest a *BSD to evaluate? FreeBSD 4.8? 5.0? Is Apple\na good choice? (I've heard it's based on BSD Unix)\n\nThanks,\n\n--\nMatthew Nuzum\nwww.bearfruit.org\[email protected]\n \n\n", "msg_date": "Wed, 9 Apr 2003 13:13:46 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "\n> Finally, can someone suggest a *BSD to evaluate? FreeBSD 4.8? 5.0? Is Apple\n> a good choice? (I've heard it's based on BSD Unix)\n\nI wouldn't recommend OSX for deployment if you're worried about performance. The hardware availiable and the settings to take advantage of it just aren't there yet, compared to the more established FreeBSD and Linux offerings. \n\nDevelopment on a tibook is another matter, I'd recommend it to anyone with an attraction to shiny things that do real work. \n\neric\n\n", "msg_date": "Wed, 09 Apr 2003 10:20:22 -0700", "msg_from": "eric soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "On Wed, 9 Apr 2003, Matthew Nuzum wrote:\n\n> I'm glad to hear your comments about Solaris, I'm really most comfortable\n> with Linux and I think I can pick up BSD pretty easily.\n> \n> About the Intel platform though,\n> \n> It's only been pretty recently (relatively speaking) that servers based on\n> IA32 architecture have had support for greater than 2GB of RAM. I've heard\n> talk about problems with applications that require more than 2GB. I do\n> believe that my tables will become larger than this, and the way I\n> understand it, sort mem works best when the tables can be loaded completely\n> in RAM.\n> \n> I don't suspect that individual tables will be 2GB, but that the size of all\n> tables combined will be. If there is a limitation on the largest chunk of\n> RAM allocated to a program, will I have problems?\n\nA couple more suggestions. One is to never allocate more than 50% of your \nmemory to a database's shared buffers, i.e. let the OS buffer the disks en \nmasse, while the database should have a smaller buffer for the most recent \naccesses. 
This is because kernel caching is usually faster and more \nefficient than the database doing it, and this becomes more an issue with \nlarge chunks of memory, which both Linux and BSD are quite good at \ncaching, and postgresql, not so good.\n\nThe other is to look at Linux or BSD on 64 bit hardware (Sparc, IBM \nZseries mainframes, SGI Altix, etc...) where the one thing that's worth \nbeing on the bleeding edge for is databases and their memory hungry ways. \n:-)\n\n", "msg_date": "Wed, 9 Apr 2003 11:55:56 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "Matthew,\n\n> I read through the entire archive at\n> http://archives.postgresql.org/pgsql-performance/ and didn't see much talk\n> on the subject. It only goes back 8 months though, so I don't know if there\n> is another archive that is more comprehensive...\n\nReally? There was a long-running Mac OS X vs. Solaris thread that touched on \nmost major platforms, about 2-3 months ago.\n\n> I don't suspect that individual tables will be 2GB, but that the size of all\n> tables combined will be. If there is a limitation on the largest chunk of\n> RAM allocated to a program, will I have problems?\n\nNo. Since PostgreSQL is a multi-process architecture, not a multi-threaded, \nyou only need enough RAM per process to load the current largest query.\n\nPlus, in my experience, Disk I/O issues are vastly more important than RAM in \ndatabase performance. You're better off spending money on really fast disks \nin Linux RAID or really good hardware RAID 1+0 ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 11:03:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "On Wed, Apr 09, 2003 at 01:13:46PM -0400, Matthew Nuzum wrote:\n> Finally, can someone suggest a *BSD to evaluate? FreeBSD 4.8? 5.0? Is Apple\n> a good choice? (I've heard it's based on BSD Unix)\n \nFreeBSD has 3 different branches:\n\n-current:\nThis is bleeding edge. Definitely need to be careful with this one, and\nit's not recommended for production.\n\n-stable:\nThis is still a 'live' branch that any FBSD coder can (generally) commit\nto, but they are far more careful about breaking this branch. Not as\nstable as a release branch, but it's probably suitable for production so\nlong as you're careful to test things.\n\nrelease branches:\nEvery time an official release is done (ie: 4.8), a branch is created.\nThe only code committed to these branches are security patches and fixes\nfor very serious bugs. These branches are extremely stable.\n\n5.0 is the first release after several years of development in -current.\nIt incorporates some major changes designed to allow the kernel to run\nmulti-threaded. However, unlike what usually happens, 5.0 is not\nconsidered to be -stable yet. First, this is still very new code;\nsecond, I believe there's some performance issues that are still being\naddressed. The intention is that 5.1 will be the first -stable release\nof the 5.x code.\n\nBecause you're looking for something that's production ready, you\nprobably want 4.8 (cvs tag RELENG_4_8). 
However, if you don't plan to\nhit production until late this year (when 5.1 should be out), you might\nwant to try 5.0.\n\nFar more info is available at http://www.freebsd.org/releng/index.html\n\nBTW, I've heard of many, many companies moving their Oracle installs\nfrom Sun to RS/6000 because RS/6000's typically need 1/2 the processors\nthat Sun does for a given load. If you're going to look at big-iron,\nRS/6000 is definitely worth a look if you see anything.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Wed, 9 Apr 2003 18:45:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "On Wed, Apr 09, 2003 at 11:55:56AM -0600, scott.marlowe wrote:\n> A couple more suggestions. One is to never allocate more than 50% of your \n> memory to a database's shared buffers, i.e. let the OS buffer the disks en \n> masse, while the database should have a smaller buffer for the most recent \n> accesses. This is because kernel caching is usually faster and more \n> efficient than the database doing it, and this becomes more an issue with \n> large chunks of memory, which both Linux and BSD are quite good at \n> caching, and postgresql, not so good.\n \nThat seems odd... shouldn't pgsql be able to cache information better\nsince it would be cached in whatever format is best for it, rather than\nthe raw page format (or maybe that is the best format). There's also the\nissue of having to go through more layers of software if you're relying\non the OS caching. All the tuning info I've seen for every other\ndatabase I've worked with specifically recommends giving the database as\nmuch memory as you possibly can, the theory being that it will do a much\nbetter job of caching than the OS will.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Wed, 9 Apr 2003 18:47:44 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "On Wed, Apr 09, 2003 at 10:51:34AM -0600, scott.marlowe wrote:\n> Secondly, look carefully at using persistant connections in large numbers. \n> \n> While persistant connections DO represent a big savings in connect time, \n> the savings are lost in the noise of many PHP applications.\n> \n> i.e. my dual PIII swiss army knife server can initiate single persistant \n> connections at 1,000,000 a second (reusing the old ones of course). \n> non-persistant connects happen at 1,000 times a second. Most of my \n> scripts run in 1/10th of a second or so, so the 1/1000th used to connect \n> is noise to me.\n\nMy $0.02 from my experience with Sybase and DB2:\nIt's not the connection *time* that's an issue, it's the amount of\nresources (mostly memory) used by each database connection. Each db2\nconnection to a database uses 4-8 meg of memory; on my pgsql system,\neach connection appears to be using about 4M. 
This is the resident set,\nwhich I believe indicates memory that basically can't be shared. All\nthis memory is memory that can't be used for buffering/caching; on a\nsystem with a hundred connections, it can really start to add up.\n\nIf your PHP is written in such a way that it does all the database work\nin one section of code, and only holds a connection to the database in\nthat one section, then you can potentially have a lot of apache\nprocesses for each database connection.\n\nOf course, all this holds true wether you're using pooling or not. How\nmuch pooling will help depends on how expensive it is for the *database*\nto handle each new connection request, and how your code is written.\nSince it's often not possible to put all you database code in one place\nlike I mentioned above, an alternative is to connect right before you do\nan operation, and disconnect as soon as you're done. This doesn't add\nmuch (if any) expense if you're using pooling, but it's a very different\nstory if you're not using pooling.\n\n> If you are going to use persistant connections, it might work better to \n> let apache have only 20 or 40 children, which will force the apache \n> children to \"round robin\" serve the requests coming in.\n> \n> This will usually work fine, since keeping the number of apache children \n> down keeps the number of postgresql backends down, which keeps the system \n> faster in terms of response time. Turn keep alive down to something short \n> like 10 seconds, or just turn it off, as keep alive doesn't really save \n> all that much time in apache.\n\nVery important advice. Generally, once you push a database past a\ncertain point, your performance degrades severely as the database\nthrashes about trying to answer all the pending queries.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Wed, 9 Apr 2003 18:58:43 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> That seems odd... shouldn't pgsql be able to cache information better\n> since it would be cached in whatever format is best for it, rather than\n> the raw page format (or maybe that is the best format). There's also the\n> issue of having to go through more layers of software if you're relying\n> on the OS caching. All the tuning info I've seen for every other\n> database I've worked with specifically recommends giving the database as\n> much memory as you possibly can, the theory being that it will do a much\n> better job of caching than the OS will.\n\nThere are a number of reasons why that's a dubious policy for PG (I\nwon't take a position on whether these apply to other databases...)\n\nOne is that because we sit on top of the OS' filesystem, we can't\n(portably) prevent the OS from caching blocks. So it's quite easy to\nget into a situation where the same data is cached twice, once in PG\nbuffers and once in kernel disk cache. That's clearly a waste of RAM\nhowever you slice it, and it's worst when you set the PG shared buffer\nsize to be about half of available RAM. 
You can minimize the\nduplication by skewing the allocation one way or the other: either set\nPG's allocation relatively small, relying heavily on the OS to do the\ncaching; or make PG's allocation most of RAM and hope to squeeze out\nthe OS' cache. There are partisans for both approaches on this list.\nI lean towards the first policy because I think that starving the kernel\nfor RAM is a bad idea. (Especially if you run on Linux, where this\npolicy tempts the kernel to start kill -9'ing random processes ...)\n\nAnother reason is that PG uses a simplistic fixed-number-of-buffers\ninternal cache, and therefore it can't adapt on-the-fly to varying\nmemory pressure, whereas the kernel can and will give up disk cache\nspace to make room when it's needed for processes. Since PG isn't\neven aware of the total memory pressure on the system as a whole,\nit couldn't do as good a job of trading off cache vs process workspace\nas the kernel can do, even if we had a variable-size cache scheme.\n\nA third reason is that on many (most?) Unixen, SysV shared memory is\nsubject to swapping, and the bigger you make the shared_buffer arena,\nthe more likely it gets that some of the arena will be touched seldom\nenough to make it a candidate for swapping. A disk buffer that gets\nswapped to disk is worse than useless (if it's dirty, the swapping\nis downright counterproductive, since an extra read and write cycle\nwill be needed before the data can make it to its rightful place).\n\nPG is *not* any smarter about the usage patterns of its disk buffers\nthan the kernel is; it uses a simple LRU algorithm that is surely no\nbrighter than what the kernel uses. (We have looked at smarter buffer\nrecycling rules, but failed to see any performance improvement.) So the\nnotion that PG can do a better job of cache management than the kernel\nis really illusory. About the only advantage you gain from having data\ndirectly in PG buffers rather than kernel buffers is saving the CPU\neffort needed to move data across the userspace boundary --- which is\nnot zero, but it's sure a lot less than the time spent for actual I/O.\n\nSo my take on it is that you want shared_buffers fairly small, and let\nthe kernel do the bulk of the heavy lifting for disk cache. That's what\nit does for a living, so let it do what it does best. You only want\nshared_buffers big enough so you don't spend too many CPU cycles shoving\ndata back and forth between PG buffers and kernel disk cache. The\ndefault shared_buffers setting of 64 is surely too small :-(, but my\nfeeling is that values in the low thousands are enough to get past the\nknee of that curve in most cases.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 09 Apr 2003 20:20:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Caching (was Re: choosing the right platform)" }, { "msg_contents": "Thanks for all the feedback, this is very informative.\n\nMy current issues that I'm still not clear on, are:\n* Is the ia32 architecture going to impose uncomfortable limits on my\napplication? I'm seeing lots of confirmation that this platform, regardless\nof the OS is going to limit me to less the 4GB of memory allocated to a\nsingle application (i.e. 
http://www.spack.org/index.cgi/LinuxRamLimits).\nThis may or may not be an issue because: (note that these are questions, not\nstatements)\n** Postgres is multi-process, not multi-threaded (?)\n** It's better to not use huge amount of sort-mem but instead let the OS do\nthe caching (?)\n** My needs are really not going to be as big as I think they are if I\nmanage the application/environment correctly (?)\n\nHere are some of the performance suggestions I've heard, please, if I\nmis-understood, could you help me get clarity?\n* It's better to run fewer apache children and turn off persistent\nconnections (I had suggested 200 children per server, someone else suggested\n40)\n* FreeBSD is going to provide a better file system than Linux (because Linux\nonly supports large files on journaling filesystems which impose extra over\nhead) (this gleaned from this conversation and previous threads in archives)\n* Running Linux or *BSD on a 64 bit platform can alleviate some potential\nRAM limitations (if there are truly going to be limitations). If this is\nso, I've heard suggestions for Itanium, Sparc and RS/6000. Maybe someone\ncan give some more info on these, here are my immediate thoughts: I've heard\nthat the industry as a whole has not yet warmed up to Itanium. I can't\nafford the newest Sparc Servers, so I'd need to settle with a previous\ngeneration if I went that route, any problems with that? I know nothing\nabout the RS/6000 servers (I did see one once though :-), does linux|*BSD\nrun well on them and any suggestions for series/models I should look at?\n\nFinally, some specific questions,\nWhat's the max number of connections someone has seen on a database server?\nWhat type of hardware was it? How much RAM did postgres use?\n\nThanks again,\n\n--\nMatthew Nuzum\nwww.bearfruit.org\[email protected]\n \n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Tom Lane\n> Sent: Wednesday, April 09, 2003 8:21 PM\n> To: [email protected]\n> Cc: scott.marlowe; Matthew Nuzum; 'Josh Berkus'; 'Pgsql-Performance'\n> Subject: Caching (was Re: [PERFORM] choosing the right platform)\n> \n> \"Jim C. Nasby\" <[email protected]> writes:\n> > That seems odd... shouldn't pgsql be able to cache information better\n> > since it would be cached in whatever format is best for it, rather than\n> > the raw page format (or maybe that is the best format). There's also the\n> > issue of having to go through more layers of software if you're relying\n> > on the OS caching. All the tuning info I've seen for every other\n> > database I've worked with specifically recommends giving the database as\n> > much memory as you possibly can, the theory being that it will do a much\n> > better job of caching than the OS will.\n> \n> There are a number of reasons why that's a dubious policy for PG (I\n> won't take a position on whether these apply to other databases...)\n> \n> One is that because we sit on top of the OS' filesystem, we can't\n> (portably) prevent the OS from caching blocks. So it's quite easy to\n> get into a situation where the same data is cached twice, once in PG\n> buffers and once in kernel disk cache. That's clearly a waste of RAM\n> however you slice it, and it's worst when you set the PG shared buffer\n> size to be about half of available RAM. 
You can minimize the\n> duplication by skewing the allocation one way or the other: either set\n> PG's allocation relatively small, relying heavily on the OS to do the\n> caching; or make PG's allocation most of RAM and hope to squeeze out\n> the OS' cache. There are partisans for both approaches on this list.\n> I lean towards the first policy because I think that starving the kernel\n> for RAM is a bad idea. (Especially if you run on Linux, where this\n> policy tempts the kernel to start kill -9'ing random processes ...)\n> \n> Another reason is that PG uses a simplistic fixed-number-of-buffers\n> internal cache, and therefore it can't adapt on-the-fly to varying\n> memory pressure, whereas the kernel can and will give up disk cache\n> space to make room when it's needed for processes. Since PG isn't\n> even aware of the total memory pressure on the system as a whole,\n> it couldn't do as good a job of trading off cache vs process workspace\n> as the kernel can do, even if we had a variable-size cache scheme.\n> \n> A third reason is that on many (most?) Unixen, SysV shared memory is\n> subject to swapping, and the bigger you make the shared_buffer arena,\n> the more likely it gets that some of the arena will be touched seldom\n> enough to make it a candidate for swapping. A disk buffer that gets\n> swapped to disk is worse than useless (if it's dirty, the swapping\n> is downright counterproductive, since an extra read and write cycle\n> will be needed before the data can make it to its rightful place).\n> \n> PG is *not* any smarter about the usage patterns of its disk buffers\n> than the kernel is; it uses a simple LRU algorithm that is surely no\n> brighter than what the kernel uses. (We have looked at smarter buffer\n> recycling rules, but failed to see any performance improvement.) So the\n> notion that PG can do a better job of cache management than the kernel\n> is really illusory. About the only advantage you gain from having data\n> directly in PG buffers rather than kernel buffers is saving the CPU\n> effort needed to move data across the userspace boundary --- which is\n> not zero, but it's sure a lot less than the time spent for actual I/O.\n> \n> So my take on it is that you want shared_buffers fairly small, and let\n> the kernel do the bulk of the heavy lifting for disk cache. That's what\n> it does for a living, so let it do what it does best. You only want\n> shared_buffers big enough so you don't spend too many CPU cycles shoving\n> data back and forth between PG buffers and kernel disk cache. 
The\n> default shared_buffers setting of 64 is surely too small :-(, but my\n> feeling is that values in the low thousands are enough to get past the\n> knee of that curve in most cases.\n> \n> \t\t\tregards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Wed, 9 Apr 2003 21:15:05 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "Tom,\n\nWhat appends when PG scans a table that is is too big to fit in the\ncache?\nWon't the whole cache get trashed and swapped off to disk?\nShouldn't there be a way to lock some tables in PG cache?\nWho about caracterizing some of the RAM like: scan, index, small\nfrequently used tables.\n\nJLL\n\nTom Lane wrote:\n> [...]\n> PG is *not* any smarter about the usage patterns of its disk buffers\n> than the kernel is; it uses a simple LRU algorithm that is surely no\n> brighter than what the kernel uses. (We have looked at smarter buffer\n> recycling rules, but failed to see any performance improvement.) So the\n> notion that PG can do a better job of cache management than the kernel\n> is really illusory. About the only advantage you gain from having data\n> directly in PG buffers rather than kernel buffers is saving the CPU\n> effort needed to move data across the userspace boundary --- which is\n> not zero, but it's sure a lot less than the time spent for actual I/O.\n> \n> So my take on it is that you want shared_buffers fairly small, and let\n> the kernel do the bulk of the heavy lifting for disk cache. That's what\n> it does for a living, so let it do what it does best. You only want\n> shared_buffers big enough so you don't spend too many CPU cycles shoving\n> data back and forth between PG buffers and kernel disk cache. The\n> default shared_buffers setting of 64 is surely too small :-(, but my\n> feeling is that values in the low thousands are enough to get past the\n> knee of that curve in most cases.\n\n", "msg_date": "Thu, 10 Apr 2003 10:27:16 -0400", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "Jean-Luc Lachance <[email protected]> writes:\n> Shouldn't there be a way to lock some tables in PG cache?\n\nIn my opinion, no. I do not think a manual locking feature could\npossibly be used effectively. It could very easily be abused to\ndecrease net performance, though :-(\n\nIt does seem that a smarter buffer management algorithm would be a good\nidea, but past experiments have failed to show any measurable benefit.\nPerhaps those experiments were testing the wrong conditions. I'd still\nbe happy to see LRU(k) or some such method put in, if someone can prove\nthat it actually does anything useful for us. (As best I recall, I only\ntested LRU-2 with pgbench. Perhaps Josh's benchmarking project will\noffer a wider variety of interesting scenarios.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 10 Apr 2003 10:40:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform) " }, { "msg_contents": "On Wed, 9 Apr 2003, Jim C. Nasby wrote:\n\n> On Wed, Apr 09, 2003 at 10:51:34AM -0600, scott.marlowe wrote:\n> > Secondly, look carefully at using persistant connections in large numbers. 
\n> > \n> > While persistant connections DO represent a big savings in connect time, \n> > the savings are lost in the noise of many PHP applications.\n> > \n> > i.e. my dual PIII swiss army knife server can initiate single persistant \n> > connections at 1,000,000 a second (reusing the old ones of course). \n> > non-persistant connects happen at 1,000 times a second. Most of my \n> > scripts run in 1/10th of a second or so, so the 1/1000th used to connect \n> > is noise to me.\n> \n> My $0.02 from my experience with Sybase and DB2:\n> It's not the connection *time* that's an issue, it's the amount of\n> resources (mostly memory) used by each database connection. Each db2\n> connection to a database uses 4-8 meg of memory; \n\nAgreed.\n\n> on my pgsql system,\n> each connection appears to be using about 4M. This is the resident set,\n> which I believe indicates memory that basically can't be shared. All\n> this memory is memory that can't be used for buffering/caching; on a\n> system with a hundred connections, it can really start to add up.\n\nIf I run \"select * from logs\" from two different psql sessions on my \nbackup box hitting my main box (psql would hold the result set and throw \nthe results off if I ran it on the main box) I get this output from top:\n\nNo (pgsql) load:\n\n 8:58am up 9 days, 22:43, 4 users, load average: 0.65, 0.54, 0.35\n169 processes: 168 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 0.1% user, 0.1% system, 0.0% nice, 99.1% idle\nCPU1 states: 32.1% user, 3.2% system, 0.0% nice, 64.0% idle\nMem: 1543980K av, 1049864K used, 494116K free, 265928K shrd, 31404K buff\nSwap: 2048208K av, 0K used, 2048208K free 568600K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n10241 postgres 9 0 4216 4216 4136 S 0.0 0.2 0:05 postmaster\n10242 postgres 9 0 4444 4444 4156 S 0.0 0.2 0:00 postmaster\n10243 postgres 9 0 4812 4812 4148 S 0.0 0.3 0:00 postmaster\n\n1 psql select *:\n 9:03am up 9 days, 22:48, 2 users, load average: 0.71, 0.71, 0.46\n166 processes: 165 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU1 states: 0.1% user, 2.0% system, 0.0% nice, 97.3% idle\nMem: 1543980K av, 1052188K used, 491792K free, 265928K shrd, 32036K buff\nSwap: 2048208K av, 0K used, 2048208K free 570656K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n10241 postgres 10 0 4216 4216 4136 S 0.0 0.2 0:05 postmaster\n10242 postgres 9 0 4448 4448 4156 S 0.0 0.2 0:00 postmaster\n10243 postgres 9 0 4812 4812 4148 S 0.0 0.3 0:00 postmaster\n18026 postgres 9 0 236M 236M 235M S 0.0 15.6 0:12 postmaster\n18035 postgres 10 0 5832 5732 5096 S 0.0 0.3 0:00 postmaster\n\n2 psql select *:\n 9:03am up 9 days, 22:49, 2 users, load average: 0.58, 0.66, 0.45\n166 processes: 165 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 0.0% user, 2.2% system, 0.0% nice, 97.2% idle\nCPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 1543980K av, 1053152K used, 490828K free, 265928K shrd, 32112K buff\nSwap: 2048208K av, 0K used, 2048208K free 570684K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n10241 postgres 8 0 4216 4216 4136 S 0.0 0.2 0:05 postmaster\n10242 postgres 9 0 4448 4448 4156 S 0.0 0.2 0:00 postmaster\n10243 postgres 9 0 4812 4812 4148 S 0.0 0.3 0:00 postmaster\n18026 postgres 9 0 236M 236M 235M S 0.0 15.6 0:12 postmaster\n18035 postgres 9 0 236M 236M 235M S 0.0 15.6 0:12 postmaster\n\nThe difference between SIZE and SHARE is the delta, which is only \nsomething 
like 3 or 4 megs for the initial select * from logs, but the \nsecond one is only 1 meg. On average, the actual increase in memory usage \nfor postgresql isn't that great, usually about 1 meg.\n\nRunning out of memory isn't really a problem with connections<=200 and 1 \ngig of ram, as long as sort_mem isn't too high. I/O contention is the \nkiller at that point, as is CPU load.\n\n", "msg_date": "Thu, 10 Apr 2003 10:42:35 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "On Wed, 9 Apr 2003, Jim C. Nasby wrote:\n\n> On Wed, Apr 09, 2003 at 11:55:56AM -0600, scott.marlowe wrote:\n> > A couple more suggestions. One is to never allocate more than 50% of your \n> > memory to a database's shared buffers, i.e. let the OS buffer the disks en \n> > masse, while the database should have a smaller buffer for the most recent \n> > accesses. This is because kernel caching is usually faster and more \n> > efficient than the database doing it, and this becomes more an issue with \n> > large chunks of memory, which both Linux and BSD are quite good at \n> > caching, and postgresql, not so good.\n> \n> That seems odd... shouldn't pgsql be able to cache information better\n> since it would be cached in whatever format is best for it, rather than\n> the raw page format (or maybe that is the best format).\n\nYes and no. The problem isn't that the data is closer to postgresql in \nit's buffers versus further away in kernel buffers, it's that postgresql's \ncaching algorhythm isn't performance tweaked for very large settings, it's \nperformance tweaked to provide good performance on smaller machines, with \nsay 4 or 16 Megs of shared buffers. Handling large buffers requires a \ndifferent approach to handling small ones, and the kernel is optimized in \nthat direction.\n\nAlso, the kernel in most Oses, i.e. Linux and BSD tends to use \"spare ram\" \nwith abandon as cache memory, so if you've got 4 gigs of ram, with 200 \nMegs set aside for postgresql, it's quite likely that the kernel cache can \nhold ALL your dataset for you once it's been read in once. So, the data \nis already cached once. Caching it again in Postgresql only gains a \nlittle, since the speed difference of postgresql shared buffer / cache and \nkernel caches is very small. However, the speed going to the hard drive \nis much slower.\n\nWhat you don't want is a postgresql cache that's bigger (on average) than \nthe kernel cache, since the kernel cache will then be \"thrashing\" when you \naccess information not currently in either cache. I.e. postgresql becomes \nyour only cache, and kernel caching stops working for you and becomes just \noverhead, since you never get anything from it if it's too small to cache \nsomething long enough to be used again.\n\n> There's also the\n> issue of having to go through more layers of software if you're relying\n> on the OS caching. All the tuning info I've seen for every other\n> database I've worked with specifically recommends giving the database as\n> much memory as you possibly can, the theory being that it will do a much\n> better job of caching than the OS will.\n\nThat's old school thinking. There was a day when kernel caching was much \nslower, and writing directly to your devices in a raw mode was the only \nway to ensure good performance. Nowadays, most modern Unix kernels and \ntheir file systems are a match for most database needs. 
heck, with some \nstorage systems, the performance of the file system is just not really an \nissue, it's the bandwidth of the connector you use. \n\nNote that this is a good thing (TM) since it frees the postgresql \ndevelopment team to do other things than worry about caching 1 gig of \ndata.\n\n", "msg_date": "Thu, 10 Apr 2003 10:51:38 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> Note that this is a good thing (TM) since it frees the postgresql \n> development team to do other things than worry about caching 1 gig of \n> data.\n\nYeah. I think this is one very fundamental difference of design\nphilosophy between Postgres and more-traditional databases such as\nOracle. We prefer to let the kernel and filesystem do their jobs,\nand we assume they will do them well; whereas Oracle wants to bypass\nif not replace the kernel and filesystem. Partly this is a matter of\nthe PG project not having the manpower to replace those layers. But\nI believe the world has changed in the last twenty years, and the Oracle\napproach is now obsolete: it's now costing them design and maintenance\neffort that isn't providing much return. Modern kernels and filesystems\nare *good*, and it's not easy to do better. We should focus our efforts\non functionality that doesn't just duplicate what the OS can do.\n\nThis design approach shows up in other areas too. For instance, in\nanother thread I was just pointing out that there is no need for our\nfrontend/backend protocol to solve communication problems like dropped\nor duplicated packets; TCP does that perfectly well already.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 10 Apr 2003 13:17:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform " }, { "msg_contents": "How can we solve the problem of cache trashing when scanning large\ntables?\n\nTom Lane wrote:\n> \n> Jean-Luc Lachance <[email protected]> writes:\n> > Shouldn't there be a way to lock some tables in PG cache?\n> \n> In my opinion, no. I do not think a manual locking feature could\n> possibly be used effectively. It could very easily be abused to\n> decrease net performance, though :-(\n> \n> It does seem that a smarter buffer management algorithm would be a good\n> idea, but past experiments have failed to show any measurable benefit.\n> Perhaps those experiments were testing the wrong conditions. I'd still\n> be happy to see LRU(k) or some such method put in, if someone can prove\n> that it actually does anything useful for us. (As best I recall, I only\n> tested LRU-2 with pgbench. Perhaps Josh's benchmarking project will\n> offer a wider variety of interesting scenarios.)\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Thu, 10 Apr 2003 14:59:55 -0400", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "On Thu, Apr 10, 2003 at 10:42:35AM -0600, scott.marlowe wrote:\n> The difference between SIZE and SHARE is the delta, which is only \n> something like 3 or 4 megs for the initial select * from logs, but the \n> second one is only 1 meg. 
On average, the actual increase in memory usage \n> for postgresql isn't that great, usually about 1 meg.\n> \n> Running out of memory isn't really a problem with connections<=200 and 1 \n> gig of ram, as long as sort_mem isn't too high. I/O contention is the \n> killer at that point, as is CPU load.\n \nExcept you should consider what you could be doing with that 200M, ie:\ncaching data. Even something as small as 1M per connection starts to add\nup.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 22 Apr 2003 03:30:58 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right platform" } ]
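A rough recap of the tuning advice in the thread above, as something concrete to start from (a sketch only: the GUC names are standard 7.2/7.3 settings, but the target values are just the posters' rules of thumb, not benchmarked recommendations). The relevant knobs can be inspected from psql before editing postgresql.conf:

    -- Check current values; the comments restate the thread's rules of thumb.
    SHOW shared_buffers;   -- counted in 8 kB buffers; "low thousands" (say
                           -- 2000-4000, roughly 16-32 MB) rather than half of
                           -- RAM, so the kernel cache does the bulk of the work
    SHOW sort_mem;         -- allocated per sort, per backend; keep it modest
    SHOW max_connections;  -- keep roughly in line with Apache's MaxClients so
                           -- idle persistent connections don't pile up

For instance, if each web server is capped at 40 Apache children as suggested earlier in the thread, a max_connections in the low hundreds covers a handful of web servers while leaving the rest of RAM to the OS page cache.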
[ { "msg_contents": "Hello all,\n\nThis is to announce a couple of things regarding OAS Server.\n\n1. Mailing list for OAS Server is up now. It is available at \nhttps://lists.sourceforge.net/lists/listinfo/oasserver-general.\n\nSo I wouldn't be bothering you guys next time like this..:-)\n\n2. First packaging release of OAS Server is made available. So in case you do \nnot want to use CVS, try the source tarballs.\n\nThe project website is http://oasserver.sourceforge.net\n\nRegards\n Shridhar\n", "msg_date": "Wed, 9 Apr 2003 21:12:56 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": true, "msg_subject": "[OT][Announce] Availability of OAS Server packages" } ]
[ { "msg_contents": "Folks,\n\nWhat follows is a 7.2.4 EXPLAIN ANALYZE statement for the shown query. This \nquery is currently taking 570 msec, an OK amount of time until you realize \nthat the system only has test data currently, and the tables in full \nproduction will have 100-1000 times as much data.\n\n Becuase it's 7.2.4, it's a little hard to tell exactly which part of the \nquery is taking up 90% of the processing time. The line which claims to be \ntaking that time is:\n -> Seq Scan on users (cost=0.00..3595.33 rows=12 width=87) (actual \ntime=13.50..547.59 rows=41 loops=1)\n\nHowever, since users has only 200 records, I suspect that what's actually \nbeing represented here is the processing time for the PL/pgSQL procedure in \nthe correlated subselect, if_addendee_conflict().\n\nQuestions: \n1. Can anyone confirm my analysis in the paragraph above?\n2. Can anyone point out any obvious ways to speed up the query below?\n3. In the query below, if_attendee_conflict needs to be run once for each \n(attorney times events) on the same day. Further, if_attendee_conflict \ninvolves a database lookup in 10-25% of cases. Given that \nif_attendee_conflict needs to apply complex criteria to determine whether or \nnot there is a conflict, can anyone suggest possible ways to cut down on the \nnumber of required loops?\n\nThanks everyone! Query and analyze follows.\n\n\nj_test=> explain analyze\nSELECT users.user_id, (users.fname || COALESCE(' ' || users.minit, '') || ' ' \n|| users.lname) as atty_name,\n\t users.lname,\n\t (SELECT if_addendee_conflict(users.user_id, 3272, '2003-04-15 10:00', '1 \ndays'::INTERVAL,\n\t \tevents.event_id, events.event_date, events.duration, event_cats.status, '30 \nminutes') as cflt\n\t\tFROM events, event_types, event_cats, event_days\n\t\tWHERE events.event_id = event_days.event_id\n\t\t\tand events.etype_id = event_types.etype_id\n\t\t\t AND event_types.ecat_id = event_cats.ecat_id\n\t\t\t AND event_days.event_day\n\t\t\t \tBETWEEN '2003-04-15' AND '2003-04-16 10:00' \n\t\tORDER BY cflt LIMIT 1) AS conflict\nFROM users \nWHERE EXISTS (SELECT teams_users.user_id FROM teams_users JOIN teams_tree\n\tON teams_users.team_id = teams_tree.team_id WHERE teams_tree.treeno\n\tBETWEEN 3 and 4 AND teams_users.user_id = users.user_id)\nAND users.status > 0\n\tAND NOT EXISTS (SELECT staff_id FROM event_staff WHERE event_id = 3272\n\t\t AND staff_id = users.user_id) \nORDER BY conflict, users.lname, atty_name;\n\nNOTICE: QUERY PLAN:\n\nSort (cost=3595.55..3595.55 rows=12 width=87) (actual time=547.89..547.91 \nrows=41 loops=1)\n -> Seq Scan on users (cost=0.00..3595.33 rows=12 width=87) (actual \ntime=13.50..547.59 rows=41 loops=1)\n SubPlan\n -> Limit (cost=54.03..54.03 rows=1 width=46) (actual \ntime=13.14..13.14 rows=1 loops=41)\n -> Sort (cost=54.03..54.03 rows=1 width=46) (actual \ntime=13.13..13.13 rows=2 loops=41)\n -> Hash Join (cost=52.77..54.02 rows=1 width=46) \n(actual time=5.09..12.94 rows=95 loops=41)\n -> Seq Scan on event_cats (cost=0.00..1.16 \nrows=16 width=6) (actual time=0.01..0.05 rows=16 loops=41)\n -> Hash (cost=52.77..52.77 rows=1 width=40) \n(actual time=4.72..4.72 rows=0 loops=41)\n -> Hash Join (cost=49.94..52.77 rows=1 \nwidth=40) (actual time=4.19..4.59 rows=95 loops=41)\n -> Seq Scan on event_types \n(cost=0.00..2.54 rows=54 width=8) (actual time=0.01..0.12 rows=54 loops=41)\n -> Hash (cost=49.93..49.93 rows=5 \nwidth=32) (actual time=4.10..4.10 rows=0 loops=41)\n -> Nested Loop \n(cost=0.00..49.93 rows=5 width=32) (actual time=0.16..3.95 
rows=95 loops=41)\n -> Seq Scan on event_days \n(cost=0.00..25.00 rows=5 width=4) (actual time=0.12..2.31 rows=95 loops=41)\n -> Index Scan using \nevents_pkey on events (cost=0.00..4.97 rows=1 width=28) (actual \ntime=0.01..0.01 rows=1 loops=3895)\n -> Nested Loop (cost=0.00..19.47 rows=1 width=12) (actual \ntime=0.04..0.04 rows=0 loops=147)\n -> Index Scan using idx_teams_tree_node on teams_tree \n(cost=0.00..8.58 rows=2 width=4) (actual time=0.01..0.02 rows=2 loops=147)\n -> Index Scan using teams_users_pk on teams_users \n(cost=0.00..4.83 rows=1 width=8) (actual time=0.01..0.01 rows=0 loops=252)\n -> Index Scan using event_staff_table_pk on event_staff \n(cost=0.00..4.95 rows=1 width=4) (actual time=0.01..0.01 rows=0 loops=41)\nTotal runtime: 548.20 msec\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 17:15:19 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Help analyzing 7.2.4 EXPLAIN" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Becuase it's 7.2.4, it's a little hard to tell exactly which part of the \n> query is taking up 90% of the processing time.\n\nKeep in mind that in the subqueries, the \"actual time\" shown is the time\nper iteration --- you should multiply by the \"loops\" value to get an\naccurate idea of where the time is going. With that in mind, it's real\nclear that the first subplan is eating the bulk of the time.\n\nI think you are probably right that execution of the\nif_addendee_conflict() function is the main cost. But\ngiven this subquery that's not too surprising:\n\n> \t (SELECT if_addendee_conflict(users.user_id, 3272, '2003-04-15 10:00', '1 days'::INTERVAL,\n> \t \tevents.event_id, events.event_date, events.duration, event_cats.status, '30 minutes') as cflt\n> \t\tFROM events, event_types, event_cats, event_days\n> \t\tWHERE events.event_id = event_days.event_id\n> \t\t\tand events.etype_id = event_types.etype_id\n> \t\t\t AND event_types.ecat_id = event_cats.ecat_id\n> \t\t\t AND event_days.event_day\n> \t\t\t \tBETWEEN '2003-04-15' AND '2003-04-16 10:00' \n> \t\tORDER BY cflt LIMIT 1) AS conflict\n\nWhat you have here is a subquery that will execute\nif_addendee_conflict() for *each* row of the events table; then throw\naway all but one of the results. And then do that over again for each\nuser row. It looks to me like if_addendee_conflict() is being called\nnearly 4000 times in this query. No wonder it's slow.\n\nThe first thing that pops to mind is whether you really need the *first*\nconflict, or would it be enough to find any old conflict? If you could\ndispense with the ORDER BY then at least some evaluations of\nif_addendee_conflict() could be saved.\n\nRealistically, though, I think you're going to have to refactor the work\nto make this perform reasonably. How much of what\nif_addendee_conflict() does is actually dependent on the user_id? Could\nyou separate out tests that depend only on the event, and do that in a\nseparate pass that is done only once per event, instead once per\nevent*user? If you could reduce the number of events that need to be\nexamined for any given user, you could get somewhere.\n\nAlso, I don't see where this query checks to see if the user is actually\ninterested in attending the event. Is that one of the things\nif_addendee_conflict checks? If so, you should pull it out and make it\na join condition. 
You're essentially forcing the stupidest possible\njoin algorithm by burying that condition inside a user-defined function.\nIt would win to check that sooner instead of later, since presumably the\nset of interesting events for any one user is a lot smaller than the set\nof all events.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 09 Apr 2003 20:51:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help analyzing 7.2.4 EXPLAIN " }, { "msg_contents": "Tom,\n\n> Keep in mind that in the subqueries, the \"actual time\" shown is the time\n> per iteration --- you should multiply by the \"loops\" value to get an\n> accurate idea of where the time is going. With that in mind, it's real\n> clear that the first subplan is eating the bulk of the time.\n\nThanks, that's what I thought, but I wanted confirmation.\n\n> The first thing that pops to mind is whether you really need the *first*\n> conflict, or would it be enough to find any old conflict? If you could\n> dispense with the ORDER BY then at least some evaluations of\n> if_addendee_conflict() could be saved.\n\nThe problem is that I need the lowest-sorted non-NULL conflict. The majority \n(95%) of the runs of if_attendee_conflict will return NULL. But we can't \nknow that until we run the test, which is a bit too complex for a case \nstatement.\n\nNow, if I could figure out a way to stop testing for a particular user the \nfirst time if_attendee_conflict returned a particular result, that could cut \nthe number of subquery loops by 1/3. Any ideas?\n\n> Realistically, though, I think you're going to have to refactor the work\n> to make this perform reasonably. How much of what\n> if_addendee_conflict() does is actually dependent on the user_id? \n\nAlmost all of it. The question being answered by the query is \"Please give me \nthe list of all users, plus which of them have a conflict for that particular \ndate and time and what kind of conflict it is\".\n\n>Could\n> you separate out tests that depend only on the event, and do that in a\n> separate pass that is done only once per event, instead once per\n> event*user? If you could reduce the number of events that need to be\n> examined for any given user, you could get somewhere.\n\nRegrettably, no. We have to run it for each user. I was acutally hoping to \ncome up with a way of running for less events, acutally ....\n\n>\n> Also, I don't see where this query checks to see if the user is actually\n> interested in attending the event. Is that one of the things\n> if_addendee_conflict checks? \n\nNo. <grin> the users aren't given a choice about what they want to attend -- \nthe purpose of the query is to supply the calendar staff with a list of who's \navailable so the users can be assigned -- whether they want to or not.\n\nWell, we'll see if the current incarnation bogs down in a couple of months, \nand I'll rework the query if so. Thanks for the advice!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 20:39:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help analyzing 7.2.4 EXPLAIN" }, { "msg_contents": "Tom,\n\nIf you're interested, here's the query I ended up with. 
It's much uglier than \nthe original query, but gives me slightly more data (one bit of information \nis seperated into 2 columns rather than rolled up), is 100ms faster, and \nshould not slow down much with the growth of the tables:\n\nSELECT users.user_id, (users.fname || COALESCE(' ' || users.minit, '') || ' ' \n|| users.lname) as atty_name,\n\tusers.lname,\n\tCOALESCE (\n\t(SELECT if_addendee_conflict(users.user_id, 3272, '2003-04-15 10:00', '1 \ndays'::INTERVAL,\n\t\tevents.event_id, events.event_date, events.duration, event_cats.status,\n\t\t'30 minutes', staff_id) as cflt\n\t\tFROM event_types, event_cats, event_days, events, event_staff\n\t\tWHERE events.event_id = event_days.event_id\n\t\t\tand events.etype_id = event_types.etype_id\n\t\t\tAND event_types.ecat_id = event_cats.ecat_id\n\t\t\tAND event_days.event_day BETWEEN '2003-04-15' AND '2003-04-16 10:00'\n\t\t\tAND events.event_id <> 3272\n\t\t\tAND events.event_id = event_staff.event_id\n\t\t\tAND event_staff.staff_id = users.user_id\n\t\t\tAND event_cats.status IN (1,3)\n\t\tORDER BY cflt LIMIT 1),\n\t(SELECT 'LEAVE'::TEXT\n\t FROM event_types, event_cats, event_days, events\n\t WHERE events.event_id = event_days.event_id\n\t\t\tand events.etype_id = event_types.etype_id\n\t\t\tAND event_types.ecat_id = event_cats.ecat_id\n\t\t\tAND event_days.event_day BETWEEN '2003-04-15' AND '2003-04-16 10:00'\n\t\t\tAND events.event_id <> 3272\n\t\t\tAND event_cats.status = 4)\n\t\t ) AS conflict,\n\t(SELECT (staff_id > 0) FROM event_staff\n\t\tWHERE event_id = 3272\n\t\tAND staff_id = users.user_id) as assigned\nFROM users\nWHERE EXISTS (SELECT teams_users.user_id FROM teams_users JOIN teams_tree\n\tON teams_users.team_id = teams_tree.team_id WHERE teams_tree.treeno\n\tBETWEEN 3 and 4 AND teams_users.user_id = users.user_id)\n\tAND users.status > 0\nORDER BY conflict, users.lname, atty_name;\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 10 Apr 2003 17:13:19 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help analyzing 7.2.4 EXPLAIN" } ]
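Tom's multiply-by-loops rule makes the hot spot in the plan above concrete: the correlated subplan runs at roughly 13.1 ms per iteration over 41 loops, which is about 540 ms of the 548 ms total. A quick way to confirm that the conflict function itself (rather than the joins around it) dominates is to time the inner subquery for a single user in isolation. The sketch below reuses the table, column and function names from the original query; the user_id 42 is purely a placeholder:

    -- 42 stands in for one real user_id (placeholder only)
    EXPLAIN ANALYZE
    SELECT if_addendee_conflict(42, 3272, '2003-04-15 10:00', '1 days'::INTERVAL,
                                events.event_id, events.event_date,
                                events.duration, event_cats.status, '30 minutes')
      FROM events, event_types, event_cats, event_days
     WHERE events.event_id = event_days.event_id
       AND events.etype_id = event_types.etype_id
       AND event_types.ecat_id = event_cats.ecat_id
       AND event_days.event_day BETWEEN '2003-04-15' AND '2003-04-16 10:00';

If that alone accounts for most of the roughly 13 ms per user, the only real wins are the ones already discussed: shrink the set of events each call has to examine, or stop evaluating events for a user as soon as a decisive conflict turns up.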
[ { "msg_contents": "Thanks for all the feedback, this is very informative.\n\nMy current issues that I'm still not clear on, are:\n* Is the ia32 architecture going to impose uncomfortable limits on my\napplication? I'm seeing lots of confirmation that this platform, regardless\nof the OS is going to limit me to less the 4GB of memory allocated to a\nsingle application (i.e. http://www.spack.org/index.cgi/LinuxRamLimits).\nThis may or may not be an issue because: (note that these are questions, not\nstatements)\n** Postgres is multi-process, not multi-threaded (?)\n** It's better to not use huge amount of sort-mem but instead let the OS do\nthe caching (?)\n** My needs are really not going to be as big as I think they are if I\nmanage the application/environment correctly (?)\n\nHere are some of the performance suggestions I've heard, please, if I\nmis-understood, could you help me get clarity?\n* It's better to run fewer apache children and turn off persistent\nconnections (I had suggested 200 children per server, someone else suggested\n40)\n* FreeBSD is going to provide a better file system than Linux (because Linux\nonly supports large files on journaling filesystems which impose extra over\nhead) (this gleaned from this conversation and previous threads in archives)\n* Running Linux or *BSD on a 64 bit platform can alleviate some potential\nRAM limitations (if there are truly going to be limitations). If this is\nso, I've heard suggestions for Itanium, Sparc and RS/6000. Maybe someone\ncan give some more info on these, here are my immediate thoughts: I've heard\nthat the industry as a whole has not yet warmed up to Itanium. I can't\nafford the newest Sparc Servers, so I'd need to settle with a previous\ngeneration if I went that route, any problems with that? I know nothing\nabout the RS/6000 servers (I did see one once though :-), does linux|*BSD\nrun well on them and any suggestions for series/models I should look at?\n\nFinally, some specific questions,\nWhat's the max number of connections someone has seen on a database server?\nWhat type of hardware was it? How much RAM did postgres use?\n\nThanks again,\n\n--\nMatthew Nuzum\nwww.bearfruit.org\[email protected]\n \n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Tom Lane\n> Sent: Wednesday, April 09, 2003 8:21 PM\n> To: [email protected]\n> Cc: scott.marlowe; Matthew Nuzum; 'Josh Berkus'; 'Pgsql-Performance'\n> Subject: Caching (was Re: [PERFORM] choosing the right platform)\n> \n> \"Jim C. Nasby\" <[email protected]> writes:\n> > That seems odd... shouldn't pgsql be able to cache information better\n> > since it would be cached in whatever format is best for it, rather than\n> > the raw page format (or maybe that is the best format). There's also the\n> > issue of having to go through more layers of software if you're relying\n> > on the OS caching. All the tuning info I've seen for every other\n> > database I've worked with specifically recommends giving the database as\n> > much memory as you possibly can, the theory being that it will do a much\n> > better job of caching than the OS will.\n> \n> There are a number of reasons why that's a dubious policy for PG (I\n> won't take a position on whether these apply to other databases...)\n> \n> One is that because we sit on top of the OS' filesystem, we can't\n> (portably) prevent the OS from caching blocks. 
So it's quite easy to\n> get into a situation where the same data is cached twice, once in PG\n> buffers and once in kernel disk cache. That's clearly a waste of RAM\n> however you slice it, and it's worst when you set the PG shared buffer\n> size to be about half of available RAM. You can minimize the\n> duplication by skewing the allocation one way or the other: either set\n> PG's allocation relatively small, relying heavily on the OS to do the\n> caching; or make PG's allocation most of RAM and hope to squeeze out\n> the OS' cache. There are partisans for both approaches on this list.\n> I lean towards the first policy because I think that starving the kernel\n> for RAM is a bad idea. (Especially if you run on Linux, where this\n> policy tempts the kernel to start kill -9'ing random processes ...)\n> \n> Another reason is that PG uses a simplistic fixed-number-of-buffers\n> internal cache, and therefore it can't adapt on-the-fly to varying\n> memory pressure, whereas the kernel can and will give up disk cache\n> space to make room when it's needed for processes. Since PG isn't\n> even aware of the total memory pressure on the system as a whole,\n> it couldn't do as good a job of trading off cache vs process workspace\n> as the kernel can do, even if we had a variable-size cache scheme.\n> \n> A third reason is that on many (most?) Unixen, SysV shared memory is\n> subject to swapping, and the bigger you make the shared_buffer arena,\n> the more likely it gets that some of the arena will be touched seldom\n> enough to make it a candidate for swapping. A disk buffer that gets\n> swapped to disk is worse than useless (if it's dirty, the swapping\n> is downright counterproductive, since an extra read and write cycle\n> will be needed before the data can make it to its rightful place).\n> \n> PG is *not* any smarter about the usage patterns of its disk buffers\n> than the kernel is; it uses a simple LRU algorithm that is surely no\n> brighter than what the kernel uses. (We have looked at smarter buffer\n> recycling rules, but failed to see any performance improvement.) So the\n> notion that PG can do a better job of cache management than the kernel\n> is really illusory. About the only advantage you gain from having data\n> directly in PG buffers rather than kernel buffers is saving the CPU\n> effort needed to move data across the userspace boundary --- which is\n> not zero, but it's sure a lot less than the time spent for actual I/O.\n> \n> So my take on it is that you want shared_buffers fairly small, and let\n> the kernel do the bulk of the heavy lifting for disk cache. That's what\n> it does for a living, so let it do what it does best. You only want\n> shared_buffers big enough so you don't spend too many CPU cycles shoving\n> data back and forth between PG buffers and kernel disk cache. 
The\n> default shared_buffers setting of 64 is surely too small :-(, but my\n> feeling is that values in the low thousands are enough to get past the\n> knee of that curve in most cases.\n> \n> \t\t\tregards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Wed, 9 Apr 2003 21:16:36 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": true, "msg_subject": "Caching (was Re: choosing the right platform)" }, { "msg_contents": "Matthew,\n\n> ** Postgres is multi-process, not multi-threaded (?)\n\nCorrect.\n\n> ** It's better to not use huge amount of sort-mem but instead let the OS do\n> the caching (?)\n\nThat's \"don't use a huge amount of *shared_buffers*\". Sort_mem is a different \nsetting. However, I have never seen a database use more than 32mb sort mem \nin a single process, so I don't think the 2GB limit will hurt you much ...\n\n> ** My needs are really not going to be as big as I think they are if I\n> manage the application/environment correctly (?)\n\nYour needs *per process*. Also, PostgreSQL is not as much of a consumer of \nRAM as it is a consumer of disk I/O.\n\n> * FreeBSD is going to provide a better file system than Linux (because\n> Linux only supports large files on journaling filesystems which impose\n> extra over head) (this gleaned from this conversation and previous threads\n> in archives) \n\nNo, the jury is still out on this one. ReiserFS is optimized for small \nfiles, and I've done well with it although some posters report stability \nproblems, though all second-hand. We hope to test this sometime in the \nupcoming months.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 9 Apr 2003 20:45:29 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "On Wed, 2003-04-09 at 20:16, Matthew Nuzum wrote:\n> Thanks for all the feedback, this is very informative.\n[snip]\n> * Running Linux or *BSD on a 64 bit platform can alleviate some potential\n> RAM limitations (if there are truly going to be limitations). If this is\n> so, I've heard suggestions for Itanium, Sparc and RS/6000. Maybe someone\n> can give some more info on these, here are my immediate thoughts: I've heard\n> that the industry as a whole has not yet warmed up to Itanium. I can't\n> afford the newest Sparc Servers, so I'd need to settle with a previous\n> generation if I went that route, any problems with that? I know nothing\n> about the RS/6000 servers (I did see one once though :-), does linux|*BSD\n> run well on them and any suggestions for series/models I should look at?\n\nIf you want 64-bit, maybe wait for Operon, or look at Alphas. You could\nprobably get a used DS20 or ES40 for a pretty good price, and Linux is\n*well* supported on Alpha. If you want something that really smokes,\nand have some buck lying around, try an ES47.\n\n-- \n+----------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"A C program is like a fast dance on a newly waxed dance floor |\n| by people carrying razors.\" |\n| Waldi Ravens |\n+----------------------------------------------------------------+\n\n", "msg_date": "10 Apr 2003 03:17:59 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "On Wed, Apr 09, 2003 at 09:16:36PM -0400, Matthew Nuzum wrote:\n> Thanks for all the feedback, this is very informative.\n>\n> Here are some of the performance suggestions I've heard, please, if I\n> mis-understood, could you help me get clarity?\n> * It's better to run fewer apache children and turn off persistent\n> connections (I had suggested 200 children per server, someone else suggested\n> 40)\n\nHi Matthew,\n\nI'm coming in a bit late and slightly OT here, but one common Apache\nsolution you might want to look at is a \"reverse proxy\" configuration.\nThis works very well if there's a good proportion of static vs dynamic\ncontent on your site - if your pages contain a lot of graphics then this\nmay well be the case.\n\nTo do this, you compile 2 Apache servers listening on different ports on\nthe same machine (or you can have them on different machines too).\n\nServer 1 (we'll call the \"front server\") is just a vanilla Apache\nlistening on Port 80, compiled with mod_rewrite and mod_proxy but\nnothing else.\n\nServer 2 (\"back server\" or \"heavy server\") has mod_php and anything else\nyou need which is quite bulky (e.g. XML processing stuff, mod_perl ...)\nIt can listen on Port 8080 or something. Your persistent DB connections\ncome from Server 2.\n\nAll web requests come in to Server 1 in the normal way and Server 1\ndeals with static content as before. By setting up Apache rewrite rules\non Server 1, requests for *.php and other dynamic stuff can be forwarded\nto Server 2 for processing. Server 2 returns its response back through\nServer 1 and the end-user is oblivious to what's going on. (Server 2\nand/or your firewall can be configured to allow connections only from\nServer 1 too.)\n\nIt's a bit of effort to set up and does require a wee bit more\nmaintenance than a single server but it comes with a few nice\nadvantages:\n\n* You can have a lower MaxClients setting on server 2 and hence less\n persistent DB connections and less memory used by heavy Apache modules\n and PostgreSQL instances.\n\n* Server 1 is nice and light - no DB, low memory use (much of which is\n probably shared) - so you can set its MaxClients much higher.\n\n* The overall impact of each dynamic page is lower as all of the\n images and stylesheets it references can be quickly dealt with by\n Server 1, rather than wasting an unnecessary wodge of memory and\n persistent DB connection.\n\nI used this recently for transforming XML web pages into HTML using XSLT\nand mod_perl on a slightly old and underpowered Solaris server and it\nworked really well. 
Of course, YMMV!\n\nThere are lots of tutorials on setting this up on the web - the mod_perl\nguide has some very handy stuff in it which ought to apply reasonably\nwell to PHP too:\n\nhttp://perl.apache.org/docs/1.0/guide/scenario.html\n\nHope that might help,\nDavid.\n\n", "msg_date": "Thu, 10 Apr 2003 10:34:35 +0100", "msg_from": "David McKain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "On Thursday 10 April 2003 15:04, you wrote:\n> On Wed, Apr 09, 2003 at 09:16:36PM -0400, Matthew Nuzum wrote:\n> > Thanks for all the feedback, this is very informative.\n> >\n> > Here are some of the performance suggestions I've heard, please, if I\n> > mis-understood, could you help me get clarity?\n> > * It's better to run fewer apache children and turn off persistent\n> > connections (I had suggested 200 children per server, someone else\n> > suggested 40)\n>\n> Hi Matthew,\n>\n> I'm coming in a bit late and slightly OT here, but one common Apache\n> solution you might want to look at is a \"reverse proxy\" configuration.\n> This works very well if there's a good proportion of static vs dynamic\n> content on your site - if your pages contain a lot of graphics then this\n> may well be the case.\n>\n> To do this, you compile 2 Apache servers listening on different ports on\n\nUmm.. AFAIK, if you use fastCGI, persistence of connection should be a lot \nbetter and <self drumming on> or OAS Server, which gives you explicit control \non how much resources to allocate. </self drumming on>\n\n Shridhar\n\n", "msg_date": "Thu, 10 Apr 2003 15:29:05 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "\nShort summary...\n\n I think sort_mem matters quite a bit (20-40%) on\n my data-warehousing applications.\n\n Am I doing something wrong to need so much sort_mem?\n\nJosh wrote:\n>> ** It's better to not use huge amount of sort-mem...\n>\n>...However, I have never seen a database use more than 32mb sort mem\n>in a single process, so I don't think the 2GB limit will hurt you much ...\n\nDo you think this is true in data warehousing applications as well?\n\nDuring the ETL part of data warehousing, large sorts are often\nused to get the \"new\" values that need to be inserted\ninto \"dimension\" tables, like this:\n INSERT INTO dimension_val (id,val)\n SELECT nextval('val_seq'),val\n FROM (SELECT DISTINCT val FROM import_table\n EXCEPT\n SELECT val FROM dimension_val) as a;\nAs far as I can tell, this query typically does two sorts,\none for the distinct, and one for the except.\n\n\nIn a data warehouse we have here, we load about 3 million rows\neach week; load time improved from about 9 to 7 hours\nby breaking up such queries into expressions that only require\none sort at a time, and surrounding the expressions with\n\"set sort_mem=something_big\" statements to give it enough\nspace to not hit the disk.\n\n SET SORT_MEM=300000;\n CREATE TEMPORARY TABLE potential_new_values AS\n SELECT DISTINCT val FROM import_table;\n ...\n SET SORT_MEM=1000;\n\nAnyone else have similar experience, or am I doing something\nwrong to need so much SORT_MEM?\n\n\n Ron\n\n\n\nPS:\n\nBelow is an example of another real-world query from the same\nreporting system that benefits from a sort_mem over 32M.\nExplain analyze (below) shows a 40% improvement by having\nthe sort fit in memory.\n\n10Meg and 32Meg take over 22 seconds. 
100Meg takes 14.\n\n====================================================================================================\nlogs2=#\nlogs2=#\nlogs2=# set sort_mem=10000;\nSET VARIABLE\nlogs2=# explain analyze select distinct category from c_transaction_credit;\nNOTICE: QUERY PLAN:\n\nUnique (cost=71612.82..72838.69 rows=49035 width=17) (actual time=20315.47..22457.21 rows=2914 loops=1)\n -> Sort (cost=71612.82..71612.82 rows=490348 width=17) (actual time=20315.46..21351.42 rows=511368 loops=1)\n -> Seq Scan on c_transaction_credit (cost=0.00..14096.48 rows=490348 width=17) (actual time=0.08..2932.72 rows=511368\nloops=1)\nTotal runtime: 22475.63 msec\n\nEXPLAIN\nlogs2=# set sort_mem=32000;\nSET VARIABLE\nlogs2=# explain analyze select distinct category from c_transaction_credit;\nNOTICE: QUERY PLAN:\n\nUnique (cost=60442.82..61668.69 rows=49035 width=17) (actual time=22657.31..24794.19 rows=2914 loops=1)\n -> Sort (cost=60442.82..60442.82 rows=490348 width=17) (actual time=22657.30..23714.43 rows=511368 loops=1)\n -> Seq Scan on c_transaction_credit (cost=0.00..14096.48 rows=490348 width=17) (actual time=0.07..3020.83 rows=511368\nloops=1)\nTotal runtime: 24811.65 msec\n\nEXPLAIN\nlogs2=# set sort_mem=100000;\nSET VARIABLE\nlogs2=# explain analyze select distinct category from c_transaction_credit;\nNOTICE: QUERY PLAN:\n\nUnique (cost=60442.82..61668.69 rows=49035 width=17) (actual time=12205.19..14012.57 rows=2914 loops=1)\n -> Sort (cost=60442.82..60442.82 rows=490348 width=17) (actual time=12205.18..12710.16 rows=511368 loops=1)\n -> Seq Scan on c_transaction_credit (cost=0.00..14096.48 rows=490348 width=17) (actual time=0.08..3001.05 rows=511368\nloops=1)\nTotal runtime: 14187.96 msec\n\nEXPLAIN\nlogs2=#\n\n", "msg_date": "Thu, 10 Apr 2003 16:06:40 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "On Thu, 10 Apr 2003, Ron Mayer wrote:\n\n> \n> Short summary...\n> \n> I think sort_mem matters quite a bit (20-40%) on\n> my data-warehousing applications.\n> \n> Am I doing something wrong to need so much sort_mem?\n\nNo. In fact, it's not uncommon for certain queries to need WAY more sort \nmemory than most queries. The mistake that gets made is setting sort_mem \nto something like 32 meg for every sort. There are many \"sorts\" on my \nmachine that are coming from well ordered data, and don't really need to \nbe done in memory to be reasonably fast. Those can run fine with 8 meg \nsort_mem. For things with less well ordered in the database, or where the \ndata set is really big (100s of megs of data being sorted) it often helps \nto just grab a 100 meg sort_mem for the session. \n\nIf sort_mem is too big, the OS will likely wind up swapping it or shared \nmemory out and thrashing at the worst, or just surrendering all spare \nmemory to sort_mem, thus flushing all fs cache. 
For a lot of apps, it's \nall about the sweet spot of memory to each subsystem, and sort_mem can go \nfrom nibbling memory to eating it like Nibbler from Futurama in seconds if \nyou set it just a little too high and have the right parallel load on your \nserver.\n\nSo, as long as you aren't starving your server of resources, setting \nsort_mem higher is fine.\n\n", "msg_date": "Fri, 11 Apr 2003 09:14:00 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "Ron,\n\n> In a data warehouse we have here, we load about 3 million rows\n> each week; load time improved from about 9 to 7 hours\n> by breaking up such queries into expressions that only require\n> one sort at a time, and surrounding the expressions with\n> \"set sort_mem=something_big\" statements to give it enough\n> space to not hit the disk.\n>\n> SET SORT_MEM=300000;\n> CREATE TEMPORARY TABLE potential_new_values AS\n> SELECT DISTINCT val FROM import_table;\n> ...\n> SET SORT_MEM=1000;\n>\n> Anyone else have similar experience, or am I doing something\n> wrong to need so much SORT_MEM?\n\nNo, this sounds very reasonable to me. I do a similar operation on one of my \nsystems as part of a nightly data transformation for reporting. Since I \nhaven't had to do those on tables over 150,000 rows, I haven't seen the kind \nof RAM usage you experience.\n\n> Below is an example of another real-world query from the same\n> reporting system that benefits from a sort_mem over 32M.\n> Explain analyze (below) shows a 40% improvement by having\n> the sort fit in memory.\n\nCool! That's a perfect example of sizing sort_mem for the query. Mind if I \nsteal it for an article at some point?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Fri, 11 Apr 2003 09:15:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" }, { "msg_contents": "\nJosh wrote:\n>Ron,\n>> Below is an example of another real-world query from the same\n>> reporting system that benefits from a sort_mem over 32M.\n>> Explain analyze (below) shows a 40% improvement by having\n>> the sort fit in memory.\n>\n>Cool! That's a perfect example of sizing sort_mem for the query. Mind if I \n>steal it for an article at some point?\n\nGladly!\n\n\nBTW... if you're writing a tuning article, the most interesting one \nI've seen is:\n http://otn.oracle.com/oramag/webcolumns/2002/techarticles/scalzo_linux01.html\nI like how they broke down the process in many steps and measured after each.\nI'm was intrigued by how much Linux's VM tweaking (vm.bdflush) affected \nperformance mattered as much at the more-commontly tweaked \"noatime\".\n\n Ron\n\n", "msg_date": "Fri, 11 Apr 2003 10:25:24 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Caching (was Re: choosing the right platform)" } ]
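Condensing the per-session sort_mem technique from this thread into one snippet (the numbers are only illustrative; sort_mem is set per session and measured in kilobytes, and the table names come from Ron's own example):

SET sort_mem = 131072;            -- roughly 128MB, only for this session's big ETL sort
CREATE TEMPORARY TABLE potential_new_values AS
    SELECT DISTINCT val FROM import_table;
SET sort_mem = 8192;              -- drop back to a modest value once the large sort is done

As Scott notes above, the risk of leaving a large value in place globally is that several concurrent sorts can claim the memory the OS needs for its own disk cache.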
[ { "msg_contents": "Hello,\n\nWe're evaluating postgresql as a backend choice for our next\ngeneration software and would like to perform some rough measurements\nin-house. Where could I get my hands on some reference data, say few\nvery large tables with a total size of over 1G that we could run. I\nnoticed earlier discussion about Tiger data, but 30G is a bit too much\nfor what we need. Any other ideas or suggestions?\n\nThanks.\n\n--\n-Boris\n\n", "msg_date": "Thu, 10 Apr 2003 10:20:49 -0700", "msg_from": "Boris Popov <[email protected]>", "msg_from_op": true, "msg_subject": "Reference data for performance testing?" }, { "msg_contents": "Boris,\n\n> We're evaluating postgresql as a backend choice for our next\n> generation software and would like to perform some rough measurements\n> in-house. Where could I get my hands on some reference data, say few\n> very large tables with a total size of over 1G that we could run. I\n> noticed earlier discussion about Tiger data, but 30G is a bit too much\n> for what we need. Any other ideas or suggestions?\n\nThe same discussion references the FCC data, which is more managably sized.\n\nPlease share your results, if you can!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 10 Apr 2003 10:22:50 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reference data for performance testing?" }, { "msg_contents": "Hello Josh,\n\nThursday, April 10, 2003, 10:22:50 AM, you wrote:\n\nJB> Boris,\n\n>> We're evaluating postgresql as a backend choice for our next\n>> generation software and would like to perform some rough measurements\n>> in-house. Where could I get my hands on some reference data, say few\n>> very large tables with a total size of over 1G that we could run. I\n>> noticed earlier discussion about Tiger data, but 30G is a bit too much\n>> for what we need. Any other ideas or suggestions?\n\nJB> The same discussion references the FCC data, which is more managably sized.\n\nJB> Please share your results, if you can!\n\n\nI can't find a link right now, could you tell me where can I download it?\n\n--\n-Boris\n\n", "msg_date": "Thu, 10 Apr 2003 10:24:51 -0700", "msg_from": "Boris Popov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reference data for performance testing?" }, { "msg_contents": "\nOn Thursday, April 10, 2003, at 01:20 PM, Boris Popov wrote:\n\n> We're evaluating postgresql as a backend choice for our next\n> generation software and would like to perform some rough measurements\n> in-house. Where could I get my hands on some reference data, say few\n> very large tables with a total size of over 1G that we could run. I\n> noticed earlier discussion about Tiger data, but 30G is a bit too much\n> for what we need. Any other ideas or suggestions?\n\nActually Tiger is broken down into easily digestable chunks; you don't \ngrab all 30G at once. Pick one moderate size state to work with and \nyou've got about the right size data set.\n\n", "msg_date": "Thu, 10 Apr 2003 13:47:44 -0400", "msg_from": "Chris Hedemark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reference data for performance testing?" }, { "msg_contents": "Hi folks, sorry if the following is confusing, I have just tried to \nprovide the pertinent info and I have been up for more than 24 hours \nworking. I am getting weary....\n\nI am having a problem with an update transaction. 
I need to update 4000+ \nrecords but the update query keeps blowing out postgres and at times I \nam forced to restart the postmaster or reboot my server if I update \n2500+ records. The query is fine with 2225 records it is just somewhere \nbeyond 2225 that brings the server down.\n\nI assumed this was related to the shared memory settings but when I \ntried changing those values the behavior was identical. I did not try \nbeyond 256 megs for shmmax.\nI then tried the temporary solution of lowering the number of \nshared_buffers and max_connections but that did not change anything either.\nI then tried using the IN operator but that did not change anything either\n\nI am wondering if there is some other limit that I am hitting in MacOSX \nthat is not related to the SHM vars.\n\nI hope I am just overlooking something simple and that the list will \ncome back with some chiding and an answer :)\n\ndoes anybody have any suggestions?\n\nthank you very much,\n\n-Shane\n\n\n-----------------------------------------------------------\nSQL DETAILS:\n\nmy query is of this form:\n\nBEGIN;\n\nUPDATE \"mytable\" SET \"n_filtered\"=0,\"n_dirty\"=1 where (\"s_fileName\" = \n'filename1' OR \"s_fileName\" = 'filename2' ....... OR \"s_fileName\" = \n'filename2000') AND (\"n_objId=12345);\n\nCOMMIT;\n\nexplain tells me this:\n\nseq scan on \"mytable\" (cost=0.00..5020.00 rows=5 width=174) Filter \n .........\n\n-----------------------------------------------------------\n\n-----------------------------------------------------------\nSYSTEM:\n\nos: Mac 10.2.4\nchip: 1.4 GHz\nram: 1 GB\n-----------------------------------------------------------\n\n-----------------------------------------------------------\nERROR MESSAGE:\n\nserver process (pid 650) was terminated by signal 11\nall server processes terminated; reinitializing shared memopry and \nsemaphores\n\n-----------------------------------------------------------\n\n-----------------------------------------------------------\nSOLUTIONS I HAVE TRIED\n\n1. tweaking the five kern.sysv vars. to every configurable option possible\n\ncurrently I am at:\n\nkern.sysv.shmmax: 1073741824\nkern.sysv.shmmin: 256\nkern.sysv.shmmni: 8192\nkern.sysv.shmseg: 2048\nkern.sysv.shmall: 262144\n\n(I know shmmin is way too high according to the stuff I read, I was just \ngetting desperate, I have tried just leaving it at 1)\n\n\n2. lowering the number of shared_buffers and max_connections to 32:16 \n32:8 16:8 in postgresql.conf\n\ncurrently I am at 64 shared_bufs and 32 max_connects (these were the \ndefaults)\n\n3. Using the IN sql operator rather than a bunch of ORs, but I still \nhave the same problem\n\n-------------------------------------\n\n", "msg_date": "Sat, 12 Apr 2003 10:14:52 -0700", "msg_from": "shane hill <[email protected]>", "msg_from_op": false, "msg_subject": "update query blows out" }, { "msg_contents": "shane hill <[email protected]> writes:\n> I am having a problem with an update transaction. I need to update 4000+ \n> records but the update query keeps blowing out postgres and at times I \n> am forced to restart the postmaster or reboot my server if I update \n> 2500+ records. The query is fine with 2225 records it is just somewhere \n> beyond 2225 that brings the server down.\n\nWhat Postgres version? 
Can you get a backtrace from the core dump?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 12 Apr 2003 13:16:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query blows out " }, { "msg_contents": "Postgres 7.3.1\n\nThe crash log is attached:\n\nthank you very much!\n\n-Shane\n\n\nTom Lane wrote:\n\n>shane hill <[email protected]> writes:\n> \n>\n>>I am having a problem with an update transaction. I need to update 4000+ \n>>records but the update query keeps blowing out postgres and at times I \n>>am forced to restart the postmaster or reboot my server if I update \n>>2500+ records. The query is fine with 2225 records it is just somewhere \n>>beyond 2225 that brings the server down.\n>> \n>>\n>\n>What Postgres version? Can you get a backtrace from the core dump?\n>\n>\t\t\tregards, tom lane\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n>\n>\n> \n>", "msg_date": "Sat, 12 Apr 2003 11:14:55 -0700", "msg_from": "shane hill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query blows out" }, { "msg_contents": "shane hill <[email protected]> writes:\n> I am having a problem with an update transaction. I need to update 4000+ \n> records but the update query keeps blowing out postgres and at times I \n> am forced to restart the postmaster or reboot my server if I update \n> 2500+ records. The query is fine with 2225 records it is just somewhere \n> beyond 2225 that brings the server down.\n\n> [ core dump in heavily-recursive routine ]\n\nI think you are running into a stack-size problem. A quick look at\n\"ulimit -a\" on my own OS X machine shows that the default stack limit\nis a mere 512KB, which is verging on the ridiculously small :-(.\n\nTry setting \"ulimit -s 10000\" or so in the script that launches the\npostmaster. Now that I look at it, the -d setting is on the miserly\nside as well ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 12 Apr 2003 16:53:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: update query blows out " } ]
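Besides raising the stack limit as Tom suggests, one way to keep a 4000-term UPDATE from producing such a deeply nested expression in the first place is to stage the file names in a temporary table and join against it. This is only a sketch reusing the column names from the thread; the staging table and the COPY path are invented, and COPY FROM a server-side file needs superuser rights (an application could just as well issue many small INSERTs).

CREATE TEMP TABLE files_to_flag ("s_fileName" text);
COPY files_to_flag FROM '/tmp/filenames.txt';      -- hypothetical path; or a loop of small INSERTs

UPDATE "mytable"
   SET "n_filtered" = 0, "n_dirty" = 1
  FROM files_to_flag f
 WHERE "mytable"."s_fileName" = f."s_fileName"
   AND "mytable"."n_objId" = 12345;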
[ { "msg_contents": "Hi,\nI have a visual basic application and the access to postgress is very slow.\nIf I run the query on PGadminII the answer is quite good but if I run the same query from the application it takes a long time (I debugged it). 11seconds .vs. 5 minutes.\nI would like to know if I have a problem with my connection to postgres or my odbc.\nThe string connection is:\n \ngobjBD.StringConexion = \"Provider=MSDataShape.1;DRIVER={PostgreSQL};DATABASE=mydatabase;SERVER=192.9.200.5;PORT=5432;UID=postgres;PWD=\"\n \nI�ll really apreciate your help.\nThanks\nCecilia�nete al mayor servicio mundial de correo electr�nico: Haz clic aqu� \n", "msg_date": "Thu, 10 Apr 2003 12:32:49 -0500", "msg_from": "\"Cecilia Alvarez\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Visual Basic application" }, { "msg_contents": "\nIt appears you are trying to create a cube by using MSDataShape. Correct?\nThis could be the cause of the slow query. Why not use a straight ADO\nconnection?\n\nDim cn As ADODB.Connection\nDim rs As ADODB.Recordset\n\nSet cn = New ADODB.Connection\nSet rs = New ADODB.Recordset\n\ncn.ConnectionString = \"driver={PostgreSQL};server=192.9.200.5;uid=;pwd\n=;database=mydatabase\"\ncn.ConnectionTimeout = 300\ncn.CursorLocation = adUseClient\ncn.Open\n\nSet rs = cn.Execute(\"Select * from table1)\n\n\n\n\n \n \"Cecilia Alvarez\" \n <[email protected]> To: [email protected] \n Sent by: cc: \n pgsql-performance-owner@post Subject: [PERFORM] Slow Visual Basic application \n gresql.org \n \n \n 04/10/2003 10:32 AM \n \n\n\n\n\nHi,\nI have a visual basic application and the access to postgress is very slow.\nIf I run the query on PGadminII the answer is quite good but if I run the\nsame query from the application it takes a long time (I debugged it).\n11seconds .vs. 5 minutes.\nI would like to know if I have a problem with my connection to postgres or\nmy odbc.\nThe string connection is:\n\ngobjBD.StringConexion = \"Provider=MSDataShape.1;DRIVER\n={PostgreSQL};DATABASE=mydatabase;\nSERVER=192.9.200.5;PORT=5432;UID=postgres;PWD=\"\n\nI´ll really apreciate your help.\nThanks\nCecilia\n\nÚnete al mayor servicio mundial de correo electrónico: Haz clic aquí\n\n", "msg_date": "Thu, 10 Apr 2003 11:52:45 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Visual Basic application" } ]
[ { "msg_contents": "I noticed that when a multicolumn index exists it isn't necessarily\nfully used when the first column is constrained by an equals condition.\nHowever by adding a redundant sort condition you can get both columns\nused.\n\nIn the following examples crate has an index on gameid and areaid.\n\nThe examples below are for 7.4 development, but 7.3.2 behaves similarly.\n\nexplain analyze select areaid from crate where gameid = 'TTN' order by areaid;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=132.93..133.02 rows=36 width=11) (actual time=5.44..5.57 rows=287 loops=1)\n Sort Key: areaid\n -> Index Scan using crate_game on crate (cost=0.00..132.00 rows=36 width=11) (actual time=0.06..1.94 rows=287 loops=1)\n Index Cond: (gameid = 'TTN'::text)\n Total runtime: 5.81 msec\n(5 rows)\n\n\nexplain analyze select areaid from crate where gameid = 'TTN' order by gameid, areaid;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Index Scan using crate_game on crate (cost=0.00..132.00 rows=36 width=18) (actual time=0.08..2.06 rows=287 loops=1)\n Index Cond: (gameid = 'TTN'::text)\n Total runtime: 2.51 msec\n(3 rows)\n\n", "msg_date": "Sun, 13 Apr 2003 20:45:47 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Multicolumn indexes and equal conditions" } ]
[ { "msg_contents": "Hi,\n\n\n\n I am writing to you to discuss the performance problem of postgreSQL\ndatabase we encountered in our project. I want to get suggestions from you.\n\n\n\nThe postgreSQL database we used need to process several millions records.\nThere are only six tables in the database. one of them contains several\nmillion records, the Others are less smaller. We need select more than 100\nthousands records from the talbe which contains several million records in\n10 seconds. In the process of selecting, the speed of selecting is not\nstable. Sometimes it cost 2 minutes , but sometimes 20 seconds. After\nanalyzing the time wasting in the process, we found the speed of function\nCount(*) is very slow. At the same time we have finished the setup of some\nparameters like max_fsm_relation, max_fsm_pages, share memory size etc, but\nthe performance is not improved satisfied.\n\n\n\nUnder this condition, I want get some useful suggestion from you. How to\noptimize the database? How to improve the Count(*)? Because we want to get\nthe number of records in the recordset we got.\n\n\n\nThank you every much! I hope hear from you soon.\n\n\nWind\n\n\n2003-4-15\n\n\n\n\n\n\n\n\nHi,\n \n I am writing \nto you to discuss the performance problem of postgreSQL database we encountered \nin our project. I \n want to get suggestions from \nyou.\n \nThe postgreSQL database we used need to process several \nmillions records. There are only six tables in the database. one of them \ncontains several \nmillion records,\nthe  Others are less \nsmaller. We need \nselect more than 100 thousands records from the talbe which contains several \nmillion records in 10 seconds.  \nIn the process of selecting, the speed of selecting is not stable. \nSometimes it cost 2 minutes , but sometimes 20 seconds. After analyzing the time \nwasting in the process, we found the speed of  function Count(*) is very slow. At the \nsame time we have finished the setup of some parameters like max_fsm_relation, \nmax_fsm_pages, share memory size etc, but the performance is not improved \nsatisfied.\n \nUnder this condition, I want get some useful \nsuggestion from you. How to optimize the database?  How to improve the Count(*)? Because \nwe  want to get the number of \nrecords in the recordset  we \ngot.\n  \n\nThank you every much! I hope hear from you \nsoon.\n                                                                                                 \nWind\n                                                                  \n                                    2003-4-15", "msg_date": "Tue, 15 Apr 2003 17:44:44 +0800", "msg_from": "linweidong <[email protected]>", "msg_from_op": true, "msg_subject": "for help!" }, { "msg_contents": "On Tuesday 15 April 2003 15:14, you wrote:\n> The postgreSQL database we used need to process several millions records.\n> There are only six tables in the database. one of them contains several\n> million records, the Others are less smaller. We need select more than 100\n> thousands records from the talbe which contains several million records in\n> 10 seconds. In the process of selecting, the speed of selecting is not\n> stable. Sometimes it cost 2 minutes , but sometimes 20 seconds. After\n> analyzing the time wasting in the process, we found the speed of function\n> Count(*) is very slow. 
At the same time we have finished the setup of some\n> parameters like max_fsm_relation, max_fsm_pages, share memory size etc, but\n> the performance is not improved satisfied.\n\nWhy do you need to do select count(*) to select more than 100 thousand \nrecords?\n\nPostgresql being MVCC database, select count(*) is not going to be anywhere \nnear good, especially if you have transactions occuring on table.\n\nAs far as just selecting rows from table, that should be tad fast if there are \nproper indexes, table in analyzed every now and then and there are enough \nshared buffers.\n\nIf you post your queries and table schemas, that would be much helpful. Your \ntweaked settings in postgresql.conf and hardware spec. would be good as well.\n\n> Under this condition, I want get some useful suggestion from you. How to\n> optimize the database? How to improve the Count(*)? Because we want to\n> get the number of records in the recordset we got.\n\nIf you are using say libpq, you don't need to issue a select count(*) where \nfoo and select where foo, to obtain record count and the records themselves. \nI believe every other interface stemming from libpq should provide any such \nhooks as well. Never used any other myself (barring ecpg)\n\n HTH\n\n Shridhar\n\n", "msg_date": "Tue, 15 Apr 2003 15:24:11 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help!" }, { "msg_contents": "<SNIP>\n> > Under this condition, I want get some useful suggestion from you. How to\n> > optimize the database? How to improve the Count(*)? Because we want to\n> > get the number of records in the recordset we got.\n> \n> If you are using say libpq, you don't need to issue a select count(*) where \n> foo and select where foo, to obtain record count and the records themselves. \n> I believe every other interface stemming from libpq should provide any such \n> hooks as well. Never used any other myself (barring ecpg)\n\nThe python interfaces most definitely do. Doing the count is quite\nunnecessary just as Shridhar points out.\n\n> HTH\n> \n> Shridhar\n>", "msg_date": "15 Apr 2003 08:40:05 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help!" }, { "msg_contents": "On Tue, 15 Apr 2003, linweidong wrote:\n\n> Under this condition, I want get some useful suggestion from you. How to\n> optimize the database? How to improve the Count(*)? Because we want to get\n> the number of records in the recordset we got.\n\nWell, you can always use the trick of putting an on insert / delete \ntrigger on the table that maintains a single row table with the current \ncount. That way, whenever a row is added or removed, the count is \nupdated. this will slow down inserts and deletes a little, but TANSTAAFL.\n\n", "msg_date": "Tue, 15 Apr 2003 10:34:21 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help!" }, { "msg_contents": "Scott,\n\n> Well, you can always use the trick of putting an on insert / delete\n> trigger on the table that maintains a single row table with the current\n> count. That way, whenever a row is added or removed, the count is\n> updated. this will slow down inserts and deletes a little, but TANSTAAFL.\n\nBTW, I tested this for a client. 
I found the performance penalty on inserts \nand updates to be:\n\n-- For a single stream of intermittent updates from a single connection\n on an adequately powered server with moderate disk support (IDE Linux RAID)\n (100 inserts/updates per minute, with VACUUM every 5 minutes)\n PL/pgSQL Trigger: 20% penalty C Trigger: 9-11% penalty\n\n-- For 5 streams of inserts and updates at high volume on an overloaded\n server with moderate disk support (dual fast SCSI disks)\n (1000 inserts/updates per minute, vacuum every 5 minutes)\n PL/pgSQL Trigger: 65% penalty C Trigger: 40% penalty\n\nPlease note that the effective performance penalty on inserts and updates was \ndramatically higher for large batches of updates than for small ones.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 16 Apr 2003 08:48:36 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help!" }, { "msg_contents": "On Wed, Apr 16, 2003 at 08:48:36AM -0700, Josh Berkus wrote:\n> Scott,\n> \n> > Well, you can always use the trick of putting an on insert / delete\n> > trigger on the table that maintains a single row table with the current\n> > count. That way, whenever a row is added or removed, the count is\n\n> BTW, I tested this for a client. I found the performance penalty\n> on inserts and updates to be:\n\n[. . .]\n\n> Please note that the effective performance penalty on inserts and\n> updates was dramatically higher for large batches of updates than\n> for small ones.\n\nPresumably the problem was to do with contention? This is why I\ndon't really like the \"update one row\" approach for this sort of\nthing.\n\nBut you _could_ write a trigger which inserts into a \"staging\" table,\nand write a little daemon which only updates the count table with the\ndata from the staging table. It's a mighty ugly hack, but it ought\nto work.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 16 Apr 2003 11:53:12 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help!" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> But you _could_ write a trigger which inserts into a \"staging\" table,\n> and write a little daemon which only updates the count table with the\n> data from the staging table. It's a mighty ugly hack, but it ought\n> to work.\n\nThe $64 question with this sort of thing is \"how accurate (up-to-date)\ndoes the count have to be?\".\n\nGiven that Josh is willing to vacuum every five minutes, he might find\nthat returning pg_class.reltuples is Close Enough (TM).\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 16 Apr 2003 12:01:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help! " }, { "msg_contents": "On Wed, Apr 16, 2003 at 12:01:56PM -0400, Tom Lane wrote:\n> Given that Josh is willing to vacuum every five minutes, he might find\n> that returning pg_class.reltuples is Close Enough (TM).\n\nCertainly, it's not going to be any farther off than the\nstaging-table+real-table approach, anyway.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 16 Apr 2003 12:11:58 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for help!" } ]
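For readers who want to try the trigger-maintained count Scott describes, here is a minimal PL/pgSQL sketch. The table and function names are hypothetical, the plpgsql language is assumed to be installed, and the single-row counter table will serialize heavy concurrent writers, which is exactly the contention Andrew warns about.

CREATE TABLE bigtable (id serial PRIMARY KEY, payload text);
CREATE TABLE bigtable_count (n bigint NOT NULL);
INSERT INTO bigtable_count VALUES (0);

CREATE FUNCTION maintain_bigtable_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE bigtable_count SET n = n + 1;
    ELSE                              -- DELETE
        UPDATE bigtable_count SET n = n - 1;
    END IF;
    RETURN NULL;                      -- AFTER trigger: the return value is ignored
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER bigtable_count_trig AFTER INSERT OR DELETE ON bigtable
    FOR EACH ROW EXECUTE PROCEDURE maintain_bigtable_count();

-- cheap replacement for SELECT count(*) FROM bigtable:
SELECT n FROM bigtable_count;

-- or, when an estimate as of the last VACUUM is good enough (Tom's suggestion):
SELECT reltuples FROM pg_class WHERE relname = 'bigtable';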
[ { "msg_contents": "Hi all,\n\nWhilst I often use views for convenience, is there any performance\nadvantage at all in using a view rather than running the same query\ndirectly on the tables themselves?\n\nIs it quicker to run a full complex query rather than run a much simpler\nquery on a view? (Hope that makes sense)\n\n\nYours Unwhettedly,\nRobert John Shepherd.\n\nEditor\nDVD REVIEWER\nThe UK's BIGGEST Online DVD Magazine\nhttp://www.dvd.reviewer.co.uk\n\nFor a copy of my Public PGP key, email: [email protected] \n\n", "msg_date": "Tue, 15 Apr 2003 16:49:18 +0100", "msg_from": "\"Robert John Shepherd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Do Views offer any performance advantage?" }, { "msg_contents": "\"Robert John Shepherd\" <[email protected]> writes:\n> Whilst I often use views for convenience, is there any performance\n> advantage at all in using a view rather than running the same query\n> directly on the tables themselves?\n\nNo, a view is just a macro.\n\nThere is probably some minuscule cost difference involved --- you save\nparsing and parse analysis of a long query string. On the other hand,\nyou pay to pull the view definition from the catalogs and merge it into\nthe given query. I'd not care to hazard a guess on whether the actual\nnet cost is more or less; but in any case these costs will be swamped\nby query planning and execution, if the query is complex.\n\nIf you're concerned about reducing parse/plan overhead for repetitive\nqueries, the prepared-statement facility (new in 7.3) is what to look\nat. Views won't do much for you.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 15 Apr 2003 13:26:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views offer any performance advantage? " }, { "msg_contents": "Tom Lane wrote:\n\n>\n>There is probably some minuscule cost difference involved --- you save\n>parsing and parse analysis of a long query string. On the other hand,\n>you pay to pull the view definition from the catalogs and merge it into\n>the given query. I'd not care to hazard a guess on whether the actual\n>net cost is more or less; but in any case these costs will be swamped\n>by query planning and execution, if the query is complex.\n>\nActually, there are cases when a view can impact performance.\nIf you are joining a view, it seems to be treated as a subquery, that \nmight have a much larger result than you would like.\n\nImagine\nSELECT something\n FROM A JOIN B JOIN C ...\n WHERE A.primaryKeyFoo=1234 ...\n\n where C is a view, containing JOINs itself, I observed a query plan \n(7.3.2) like\nA JOIN B JOIN (D JOIN E)\ninstead of\nA JOIN B JOIN D JOIN E which would be much more efficient for the \nA.primaryKeyFoo restriction.\n\n", "msg_date": "Tue, 15 Apr 2003 21:55:20 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views offer any performance advantage?" 
}, { "msg_contents": "Andreas Pflug <[email protected]> writes:\n> Actually, there are cases when a view can impact performance.\n> If you are joining a view, it seems to be treated as a subquery, that \n> might have a much larger result than you would like.\n\n> Imagine\n> SELECT something\n> FROM A JOIN B JOIN C ...\n> WHERE A.primaryKeyFoo=1234 ...\n\n> where C is a view, containing JOINs itself, I observed a query plan \n> (7.3.2) like\n> A JOIN B JOIN (D JOIN E)\n> instead of\n> A JOIN B JOIN D JOIN E which would be much more efficient for the \n> A.primaryKeyFoo restriction.\n\nThis is not the view's fault though --- the same would have happened\nif you'd written explicitly\n\n\tFROM A JOIN B JOIN (D JOIN E)\n\n7.4 will be less rigid about this (with or without a view ...)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 15 Apr 2003 19:37:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views offer any performance advantage? " }, { "msg_contents": "Tom Lane wrote:\n\n>This is not the view's fault though --- the same would have happened\n>if you'd written explicitly\n>\n>\tFROM A JOIN B JOIN (D JOIN E)\n>\nThat's right, I just wanted to warn about accessive use of joins with \nviews. I noticed this in an application, where quite big views where \njoined for convenience, and the result wasn't satisfying.\n\n>\n>7.4 will be less rigid about this (with or without a view ...)\n>\nGood!\nRegards,\nAndreas\n\n", "msg_date": "Wed, 16 Apr 2003 10:45:20 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do Views offer any performance advantage?" } ]
[ { "msg_contents": "\nIn the following query postgres doesn't use the index. In the hard-coded\nversion below it does. I suppose it can't because it's possible the \"target\"\ncould have wildcards etc in them. Is there any way to indicate the postgres\nthat that won't happen? \n\nThis is going to be even more of an issue when preparsed queries happen\nbecause even in the hard coded example it will be an issue. I know in Oracle\nif you parse a query with a LIKE :1||'%' type expression it still plans to use\nthe index and that's extremely useful. I don't know what it does if there's a\n% in the parameter, it either takes the performance hit or it doesn't treat\nthem as special?\n\ndb=> explain analyze select postalcode, abs(substr(target,6,1)::integer-substr(postalcode,6,1)::integer) as dist from postalcodes, (select 'L6C2M6'::text as target) as t where postalcode like substr(target,1,5)||'%' order by dist asc limit 2;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=12182.77..12182.77 rows=2 width=42) (actual time=9226.17..9226.18 rows=2 loops=1)\n -> Sort (cost=12182.77..12186.16 rows=1359 width=42) (actual time=9226.16..9226.16 rows=2 loops=1)\n Sort Key: abs(((substr(t.target, 6, 1))::integer - (substr((postalcodes.postalcode)::text, 6, 1))::integer))\n -> Nested Loop (cost=0.00..12112.04 rows=1359 width=42) (actual time=3262.89..9205.25 rows=8 loops=1)\n Join Filter: (\"inner\".postalcode ~~ (substr(\"outer\".target, 1, 5) || '%'::text))\n -> Subquery Scan t (cost=0.00..0.01 rows=1 width=0) (actual time=0.04..0.05 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\n -> Seq Scan on postalcodes (cost=0.00..7335.69 rows=271769 width=10) (actual time=5.52..3268.74 rows=271769 loops=1)\n Total runtime: 9241.92 msec\n(9 rows)\n\ndb=> explain analyze select postalcode, abs(substr('L6C2M6',6,1)::integer-substr(postalcode,6,1)::integer) as dist from postalcodes where postalcode like substr('L6C2M6',1,5)||'%' order by dist asc limit 2;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.29..3.29 rows=1 width=10) (actual time=36.54..36.55 rows=2 loops=1)\n -> Sort (cost=3.29..3.29 rows=1 width=10) (actual time=36.53..36.54 rows=2 loops=1)\n Sort Key: abs((6 - (substr((postalcode)::text, 6, 1))::integer))\n -> Index Scan using idx_postalcodes_postalcodeon on postalcodes (cost=0.00..3.28 rows=1 width=10) (actual time=35.91..36.33 rows=8 loops=1)\n Index Cond: ((postalcode >= 'L6C2M'::bpchar) AND (postalcode < 'L6C2N'::bpchar))\n Filter: (postalcode ~~ 'L6C2M%'::text)\n Total runtime: 36.93 msec\n(7 rows)\n\n-- \ngreg\n\n", "msg_date": "15 Apr 2003 15:04:04 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": true, "msg_subject": "Using indexes for like foo% type queries when foo isn't constant (not\n\ta locale issue)" } ]
[ { "msg_contents": "Hello,\n\nIn building a schema, I'd like to know if it makes sense\nfrom a performance standpoint to use views instead of\nan object oriented structure (i.e. inherits).\n\nI would guess that the overhead of the queries against\ninherited tables is higher than queries against views,\nbut I don't know.\n\nIn the cities / capitals example below, I could make\nqueries such as:\n\nSELECT name FROM capitals;\n\nor\n\nSELECT name FROM capital_cities;\n\nBut which one would be faster? In my real world example,\nI will have one small base object table (i.e. cities in \nthe example) and many direct descendents of that base\ntable (e.g. capitals, beaches, national parks, suburbs\nin the example). This could be implemented as one\nsmall base table and with many tables inheriting from\nthe base. Or, it could be implemented as one larger \n(but not huge) lookup table with many views.\n\nWhat's the better choice from a performance standpoint?\n\nThanks!\n\n--dex\n\n\n--\n-- Schema with Inherits\n--\nCREATE TABLE cities (\n name text,\n population float,\n altitude int -- (in ft)\n );\n\n CREATE TABLE capitals (\n state char(2)\n ) INHERITS (cities);\n\n\n--\n-- Schema with View\n--\nCREATE TABLE all_cities (\n name text,\n population float,\n altitude int,\n state char(2)\n);\n\nCREATE VIEW just_cities AS SELECT \n\tall_cities.name, \n\tall_cities.population, \n\tall_cities.altitude \nFROM all_cities;\n\n-- or perhaps with a where clause, as in\nCREATE VIEW capital_cities AS SELECT \n\tall_cities.name, \n\tall_cities.population, \n\tall_cities.altitude \nFROM all_cities WHERE (all_cities.state IS NOT NULL);\n\n", "msg_date": "Tue, 15 Apr 2003 23:10:44 -0700", "msg_from": "\"dex\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is there a performance between Inherits and Views?" } ]
[ { "msg_contents": "I am wondering why it uses the O(n^2) nested loop when there is a O(N)\nmethoud using btree indexes for a merg join. I am using 7.2.1 would\nupgrading fix my problime or is it somthing else?\n\nGiven the schema:\n\ndrop table Entry_Pairs;\ncreate table Entry_Pairs (\n left_entry int REFERENCES Entry ON DELETE RESTRICT,\n right_entry int REFERENCES Entry ON DELETE RESTRICT,\n relation int NOT NULL ,\n subtract bool NOT NULL ,\n comment int NULL REFERENCES Comment ON DELETE SET NULL,\n UNIQUE (left_entry, right_entry, relation)\n);\nCREATE INDEX entry_pairs_left_index ON entry_pairs (left_entry);\nCREATE INDEX entry_pairs_right_index ON entry_pairs (right_entry);\n--\n\nYou get this\"\n\ndblex=> explain select A.left_entry from entry_pairs A, entry_pairs B\nwhere A.right_entry != B.left_entry;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=100000000.00..102876671.17 rows=97545252 width=12)\n -> Seq Scan on entry_pairs a (cost=0.00..167.77 rows=9877 width=8)\n -> Seq Scan on entry_pairs b (cost=0.00..167.77 rows=9877 width=4)\n\nEXPLAIN\n\nThat is dum. If you just walk both B-Tree indexes there is a O(n)\nsearch. I tryed to turn off netsed loops but it still did it. (the\nreason the cost is 100000000.00 is a artifact from turing off loops)\n\n-Jonathan\n\n", "msg_date": "15 Apr 2003 23:58:28 -0700", "msg_from": "Jonathan Moore <[email protected]>", "msg_from_op": true, "msg_subject": "dum query plan" }, { "msg_contents": "\nOn 15 Apr 2003, Jonathan Moore wrote:\n\n> I am wondering why it uses the O(n^2) nested loop when there is a O(N)\n> methoud using btree indexes for a merg join. I am using 7.2.1 would\n> upgrading fix my problime or is it somthing else?\n>\n> Given the schema:\n>\n> drop table Entry_Pairs;\n> create table Entry_Pairs (\n> left_entry int REFERENCES Entry ON DELETE RESTRICT,\n> right_entry int REFERENCES Entry ON DELETE RESTRICT,\n> relation int NOT NULL ,\n> subtract bool NOT NULL ,\n> comment int NULL REFERENCES Comment ON DELETE SET NULL,\n> UNIQUE (left_entry, right_entry, relation)\n> );\n> CREATE INDEX entry_pairs_left_index ON entry_pairs (left_entry);\n> CREATE INDEX entry_pairs_right_index ON entry_pairs (right_entry);\n> --\n>\n> You get this\"\n>\n> dblex=> explain select A.left_entry from entry_pairs A, entry_pairs B\n> where A.right_entry != B.left_entry;\n> NOTICE: QUERY PLAN:\n>\n> Nested Loop (cost=100000000.00..102876671.17 rows=97545252 width=12)\n> -> Seq Scan on entry_pairs a (cost=0.00..167.77 rows=9877 width=8)\n> -> Seq Scan on entry_pairs b (cost=0.00..167.77 rows=9877 width=4)\n>\n> EXPLAIN\n>\n> That is dum. If you just walk both B-Tree indexes there is a O(n)\n> search. I tryed to turn off netsed loops but it still did it. (the\n> reason the cost is 100000000.00 is a artifact from turing off loops)\n\nCan you describe the algorithm you think it should be taking with perhaps\na small set of data like say (given only left and right):\n\n(1,2)\n(3,4)\n(5,6)\n\n(I think the query should return 1,1,1,3,3,3,5,5,5 for this case)\n\n", "msg_date": "Wed, 16 Apr 2003 18:21:31 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dum query plan" }, { "msg_contents": "Jonathan Moore <[email protected]> writes:\n> I am wondering why it uses the O(n^2) nested loop when there is a O(N)\n> methoud using btree indexes for a merg join.\n\nWith an inequality for the WHERE condition? I don't think so. 
The\nexpected output is of size O(N^2), so how could the algorithm take\nless than O(N^2) steps?\n\n> dblex=> explain select A.left_entry from entry_pairs A, entry_pairs B\n> where A.right_entry != B.left_entry;\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 16 Apr 2003 23:15:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dum query plan " }, { "msg_contents": "Your WHERE clause is the reason you get O(N^2).\n\nHow about describing in words what you want.\n\nmaybe what you really want is:\n\nselect left_entry from entry_pairs A where not exists ( \n select 1 from entry_pairs B where A.right_entry = B.left_entry)\n\nJLL\n\nJonathan Moore wrote:\n> \n> I am wondering why it uses the O(n^2) nested loop when there is a O(N)\n> methoud using btree indexes for a merg join. I am using 7.2.1 would\n> upgrading fix my problime or is it somthing else?\n> \n> Given the schema:\n> \n> drop table Entry_Pairs;\n> create table Entry_Pairs (\n> left_entry int REFERENCES Entry ON DELETE RESTRICT,\n> right_entry int REFERENCES Entry ON DELETE RESTRICT,\n> relation int NOT NULL ,\n> subtract bool NOT NULL ,\n> comment int NULL REFERENCES Comment ON DELETE SET NULL,\n> UNIQUE (left_entry, right_entry, relation)\n> );\n> CREATE INDEX entry_pairs_left_index ON entry_pairs (left_entry);\n> CREATE INDEX entry_pairs_right_index ON entry_pairs (right_entry);\n> --\n> \n> You get this\"\n> \n> dblex=> explain select A.left_entry from entry_pairs A, entry_pairs B\n> where A.right_entry != B.left_entry;\n> NOTICE: QUERY PLAN:\n> \n> Nested Loop (cost=100000000.00..102876671.17 rows=97545252 width=12)\n> -> Seq Scan on entry_pairs a (cost=0.00..167.77 rows=9877 width=8)\n> -> Seq Scan on entry_pairs b (cost=0.00..167.77 rows=9877 width=4)\n> \n> EXPLAIN\n> \n> That is dum. If you just walk both B-Tree indexes there is a O(n)\n> search. I tryed to turn off netsed loops but it still did it. (the\n> reason the cost is 100000000.00 is a artifact from turing off loops)\n> \n> -Jonathan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n", "msg_date": "Thu, 17 Apr 2003 10:12:35 -0400", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dum query plan" } ]
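To see Jean-Luc's NOT EXISTS shape produce the answer Jonathan appears to be after (left-hand values that never occur on the right), here is a toy, self-contained version. The correlation is written against a.left_entry, which is an adaptation for that question, not a quote of either query above.

CREATE TEMP TABLE pairs (left_entry int, right_entry int);
INSERT INTO pairs VALUES (1, 2);
INSERT INTO pairs VALUES (2, 1);
INSERT INTO pairs VALUES (1, 5);
INSERT INTO pairs VALUES (4, 5);
INSERT INTO pairs VALUES (5, 2);

SELECT DISTINCT a.left_entry
  FROM pairs a
 WHERE NOT EXISTS (SELECT 1 FROM pairs b
                    WHERE b.right_entry = a.left_entry);
-- returns only 4: every other left_entry also appears somewhere as a right_entry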
[ { "msg_contents": "Hi,\n\nI want to ask the 'which RAID setup is best for PostgreSQL?' question again.\nI've read a large portion of the archives of this list, but generally the\nanswer is 'depends on your needs' with a few different camps.\n\nMy needs are as follows: dedicated PostgreSQL server for a website, which does\nmuch more select queries than insert/updates (although, due to a lot of\ncaching outside the database, we will be doing more updates than usual for a\nwebsite).\n\nThe machine which will be built for this is going to be something like a dual\nXeon 2.4GHz, 4GB RAM, and a SCSI hardware RAID controller with some cache RAM\nand 6-7 36GB 15K rpm disks. We have good experiences with ICP Vortex\ncontrollers, so I'll probably end up buying on of those again (the GDT8514RZ\nlooks nice: http://www.icp-vortex.com/english/product/pci/rzu320/8514rz_e.htm\n)\n\nWe normally use Debian linux with a 2.4 kernel, but we're thinking we might\nplay around with FreeBSD and see how that runs before making the final choice.\n\nThe RAID setup I have in my head is as follows:\n\n4 disks for a RAID 10 array, for the PG data area\n2 disks for a RAID 1 array, for the OS, swap (it won't swap) and, most\nimportantly, WAL files\n1 disk for hot spare\n\nRAID 1 isn't ideal for a WAL disks because of the (small) write penalty, but\nI'm not sure I want to risk losing the WAL files. As far as I know PG doesn't\nreally like losing them :) This array shouldn't see much I/O outside of the\nWAL files, since the OS and PG itself should be completely in RAM when it's\nstarted up.\n\nRAID 5 is more cost-effective for the data storage, but write-performance is\nmuch lower than RAID 10.\n\nThe hot-spare is non-negotiable, it has saved my life a number of times ;)\n\nPerformance and reliability are the prime concerns for this setup. We normally\nrun our boxes at extremely high loads because we don't have the budget we\nneed. Cost is an issue, but since our website is always growing at an insane\npace I'd rather drop some cash on a fast server now and hope to hold out till\nthe end of this year than having to rush out and buy another mediocre server\nin a few months.\n\nAm I on the right track or does anyone have any tips I could use?\n\n\nOn a side note: this box will be bought a few days or weeks from now and\ntested during a week or so before we put it in our production environment (if\neverything goes well). If anyone is interested in any benchmark results from\nit (possibly even FreeBSD vs Linux :)) that can probably be arranged.\n\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n\n", "msg_date": "Wed, 16 Apr 2003 18:26:58 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": true, "msg_subject": "the RAID question, again" } ]
[ { "msg_contents": "I now under stand that my join was rong but none of the seguestions are\nthe optimal solution to the problime. You can make this order n if you\ntry. The trick is to use a mearg join using sorted list of the unique\nkeys in each colum join. The question you are asking is what left hand\nentrys do not exist on the right. \n\nselect A.left form pairs A, pairs B where A.left != B.right;\n\n(note: my code in the first example select left form the rong table but\nit dosn't change the search.)\n\ntake the DB:\n\n(1,2)\n(2,1)\n(1,5)\n(4,5)\n(5,2)\n\nSort the colums:\nleft right\n==== =====\n 1 1\n 2 2\n 4 5\n 5 \n \nStart at the top you see that you have 1 in both columes there for you\nknow that 1 is not a answer. pop both colums. same for 2. Whe you get to\nthe top of the lists as 4, 5; you know that 4 apperas on the only in the\nleft colum as you don't see it on the right. pop the left colum. now you\nsee that 5 is on both sides so 5 is not a canadate. You are out of\noptions so you are done 4 is the only value that is on the left and only\non the left. \n\nThis methoud is order O(n) if both colums have b-tree indexes so you\ndon't have to pre sort them othere wise it is O(n*log(n)) as the sort is\nthe greatest complexity. In eathere case it is way better then O(n^2)\nfor almost any n. \n\nI have this implmented in my code by selecting each colum and then doing\nthe mearg my self more expensive then a in db join as there is pointless\ndata copys.\n\nsudo perl for the hole thing is:\n\n#!/usr/bin/not-realy-perl\n\nmy @left = select distinct left_entry from entry_pairs order by \nleft_entry;\n\nmy @right = select distinct right_entry from entry_pairs order by \nright_entry;\n\nmy @only_left;\n\nwhile (1) {\n if (not @left) {\n last; #done\n }\n\n elsif (not @right) {\n push @only_left, $left[0];\n pop @left;\n }\n\n elsif ($left[0] == $right[0]) {\n pop @left;\n pop @right;\n }\n\n elsif ($left[0] < $right[0]) {\n push @only_left, $left[0];\n pop @left;\n }\n\n elsif ($left[0] > $right[0]) {\n pop @right;\n }\n}\n\n\n\n-Jonathan\n\n", "msg_date": "16 Apr 2003 15:02:49 -0700", "msg_from": "Jonathan Moore <[email protected]>", "msg_from_op": true, "msg_subject": "dum query plan: more info." }, { "msg_contents": "On 16 Apr 2003 15:02:49 -0700, Jonathan Moore <[email protected]>\nwrote:\n>select A.left form pairs A, pairs B where A.left != B.right;\n>\n>take the DB:\n>\n>(1,2)\n>(2,1)\n>(1,5)\n>(4,5)\n>(5,2)\n>\n>[...] 4 is the only value that is on the left and only\n>on the left.\n\nBut this is not the answer to your SQL statement. The correct answer\nis:\n left\n------\n 1\n 1\n 1\n 1\n 2\n 2\n 2\n 1\n 1\n 1\n 1\n 4\n 4\n 4\n 4\n 4\n 5\n 5\n 5\n(19 rows)\n\nWhat you are looking for is more like\n\n\tSELECT left FROM pairs\n\tEXCEPT\n\tSELECT right FROM pairs;\n\n>This methoud is order O(n) if both colums have b-tree indexes so you\n>don't have to pre sort them othere wise it is O(n*log(n)) as the sort is\n>the greatest complexity. In eathere case it is way better then O(n^2)\n>for almost any n. \n\nAnd I'm sure it will not take O(n^2) time in Postgres.\n\nServus\n Manfred\n\n", "msg_date": "Thu, 17 Apr 2003 18:24:24 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dum query plan: more info." }, { "msg_contents": "Jonathan Moore <[email protected]> writes:\n> I now under stand that my join was rong but none of the seguestions are\n> the optimal solution to the problime. You can make this order n if you\n> try. 
The trick is to use a mearg join using sorted list of the unique\n> keys in each colum join. The question you are asking is what left hand\n> entrys do not exist on the right. \n\nIn that case maybe what you are after is\n\nselect a.* from a left join b on (a.left = b.right) where b.right is null;\n\nwhich is a pretty grotty hack using the outer-join rules, but should\nwork efficiently.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 17 Apr 2003 12:31:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dum query plan: more info. " } ]
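For reference, a sketch spelling out the rewrites proposed in this thread against the poster's entry_pairs table. All three return the left_entry values that never appear as a right_entry (the value 4 for the sample data above), assuming those columns contain no NULLs; the DISTINCT in the last two forms is an addition here, needed only if duplicate values are unwanted.

-- Manfred's set-difference form:
SELECT left_entry FROM entry_pairs
EXCEPT
SELECT right_entry FROM entry_pairs;

-- Tom's outer-join form: keep the rows whose left_entry finds no partner.
SELECT DISTINCT a.left_entry
FROM entry_pairs a
LEFT JOIN entry_pairs b ON a.left_entry = b.right_entry
WHERE b.right_entry IS NULL;

-- The same anti-join written with NOT EXISTS (Jean-Luc's shape, with the
-- comparison turned around to match the question as clarified here):
SELECT DISTINCT a.left_entry
FROM entry_pairs a
WHERE NOT EXISTS (SELECT 1 FROM entry_pairs b
                  WHERE b.right_entry = a.left_entry);

With the two single-column indexes from the original schema, the planner has a chance to evaluate each of these without visiting every pair of rows.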
[ { "msg_contents": "Vincent,\n\nIn my eyes the best disk I/O configuration is a balance\nof performance, price and administrative effort.\n\nYour set-up looks relatively good. Howver, price seems\nnot to be your greatest concern. Otherwise you would\nfavor RAID 5 and/or leave out the spare disk.\n\nOne improvement area may be to put all 6 disks into a\nRAID 10 group. That way you have more I/O bandwith. \n\nOne watchout is that the main memory of your machine\nmay be better than the one of your RAID controller.\nThe RAID controller has Integrated 128MB PC133 ECC\nSDRAM. You did not state what kind of memory your\nserver has.\n\nRegards,\nNikolaus\n\nOn Wed, 16 Apr 2003 18:26:58 +0200, Vincent van Leeuwen\nwrote:\n\n> \n> Hi,\n> \n> I want to ask the 'which RAID setup is best for\n> PostgreSQL?' question again.\n> I've read a large portion of the archives of this\nlist,\n> but generally the\n> answer is 'depends on your needs' with a few different\n> camps.\n> \n> My needs are as follows: dedicated PostgreSQL server\n> for a website, which does\n> much more select queries than insert/updates\n(although,\n> due to a lot of\n> caching outside the database, we will be doing more\n> updates than usual for a\n> website).\n> \n> The machine which will be built for this is going to\nbe\n> something like a dual\n> Xeon 2.4GHz, 4GB RAM, and a SCSI hardware RAID\n> controller with some cache RAM\n> and 6-7 36GB 15K rpm disks. We have good experiences\n> with ICP Vortex\n> controllers, so I'll probably end up buying on of\nthose\n> again (the GDT8514RZ\n> looks nice:\n>\nhttp://www.icp-vortex.com/english/product/pci/rzu320/8514rz_e.htm\n> )\n> \n> We normally use Debian linux with a 2.4 kernel, but\n> we're thinking we might\n> play around with FreeBSD and see how that runs before\n> making the final choice.\n> \n> The RAID setup I have in my head is as follows:\n> \n> 4 disks for a RAID 10 array, for the PG data area\n> 2 disks for a RAID 1 array, for the OS, swap (it won't\n> swap) and, most\n> importantly, WAL files\n> 1 disk for hot spare\n> \n> RAID 1 isn't ideal for a WAL disks because of the\n> (small) write penalty, but\n> I'm not sure I want to risk losing the WAL files. As\n> far as I know PG doesn't\n> really like losing them :) This array shouldn't see\n> much I/O outside of the\n> WAL files, since the OS and PG itself should be\n> completely in RAM when it's\n> started up.\n> \n> RAID 5 is more cost-effective for the data storage,\nbut\n> write-performance is\n> much lower than RAID 10.\n> \n> The hot-spare is non-negotiable, it has saved my life\na\n> number of times ;)\n> \n> Performance and reliability are the prime concerns for\n> this setup. We normally\n> run our boxes at extremely high loads because we don't\n> have the budget we\n> need. Cost is an issue, but since our website is\nalways\n> growing at an insane\n> pace I'd rather drop some cash on a fast server now\nand\n> hope to hold out till\n> the end of this year than having to rush out and buy\n> another mediocre server\n> in a few months.\n> \n> Am I on the right track or does anyone have any tips I\n> could use?\n> \n> \n> On a side note: this box will be bought a few days or\n> weeks from now and\n> tested during a week or so before we put it in our\n> production environment (if\n> everything goes well). 
If anyone is interested in any\n> benchmark results from\n> it (possibly even FreeBSD vs Linux :)) that can\n> probably be arranged.\n> \n> \n> Vincent van Leeuwen\n> Media Design - http://www.mediadesign.nl/\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to\n> [email protected])\n\n", "msg_date": "Wed, 16 Apr 2003 19:32:54 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the RAID question, again" }, { "msg_contents": "Vincent,\n\n> One watchout is that the main memory of your machine\n> may be better than the one of your RAID controller.\n> The RAID controller has Integrated 128MB PC133 ECC\n> SDRAM. You did not state what kind of memory your\n> server has.\n\nNickolaus has a good point. With a high-end Linux server, and a medium-end \nRAID card, it's sometimes faster to use Linux software RAID than harware \nraid. Not all the time, though.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 16 Apr 2003 20:20:50 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the RAID question, again" }, { "msg_contents": "On 2003-04-16 19:32:54 -0700, Nikolaus Dilger wrote:\n> One improvement area may be to put all 6 disks into a\n> RAID 10 group. That way you have more I/O bandwith. \n\nA concern I have about that setup is that a large WAL write will have to wait\nfor 6 spindles to write the data before returning instead of 2 spindles. But\nas you say it does create way more I/O bandwidth. I think I'll just test that\nwhen the box is here instead of speculating further :)\n\n> One watchout is that the main memory of your machine\n> may be better than the one of your RAID controller.\n> The RAID controller has Integrated 128MB PC133 ECC\n> SDRAM. You did not state what kind of memory your\n> server has.\n> \n\nOn 2003-04-16 20:20:50 -0700, Josh Berkus wrote:\n> Nickolaus has a good point. With a high-end Linux server, and a medium-end\n> RAID card, it's sometimes faster to use Linux software RAID than harware\n> raid. Not all the time, though.\n\nI've heard rumors that software raid performs poor when stacking raid layers\n(raid 0 on raid 1). Not sure if that's still true though. My own experiences\nwith linux software raid (raid 5 on a low-cost fileserver for personal use)\nare very good (especially in the reliability department, I've recovered from\ntwo-disk failures due to controllers hanging up with only a few percent data\nloss), although I've never been overly concerned with performance on that\nsetup so haven't really tested that.\n\nBut if this controller is medium-end, could anyone recommend a high-end RAID\ncard that has excellent linux support? One of the things I especially like\nabout ICP Vortex products is the official linux support and the excellent\nsoftware utility for monitoring and (re)configuring the raid arrays. 
Comes in\nhandy when replacing hot-spares and rebuilding failed arrays while keeping\nthe box running :)\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n\n", "msg_date": "Tue, 22 Apr 2003 14:11:50 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the RAID question, again" }, { "msg_contents": "Vincent,\n\n> But if this controller is medium-end, could anyone recommend a high-end\n> RAID card that has excellent linux support? One of the things I especially\n> like about ICP Vortex products is the official linux support and the\n> excellent software utility for monitoring and (re)configuring the raid\n> arrays. Comes in handy when replacing hot-spares and rebuilding failed\n> arrays while keeping the box running :)\n\nNo, just negative advice. Mylex support is dead until someone steps into the \nshoes of the late developer of that driver. Adaptec is only paying their \nlinux guy to do Red Hat support for their new RAID cards, so you're SOL with \nother distributions.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Tue, 22 Apr 2003 10:18:57 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the RAID question, again" }, { "msg_contents": "On Tue, 22 Apr 2003, Vincent van Leeuwen wrote:\n\n> On 2003-04-16 19:32:54 -0700, Nikolaus Dilger wrote:\n> > One improvement area may be to put all 6 disks into a\n> > RAID 10 group. That way you have more I/O bandwith. \n> \n> A concern I have about that setup is that a large WAL write will have to wait\n> for 6 spindles to write the data before returning instead of 2 spindles. But\n> as you say it does create way more I/O bandwidth. I think I'll just test that\n> when the box is here instead of speculating further :)\n\nNot in a RAID 10. Assuming the setup is:\n\nRAID0-0: disk0, disk1, disk2\nRAID0-1: disk3, disk4, disk5\nRAID1-0: RAID0-0, RAID0-1\n\nThen a write would only have to wait on two disks. Assuming the physical \nsetup is one SCSI channel for RAID0-0 and one for RAID0-1, then both \ndrives can write at the same time and your write performance is virtually \nidentical to a single drive.\n\n> On 2003-04-16 20:20:50 -0700, Josh Berkus wrote:\n> > Nickolaus has a good point. With a high-end Linux server, and a medium-end\n> > RAID card, it's sometimes faster to use Linux software RAID than harware\n> > raid. Not all the time, though.\n> \n> I've heard rumors that software raid performs poor when stacking raid layers\n> (raid 0 on raid 1). Not sure if that's still true though.\n\nI tested it and was probably the one spreading the rumors. I was testing \non Linux kernels 2.4.9 at the time on a Dual PPro - 200 with 256 Meg RAM \nand 6 Ultra Wide 4 gig SCSI drives at 10krpm. I've also tested other \nsetups.\n\nMy experience was that RAID5 and RAID1 were no faster on top of RAID0 then \non bare drives. note that I didn't test for massive parallel \nperformance, which would probably have better performance with the extra \nplatters. 
I was testing something like 4 to 10 simo connects with pgbench \nand my own queries, some large, some small.\n\n\n> My own experiences\n> with linux software raid (raid 5 on a low-cost fileserver for personal use)\n> are very good (especially in the reliability department, I've recovered from\n> two-disk failures due to controllers hanging up with only a few percent data\n> loss), although I've never been overly concerned with performance on that\n> setup so haven't really tested that.\n\nMy experience with Linux RAID is similar to yours. It's always been rock \nsolid reliable, and acutally seems more intuitive to me now than any of \nthe hardware RAID cards I've played with. Plus you can FORCE it to do \nwhat you want, whereas many cards refuse to do what you want.\n\nfor really fast RAID, look at external RAID enclosures, that take x drives \nand make them look like one great big drive. Good speed and easy to \nmanage, and to Linux it's just a big drive, so you don't need any special \ndrivers for it.\n\n", "msg_date": "Tue, 22 Apr 2003 11:29:07 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the RAID question, again" }, { "msg_contents": "We use LSI Megaraid cards for all of our servers. Their older cards are\na bit dated now, but the new Elite 1650 is a pretty nice card. The\nAdaptec cards are pretty hot, but as Josh has pointed out their\nreference driver is for RedHat. Granted, that doesn't bother us here at\nOFS because that's all we use on machine but to each their own.\n\nSincerely,\n\nWill LaShell\n\nOn Tue, 2003-04-22 at 10:18, Josh Berkus wrote:\n> Vincent,\n> \n> > But if this controller is medium-end, could anyone recommend a high-end\n> > RAID card that has excellent linux support? One of the things I especially\n> > like about ICP Vortex products is the official linux support and the\n> > excellent software utility for monitoring and (re)configuring the raid\n> > arrays. Comes in handy when replacing hot-spares and rebuilding failed\n> > arrays while keeping the box running :)\n> \n> No, just negative advice. Mylex support is dead until someone steps into the \n> shoes of the late developer of that driver. Adaptec is only paying their \n> linux guy to do Red Hat support for their new RAID cards, so you're SOL with \n> other distributions.\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])", "msg_date": "22 Apr 2003 10:44:48 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the RAID question, again" } ]
[ { "msg_contents": "Hello,\n\nIs it more efficient to use a schema with Inherits or\nschema with Views. I can see logically how to use both\nfor my case and I'm trying to make a decision. I would \nguess that the overhead of the queries against\ninherited tables is higher than queries against views,\nbut I don't know.\n\nAt the bottom of this message, I've included the cities /\ncapitals examples implemented both as schema using\ninheritance and as schema using views. \n\nUsing the example, I could make queries such as:\n\nSELECT name FROM capitals; -- capitals in inherited\n\nor\n\nSELECT name FROM capital_cities; -- capital cities is a view\n\nBut which one would be faster? In my real world example,\nI will either have one small base class table (i.e. cities in \nthe example) and many direct descendents of that base\ntable (e.g. capitals, beaches, national parks, suburbs\nin the example). Or, it could be implemented as one larger \n(but not huge) lookup table with many views against\nthat lookup table.\n\nWhat would you do?\n\nThanks!\n\n--dex\n\n\n--\n-- Schema with Inherits\n--\nCREATE TABLE cities (\n name text,\n population float,\n altitude int -- (in ft)\n );\n\n CREATE TABLE capitals (\n state char(2)\n ) INHERITS (cities);\n\n\n--\n-- Schema with View\n--\nCREATE TABLE all_cities (\n name text,\n population float,\n altitude int,\n state char(2)\n);\n\nCREATE VIEW just_cities AS SELECT \n\tall_cities.name, \n\tall_cities.population, \n\tall_cities.altitude \nFROM all_cities;\n\n-- or perhaps with a where clause, as in\nCREATE VIEW capital_cities AS SELECT \n\tall_cities.name, \n\tall_cities.population, \n\tall_cities.altitude \nFROM all_cities WHERE (all_cities.state IS NOT NULL);\n\n", "msg_date": "Thu, 17 Apr 2003 10:48:16 -0700", "msg_from": "\"dex\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of Inherits versus Views" } ]
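One way to answer this empirically rather than in the abstract, assuming both example schemas above have been created and loaded with comparable data, is to compare the plans and timings directly; the sketch below is an illustration, not part of the original post.

-- Inheritance layout: the child table is scanned directly, while a query
-- against the parent (cities) expands into an append over parent and children.
EXPLAIN ANALYZE SELECT name FROM capitals;
EXPLAIN ANALYZE SELECT name FROM cities;

-- View layout: the rewriter folds the view into a query on all_cities,
-- here a scan filtered on state IS NOT NULL.
EXPLAIN ANALYZE SELECT name FROM capital_cities;

The per-query overhead of either layout is small; the difference that usually matters is whether a query against the parent has to visit every child table, versus a single filtered scan of one larger table.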
[ { "msg_contents": "\tHi,\n\n\tIn the process of developing an API for web/perl/postrgres\ninteractions, I have come up against a peculiar problem; a rather simple\nquery, run on two relatively small tables, takes as much as 0.4 seconds\non my development system (it's a P2 266, which in this case is a good\nthing, as it exposes speed issues). I tried accomplishging the same\nthing via subqueries and joins, and both methods give me similarly bad\nresult (join query is a little slower, but only a little).\n\n\tThe queries I have tested are as follows:\n\nSELECT DISTINCT maker.* FROM maker,model WHERE maker.id=model.maker\nSELECT DISTINCT maker.* FROM maker join model ON maker.id=model.maker\n\n\tThe point of the queries is to extract only the maker rows which\nare referenced from the model table. I would happily use another way to\nachieve the same end, should anyone suggest it.\n\n\t\"maker\" has only 137 rows, \"model\" only 1233 rows. I test the\nperformance in perl, by taking time right before and after query\nexecution. Executing the queries takes anywhere between .3 and .5\nseconds, depending on some other factors (removing the 'distinct'\nkeyword from the 1st query shaves about .1 second off of the execution\ntime for example).\n\n\tThese execution times seem ridiculous. Any idea what the culprit\nmay be? I hope it's not the text fields, 'cuz those fields are\nimportant.\n\n\tBoth tables are quite simple:\n\n# \\d maker\n Table \"public.maker\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\n id | character varying(4) | not null\n fullname | character varying(20) |\n contact | character varying(20) |\n phone | character varying(15) |\n service_no | character varying(20) |\n lastuser | character varying(30) |\n comments | text |\nIndexes: maker_pkey primary key btree (id)\nTriggers: RI_ConstraintTrigger_18881,\n RI_ConstraintTrigger_18882\n\n# \\d model\n Table \"public.model\"\n Column | Type | Modifiers\n---------------+-----------------------+---------------------------------------------\n id | integer | not null default nextval('model_ids'::text)\n name | character varying(20) | not null\n maker | character varying(4) |\n type_hardware | character varying(4) |\n fullname | character varying(40) |\n spec | character varying(50) |\n lastuser | character varying(30) |\n comments | text |\n size_cap | character varying(10) |\nIndexes: model_pkey primary key btree (id),\n unique_model unique btree (name, maker, type_hardware)\nCheck constraints: \"nonempty_fullname\" (fullname > ''::character varying)\nForeign Key constraints: valid_maker FOREIGN KEY (maker) REFERENCES \\\n maker(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n valid_type FOREIGN KEY (type_hardware)\nREFERENCES type_hardware(id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n-- \n| Victor Danilchenko | Any sufficiently advanced |\n| [email protected] | technology is indistinguishable |\n| CSCF | 5-4231 | from a Perl script. |\n\n", "msg_date": "Thu, 17 Apr 2003 15:17:01 -0400 (EDT)", "msg_from": "Victor Danilchenko <[email protected]>", "msg_from_op": true, "msg_subject": "Query speed problems" }, { "msg_contents": "\tSorry, I forgot to specify software versions.\n\n\tI am running RHL 8.0 (Linux kernel 2.4.18), and postgres 7.3.\n\n-- \n| Victor Danilchenko +------------------------------------+\n| [email protected] | I don't have to outrun the bear -- |\n| CSCF | 5-4231 | I just have to outrun YOU! 
|\n\n", "msg_date": "Thu, 17 Apr 2003 15:38:37 -0400 (EDT)", "msg_from": "Victor Danilchenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query speed problems" }, { "msg_contents": "\nOn Thu, 17 Apr 2003, Victor Danilchenko wrote:\n\n> \tThe queries I have tested are as follows:\n>\n> SELECT DISTINCT maker.* FROM maker,model WHERE maker.id=model.maker\n> SELECT DISTINCT maker.* FROM maker join model ON maker.id=model.maker\n>\n> \tThe point of the queries is to extract only the maker rows which\n> are referenced from the model table. I would happily use another way to\n> achieve the same end, should anyone suggest it.\n\nWhat does explain analyze show for the query?\n\n> \t\"maker\" has only 137 rows, \"model\" only 1233 rows. I test the\n> performance in perl, by taking time right before and after query\n> execution. Executing the queries takes anywhere between .3 and .5\n> seconds, depending on some other factors (removing the 'distinct'\n> keyword from the 1st query shaves about .1 second off of the execution\n> time for example).\n\n> Column | Type | Modifiers\n> ---------------+-----------------------+---------------------------------------------\n> id | integer | not null default nextval('model_ids'::text)\n> name | character varying(20) | not null\n> maker | character varying(4) |\n> type_hardware | character varying(4) |\n> fullname | character varying(40) |\n> spec | character varying(50) |\n> lastuser | character varying(30) |\n> comments | text |\n> size_cap | character varying(10) |\n> Indexes: model_pkey primary key btree (id),\n> unique_model unique btree (name, maker, type_hardware)\n> Check constraints: \"nonempty_fullname\" (fullname > ''::character varying)\n> Foreign Key constraints: valid_maker FOREIGN KEY (maker) REFERENCES \\\n> maker(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> valid_type FOREIGN KEY (type_hardware)\n> REFERENCES type_hardware(id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\nHmm, it doesn't look to me like model.maker=<value> type queries are\nindexable with this set of things. An index on model(maker) might help.\n\n", "msg_date": "Thu, 17 Apr 2003 12:55:10 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query speed problems" }, { "msg_contents": "Victor,\n\tI'm not sure, but I think an exists might be faster for you. It wouldn't\nhave to deal with the Cartesian product of the tables.\n\nSELECT DISTINCT maker.* FROM maker WHERE exists (SELECT 1 FROM model WHERE\nmodel.maker=maker.id);\n\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Victor\nDanilchenko\nSent: Thursday, April 17, 2003 12:17 PM\nTo: [email protected]\nSubject: [PERFORM] Query speed problems\n\n\n\tHi,\n\n\tIn the process of developing an API for web/perl/postrgres\ninteractions, I have come up against a peculiar problem; a rather simple\nquery, run on two relatively small tables, takes as much as 0.4 seconds\non my development system (it's a P2 266, which in this case is a good\nthing, as it exposes speed issues). 
I tried accomplishging the same\nthing via subqueries and joins, and both methods give me similarly bad\nresult (join query is a little slower, but only a little).\n\n\tThe queries I have tested are as follows:\n\nSELECT DISTINCT maker.* FROM maker,model WHERE maker.id=model.maker\nSELECT DISTINCT maker.* FROM maker join model ON maker.id=model.maker\n\n\tThe point of the queries is to extract only the maker rows which\nare referenced from the model table. I would happily use another way to\nachieve the same end, should anyone suggest it.\n\n\t\"maker\" has only 137 rows, \"model\" only 1233 rows. I test the\nperformance in perl, by taking time right before and after query\nexecution. Executing the queries takes anywhere between .3 and .5\nseconds, depending on some other factors (removing the 'distinct'\nkeyword from the 1st query shaves about .1 second off of the execution\ntime for example).\n\n\tThese execution times seem ridiculous. Any idea what the culprit\nmay be? I hope it's not the text fields, 'cuz those fields are\nimportant.\n\n\tBoth tables are quite simple:\n\n# \\d maker\n Table \"public.maker\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\n id | character varying(4) | not null\n fullname | character varying(20) |\n contact | character varying(20) |\n phone | character varying(15) |\n service_no | character varying(20) |\n lastuser | character varying(30) |\n comments | text |\nIndexes: maker_pkey primary key btree (id)\nTriggers: RI_ConstraintTrigger_18881,\n RI_ConstraintTrigger_18882\n\n# \\d model\n Table \"public.model\"\n Column | Type | Modifiers\n---------------+-----------------------+------------------------------------\n---------\n id | integer | not null default\nnextval('model_ids'::text)\n name | character varying(20) | not null\n maker | character varying(4) |\n type_hardware | character varying(4) |\n fullname | character varying(40) |\n spec | character varying(50) |\n lastuser | character varying(30) |\n comments | text |\n size_cap | character varying(10) |\nIndexes: model_pkey primary key btree (id),\n unique_model unique btree (name, maker, type_hardware)\nCheck constraints: \"nonempty_fullname\" (fullname > ''::character varying)\nForeign Key constraints: valid_maker FOREIGN KEY (maker) REFERENCES \\\n maker(id) ON UPDATE NO ACTION ON DELETE NO\nACTION,\n valid_type FOREIGN KEY (type_hardware)\nREFERENCES type_hardware(id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n--\n| Victor Danilchenko | Any sufficiently advanced |\n| [email protected] | technology is indistinguishable |\n| CSCF | 5-4231 | from a Perl script. |\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Thu, 17 Apr 2003 12:59:07 -0700", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query speed problems" }, { "msg_contents": "On Thu, 17 Apr 2003, Peter Darley wrote:\n\n>Victor,\n>\tI'm not sure, but I think an exists might be faster for you. It wouldn't\n>have to deal with the Cartesian product of the tables.\n>\n>SELECT DISTINCT maker.* FROM maker WHERE exists (SELECT 1 FROM model WHERE\n>model.maker=maker.id);\n\n\tThat was indeed significantly faster. *very* significantly\nfaster.\n\n\tAs you may guess, I am an SQL newbie, and working my way through\nthe language. 
I figured there would be a faster way to do what I was\ndoing, but sunqueries or joins was the only way I could figure out.\n\n\tAgain, thanks for the helpful reply, and for your promptness. I\nstill want to figure out why the subquery version was taking so damned\nlong, but it's nice to have a working fast solution.\n\n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]]On Behalf Of Victor\n>Danilchenko\n>Sent: Thursday, April 17, 2003 12:17 PM\n>To: [email protected]\n>Subject: [PERFORM] Query speed problems\n>\n>\n>\tHi,\n>\n>\tIn the process of developing an API for web/perl/postrgres\n>interactions, I have come up against a peculiar problem; a rather simple\n>query, run on two relatively small tables, takes as much as 0.4 seconds\n>on my development system (it's a P2 266, which in this case is a good\n>thing, as it exposes speed issues). I tried accomplishging the same\n>thing via subqueries and joins, and both methods give me similarly bad\n>result (join query is a little slower, but only a little).\n>\n>\tThe queries I have tested are as follows:\n>\n>SELECT DISTINCT maker.* FROM maker,model WHERE maker.id=model.maker\n>SELECT DISTINCT maker.* FROM maker join model ON maker.id=model.maker\n>\n>\tThe point of the queries is to extract only the maker rows which\n>are referenced from the model table. I would happily use another way to\n>achieve the same end, should anyone suggest it.\n>\n>\t\"maker\" has only 137 rows, \"model\" only 1233 rows. I test the\n>performance in perl, by taking time right before and after query\n>execution. Executing the queries takes anywhere between .3 and .5\n>seconds, depending on some other factors (removing the 'distinct'\n>keyword from the 1st query shaves about .1 second off of the execution\n>time for example).\n>\n>\tThese execution times seem ridiculous. Any idea what the culprit\n>may be? 
I hope it's not the text fields, 'cuz those fields are\n>important.\n>\n>\tBoth tables are quite simple:\n>\n># \\d maker\n> Table \"public.maker\"\n> Column | Type | Modifiers\n>------------+-----------------------+-----------\n> id | character varying(4) | not null\n> fullname | character varying(20) |\n> contact | character varying(20) |\n> phone | character varying(15) |\n> service_no | character varying(20) |\n> lastuser | character varying(30) |\n> comments | text |\n>Indexes: maker_pkey primary key btree (id)\n>Triggers: RI_ConstraintTrigger_18881,\n> RI_ConstraintTrigger_18882\n>\n># \\d model\n> Table \"public.model\"\n> Column | Type | Modifiers\n>---------------+-----------------------+------------------------------------\n>---------\n> id | integer | not null default\n>nextval('model_ids'::text)\n> name | character varying(20) | not null\n> maker | character varying(4) |\n> type_hardware | character varying(4) |\n> fullname | character varying(40) |\n> spec | character varying(50) |\n> lastuser | character varying(30) |\n> comments | text |\n> size_cap | character varying(10) |\n>Indexes: model_pkey primary key btree (id),\n> unique_model unique btree (name, maker, type_hardware)\n>Check constraints: \"nonempty_fullname\" (fullname > ''::character varying)\n>Foreign Key constraints: valid_maker FOREIGN KEY (maker) REFERENCES \\\n> maker(id) ON UPDATE NO ACTION ON DELETE NO\n>ACTION,\n> valid_type FOREIGN KEY (type_hardware)\n>REFERENCES type_hardware(id) ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n>--\n>| Victor Danilchenko | Any sufficiently advanced |\n>| [email protected] | technology is indistinguishable |\n>| CSCF | 5-4231 | from a Perl script. |\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n-- \n| Victor Danilchenko | Curiosity was framed; |\n| [email protected] | Ignorance killed the cat. |\n| CSCF | 5-4231 | -- Anonymous |\n\n", "msg_date": "Thu, 17 Apr 2003 16:24:17 -0400 (EDT)", "msg_from": "Victor Danilchenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query speed problems" }, { "msg_contents": "On Thu, 17 Apr 2003, Stephan Szabo wrote:\n\n>\n>On Thu, 17 Apr 2003, Victor Danilchenko wrote:\n>\n>> \tThe queries I have tested are as follows:\n>>\n>> SELECT DISTINCT maker.* FROM maker,model WHERE maker.id=model.maker\n>> SELECT DISTINCT maker.* FROM maker join model ON maker.id=model.maker\n>>\n>> \tThe point of the queries is to extract only the maker rows which\n>> are referenced from the model table. 
I would happily use another way to\n>> achieve the same end, should anyone suggest it.\n>\n>What does explain analyze show for the query?\n\n# explain analyze SELECT DISTINCT * FROM maker WHERE id=model.maker;\nNOTICE: Adding missing FROM-clause entry for table \"model\"\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=230.58..255.24 rows=123 width=171) (actual time=238.20..293.21 rows=128 loops=1)\n -> Sort (cost=230.58..233.66 rows=1233 width=171) (actual time=238.19..241.07 rows=1233 loops=1)\n Sort Key: maker.id, maker.fullname, maker.contact, maker.phone, maker.service_no, maker.lastuser, maker.comments\n -> Merge Join (cost=0.00..167.28 rows=1233 width=171) (actual time=0.27..81.49 rows=1233 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".maker)\n -> Index Scan using maker_pkey on maker (cost=0.00..52.00 rows=1000 width=164) (actual time=0.11..4.29 rows=137 loops=1)\n -> Index Scan using makers on model (cost=0.00..94.28 rows=1233 width=7) (actual time=0.04..27.34 rows=1233 loops=1)\n Total runtime: 295.30 msec\n(8 rows)\n\n\tFollowing a suggestion sent in private mail, I have created an\nindex for model.maker column:\n\n# create index model_maker on model(maker);\n\n\tbut that doesn't seem to have made an appreciable difference in\nperformance -- it's only about .05 seconds more than the above number if\nI drop the index.\n\n\tMany thanks for your help.\n\n>> \t\"maker\" has only 137 rows, \"model\" only 1233 rows. I test the\n>> performance in perl, by taking time right before and after query\n>> execution. Executing the queries takes anywhere between .3 and .5\n>> seconds, depending on some other factors (removing the 'distinct'\n>> keyword from the 1st query shaves about .1 second off of the execution\n>> time for example).\n>\n>> Column | Type | Modifiers\n>> ---------------+-----------------------+---------------------------------------------\n>> id | integer | not null default nextval('model_ids'::text)\n>> name | character varying(20) | not null\n>> maker | character varying(4) |\n>> type_hardware | character varying(4) |\n>> fullname | character varying(40) |\n>> spec | character varying(50) |\n>> lastuser | character varying(30) |\n>> comments | text |\n>> size_cap | character varying(10) |\n>> Indexes: model_pkey primary key btree (id),\n>> unique_model unique btree (name, maker, type_hardware)\n>> Check constraints: \"nonempty_fullname\" (fullname > ''::character varying)\n>> Foreign Key constraints: valid_maker FOREIGN KEY (maker) REFERENCES \\\n>> maker(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n>> valid_type FOREIGN KEY (type_hardware)\n>> REFERENCES type_hardware(id) ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n>Hmm, it doesn't look to me like model.maker=<value> type queries are\n>indexable with this set of things. An index on model(maker) might help.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n\n-- \n| Victor Danilchenko | Curiosity was framed; |\n| [email protected] | Ignorance killed the cat. 
|\n| CSCF | 5-4231 | -- Anonymous |\n\n", "msg_date": "Thu, 17 Apr 2003 16:29:57 -0400 (EDT)", "msg_from": "Victor Danilchenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query speed problems" }, { "msg_contents": "On Thu, 17 Apr 2003, Victor Danilchenko wrote:\n\n> Unique (cost=230.58..255.24 rows=123 width=171) (actual\n> time=238.20..293.21 rows=128 loops=1)\n> -> Sort (cost=230.58..233.66 rows=1233 width=171) (actual\n> time=238.19..241.07 rows=1233 loops=1)\n> Sort Key: maker.id, maker.fullname, maker.contact,\n> maker.phone, maker.service_no, maker.lastuser, maker.comments\n> -> Merge Join (cost=0.00..167.28 rows=1233 width=171) (actual\n> time=0.27..81.49 rows=1233 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".maker)\n> -> Index Scan using maker_pkey on maker\n> (cost=0.00..52.00 rows=1000 width=164) (actual time=0.11..4.29\n> rows=137 loops=1)\n> -> Index Scan using makers on model (cost=0.00..94.28\n> rows=1233 width=7) (actual time=0.04..27.34 rows=1233 loops=1)\n> Total runtime: 295.30 msec\n> (8 rows)\n\nHmm, well, for this version, it looks like most of the time is probably\ngoing into the sort. I wonder if raising sort_mem would help this version\nof the query (try a set sort_mem=8192; before running the query). This\nisn't likely to get the time below like 160 msec though.\n\n> \tFollowing a suggestion sent in private mail, I have created an\n> index for model.maker column:\n>\n> # create index model_maker on model(maker);\n>\n> \tbut that doesn't seem to have made an appreciable difference in\n> performance -- it's only about .05 seconds more than the above number if\n> I drop the index.\nYeah, it looks like it's already using an index, but I didn't see that\nindex in the list of indexes on the table in the original mail, wierd.\n\n", "msg_date": "Thu, 17 Apr 2003 14:08:51 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query speed problems" } ]
[ { "msg_contents": "Victor,\n\nWhat is the issue? You get sub second response time.\nWhy waste your time trying to make it faster?\nIf you have a query that runs serveral minutes or hours\nthen its worthwhile tuning. Or if your query gets\nexecuted several thausend times a day.\n\nRegards,\nNikolaus\n\nOn Thu, 17 Apr 2003 15:17:01 -0400 (EDT), Victor\nDanilchenko wrote:\n\n> \n> \tHi,\n> \n> \tIn the process of developing an API for\n> web/perl/postrgres\n> interactions, I have come up against a peculiar\n> problem; a rather simple\n> query, run on two relatively small tables, takes as\n> much as 0.4 seconds\n> on my development system (it's a P2 266, which in this\n> case is a good\n> thing, as it exposes speed issues). I tried\n> accomplishging the same\n> thing via subqueries and joins, and both methods give\n> me similarly bad\n> result (join query is a little slower, but only a\n> little).\n> \n> \tThe queries I have tested are as follows:\n> \n> SELECT DISTINCT maker.* FROM maker,model WHERE\n> maker.id=model.maker\n> SELECT DISTINCT maker.* FROM maker join model ON\n> maker.id=model.maker\n> \n> \tThe point of the queries is to extract only the maker\n> rows which\n> are referenced from the model table. I would happily\n> use another way to\n> achieve the same end, should anyone suggest it.\n> \n> \t\"maker\" has only 137 rows, \"model\" only 1233 rows. I\n> test the\n> performance in perl, by taking time right before and\n> after query\n> execution. Executing the queries takes anywhere\nbetween\n> .3 and .5\n> seconds, depending on some other factors (removing the\n> 'distinct'\n> keyword from the 1st query shaves about .1 second off\n> of the execution\n> time for example).\n> \n> \tThese execution times seem ridiculous. Any idea what\n> the culprit\n> may be? I hope it's not the text fields, 'cuz those\n> fields are\n> important.\n> \n> \tBoth tables are quite simple:\n> \n> # \\d maker\n> Table \"public.maker\"\n> Column | Type | Modifiers\n> ------------+-----------------------+-----------\n> id | character varying(4) | not null\n> fullname | character varying(20) |\n> contact | character varying(20) |\n> phone | character varying(15) |\n> service_no | character varying(20) |\n> lastuser | character varying(30) |\n> comments | text |\n> Indexes: maker_pkey primary key btree (id)\n> Triggers: RI_ConstraintTrigger_18881,\n> RI_ConstraintTrigger_18882\n> \n> # \\d model\n> Table \"public.model\"\n> Column | Type | \n \n> Modifiers\n>\n---------------+-----------------------+---------------------------------------------\n> id | integer | not null\n> default nextval('model_ids'::text)\n> name | character varying(20) | not null\n> maker | character varying(4) |\n> type_hardware | character varying(4) |\n> fullname | character varying(40) |\n> spec | character varying(50) |\n> lastuser | character varying(30) |\n> comments | text |\n> size_cap | character varying(10) |\n> Indexes: model_pkey primary key btree (id),\n> unique_model unique btree (name, maker,\n> type_hardware)\n> Check constraints: \"nonempty_fullname\" (fullname >\n> ''::character varying)\n> Foreign Key constraints: valid_maker FOREIGN KEY\n> (maker) REFERENCES \\\n> maker(id) ON UPDATE NO\n> ACTION ON DELETE NO ACTION,\n> valid_type FOREIGN KEY\n> (type_hardware)\n> REFERENCES type_hardware(id) ON UPDATE NO ACTION ON\n> DELETE NO ACTION\n> \n> -- \n> | Victor Danilchenko | Any sufficiently advanced \n \n> |\n> | [email protected] | technology is\n> indistinguishable |\n> | CSCF | 5-4231 | from a Perl script. 
\n \n> |\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to\n> [email protected])\n\n", "msg_date": "Thu, 17 Apr 2003 18:26:12 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query speed problems" }, { "msg_contents": "On Thu, 17 Apr 2003, Nikolaus Dilger wrote:\n\n>Victor,\n>\n>What is the issue? You get sub second response time.\n\n\tThe issue is that the query is a part of *user interface*, as I\nwrote in my original message; and there is a small number of such\nqueries (about 3) that run per each user action. A second-long wait in\n*UI* is unacceptable -- people tend to find even third-of-a-second wait\nto be annoying. UI interactions should be so fast as to appear nearly\ninstant.\n\n>Why waste your time trying to make it faster?\n\n\tWell, there's also the learning aspect of it -- this is my first\nmajor SQL project, and I am trying to understand as much as I can about\nunder-the-surface stuff. Thanks to Peter Darley, I already have a fast\nsolution -- now I simply want to understand more about the performance\nissues inherent in reverse-lookup subqueries.\n\n>If you have a query that runs serveral minutes or hours\n>then its worthwhile tuning. Or if your query gets\n>executed several thausend times a day.\n\n-- \n| Victor Danilchenko | Of course my password is the same as |\n| [email protected] | my pet's name. My macaw's name was |\n| CSCF | 5-4231 | Q47pY!3, but I change it every 90 days. |\n\n", "msg_date": "Fri, 18 Apr 2003 11:01:18 -0400 (EDT)", "msg_from": "Victor Danilchenko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query speed problems" } ]
[ { "msg_contents": "I'm using 7.3.2 on Linux, with a decent amount of muscle behind it\n(1.5 GHz PPro CPU, 1G mem, 20M/sec disks, xlog on different disk than\ndata).\n\nI've got a database that has several foreign keys, and I'm copying a\nbunch of data from an MS-SQL server into it via Perl DBI. I noticed\nthat inserts into this database are very slow, on the order of 100 per\nsecond on this hardware. All the inserts are happening in a single\ntransaction. The postmaster I'm connected to appears to be CPU\nlimited, as it's pegging the CPU at a constant 85 percent or more.\n\nI have no problem with that under normal circumstances (i.e., the\nforeign key constraints are actively being enforced): it may well be\nthe nature of foreign keys, but the problem is this: all the keys are\nDEFERRABLE INITIALLY DEFERRED and, on top of that, the Perl program\nwill SET CONSTRAINTS ALL DEFERRED at the beginning of the transaction.\n\nIf I remove all the foreign key constraints, my performance goes up to\n700 inserts per second!\n\nWhy isn't the insert performance with all the constraints deferred\napproximating that of the performance I get without the foreign keys??\nIf anything, I should get a big delay at transaction commit time while\nall the foreign key constraints are checked (and, indeed, I get that\ntoo), but the performance during the transaction prior to the commit\nshould be the same as it is without the foreign key constraints.\n\nIt's almost as if the foreign key constraints are being invoked and\nthe results ignored during the inserts...\n\nIn essence, this smells like a bug to me, but I don't know enough\nabout the internals to really call it that.\n\n\nAny ideas on what can be done about this?\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Thu, 17 Apr 2003 22:11:33 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Foreign key performance" }, { "msg_contents": "On Thu, 17 Apr 2003, Kevin Brown wrote:\n\n> I have no problem with that under normal circumstances (i.e., the\n> foreign key constraints are actively being enforced): it may well be\n> the nature of foreign keys, but the problem is this: all the keys are\n> DEFERRABLE INITIALLY DEFERRED and, on top of that, the Perl program\n> will SET CONSTRAINTS ALL DEFERRED at the beginning of the transaction.\n>\n> If I remove all the foreign key constraints, my performance goes up to\n> 700 inserts per second!\n>\n> Why isn't the insert performance with all the constraints deferred\n> approximating that of the performance I get without the foreign keys??\n\nIt appears (from some not terribly scientific experiments - see below)\nthat it's likely to be related to managing the deferred trigger queue\ngiven that in my case at least running the constraints non-deferred was\nnegligible in comparison.\n\nOn batch inserts to three tables each with a foreign key to a table\ncontaining one row (and inserts of lots of that value), I saw a ratio of\napproximately 1:1.7:7 for normal inserts:non-deferred fk:deferred fk on my\n7.4 dev server.\n\n", "msg_date": "Thu, 17 Apr 2003 22:30:45 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> It appears (from some not terribly scientific experiments - see below)\n> that it's likely to be related to managing the deferred trigger queue\n> given that in my case at least running the constraints 
non-deferred was\n> negligible in comparison.\n\nAt one time the deferred-trigger queue had an O(N^2) behavioral problem\nfor large N = number of pending trigger events. But I thought we'd\nfixed that. What's the test case exactly? Can you get a profile with\ngprof?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 18 Apr 2003 02:06:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "On Fri, 18 Apr 2003, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > It appears (from some not terribly scientific experiments - see below)\n> > that it's likely to be related to managing the deferred trigger queue\n> > given that in my case at least running the constraints non-deferred was\n> > negligible in comparison.\n>\n> At one time the deferred-trigger queue had an O(N^2) behavioral problem\n> for large N = number of pending trigger events. But I thought we'd\n> fixed that. What's the test case exactly? Can you get a profile with\n> gprof?\n\nI'm going to tomorrow hopefully - but it looks to me that we fixed one, but\npossibly not another place where we read through the list unnecessarily\nAFAICS. I think deferredTriggerInvokeEvents (when called with\nimmediate_only = true) is going to go through the entire list looking for\nimmediate triggers to fire after each statement. However, excepting set\nconstraints, any immediate triggers for any event added prior to this\nstatement will by necessity have already been run unless I'm missing\nsomething, which means that we're often looking through entries that\naren't going to have any triggers to run now in any case.\n\nKeeping a pointer to the end of the list as of last statement and going\nthrough the list from there cut the time for the deferred case in half in\nmy simple test (about 3.3x the no fk and just under 2x the immediate).\n\n", "msg_date": "Thu, 17 Apr 2003 23:25:20 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "\n[Not sure this really is relevant for -performance at this point]\n\nOn Thu, 17 Apr 2003, Stephan Szabo wrote:\n\n> On Fri, 18 Apr 2003, Tom Lane wrote:\n>\n> > Stephan Szabo <[email protected]> writes:\n> > > It appears (from some not terribly scientific experiments - see below)\n> > > that it's likely to be related to managing the deferred trigger queue\n> > > given that in my case at least running the constraints non-deferred was\n> > > negligible in comparison.\n> >\n> > At one time the deferred-trigger queue had an O(N^2) behavioral problem\n> > for large N = number of pending trigger events. But I thought we'd\n> > fixed that. What's the test case exactly? Can you get a profile with\n> > gprof?\n>\n> I'm going to tomorrow hopefully - but it looks to me that we fixed one, but\n\nArgh. I'm getting that state where gprof returns all 0s for times. I'm\npretty sure this has come up before along with how to get it to work, but\nI couldn't find it in the archives. Someday I'll learn how to use gprof. :(\n\nIn any case, the call list seemed reasonable. It's currently doing O(n^2)\ncalls to MemoryContextReset and deferredTriggerCheckState in InvokeEvents\nI don't see anything else that's at that kind of number of calls (50\nmillion calls for a backend that's only done 10000 inserts stands out a\nbit). 
Going only from last statement seems to make it linear (I think my\nattempt is checking 1 too many trigger values, need to change that\nprobably).\n\n", "msg_date": "Fri, 18 Apr 2003 07:47:15 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> Argh. I'm getting that state where gprof returns all 0s for times. I'm\n> pretty sure this has come up before along with how to get it to work, but\n> I couldn't find it in the archives. Someday I'll learn how to use gprof. :(\n\nYou're on Linux? You need to compile postmaster.c with -DLINUX_PROFILE.\nBut the call counts do sound pretty damning.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 18 Apr 2003 10:51:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "\nOn Fri, 18 Apr 2003, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > Argh. I'm getting that state where gprof returns all 0s for times. I'm\n> > pretty sure this has come up before along with how to get it to work, but\n> > I couldn't find it in the archives. Someday I'll learn how to use gprof. :(\n>\n> You're on Linux? You need to compile postmaster.c with -DLINUX_PROFILE.\n\nYep, thanks. :)\n\n> But the call counts do sound pretty damning.\n\nYeah, but even with my hack last night it was still appreciably slower\nthan immediate constraints. Comparing the call counts in that function\nfor the immediate versus deferred(hacked) weren't giving me a good idea of\nwhere that time was going.\n\n", "msg_date": "Fri, 18 Apr 2003 08:12:39 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "On Fri, 18 Apr 2003, Stephan Szabo wrote:\n\n> On Fri, 18 Apr 2003, Tom Lane wrote:\n>\n> > But the call counts do sound pretty damning.\n>\n> Yeah, but even with my hack last night it was still appreciably slower\n> than immediate constraints. Comparing the call counts in that function\n> for the immediate versus deferred(hacked) weren't giving me a good idea of\n> where that time was going.\n\nThis last was due to assert checking I think. AllocSetCheck was the big\ntime waster on the hacked deferred case. Turning off assert checking I\nget:\n\nMedian over 3 100000 inserts in one transaction (excepting the original\ncode which took a really long time so I ran it once) from time psql ...\n\nNo Fk 24.14s\nImmediate FK 42.80s\nOriginal Deferred FK 1862.06s\nHacked Deferred FK 35.30s\n\nThe hack was just the keeping around the list pointer from the last run\nthrough (see attached - passed simple fk tests and regression, but there\nmight be problems I don't see). 
Looking at the code, I also wonder if we\nwould get some gain by not allocating the per_tuple_context at the\nbeginning but only when a non-deferred constraint is found since otherwise\nwe're creating and destroying the context and possibly never using it.\nThe cost would presumably be some boolean tests inside the inner loop.", "msg_date": "Sat, 19 Apr 2003 12:03:02 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> The hack was just the keeping around the list pointer from the last run\n> through (see attached - passed simple fk tests and regression, but there\n> might be problems I don't see).\n\nShouldn't this patch update the comment in deferredTriggerInvokeEvents\n(c. line 1860 in cvs tip)?\n\n> Looking at the code, I also wonder if we\n> would get some gain by not allocating the per_tuple_context at the\n> beginning but only when a non-deferred constraint is found since otherwise\n> we're creating and destroying the context and possibly never using it.\n\nI doubt it's worth worrying over. Creation/destruction of a never-used\nmemory context is pretty cheap, I think.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 19 Apr 2003 16:58:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "On Sat, 19 Apr 2003, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > The hack was just the keeping around the list pointer from the last run\n> > through (see attached - passed simple fk tests and regression, but there\n> > might be problems I don't see).\n>\n> Shouldn't this patch update the comment in deferredTriggerInvokeEvents\n> (c. line 1860 in cvs tip)?\n\nProbably, since the second part of that is basically what this is. I'll\nupdate and send updated patch tomorrow.\n\n> > Looking at the code, I also wonder if we\n> > would get some gain by not allocating the per_tuple_context at the\n> > beginning but only when a non-deferred constraint is found since otherwise\n> > we're creating and destroying the context and possibly never using it.\n>\n> I doubt it's worth worrying over. Creation/destruction of a never-used\n> memory context is pretty cheap, I think.\n\nOkay, sounds good enough for me. :)\n\n", "msg_date": "Sat, 19 Apr 2003 23:56:56 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "On Sat, 19 Apr 2003, Stephan Szabo wrote:\n\n> On Sat, 19 Apr 2003, Tom Lane wrote:\n>\n> > Stephan Szabo <[email protected]> writes:\n> > > The hack was just the keeping around the list pointer from the last run\n> > > through (see attached - passed simple fk tests and regression, but there\n> > > might be problems I don't see).\n> >\n> > Shouldn't this patch update the comment in deferredTriggerInvokeEvents\n> > (c. line 1860 in cvs tip)?\n>\n> Probably, since the second part of that is basically what this is. I'll\n> update and send updated patch tomorrow.\n\nOkay, this changes the second paragraph of that comment. 
I left in the\ncomment that's really similar next to where I actually do the selection of\nwhich start point to use.", "msg_date": "Sun, 20 Apr 2003 09:04:10 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> Okay, this changes the second paragraph of that comment. I left in the\n> comment that's really similar next to where I actually do the selection of\n> which start point to use.\n\nThis had a bit of a problem yet: the loop in deferredTriggerInvokeEvents\nexpects 'prev_event' to point to the list entry just before 'event'.\nA nice byproduct of fixing that is we don't uselessly rescan the last list\nentry. I also tried to improve the comments a little. You can see what\nI actually applied at\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/commands/trigger.c\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 20 Apr 2003 13:09:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "Tom Lane wrote:\n> Stephan Szabo <[email protected]> writes:\n> > Okay, this changes the second paragraph of that comment. I left in the\n> > comment that's really similar next to where I actually do the selection of\n> > which start point to use.\n> \n> This had a bit of a problem yet: the loop in deferredTriggerInvokeEvents\n> expects 'prev_event' to point to the list entry just before 'event'.\n> A nice byproduct of fixing that is we don't uselessly rescan the last list\n> entry. I also tried to improve the comments a little. You can see what\n> I actually applied at\n>\n> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/commands/trigger.c\n\nAny chance of backporting these changes to 7_3_STABLE (when you're\nsatisfied they don't break anything)? Just looking at the CVS log for\ntrigger.c, it appears there have been enough changes since then that\nit might not be easy to do (and since it's not necessarily a \"bug fix\"\nas such, it might not qualify for backporting to a stable version).\n\nEven if it's not something that can be put into another release of\n7.3, it would certainly be useful to me. It might be useful to enough\npeople to justify releasing it as a patch on -patches, if nothing\nelse.\n\nI'd do it myself but I don't understand the trigger code at all (and\nif there's any documentation you can point me to that describes the\nvarious functions and supporting data structures in trigger.c, that\nwould help a lot), and I'd rather not touch something like that until\nI understand it thoroughly.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Sun, 20 Apr 2003 17:09:34 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Foreign key performance" }, { "msg_contents": "Kevin Brown <[email protected]> writes:\n> Any chance of backporting these changes to 7_3_STABLE (when you're\n> satisfied they don't break anything)? Just looking at the CVS log for\n> trigger.c, it appears there have been enough changes since then that\n> it might not be easy to do (and since it's not necessarily a \"bug fix\"\n> as such, it might not qualify for backporting to a stable version).\n\nI'd be pretty hesitant to make such a change in the stable branch ---\nat least not without a lot of testing. If you and others want to\nprovide such testing, go to it. 
The patch appears to apply cleanly\nenough to 7.3, but here's an adjusted patch if fuzz makes you nervous...\n\n\t\t\tregards, tom lane\n\n*** trigger.c~\tSun Apr 20 20:28:55 2003\n--- trigger.c\tSun Apr 20 20:29:13 2003\n***************\n*** 1461,1472 ****\n--- 1461,1478 ----\n * Because this can grow pretty large, we don't use separate List nodes,\n * but instead thread the list through the dte_next fields of the member\n * nodes. Saves just a few bytes per entry, but that adds up.\n+ * \n+ * deftrig_events_imm holds the tail pointer as of the last \n+ * deferredTriggerInvokeEvents call; we can use this to avoid rescanning\n+ * entries unnecessarily. It is NULL if deferredTriggerInvokeEvents\n+ * hasn't run since the last state change.\n *\n * XXX Need to be able to shove this data out to a file if it grows too\n *\t large...\n * ----------\n */\n static DeferredTriggerEvent deftrig_events;\n+ static DeferredTriggerEvent deftrig_events_imm;\n static DeferredTriggerEvent deftrig_event_tail;\n \n \n***************\n*** 1680,1686 ****\n deferredTriggerInvokeEvents(bool immediate_only)\n {\n \tDeferredTriggerEvent event,\n! \t\t\t\tprev_event = NULL;\n \tMemoryContext per_tuple_context;\n \tRelation\trel = NULL;\n \tTriggerDesc *trigdesc = NULL;\n--- 1686,1692 ----\n deferredTriggerInvokeEvents(bool immediate_only)\n {\n \tDeferredTriggerEvent event,\n! \t\t\t\tprev_event;\n \tMemoryContext per_tuple_context;\n \tRelation\trel = NULL;\n \tTriggerDesc *trigdesc = NULL;\n***************\n*** 1692,1704 ****\n \t * are going to discard the whole event queue on return anyway, so no\n \t * need to bother with \"retail\" pfree's.\n \t *\n! \t * In a scenario with many commands in a transaction and many\n! \t * deferred-to-end-of-transaction triggers, it could get annoying to\n! \t * rescan all the deferred triggers at each command end. To speed this\n! \t * up, we could remember the actual end of the queue at EndQuery and\n! \t * examine only events that are newer. On state changes we simply\n! \t * reset the saved position to the beginning of the queue and process\n! \t * all events once with the new states.\n \t */\n \n \t/* Make a per-tuple memory context for trigger function calls */\n--- 1698,1709 ----\n \t * are going to discard the whole event queue on return anyway, so no\n \t * need to bother with \"retail\" pfree's.\n \t *\n! \t * If immediate_only is true, we need only scan from where the end of\n! \t * the queue was at the previous deferredTriggerInvokeEvents call;\n! \t * any non-deferred events before that point are already fired.\n! \t * (But if the deferral state changes, we must reset the saved position\n! \t * to the beginning of the queue, so as to process all events once with\n! \t * the new states. See DeferredTriggerSetState.)\n \t */\n \n \t/* Make a per-tuple memory context for trigger function calls */\n***************\n*** 1709,1715 ****\n \t\t\t\t\t\t\t ALLOCSET_DEFAULT_INITSIZE,\n \t\t\t\t\t\t\t ALLOCSET_DEFAULT_MAXSIZE);\n \n! \tevent = deftrig_events;\n \twhile (event != NULL)\n \t{\n \t\tbool\t\tstill_deferred_ones = false;\n--- 1714,1735 ----\n \t\t\t\t\t\t\t ALLOCSET_DEFAULT_INITSIZE,\n \t\t\t\t\t\t\t ALLOCSET_DEFAULT_MAXSIZE);\n \n! \t/*\n! \t * If immediate_only is true, then the only events that could need firing\n! \t * are those since deftrig_events_imm. (But if deftrig_events_imm is\n! \t * NULL, we must scan the entire list.)\n! \t */\n! \tif (immediate_only && deftrig_events_imm != NULL)\n! \t{\n! \t\tprev_event = deftrig_events_imm;\n! 
\t\tevent = prev_event->dte_next;\n! \t}\n! \telse\n! \t{\n! \t\tprev_event = NULL;\n! \t\tevent = deftrig_events;\n! \t}\n! \n \twhile (event != NULL)\n \t{\n \t\tbool\t\tstill_deferred_ones = false;\n***************\n*** 1830,1835 ****\n--- 1850,1858 ----\n \t/* Update list tail pointer in case we just deleted tail event */\n \tdeftrig_event_tail = prev_event;\n \n+ \t/* Set the immediate event pointer for next time */\n+ \tdeftrig_events_imm = prev_event;\n+ \n \t/* Release working resources */\n \tif (rel)\n \t\theap_close(rel, NoLock);\n***************\n*** 1917,1922 ****\n--- 1940,1946 ----\n \tMemoryContextSwitchTo(oldcxt);\n \n \tdeftrig_events = NULL;\n+ \tdeftrig_events_imm = NULL;\n \tdeftrig_event_tail = NULL;\n }\n \n***************\n*** 2146,2153 ****\n \t * CONSTRAINTS command applies retroactively. This happens \"for free\"\n \t * since we have already made the necessary modifications to the\n \t * constraints, and deferredTriggerEndQuery() is called by\n! \t * finish_xact_command().\n \t */\n }\n \n \n--- 2170,2180 ----\n \t * CONSTRAINTS command applies retroactively. This happens \"for free\"\n \t * since we have already made the necessary modifications to the\n \t * constraints, and deferredTriggerEndQuery() is called by\n! \t * finish_xact_command(). But we must reset deferredTriggerInvokeEvents'\n! \t * tail pointer to make it rescan the entire list, in case some deferred\n! \t * events are now immediately invokable.\n \t */\n+ \tdeftrig_events_imm = NULL;\n }\n \n \n\n", "msg_date": "Sun, 20 Apr 2003 20:35:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > Any chance of backporting these changes to 7_3_STABLE (when you're\n> > satisfied they don't break anything)? Just looking at the CVS log for\n> > trigger.c, it appears there have been enough changes since then that\n> > it might not be easy to do (and since it's not necessarily a \"bug fix\"\n> > as such, it might not qualify for backporting to a stable version).\n> \n> I'd be pretty hesitant to make such a change in the stable branch ---\n> at least not without a lot of testing. If you and others want to\n> provide such testing, go to it. The patch appears to apply cleanly\n> enough to 7.3, but here's an adjusted patch if fuzz makes you\n> nervous...\n\nThanks, Tom. I've applied the patch to my server and it has so far\npassed the few tests I've thrown at it so far (it has detected foreign\nkey violations in both immediate and deferred trigger mode). 
And just\nso you know, it performs FAR better than the pre-patched version does\n-- in the overall transaction I'm doing, I see very little difference\nnow between deferred triggers and no triggers!\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n\n", "msg_date": "Tue, 22 Apr 2003 04:44:30 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Foreign key performance" }, { "msg_contents": "Stephan Szabo wrote:\n> \n> [Not sure this really is relevant for -performance at this point]\n> \n> On Thu, 17 Apr 2003, Stephan Szabo wrote:\n> \n> > On Fri, 18 Apr 2003, Tom Lane wrote:\n> >\n> > > Stephan Szabo <[email protected]> writes:\n> > > > It appears (from some not terribly scientific experiments - see below)\n> > > > that it's likely to be related to managing the deferred trigger queue\n> > > > given that in my case at least running the constraints non-deferred was\n> > > > negligible in comparison.\n> > >\n> > > At one time the deferred-trigger queue had an O(N^2) behavioral problem\n> > > for large N = number of pending trigger events. But I thought we'd\n> > > fixed that. What's the test case exactly? Can you get a profile with\n> > > gprof?\n> >\n> > I'm going to tomorrow hopefully - but it looks to me that we fixed one, but\n> \n> Argh. I'm getting that state where gprof returns all 0s for times. I'm\n> pretty sure this has come up before along with how to get it to work, but\n> I couldn't find it in the archives. Someday I'll learn how to use gprof. :(\n\nYou have to save and restore the timers around the fork() under Linux.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Fri, 25 Apr 2003 15:52:16 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Foreign key performance" } ]
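For readers who want to reproduce the timings quoted in this thread (100000 inserts in a single transaction with no foreign key, an immediate foreign key, and a deferred foreign key), the following is a minimal sketch of such a benchmark schema. The table and column names are invented for illustration and are not taken from the posts above.

    CREATE TABLE parent (id integer PRIMARY KEY);

    -- Immediate (default) foreign key: checked at the end of each statement.
    CREATE TABLE child_imm (parent_id integer REFERENCES parent (id));

    -- Deferred foreign key: checks are queued until COMMIT. Repeatedly
    -- rescanning that queue at each statement end is what the patch above
    -- speeds up.
    CREATE TABLE child_def (
        parent_id integer REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED
    );

    INSERT INTO parent VALUES (1);

    BEGIN;
    -- Issue 100000 inserts like this one (against child_imm, child_def, or a
    -- plain table with no foreign key), then COMMIT and compare elapsed times.
    INSERT INTO child_def (parent_id) VALUES (1);
    COMMIT;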
[ { "msg_contents": "Hello,\n\nI'm not sure if this is really a problem. I'm working on a distributed web\ncrawling system which uses several clients on different machines. Results\nare logged to a central Postgres system. When I start it, it works fine,\nbut seems to slow down drastically after several hours/days. To test the\ndatabase, I wrote a short Perl script which makes up random strings and\ninserts them, and writes the benchmark times to a logfile.\n\nComparing the beginning and end times from the log, it seems to take the\nsame amount of time to insert at the beginning of the process as after\nabout twenty minutes. However, I also logged the input from vmstat, which\nshows the amount of memory available shrinking rapidly.\n\nBefore running test program:\n total used free shared buffers cached\nMem: 516136 120364 395772 0 4776 75884\n-/+ buffers/cache: 39704 476432\nSwap: 248996 0 248996\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 1 0 0 395768 4776 75888 0 0 124 27 124 236 13 2 86 0\n\nThe first 20 lines of vmstat output (3 seconds apart each, 512M total):\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 3 0 0 267676 5116 194136 0 0 74 143 123 825 32 4 64 0\n 3 0 0 266244 5116 195532 0 0 0 0 105 2343 85 15 0 0\n 1 0 0 264816 5120 196932 0 0 0 0 104 2182 89 11 0 0\n 2 0 0 263324 5120 198308 0 0 0 299 126 2299 90 10 0 0\n 1 0 0 261856 5120 199744 0 0 0 0 101 2482 92 8 0 0\n 1 0 0 260376 5124 201188 0 0 1 683 114 2484 93 7 0 0\n 2 0 0 259152 5124 202392 0 0 0 640 119 2336 91 9 0 0\n 3 0 0 257880 5128 203628 0 0 0 0 102 2414 86 14 0 0\n 2 0 0 256772 5128 204712 0 0 0 640 116 2378 92 8 0 0\n\n\nEventually the system moves to using swap and things really slow down.\nInterestingly, when I stop the program and shut down postgres, only some\nof the memory comes back. Here is the current state of my system with\npostgres shut down, after running the test: procs\n-----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 0 252 63396 5468 373792 0 0 29 318 133 1131 37 5 58 0\n\nI seem to be missing some memory. However, I might not understand the\nresults from vmstat properly. 
Does anyone know what is going on or how I\ncan solve this?\n\nIan Knopke\n\n\n\nTest program:\n#############################################################\n#!/usr/bin/perl -w\n##test.pl - Program to test postgres performance\n \nuse DBI;\nuse Time::HiRes qw(gettimeofday);\nuse IO::Socket;\nuse IO::File;\nuse Number::Format;\n\n\nmy $str='abcdefghijklmnopqrstuvwxyz';\nmy @str=split('',$str);\n\n\n$SIG{INT}=\\&int_stoproutine;\nopen (LOGFILE,\">testlog.txt\") or die \"Can't open logfile\\n\";\n\nmy $fmt=Number::Format->new(DECIMAL_DIGITS => 0);\nmy $dbh=DBI->connect(\"DBI:Pg:dbname=inserttests\",,) or die \"Can't connect: $DBI::errstr\\n\";\n\nmy $counter=0;\nwhile(1) {\n\n my $starttime=gettimeofday();\n print LOGFILE \"COUNTER: $counter \";\n print LOGFILE \"START: $starttime \";\n my $str=&genmystr(); \n print LOGFILE \"STR: $str \";\n my $donetime=gettimeofday();\n print LOGFILE \"STRTIME: $donetime \";\n\n my $query_string=\"insert into tablea (tablea_term) values(\\'$str\\')\";\n $sth=$dbh->prepare($query_string);\n my $error_code=$sth->execute();\n\n my $endtime=gettimeofday();\n print LOGFILE \"END: $endtime \";\n my $difftime=$endtime-$starttime;\n print LOGFILE \"DIFF: $difftime\\n\";\n $counter++;\n\n}\n\nclose(LOGFILE);\n\nsub int_stoproutine {\n exit;\n}\n\nsub genmystr{\n my $str='';\n foreach(1 .. 8) {\n\n\tmy $a=rand(25);\n\tmy $b=$fmt->round($a);\n\t$str=$str.$str[$b];\n }\n return $str;\n}\n\n\n\n\n-- \n\n", "msg_date": "Sun, 20 Apr 2003 14:09:05 -0400 (EDT)", "msg_from": "Ian Knopke <[email protected]>", "msg_from_op": true, "msg_subject": "problems" }, { "msg_contents": "Ian Knopke <[email protected]> writes:\n> Comparing the beginning and end times from the log, it seems to take the\n> same amount of time to insert at the beginning of the process as after\n> about twenty minutes. However, I also logged the input from vmstat, which\n> shows the amount of memory available shrinking rapidly.\n\nAFAICT you are just showing us kernel disk cache expanding to fill\nunused memory. This is normal and not a cause for alarm.\n\n> Eventually the system moves to using swap and things really slow down.\n\nDisk cache can't cause swapping --- the kernel will just throw away\ncached pages when the memory is needed for something else. It could\nbe that you have growth in the number of Postgres processes, or the\nsizes of individual processes, but vmstat isn't very helpful for\ndetermining that (good ol' top would be more useful). In any case\nyou haven't actually shown us any data from the state where the system\nis slow, so it's a bit hard to conjecture about what's going on.\n\nSome other important information that you haven't let us in on is the\nplatform and the Postgres version you're using.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 20 Apr 2003 14:36:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems " } ]
[ { "msg_contents": "Hi,\n\n(PostgreSQL 7.3.2 on i386-portbld-freebsd4.7, compiled by GCC 2.95.4)\n\nI've a curious performance problem with a function returning set of\nrows. The query alone is very fast, but not when called from the\nfunction.\n\nTo \"emulate\" a parametred view¹, I created a function as follow:\n\nCREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n AS '...' <- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n LANGUAGE sql;\n\nThe table table1 have 330K rows, and table2 have 3K rows.\n\nWhen I run the following query (prefixed with SELECT * to try to get\nthe same behavior that the second query), I obtain very good time.\n\ndatabase=# SELECT * FROM (\n\t(SELECT a.field1,a.field2,a.field3,b.field3,b.field4,a.field5\n\t\tFROM table1 AS a, table1 AS b\n\t\tWHERE a.field6=b.field4\n\t\tORDER BY a.field6 DESC\n\t\tLIMIT 10)\n\tUNION\n\t(SELECT a.field1,a.field2,b.field3,a.field3,a.field4,b.field5\n\t\tFROM table2 AS a, table1 AS b\n\t\tWHERE a.field4=b.field6\n\t\tORDER BY a.field4 DESC\n\t\tLIMIT 10)\n\tORDER BY field4 DESC\n\tLIMIT 10\n) AS type_get_info;\n\n[...]\n(10 rows)\n\nTime: 185.86 ms\n\nBut, when I run the function (with 10 as parameter, but even 1 is\nslow) I get poor time:\n\ndatabase=# SELECT * FROM get_info(10);\n[...]\n(10 rows)\n\nTime: 32782.26 ms\ndatabase=# \n\n(even after a VACUUM FULL ANALYZE, and REINDEX of indexes used in the\nqueries)\n\nWhat is curious is that I remember that the function was fast at a\ntime..\n\nWhat is the difference between the two case ?\n\n[1] Is there another solution to this 'hack' ? I can't simply create a\nview and use 'LIMIT 10' because intermediate SELECT have be limited\ntoo (to avoid UNION with 300K rows where only the first 10 are of\ninterest to me.)\n\n-- \nFrédéric Jolliton\n\n", "msg_date": "Wed, 23 Apr 2003 19:53:55 +0200", "msg_from": "Frederic Jolliton <[email protected]>", "msg_from_op": true, "msg_subject": "Important speed difference between a query and a function with the\n\tsame query" }, { "msg_contents": "> (PostgreSQL 7.3.2 on i386-portbld-freebsd4.7, compiled by GCC 2.95.4)\n>\n> I've a curious performance problem with a function returning set of\n> rows. The query alone is very fast, but not when called from the\n> function.\n>\n> To \"emulate\" a parametred view, I created a function as follow:\n>\n> CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n> AS '...' <- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n> LANGUAGE sql;\n\nSetting enable_seqscan to off give same result speed between the query\nand the function !\n\nSo, the query in the function is not using index but the exact same\nquery alone does !\n\nIs there an explanation ?\n\n-- \nFrédéric Jolliton\n\n", "msg_date": "Thu, 24 Apr 2003 17:47:53 +0200", "msg_from": "Frederic Jolliton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "Frederic Jolliton <[email protected]> writes:\n>> To \"emulate\" a parametred view, I created a function as follow:\n>> \n>> CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n>> AS '...' <- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n>> LANGUAGE sql;\n\n> So, the query in the function is not using index but the exact same\n> query alone does !\n\nBut it's not the same query, is it? 
With \"LIMIT $1\" the planner can't\nknow what the limit value is exactly, so it has to generate a plan that\nwon't be too unreasonable for either a small or a large limit.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 24 Apr 2003 11:56:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Important speed difference between a query and a " }, { "msg_contents": "\nOn Thu, 24 Apr 2003, Frederic Jolliton wrote:\n\n> > (PostgreSQL 7.3.2 on i386-portbld-freebsd4.7, compiled by GCC 2.95.4)\n> >\n> > I've a curious performance problem with a function returning set of\n> > rows. The query alone is very fast, but not when called from the\n> > function.\n> >\n> > To \"emulate\" a parametred view, I created a function as follow:\n> >\n> > CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n> > AS '...' <- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n> > LANGUAGE sql;\n>\n> Setting enable_seqscan to off give same result speed between the query\n> and the function !\n>\n> So, the query in the function is not using index but the exact same\n> query alone does !\n>\n> Is there an explanation ?\n\nMy guess is that limit $1 is assuming a larger number of rows when\nplanning the queries, large enough that it expects seqscan to be better\n(assuming the limit is what it expects). It's probably not going to plan\nthat query each time the function is called so it's not going to know\nwhether you're calling with a small number (index scan may be better) or a\nlarge number (seq scan may be better). For example, if you sent 100000,\nthe index scan might be a loser.\n\nPerhaps plpgsql with EXECUTE would work better for that, although it's\nlikely to have some general overhead.\n\n", "msg_date": "Thu, 24 Apr 2003 08:57:13 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "> Frederic Jolliton <[email protected]> writes:\n>>> To \"emulate\" a parametred view, I created a function as follow:\n>>> \n>>> CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n>>> AS '...' <- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n>>> LANGUAGE sql;\n>\n>> So, the query in the function is not using index but the exact same\n>> query alone does !\n\nTom Lane <[email protected]> writes:\n> But it's not the same query, is it? With \"LIMIT $1\" the planner can't\n> know what the limit value is exactly, so it has to generate a plan that\n> won't be too unreasonable for either a small or a large limit.\n\nOk. So the query is optimized once and not each time.. I understand\nnow.\n\nBut, since I \"know\" better that PostgreSQL that query must use index\nin most of case, can I force in some manner the function when\ndeclaring it to take this in account ? I suppose (not tested) that\nsetting enable_seqscan just before will probably do it, but what about\ndump/restore of the database when recreating the function and keep\nthis \"fix\" automatically ?\n\n", "msg_date": "Thu, 24 Apr 2003 18:33:35 +0200", "msg_from": "Frederic Jolliton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "\n> On Thu, 24 Apr 2003, Frederic Jolliton wrote:\n>> > CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n>> > AS '...' 
<- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n>> > LANGUAGE sql;\n>>\n>> Setting enable_seqscan to off give same result speed between the query\n>> and the function !\n>>\n>> So, the query in the function is not using index but the exact same\n>> query alone does !\n>>\n>> Is there an explanation ?\n\nStephan Szabo <[email protected]> writes:\n> My guess is that limit $1 is assuming a larger number of rows when\n> planning the queries, large enough that it expects seqscan to be\n> better (assuming the limit is what it expects). It's probably not\n> going to plan that query each time the function is called so it's\n> not going to know whether you're calling with a small number (index\n> scan may be better) or a large number (seq scan may be better). For\n> example, if you sent 100000, the index scan might be a loser.\n>\n> Perhaps plpgsql with EXECUTE would work better for that, although\n> it's likely to have some general overhead.\n\nThe server is rather fast, and the query return 10 to 50 rows in most\ncase. So, this is probably a solution, even if it's not very\nclean. (Well, I have to find an example to RETURN the result of\nEXECUTE..)\n\nBut, what I don't understand is why enable_seqscan change something if\nthe query is already planed.\n\n-- \nFrédéric Jolliton\n\n", "msg_date": "Thu, 24 Apr 2003 18:41:49 +0200", "msg_from": "Frederic Jolliton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "On Thu, 24 Apr 2003, Frederic Jolliton wrote:\n\n> > On Thu, 24 Apr 2003, Frederic Jolliton wrote:\n> > Perhaps plpgsql with EXECUTE would work better for that, although\n> > it's likely to have some general overhead.\n>\n> The server is rather fast, and the query return 10 to 50 rows in most\n> case. So, this is probably a solution, even if it's not very\n> clean. (Well, I have to find an example to RETURN the result of\n> EXECUTE..)\n\nCheck out\nhttp://techdocs.postgresql.org/guides/SetReturningFunctions\n\nspecifically the GetRows() function for an example of using for in execute\nwith set returning functions.\n\n", "msg_date": "Thu, 24 Apr 2003 10:28:08 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "> On Thu, 24 Apr 2003, Frederic Jolliton wrote:\n[...]\n>> The server is rather fast, and the query return 10 to 50 rows in\n>> most case. So, this is probably a solution, even if it's not very\n>> clean. (Well, I have to find an example to RETURN the result of\n>> EXECUTE..)\n\nStephan Szabo <[email protected]> writes:\n> Check out\n> http://techdocs.postgresql.org/guides/SetReturningFunctions\n>\n> specifically the GetRows() function for an example of using for in\n> execute with set returning functions.\n\nOh right. Thanks you for pointing out this.\n\n-- \nFrédéric Jolliton\n\n", "msg_date": "Thu, 24 Apr 2003 19:40:17 +0200", "msg_from": "Frederic Jolliton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "Frederic Jolliton kirjutas N, 24.04.2003 kell 19:33:\n> > Frederic Jolliton <[email protected]> writes:\n> >>> To \"emulate\" a parametred view, I created a function as follow:\n> >>> \n> >>> CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n> >>> AS '...' 
<- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n> >>> LANGUAGE sql;\n> >\n> >> So, the query in the function is not using index but the exact same\n> >> query alone does !\n> \n> Tom Lane <[email protected]> writes:\n> > But it's not the same query, is it? With \"LIMIT $1\" the planner can't\n> > know what the limit value is exactly, so it has to generate a plan that\n> > won't be too unreasonable for either a small or a large limit.\n> \n> Ok. So the query is optimized once and not each time.. I understand\n> now.\n> \n> But, since I \"know\" better that PostgreSQL that query must use index\n> in most of case, can I force in some manner the function when\n> declaring it to take this in account ? \n\nYou could define two functions - one for small sets with constant LIMITs\n(maybe 50) in UNION parts, and another with $1. Then use accordingly.\n\n> I suppose (not tested) that\n> setting enable_seqscan just before will probably do it, but what about\n> dump/restore of the database when recreating the function and keep\n> this \"fix\" automatically ?\n\n-------------\nHannu\n\n", "msg_date": "25 Apr 2003 10:01:31 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Important speed difference between a query and a" }, { "msg_contents": "On Wed, Apr 23, 2003 at 07:53:55PM +0200, Frederic Jolliton wrote:\n> CREATE FUNCTION get_info (integer) RETURNS SETOF type_get_info\n> AS '...' <- here the query show below, where 'LIMIT $1' is used instead of 'LIMIT 10'\n> LANGUAGE sql;\n \nYou should probably define the function to be STABLE.\n\n LANGUAGE sql STABLE;\n\nSee\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-createfunction.html\nfor more info.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Fri, 25 Apr 2003 10:09:05 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Important speed difference between a query and a function with\n\tthe same query" } ]
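Because a SQL function's query is planned once while the LIMIT is still an unknown parameter, the PL/pgSQL FOR ... IN EXECUTE approach pointed to above builds the query text at call time, so each call is planned with the actual limit value. The following is only a rough sketch of that idea (including the STABLE marking suggested above); the type, table, and function names are invented, not the poster's real ones.

    CREATE TYPE item_info AS (id integer, name text);

    CREATE FUNCTION get_items (integer) RETURNS SETOF item_info AS '
    DECLARE
        lim ALIAS FOR $1;
        r   RECORD;
    BEGIN
        -- The limit is embedded in the query string, so the planner sees the
        -- real number instead of an unknown parameter.
        FOR r IN EXECUTE ''SELECT id, name FROM items ORDER BY name DESC LIMIT ''
                         || lim LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    ' LANGUAGE plpgsql STABLE;

It is called the same way as before, e.g. SELECT * FROM get_items(10);, at the cost of re-planning the statement on each call.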
[ { "msg_contents": "On this table\n\n project_id | integer | not null\n id | integer | not null\n date | date | not null\n team_id | integer | not null\n work_units | bigint | not null\nIndexes: email_contrib_pkey primary key btree (project_id, id, date)\n\nwith this breakdown of data\n\n project_id | count \n------------+----------\n 5 | 56427141\n 8 | 1058843\n 24 | 361595\n 25 | 4092575\n 205 | 58512516\n\nAny kind of operation on an entire project wants to tablescan, even\nthough it's going to take way longer.\n\nexplain analyze select sum(work_units) from email_contrib where\nproject_id=8;\n\nIndex scan 126, 56, 55 seconds\nSeq. scan 1517, 850, 897 seconds \n\nIt seems like the metrics used for the cost of index scanning v. table\nscanning on large tables need to be revisited. It might be such a huge\ndifference in this case because the table is essentially clustered on\nthe primary key. I can test this by doing an aggregate for, say, a\nspecific team_id, which would be pretty well spread across the entire\ntable, but that'll have to wait a bit.\n\nAnyone have any thoughts on this? Also, is there a TODO to impliment\nreal clustered indexes? Doing stuff by project_id on this table in\nsybase was very efficient, because there was a real clustered index on\nthe PK. By clustered index, I mean an index where the leaf nodes of the\nB-tree were the actual table rows. This means the only overhead in going\nthrough the index is scanning the branches, which in this case would be\npretty light-weight.\n\nIs this something that I should be using some PGSQL-specific feature\nfor, like inheritance?\n\nI've been really happy so far with PGSQL (comming from Sybase and DB2),\nbut it seems there's still some pretty big performance issues that want\nto be addressed (or I should say performance issues that hurt really big\nwhen you hit them :) ).\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Thu, 24 Apr 2003 18:38:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "More tablescanning fun" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> It seems like the metrics used for the cost of index scanning v. table\n> scanning on large tables need to be revisited. It might be such a huge\n> difference in this case because the table is essentially clustered on\n> the primary key.\n\nProbably. What does the correlation figure in pg_stats show as?\n\nThere's been some previous debate about the equation used to correct\nfor correlation, which is certainly bogus (I picked it more or less\nout of the air ;-)). But so far no one has proposed a replacement\nequation with any better foundation ... take a look in \nsrc/backend/optimizer/path/costsize.c if you want to get involved.\n\n> Also, is there a TODO to impliment\n> real clustered indexes?\n\nNo. It's not apparent to me how you could do that without abandoning\nMVCC, which we're not likely to do.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 24 Apr 2003 19:58:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More tablescanning fun " }, { "msg_contents": "On Thu, Apr 24, 2003 at 07:58:30PM -0400, Tom Lane wrote:\n> \"Jim C. 
Nasby\" <[email protected]> writes:\n> > It seems like the metrics used for the cost of index scanning v. table\n> > scanning on large tables need to be revisited. It might be such a huge\n> > difference in this case because the table is essentially clustered on\n> > the primary key.\n> \n> Probably. What does the correlation figure in pg_stats show as?\n \nstats=# select attname, correlation from pg_stats where\ntablename='email_contrib';\n attname | correlation \n ------------+-------------\n project_id | 1\n id | 0.449204\n date | 0.271775\n team_id | 0.165588\n work_units | 0.0697928\n\n> There's been some previous debate about the equation used to correct\n> for correlation, which is certainly bogus (I picked it more or less\n> out of the air ;-)). But so far no one has proposed a replacement\n> equation with any better foundation ... take a look in \n> src/backend/optimizer/path/costsize.c if you want to get involved.\n\nAre you reffering to the PF formula?\n\n> > Also, is there a TODO to impliment\n> > real clustered indexes?\n> \n> No. It's not apparent to me how you could do that without abandoning\n> MVCC, which we're not likely to do.\n \nHmm... does MVCC mandate inserts go at the end? My understanding is that\neach tuple indicates it's insert/last modified time; if this is the\ncase, why would a true clustered index break mvcc? I guess an update\nthat moves the tuple would be tricky, but I'm guesing there's some kind\nof magic that could happen there... worst case would be adding an\n'expired' timestamp.\n\nOn the other hand, it might be possible to get the advantages of a\nclustered index without doing a *true* clustered index. The real point\nis to be able to use indexes; I've heard things like 'if you need to\naccess more than 10% of a table then using an index would be\ndisasterous', and that's not good... that number should really be over\n50% for most reasonable ratios of fields indexed to fields in table (of\ncourse field size plays a factor).\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Thu, 24 Apr 2003 23:59:24 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More tablescanning fun" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Thu, Apr 24, 2003 at 07:58:30PM -0400, Tom Lane wrote:\n>> There's been some previous debate about the equation used to correct\n>> for correlation, which is certainly bogus (I picked it more or less\n>> out of the air ;-)). But so far no one has proposed a replacement\n>> equation with any better foundation ... take a look in \n>> src/backend/optimizer/path/costsize.c if you want to get involved.\n\n> Are you reffering to the PF formula?\n\nThe PF formula is good as far as I know, but it assumes an uncorrelated\ntable order. The debate is how to correct it for nonzero correlation.\nSpecifically, this bit:\n\n * When the index ordering is exactly correlated with the table ordering\n * (just after a CLUSTER, for example), the number of pages fetched should\n * be just sT. What's more, these will be sequential fetches, not the\n * random fetches that occur in the uncorrelated case. 
So, depending on\n * the extent of correlation, we should estimate the actual I/O cost\n * somewhere between s * T * 1.0 and PF * random_cost. We currently\n * interpolate linearly between these two endpoints based on the\n * correlation squared (XXX is that appropriate?).\n\nI believe the endpoints s*T and PF*random_cost, I think, but the curve\nbetween them is anyone's guess. It's also quite possible that the\ncorrelation stat that we currently compute is inadequate to model what's\ngoing on.\n\n>> No. It's not apparent to me how you could do that without abandoning\n>> MVCC, which we're not likely to do.\n \n> Hmm... does MVCC mandate inserts go at the end?\n\nAnywhere that there's free space. The point is that you can't promise\nupdates will fit on the same page as the original tuple. So whatever\ndesirable physical ordering you may have started with will surely\ndegrade over time.\n\n> On the other hand, it might be possible to get the advantages of a\n> clustered index without doing a *true* clustered index. The real point\n> is to be able to use indexes; I've heard things like 'if you need to\n> access more than 10% of a table then using an index would be\n> disasterous', and that's not good... that number should really be over\n> 50% for most reasonable ratios of fields indexed to fields in table (of\n> course field size plays a factor).\n\nIf you have to read 50% of a table, you certainly should be doing a\nlinear scan. There will be hardly any pages you can skip (unless the\ntable is improbably well clustered), and the extra I/O needed to read\nthe index will buy you nothing.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 25 Apr 2003 01:23:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More tablescanning fun " }, { "msg_contents": "On Fri, Apr 25, 2003 at 01:23:10AM -0400, Tom Lane wrote:\n> I believe the endpoints s*T and PF*random_cost, I think, but the curve\n> between them is anyone's guess. It's also quite possible that the\n> correlation stat that we currently compute is inadequate to model what's\n> going on.\n\nIn this case, the interpolation can't be at fault, because correlation\nis 1 (unless the interpolation is backwards, but that doesn't appear to\nbe the case).\n\nOne possibility is that IndexSelectivity isn't taking\nmost_common_(vals|freqs) into account.\n\nLooking at this from an idea case, most (or all) of this query should be\nretrieved by simply incrementing through both the index and the tuples\nat the same time. We should end up pulling 0.7% of the index and raw\npages combined. Analyze thinks that using the index will be about 50%\nmore expensive, though. (3258557 v. 2274866)\n\nA thought that comes to mind here is that it would be incredible if\npgsql could take metrics of how long things actually take on a live\nsystem and incorporate them... basically learning as it goes. A first\nstep in this case would be to keep tabs on how close real page-read\ncounts come to what the optimizer predicted, and storing that for later\nanalysis. This would make it easier for you to verify your linear\ncorrelation assumption, for example (it'd also make it easier to\nvalidate the PF formula).\n\n> >> No. It's not apparent to me how you could do that without abandoning\n> >> MVCC, which we're not likely to do.\n> \n> > Hmm... does MVCC mandate inserts go at the end?\n> \n> Anywhere that there's free space. The point is that you can't promise\n> updates will fit on the same page as the original tuple. 
So whatever\n> desirable physical ordering you may have started with will surely\n> degrade over time.\n\nYes, updates are the tricky part to clustered indexes, and MVCC might\nmake it harder. What Sybase 11 (which only supports page locking) does\nis see if the update moves the tuple off it's current page. If it\ndoesn't, it just shuffles the page around as needed and goes on with\nbusiness. If it needs to move, it grabs (and locks) the page it needs to\nmove to, inserts it on that page (possibly incurring a page split), and\ndeletes it from the old page. My guess is that with MVCC, you can't\nsimply delete the old tuple... you'd have to leave some kind of 'bread\ncrumb' behind for older transactions to see (though, I guess this would\nalready have to be happening somehow).\n\nThe reason to do this in this case is well worth it though... we end up\nwith one table (simplifies code) that should essentially act as if it\nwas multiple (5 in this case) tables, so performance should still be\nvery good.\n\n> > On the other hand, it might be possible to get the advantages of a\n> > clustered index without doing a *true* clustered index. The real point\n> > is to be able to use indexes; I've heard things like 'if you need to\n> > access more than 10% of a table then using an index would be\n> > disasterous', and that's not good... that number should really be over\n> > 50% for most reasonable ratios of fields indexed to fields in table (of\n> > course field size plays a factor).\n> \n> If you have to read 50% of a table, you certainly should be doing a\n> linear scan. There will be hardly any pages you can skip (unless the\n> table is improbably well clustered), and the extra I/O needed to read\n> the index will buy you nothing.\n\nYes, and it's that 'improbably well clustered' case that I have here. :)\nBut even if you're only 25% clustered, I think you'll still see a huge\ngain on a very large table, especially if the index tuples are\nsubstantially smaller than the raw tuples (which they normally should\nbe).\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Fri, 25 Apr 2003 09:38:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More tablescanning fun" }, { "msg_contents": "Jim C. Nasby kirjutas R, 25.04.2003 kell 07:59:\n> > > Also, is there a TODO to impliment\n> > > real clustered indexes?\n> > \n> > No. It's not apparent to me how you could do that without abandoning\n> > MVCC, which we're not likely to do.\n> \n> Hmm... does MVCC mandate inserts go at the end? 
\n\nI have been pondering if keeping pages half-empty (or even 70% empty)\ncould solve both clustering problems and longish updates for much data.\n\nIf we could place the copy in the same page than original, most big\nupdates would be possible by one sweep of disk heads and also clustering\norder would be easier to keep if pages were kept intentionally half\nempty.\n\nSo \"VACUUM FULL 65% EMPTY;\" could make sense ?\n\n\n-------------\nHannu\n\n", "msg_date": "25 Apr 2003 19:28:06 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More tablescanning fun" }, { "msg_contents": "On Fri, Apr 25, 2003 at 07:28:06PM +0300, Hannu Krosing wrote:\n> I have been pondering if keeping pages half-empty (or even 70% empty)\n> could solve both clustering problems and longish updates for much data.\n> \n> If we could place the copy in the same page than original, most big\n> updates would be possible by one sweep of disk heads and also clustering\n> order would be easier to keep if pages were kept intentionally half\n> empty.\n> \n> So \"VACUUM FULL 65% EMPTY;\" could make sense ?\n \nThat's actually a recommended practice, at least for sybase when you're\nusing a clustered index, depending on what you're using it for. If you\ncluster a table in such a way that inserts will happen across the entire\ntable, you'll actually end up with a fillratio (amount of data v. empty\nspace on a page) of 75% over time, because of page splits. When sybase\ngoes to insert, if it can't find room on the page it needs to insert\ninto (keep in mind this is a clustered table, so a given row *must* go\ninto a given position), it will split that single page into two pages,\neach of which will then have a fillratio of 50%. Of course they'll\neventually approach 100%, so the average fill ratio across all pages for\nthe table would be 75%.\n\nI'm not familiar enough with pgsql's guts to know how big an impact\nupdates across pages are, or if they even happen often at all. If you're\nnot maintaining a clustered table, couldn't all updates just occur\nin-place? Or are you thinking of the case where you have a lot of\nvariable-length stuff?\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Fri, 25 Apr 2003 11:56:13 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More tablescanning fun" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> I have been pondering if keeping pages half-empty (or even 70% empty)\n> could solve both clustering problems and longish updates for much data.\n\nYou could achieve that pretty easily if you simply don't ever VACUUM\nFULL ;-)\n\nUPDATE has always (AFAIR) attempted to place the new version on the same\npage as the old, moving it elsewhere only if it doesn't fit. So that\npart of the logic is already there.\n\n> So \"VACUUM FULL 65% EMPTY;\" could make sense ?\n\nNot so much that, as a parameter to CLUSTER telling it to fill pages\nonly x% full.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 25 Apr 2003 16:10:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More tablescanning fun " }, { "msg_contents": "Jim C. 
Nasby kirjutas R, 25.04.2003 kell 19:56:\n> I'm not familiar enough with pgsql's guts to know how big an impact\n> updates across pages are, or if they even happen often at all. If you're\n> not maintaining a clustered table, couldn't all updates just occur\n> in-place?\n\nIn postgres _no_ updates happen in-place. The MVCC concurrency works by\nalways inserting a new tuple on update .\n\n-----------\nHannu\n\n", "msg_date": "27 Apr 2003 09:50:51 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More tablescanning fun" }, { "msg_contents": "On Fri, 25 Apr 2003 09:38:01 -0500, \"Jim C. Nasby\" <[email protected]>\nwrote:\n>In this case, the interpolation can't be at fault, because correlation\n>is 1 (unless the interpolation is backwards, but that doesn't appear to\n>be the case).\n\nBut your index has 3 columns which causes the index correlation to be\nassumed as 1/3. So the interpolation uses 1/9 (correlation squared)\nand you get a cost estimation that almost equals the upper bound.\n\nIf you want to play around with other interpolation methods, you might\nwant to get this patch: http://www.pivot.at/pg/16-correlation-732.diff\n\nA short description of the GUC parameters introduced by this patch can\nbe found here:\nhttp://archives.postgresql.org/pgsql-performance/2002-11/msg00256.php\n\nAs a short term workaround for an unmodified Postgres installation,\nyou can create an index on email_contrib(project_id).\n\nServus\n Manfred\n\n", "msg_date": "Wed, 30 Apr 2003 16:14:46 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More tablescanning fun" }, { "msg_contents": "On Wed, Apr 30, 2003 at 04:14:46PM +0200, Manfred Koizar wrote:\n> On Fri, 25 Apr 2003 09:38:01 -0500, \"Jim C. Nasby\" <[email protected]>\n> wrote:\n> >In this case, the interpolation can't be at fault, because correlation\n> >is 1 (unless the interpolation is backwards, but that doesn't appear to\n> >be the case).\n> \n> But your index has 3 columns which causes the index correlation to be\n> assumed as 1/3. So the interpolation uses 1/9 (correlation squared)\n> and you get a cost estimation that almost equals the upper bound.\n \nHmm... interesting... maybe it would also be a good idea to expand\nANALYZE so that it will analyze actual index correlation? ie: in this\ncase, it would notice that the index on project_id, id, date is highly\ncorrelated, across all 3 columns.\n\nSupporting something close to a real clustered index would also work as\nwell, since the optimizer would treat that case differently (essentially\nas a combination between an index scan but doing a seq. scan within each\npage).\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Sun, 4 May 2003 11:22:14 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More tablescanning fun" } ]
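To make the last suggestion concrete: since the planner treats the three-column index's correlation as only a third of the leading column's (as noted above), while pg_stats shows project_id itself with a correlation of 1, a separate single-column index sidesteps the interpolation problem for this query. A sketch, with an invented index name:

    CREATE INDEX email_contrib_project ON email_contrib (project_id);
    ANALYZE email_contrib;
    EXPLAIN ANALYZE
        SELECT sum(work_units) FROM email_contrib WHERE project_id = 8;

The EXPLAIN ANALYZE re-run is only to check whether the plan switches to the index scan that was measured at 55-126 seconds earlier in the thread.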
[ { "msg_contents": "Hi,\nI�ve already created an concatenated index in Postgres V3.0 with different datatypes:\nCREATE INDEX mov_i4 ON movimiento USING btree (id_company, id_status, id_docum, id_origen_mov);\nid_company int2\nid_status char(1)\nid_docum numeric(15,0)\nid_origen_mov int4\nand after several tests the query doesn�t use the index because it seems that id_company must be a char.\nIf a use the value for the id_company eg.   select * from movimiento where id_company = 120\n                                                          and id_status = 'X' and id_docto = 10000056789 and mount = 12345.56\n---- it doesn�t use the index                                                                                \nIf a use the value for the id_company eg.   select * from movimiento where id_company = '120' and\n                                                     and id_status = 'X' and id_docto = 10000056789 and mount = 12345.56\n---- it  uses the index\n \nThe problem is that I can�t change the datatypes in the hole application and the table has 240,000 rows and we need to use concatenated indexes, because we access the table in different ways, the table has another five concatenated indexes.\nCould you suggest something to resolve this?\nThank you very much.\nRegards,\nCecilia\n \n ï¿½nete al mayor servicio mundial de correo electr�nico: Haz clic aqu� \n", "msg_date": "Fri, 25 Apr 2003 17:33:13 -0500", "msg_from": "\"Cecilia Alvarez\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes with different datatypes" } ]
[ { "msg_contents": "\n\n\nSorry, this is the good one:\nI�ve already created an concatenated index in Postgres V3.0 with different datatypes:\nCREATE INDEX mov_i4 ON movimiento USING btree (id_company, id_status, id_docum, id_origen_mov);\n\nid_company int2\n\nid_status char(1)\n\nid_docum numeric(15,0)\n\nid_origen_mov int4\n\nand after several tests the query doesn�t use the index because it seems that id_company must be a char.\n\nIf a use the value for the id_company eg.   select * from movimiento where id_company = 120\n\n                                                          and id_status = 'X' and id_docum = 10000056789 and id_origen_mov = 12345\n\n---- it doesn�t use the index                                                                                \n\nIf a use the value for the id_company eg.   select * from movimiento where id_company = '120' and\n\n                                                     and id_status = 'X' and id_docum = 10000056789 and id_origen_mov = 12345\n\n---- it  uses the index\n\n \n\nThe problem is that I can�t change the datatypes in the hole application and the table has 240,000 rows and we need to use concatenated indexes, because we access the table in different ways, the table has another five concatenated indexes.\n\nCould you suggest something to resolve this?\n\nThank you very much.\n\nRegards,\n\nCecilia\n\n \n\n \nMSN. M�s �til Cada D�a Haz clic aqu� smart spam protection and 2 months FREE* \n", "msg_date": "Fri, 25 Apr 2003 17:36:48 -0500", "msg_from": "\"Cecilia Alvarez\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes with different datatypes:Correction" }, { "msg_contents": "On Fri, 25 Apr 2003, Cecilia Alvarez wrote:\n\n> \n> \n> \n> Sorry, this is the good one:\n> \n> I´ve already created an concatenated index in Postgres V3.0 with different datatypes:\n> \n> CREATE INDEX mov_i4 ON movimiento USING btree (id_company, id_status, id_docum,\n> id_origen_mov);\n> \n> id_company int2\n> \n> id_status char(1)\n> \n> id_docum numeric(15,0)\n> \n> id_origen_mov int4\n> \n> and after several tests the query doesn´t use the index because it seems that id_company must\n> be a char.\n> \n> If a use the value for the id_company eg.   select * from movimiento where id_company = 120\n> \n>                                                           and id_status = 'X' and id_docum =\n> 10000056789 and id_origen_mov = 12345\n> \n> ---- it doesn´t use the\n> index                                                                               \n> \n> If a use the value for the id_company eg.   select * from movimiento where id_company = '120'\n> and\n> \n>                                                      and id_status = 'X' and id_docum =\n> 10000056789 and id_origen_mov = 12345\n> \n> ---- it  uses the index\n> \n>  \n> \n> The problem is that I can´t change the datatypes in the hole application and the table has\n> 240,000 rows and we need to use concatenated indexes, because we access the table in\n> different ways, the table has another five concatenated indexes.\n> \n> Could you suggest something to resolve this?\n\nHi Cecilia. It looks like the problem is that Postgresql assumes that a \nnon-quoted number is generally an int4, and since the id_company is int2, \nit isn't automatically converted. You can either change your app to force \ncoercion (which the '' quotes are doing) or like:\n\nwhere id_company = 120::int2 \nOR\nwhere id = cast(120 as int2)\n\nOR you can recreate your table with id_company being int4. 
If you NEED to \nrestrict it to int2 range, then you can use a constraint to make it act \nlike an int2 without actually being one.\n\n", "msg_date": "Fri, 25 Apr 2003 16:40:54 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes with different datatypes:Correction" } ]
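Putting the two suggestions in this thread into SQL form (the CHECK constraint below is one possible reading of "use a constraint to make it act like an int2", not code from the original posts):

    -- 1. Cast (or quote) the literal so it matches the int2 column:
    SELECT * FROM movimiento
     WHERE id_company = 120::int2
       AND id_status = 'X'
       AND id_docum = 10000056789
       AND id_origen_mov = 12345;

    -- 2. Or recreate the column as int4 and keep the int2 range with a
    --    check constraint, for example:
    --    id_company int4 CHECK (id_company BETWEEN -32768 AND 32767)

Later PostgreSQL releases can use an index for this kind of cross-type comparison automatically; on the 7.x versions discussed here, the explicit cast or the quoted literal is the practical fix.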