[ { "msg_contents": "This is a wild and crazy thought which I am sure is invalid for some \ngood reason.\n\nBut why can't postgres just vacuum itself as it goes along?\n\nWhen a row is orphaned it's added to a list of possibly available rows. \nWhen a new row is needed the list of possible rows is examined and the \nfirst one with a transaction id less then the lowest running transaction \nid is chosen to be the new row? These rows can be in a heap so it's \nreally fast to find one.\n\nLike magic - no more vacuuming. No more holes for people to fall into.\n\nIs this an oversimplification of the problem?\n\nRalph\n", "msg_date": "Wed, 31 Aug 2005 10:21:20 +1200", "msg_from": "Ralph Mason <[email protected]>", "msg_from_op": true, "msg_subject": "'Real' auto vacuum?" }, { "msg_contents": "On Wed, Aug 31, 2005 at 10:21:20AM +1200, Ralph Mason wrote:\n> This is a wild and crazy thought which I am sure is invalid for some \n> good reason.\n> \n> But why can't postgres just vacuum itself as it goes along?\n> \n> When a row is orphaned it's added to a list of possibly available rows. \n> When a new row is needed the list of possible rows is examined and the \n> first one with a transaction id less then the lowest running transaction \n> id is chosen to be the new row? These rows can be in a heap so it's \n> really fast to find one.\n> \n> Like magic - no more vacuuming. No more holes for people to fall into.\n\nYes please. :-)\n\n> Is this an oversimplification of the problem?\n\nBut, yeah. It's probably not that easy, especially with really big\ndatabases. Where is this free list stored? How efficient is it to keep\ntrack of the lowest running transaction at all times? How does one\nsynchronize access to this free list, to ensure that processes don't\nblock up waiting for access to the free list? Is the fre list\njournalled to prevent corruption, and the accidental re-use of a still\nin use row? And, there would be a cost to scanning this list on every\ninsert or update.\n\nAs an outsider (like you?) I see the current model as a design flaw as\nwell. A neat and tidy model on paper. Not so nice in real life. The\nneed to vacuum in batch mode, to keep the database from dying, seems\nintuitively bad.\n\nI think there must be answers to this problem. Even simple\noptimizations, such as defining a table such that any delete or update\nwithin a table, upon commit, will attempt to vacuum just the rows that\nshould not be considered free for any new transactions. If it's in\nuse by an active transaction, oh well. It can be picked up by a batch\nrun of vacuum. If it's free though - let's do it now.\n\nI think any optimizations we come up with, will be more happily accepted\nwith a working patch that causes no breakage... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Tue, 30 Aug 2005 18:35:19 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 'Real' auto vacuum?" 
}, { "msg_contents": "Ralph,\n\n> When a row is orphaned it's added to a list of possibly available rows.\n> When a new row is needed the list of possible rows is examined and the\n> first one with a transaction id less then the lowest running transaction\n> id is chosen to be the new row? These rows can be in a heap so it's\n> really fast to find one.\n\nThis is the long-term plan. However, it's actually a lot harder than it \nsounds. Patches welcome.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 30 Aug 2005 15:58:34 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'Real' auto vacuum?" }, { "msg_contents": "[email protected] wrote:\n\n>But, yeah. It's probably not that easy, especially with really big\n>databases. Where is this free list stored? How efficient is it to keep\n>track of the lowest running transaction at all times? How does one\n>synchronize access to this free list, to ensure that processes don't\n>block up waiting for access to the free list? Is the fre list\n>journalled to prevent corruption, and the accidental re-use of a still\n>in use row? And, there would be a cost to scanning this list on every\n>insert or update.\n> \n>\nI suspect the freelist could be stored as an index, and just handily \npostgres supports those out of the box. There would be a cost yes, \nbut then what is the cost of adding pages to the file all the time? I \nguess as with all things there is no one size fits all, so perhaps you \ncould turn it off - although I expect for 99.9% of the cases 'on' would \nbe the better choice. If it gets broken there is already the reindex \ncode that can fix it. A coherency / fixing / recover of a table command \nwould probably be a useful tool anyway.\n\n>As an outsider (like you?) I see the current model as a design flaw as\n>well. A neat and tidy model on paper. Not so nice in real life. The\n>need to vacuum in batch mode, to keep the database from dying, seems\n>intuitively bad.\n> \n>\nWe have a script that vacuums the database every 5 minutes, excessive - \nyes, but turns out that any less is no good really. I think that this \nis sub optimal, the DB work keeps running, but the vacuum can slow down \nother tasks. It also probably flushes data that we would need out of \nthe page cache so it can look at data that isn't used often as the \nvacuum runs. Not the most optimal data access pattern I could imagine.\n\n>I think there must be answers to this problem. Even simple\n>optimizations, such as defining a table such that any delete or update\n>within a table, upon commit, will attempt to vacuum just the rows that\n>should not be considered free for any new transactions. If it's in\n>use by an active transaction, oh well. It can be picked up by a batch\n>run of vacuum. If it's free though - let's do it now.\n> \n>\nAnything would be good - I think it's the achilles heel of postgres. \nPerhaps there is something simple like that could fix 95% of the problem.\n\n>I think any optimizations we come up with, will be more happily accepted\n>with a working patch that causes no breakage... :-)\n>\n> \n>\nI am sure they would.\n\nCheers\nRalph\n\n", "msg_date": "Wed, 31 Aug 2005 11:39:17 +1200", "msg_from": "Ralph Mason <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 'Real' auto vacuum?" 
}, { "msg_contents": "\n> > When a row is orphaned it's added to a list of possibly available rows.\n> > When a new row is needed the list of possible rows is examined and the\n> > first one with a transaction id less then the lowest running transaction\n> > id is chosen to be the new row? These rows can be in a heap so it's\n> > really fast to find one.\n>\n> This is the long-term plan. However, it's actually a lot harder than it\n> sounds. Patches welcome.\n\n Some ETA? Since that would be the most welcome addition for us. We\nhave few very heavily updated databases where table bloat and constant\nvacuuming is killing performance.\n\n Mindaugas\n\n", "msg_date": "Wed, 31 Aug 2005 08:17:53 +0300", "msg_from": "\"Mindaugas Riauba\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'Real' auto vacuum?" }, { "msg_contents": "Mindaugas Riauba wrote:\n\n>>>When a row is orphaned it's added to a list of possibly available rows.\n>>>When a new row is needed the list of possible rows is examined and the\n>>>first one with a transaction id less then the lowest running transaction\n>>>id is chosen to be the new row? These rows can be in a heap so it's\n>>>really fast to find one.\n>>> \n>>>\n>>This is the long-term plan. However, it's actually a lot harder than it\n>>sounds. Patches welcome.\n>> \n>>\n>\n> Some ETA? Since that would be the most welcome addition for us. We\n>have few very heavily updated databases where table bloat and constant\n>vacuuming is killing performance.\n>\n> \n>\nHow often are you vacuuming (the definition of 'constantly' tends to \nvary)? Are you vacuuming the whole database each time? If so, identify \nwhich tables are being updated frequently, and vacuum those often. \nVacuum other tables less frequently.\n\nAlso, are you you using VACUUM FULL (if so, you certainly don't want to be).\n\n-- \nBrad Nicholson 416-673-4106 [email protected]\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Wed, 31 Aug 2005 09:51:43 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'Real' auto vacuum?" } ]
[ { "msg_contents": "Hi all\n\nI'm having a strange problem with a query which looks like this:\n\nSELECT id FROM orders WHERE id NOT IN (SELECT order_id FROM orders_items);\n\nThe id fields are varchars (32), both indexed. The number of rows in the\ntables are about 60000.\n\nNow, the really strange part is if I delete all data from orders_items,\nrun VACUUM ANALYZE, then import all the data, the query finshes in about 3\nseconds. Then I run VACUUM ANALYZE, and *after* the vacuum, the query\ntakes\nabout 30 minutes to run. The data is the same and this is the only query\nrunning, and the machine load is effectively none.\n\nEXPLAIN'ng the query shows, before VACUUM ANALYZE, shows this:\n\n QUERY PLAN\n-------------------------------------------------------------------------\n Seq Scan on orders (cost=0.00..12184.14 rows=29526 width=33)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on orders_items (cost=0.00..0.00 rows=1 width=33)\n\nAfter the vacuum, the plan is like this:\n\n QUERY PLAN\n--------------------------------------------------------------------------------\n Seq Scan on fsi_orders (cost=0.00..40141767.46 rows=29526 width=33)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on fsi_orders_items (cost=0.00..1208.12 rows=60412\nwidth=33)\n\n\nAny ideas what I can do to make the query running in < 10 seconds?\n\nThanks,\nGu�mundur.\n", "msg_date": "Wed, 31 Aug 2005 10:22:44 -0000 (GMT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Query slow after VACUUM ANALYZE" }, { "msg_contents": "Hi again\n\n[..]\n\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Seq Scan on orders (cost=0.00..12184.14 rows=29526 width=33)\n> Filter: (NOT (hashed subplan))\n> SubPlan\n> -> Seq Scan on orders_items (cost=0.00..0.00 rows=1 width=33)\n>\n> After the vacuum, the plan is like this:\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Seq Scan on fsi_orders (cost=0.00..40141767.46 rows=29526 width=33)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on fsi_orders_items (cost=0.00..1208.12 rows=60412\n> width=33)\n>\n\nThis, of course, should be \"orders\", not \"fsi_orders\", and \"orders_items\",\nnot \"fsi_orders_items\". Sorry for the confusion.\n\nAdditional info: I'm running PostgreSQL 7.4.8.\n\nThanks,\nGu�mundur.\n\n", "msg_date": "Wed, 31 Aug 2005 10:26:22 -0000 (GMT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Query slow after VACUUM ANALYZE" }, { "msg_contents": "[email protected] writes:\n> Any ideas what I can do to make the query running in < 10 seconds?\n\nIncrease work_mem (or sort_mem in older releases). PG is dropping\nback from the hash plan because it thinks the hashtable won't fit\nin work_mem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Sep 2005 23:53:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slow after VACUUM ANALYZE " } ]
[ { "msg_contents": "Hi,\n \nI�m trying to tune a linux box with a 12 GB database and 4 GB RAM. First of all I would like to stop the swapping, so the shared_buffers and sort_mem were decreased but even so it started swapping two hours after DBMS started up.\n \nI would like to know some suggestions how to discover why is it swapping?\n \nI�ve collected the following data from the environment and saved at http://paginas.terra.com.br/educacao/rei/dados.htm\n \n1. select version()\n2. uname -a\n3. cat /proc/cpuinfo\n4. cat /proc/meminfo\n5. vmstat 5\n6. pg_stat_activity\n7. postgresql.conf\n \nThanks in advance!\n \nReimer\n\n\n__________________________________________________\nConverse com seus amigos em tempo real com o Yahoo! Messenger \nhttp://br.download.yahoo.com/messenger/ \n\nHi,\n \nI�m trying to tune a linux box with a 12 GB database and 4 GB RAM. First of all I would like to stop the swapping, so the shared_buffers and sort_mem were decreased but even so it started swapping two hours after DBMS started up.\n \nI would like to know some suggestions how to discover why is it swapping?\n \nI�ve collected the following data from the environment and saved at http://paginas.terra.com.br/educacao/rei/dados.htm\n \n1. select version()\n2. uname -a\n3. cat /proc/cpuinfo\n4. cat /proc/meminfo\n5. vmstat 5\n6. pg_stat_activity\n7. postgresql.conf\n \nThanks in advance!\n \nReimer__________________________________________________Converse com seus amigos em tempo real com o Yahoo! Messenger http://br.download.yahoo.com/messenger/", "msg_date": "Wed, 31 Aug 2005 15:25:15 -0300 (ART)", "msg_from": "Carlos Henrique Reimer <[email protected]>", "msg_from_op": true, "msg_subject": "Swapping" }, { "msg_contents": "Carlos Henrique Reimer <[email protected]> writes:\n> I would like to know some suggestions how to discover why is it swapping?\n\nZero swap-in rate and swap-out rates in the single digits do not\nconstitute a swapping problem. It's reasonably likely that that\ntraffic isn't even coming from Postgres, but something else.\nI'd say ignore it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Aug 2005 14:49:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping " }, { "msg_contents": "[Carlos Henrique Reimer - Wed at 03:25:15PM -0300]\n> I´m trying to tune a linux box with a 12 GB database and 4 GB RAM. First\n> of all I would like to stop the swapping, so the shared_buffers and sort_mem\n> were decreased but even so it started swapping two hours after DBMS started\n> up.\n> \n> I would like to know some suggestions how to discover why is it swapping?\n\nI agree with Tom Lane, nothing to worry about. Swapping is not a problem\nper se, aggressive swapping is a problem. If you are absolutely sure you\nwant to ban all swapping, use \"swapoff\"?\n\nI'd trust linux to handle swap/cache sensibly. Eventually, become involved\nwith kernel hacking ;-)\n\n-- \nNotice of Confidentiality: This email is sent unencrypted over the network,\nand may be stored on several email servers; it can be read by third parties\nas easy as a postcard. Do not rely on email for confidential information.\n", "msg_date": "Wed, 31 Aug 2005 21:22:17 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping" }, { "msg_contents": "[Tobias Brox - Wed at 09:22:17PM +0200]\n> I'd trust linux to handle swap/cache sensibly. 
Eventually, become involved\n> with kernel hacking ;-)\n\nOf course, there are also some files in /proc/sys/vm that you may want to\npeek into, for tuning the swapping. Particularly, at later 2.6-kernels (I'm\nrunning 2.6.12) you have the file /proc/sys/vm/swappiness, where the number\nshould be some percentage. I'm not completely sure how it works, but I\nsuppose that the higher you set it, the more likely it is to swap out \nmemory not beeing used. I think the default setting is probably sane, but\nyou may want to google a bit about it.\n\n-- \nNotice of Confidentiality: This email is sent unencrypted over the network,\nand may be stored on several email servers; it can be read by third parties\nas easy as a postcard. Do not rely on email for confidential information.\n", "msg_date": "Wed, 31 Aug 2005 21:33:17 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping" } ]
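Item 6 in the list above (pg_stat_activity) is often the quickest way to tell whether any memory pressure is coming from Postgres at all. A minimal check for 7.4/8.0, assuming stats_command_string is enabled (the view's columns were renamed in much later releases):

-- one row per backend: who is connected and what each one is running
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start;

A pile of idle connections or a few backends running large sorts points at the database; otherwise the handful of swapped-out pages is probably another process, as Tom says.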
[ { "msg_contents": "Hi,\n\nI am currently trying to speed up the insertion of bulk loads to my database. I have fiddled with all of the parameters that I have seen suggested(aka checkpoint_segments, checkpoint_timeout, maintinence_work_mem, and shared buffers) with no success. I even turned off fysnc with no effect so I am pretty sure the biggest problem is that the DB is CPU limited at the moment because of the rather weak machine that postmaster is running on(Athlon 2400+ xp with 512 RAM), but that will change in the future so I am trying to get performance increases that don't involve changing the machine at the moment.\n\nI am currently inserting into the database through lipqxx's C++ interface. I am using prepared statements that perform regular inserts. I would like to use COPY FROM since I have read so much about its increased performance with respect to INSERT, but I am not sure how to use it in my case. So let me give you an idea on how the tables are laid out. \n\nThe real DB has more tables, but lets say for the sake of argument I have 3 tables; TB1, TB2, TB3. Lets say that TB1 has a primary key PK1 and a unique identifier column(type text) UK1 that has an index on it. TB2 then has a PK2, a UK2(type text) of its own with an index, and a foreign key FK2 that points to TB1's PK1. TB3 has a PK3 and a FK3 that points to FK2. \nTB1 TB2 TB3\n-------------- ------------------------------- ----------------------\nPK1, UK1 PK2, UK2, FK2(PK1) PK3, FK3(PK2)\n\nNow in lipqxx I am parsing an input list of objects that are then written to these tables. Each object may produce one row in TB1, one row in TB2, and one row in TB3. The UK1 and UK2 indentifiers are used to prevent duplicate entries for TB1 and TB2 respectively. I know COPY FROM obeys these unique checks; however, my problem is the FKs. So lets say I try to insert a row into TB1. If it is unique on UK1 then it inserts a new row with some new primary key int4 identifier and if it is a duplicate then no insert is done but the already existing row's primary key identifier is returned. This identifier(duplicate or not) is used when populating TB2's row as the FK2 identifier. The row that is to be inserted into TB2 needs the primary key indentifier from the result of the attempted insert into TB1. Similarily the insert into TB3 needs the result of the pk indentifier of the attempted insert into TB2. Once that is done then I move on to parsing the next object for insertion into the 3 tables.\n\nSo lets say I want to insert a list of objects using COPY FROM... whats the way to do it? How can I at the very least get a list of the primary keys of TB1(newly inserted rows or from already existings row) returned from the COPY FROM insert into TB1 so I can use them for the COPY FROM insert into TB2 and so on? Is there a better way to do this?\n\nP.S. I am going to setup autovacuum for these bulk loads. My question though is why for bulkloads is VACUUM useful? I understand that it frees up dead rows as a result of UPDATE and such, but where are the dead rows created from plain INSERTS?\n\nThanks,\nMorgan\n", "msg_date": "Wed, 31 Aug 2005 12:17:38 -0700", "msg_from": "\"Morgan Kita\" <[email protected]>", "msg_from_op": true, "msg_subject": "Big question on insert performance/using COPY FROM" }, { "msg_contents": "Morgan Kita wrote:\n> Hi,\n> \n> I am currently trying to speed up the insertion of bulk loads to my\n> database. 
I have fiddled with all of the parameters that I have seen\n> suggested(aka checkpoint_segments, checkpoint_timeout,\n> maintinence_work_mem, and shared buffers) with no success. I even\n> turned off fysnc with no effect so I am pretty sure the biggest\n> problem is that the DB is CPU limited at the moment because of the\n> rather weak machine that postmaster is running on(Athlon 2400+ xp\n> with 512 RAM)\n\nDon't be pretty sure, be abolutely sure. What do your various \nsystem-load figures show? Windows has a system performance monitoring \ntool that can show CPU/Memory/Disk IO, and *nix tools have vmstat or iostat.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 01 Sep 2005 11:45:25 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big question on insert performance/using COPY FROM" } ]
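One common way to keep COPY speed while still resolving the foreign keys described above is to COPY into a key-less staging table and then do the parent-key lookups in one set-based INSERT. This is only a sketch using the thread's TB1/TB2 naming; the column types, file path and staging table are invented, and it assumes TB2's primary key comes from a default or sequence:

-- 1. bulk-load the raw rows, no keys resolved yet
CREATE TEMP TABLE stage_tb2 (uk2 text, uk1 text);
COPY stage_tb2 FROM '/tmp/tb2.dat';

-- 2. resolve FK2 by joining to TB1 on its unique identifier,
--    skipping rows whose UK2 already exists in TB2
INSERT INTO tb2 (uk2, fk2)
SELECT DISTINCT s.uk2, t1.pk1
FROM stage_tb2 s
JOIN tb1 t1 ON t1.uk1 = s.uk1
WHERE NOT EXISTS (SELECT 1 FROM tb2 t2 WHERE t2.uk2 = s.uk2);

The same pattern repeats for TB3 against TB2. The per-row "insert and hand back the key" round trip disappears, which is usually where the CPU time goes on a machine like the one described.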
[ { "msg_contents": "Hi again,\n\nfirst I want to say ***THANK YOU*** for everyone who kindly shared their \nthoughts on my hardware problems. I really appreciate it. I started to \nlook for a new server and I am quite sure we'll get a serious hardware \n\"update\". As suggested by some people I would like now to look closer at \npossible algorithmic improvements.\n\nMy application basically imports Apache log files into a Postgres \ndatabase. Every row in the log file gets imported in one of three (raw \ndata) tables. My columns are exactly as in the log file. The import is \nrun approx. every five minutes. We import about two million rows a month.\n\nBetween 30 and 50 users are using the reporting at the same time.\n\nBecause reporting became so slow, I did create a reporting table. In \nthat table data is aggregated by dropping time (date is preserved), ip, \nreferer, user-agent. And although it breaks normalization some data from \na master table is copied, so no joins are needed anymore.\n\nAfter every import the data from the current day is deleted from the \nreporting table and recalculated from the raw data table.\n\n\nIs this description understandable? If so\n\nWhat do you think of this approach? Are there better ways to do it? Is \nthere some literature you recommend reading?\n\nTIA\n\nUlrich\n\n", "msg_date": "Thu, 01 Sep 2005 15:25:37 +0200", "msg_from": "Ulrich Wisser <[email protected]>", "msg_from_op": true, "msg_subject": "Need for speed 3" }, { "msg_contents": "Ulrich,\n\nOn 9/1/05 6:25 AM, \"Ulrich Wisser\" <[email protected]> wrote:\n\n> My application basically imports Apache log files into a Postgres\n> database. Every row in the log file gets imported in one of three (raw\n> data) tables. My columns are exactly as in the log file. The import is\n> run approx. every five minutes. We import about two million rows a month.\n\nBizgres Clickstream does this job using an ETL (extract transform and load)\nprocess to transform the weblogs into an optimized schema for reporting.\n \n> After every import the data from the current day is deleted from the\n> reporting table and recalculated from the raw data table.\n\nThis is something the optimized ETL in Bizgres Clickstream also does well.\n \n> What do you think of this approach? Are there better ways to do it? Is\n> there some literature you recommend reading?\n\nI recommend the Bizgres Clickstream docs, you can get it from Bizgres CVS,\nand there will shortly be a live html link on the website.\n\nBizgres is free - it also improves COPY performance by almost 2x, among\nother enhancements.\n\n- Luke \n\n\n", "msg_date": "Thu, 01 Sep 2005 09:37:53 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed 3" } ]
[ { "msg_contents": "Your HD raw IO rate seems fine, so the problem is not likely to be \nwith the HDs.\n\nThat consistent ~10x increase in how long it takes to do an import or \na select is noteworthy.\n\nThis \"smells\" like an interconnect problem. Was the Celeron locally \nconnected to the HDs while the new Xeons are network \nconnected? Getting 10's or even 100's of MBps throughput out of \nlocal storage is much easier than it is to do over a network. 1GbE \nis required if you want HDs to push 72.72MBps over a network, and not \neven one 10GbE line will allow you to match local buffered IO of \n1885.34MBps. What size are those network connects (Server A <-> \nstorage, Server B <-> storage, Server A <-> Server B)?\n\nRon Peacetree\n\n\nAt 10:16 AM 9/1/2005, Ernst Einstein wrote:\n\n>I've set up a Package Cluster ( Fail-Over Cluster ) on our two HP \n>DL380 G4 with MSA Storage G2.( Xeon 3,4Ghz, 6GB Ram, 2x 36GB@15rpm- \n>Raid1). The system is running under Suse Linux Enterprise Server.\n>\n>My problem is, that the performance is very low. On our old Server ( \n>Celeron 2Ghz with 2 GB of Ram ) an import of our Data takes about 10\n>minutes. ( 1,1GB data ). One of the DL380 it takes more than 90 minutes...\n>Selects response time have also been increased. Celeron 3 sec, Xeon 30-40sec.\n>\n>I'm trying to fix the problem for two day's now, googled a lot, but \n>i don't know what to do.\n>\n>Top says, my CPU spends ~50% time with wait io.\n>\n>top - 14:07:34 up 22 min, 3 users, load average: 1.09, 1.04, 0.78\n>Tasks: 74 total, 3 running, 71 sleeping, 0 stopped, 0 zombie\n>Cpu(s): 50.0% us, 5.0% sy, 0.0% ni, 0.0% id, 45.0% wa, 0.0% hi, 0.0% si\n>Mem: 6050356k total, 982004k used, 5068352k free, 60300k buffers\n>Swap: 2097136k total, 0k used, 2097136k free, 786200k cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU \n> %MEM TIME+COMMAND\n> 9939 postgres 18 0 254m 143m 140m \n> R 49.3 2.4 8:35.43 postgres:postgres plate [local] \n> INSERT\n> 9938 postgres 16 0 13720 1440 1120 \n> S 4.9 0.0 0:59.08 psql -d plate -f \n> dump.sql\n>10738 root 15 0 3988 1120 840 \n>R 4.9 0.0 0:00.05 top -d \n>0.2\n> 1 root 16 0 640 264 216 \n> S 0.0 0.0 0:05.03 \n> init[3]\n> 2 root 34 19 0 0 0 \n> S 0.0 0.0 0:00.00 [ksoftirqd/0]\n>\n>vmstat 1:\n>\n>ClusterNode2 root $ vmstat 1\n>procs -----------memory---------- ---swap-- -----io---- --system------cpu----\n> r b swpd free buff cache si so bi bo \n> in cs us sy id wa\n> 1 0 0 5032012 60888 821008 0 0 216 6938 1952 5049 \n> 40 8 15 37\n> 0 1 0 5031392 60892 821632 0 0 0 8152 \n> 2126 5725 45 6 0 49\n> 0 1 0 5030896 60900 822144 0 0 0 8124 \n> 2052 5731 46 6 0 47\n> 0 1 0 5030400 60908 822768 0 0 0 8144 \n> 2124 5717 44 7 0 50\n> 1 0 0 5029904 60924 823272 0 0 0 8304 \n> 2062 5763 43 7 0 49\n>\n>I've read (2004), that Xeon may have problems with content switching \n>- is the problem still existing? 
Can I do something to minimize the\n>problem?\n>\n>\n>postgresql.conf:\n>\n>shared_buffers = 28672\n>effective_cache_size = 400000\n>random_page_cost = 2\n>\n>\n>shmall & shmmax are set to 268435456\n>\n>hdparm:\n>\n>ClusterNode2 root $ hdparm -tT /dev/cciss/c0d0p1\n>\n>/dev/cciss/c0d0p1:\n>Timing buffer-cache reads: 3772 MB in 2.00 seconds = 1885.34 MB/sec\n>Timing buffered disk reads: 150 MB in 2.06 seconds = 72.72 MB/sec\n\n\n\n", "msg_date": "Thu, 01 Sep 2005 09:27:49 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor performance on HP Package Cluster" }, { "msg_contents": "Hi!\n\nI've set up a Package Cluster ( Fail-Over Cluster ) on our two HP DL380\nG4 with MSA Storrage G2.( Xeon 3,4Ghz, 6GB Ram, 2x 36GB@15rpm- Raid1)\nThe system is running under Suse Linux Enterprise Server.\n\nMy problem is, that the performance is very low. On our old Server\n( Celeron 2Ghz with 2 GB of Ram ) an import of our Data takes about 10\nminutes. ( 1,1GB data )\nOne of the DL380 it takes more than 90 minutes...\nSelects response time have also been increased. Celeron 3 sec, Xeon\n30-40sec.\n\n I'm trying to fix the problem for two day's now, googled a lot, but i\ndon't know what to do.\n\nTop says, my CPU spends ~50% time with wait io.\n\ntop - 14:07:34 up 22 min, 3 users, load average: 1.09, 1.04, 0.78\nTasks: 74 total, 3 running, 71 sleeping, 0 stopped, 0 zombie\nCpu(s): 50.0% us, 5.0% sy, 0.0% ni, 0.0% id, 45.0% wa, 0.0% hi,\n0.0% si\nMem: 6050356k total, 982004k used, 5068352k free, 60300k buffers\nSwap: 2097136k total, 0k used, 2097136k free, 786200k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND \n 9939 postgres 18 0 254m 143m 140m R 49.3 2.4 8:35.43 postgres:\npostgres plate [local] INSERT \n 9938 postgres 16 0 13720 1440 1120 S 4.9 0.0 0:59.08 psql -d\nplate -f dump.sql \n10738 root 15 0 3988 1120 840 R 4.9 0.0 0:00.05 top -d\n0.2 \n 1 root 16 0 640 264 216 S 0.0 0.0 0:05.03 init\n[3] \n 2 root 34 19 0 0 0 S 0.0 0.0 0:00.00\n[ksoftirqd/0] \n\nvmstat 1:\n\nClusterNode2 root $ vmstat 1\nprocs -----------memory---------- ---swap-- -----io---- --system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 1 0 0 5032012 60888 821008 0 0 216 6938 1952 5049 40\n8 15 37\n 0 1 0 5031392 60892 821632 0 0 0 8152 2126 5725 45\n6 0 49\n 0 1 0 5030896 60900 822144 0 0 0 8124 2052 5731 46\n6 0 47\n 0 1 0 5030400 60908 822768 0 0 0 8144 2124 5717 44\n7 0 50\n 1 0 0 5029904 60924 823272 0 0 0 8304 2062 5763 43\n7 0 49\n\nI've read (2004), that Xeon may have problems with content switching -\nis the problem still existing? Can I do something to minimize the\nproblem?\n\n\npostgresql.conf:\n\nshared_buffers = 28672\neffective_cache_size = 400000 \nrandom_page_cost = 2 \n\n\nshmall & shmmax are set to 268435456\n\nhdparm:\n\nClusterNode2 root $ hdparm -tT /dev/cciss/c0d0p1\n\n/dev/cciss/c0d0p1:\nTiming buffer-cache reads: 3772 MB in 2.00 seconds = 1885.34 MB/sec\nTiming buffered disk reads: 150 MB in 2.06 seconds = 72.72 MB/sec\n\ngreetings Ernst\n\n\n\n\n", "msg_date": "Thu, 01 Sep 2005 16:16:07 +0200", "msg_from": "Ernst Einstein <[email protected]>", "msg_from_op": false, "msg_subject": "Poor performance on HP Package Cluster" }, { "msg_contents": "Are you using the built-in HP SmartArray RAID/SCSI controllers? 
If so, that\ncould be your problem, they are known to have terrible and variable\nperformance with Linux.\n\nThe only good fix is to add a simple SCSI controller to your system (HP\nsells them) and stay away from hardware RAID.\n\n- Luke \n\n\nOn 9/1/05 7:16 AM, \"Ernst Einstein\" <[email protected]> wrote:\n\n> Hi!\n> \n> I've set up a Package Cluster ( Fail-Over Cluster ) on our two HP DL380\n> G4 with MSA Storrage G2.( Xeon 3,4Ghz, 6GB Ram, 2x 36GB@15rpm- Raid1)\n> The system is running under Suse Linux Enterprise Server.\n> \n> My problem is, that the performance is very low. On our old Server\n> ( Celeron 2Ghz with 2 GB of Ram ) an import of our Data takes about 10\n> minutes. ( 1,1GB data )\n> One of the DL380 it takes more than 90 minutes...\n> Selects response time have also been increased. Celeron 3 sec, Xeon\n> 30-40sec.\n> \n> I'm trying to fix the problem for two day's now, googled a lot, but i\n> don't know what to do.\n> \n> Top says, my CPU spends ~50% time with wait io.\n> \n> top - 14:07:34 up 22 min, 3 users, load average: 1.09, 1.04, 0.78\n> Tasks: 74 total, 3 running, 71 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 50.0% us, 5.0% sy, 0.0% ni, 0.0% id, 45.0% wa, 0.0% hi,\n> 0.0% si\n> Mem: 6050356k total, 982004k used, 5068352k free, 60300k buffers\n> Swap: 2097136k total, 0k used, 2097136k free, 786200k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND \n> 9939 postgres 18 0 254m 143m 140m R 49.3 2.4 8:35.43 postgres:\n> postgres plate [local] INSERT\n> 9938 postgres 16 0 13720 1440 1120 S 4.9 0.0 0:59.08 psql -d\n> plate -f dump.sql\n> 10738 root 15 0 3988 1120 840 R 4.9 0.0 0:00.05 top -d\n> 0.2 \n> 1 root 16 0 640 264 216 S 0.0 0.0 0:05.03 init\n> [3] \n> 2 root 34 19 0 0 0 S 0.0 0.0 0:00.00\n> [ksoftirqd/0] \n> \n> vmstat 1:\n> \n> ClusterNode2 root $ vmstat 1\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 1 0 0 5032012 60888 821008 0 0 216 6938 1952 5049 40\n> 8 15 37\n> 0 1 0 5031392 60892 821632 0 0 0 8152 2126 5725 45\n> 6 0 49\n> 0 1 0 5030896 60900 822144 0 0 0 8124 2052 5731 46\n> 6 0 47\n> 0 1 0 5030400 60908 822768 0 0 0 8144 2124 5717 44\n> 7 0 50\n> 1 0 0 5029904 60924 823272 0 0 0 8304 2062 5763 43\n> 7 0 49\n> \n> I've read (2004), that Xeon may have problems with content switching -\n> is the problem still existing? Can I do something to minimize the\n> problem?\n> \n> \n> postgresql.conf:\n> \n> shared_buffers = 28672\n> effective_cache_size = 400000\n> random_page_cost = 2\n> \n> \n> shmall & shmmax are set to 268435456\n> \n> hdparm:\n> \n> ClusterNode2 root $ hdparm -tT /dev/cciss/c0d0p1\n> \n> /dev/cciss/c0d0p1:\n> Timing buffer-cache reads: 3772 MB in 2.00 seconds = 1885.34 MB/sec\n> Timing buffered disk reads: 150 MB in 2.06 seconds = 72.72 MB/sec\n> \n> greetings Ernst\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Thu, 01 Sep 2005 13:24:30 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on HP Package Cluster" }, { "msg_contents": "Do you have any sources for that information? I am running dual \nSmartArray 6402's in my DL585 and haven't noticed anything poor about \ntheir performance.\n\nOn Sep 1, 2005, at 2:24 PM, Luke Lonergan wrote:\n\n> Are you using the built-in HP SmartArray RAID/SCSI controllers? 
If \n> so, that\n> could be your problem, they are known to have terrible and variable\n> performance with Linux.\n", "msg_date": "Thu, 1 Sep 2005 17:02:49 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on HP Package Cluster" }, { "msg_contents": "Dan,\n\nOn 9/1/05 4:02 PM, \"Dan Harris\" <[email protected]> wrote:\n\n> Do you have any sources for that information? I am running dual\n> SmartArray 6402's in my DL585 and haven't noticed anything poor about\n> their performance.\n\nI've previously posted comprehensive results using the 5i and 6xxx series\nsmart arrays using software RAID, HW RAID on 3 different kernels, alongside\nLSI and Adaptec SCSI controllers, and an Adaptec 24xx HW RAID adapter.\nResults with bonnie++ and simple sequential read/write with dd.\n\nI'll post them again here for reference in the next message. Yes, the\nperformance of the SmartArray controllers under Linux was abysmal, even when\nrun by the labs at HP.\n\n- Luke\n\n\n", "msg_date": "Thu, 01 Sep 2005 21:54:34 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on HP Package Cluster" }, { "msg_contents": "Hi !\n\nSorry, for my late answer. I was unavailable for a few days...\n\nYes, I'm using the build-in HP Smart Array Controller. Both, the\ninternal disks, and the external storrage are using the controller.\n\nCan you send me your test results? I'm interested in it.\n\nI've done some testing now. I've imported the data again and tuned the\nDB like I was told in some performance howtos. Now, the database has a\ngood performance - until it has to read from the disks. \n\nGreetings Ernst\n\n\nAm Donnerstag, den 01.09.2005, 21:54 -0700 schrieb Luke Lonergan:\n\n> Dan,\n> \n> On 9/1/05 4:02 PM, \"Dan Harris\" <[email protected]> wrote:\n> \n> > Do you have any sources for that information? I am running dual\n> > SmartArray 6402's in my DL585 and haven't noticed anything poor about\n> > their performance.\n> \n> I've previously posted comprehensive results using the 5i and 6xxx series\n> smart arrays using software RAID, HW RAID on 3 different kernels, alongside\n> LSI and Adaptec SCSI controllers, and an Adaptec 24xx HW RAID adapter.\n> Results with bonnie++ and simple sequential read/write with dd.\n> \n> I'll post them again here for reference in the next message. Yes, the\n> performance of the SmartArray controllers under Linux was abysmal, even when\n> run by the labs at HP.\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nErnst Einstein <[email protected]>\n\n\n\n\n\n\n\nHi !\n\nSorry, for my late answer. I was unavailable for a few days...\n\nYes, I'm using the build-in HP Smart Array Controller. Both, the internal disks, and the external storrage are using the controller.\n\nCan you send me your test results? I'm interested in it.\n\nI've done some testing now. I've imported the data again and tuned the DB like I was told in some performance howtos. Now, the database has a good performance - until it has to read from the disks. 
\n\nGreetings Ernst\n\n\nAm Donnerstag, den 01.09.2005, 21:54 -0700 schrieb Luke Lonergan:\n\n\nDan,\n\nOn 9/1/05 4:02 PM, \"Dan Harris\" <[email protected]> wrote:\n\n> Do you have any sources for that information? I am running dual\n> SmartArray 6402's in my DL585 and haven't noticed anything poor about\n> their performance.\n\nI've previously posted comprehensive results using the 5i and 6xxx series\nsmart arrays using software RAID, HW RAID on 3 different kernels, alongside\nLSI and Adaptec SCSI controllers, and an Adaptec 24xx HW RAID adapter.\nResults with bonnie++ and simple sequential read/write with dd.\n\nI'll post them again here for reference in the next message. Yes, the\nperformance of the SmartArray controllers under Linux was abysmal, even when\nrun by the labs at HP.\n\n- Luke\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n\n\n\n-- \nErnst Einstein <[email protected]>", "msg_date": "Sun, 04 Sep 2005 17:47:55 +0200", "msg_from": "Ernst Einstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on HP Package Cluster" }, { "msg_contents": "Sure ­ I posted the excel spreadsheet to the list right after my message,\nbut I think it blocks attachments. I¹ll send it to you now privately.\n\nI recommend switching to software RAID 10 or 5 using simple SCSI U320\nadapter(s) from LSI or Adaptec, which you can buy from HP if you must.\n\nCheers,\n\n- Luke \n\n\nOn 9/4/05 8:47 AM, \"Ernst Einstein\" <[email protected]> wrote:\n\n> Hi !\n> \n> Sorry, for my late answer. I was unavailable for a few days...\n> \n> Yes, I'm using the build-in HP Smart Array Controller. Both, the internal\n> disks, and the external storrage are using the controller.\n> \n> Can you send me your test results? I'm interested in it.\n> \n> I've done some testing now. I've imported the data again and tuned the DB like\n> I was told in some performance howtos. Now, the database has a good\n> performance - until it has to read from the disks.\n> \n> Greetings Ernst\n> \n> \n> Am Donnerstag, den 01.09.2005, 21:54 -0700 schrieb Luke Lonergan:\n>> \n>> Dan,\n>> \n>> On 9/1/05 4:02 PM, \"Dan Harris\" <[email protected]> wrote:\n>> \n>>> > Do you have any sources for that information? I am running dual\n>>> > SmartArray 6402's in my DL585 and haven't noticed anything poor about\n>>> > their performance.\n>> \n>> I've previously posted comprehensive results using the 5i and 6xxx series\n>> smart arrays using software RAID, HW RAID on 3 different kernels, alongside\n>> LSI and Adaptec SCSI controllers, and an Adaptec 24xx HW RAID adapter.\n>> Results with bonnie++ and simple sequential read/write with dd.\n>> \n>> I'll post them again here for reference in the next message. Yes, the\n>> performance of the SmartArray controllers under Linux was abysmal, even when\n>> run by the labs at HP.\n>> \n>> - Luke\n>> \n>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n\n\n\n\n\nRe: [PERFORM] Poor performance on HP Package Cluster\n\n\nSure – I posted the excel spreadsheet to the list right after my message, but I think it blocks attachments.  
I’ll send it to you now privately.\n\nI recommend switching to software RAID 10 or 5 using simple SCSI U320 adapter(s) from LSI or Adaptec, which you can buy from HP if you must.\n\nCheers,\n\n- Luke \n\n\nOn 9/4/05 8:47 AM, \"Ernst Einstein\" <[email protected]> wrote:\n\nHi !\n\nSorry, for my late answer. I was unavailable for a few days...\n\nYes, I'm using the build-in HP Smart Array Controller. Both, the internal disks, and the external storrage are using the controller.\n\nCan you send me your test results? I'm interested in it.\n\nI've done some testing now. I've imported the data again and tuned the DB like I was told in some performance howtos. Now, the database has a good performance - until it has to read from the disks. \n\nGreetings Ernst\n\n\nAm Donnerstag, den 01.09.2005, 21:54 -0700 schrieb Luke Lonergan: \n\nDan,\n\nOn 9/1/05 4:02 PM, \"Dan Harris\" <[email protected]> wrote:\n\n> Do you have any sources for that information?  I am running dual\n> SmartArray 6402's in my DL585 and haven't noticed anything poor about\n> their performance.\n\nI've previously posted comprehensive results using the 5i and 6xxx series\nsmart arrays using software RAID, HW RAID on 3 different kernels, alongside\nLSI and Adaptec SCSI controllers, and an Adaptec 24xx HW RAID adapter.\nResults with bonnie++ and simple sequential read/write with dd.\n\nI'll post them again here for reference in the next message.  Yes, the\nperformance of the SmartArray controllers under Linux was abysmal, even when\nrun by the labs at HP.\n\n- Luke\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n       subscribe-nomail command to [email protected] so that your\n       message can get through to the mailing list cleanly", "msg_date": "Sun, 04 Sep 2005 10:19:53 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance on HP Package Cluster" } ]
[ { "msg_contents": "\nUlrich wrote:\n> Hi again,\n> \n> first I want to say ***THANK YOU*** for everyone who kindly shared\ntheir\n> thoughts on my hardware problems. I really appreciate it. I started to\n> look for a new server and I am quite sure we'll get a serious hardware\n> \"update\". As suggested by some people I would like now to look closer\nat\n> possible algorithmic improvements.\n> \n> My application basically imports Apache log files into a Postgres\n> database. Every row in the log file gets imported in one of three (raw\n> data) tables. My columns are exactly as in the log file. The import is\n> run approx. every five minutes. We import about two million rows a\nmonth.\n> \n> Between 30 and 50 users are using the reporting at the same time.\n> \n> Because reporting became so slow, I did create a reporting table. In\n> that table data is aggregated by dropping time (date is preserved),\nip,\n> referer, user-agent. And although it breaks normalization some data\nfrom\n> a master table is copied, so no joins are needed anymore.\n> \n> After every import the data from the current day is deleted from the\n> reporting table and recalculated from the raw data table.\n> \n\nschemas would be helpful. You may be able to tweak the import table a\nbit and how it moves over to the data tables.\n\nJust a thought: have you considered having apache logs write to a\nprocess that immediately makes insert query(s) to postgresql? \n\nYou could write small C program which executes advanced query interface\ncall to the server.\n\nMerlin\n", "msg_date": "Thu, 1 Sep 2005 09:42:41 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed 3" }, { "msg_contents": "Hi Merlin,\n\n> schemas would be helpful. \n\nright now I would like to know if my approach to the problem makes \nsense. Or if I should rework the whole procedure of import and aggregate.\n\n> Just a thought: have you considered having apache logs write to a\n> process that immediately makes insert query(s) to postgresql? \n\nYes we have considered that, but dismissed the idea very soon. We need \nApache to be as responsive as possible. It's a two server setup with \nload balancer and failover. Serving about ones thousand domains and \ncounting. It needs to be as failsafe as possible and under no \ncircumstances can any request be lost. (The click counting is core \nbusiness and relates directly to our income.)\nThat said it seemed quite save to let Apache write logfiles. And import \nthem later. By that a database downtime wouldn't be mission critical.\n\n\n> You could write small C program which executes advanced query interface\n> call to the server.\n\nHow would that improve performance?\n\nUlrich\n", "msg_date": "Thu, 01 Sep 2005 16:06:18 +0200", "msg_from": "Ulrich Wisser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need for speed 3" } ]
[ { "msg_contents": "> Hi Merlin,\n> > Just a thought: have you considered having apache logs write to a\n> > process that immediately makes insert query(s) to postgresql?\n> \n> Yes we have considered that, but dismissed the idea very soon. We need\n> Apache to be as responsive as possible. It's a two server setup with\n> load balancer and failover. Serving about ones thousand domains and\n> counting. It needs to be as failsafe as possible and under no\n> circumstances can any request be lost. (The click counting is core\n> business and relates directly to our income.)\n> That said it seemed quite save to let Apache write logfiles. And\nimport\n> them later. By that a database downtime wouldn't be mission critical.\n\nhm. well, it may be possible to do this in a fast and safe way but I\nunderstand your reservations here, but I'm going to spout off my opinion\nanyways :).\n\nIf you are not doing this the following point is moot. But take into\nconsideration you could set a very low transaction time out (like .25\nseconds) and siphon log entries off to a text file if your database\nserver gets in trouble. 2 million hits a month is not very high even if\nyour traffic is bursty (there are approx 2.5 million seconds in a\nmonth).\n\nWith a direct linked log file you get up to date stats always and spare\nyourself the dump/load song and dance which is always a headache :(.\nAlso, however you are doing your billing, it will be easier to manage it\nif everything is extracted from pg and not some conglomeration of log\nfiles, *if* you can put 100% faith in your database. When it comes to\npg now, I'm a believer.\n\n> > You could write small C program which executes advanced query\ninterface\n> > call to the server.\n> \n> How would that improve performance?\n\nThe functions I'm talking about are PQexecParams and PQexecPrepared.\nThe query string does not need to be encoded or decoded and is very\nlight on server resources and is very low latency. Using them you could\nget prob. 5000 inserts/sec on a cheap server if you have some type of\nwrite caching in place with low cpu load. \n\nMerlin\n\n\n", "msg_date": "Thu, 1 Sep 2005 11:28:33 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed 3" } ]
[ { "msg_contents": "Ulrich,\n\nLuke cc'd me on his reply and you definitely should have a look at\nBizgres Clickstream. Even if the whole stack doesn't match you needs,\nthough it sounds like it would. The clickstream focused TELL and BizGres\nenhancements could make your life a little easier.\n\nBasically the stack components that you might want to look at first are:\n\nBizGres flavor of PostGreSQL - Enhanced for business intelligence and\ndata warehousing - The www.bizgres.com website can speak to this in more\ndetail.\nClickstream Data Model - Pageview fact table surrounded by various\ndimensions and 2 core staging tables for the cleansed weblog data.\nETL Platform - Contains a weblog sessionizer, cleanser and ETL\ntransformations, which can handle 2-3 million hits without any trouble.\nWith native support for the COPY command, for even greater performance.\nJasperReports - For pixel perfect reporting.\n\nSorry for sounding like I'm in marketing or sales, however I'm not.\n\nCouple of key features that might interest you, considering your email.\nThe weblog parsing component allows for relatively complex cleansing,\nallowing for less data to be written to the DB and therefore increasing\nthroughput. In addition, if you run every 5 minutes there would be no\nneed to truncate the days data and reload, the ETL knows how to connect\nthe data from before. The copy enhancement to postgresql found in\nbizgres, makes a noticeable improvement when loading data.\nThe schema is basically\n\nDimension tables Session, Known Party (If cookies are logged), Page, IP\nAddress, Date, Time, Referrer, Referrer Page.\nFact tables: Pageview, Hit Subset (Not everyone wants all hits).\n\nStaging Tables: Hits (Cleansed hits or just pageviews without surrogate\nkeys), Session (Session data gathered while parsing the log).\n\nRegards\n\nNick\n\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Thursday, September 01, 2005 9:38 AM\nTo: Ulrich Wisser; [email protected]\nCc: Nicholas E. Wakefield; Barry Klawans; Daria Hutchinson\nSubject: Re: [PERFORM] Need for speed 3\n\nUlrich,\n\nOn 9/1/05 6:25 AM, \"Ulrich Wisser\" <[email protected]>\nwrote:\n\n> My application basically imports Apache log files into a Postgres \n> database. Every row in the log file gets imported in one of three (raw\n> data) tables. My columns are exactly as in the log file. The import is\n\n> run approx. every five minutes. We import about two million rows a\nmonth.\n\nBizgres Clickstream does this job using an ETL (extract transform and\nload) process to transform the weblogs into an optimized schema for\nreporting.\n \n> After every import the data from the current day is deleted from the \n> reporting table and recalculated from the raw data table.\n\nThis is something the optimized ETL in Bizgres Clickstream also does\nwell.\n \n> What do you think of this approach? Are there better ways to do it? Is\n\n> there some literature you recommend reading?\n\nI recommend the Bizgres Clickstream docs, you can get it from Bizgres\nCVS, and there will shortly be a live html link on the website.\n\nBizgres is free - it also improves COPY performance by almost 2x, among\nother enhancements.\n\n- Luke \n\n\n\n", "msg_date": "Thu, 1 Sep 2005 10:30:43 -0700", "msg_from": "\"Nicholas E. Wakefield\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need for speed 3" } ]
[ { "msg_contents": "Hi,\n\nI'm having performance issues with a table consisting of 2,043,133 rows. The\nschema is:\n\n\\d address\n Table \"public.address\"\n Column | Type | Modifiers \n----------------------+------------------------+-----------\n postcode_top | character varying(2) | not null\n postcode_middle | character varying(4) | not null\n postcode_bottom | character varying(7) | not null\n postcode | character varying(10) | not null\n property_type | character varying(15) | not null\n sale_type | character varying(10) | not null\n flat_extra | character varying(100) | not null\n number | character varying(100) | not null\n street | character varying(100) | not null\n locality_1 | character varying(100) | not null\n locality_2 | character varying(100) | not null\n city | character varying(100) | not null\n county | character varying(100) | not null\nIndexes:\n \"address_city_index\" btree (city)\n \"address_county_index\" btree (county)\n \"address_locality_1_index\" btree (locality_1)\n \"address_locality_2_index\" btree (locality_2)\n \"address_pc_bottom_index\" btree (postcode_bottom)\n \"address_pc_middle_index\" btree (postcode_middle)\n \"address_pc_top_index\" btree (postcode_top)\n \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n postcode_middle, postcode_bottom)\n \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n \"address_postcode_index\" btree (postcode)\n \"address_property_type_index\" btree (property_type)\n \"address_street_index\" btree (street)\n \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n\nThis is with postgresql 7.4 running on linux 2.6.11 with a 3GHz P4 and a\nSATA harddrive.\n\nQueries such as:\n\nselect locality_2 from address where locality_2 = 'Manchester';\n\nare taking 14 seconds to complete, and this is only 2 years worth of\ndata - we will have up to 15 years (so over 15 million rows).\n\nInterestingly, doing:\nexplain select locality_2 from address where locality_2 = 'Manchester';\ngives\n QUERY PLAN \n----------------------------------------------------------------\n Seq Scan on address (cost=0.00..80677.16 rows=27923 width=12)\n Filter: ((locality_2)::text = 'Manchester'::text)\n\nbut:\nexplain select locality_1 from address where locality_1 = 'Manchester';\ngives\n QUERY PLAN \n----------------------------------------------------------------\n Index Scan using address_locality_1_index on address\n(cost=0.00..69882.18 rows=17708 width=13)\n Index Cond: ((locality_1)::text = 'Manchester'::text)\n\nSadly, using the index makes things worse, the query taking 17 seconds.\n\nlocality_1 has 16650 distinct values and locality_2 has 1156 distinct\nvalues.\n\nWhilst the locality_2 query is in progress, both the disk and the CPU\nare maxed out with the disk constantly reading at 60MB/s and the CPU\nrarely dropping under 100% load.\n\nWith the locality_1 query in progress, the CPU is maxed out but the disk\nis reading at just 3MB/s.\n\nObviously, to me, this is a problem, I need these queries to be under a\nsecond to complete. Is this unreasonable? What can I do to make this \"go\nfaster\"? I've considered normalising the table but I can't work out\nwhether the slowness is in dereferencing the pointers from the index\ninto the table or in scanning the index in the first place. 
And\nnormalising the table is going to cause much pain when inserting values\nand I'm not entirely sure if I see why normalising it should cause a\nmassive performance improvement.\n\nI need to get to the stage where I can run queries such as:\nselect street, locality_1, locality_2, city from address \nwhere (city = 'Nottingham' or locality_2 = 'Nottingham'\n or locality_1 = 'Nottingham')\n and upper(substring(street from 1 for 1)) = 'A' \ngroup by street, locality_1, locality_2, city\norder by street\nlimit 20 offset 0\n\nand have the results very quickly.\n\nAny help most gratefully received (even if it's to say that I should be\nposting to a different mailing list!).\n\nMany thanks,\n\nMatthew\n\n", "msg_date": "Thu, 1 Sep 2005 18:42:31 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Massive performance issues" }, { "msg_contents": "Matthew Sackman <[email protected]> writes:\n> Obviously, to me, this is a problem, I need these queries to be under a\n> second to complete. Is this unreasonable?\n\nYes. Pulling twenty thousand rows at random from a table isn't free.\nYou were pretty vague about your disk hardware, which makes me think\nyou didn't spend a lot of money on it ... and on low-ball hardware,\nthat sort of random access speed just isn't gonna happen.\n\nIf the queries you need are very consistent, you might be able to get\nsome mileage out of CLUSTERing by the relevant index ... but the number\nof indexes you've created makes me think that's not so ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Sep 2005 14:47:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues " }, { "msg_contents": "On Thu, Sep 01, 2005 at 02:47:06PM -0400, Tom Lane wrote:\n> Matthew Sackman <[email protected]> writes:\n> > Obviously, to me, this is a problem, I need these queries to be under a\n> > second to complete. Is this unreasonable?\n> \n> Yes. Pulling twenty thousand rows at random from a table isn't free.\n\nI appreciate that. But I'm surprised by how un-free it seems to be.\nAnd it seems others here have performance I need on similar hardware.\n\n> You were pretty vague about your disk hardware, which makes me think\n> you didn't spend a lot of money on it ... and on low-ball hardware,\n> that sort of random access speed just isn't gonna happen.\n\nWell, this is a development box. But the live box wouldn't be much more\nthan RAID 1 on SCSI 10ks so that should only be a halving of seek time,\nnot the 1000 times reduction I'm after!\n\nIn fact, now I think about it, I have been testing on a 2.4 kernel on a\ndual HT 3GHz Xeon with SCSI RAID array and the performance is only\nmarginally better.\n\n> If the queries you need are very consistent, you might be able to get\n> some mileage out of CLUSTERing by the relevant index ... but the number\n> of indexes you've created makes me think that's not so ...\n\nNo, the queries, whilst in just three distinct forms, will effectively\nbe for fairly random values.\n\nMatthew\n", "msg_date": "Thu, 1 Sep 2005 20:08:08 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "This should be able to run _very_ fast.\n\nAt 01:42 PM 9/1/2005, Matthew Sackman wrote:\n>Hi,\n>\n>I'm having performance issues with a table consisting of 2,043,133 rows. 
The\n>schema is:\n>\n>\\d address\n> Table \"public.address\"\n> Column | Type | Modifiers\n>----------------------+------------------------+-----------\n> postcode_top | character varying(2) | not null\n> postcode_middle | character varying(4) | not null\n> postcode_bottom | character varying(7) | not null\n> postcode | character varying(10) | not null\n> property_type | character varying(15) | not null\n> sale_type | character varying(10) | not null\n> flat_extra | character varying(100) | not null\n> number | character varying(100) | not null\n> street | character varying(100) | not null\n> locality_1 | character varying(100) | not null\n> locality_2 | character varying(100) | not null\n> city | character varying(100) | not null\n> county | character varying(100) | not null\n>Indexes:\n> \"address_city_index\" btree (city)\n> \"address_county_index\" btree (county)\n> \"address_locality_1_index\" btree (locality_1)\n> \"address_locality_2_index\" btree (locality_2)\n> \"address_pc_bottom_index\" btree (postcode_bottom)\n> \"address_pc_middle_index\" btree (postcode_middle)\n> \"address_pc_top_index\" btree (postcode_top)\n> \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> postcode_middle, postcode_bottom)\n> \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n> \"address_postcode_index\" btree (postcode)\n> \"address_property_type_index\" btree (property_type)\n> \"address_street_index\" btree (street)\n> \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n\nIOW, each row takes ~1KB on HD. First suggestion: format your HD to \nuse 8KB pages with 1KB segments. That'll out each row down on HD as \nan atomic unit. 8KB pages also \"play nice\" with pg.\n\nAt 1KB per row, this table takes up ~2.1GB and should fit into RAM \nfairly easily on a decently configured DB server (my _laptop_ has 2GB \nof RAM after all...)\n\nSince you are using ~2.1GB for 2 years worth of data, 15 years worth \nshould take no more than 2.1GB*7.5= 15.75GB.\n\nIf you replace some of those 100 char fields with integers for code \nnumbers and have an auxiliary table for each of those fields mapping \nthe code numbers to the associated 100 char string, you should be \nable to shrink a row considerably. Your target is to have each row \ntake <= 512B. Once a row fits into one 512B sector on HD, there's a \nno point in making it smaller unless you can shrink it enough to fit \n2 rows into one sector (<= 256B). Once two rows fit into one sector, \nthere's no point shrinking a row unless you can make 3 rows fit into \na sector. Etc.\n\nAssuming each 100 char (eg 100B) field can be replaced with a 4B int, \neach row could be as small as 76B. That makes 85B per row the goal \nas it would allow you to fit 6 rows per 512B HD sector. So in the \nbest case your table will be 12x smaller in terms of real HD space.\n\nFitting one (or more) row(s) into one sector will cut down the real \nspace used on HD for the table to ~7.88GB (or 1.32GB in the best \ncase). Any such streamlining will make it faster to load, make the \nworking set that needs to be RAM for best performance smaller, etc, etc.\n\n\n>This is with postgresql 7.4 running on linux 2.6.11 with a 3GHz P4 \n>and a SATA harddrive.\n\nUpgrade pg to 8.0.3 and make sure you have enough RAM for your real \nday to day load. Frankly, RAM is so cheap ($75-$150/GB), I'd just \nupgrade the machine to 4GB as a matter of course. 
P4's have PAE, so \nif your mainboard can hold it, put more than 4GB of RAM in if you \nfind you need it.\n\nSince you are describing your workload as being predominantly reads, \nyou can get away with far less HD capability as long as you crank up \nRAM high enough to hold the working set of the DB. The indications \nfrom the OP are that you may very well be able to hold the entire DB \nin RAM. That's a big win whenever you can achieve it.\n\nAfter these steps, there may still be performance issues that need \nattention, but the DBMS should be _much_ faster.\n\nRon Peacetree\n\n\n", "msg_date": "Thu, 01 Sep 2005 15:43:01 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 06:42:31PM +0100, Matthew Sackman wrote:\n> flat_extra | character varying(100) | not null\n> number | character varying(100) | not null\n> street | character varying(100) | not null\n> locality_1 | character varying(100) | not null\n> locality_2 | character varying(100) | not null\n> city | character varying(100) | not null\n> county | character varying(100) | not null\n\nHaving these fixed probably won't give you any noticeable improvements;\nunless there's something natural about your data setting 100 as a hard limit,\nyou could just as well drop these.\n\n> \"address_city_index\" btree (city)\n> \"address_county_index\" btree (county)\n> \"address_locality_1_index\" btree (locality_1)\n> \"address_locality_2_index\" btree (locality_2)\n> \"address_pc_bottom_index\" btree (postcode_bottom)\n> \"address_pc_middle_index\" btree (postcode_middle)\n> \"address_pc_top_index\" btree (postcode_top)\n> \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> postcode_middle, postcode_bottom)\n> \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n> \"address_postcode_index\" btree (postcode)\n> \"address_property_type_index\" btree (property_type)\n> \"address_street_index\" btree (street)\n> \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n\nWow, that's quite a lof of indexes... but your problem isn't reported as\nbeing in insert/update/delete.\n\n> This is with postgresql 7.4 running on linux 2.6.11 with a 3GHz P4 and a\n> SATA harddrive.\n\n8.0 or 8.1 might help you some -- better (and more!) disks will probably help\na _lot_.\n\n> Queries such as:\n> \n> select locality_2 from address where locality_2 = 'Manchester';\n> \n> are taking 14 seconds to complete, and this is only 2 years worth of\n> data - we will have up to 15 years (so over 15 million rows).\n\nAs Tom pointed out; you're effectively doing random searches here, and using\nCLUSTER might help. Normalizing your data to get smaller rows (and avoid\npossibly costly string comparisons if your strcoll() is slow) will probably\nalso help.\n\n> I need to get to the stage where I can run queries such as:\n> select street, locality_1, locality_2, city from address \n> where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> or locality_1 = 'Nottingham')\n> and upper(substring(street from 1 for 1)) = 'A' \n> group by street, locality_1, locality_2, city\n> order by street\n> limit 20 offset 0\n\nThis might be a lot quicker than pulling all the records like in your example\nqueries...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 1 Sep 2005 22:09:30 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Ron <[email protected]> writes:\n> ... Your target is to have each row take <= 512B.\n\nRon, are you assuming that the varchar fields are blank-padded or\nsomething? I think it's highly unlikely that he's got more than a\ncouple hundred bytes per row right now --- at least if the data is\nwhat it sounds like.\n\nThe upthread comment about strcoll() set off some alarm bells in my head.\nIf the database wasn't initdb'd in C locale already, try making it so.\nAlso, use a single-byte encoding if you can (LatinX is fine, Unicode not).\n\n> Upgrade pg to 8.0.3 and make sure you have enough RAM for your real \n> day to day load.\n\nNewer PG definitely better. Some attention to the configuration\nparameters might also be called for. I fear though that these things\nare probably just chipping at the margins ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Sep 2005 16:25:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues " }, { "msg_contents": "Matthew Sackman schrieb:\n\n>Hi,\n>\n>I'm having performance issues with a table consisting of 2,043,133 rows. The\n>schema is:\n>\n>\\d address\n> Table \"public.address\"\n> Column | Type | Modifiers \n>----------------------+------------------------+-----------\n> postcode_top | character varying(2) | not null\n> postcode_middle | character varying(4) | not null\n> postcode_bottom | character varying(7) | not null\n> postcode | character varying(10) | not null\n> property_type | character varying(15) | not null\n> sale_type | character varying(10) | not null\n> flat_extra | character varying(100) | not null\n> number | character varying(100) | not null\n> street | character varying(100) | not null\n> locality_1 | character varying(100) | not null\n> locality_2 | character varying(100) | not null\n> city | character varying(100) | not null\n> county | character varying(100) | not null\n>Indexes:\n> \"address_city_index\" btree (city)\n> \"address_county_index\" btree (county)\n> \"address_locality_1_index\" btree (locality_1)\n> \"address_locality_2_index\" btree (locality_2)\n> \"address_pc_bottom_index\" btree (postcode_bottom)\n> \"address_pc_middle_index\" btree (postcode_middle)\n> \"address_pc_top_index\" btree (postcode_top)\n> \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> postcode_middle, postcode_bottom)\n> \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n> \"address_postcode_index\" btree (postcode)\n> \"address_property_type_index\" btree (property_type)\n> \"address_street_index\" btree (street)\n> \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n>\n>This is with postgresql 7.4 running on linux 2.6.11 with a 3GHz P4 and a\n>SATA harddrive.\n>\n>Queries such as:\n>\n>select locality_2 from address where locality_2 = 'Manchester';\n>\n>are taking 14 seconds to complete, and this is only 2 years worth of\n>data - we will have up to 15 years (so over 15 million rows).\n>\n>Interestingly, doing:\n>explain select locality_2 from address where locality_2 = 'Manchester';\n>gives\n> QUERY PLAN \n>----------------------------------------------------------------\n> Seq Scan on address (cost=0.00..80677.16 rows=27923 width=12)\n> Filter: ((locality_2)::text = 'Manchester'::text)\n>\n>but:\n>explain select locality_1 from address where locality_1 = 'Manchester';\n>gives\n> QUERY PLAN 
\n>----------------------------------------------------------------\n> Index Scan using address_locality_1_index on address\n>(cost=0.00..69882.18 rows=17708 width=13)\n> Index Cond: ((locality_1)::text = 'Manchester'::text)\n>\n>Sadly, using the index makes things worse, the query taking 17 seconds.\n>\n>locality_1 has 16650 distinct values and locality_2 has 1156 distinct\n>values.\n>\n>Whilst the locality_2 query is in progress, both the disk and the CPU\n>are maxed out with the disk constantly reading at 60MB/s and the CPU\n>rarely dropping under 100% load.\n>\n>With the locality_1 query in progress, the CPU is maxed out but the disk\n>is reading at just 3MB/s.\n>\n>Obviously, to me, this is a problem, I need these queries to be under a\n>second to complete. Is this unreasonable? What can I do to make this \"go\n>faster\"? I've considered normalising the table but I can't work out\n>whether the slowness is in dereferencing the pointers from the index\n>into the table or in scanning the index in the first place. And\n>normalising the table is going to cause much pain when inserting values\n>and I'm not entirely sure if I see why normalising it should cause a\n>massive performance improvement.\n>\n> \n>\nJust an idea: When you do not want to adapt your application to use a\nnormalized database you may push the data into normalized table using\ntriggers.\nExample:\nAdd a table city with column id, name\nand add a column city_id to your main table.\nIn this case you have redundant data in your main table (locality_1 and\ncity_id) but you could make queries to the city table when searching for\n'Man%'\n\n-- \nBest Regards / Viele Gr��e\n\nSebastian Hennebrueder\n\n----\n\nhttp://www.laliluna.de\n\nTutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB \n\nGet support, education and consulting for these technologies.\n\n", "msg_date": "Thu, 01 Sep 2005 22:39:13 +0200", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On 1-9-2005 19:42, Matthew Sackman wrote:\n> Obviously, to me, this is a problem, I need these queries to be under a\n> second to complete. Is this unreasonable? What can I do to make this \"go\n> faster\"? I've considered normalising the table but I can't work out\n> whether the slowness is in dereferencing the pointers from the index\n> into the table or in scanning the index in the first place. And\n> normalising the table is going to cause much pain when inserting values\n> and I'm not entirely sure if I see why normalising it should cause a\n> massive performance improvement.\n\nIn this case, I think normalising will give a major decrease in on-disk \ntable-size of this large table and the indexes you have. If that's the \ncase, that in itself will speed-up all i/o-bound queries quite a bit.\n\nlocality_1, _2, city and county can probably be normalised away without \nmuch problem, but going from varchar's to integers will probably safe \nyou quite a bit of (disk)space.\n\nBut since it won't change the selectivity of indexes, so you won't get \nmore index-scans instead of sequential scans, I suppose.\nI think its not that hard to create a normalized set of tables from this \n data-set (using insert into tablename select distinct ... from address \nand such, insert into address_new (..., city) select ... 
(select cityid \nfrom cities where city = address.city) from address)\nSo its at least relatively easy to figure out the performance \nimprovement from normalizing the dataset a bit.\n\nIf you want to improve your hardware, have a look at the Western Digital \nRaptor-series SATA disks, they are fast scsi-like SATA drives. You may \nalso have a look at the amount of memory available, to allow caching \nthis (entire) table.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 01 Sep 2005 22:54:45 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "At 04:25 PM 9/1/2005, Tom Lane wrote:\n>Ron <[email protected]> writes:\n> > ... Your target is to have each row take <= 512B.\n>\n>Ron, are you assuming that the varchar fields are blank-padded or \n>something? I think it's highly unlikely that he's got more than a \n>couple hundred bytes per row right now --- at least if the data is \n>what it sounds like.\n\nAs it stands, each row will take 55B - 748B and each field is \nvariable in size up to the maximums given in the OP's schema. Since \npg uses an underlying OS FS, and not a native one, there will be \nextra FS overhead no matter what we do, particularly to accommodate \nsuch flexibility... The goal is to minimize overhead and maximize \nregularity in layout. The recipe I know for HD IO speed is in \nkeeping the data small, regular, and as simple as possible.\n\nEven better, if the table(s) can be made RAM resident, then searches, \neven random ones, can be very fast. He wants a 1000x performance \nimprovement. Going from disk resident to RAM resident should help \ngreatly in attaining that goal.\n\nIn addition, by replacing as many variable sized text strings as \npossible with ints, the actual compare functions he used as examples \nshould run faster as well.\n\n\n>The upthread comment about strcoll() set off some alarm bells in my \n>head. If the database wasn't initdb'd in C locale already, try \n>making it so. Also, use a single-byte encoding if you can (LatinX \n>is fine, Unicode not).\n\nGood thoughts I hadn't had.\n\n\n> > Upgrade pg to 8.0.3 and make sure you have enough RAM for your real\n> > day to day load.\n>\n>Newer PG definitely better. Some attention to the configuration \n>parameters might also be called for. I fear though that these \n>things are probably just chipping at the margins ...\n\nI don't expect 8.0.3 to be a major performance improvement. I do \nexpect it to be a major _maintenance_ improvement for both him and \nthose of us trying to help him ;-)\n\nThe performance difference between not having the working set of the \nDB fit into RAM during ordinary operation vs having it be so (or \nbetter, having the whole DB fit into RAM during ordinary operation) \nhas been considerably more effective than \"chipping at the margins\" \nIME. Especially so if the HD IO subsystem is wimpy.\n\nRon Peacetree\n\n\n", "msg_date": "Thu, 01 Sep 2005 17:05:33 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues " }, { "msg_contents": "On Thu, Sep 01, 2005 at 10:09:30PM +0200, Steinar H. 
Gunderson wrote:\n> > \"address_city_index\" btree (city)\n> > \"address_county_index\" btree (county)\n> > \"address_locality_1_index\" btree (locality_1)\n> > \"address_locality_2_index\" btree (locality_2)\n> > \"address_pc_bottom_index\" btree (postcode_bottom)\n> > \"address_pc_middle_index\" btree (postcode_middle)\n> > \"address_pc_top_index\" btree (postcode_top)\n> > \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> > postcode_middle, postcode_bottom)\n> > \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n> > \"address_postcode_index\" btree (postcode)\n> > \"address_property_type_index\" btree (property_type)\n> > \"address_street_index\" btree (street)\n> > \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n> \n> Wow, that's quite a lof of indexes... but your problem isn't reported as\n> being in insert/update/delete.\n\nHah, well now that you mention it. Basically, 100,000 rows come in in a\nbulk import every month and the only way I can get it to complete in any\nsane time frame at all is to drop the indexes, do the import and then\nrecreate the indexes. But that's something that I'm OK with - the\nimports don't have to be that fast and whilst important, it's not *the*\ncritical path. Selection from the database is, hence the indexes.\n\n> > This is with postgresql 7.4 running on linux 2.6.11 with a 3GHz P4 and a\n> > SATA harddrive.\n> \n> 8.0 or 8.1 might help you some -- better (and more!) disks will probably help\n> a _lot_.\n\nOk, I did try 8.0 when I started this and found that the server bind\nparameters (both via DBD::Pg (with pg_prepare_server => 1) and via JDBC\n(various versions I tried)) failed - the parameters were clearly not\nbeing substituted. This was Postgresql 8.0 from Debian unstable. That\nwas a couple of weeks ago and I've not been back to check whether its\nbeen fixed. Anyway, because of these problems I dropped back to 7.4.\n\n> > Queries such as:\n> > \n> > select locality_2 from address where locality_2 = 'Manchester';\n> > \n> > are taking 14 seconds to complete, and this is only 2 years worth of\n> > data - we will have up to 15 years (so over 15 million rows).\n> \n> As Tom pointed out; you're effectively doing random searches here, and using\n> CLUSTER might help. Normalizing your data to get smaller rows (and avoid\n> possibly costly string comparisons if your strcoll() is slow) will probably\n> also help.\n\nOk, so you're saying that joining the address table into an address_city\ntable (the obvious normalization) will help here?\n\nThe locale settings in postgresql.conf all have en_GB and a \\l shows\nencoding of LATIN1. So I don't think I've set anything to UTF8 or such\nlike.\n\n> > I need to get to the stage where I can run queries such as:\n> > select street, locality_1, locality_2, city from address \n> > where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> > or locality_1 = 'Nottingham')\n> > and upper(substring(street from 1 for 1)) = 'A' \n> > group by street, locality_1, locality_2, city\n> > order by street\n> > limit 20 offset 0\n> \n> This might be a lot quicker than pulling all the records like in your example\n> queries...\n\nYes, that certainly does seem to be the case - around 4 seconds. 
But I\nneed it to be 10 times faster (or thereabouts) otherwise I have big\nproblems!\n\nMany thanks for all the advice so far.\n\nMatthew\n\n", "msg_date": "Thu, 1 Sep 2005 22:06:26 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 10:54:45PM +0200, Arjen van der Meijden wrote:\n> On 1-9-2005 19:42, Matthew Sackman wrote:\n> >Obviously, to me, this is a problem, I need these queries to be under a\n> >second to complete. Is this unreasonable? What can I do to make this \"go\n> >faster\"? I've considered normalising the table but I can't work out\n> >whether the slowness is in dereferencing the pointers from the index\n> >into the table or in scanning the index in the first place. And\n> >normalising the table is going to cause much pain when inserting values\n> >and I'm not entirely sure if I see why normalising it should cause a\n> >massive performance improvement.\n> \n> In this case, I think normalising will give a major decrease in on-disk \n> table-size of this large table and the indexes you have. If that's the \n> case, that in itself will speed-up all i/o-bound queries quite a bit.\n\nWell that's the thing - on the queries where it decides to use the index\nit only reads at around 3MB/s and the CPU is maxed out, whereas when it\ndoesn't use the index, the disk is being read at 60MB/s. So when it\ndecides to use an index, I don't seem to be IO bound at all. Or at least\nthat's the way it seems to me.\n\n> locality_1, _2, city and county can probably be normalised away without \n> much problem, but going from varchar's to integers will probably safe \n> you quite a bit of (disk)space.\n\nSure, that's what I've been considering today.\n\n> But since it won't change the selectivity of indexes, so you won't get \n> more index-scans instead of sequential scans, I suppose.\n> I think its not that hard to create a normalized set of tables from this \n> data-set (using insert into tablename select distinct ... from address \n> and such, insert into address_new (..., city) select ... (select cityid \n> from cities where city = address.city) from address)\n> So its at least relatively easy to figure out the performance \n> improvement from normalizing the dataset a bit.\n\nYeah, the initial creation isn't too painful but when adding rows into\nthe address table it gets more painful. However, as I've said elsewhere,\nthe import isn't the critical path so I can cope with that pain,\npossibly coding around it in a stored proceedure and triggers as\nsuggested.\n\n> If you want to improve your hardware, have a look at the Western Digital \n> Raptor-series SATA disks, they are fast scsi-like SATA drives. You may \n> also have a look at the amount of memory available, to allow caching \n> this (entire) table.\n\nWell I've got 1GB of RAM, but from analysis of its use, a fair amount\nisn't being used. About 50% is actually in use by applications and about\nhalf of the rest is cache and the rest isn't being used. Has this to do\nwith the max_fsm_pages and max_fsm_relations settings? I've pretty much\nnot touched the configuration and it's the standard Debian package.\n\nMatthew\n", "msg_date": "Thu, 1 Sep 2005 22:13:59 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "> Well I've got 1GB of RAM, but from analysis of its use, a fair amount\n> isn't being used. 
About 50% is actually in use by applications and about\n> half of the rest is cache and the rest isn't being used. Has this to do\n> with the max_fsm_pages and max_fsm_relations settings? I've pretty much\n> not touched the configuration and it's the standard Debian package.\n\nMatt, have a look at the annotated postgresql.conf for 7.x here:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nIf you have the default settings, you're likely hampering yourself quite a \nbit. You probably care about shared_buffers, sort_mem, \nvacuum_mem, max_fsm_pages, effective_cache_size\n\nAlso, you may want to read the PostgreSQL 8.0 Performance Checklist. Even \nthough it's for 8.0, it'll give you good ideas on what to change in 7.4. You \ncan find it here: http://www.powerpostgresql.com/PerfList/\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Sep 2005 14:26:47 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 10:13:59PM +0100, Matthew Sackman wrote:\n> Well that's the thing - on the queries where it decides to use the index\n> it only reads at around 3MB/s and the CPU is maxed out, whereas when it\n> doesn't use the index, the disk is being read at 60MB/s. So when it\n> decides to use an index, I don't seem to be IO bound at all. Or at least\n> that's the way it seems to me.\n\nYou are I/O bound; your disk is doing lots and lots of seeks. The SATA\ninterface is not the bottleneck; the disk's ability to rotate and move its\nheads is.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 1 Sep 2005 23:52:45 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 02:26:47PM -0700, Jeff Frost wrote:\n> >Well I've got 1GB of RAM, but from analysis of its use, a fair amount\n> >isn't being used. About 50% is actually in use by applications and about\n> >half of the rest is cache and the rest isn't being used. Has this to do\n> >with the max_fsm_pages and max_fsm_relations settings? I've pretty much\n> >not touched the configuration and it's the standard Debian package.\n> \n> Matt, have a look at the annotated postgresql.conf for 7.x here:\n> \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> \n> If you have the default settings, you're likely hampering yourself quite a \n> bit. You probably care about shared_buffers, sort_mem, \n> vacuum_mem, max_fsm_pages, effective_cache_size\n\nThat's a useful resource, thanks for the pointer. I'll work through that\ntomorrow.\n\n> Also, you may want to read the PostgreSQL 8.0 Performance Checklist. Even \n> though it's for 8.0, it'll give you good ideas on what to change in 7.4. \n> You can find it here: http://www.powerpostgresql.com/PerfList/\n\nThanks, another good resource. I'll work through that too.\n\nMatthew\n", "msg_date": "Thu, 1 Sep 2005 23:00:07 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 11:52:45PM +0200, Steinar H. 
Gunderson wrote:\n> On Thu, Sep 01, 2005 at 10:13:59PM +0100, Matthew Sackman wrote:\n> > Well that's the thing - on the queries where it decides to use the index\n> > it only reads at around 3MB/s and the CPU is maxed out, whereas when it\n> > doesn't use the index, the disk is being read at 60MB/s. So when it\n> > decides to use an index, I don't seem to be IO bound at all. Or at least\n> > that's the way it seems to me.\n> \n> You are I/O bound; your disk is doing lots and lots of seeks. The SATA\n> interface is not the bottleneck; the disk's ability to rotate and move its\n> heads is.\n\nAhh of course (/me hits head against wall). Because I've /seen/ it read\nat 60MB/s I was assuming that if it wasn't reading that fast then I'm\nnot IO bound but of course, it's not reading sequentially. That all\nmakes sense. Been a long day etc... ;-)\n\nMatthew\n", "msg_date": "Thu, 1 Sep 2005 23:01:58 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "At 05:06 PM 9/1/2005, Matthew Sackman wrote:\n>On Thu, Sep 01, 2005 at 10:09:30PM +0200, Steinar H. Gunderson wrote:\n> > > \"address_city_index\" btree (city)\n> > > \"address_county_index\" btree (county)\n> > > \"address_locality_1_index\" btree (locality_1)\n> > > \"address_locality_2_index\" btree (locality_2)\n> > > \"address_pc_bottom_index\" btree (postcode_bottom)\n> > > \"address_pc_middle_index\" btree (postcode_middle)\n> > > \"address_pc_top_index\" btree (postcode_top)\n> > > \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> > > postcode_middle, postcode_bottom)\n> > > \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n> > > \"address_postcode_index\" btree (postcode)\n> > > \"address_property_type_index\" btree (property_type)\n> > > \"address_street_index\" btree (street)\n> > > \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n> >\n> > Wow, that's quite a lof of indexes... but your problem isn't reported as\n> > being in insert/update/delete.\n>\n>Hah, well now that you mention it. Basically, 100,000 rows come in in a\n>bulk import every month and the only way I can get it to complete in any\n>sane time frame at all is to drop the indexes, do the import and then\n>recreate the indexes. But that's something that I'm OK with -\n\nFTR, this \"drop the indexes, do <foo>, recreate the indexes\" is \nIndustry Standard Practice for bulk \ninserts/updates/deletes. Regardless of DB product used.\n\n\n> - the imports don't have to be that fast and whilst important, \n> it's not *the*\n>critical path. Selection from the database is, hence the indexes.\n\nA DB _without_ indexes that fits into RAM during ordinary operation \nmay actually be faster than a DB _with_ indexes that does \nnot. Fitting the entire DB into RAM during ordinary operation if at \nall possible should be the first priority with a small data mine-like \napplication such as you've described.\n\nAlso normalization is _not_ always a good thing for data mining like \napps. Having most or everything you need in one place in a compact \nand regular format is usually more effective for data mines than \"Nth \nOrder Normal Form\" optimization to the degree usually found in \ntextbooks using OLTP-like examples.\n\nIndexes are a complication used as a performance enhancing technique \nbecause without them the DB is not performing well enough. 
IME, it's \nusually better to get as much performance as one can from other \naspects of design and _then_ start adding complications. Including \nindexes. Even if you fit the whole DB in RAM, you are very likely to \nneed some indexes; but profile your performance first and then add \nindexes as needed rather than just adding them willy nilly early in \nthe design process.\n\nYou said you had 1GB of RAM on the machine now. That clearly is \ninadequate to your desired performance given what you said about the \nDB. Crank that box to 4GB and tighten up your data structures. Then \nsee where you are.\n\n\n> > > This is with postgresql 7.4 running on linux 2.6.11 with a 3GHz P4 and a\n> > > SATA harddrive.\n> >\n> > 8.0 or 8.1 might help you some -- better (and more!) disks will \n> probably help\n> > a _lot_.\n>\n>Ok, I did try 8.0 when I started this and found that the server bind\n>parameters (both via DBD::Pg (with pg_prepare_server => 1) and via JDBC\n>(various versions I tried)) failed - the parameters were clearly not\n>being substituted. This was Postgresql 8.0 from Debian unstable. That\n>was a couple of weeks ago and I've not been back to check whether its\n>been fixed. Anyway, because of these problems I dropped back to 7.4.\n\nSince I assume you are not going to run anything with the string \n\"unstable\" in its name in production (?!), why not try a decent \nproduction ready distro like SUSE 9.x and see how pg 8.0.3 runs on a \nOS more representative of what you are likely (or at least what is \nsafe...) to run in production?\n\n\n> > > Queries such as:\n> > >\n> > > select locality_2 from address where locality_2 = 'Manchester';\n> > >\n> > > are taking 14 seconds to complete, and this is only 2 years worth of\n> > > data - we will have up to 15 years (so over 15 million rows).\n> >\n> > As Tom pointed out; you're effectively doing random searches \n> here, and using\n> > CLUSTER might help. Normalizing your data to get smaller rows (and avoid\n> > possibly costly string comparisons if your strcoll() is slow) will probably\n> > also help.\n>\n>Ok, so you're saying that joining the address table into an address_city\n>table (the obvious normalization) will help here?\n>\n>The locale settings in postgresql.conf all have en_GB and a \\l shows\n>encoding of LATIN1. So I don't think I've set anything to UTF8 or such\n>like.\n>\n> > > I need to get to the stage where I can run queries such as:\n> > > select street, locality_1, locality_2, city from address\n> > > where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> > > or locality_1 = 'Nottingham')\n> > > and upper(substring(street from 1 for 1)) = 'A'\n> > > group by street, locality_1, locality_2, city\n> > > order by street\n> > > limit 20 offset 0\n> >\n> > This might be a lot quicker than pulling all the records like in \n> your example\n> > queries...\n>\n>Yes, that certainly does seem to be the case - around 4 seconds. But I\n>need it to be 10 times faster (or thereabouts) otherwise I have big\n>problems!\n\n*beats drum* Get it in RAM, Get it in RAM, ...\n\nRon Peacetree\n\n\n", "msg_date": "Thu, 01 Sep 2005 18:05:43 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 06:05:43PM -0400, Ron wrote:\n> > Selection from the database is, hence the indexes.\n> \n> A DB _without_ indexes that fits into RAM during ordinary operation \n> may actually be faster than a DB _with_ indexes that does \n> not. 
Fitting the entire DB into RAM during ordinary operation if at \n> all possible should be the first priority with a small data mine-like \n> application such as you've described.\n\nThat makes sense.\n\n> Also normalization is _not_ always a good thing for data mining like \n> apps. Having most or everything you need in one place in a compact \n> and regular format is usually more effective for data mines than \"Nth \n> Order Normal Form\" optimization to the degree usually found in \n> textbooks using OLTP-like examples.\n\nSure.\n\n> >Ok, I did try 8.0 when I started this and found that the server bind\n> >parameters (both via DBD::Pg (with pg_prepare_server => 1) and via JDBC\n> >(various versions I tried)) failed - the parameters were clearly not\n> >being substituted. This was Postgresql 8.0 from Debian unstable. That\n> >was a couple of weeks ago and I've not been back to check whether its\n> >been fixed. Anyway, because of these problems I dropped back to 7.4.\n> \n> Since I assume you are not going to run anything with the string \n> \"unstable\" in its name in production (?!), why not try a decent \n> production ready distro like SUSE 9.x and see how pg 8.0.3 runs on a \n> OS more representative of what you are likely (or at least what is \n> safe...) to run in production?\n\nWell, you see, as ever, it's a bit complicated. The company I'm doing\nthe development for has been subcontracted to do it and the contractor was\ncontracted by the actual \"client\". So there are two companies involved\nin addition to the \"client\". Sadly, the \"client\" actually has dictated\nthings like \"it will be deployed on FreeBSD and thou shall not argue\".\nAt this point in time, I actually have very little information about the\nspecification of the boxen that'll be running this application. This is\nsomething I'm hoping to solve very soon. The worst part of it is that\nI'm not going have direct (ssh) access to the box and all configuration\nchanges will most likely have to be relayed through techies at the\n\"client\" so fine tuning this is going to be a veritable nightmare.\n\n> >> > I need to get to the stage where I can run queries such as:\n> >> > select street, locality_1, locality_2, city from address\n> >> > where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> >> > or locality_1 = 'Nottingham')\n> >> > and upper(substring(street from 1 for 1)) = 'A'\n> >> > group by street, locality_1, locality_2, city\n> >> > order by street\n> >> > limit 20 offset 0\n> >>\n> >> This might be a lot quicker than pulling all the records like in \n> >your example\n> >> queries...\n> >\n> >Yes, that certainly does seem to be the case - around 4 seconds. But I\n> >need it to be 10 times faster (or thereabouts) otherwise I have big\n> >problems!\n> \n> *beats drum* Get it in RAM, Get it in RAM, ...\n\nOk, but I currently have 2 million rows. 
When this launches in a couple\nof weeks, it'll launch with 5 million+ and then gain > a million a year.\nI think the upshot of this all is 4GB RAM as a minimum and judicious use\nof normalization so as to avoid more expensive string comparisons and\nreduce table size is my immediate plan (along with proper configuration\nof pg).\n\nMatthew\n", "msg_date": "Thu, 1 Sep 2005 23:22:39 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 06:42:31PM +0100, Matthew Sackman wrote:\n>\n> \"address_pc_top_index\" btree (postcode_top)\n> \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> postcode_middle, postcode_bottom)\n> \"address_pc_top_middle_index\" btree (postcode_top, postcode_middle)\n\nThis doesn't address the query performance problem, but isn't only\none of these indexes necessary? The second one, on all three\ncolumns, because searches involving only postcode_top or only\npostcode_top and postcode_middle could use it, making the indexes\non only those columns superfluous. Or am I missing something?\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 1 Sep 2005 16:34:36 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "At 06:22 PM 9/1/2005, Matthew Sackman wrote:\n>On Thu, Sep 01, 2005 at 06:05:43PM -0400, Ron wrote:\n> >\n> > Since I assume you are not going to run anything with the string\n> > \"unstable\" in its name in production (?!), why not try a decent\n> > production ready distro like SUSE 9.x and see how pg 8.0.3 runs on a\n> > OS more representative of what you are likely (or at least what is\n> > safe...) to run in production?\n>\n>Well, you see, as ever, it's a bit complicated. The company I'm doing\n>the development for has been subcontracted to do it and the contractor was\n>contracted by the actual \"client\". So there are two companies involved\n>in addition to the \"client\". Sadly, the \"client\" actually has dictated\n>things like \"it will be deployed on FreeBSD and thou shall not argue\".\n\nAt least get them to promise they will use a release the BSD folks \nmark \"stable\"!\n\n\n>At this point in time, I actually have very little information about the\n>specification of the boxen that'll be running this application. This is\n>something I'm hoping to solve very soon. The worst part of it is that\n>I'm not going have direct (ssh) access to the box and all configuration\n>changes will most likely have to be relayed through techies at the\n>\"client\" so fine tuning this is going to be a veritable nightmare.\n\nIME, what you have actually just said is \"It will not be possible to \nsafely fine tune the DB unless or until I have direct access; and/or \nsomeone who does have direct access is correctly trained.\"\n\nIck.\n\n\n> > >> > I need to get to the stage where I can run queries such as:\n> > >> > select street, locality_1, locality_2, city from address\n> > >> > where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> > >> > or locality_1 = 'Nottingham')\n> > >> > and upper(substring(street from 1 for 1)) = 'A'\n> > >> > group by street, locality_1, locality_2, city\n> > >> > order by street\n> > >> > limit 20 offset 0\n> > >>\n> > >> This might be a lot quicker than pulling all the records like in\n> > >your example\n> > >> queries...\n> > >\n> > >Yes, that certainly does seem to be the case - around 4 seconds. 
But I\n> > >need it to be 10 times faster (or thereabouts) otherwise I have big\n> > >problems!\n> >\n> > *beats drum* Get it in RAM, Get it in RAM, ...\n>\n>Ok, but I currently have 2 million rows. When this launches in a couple\n>of weeks, it'll launch with 5 million+ and then gain > a million a year.\n\nAt my previously mentioned optimum of 85B per row, 2M rows is \n170MB. 5M rows is 425MB. Assuming the gain of 1M rows per year, \nthat's +85MB per year for this table.\n\nUp to 2GB DIMMs are currently standard, and 4GB DIMMs are just in the \nprocess of being introduced. Mainboards with anything from 4 to 16 \nDIMM slots are widely available.\n\nIOW, given the description you've provided this DB should _always_ \nfit in RAM. Size the production system such that the entire DB fits \ninto RAM during ordinary operation with an extra 1GB of RAM initially \ntossed on as a safety measure and the client will be upgrading the HW \nbecause it's obsolete before they run out of room in RAM.\n\n\n>I think the upshot of this all is 4GB RAM as a minimum and judicious use\n>of normalization so as to avoid more expensive string comparisons and\n>reduce table size is my immediate plan (along with proper configuration\n>of pg).\n\nMy suggestion is only slightly different. Reduce table size(s) and \nup the RAM to the point where the whole DB fits comfortably in RAM.\n\nYou've got the rare opportunity to build a practical Memory Resident \nDatabase. It should run like a banshee when you're done. I'd love \nto see the benches on the final product.\n\nRon Peacetree\n\n\n", "msg_date": "Thu, 01 Sep 2005 19:09:32 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "It would be good to see EXPLAIN ANALYZE output for the three queries \nbelow (the real vs. 
estimated row counts being of interest).\n\nThe number of pages in your address table might be interesting to know too.\n\nregards\n\nMark\n\nMatthew Sackman wrote (with a fair bit of snippage):\n> explain select locality_2 from address where locality_2 = 'Manchester';\n> gives\n> QUERY PLAN \n> ----------------------------------------------------------------\n> Seq Scan on address (cost=0.00..80677.16 rows=27923 width=12)\n> Filter: ((locality_2)::text = 'Manchester'::text)\n> \n> \n> explain select locality_1 from address where locality_1 = 'Manchester';\n> gives\n> QUERY PLAN \n> ----------------------------------------------------------------\n> Index Scan using address_locality_1_index on address\n> (cost=0.00..69882.18 rows=17708 width=13)\n> Index Cond: ((locality_1)::text = 'Manchester'::text)\n> \n >\n> select street, locality_1, locality_2, city from address \n> where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> or locality_1 = 'Nottingham')\n> and upper(substring(street from 1 for 1)) = 'A' \n> group by street, locality_1, locality_2, city\n> order by street\n> limit 20 offset 0\n> \n", "msg_date": "Fri, 02 Sep 2005 15:19:53 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Matthew Sackman wrote:\n\n> I need to get to the stage where I can run queries such as:\n >\n> select street, locality_1, locality_2, city from address \n> where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> or locality_1 = 'Nottingham')\n> and upper(substring(street from 1 for 1)) = 'A' \n> group by street, locality_1, locality_2, city\n> order by street\n> limit 20 offset 0\n> \n> and have the results very quickly.\n> \n\nThis sort of query will be handled nicely in 8.1 - it has bitmap and/or \nprocessing to make use of multiple indexes. Note that 8.1 is in beta now.\n\nCheers\n\nMark\n\n", "msg_date": "Fri, 02 Sep 2005 16:24:32 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Ron,\n\nCan you give me some pointers to make the tables RAM resident. If one\ndoes so, is the program accessing the data need to change. Does pgsql\ntake care to write the data to disk?\n\nRegards,\n\nakshay\n\n---------------------------------------\nAkshay Mathur\nSMTS, Product Verification\nAirTight Networks, Inc. (www.airtightnetworks.net)\nO: +91 20 2588 1555 ext 205\nF: +91 20 2588 1445\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Ron\nSent: Friday, September 02, 2005 2:36 AM\nTo: Tom Lane; [email protected]\nSubject: Re: [PERFORM] Massive performance issues \n\nEven better, if the table(s) can be made RAM resident, then searches, \neven random ones, can be very fast. He wants a 1000x performance \nimprovement. Going from disk resident to RAM resident should help \ngreatly in attaining that goal.\n\n\n\n\n", "msg_date": "Fri, 2 Sep 2005 14:53:58 +0530", "msg_from": "\"Akshay Mathur\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues " }, { "msg_contents": "Matthew,\n\n> Well, this is a development box. But the live box wouldn't be much more\n> than RAID 1 on SCSI 10ks so that should only be a halving of seek time,\n> not the 1000 times reduction I'm after!\n\nIf you're looking for 1000 times reduction, I think you're going to need \n*considerably* beefier hardware. 
You'd pretty much have to count on the \nwhole DB being in RAM, and a CPU being always available for incoming queries.\n\n> In fact, now I think about it, I have been testing on a 2.4 kernel on a\n> dual HT 3GHz Xeon with SCSI RAID array and the performance is only\n> marginally better.\n\nYes, but HT sucks for databases, so you're probably bottlenecking yourself on \nCPU on that machine. \n\nHowever, if this is the query you really want to optimize for:\n\nselect street, locality_1, locality_2, city from address \nwhere (city = 'Nottingham' or locality_2 = 'Nottingham'\n       or locality_1 = 'Nottingham')\n  and upper(substring(street from 1 for 1)) = 'A' \ngroup by street, locality_1, locality_2, city\norder by street\nlimit 20 offset 0\n\n... then this is the query you should test on. Although I will say that your \ndenormalized schema is actually hurting you siginificantly with the above \ntype of query; indexes aren't going to be possible for it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 2 Sep 2005 09:50:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Akshay Mathur wrote:\n\n>Ron,\n>\n>Can you give me some pointers to make the tables RAM resident. If one\n>does so, is the program accessing the data need to change. Does pgsql\n>take care to write the data to disk?\n>\n> \n>\nPostgreSQL tried to intelligently cache information and then will also \nuse the OS disk cache as a secondary cache. So a sufficiently small and \nfrequently accessed table will be resident in RAM.\n\nThe simplest way to affect this calculus is to put more RAM in the \nmachine. There are hacks I can think of to create RAM caches of \nspecific tables, but I don't want to take responsibility for anyone \ntrying these and running into trouble.\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting\n", "msg_date": "Fri, 02 Sep 2005 11:00:46 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Mark Kirkwood wrote:\n> Matthew Sackman wrote:\n> \n>> I need to get to the stage where I can run queries such as:\n> \n> >\n> \n>> select street, locality_1, locality_2, city from address where (city = \n>> 'Nottingham' or locality_2 = 'Nottingham'\n>> or locality_1 = 'Nottingham')\n>> and upper(substring(street from 1 for 1)) = 'A' group by street, \n>> locality_1, locality_2, city\n>> order by street\n>> limit 20 offset 0\n>>\n>> and have the results very quickly.\n>>\n> \n> This sort of query will be handled nicely in 8.1 - it has bitmap and/or \n> processing to make use of multiple indexes. Note that 8.1 is in beta now.\n> \n\nAs others have commented, you will probably need better hardware to \nachieve a factor of 1000 improvement, However I think using 8.1 could by \nitself give you a factor or 10->100 improvement.\n\ne.g. 
Using your schema and generating synthetic data:\n\n\nEXPLAIN\nSELECT street, locality_1, locality_2, city\nFROM address\nWHERE (city = '500TH CITY'\n OR locality_2 = '50TH LOCALITY'\n OR locality_1 = '500TH LOCALITY')\n AND upper(substring(street from 1 for 1)) = 'A'\nGROUP BY street, locality_1, locality_2, city\nORDER BY street\nLIMIT 20 OFFSET 0\n \n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=59559.04..59559.09 rows=20 width=125)\n -> Sort (cost=59559.04..59559.09 rows=21 width=125)\n Sort Key: street\n -> HashAggregate (cost=59558.37..59558.58 rows=21 width=125)\n -> Bitmap Heap Scan on address (cost=323.19..59556.35 \nrows=202 width=125)\n Recheck Cond: (((city)::text = '500TH CITY'::text) \nOR ((locality_2)::text = '50TH LOCALITY'::text) OR ((locality_1)::text = \n'500TH LOCALITY'::text))\n Filter: (upper(\"substring\"((street)::text, 1, 1)) \n= 'A'::text)\n -> BitmapOr (cost=323.19..323.19 rows=40625 width=0)\n -> Bitmap Index Scan on address_city_index \n (cost=0.00..15.85 rows=1958 width=0)\n Index Cond: ((city)::text = '500TH \nCITY'::text)\n -> Bitmap Index Scan on \naddress_locality_2_index (cost=0.00..143.00 rows=18000 width=0)\n Index Cond: ((locality_2)::text = \n'50TH LOCALITY'::text)\n -> Bitmap Index Scan on \naddress_locality_1_index (cost=0.00..164.33 rows=20667 width=0)\n Index Cond: ((locality_1)::text = \n'500TH LOCALITY'::text)\n(14 rows)\n\n\nThis takes 0.5s -> 2s to execute (depending on the frequencies generated \nfor the two localities).\n\nSo we are a factor of 10 better already, on modest HW (2xPIII 1Ghz 2G \nrunning FreeBSD 5.4).\n\nTo go better than this you could try a specific summary table:\n\n\nCREATE TABLE address_summary AS\nSELECT street,\n locality_1,\n locality_2,\n city,\n upper(substring(street from 1 for 1)) AS cut_street\nFROM address\nGROUP BY street, locality_1, locality_2, city\n;\n\nCREATE INDEX address_summary_city_index ON address_summary(city);\nCREATE INDEX address_summary_locality_1_index ON \naddress_summary(locality_1);\nCREATE INDEX address_summary_locality_2_index ON \naddress_summary(locality_2);\nCREATE INDEX address_summary_street_index ON address_summary(street);\nCREATE INDEX street_summary_prefix ON address_summary(cut_street);\n\n\nAnd the query can be rewritten as:\n\nEXPLAIN\nSELECT street, locality_1, locality_2, city\nFROM address_summary\nWHERE (city = '500TH CITY'\n OR locality_2 = '50TH LOCALITY'\n OR locality_1 = '500TH LOCALITY')\n AND cut_street = 'A'\nORDER BY street\nLIMIT 20 OFFSET 0\n \n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2006.05 rows=20 width=125)\n -> Index Scan using address_summary_street_index on address_summary \n (cost=0.00..109028.81 rows=1087 width=125)\n Filter: ((((city)::text = '500TH CITY'::text) OR \n((locality_2)::text = '50TH LOCALITY'::text) OR ((locality_1)::text = \n'500TH LOCALITY'::text)) AND (cut_street = 'A'::text))\n(3 rows)\n\n\nThis takes 0.02s - so getting close to the factor of 1000 (a modern \nmachine with 3-5 times the memory access speed will get you there easily).\n\nThe effectiveness of the summary table will depend on the how much the \nGROUP BY reduces the cardinality (not much in this synthetic case), so \nyou will probably get better 
improvement with the real data!\n\nCheers\n\nMark\n", "msg_date": "Sat, 03 Sep 2005 13:22:01 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Hi,\n\nMany thanks for all your thoughts and advice. With just 2GB or RAM, no\nchange to the harddisc (still SATA) but proper tuning of Postgresql\n(still 7.4) and aggressive normalization to shrink row width, I have\nmanaged to get suitable performance, with, when fully cached, queries on\na 5 million row data set, including queries such as:\n\nselect to_char(sale_date, 'DD Mon YYYY') as sale_date_text, cost,\n property_types.type as property_type, sale_types.type as sale_type,\n flat_extra, number, street, loc1.component as locality_1,\n loc2.component as locality_2, city.component as city,\n county.component as county, postcode \nfrom address\n inner join (\n select id from address_components\n where component = 'Woodborough'\n ) as t1\n on locality_1_id = t1.id or locality_2_id = t1.id or city_id = t1.id\n inner join (\n select id, street from streets where street = 'Lowdham Lane'\n ) as t2\n on street_id = t2.id\n inner join sale_types\n on sale_types.id = sale_type_id\n inner join property_types\n on property_types.id = property_type_id\n inner join address_components as county\n on county_id = county.id\n inner join address_components as city\n on city_id = city.id\n inner join address_components as loc2\n on locality_2_id = loc2.id\n inner join address_components as loc1\n on locality_1_id = loc1.id\norder by sale_date desc limit 11 offset 0\n\ncompleting within 50ms. I've also now managed to request that the full\nproduction system will have 4GB of RAM (there are still a few queries\nthat don't quite fit in 2GB of RAM) and a 15kRPM SCSI HD.\n\nSo once again, thanks for all your help. I've literally been pulling my\nhair out over this so it's great to have basically got it solved.\n\nMatthew\n", "msg_date": "Tue, 6 Sep 2005 16:51:24 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" } ]
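The closing message above shows the query against the normalized layout, but the thread never posts the DDL behind it. A minimal sketch of what that schema might look like, using only the table and column names that appear in the final query (address_components, streets, sale_types, property_types and the *_id columns on address), is below; the column types, serial keys and constraints are assumptions for illustration, not details taken from the thread.

create table address_components (
    id        serial primary key,
    component varchar(100) not null unique
);

create table streets (
    id     serial primary key,
    street varchar(100) not null unique
);

create table property_types (
    id   serial primary key,
    type varchar(15) not null unique
);

create table sale_types (
    id   serial primary key,
    type varchar(10) not null unique
);

-- The slimmed-down main table: the wide varchar columns are replaced by
-- integer references, which is the row-width reduction discussed above.
create table address (
    sale_date        date         not null,  -- assumed; the thread only shows to_char(sale_date, ...)
    cost             integer      not null,  -- assumed type
    property_type_id integer      not null references property_types (id),
    sale_type_id     integer      not null references sale_types (id),
    flat_extra       varchar(100) not null,
    number           varchar(100) not null,
    street_id        integer      not null references streets (id),
    locality_1_id    integer      not null references address_components (id),
    locality_2_id    integer      not null references address_components (id),
    city_id          integer      not null references address_components (id),
    county_id        integer      not null references address_components (id),
    postcode         varchar(10)  not null
);

create index address_street_id_index     on address (street_id);
create index address_locality_1_id_index on address (locality_1_id);
create index address_locality_2_id_index on address (locality_2_id);
create index address_city_id_index       on address (city_id);

With a shape like this, the subqueries in the final query resolve 'Woodborough' and 'Lowdham Lane' to single integer ids first, so the joins against address compare integers rather than varchar(100) values, which is the string-comparison saving Matthew describes.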
[ { "msg_contents": "> Table \"public.address\"\n> Column | Type | Modifiers\n> ----------------------+------------------------+-----------\n> postcode_top | character varying(2) | not null\n> postcode_middle | character varying(4) | not null\n> postcode_bottom | character varying(7) | not null\n\n\nconsider making above fields char(x) not varchar(x) for small but\nimportant savings.\n\n> postcode | character varying(10) | not null\n> property_type | character varying(15) | not null\n> sale_type | character varying(10) | not null\n> flat_extra | character varying(100) | not null\n> number | character varying(100) | not null\n> street | character varying(100) | not null\n> locality_1 | character varying(100) | not null\n> locality_2 | character varying(100) | not null\n> city | character varying(100) | not null\n> county | character varying(100) | not null\n> Indexes:\n> \"address_city_index\" btree (city)\n> \"address_county_index\" btree (county)\n> \"address_locality_1_index\" btree (locality_1)\n> \"address_locality_2_index\" btree (locality_2)\n> \"address_pc_bottom_index\" btree (postcode_bottom)\n> \"address_pc_middle_index\" btree (postcode_middle)\n> \"address_pc_top_index\" btree (postcode_top)\n> \"address_pc_top_middle_bottom_index\" btree (postcode_top,\n> postcode_middle, postcode_bottom)\n> \"address_pc_top_middle_index\" btree (postcode_top,\npostcode_middle)\n> \"address_postcode_index\" btree (postcode)\n> \"address_property_type_index\" btree (property_type)\n> \"address_street_index\" btree (street)\n> \"street_prefix\" btree (lower(\"substring\"((street)::text, 1, 1)))\n> \n> Obviously, to me, this is a problem, I need these queries to be under\na\n> second to complete. Is this unreasonable? What can I do to make this\n\"go\n> faster\"? I've considered normalising the table but I can't work out\n> whether the slowness is in dereferencing the pointers from the index\n> into the table or in scanning the index in the first place. And\n> normalising the table is going to cause much pain when inserting\nvalues\n> and I'm not entirely sure if I see why normalising it should cause a\n> massive performance improvement.\n\nhttp://www.dbdebunk.com :)\n\n> I need to get to the stage where I can run queries such as:\n> select street, locality_1, locality_2, city from address\n> where (city = 'Nottingham' or locality_2 = 'Nottingham'\n> or locality_1 = 'Nottingham')\n> and upper(substring(street from 1 for 1)) = 'A'\n> group by street, locality_1, locality_2, city\n> order by street\n> limit 20 offset 0\n> \n> and have the results very quickly.\n> \n> Any help most gratefully received (even if it's to say that I should\nbe\n> posting to a different mailing list!).\n\nthis is correct list. did you run vacuum/analyze, etc?\nPlease post vacuum analyze times.\n\nMerlin\n", "msg_date": "Thu, 1 Sep 2005 14:04:54 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:\n> > Any help most gratefully received (even if it's to say that I should\n> be\n> > posting to a different mailing list!).\n> \n> this is correct list. 
did you run vacuum/analyze, etc?\n> Please post vacuum analyze times.\n\n2005-09-01 19:47:08 LOG: statement: vacuum full analyze address;\n2005-09-01 19:48:44 LOG: duration: 96182.777 ms\n\n2005-09-01 19:50:20 LOG: statement: vacuum analyze address;\n2005-09-01 19:51:48 LOG: duration: 87675.268 ms\n\nI run them regularly, pretty much after every bulk import.\n\nMatthew\n", "msg_date": "Thu, 1 Sep 2005 19:52:31 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:\n> > Table \"public.address\"\n> > Column | Type | Modifiers\n> > ----------------------+------------------------+-----------\n> > postcode_top | character varying(2) | not null\n> > postcode_middle | character varying(4) | not null\n> > postcode_bottom | character varying(7) | not null\n> \n> consider making above fields char(x) not varchar(x) for small but\n> important savings.\n\nHuh, hang on -- AFAIK there's no saving at all by doing that. Quite the\nopposite really, because with char(x) you store the padding blanks,\nwhich are omitted with varchar(x), so less I/O (not necessarily a\nmeasurable amount, mind you, maybe even zero because of padding issues.)\n\n-- \nAlvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com\nYou liked Linux a lot when he was just the gawky kid from down the block\nmowing your lawn or shoveling the snow. But now that he wants to date\nyour daughter, you're not so sure he measures up. (Larry Greenemeier)\n", "msg_date": "Thu, 1 Sep 2005 15:33:31 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" } ]
[ { "msg_contents": "> I'm having performance issues with a table consisting of 2,043,133\nrows.\n> The\n> schema is:\n\n> locality_1 has 16650 distinct values and locality_2 has 1156 distinct\n> values.\n\nJust so you know I have a 2GHz p4 workstation with similar size (2M\nrows), several keys, and can find and fetch 2k rows based on 20k unique\nvalue key in about 60 ms. (.06 seconds).\n\nMerlin\n", "msg_date": "Thu, 1 Sep 2005 14:10:31 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "Any chance it's a vacuum thing?\nOr configuration (out of the box it needs adjusting)?\n\nJoel Fradkin\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Merlin Moncure\nSent: Thursday, September 01, 2005 2:11 PM\nTo: Matthew Sackman\nCc: [email protected]\nSubject: Re: [PERFORM] Massive performance issues\n\n> I'm having performance issues with a table consisting of 2,043,133\nrows.\n> The\n> schema is:\n\n> locality_1 has 16650 distinct values and locality_2 has 1156 distinct\n> values.\n\nJust so you know I have a 2GHz p4 workstation with similar size (2M\nrows), several keys, and can find and fetch 2k rows based on 20k unique\nvalue key in about 60 ms. (.06 seconds).\n\nMerlin\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Thu, 1 Sep 2005 14:42:51 -0400", "msg_from": "\"Joel Fradkin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Alvaro Herrera [mailto:[email protected]]\n> Sent: Thursday, September 01, 2005 3:34 PM\n> To: Merlin Moncure\n> Cc: Matthew Sackman; [email protected]\n> Subject: Re: [PERFORM] Massive performance issues\n> \n> On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:\n> > > Table \"public.address\"\n> > > Column | Type | Modifiers\n> > > ----------------------+------------------------+-----------\n> > > postcode_top | character varying(2) | not null\n> > > postcode_middle | character varying(4) | not null\n> > > postcode_bottom | character varying(7) | not null\n> >\n> > consider making above fields char(x) not varchar(x) for small but\n> > important savings.\n> \n> Huh, hang on -- AFAIK there's no saving at all by doing that. Quite\nthe\n> opposite really, because with char(x) you store the padding blanks,\n> which are omitted with varchar(x), so less I/O (not necessarily a\n> measurable amount, mind you, maybe even zero because of padding\nissues.)\n\nYou are right, all this time I thought there was a 4 byte penalty for\nstoring varchar type and not in char :(. So there is no reason at all\nto use the char type?\n\nMerlin\n\n", "msg_date": "Thu, 1 Sep 2005 15:51:35 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Massive performance issues" }, { "msg_contents": "On Thu, Sep 01, 2005 at 03:51:35PM -0400, Merlin Moncure wrote:\n\n> > Huh, hang on -- AFAIK there's no saving at all by doing that. Quite\n> > the opposite really, because with char(x) you store the padding\n> > blanks, which are omitted with varchar(x), so less I/O (not\n> > necessarily a measurable amount, mind you, maybe even zero because\n> > of padding issues.)\n> \n> You are right, all this time I thought there was a 4 byte penalty for\n> storing varchar type and not in char :(. So there is no reason at all\n> to use the char type?\n\nOther than SQL conformance, apparently not.\n\n-- \nAlvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n", "msg_date": "Thu, 1 Sep 2005 15:53:47 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Massive performance issues" } ]
[ { "msg_contents": "Hi.\n\nI have an interesting problem with the JDBC drivers. When I use a \nselect like this:\n\n\"SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz, \nt0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz \nlike ?::varchar(256) ESCAPE '|'\" withBindings: 1:\"53111\"(plz)>\n\nthe existing index on the plz column is not used.\n\nWhen I the same select with a concrete value, the index IS used.\n\nI use PostgreSQL 8.0.3 on Mac OS X and the JDBC driver 8.0-312 JDBC 3.\n\nAfter a lot of other things, I tried using a 7.4 driver and with \nthis, the index is used in both cases.\n\nWhy can this happen? Is there a setting I might have not seen? \nSomething I do wrong?\n\ncug\n", "msg_date": "Fri, 2 Sep 2005 01:09:03 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": true, "msg_subject": "Prepared statement not using index" }, { "msg_contents": "Guido Neitzer wrote:\n> Hi.\n>\n> I have an interesting problem with the JDBC drivers. When I use a\n> select like this:\n>\n> \"SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz,\n> t0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz like\n> ?::varchar(256) ESCAPE '|'\" withBindings: 1:\"53111\"(plz)>\n>\n> the existing index on the plz column is not used.\n>\n> When I the same select with a concrete value, the index IS used.\n>\n> I use PostgreSQL 8.0.3 on Mac OS X and the JDBC driver 8.0-312 JDBC 3.\n>\n> After a lot of other things, I tried using a 7.4 driver and with this,\n> the index is used in both cases.\n>\n> Why can this happen? Is there a setting I might have not seen?\n> Something I do wrong?\n>\n> cug\n\nI've had this problem in the past. In my case, the issue was that the\ncolumn I was searching had a mixed blend of possible values. For\nexample, with 1M rows, the number 3 occurred 100 times, but the number\n18 occurred 700,000 times.\n\nSo when I manually did a search for 3, it naturally realized that it\ncould use an index scan, because it had the statistics to say it was\nvery selective. If I manually did a search for 18, it switched to\nsequential scan, because it was not very selective (both are the correct\nplans).\n\nBut if you create a prepared statement, parameterized on this number,\npostgres has no way of knowing ahead of time, whether you will be asking\nabout 3 or 18, so when the query is prepared, it has to be pessimistic,\nand avoid worst case behavior, so it choses to always use a sequential scan.\n\nThe only way I got around this was with writing a plpgsql function which\nused the EXECUTE syntax to dynamically re-plan part of the query.\n\nHope this makes sense. This may or may not be your problem, without\nknowing more about you setup. But the symptoms seem similar.\n\nJohn\n=:->", "msg_date": "Mon, 12 Sep 2005 00:58:34 -0400", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement not using index" }, { "msg_contents": "The difference between the 7.4 driver and the 8.0.3 driver is the \n8.0.3 driver is using server side prepared statements and binding the \nparameter to the type in setXXX(n,val).\n\nThe 7.4 driver just replaces the ? with the value and doesn't use \nserver side prepared statements.\n\nDave\n\n\nOn 1-Sep-05, at 7:09 PM, Guido Neitzer wrote:\n\n> Hi.\n>\n> I have an interesting problem with the JDBC drivers. 
When I use a \n> select like this:\n>\n> \"SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz, \n> t0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz \n> like ?::varchar(256) ESCAPE '|'\" withBindings: 1:\"53111\"(plz)>\n>\n> the existing index on the plz column is not used.\n>\n> When I the same select with a concrete value, the index IS used.\n>\n> I use PostgreSQL 8.0.3 on Mac OS X and the JDBC driver 8.0-312 JDBC 3.\n>\n> After a lot of other things, I tried using a 7.4 driver and with \n> this, the index is used in both cases.\n>\n> Why can this happen? Is there a setting I might have not seen? \n> Something I do wrong?\n>\n> cug\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n\n", "msg_date": "Mon, 12 Sep 2005 08:38:35 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement not using index" }, { "msg_contents": "On 12.09.2005, at 14:38 Uhr, Dave Cramer wrote:\n\n> The difference between the 7.4 driver and the 8.0.3 driver is the \n> 8.0.3 driver is using server side prepared statements and binding \n> the parameter to the type in setXXX(n,val).\n\nWould be a good idea when this were configurable.\n\nI found my solution (use the JDBC2 drivers with protocolVersion=2), \nbut how long will this work?\n\ncug\n", "msg_date": "Mon, 12 Sep 2005 15:22:11 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Prepared statement not using index" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> The difference between the 7.4 driver and the 8.0.3 driver is the\n> 8.0.3 driver is using server side prepared statements and binding the\n> parameter to the type in setXXX(n,val).\n>\n> The 7.4 driver just replaces the ? with the value and doesn't use\n> server side prepared statements.\n\nDBD::Pg has a few flags that enables you to do things like purposely avoid\nusing server side prepares, and force a reprepare of a particular statement.\nPerhaps something like that is available for the JDBC driver? 
If not,\nmaybe someone would be willing to add it in?\n\n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200509120925\nhttps://www.biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niEYEARECAAYFAkMlgdAACgkQvJuQZxSWSsjMlQCePc4dpE0BCT3W//y/N9uolkmK\nViIAnjR1fF14KbP+cX+xV8lmdlL6Be2k\n=NtXw\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Mon, 12 Sep 2005 13:26:44 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement not using index" }, { "msg_contents": "\nOn 12-Sep-05, at 9:22 AM, Guido Neitzer wrote:\n\n> On 12.09.2005, at 14:38 Uhr, Dave Cramer wrote:\n>\n>\n>> The difference between the 7.4 driver and the 8.0.3 driver is the \n>> 8.0.3 driver is using server side prepared statements and binding \n>> the parameter to the type in setXXX(n,val).\n>>\n>\n> Would be a good idea when this were configurable.\nYou found the configuration for it\n>\n> I found my solution (use the JDBC2 drivers with protocolVersion=2), \n> but how long will this work?\n\nI think you would be better understanding what the correct type is \nfor the index to work properly.\n>\n> cug\n>\n>\n\n", "msg_date": "Mon, 12 Sep 2005 11:28:27 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement not using index" }, { "msg_contents": "It's added, just use the old protocol .\n\nHere are the connection parameters\n\nhttp://jdbc.postgresql.org/documentation/head/connect.html#connection- \nparameters\n\nDave\n\n\nOn 12-Sep-05, at 9:26 AM, Greg Sabino Mullane wrote:\n\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n>\n>\n>> The difference between the 7.4 driver and the 8.0.3 driver is the\n>> 8.0.3 driver is using server side prepared statements and binding the\n>> parameter to the type in setXXX(n,val).\n>>\n>> The 7.4 driver just replaces the ? with the value and doesn't use\n>> server side prepared statements.\n>>\n>\n> DBD::Pg has a few flags that enables you to do things like \n> purposely avoid\n> using server side prepares, and force a reprepare of a particular \n> statement.\n> Perhaps something like that is available for the JDBC driver? If not,\n> maybe someone would be willing to add it in?\n>\n> - --\n> Greg Sabino Mullane [email protected]\n> PGP Key: 0x14964AC8 200509120925\n> https://www.biglumber.com/x/web? \n> pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n>\n> -----BEGIN PGP SIGNATURE-----\n>\n> iEYEARECAAYFAkMlgdAACgkQvJuQZxSWSsjMlQCePc4dpE0BCT3W//y/N9uolkmK\n> ViIAnjR1fF14KbP+cX+xV8lmdlL6Be2k\n> =NtXw\n> -----END PGP SIGNATURE-----\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n\n", "msg_date": "Mon, 12 Sep 2005 11:30:14 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement not using index" } ]
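The driver behaviour discussed above can be reproduced from psql with no JDBC involved, which makes it easier to see why the index gets skipped: with a server-side prepared statement the plan is built before the bound value is known, so a LIKE on a parameter cannot be planned as an anchored index search the way a literal pattern can. A hedged sketch reusing the table and column names from the original post:

    -- literal pattern: the planner sees the value and (as the poster reports) uses the index
    EXPLAIN SELECT * FROM dga_dienstleister WHERE plz LIKE '53111';

    -- bound parameter: the generic plan is chosen before any value is supplied
    PREPARE plz_search(varchar) AS
        SELECT * FROM dga_dienstleister WHERE plz LIKE $1;
    EXPLAIN EXECUTE plz_search('53111');
    DEALLOCATE plz_search;

Running the driver with protocolVersion=2, as above, simply keeps it on the old literal-substitution behaviour that corresponds to the first EXPLAIN.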
[ { "msg_contents": "Hi all.\n In a cluster, is there any way to use the main memory of the other nodes \ninstead of the swap? If I have a query with many sub-queries and a lot of \ndata, I can easily fill all the memory in a node. The point is: is there any \nway to continue using the main memory from other nodes in the same query \ninstead of the swap?\n Thank you,\nRicardo.\n\nHi all.\n \nIn a cluster, is there any way to use the main memory of the other nodes instead of the swap? If I have a query with many sub-queries and a lot of data, I can easily fill all the memory in a node. The point is: is there any way to continue using the main memory from other nodes in the same query instead of the swap?\n\n \nThank you,\nRicardo.", "msg_date": "Thu, 1 Sep 2005 22:04:10 -0300", "msg_from": "Ricardo Humphreys <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid using swap in a cluster" }, { "msg_contents": "Ricardo Humphreys wrote:\n> Hi all.\n> In a cluster, is there any way to use the main memory of the other nodes \n> instead of the swap? If I have a query with many sub-queries and a lot of \n> data, I can easily fill all the memory in a node. The point is: is there any \n> way to continue using the main memory from other nodes in the same query \n> instead of the swap?\n\nI don't know of any clustered version of PG that can spread queries over \nmultiple machines. Can I ask what you are using?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 02 Sep 2005 10:42:59 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid using swap in a cluster" }, { "msg_contents": "\nOn 2 Sep 2005, at 10:42, Richard Huxton wrote:\n\n> Ricardo Humphreys wrote:\n>\n>> Hi all.\n>> In a cluster, is there any way to use the main memory of the \n>> other nodes instead of the swap? If I have a query with many sub- \n>> queries and a lot of data, I can easily fill all the memory in a \n>> node. The point is: is there any way to continue using the main \n>> memory from other nodes in the same query instead of the swap?\n>>\n>\n> I don't know of any clustered version of PG that can spread queries \n> over multiple machines. Can I ask what you are using?\n\nIIRC GreenPlums DeepGreen MPP (Version 2) can do it. It does cost \nmoney though, but it is a very nice product.\n\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n", "msg_date": "Fri, 2 Sep 2005 12:18:22 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid using swap in a cluster" } ]
[ { "msg_contents": "\n\n\n\nHi All,\n I have an ODBC application( using postgres database) which has three\ndifferent operations. Each operation is having combination of SELECT and\nUPDATE.\n\nFor example:\nOperation A: 6 Fetch + 1 Update\nOperation B: 9 Fetch\nOperation C: 5 Fetch + 3 Update ( Tables has 140 records)\n\nI have run these operations while Auto Vacumm is running and observed the\ntime taken in thse operations. I found that Operation C is taking highest\ntime and A is the lowest.\nSo i inferrred that, UPDATE takes more time.\n\nNow i run these operations again, without running Auto Vacuum. I observed\nthat, time taken for operation A & B is almost same but time for Operation\nC is increasing.\n\nI am not able to analyze, why only for operation C, time is increasing??\nDoes auto vacuum affects more on UPDATE.\n\n\nPlease help me to understand these things.\n\nThanks in advance.\n\nHemant\n\n*********************** FSS-Unclassified ***********************\n\"DISCLAIMER: This message is proprietary to Flextronics Software\nSystems Limited (FSS) and is intended solely for the use of the\nindividual to whom it is addressed. It may contain privileged or\nconfidential information and should not be circulated or used for\nany purpose other than for what it is intended. If you have received\nthis message in error, please notify the originator immediately.\nIf you are not the intended recipient, you are notified that you are\nstrictly prohibited from using, copying, altering, or disclosing\nthe contents of this message. FSS accepts no responsibility for\nloss or damage arising from the use of the information transmitted\nby this email including damage from virus.\"\n\n", "msg_date": "Fri, 2 Sep 2005 09:21:51 +0530", "msg_from": "Hemant Pandey <[email protected]>", "msg_from_op": true, "msg_subject": "Update is more affected( taking more time) than Select ( if Auto\n\tvacuum is not running)" }, { "msg_contents": "Hemant Pandey wrote:\n> Operation A: 6 Fetch + 1 Update\n> Operation B: 9 Fetch\n> Operation C: 5 Fetch + 3 Update ( Tables has 140 records)\n> \n> I have run these operations while Auto Vacumm is running and observed the\n> time taken in thse operations. I found that Operation C is taking highest\n> time and A is the lowest.\n> So i inferrred that, UPDATE takes more time.\n> \n> Now i run these operations again, without running Auto Vacuum. I observed\n> that, time taken for operation A & B is almost same but time for Operation\n> C is increasing.\n> \n> I am not able to analyze, why only for operation C, time is increasing??\n> Does auto vacuum affects more on UPDATE.\n\nDepends on what is happening. Without vacuuming (automatic or manual) a \ntable will tend to have \"holes\" since an update with MVCC is basically a \ndelete and an insert.\n\nSince you say the table has only 140 records, almost any operation will \ntend to scan rather than use an index. However, if you issue lots of \nupdates you will end up with many \"holes\" which have to be scanned past. \nPG won't know they are there because its statistics will be out of date \nunless you have analysed that table recently. So - everything will start \nto get slower.\n\nSo - for a small, rapidly updated table make sure you vacuum a lot \n(perhaps as often as once a minute). 
Or, run autovacuum and let it cope.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 02 Sep 2005 10:48:11 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update is more affected( taking more time) than Select" } ]
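One cheap way to act on the advice above is to let VACUUM itself report how fast garbage builds up in the hot table. The table name below is made up, since the original post does not name its tables:

    -- run against the small, heavily-updated table every few minutes (cron or pg_autovacuum);
    -- the "removed N row versions" / "nonremovable row versions" lines in the output show how
    -- many dead rows accumulated since the last pass
    VACUUM VERBOSE ANALYZE my_hot_table;

If those counts are already large after a few minutes, the vacuum interval is still too long.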
[ { "msg_contents": "unsubscribe-digest\n\n\t\n\n\t\n\t\t\n___________________________________________________________ \nGesendet von Yahoo! Mail - Jetzt mit 1GB Speicher kostenlos - Hier anmelden: http://mail.yahoo.de\n", "msg_date": "Fri, 02 Sep 2005 07:12:34 +0200", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Hi,\n\nI'm using inherited tables to partition some data which can grow very \nlarge. Recently I discovered that a simple query that on a regular table \nwould use an index was instead using seq scans (70s vs a guessed 2s).\nThe well known query is:\n\nSELECT foo FROM bar ORDER BY foo DESC LIMIT 1\n\n(The same applies for SELECT MIN(foo) FROM bar using 8.1)\n\n\nThe query plan generated when running the query on a table which has \ninheritance forces the planner to choose a seq_scan for each table. \nWouldn't be a good thing to also promote ORDER BYs and LIMITs to each \nsubscan (like WHERE does)?\n\nI needed a quick solution, so I wrote a function which looks each \ninherited table separately and my problem is partially solved, but I \nthink that a (hopefully) little change in the optimizer could be advisable.\n\nAttached are some EXPLAIN ANALYZE outputs of my suggestion.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com", "msg_date": "Fri, 02 Sep 2005 12:20:57 +0200", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY and LIMIT not propagated on inherited tables / UNIONs" }, { "msg_contents": "On Fri, 2005-09-02 at 12:20 +0200, Matteo Beccati wrote:\n\n> I'm using inherited tables to partition some data which can grow very \n> large. Recently I discovered that a simple query that on a regular table \n> would use an index was instead using seq scans (70s vs a guessed 2s).\n> The well known query is:\n> \n> SELECT foo FROM bar ORDER BY foo DESC LIMIT 1\n> \n> (The same applies for SELECT MIN(foo) FROM bar using 8.1)\n> \n> \n> The query plan generated when running the query on a table which has \n> inheritance forces the planner to choose a seq_scan for each table. \n> Wouldn't be a good thing to also promote ORDER BYs and LIMITs to each \n> subscan (like WHERE does)?\n\nThe tuple_fraction implied by LIMIT is already passed through to each\nchild table when using an inherited table structure. This would then be\ntaken into account when plans are made for each child table. I don't\nthink the situation you observe occurs as a result of query planning.\n\nDo your child tables have indexes on them? Indexes are not inherited\nonto child tables, so it is possible that there is no index for the\nplanner to elect to use.\n\nBest Regards, Simon Riggs\n\n\n\n", "msg_date": "Fri, 02 Sep 2005 13:35:32 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" }, { "msg_contents": "Simon Riggs wrote:\n>>The query plan generated when running the query on a table which has \n>>inheritance forces the planner to choose a seq_scan for each table. \n>>Wouldn't be a good thing to also promote ORDER BYs and LIMITs to each \n>>subscan (like WHERE does)?\n> \n> The tuple_fraction implied by LIMIT is already passed through to each\n> child table when using an inherited table structure. This would then be\n> taken into account when plans are made for each child table. I don't\n> think the situation you observe occurs as a result of query planning.\n> \n> Do your child tables have indexes on them? Indexes are not inherited\n> onto child tables, so it is possible that there is no index for the\n> planner to elect to use.\n\nIn this cases the tuple_fraction is useless if the planner doesn't know \nthat a ORDER BY on each child table is requested. In fact the sort is \napplied after all the rows are appended. 
The correct strategy IMHO would \nbe applying the order by and limit for each child table (which results \nin an index scan, if possible), appending, then finally sorting a bunch \nof rows, and limiting again.\n\nEvery table has indexes, as you can see in the third attacheed EXPLAIN \nANALYZE output.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Fri, 02 Sep 2005 14:54:39 +0200", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" }, { "msg_contents": "Matteo Beccati <[email protected]> writes:\n> The correct strategy IMHO would \n> be applying the order by and limit for each child table (which results \n> in an index scan, if possible), appending, then finally sorting a bunch \n> of rows, and limiting again.\n\nThis would be a win in some cases, and in many others a loss (ie, wasted\nsort steps). The hard part is determining when to apply it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Sep 2005 10:40:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited " }, { "msg_contents": "Hi,\n\n>>The correct strategy IMHO would \n>>be applying the order by and limit for each child table (which results \n>>in an index scan, if possible), appending, then finally sorting a bunch \n>>of rows, and limiting again.\n> \n> This would be a win in some cases, and in many others a loss (ie, wasted\n> sort steps). The hard part is determining when to apply it.\n\nI don't actually know how many smaller separate sorts compare to a \nsingle big sort, but I guess the difference wouldn't be so big if the \nLIMIT is low. Add to this that you don't need to append the whole \nrowsets, but just smaller ones.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Fri, 02 Sep 2005 17:00:50 +0200", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" }, { "msg_contents": ">>> The correct strategy IMHO would\n>>> be applying the order by and limit for each child table (which results\n>>> in an index scan, if possible), appending, then finally sorting a bunch\n>>> of rows, and limiting again.\n>>\n>> This would be a win in some cases, and in many others a loss (ie, wasted\n>> sort steps). The hard part is determining when to apply it.\n>\n> I don't actually know how many smaller separate sorts compare to a single\n> big sort, but I guess the difference wouldn't be so big if the LIMIT is\n> low. Add to this that you don't need to append the whole rowsets, but\n> just smaller ones.\n\nI think if you have a bunch of sorted thingies, you'd perform exactly one \nmerge step and be done, should be possible to do that in O(child_tables * \nrows)...\n\nMit freundlichem Gruß\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n", "msg_date": "Fri, 02 Sep 2005 17:09:18 +0200", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" }, { "msg_contents": "On Fri, 2005-09-02 at 12:20 +0200, Matteo Beccati wrote:\n> I'm using inherited tables to partition some data which can grow very \n> large. 
Recently I discovered that a simple query that on a regular table \n> would use an index was instead using seq scans (70s vs a guessed 2s).\n> The well known query is:\n> \n> SELECT foo FROM bar ORDER BY foo DESC LIMIT 1\n> \n> (The same applies for SELECT MIN(foo) FROM bar using 8.1)\n> \n\nReturning to Matteo's original query, what we are saying is that the new\noptimization for MIN/MAX queries doesn't work with inherited tables. \n\nIt could do, by running optimize_minmax_aggregates() for each query that\ngets planned to see if a better plan exists for each child table.\n\nI think that's a TODO item.\n\nOptimizing ORDER BY and LIMIT down looks like it would be harder to do\nin the general case, even if Matteo's simple transform looks good. I'm\nnot sure it's a very common query type though...\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 02 Sep 2005 18:54:56 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" }, { "msg_contents": "Simon Riggs wrote:\n> Returning to Matteo's original query, what we are saying is that the new\n> optimization for MIN/MAX queries doesn't work with inherited tables. \n> \n> It could do, by running optimize_minmax_aggregates() for each query that\n> gets planned to see if a better plan exists for each child table.\n> \n> I think that's a TODO item.\n\nGreat. Of course I'm using ORDER BY ... LIMIT as a workaround to get the \nindex scan on pre-8.1, and because I'm used to it insted of the \npreviously not optimized MIN/MAX aggregates.\n\n> Optimizing ORDER BY and LIMIT down looks like it would be harder to do\n> in the general case, even if Matteo's simple transform looks good. I'm\n> not sure it's a very common query type though...\n\nIf I can find some time, I'll try to write some hacks... I just need to \nfind out where to start ;)\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com/\nhttp://phppgads.com/\n", "msg_date": "Fri, 02 Sep 2005 20:33:51 +0200", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" }, { "msg_contents": "Hi all\nI have got lot of information from ur group.\nNow i want to relieve from this group.\nI kindly request all of you.\nPlz unsubcribe me.\nThankz a lot\nRamesh\n\n On 9/3/05, Matteo Beccati <[email protected]> wrote: \n> \n> Simon Riggs wrote:\n> > Returning to Matteo's original query, what we are saying is that the new\n> > optimization for MIN/MAX queries doesn't work with inherited tables.\n> >\n> > It could do, by running optimize_minmax_aggregates() for each query that\n> > gets planned to see if a better plan exists for each child table.\n> >\n> > I think that's a TODO item.\n> \n> Great. Of course I'm using ORDER BY ... LIMIT as a workaround to get the\n> index scan on pre-8.1, and because I'm used to it insted of the\n> previously not optimized MIN/MAX aggregates.\n> \n> > Optimizing ORDER BY and LIMIT down looks like it would be harder to do\n> > in the general case, even if Matteo's simple transform looks good. I'm\n> > not sure it's a very common query type though...\n> \n> If I can find some time, I'll try to write some hacks... 
I just need to\n> find out where to start ;)\n> \n> \n> Best regards\n> --\n> Matteo Beccati\n> http://phpadsnew.com/\n> http://phppgads.com/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n\n-- \nurs\n\nRameshKumar.M\n\nHi all\nI have got lot of information from ur group.\nNow i want to relieve from this group.\nI kindly request all of you.\nPlz unsubcribe me.\nThankz a lot\nRamesh \nOn 9/3/05, Matteo Beccati <[email protected]> wrote:\nSimon Riggs wrote:> Returning to Matteo's original query, what we are saying is that the new> optimization for MIN/MAX queries doesn't work with inherited tables.\n>> It could do, by running optimize_minmax_aggregates() for each query that> gets planned to see if a better plan exists for each child table.>> I think that's a TODO item.Great. Of course I'm using ORDER BY ... LIMIT as a workaround to get the\nindex scan on pre-8.1, and because I'm used to it insted of thepreviously not optimized MIN/MAX aggregates.> Optimizing ORDER BY and LIMIT down looks like it would be harder to do> in the general case, even if Matteo's simple transform looks good. I'm\n> not sure it's a very common query type though...If I can find some time, I'll try to write some hacks... I just need tofind out where to start ;)Best regards--Matteo Beccati\nhttp://phpadsnew.com/http://phppgads.com/---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate\n      subscribe-nomail command to [email protected] so that your      message can get through to the mailing list cleanly\n-- ursRameshKumar.M", "msg_date": "Sat, 3 Sep 2005 01:38:53 +0530", "msg_from": "Ramesh kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY and LIMIT not propagated on inherited" } ]
[ { "msg_contents": "Hi all,\n\n I have the following table:\n\nespsm_asme=# \\d statistics_sasme\n Table \"public.statistics_sasme\"\n Column | Type | \n Modifiers\n--------------------------+--------------------------+--------------------------------------------------------------\n statistic_id | numeric(10,0) | not null default \nnextval('STATISTICS_OPERATOR_ID_SEQ'::text)\n input_message_id | character varying(50) |\n timestamp_in | timestamp with time zone |\n telecom_operator_id | numeric(4,0) |\n enduser_number | character varying(15) | not null\n telephone_number | character varying(15) | not null\n application_id | numeric(10,0) |\n customer_id | numeric(10,0) |\n customer_app_config_id | numeric(10,0) |\n customer_app_contents_id | numeric(10,0) |\n message | character varying(160) |\n message_type_id | numeric(4,0) |\nIndexes:\n \"pk_stsasme_statistic_id\" primary key, btree (statistic_id)\nTriggers:\n \"RI_ConstraintTrigger_17328735\" AFTER INSERT OR UPDATE ON \nstatistics_sasme FROM telecom_operators NOT DEFERRABLE INITIALLY \nIMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \n\"RI_FKey_check_ins\"('fk_stsasme_telecom_operator_id', \n'statistics_sasme', 'telecom_operators', 'UNSPECIFIED', \n'telecom_operator_id', 'telecom_operator_id')\n \"RI_ConstraintTrigger_17328738\" AFTER INSERT OR UPDATE ON \nstatistics_sasme FROM applications NOT DEFERRABLE INITIALLY IMMEDIATE \nFOR EACH ROW EXECUTE PROCEDURE \n\"RI_FKey_check_ins\"('fk_stsasme_application_id', 'statistics_sasme', \n'applications', 'UNSPECIFIED', 'application_id', 'application_id')\n \"RI_ConstraintTrigger_17328741\" AFTER INSERT OR UPDATE ON \nstatistics_sasme FROM customers NOT DEFERRABLE INITIALLY IMMEDIATE FOR \nEACH ROW EXECUTE PROCEDURE \"RI_FKey_check_ins\"('fk_stsasme_customer_id', \n'statistics_sasme', 'customers', 'UNSPECIFIED', 'customer_id', \n'customer_id')\n\n\nThat contains about 7.000.000 entries and I have to remove 33.000 \nentries. I have created an sql file with all the delete sentences, e.g.:\n\n \"DELETE FROM statistics_sasme WHERE statistic_id = 9832;\"\n\nthen I do \\i delete_items.sql. Remove a single entry takes more than 10 \nseconds. What would you do to speed it up?\n\nThank you very much\n\n", "msg_date": "Fri, 02 Sep 2005 13:43:05 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Advise about how to delete entries" }, { "msg_contents": "On Fri, Sep 02, 2005 at 01:43:05PM +0200, Arnau wrote:\n>\n> statistic_id | numeric(10,0) | not null default \n> nextval('STATISTICS_OPERATOR_ID_SEQ'::text)\n\nAny reason this column is numeric instead of integer or bigint?\n\n> That contains about 7.000.000 entries and I have to remove 33.000 \n> entries. I have created an sql file with all the delete sentences, e.g.:\n> \n> \"DELETE FROM statistics_sasme WHERE statistic_id = 9832;\"\n> \n> then I do \\i delete_items.sql. Remove a single entry takes more than 10 \n> seconds. What would you do to speed it up?\n\nThe referential integrity triggers might be slowing down the delete.\nDo you have indexes on all foreign key columns that refer to this\ntable? Do all foreign key columns that refer to statistic_id have\nthe same type as statistic_id (numeric)? What's the output \"EXPLAIN\nANALYZE DELETE ...\"? 
Do you vacuum and analyze the tables regularly?\nWhat version of PostgreSQL are you using?\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 2 Sep 2005 07:36:10 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advise about how to delete entries" }, { "msg_contents": "\n> \"DELETE FROM statistics_sasme WHERE statistic_id = 9832;\"\n\n\tAs Michael said, why use a NUMERIC when a bigint is faster and better for \nyour use case, as you only need an integer and not a fixed precision \ndecimal ?\n\n\tAlso if you use postgres < 8, the index will not be used if you search on \na type different from the column type. So, if your key is a bigint, you \nshould do WHERE statistic_id = 9832::bigint.\n\n\tFor mass deletes like this, you should use one of the following, which \nwill be faster :\n\n\tDELETE FROM ... WHERE ID IN (list of values)\n\tDon't put the 30000 values in the same query, but rather do 300 queries \nwith 100 values in each.\n\n\tCOPY FROM a file with all the ID's to delete, into a temporary table, and \ndo a joined delete to your main table (thus, only one query).\n\n\tEXPLAIN DELETE is your friend.\n", "msg_date": "Fri, 02 Sep 2005 18:47:21 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advise about how to delete entries" }, { "msg_contents": "Hi all,\n\n >\n > COPY FROM a file with all the ID's to delete, into a temporary \ntable, and do a joined delete to your main table (thus, only one query).\n\n\n I already did this, but I don't have idea about how to do this join, \ncould you give me a hint ;-) ?\n\nThank you very much\n-- \nArnau\n\n", "msg_date": "Mon, 05 Sep 2005 11:26:32 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advise about how to delete entries" }, { "msg_contents": "Arnau wrote:\n\n> Hi all,\n>\n> >\n> > COPY FROM a file with all the ID's to delete, into a temporary \n> table, and do a joined delete to your main table (thus, only one query).\n>\n>\n> I already did this, but I don't have idea about how to do this join, \n> could you give me a hint ;-) ?\n>\n> Thank you very much\n\nmaybe something like this:\n\nDELETE FROM statistics_sasme s\n LEFT JOIN temp_table t ON (s.statistic_id = t.statistic_id)\nWHERE t.statistic_id IS NOT NULL\n\n", "msg_date": "Tue, 06 Sep 2005 05:05:25 -0600", "msg_from": "Kevin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advise about how to delete entries" }, { "msg_contents": "Kevin wrote:\n> Arnau wrote:\n>\n>> Hi all,\n>>\n>> >\n>> > COPY FROM a file with all the ID's to delete, into a temporary\n>> table, and do a joined delete to your main table (thus, only one query).\n>>\n>>\n>> I already did this, but I don't have idea about how to do this join,\n>> could you give me a hint ;-) ?\n>>\n>> Thank you very much\n>\n>\n> maybe something like this:\n>\n> DELETE FROM statistics_sasme s\n> LEFT JOIN temp_table t ON (s.statistic_id = t.statistic_id)\n> WHERE t.statistic_id IS NOT NULL\n>\n\nWhy can't you do:\nDELETE FROM statistics_sasme s JOIN temp_table t ON (s.statistic_id =\nt.statistic_id);\n\nOr possibly:\n\nDELETE FROM statistics_sasme s\n WHERE s.id IN (SELECT t.statistic_id FROM temp_table t);\n\nI'm not sure how delete exactly works with joins, but the IN form should\nbe approximately correct.\n\nJohn\n=:->", "msg_date": "Mon, 12 Sep 2005 01:06:41 -0400", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advise about how to delete entries" } ]
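To tie the answers above together: PostgreSQL's DELETE does not accept a JOIN clause, so the LEFT JOIN form shown by Kevin will be rejected as written; the IN form John gives is the portable way, and from 8.1 onward DELETE ... USING does the joined delete directly. A sketch using the same table names already used in the thread:

    -- portable form (works on 8.0): remove the ids that were COPYed into temp_table
    DELETE FROM statistics_sasme
     WHERE statistic_id IN (SELECT statistic_id FROM temp_table);

    -- PostgreSQL 8.1 and later: USING lets the temp table join into the delete
    DELETE FROM statistics_sasme
     USING temp_table t
     WHERE statistics_sasme.statistic_id = t.statistic_id;

Whichever form is used, Michael's earlier points about matching column types and indexing the referencing foreign-key columns are still worth checking.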
[ { "msg_contents": "\n\n\n\nHey there folks. I'm at a loss as to how to increase the speed of this\nquery. It's something I need to run each day, but can't at the rate this\nruns. Tables are updated 1/day and is vacuum analyzed after each load.\n\nselect ddw_tran_key, r.price_type_id, t.price_type_id\nfrom\ncdm.cdm_ddw_tran_item_header h JOIN cdm.cdm_ddw_tran_item t\non t.appl_xref=h.appl_xref\nJOIN\nmdc_upc u ON\nu.upc = t.item_upc\nJOIN\nmdc_price_history r\nON\nr. upc_id = u.keyp_upc and date(r.site_timestamp) = h.first_order_date\nwhere\ncal_date = '2005-08-31'\nand\nh.appl_id= 'MCOM'\nand tran_typ_id='S'\nlimit 1000\n\n\nMy explain is just horrendous:\n\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=288251.71..342657.36 rows=258 width=14)\n -> Merge Join (cost=288251.71..342657.36 rows=258 width=14)\n Merge Cond: ((\"outer\".appl_xref)::text = \"inner\".\"?column6?\")\n Join Filter: (date(\"inner\".site_timestamp) =\n\"outer\".first_order_date)\n -> Index Scan using cdm_ddw_tran_item_header_pkey on\ncdm_ddw_tran_item_header h (cost=0.00..51188.91 rows=789900 width=21)\n Filter: ((appl_id)::text = 'MCOM'::text)\n -> Sort (cost=288251.71..288604.31 rows=141038 width=39)\n Sort Key: (t.appl_xref)::text\n -> Hash Join (cost=29708.54..276188.93 rows=141038\nwidth=39)\n Hash Cond: (\"outer\".upc_id = \"inner\".keyp_upc)\n -> Seq Scan on mdc_price_history r\n(cost=0.00..189831.09 rows=11047709 width=16)\n -> Hash (cost=29698.81..29698.81 rows=3892 width=31)\n -> Nested Loop (cost=0.00..29698.81 rows=3892\nwidth=31)\n -> Index Scan using\ncdm_ddw_tran_item_cal_date on cdm_ddw_tran_item t (cost=0.00..14046.49\nrows=3891 width=35)\n Index Cond: (cal_date =\n'2005-08-31'::date)\n Filter: (tran_typ_id = 'S'::bpchar)\n -> Index Scan using mdcupcidx on mdc_upc\nu (cost=0.00..4.01 rows=1 width=12)\n Index Cond: (u.upc =\n\"outer\".item_upc)\n(18 rows)\n\n\n\nWhat I found is that I remove change the line:\nr.upc_id = u.keyp_upc and date(r.site_timestamp) = h.first_order_date\n\nTo\nr.upc_id = u.keyp_upc\n\nMy query plan drops to:\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=33327.39..37227.51 rows=1000 width=14)\n -> Hash Join (cost=33327.39..279027.01 rows=62998 width=14)\n Hash Cond: (\"outer\".upc_id = \"inner\".keyp_upc)\n -> Seq Scan on mdc_price_history r (cost=0.00..189831.09\nrows=11047709 width=8)\n -> Hash (cost=33323.05..33323.05 rows=1738 width=14)\n -> Nested Loop (cost=0.00..33323.05 rows=1738 width=14)\n -> Nested Loop (cost=0.00..26335.62 rows=1737\nwidth=18)\n -> Index Scan using cdm_ddw_tran_item_cal_date\non cdm_ddw_tran_item t (cost=0.00..14046.49 rows=3891 width=35)\n Index Cond: (cal_date =\n'2005-08-31'::date)\n Filter: (tran_typ_id = 'S'::bpchar)\n -> Index Scan using\ncdm_ddw_tran_item_header_pkey on cdm_ddw_tran_item_header h\n(cost=0.00..3.15 rows=1 width=17)\n Index Cond: ((\"outer\".appl_xref)::text =\n(h.appl_xref)::text)\n Filter: ((appl_id)::text = 'MCOM'::text)\n -> Index Scan using mdcupcidx on mdc_upc u\n(cost=0.00..4.01 rows=1 width=12)\n Index Cond: (u.upc = \"outer\".item_upc)\n(15 rows)\n\n\n\n\nUnfortunately, I need this criteria since it contains the first date of the\norder and is used to pull the correct price.\nAny suggestions?\nTIA\nPatrick\n\n\n\n\n", "msg_date": "Fri, 2 Sep 2005 09:12:04 -0700", 
"msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Poor SQL performance" } ]
[ { "msg_contents": "Place\n'and date(r.site_timestamp) = h.first_order_date'\nafter WHERE\n\nBest regards,\n Alexander Kirpa\n\n", "msg_date": "Sat, 3 Sep 2005 01:56:03 +0300", "msg_from": "\"Alexander Kirpa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor SQL performance" } ]
[ { "msg_contents": "Hi,\n \nIs there a way to improve the performance of the following query? \n \nSELECT * FROM SSIRRA where \n(YEAR = 2004 and CUSTOMER = 0000000004 and CODE = 00 and PART >= 00) or \n(YEAR = 2004 and CUSTOMER = 0000000004 and CODE > 00) or \n(YEAR = 2004 and CUSTOMER > 0000000004) or \n(YEAR > 2004)\n \nThanks in advance!\n \nBenkendorf\n\n\n__________________________________________________\nConverse com seus amigos em tempo real com o Yahoo! Messenger \nhttp://br.download.yahoo.com/messenger/ \nHi,\n \nIs there a way to improve the performance of the following query? \n \nSELECT * FROM SSIRRA where (YEAR = 2004 and CUSTOMER = 0000000004 and CODE = 00 and PART >= 00) or (YEAR = 2004 and CUSTOMER = 0000000004 and CODE > 00) or (YEAR = 2004 and CUSTOMER > 0000000004) or (YEAR > 2004)\n \nThanks in advance!\n \nBenkendorf__________________________________________________Converse com seus amigos em tempo real com o Yahoo! Messenger http://br.download.yahoo.com/messenger/", "msg_date": "Sat, 3 Sep 2005 21:02:27 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Improving performance of a query " }, { "msg_contents": "On Sat, Sep 03, 2005 at 09:02:27PM +0000, Carlos Benkendorf wrote:\n> Is there a way to improve the performance of the following query? \n> \n> SELECT * FROM SSIRRA where \n> (YEAR = 2004 and CUSTOMER = 0000000004 and CODE = 00 and PART >= 00) or \n> (YEAR = 2004 and CUSTOMER = 0000000004 and CODE > 00) or \n> (YEAR = 2004 and CUSTOMER > 0000000004) or \n> (YEAR > 2004)\n\nCould you post the EXPLAIN ANALYZE output of the query? It might\nalso be useful to see the EXPLAIN ANALYZE output for each of those\nWHERE conditions individually. Also, what are the table and index\ndefinitions? How many rows are in the table? What version of\nPostgreSQL are you using? Do you vacuum and analyze regularly?\n\nIn simple tests in 8.0.3 with random data -- which almost certainly\nhas a different distribution than yours -- I see about a 10%\nimprovement with a multi-column index on (year, customer, code,\npart) over using single-column indexes on each of those columns.\nVarious multi-column indexes on two or three of the columns gave\nworse performance than single-column indexes. Your results will\nprobably vary, however.\n\nIn my tests, 8.1beta1 with its bitmap scans was about 20-25% faster\nthan 8.0.3 with single-column indexes and about 35% faster with a\nfour-column index. 8.1beta1's use of a four-column index was about\n45% faster than 8.0.3's use of single-column indexes. Don't trust\nthese numbers too much, though -- I simply inserted 20,000 random\nrecords into a table and ran the above query.\n\n-- \nMichael Fuhr\n", "msg_date": "Sat, 3 Sep 2005 16:11:00 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving performance of a query" } ]
[ { "msg_contents": "Hello,\n\nWe have been experiencing poor performance of VACUUM in our production\ndatabase. Relevant details of our implementation are as follows:\n\n1. We have a database that grows to about 100GB.\n2. The database is a mixture of large and small tables.\n3. Bulk data (stored primarily in pg_largeobject, but also in various\nTOAST tables) comprises about 45% of our data.\n4. Some of our small tables are very active, with several hundred\nupdates per hour.\n5. We have a \"rolling delete\" function that purges older data on a\nperiodic basis to keep our maximum database size at or near 100GB.\n\nEverything works great until our rolling delete kicks in. Of course,\nwe are doing periodic VACUUMS on all tables, with frequent VACUUMs on\nthe more active tables. The problem arises when we start deleting the\nbulk data and have to VACUUM pg_largeobject and our other larger\ntables. We have seen VACUUM run for several hours (even tens of\nhours). During this VACUUM process, our smaller tables accumulate\ndead rows (we assume because of the transactional nature of the\nVACUUM) at a very rapid rate. Statistics are also skewed during this\nprocess and we have observed the planner choosing sequential scans on\ntables where it is obvious that an index scan would be more efficient.\n\nWe're looking for ways to improve the performance of VACUUM. We are\nalready experimenting with Hannu Krosing's patch for VACUUM, but it's\nnot really helping (we are still faced with doing a database wide\nVACUUM about once every three weeks or so as we approach the\ntransaction id rollover point... this VACUUM has been measured at 28\nhours in an active environment).\n\nOther things we're trying are partitioning tables (rotating the table\nthat updates happen to and using a view to combine the sub-tables for\nquerying). Unfortunately, we are unable to partition the\npg_largeobject table, and that table alone can take up 40+% of our\ndatabase storage. We're also looking at somehow storing our large\nobjects externally (as files in the local file system) and\nimplementing a mechanism similar to Oracle's bfile functionality. Of\ncourse, we can't afford to give up the transactional security of being\nable to roll back if a particular update doesn't succeed.\n\nDoes anyone have any suggestions to offer on good ways to proceed\ngiven our constraints? Thanks in advance for any help you can\nprovide.\n\n -jan-\n-- \nJan L. Peterson\n<[email protected]>\n", "msg_date": "Sun, 4 Sep 2005 00:16:10 -0600", "msg_from": "Jan Peterson <[email protected]>", "msg_from_op": true, "msg_subject": "poor VACUUM performance on large tables" }, { "msg_contents": "\nOn Sep 4, 2005, at 1:16 AM, Jan Peterson wrote:\n\n> Hello,\n>\n> We have been experiencing poor performance of VACUUM in our production\n> database. Relevant details of our implementation are as follows:\n>\n> 1. We have a database that grows to about 100GB.\n> 2. The database is a mixture of large and small tables.\n> 3. Bulk data (stored primarily in pg_largeobject, but also in various\n> TOAST tables) comprises about 45% of our data.\n> 4. Some of our small tables are very active, with several hundred\n> updates per hour.\n> 5. We have a \"rolling delete\" function that purges older data on a\n> periodic basis to keep our maximum database size at or near 100GB.\n>\n> Everything works great until our rolling delete kicks in. Of course,\n> we are doing periodic VACUUMS on all tables, with frequent VACUUMs on\n> the more active tables. 
The problem arises when we start deleting the\n> bulk data and have to VACUUM pg_largeobject and our other larger\n> tables. We have seen VACUUM run for several hours (even tens of\n> hours). During this VACUUM process, our smaller tables accumulate\n> dead rows (we assume because of the transactional nature of the\n> VACUUM) at a very rapid rate. Statistics are also skewed during this\n> process and we have observed the planner choosing sequential scans on\n> tables where it is obvious that an index scan would be more efficient.\n>\n> We're looking for ways to improve the performance of VACUUM. We are\n> already experimenting with Hannu Krosing's patch for VACUUM, but it's\n> not really helping (we are still faced with doing a database wide\n> VACUUM about once every three weeks or so as we approach the\n> transaction id rollover point... this VACUUM has been measured at 28\n> hours in an active environment).\n>\n> Other things we're trying are partitioning tables (rotating the table\n> that updates happen to and using a view to combine the sub-tables for\n> querying). Unfortunately, we are unable to partition the\n> pg_largeobject table, and that table alone can take up 40+% of our\n> database storage. We're also looking at somehow storing our large\n> objects externally (as files in the local file system) and\n> implementing a mechanism similar to Oracle's bfile functionality. Of\n> course, we can't afford to give up the transactional security of being\n> able to roll back if a particular update doesn't succeed.\n>\n> Does anyone have any suggestions to offer on good ways to proceed\n> given our constraints? Thanks in advance for any help you can\n> provide.\n>\n> -jan-\n\nDo you have your Free Space Map settings configured appropriately? \nSee section 16.4.3.2 of the docs:\n\nhttp://www.postgresql.org/docs/8.0/static/runtime-config.html#RUNTIME- \nCONFIG-RESOURCE\n\nYou'll want to run a VACUUM VERBOSE and note the numbers at the end, \nwhich describe how many pages are used and how many are needed. \nmax_fsm_pages should be set according to that, and you can set \nmax_fsm_relations based on it, too, although typically one knows \nroughly how many relations are in a database.\n\nhttp://www.postgresql.org/docs/8.0/static/sql-vacuum.html\n\nFinally, have you experimented with pg_autovacuum, which is located \nin contrib in the source tarballs (and is integrated into the backend \nin 8.1 beta and beyond)? You don't really say how often you're \nrunning VACUUM, and it might be that you're not vacuuming often enough.\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)", "msg_date": "Sun, 4 Sep 2005 09:58:01 -0500", "msg_from": "\"Thomas F. O'Connell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor VACUUM performance on large tables" }, { "msg_contents": "Jan Peterson <[email protected]> writes:\n> We have been experiencing poor performance of VACUUM in our production\n> database.\n\nWhich PG version, exactly?\n\n> Everything works great until our rolling delete kicks in. Of course,\n> we are doing periodic VACUUMS on all tables, with frequent VACUUMs on\n> the more active tables. The problem arises when we start deleting the\n> bulk data and have to VACUUM pg_largeobject and our other larger\n> tables. 
We have seen VACUUM run for several hours (even tens of\n> hours).\n\nPlain VACUUM (not FULL) certainly ought not take that long. (If you're\nusing VACUUM FULL, the answer is going to be \"don't do that\".) What\nmaintenance_work_mem (or vacuum_mem in older releases) are you running\nit under? Can you get VACUUM VERBOSE output from some of these cases\nso we can see which phase(s) are eating the time? It'd also be\ninteresting to watch the output of vmstat or local equivalent --- it\nmight just be that your I/O capability is nearly saturated and VACUUM is\npushing the system over the knee of the response curve. If so, the\nvacuum delay options of 8.0 would be worth experimenting with.\n\n> Statistics are also skewed during this\n> process and we have observed the planner choosing sequential scans on\n> tables where it is obvious that an index scan would be more efficient.\n\nThat's really pretty hard to believe; VACUUM doesn't affect the\nstatistics until the very end. Can you give some specifics of how\nthe \"statistics are skewed\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Sep 2005 19:01:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor VACUUM performance on large tables " }, { "msg_contents": "Thomas F. O'Connell:\n>Do you have your Free Space Map settings configured appropriately?\n\nOur current FSM settings are:\nmax_fsm_pages = 500000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n\n> You'll want to run a VACUUM VERBOSE and note the numbers at the end,\n> which describe how many pages are used and how many are needed.\n> max_fsm_pages should be set according to that, and you can set\n> max_fsm_relations based on it, too, although typically one knows\n> roughly how many relations are in a database.\n\nHere are the last two lines from a VACUUM VERBOSE FULL we did when the\ndatabase was totally full:\n\nINFO: free space map: 82 relations, 532349 pages stored; 632352 total\npages needed\nDETAIL: Allocated FSM size: 1000 relations + 500000 pages = 2995 kB\nshared memory.\nVACUUM \n\nBased on this, it looks like we could stand to bump up our FSM another\ncouple hundred thousand. Does it buy us anything to reduce the number\nof FSM relations from the default of 1000?\n\n> have you experimented with pg_autovacuum\n\nWe're not using pg_autovacuum. We have our own mechanism that works\nbasically the same as pg_autovacuum, but split into two separate\nthreads, one for large tables and one for small tables. We consider\ntables to be \"large\" if their size exceeds 100MB. Tables are selected\nfor vacuuming if they've changed \"enough\" (I can get you actual\nmetrics for what is \"enough\", but I don't know off the top of my\nhead). Our main reason for splitting out small vs. large tables was\nthat the large tables take a long time to VACUUM and we didn't want\nour small tables to go a long time between VACUUMs. Of course, due to\nthe transactional nature of VACUUM, we aren't really gaining much\nhere, anyway (this was one of the things we were hoping to address\nwith Hannu's patch, but there are issues with his patch on 8.0.2 that\nwe haven't tracked down yet).\n\nTom Lane:\n> Which PG version, exactly?\n\nWe're currently running 8.0.2.\n\n> Plain VACUUM (not FULL) certainly ought not take that long. (If you're\n> using VACUUM FULL, the answer is going to be \"don't do that\".)\n\nHeh, we're definitely not doing a VACUUM FULL. 
We're doing VACUUM\nANALYZE {tablename} exclusively, except when we get close to the\ntransaction id wraparound threshold when we do a VACUUM ANALYZE of the\nentire database.\n\n> What maintenance_work_mem (or vacuum_mem in older releases) are \n> you running it under? \n\nIt looks like we are using the defaults for work_mem (1024) and\nmaintenance_work_mem (16384). We could certainly bump these up. Is\nthere a good way to determine what settings would be reasonable? I'll\nnote, however, that we had experimented with bumping these previously\nand not noticed any change in performance.\n\n> Can you get VACUUM VERBOSE output from some of these cases\n> so we can see which phase(s) are eating the time? \n\nI'll get some, but it will take a few more days as we have recently\nreset our test environment. I can get some sample runs of VACUUM\nVERBOSE on pg_largeobject in a few hours (it takes a few hours to run)\nand will post them when I have them.\n\n> It'd also be interesting to watch the output of vmstat or local \n> equivalent --- it might just be that your I/O capability is nearly \n> saturated and VACUUM is pushing the system over the knee \n> of the response curve. If so, the vacuum delay options of 8.0 \n> would be worth experimenting with.\n\nWe've been monitoring I/O rates with iostat and we're generally\nrunning around 90% I/O usage after we kick into the rolling delete\nstage (before we reach that stage, we're running around 20%-50% I/O\nusage). We are definitely I/O bound, hence trying to find a way to\nmake VACUUM process less data.\n\nOur system (the database is on an appliance system) is a dual CPU box,\nand we're burning about 25% of our CPU time in I/O waits (again, after\nour rolling delete kicks in). A higher performance I/O subsystem is\nsomething we could try.\n\nOur biggest concern with increasing the vacuum delay options is the\nlength of time it currently takes to VACUUM our large tables (and\npg_largeobject). Holding a transaction open for these long periods\ndegrades performance in other places.\n\n> > Statistics are also skewed during this\n> > process and we have observed the planner choosing sequential scans on\n> > tables where it is obvious that an index scan would be more efficient.\n> \n> That's really pretty hard to believe; VACUUM doesn't affect the\n> statistics until the very end. Can you give some specifics of how\n> the \"statistics are skewed\"?\n\nI don't have any hard evidence for this, but we have noticed that at\ncertain times a particular query which we run will run for an\nextremely long time (several hours). Re-running the query with\nEXPLAIN always shows it using an index scan and it runs very quickly. \nWe haven't been able to catch it with an EXPLAIN in the state where it\nwill take a long time (it's not deterministic). Our assumption is\nthat the planner is taking the wrong path because we can't figure out\nany other reason why the query would take such a long time. We'll run\nsome more experiments and try to reproduce this behavior. Is there\nanything specific that would help track this down (other than getting\nEXPLAIN output showing the bogus execution plan)?\n\nThanks for your help.\n\n -jan-\n-- \nJan L. 
Peterson\n<[email protected]>\n", "msg_date": "Tue, 6 Sep 2005 18:54:24 -0600", "msg_from": "Jan Peterson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor VACUUM performance on large tables" }, { "msg_contents": "Jan Peterson <[email protected]> writes:\n> Based on this, it looks like we could stand to bump up our FSM another\n> couple hundred thousand. Does it buy us anything to reduce the number\n> of FSM relations from the default of 1000?\n\nNot a lot; as the comment says, those slots are only about 50 bytes\neach. (I think the true figure is closer to 70, according to some\nmeasurements I did recently on CVS tip, but in any case it's less than\n100 bytes apiece.) Still, a byte saved is a byte earned ...\n\n> It looks like we are using the defaults for work_mem (1024) and\n> maintenance_work_mem (16384). We could certainly bump these up. Is\n> there a good way to determine what settings would be reasonable?\n\nI'd bump up maintenance_work_mem by a factor of 10 and see if it makes a\ndifference. It should reduce the number of passes over the indexes when\nvacuuming up lots of deleted rows. If you have lots of RAM you might be\nable to increase it more, but try that for starters.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Sep 2005 21:07:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor VACUUM performance on large tables " } ]
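A minimal sketch of the knobs discussed in this thread, assuming PostgreSQL 8.0. The numeric values below are illustrative assumptions only, not figures taken from the posts; max_fsm_pages should be sized from the "total pages needed" line of a VACUUM VERBOSE run on the actual database, and the FSM settings require a server restart.

VACUUM VERBOSE;
-- last lines report e.g. "... 532349 pages stored; 632352 total pages needed"

# postgresql.conf (8.0); illustrative values, restart required for the FSM settings
max_fsm_pages = 700000           # comfortably above "total pages needed"
max_fsm_relations = 1000
maintenance_work_mem = 163840    # 160MB in kB; fewer index passes per VACUUM
vacuum_cost_delay = 10           # ms; throttles VACUUM I/O on a saturated disk
vacuum_cost_limit = 200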
[ { "msg_contents": "Hello,\n\nMy company has decided to migrate our Oracle database to postgresql8. We\nwill aquire a new server for this, and would very much appreciate your\nadvice.\n\nNOTE: The applications accessing the database are developed and\nmaintained externally, and unfortunately, the developers have not yet\ngiven us detailed information on their requirements. The only info I can\ngive so far is that the database size is about 60GB, and that it will be\nfrequently accessed by multiple users (about 100 will be connected\nduring business hours). The applications accessing the database are\nmostly reporting tools.\n\nI know that the performance question will ultimately boil down to \"it\ndepends what you want to do with it\", but at the moment I'm very much\ninterested if there are any general issues we should look out for.\n\nThe questions we are asking us now are:\n\n1) Intel or AMD (or alternate Platform)\nAre we better of with Xeons or Opterons? Should we consider the IBM\nOpenPower platform?\n\n2) CPUs vs cache\nWould you rather have more CPUs or more cache? Eg: 4x Xeon 1MB vs 2x\nXeon 8MB\n\n3) CPUs vs Memory\nWould you rather have 4x CPUs and 8GB of memory, or 2x CPUs with 16GB of\nmemory?\n\nThanks in advance for all your replies!\n\nBest Regards,\nChristian Kastner\n", "msg_date": "Mon, 5 Sep 2005 15:50:07 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql Hardware - Recommendations" }, { "msg_contents": "\n\n\nOn 9/5/05 6:50 AM, \"[email protected]\"\n<[email protected]> wrote:\n> The questions we are asking us now are:\n> \n> 1) Intel or AMD (or alternate Platform)\n> Are we better of with Xeons or Opterons? Should we consider the IBM\n> OpenPower platform?\n\n\nOpteron spanks Xeon for database loads. Advantage AMD, and you generally\nwon't have to spend much extra money for the privilege. I've never used\nPostgres on the IBM OpenPower platform, but I would expect that it would\nperform quite well, certainly better than the Xeons and probably competitive\nwith the Opterons in many respects -- I am not sufficiently knowledgeable to\nmake a definitive recommendation.\n\n \n> 2) CPUs vs cache\n> Would you rather have more CPUs or more cache? Eg: 4x Xeon 1MB vs 2x\n> Xeon 8MB\n\n\nI would expect that cache sizes are relatively unimportant compared to\nnumber of processors, but it would depend on the specifics of your load.\nCache coherence is a significant issue for high concurrency database\napplications, and a few megabytes of cache here and there will likely make\nlittle difference for a 60GB database. Databases spend most of their time\nplaying in main memory, not in cache. The biggest advantage I can see to\nbigger cache would be connection scaling, in which case you'll probably buy\nmore mileage with more processors.\n\nThere are a lot of architecture dependencies here. Xeons scale badly to 4\nprocessors, Opterons scale just fine.\n\n\n \n> 3) CPUs vs Memory\n> Would you rather have 4x CPUs and 8GB of memory, or 2x CPUs with 16GB of\n> memory?\n\n\nUh, for what purpose? CPU and memory are not fungible, so how you\ndistribute them depends very much on your application. You can never have\ntoo much memory for a large database, but having extra processors on a\nscalable architecture is pretty nice too. What they both buy you is not\nreally related. \n\nThe amount of memory you need is determined by the size of your cache-able\nworking set and the nature of your queries. 
Spend whatever money is left on\nthe processors; if your database spends all its time waiting for disks, no\nquantity of processors will help you unless you are doing a lot of math on\nthe results.\n\n\nYMMV, as always. Recommendations more specific than \"Opterons rule, Xeons\nsuck\" depend greatly on what you plan on doing with the database.\n\n\nCheers,\n\nJ. Andrew Rogers\n\n\n\n", "msg_date": "Tue, 06 Sep 2005 00:03:29 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql Hardware - Recommendations" }, { "msg_contents": "On 9/5/05, [email protected] <[email protected]> wrote:\n> ... The only info I can\n> give so far is that the database size is about 60GB, and that it will be\n> frequently accessed by multiple users (about 100 will be connected\n> during business hours). The applications accessing the database are\n> mostly reporting tools.\n\nOptimizing hardware for mostly selects is different than optimizing\nfor lots of inserts. You will get good responses from this list if you\ncan give a little more details. Here are some questions:\nHow do you get your data into the db? Do you do bullk loads at\nperiodic intervals during the day? Do you do frequent updates/inserts?\n\nYou say reporting, do you use many stored procedures and calculations\non the server side? I've used some reporting apps that simply grab\ntons of data from the server and then process it on the client side\n(ODBC apps seem to do this), while other applications formulate the\nqueries and use stored procedures in order to transfer little data.\n\nOf your 60GB, how much of that is active? Does your budget allow you\nto buy enough RAM to get your active data into the disk cache? For\nreporting, this *might* be your biggest win.\n\nHere are some scenarios:\nS1: Bulk uploads once or twice daily of about 250 MB of data. Few\ninserts and updates during the day (1-2%). Reporting is largely done\non data from the last 5 business days. In this case you have < 2GB of\nactive data and your disk cache will hold all of your active data in\nRAM (provided your db structure is diskcache friendly). An example of\nthis I have experienced is a sales application that queries current\ninventory. Telephone agents queried, quieried, queried the\ninstock-inventory.\n\nS2: Same as above but reporting is largely done on data covering 200+\nbusiness days. Its doubtful that you will get 50GB of RAM in your\nserver, you need to focus on disk speed. An example of this I have\nexperienced was an application that looked at sales trends and\nperformed commission calculations and projected sales forecasts.\n\nS3: Lots of inserts/updates throughout the day (15 - 25%) - you need\nto focus on disk speed. The content management system my employer\ndevelops fits this model.\n\n> 3) CPUs vs Memory\n> Would you rather have 4x CPUs and 8GB of memory, or 2x CPUs with 16GB of\n> memory?\n\nVery hard to say without knowing your application. I have limited\nexperience but what I've found is that applications that support\nmultiple db architectures do not fully utilize the database server and\nCPU utilization is low. Disk and network i/o is high. 
I don't know if\nyour application supports multiple backeneds, but chances are good\nyour biggest wins will come from RAM, disk and network investments.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 6 Sep 2005 10:09:10 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql Hardware - Recommendations" } ]
[ { "msg_contents": "Carlos wrote:\nSELECT * FROM SSIRRA where \n(YEAR = 2004 and CUSTOMER = 0000000004 and CODE = 00 and PART >= 00) or \n(YEAR = 2004 and CUSTOMER = 0000000004 and CODE > 00) or \n(YEAR = 2004 and CUSTOMER > 0000000004) or \n(YEAR > 2004)\n[snip]\n\nah, the positional query. You can always rewrite this query in the\nfollowing form:\n\n(YEAR >= 2004) and\n(YEAR = 2004 or CUSTOMER >= 0000000004) and\n(YEAR = 2004 or CUSTOMER = 0000000004 or CODE >= 00) and\n(YEAR = 2004 or CUSTOMER = 0000000004 or CODE = 00 or PART > 00) \n\nThis is better because it will index scan using 'year' (not customer or\npart though). The true answer is to lobby for/develop proper row\nconstructor support so you can just \n\nSELECT * FROM SSIRRA where (YEAR, CUSTOMER, CODE, PART) > (2004,\n0000000004, 00, 00)\n\nthis is designed to do what you are trying to do but currently doesn't\nwork quite right.\n\nnote: in all these queries, 'order by YEAR, CUSTOMER, CODE, PART' should\nprobably be on the query.\n\nOther solution: use cursor/fetch or some type of materialized solution. \n\nMerlin\n", "msg_date": "Tue, 6 Sep 2005 08:59:46 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving performance of a query " }, { "msg_contents": "On Tue, 6 Sep 2005, Merlin Moncure wrote:\n\n> Carlos wrote:\n> SELECT * FROM SSIRRA where\n> (YEAR = 2004 and CUSTOMER = 0000000004 and CODE = 00 and PART >= 00) or\n> (YEAR = 2004 and CUSTOMER = 0000000004 and CODE > 00) or\n> (YEAR = 2004 and CUSTOMER > 0000000004) or\n> (YEAR > 2004)\n> [snip]\n>\n> ah, the positional query. You can always rewrite this query in the\n> following form:\n>\n> (YEAR >= 2004) and\n> (YEAR = 2004 or CUSTOMER >= 0000000004) and\n> (YEAR = 2004 or CUSTOMER = 0000000004 or CODE >= 00) and\n> (YEAR = 2004 or CUSTOMER = 0000000004 or CODE = 00 or PART > 00)\n\nUnless I'm not seeing something, I don't think that's a correct\nreformulation in general. If customer < 4 and year > 2004 the original\nclause would return true but the reformulation would return false since\n(year=2004 or customer >= 4) would be false.\n", "msg_date": "Tue, 6 Sep 2005 07:09:08 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving performance of a query " } ]
[ { "msg_contents": "Hi, \n\nI usually use PostgreSQL coupled with Linux, but I have to use Windows for a \nparticular project.\n\nSo I wanted to do some tests to know if the performance will be acceptable (I \ndon't need PostgreSQL to be as fast with Windows as with Linux, but it has to \nbe usable...).\n\nI started with trying to do lots of inserts, and I'm quite astonished by the \ncatastrophic results ...\n\nThe test :\nThe computer was the same (my workstation, a PIV Dell with SATA disk), dual \nboot\n\nThe Windows OS is XP.\n\nBoth OSes run PostgreSQL 8.0.3\n\nBoth PostgreSQL clusters (Windows and Linux) have the same tuning \n(shared_buffers=20000, wal_buffers=128, checkpoint_segments=10)\n\nBefore each test, the clusters are vacuum analyzed, and the test database is \nrecreated.\n\nThe script is quite dumb :\nBEGIN;\nCREATE TABLE test (col1 serial, col2 text);\nINSERT INTO test (col2) values ('test');\nINSERT INTO test (col2) values ('test');\nINSERT INTO test (col2) values ('test');\nINSERT INTO test (col2) values ('test');\nINSERT INTO test (col2) values ('test');\n...... 500,000 times\nThen COMMIT.\n\nI know it isn't realistic, but I needed to start with something :)\n\nThe results are as follows :\nLinux : 1'9''\nWindows : 9'38''\n\nWhat I've tried to solve, and didn't work :\n\n- Deactivate antivirus on windows\n- fsync=no\n- raise the checkpoint_segments value (32)\n- remove hyperthreading (who knows...)\n\nI don't know what could cause this (I'm not a Windows admin... at all). All I \nsee is a very high kernel load during the execution of this script, but I \ncan't determine where it comes from.\n\n\nI'd like to know if this is a known problem, if there is something I can do, \netc...\n\nThanks a lot.\n", "msg_date": "Tue, 6 Sep 2005 16:12:27 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "insert performance for win32" } ]
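For comparison when reproducing this test, a set-oriented load of the same volume (the generate_series form that comes up later in this thread) gives a useful baseline that removes per-statement parsing from the picture; table and column names follow the script above.

\timing
BEGIN;
CREATE TABLE test (col1 serial, col2 text);
-- one statement instead of 500,000 separate INSERTs
INSERT INTO test (col2) SELECT 'test' FROM generate_series(1, 500000);
COMMIT;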
[ { "msg_contents": "> > Carlos wrote:\n> > SELECT * FROM SSIRRA where\n> > (YEAR = 2004 and CUSTOMER = 0000000004 and CODE = 00 and PART >= 00)\nor\n> > (YEAR = 2004 and CUSTOMER = 0000000004 and CODE > 00) or\n> > (YEAR = 2004 and CUSTOMER > 0000000004) or\n> > (YEAR > 2004)\n> > [snip]\n> >\n> > ah, the positional query. You can always rewrite this query in the\n> > following form:\n> >\n> > (YEAR >= 2004) and\n> > (YEAR = 2004 or CUSTOMER >= 0000000004) and\n> > (YEAR = 2004 or CUSTOMER = 0000000004 or CODE >= 00) and\n> > (YEAR = 2004 or CUSTOMER = 0000000004 or CODE = 00 or PART > 00)\n> \n> Unless I'm not seeing something, I don't think that's a correct\n> reformulation in general. If customer < 4 and year > 2004 the original\n> clause would return true but the reformulation would return false\nsince\n> (year=2004 or customer >= 4) would be false.\n\nYou are correct, you also have to exchange '=' with '>' to exchange\n'and' with 'or'. \n\nCorrect answer is:\n> > (YEAR >= 2004) and\n> > (YEAR > 2004 or CUSTOMER >= 0000000004) and\n> > (YEAR > 2004 or CUSTOMER > 0000000004 or CODE >= 00) and\n> > (YEAR > 2004 or CUSTOMER > 0000000004 or CODE > 00 or PART > 00)\n\nIt's easy to get tripped up here: the basic problem is how to get the\nnext record based on a multi part key. My ISAM bridge can write them\neither way but the 'and' major form is always faster ;).\n\nMErlin\n", "msg_date": "Tue, 6 Sep 2005 10:47:16 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving performance of a query " } ]
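Putting Merlin's corrected "and major" predicate together with the suggested ordering, a fetch-the-next-row query would look like the sketch below. The LIMIT 1 is an assumption about the intended use (grab the single row following the key 2004/4/0/0); drop it to page through the remainder of the table in key order.

SELECT *
FROM SSIRRA
WHERE (YEAR >= 2004)
  AND (YEAR > 2004 OR CUSTOMER >= 0000000004)
  AND (YEAR > 2004 OR CUSTOMER > 0000000004 OR CODE >= 00)
  AND (YEAR > 2004 OR CUSTOMER > 0000000004 OR CODE > 00 OR PART > 00)
ORDER BY YEAR, CUSTOMER, CODE, PART
LIMIT 1;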
[ { "msg_contents": "> Hi,\n> \n> I usually use PostgreSQL coupled with Linux, but I have to use Windows\nfor\n> a\n> perticular project.\n> \n> So I wanted to do some tests to know if the performance will be\nacceptable\n> (I\n> don't need PostgreSQL to be as fast with windows as with linux, but it\nhas\n> to\n> be usable...).\n\nIn my experience win32 is par with linux generally with a few gotchas on\neither side. Are your times with fsync=no? It's much harder to give\napples-apples comparison with fsync=on for various reasons.\n\nAre you running stats_command_string=on? Try disabling and compare\nresults.\nIs your loading app running locally or on the server?\n\nI am very interesting in discovering sources of high cpu load problems\non win32. If you are still having problems could you get a gprof\nprofile together? There is a recent thread on win32-hackers discussing\nhow to do this.\n\nMerlin\n\n\n", "msg_date": "Tue, 6 Sep 2005 10:56:26 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": ">\n> In my experience win32 is par with linux generally with a few gotchas on\n> either side.  Are your times with fsync=no? It's much harder to give\n> apples-apples comparison with fsync=on for various reasons.\nIt is with fsync=off on windows, fsync=on on linux\n\n>\n> Are you running stats_command_string=on?  Try disabling and compare\n> results.\nDeactivated on windows, activated on linux\n\n> Is your loading app running locally or on the server?\nYes\n>\n> I am very interesting in discovering sources of high cpu load problems\n> on win32.  If you are still having problems could you get a gprof\n> profile together?  There is a recent thread on win32-hackers discussing\n> how to do this.\nI'll give it a look....\n>\n> Merlin\n", "msg_date": "Tue, 6 Sep 2005 17:13:41 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "Hello,\n\nWe are seeing a very strange behavior from postgres. For one of our very common tasks we have to delete records from a table of around 500,000 rows. The delete is by id which is the primary key. It seems to be consistently taking around 10 minutes to preform. This is totally out of line with the rest of the performance of the database.\n\nThis table does have children through foreign keys, but I am pretty sure that all foreign key constraints in the schema have indexes on their children. \n\nSometimes if we do a vacuum right before running the process the delete will go much faster. But then the next time we run the task, even just a few minutes later, the delete takes a long time to run.\n\nWe deploy the same application also on Oracle. The schemas are pretty much identical. On similar hardware with actually about 4 to 5 times the data, Oracle does not seem to have the same problem. Not that that really means anything since the internals of Oracle and PostgreSQL are so different, but an interesting fact anyway.\n\nAny ideas on what might be going on?\n\nThanks,\nB.", "msg_date": "Tue, 6 Sep 2005 08:04:22 -0700", "msg_from": "\"Brian Choate\" <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance of delete by primary key" }, { "msg_contents": "\"Brian Choate\" <[email protected]> writes:\n> We are seeing a very strange behavior from postgres. For one of our very =\n> common tasks we have to delete records from a table of around 500,000 =\n> rows. The delete is by id which is the primary key. It seems to be =\n> consistently taking around 10 minutes to preform. This is totally out of =\n> line with the rest of the performance of the database.\n\nI'll bet this table has foreign-key references from elsewhere, and the\nreferencing columns are either not indexed, or not of the same datatype\nas the master column.\n\nUnfortunately there's no very simple way to determine which FK is the\nproblem. (In 8.1 it'll be possible to do that with EXPLAIN ANALYZE,\nbut in existing releases EXPLAIN doesn't break out the time spent in\neach trigger ...) 
You have to just eyeball the schema :-(.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Sep 2005 11:32:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key " }, { "msg_contents": "On Tue, Sep 06, 2005 at 11:32:00AM -0400, Tom Lane wrote:\n> \"Brian Choate\" <[email protected]> writes:\n> > We are seeing a very strange behavior from postgres. For one of our very =\n> > common tasks we have to delete records from a table of around 500,000 =\n> > rows. The delete is by id which is the primary key. It seems to be =\n> > consistently taking around 10 minutes to preform. This is totally out of =\n> > line with the rest of the performance of the database.\n> \n> I'll bet this table has foreign-key references from elsewhere, and the\n> referencing columns are either not indexed, or not of the same datatype\n> as the master column.\n\nWouldn't setting the FK as deferrable and initially deferred help here\ntoo as then the FK wouldn't be checked until the transaction ended?\n\nMatthew\n", "msg_date": "Tue, 6 Sep 2005 16:43:15 +0100", "msg_from": "Matthew Sackman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key" }, { "msg_contents": "Brian Choate wrote:\n> Hello,\n> \n> We are seeing a very strange behavior from postgres. For one of our\n> very common tasks we have to delete records from a table of around\n> 500,000 rows. The delete is by id which is the primary key. It seems\n> to be consistently taking around 10 minutes to preform. This is\n> totally out of line with the rest of the performance of the database.\n\n> Any ideas on what might be going on?\n\nWell, it sounds like *something* isn't using an index. You say that all \nyour FK's are indexed, but that's something worth checking. Also keep an \neye out for type conflicts.\n\nIf the system is otherwise idle, it might be worthwhile to compare \nbefore and after values of pg_stat* (user-tables and user-indexes).\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 06 Sep 2005 16:51:05 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key" }, { "msg_contents": "I had a similar problem, so I downloaded 8.1 from CVS, ran it on a\nrelatively gnarly dev workstation, imported a dump of my 8.0 database,\nand ran my troublesome queries with the new EXPLAIN ANALYZE.\n\nThis process took about an hour and worked great, provided that you've\nactually named your foreign key constraints. Otherwise, you'll find out\nthat there's a trigger for a constraint called $3 that's taking up all\nof your time, but you won't know what table that constraint is on.\n\n-- Mark\n\n\n\nOn Tue, 2005-09-06 at 11:32 -0400, Tom Lane wrote:\n> \"Brian Choate\" <[email protected]> writes:\n> > We are seeing a very strange behavior from postgres. For one of our very =\n> > common tasks we have to delete records from a table of around 500,000 =\n> > rows. The delete is by id which is the primary key. It seems to be =\n> > consistently taking around 10 minutes to preform. This is totally out of =\n> > line with the rest of the performance of the database.\n> \n> I'll bet this table has foreign-key references from elsewhere, and the\n> referencing columns are either not indexed, or not of the same datatype\n> as the master column.\n> \n> Unfortunately there's no very simple way to determine which FK is the\n> problem. 
(In 8.1 it'll be possible to do that with EXPLAIN ANALYZE,\n> but in existing releases EXPLAIN doesn't break out the time spent in\n> each trigger ...) You have to just eyeball the schema :-(.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Tue, 06 Sep 2005 10:04:38 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n> I had a similar problem, so I downloaded 8.1 from CVS, ran it on a\n> relatively gnarly dev workstation, imported a dump of my 8.0 database,\n> and ran my troublesome queries with the new EXPLAIN ANALYZE.\n\n> This process took about an hour and worked great, provided that you've\n> actually named your foreign key constraints. Otherwise, you'll find out\n> that there's a trigger for a constraint called $3 that's taking up all\n> of your time, but you won't know what table that constraint is on.\n\nBut at least you've got something you can work with. Once you know the\nname of the problem trigger you can look in pg_trigger to see which\nother table it's connected to. Try something like\n\n\tselect tgname, tgconstrrelid::regclass, tgargs from pg_trigger\n\twhere tgrelid = 'mytable'::regclass;\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Sep 2005 13:42:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key " }, { "msg_contents": "> Unfortunately there's no very simple way to determine which FK is the\n> problem. (In 8.1 it'll be possible to do that with EXPLAIN ANALYZE,\n> but in existing releases EXPLAIN doesn't break out the time spent in\n> each trigger ...) You have to just eyeball the schema :-(.\n\nphpPgAdmin has a handy info feature where you can see all tables that \nrefer to the current one. You can always go and steal that query to \nfind them...\n\nChris\n", "msg_date": "Wed, 07 Sep 2005 11:07:04 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key" }, { "msg_contents": "On Wed, Sep 07, 2005 at 11:07:04AM +0800, Christopher Kings-Lynne wrote:\n> >Unfortunately there's no very simple way to determine which FK is the\n> >problem. (In 8.1 it'll be possible to do that with EXPLAIN ANALYZE,\n> >but in existing releases EXPLAIN doesn't break out the time spent in\n> >each trigger ...) You have to just eyeball the schema :-(.\n> \n> phpPgAdmin has a handy info feature where you can see all tables that \n> refer to the current one. You can always go and steal that query to \n> find them...\n\nYou can also use pg_user_foreighn_key* from\nhttp://pgfoundry.org/projects/newsysviews/.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 7 Sep 2005 16:38:34 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance of delete by primary key" } ]
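Once the slow trigger has been traced to a referencing table with Tom's pg_trigger query, the usual fix is simply an index on the referencing column, declared with the same datatype as the parent key. A hypothetical sketch follows; child_table, parent_id, parent_table and the literal 12345 are made-up names used only for illustration.

CREATE INDEX child_table_parent_id_idx ON child_table (parent_id);
ANALYZE child_table;
-- then re-test the original statement, e.g.:
-- EXPLAIN ANALYZE DELETE FROM parent_table WHERE id = 12345;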
[ { "msg_contents": "> > In my experience win32 is par with linux generally with a few gotchas on\n> > either side.  Are your times with fsync=no? It's much harder to give\n> > apples-apples comparison with fsync=on for various reasons.\n> It is with fsync=off on windows, fsync=on on linux\n\nwell, inside a transaction this shouldn't have mattered anyways.\n \n> > Are you running stats_command_string=on?  Try disabling and compare\n> > results.\n> Deactivated on windows, activated on linux\n \n> > Is your loading app running locally or on the server?\n> Yes\n\nhm :(. Well, you had me curious so I went ahead and re-ran your test case and profiled it (on windows). I got similar results time wise. It's interesting to note that the command I used to generate the test table before dumping w/inserts\n\ninsert into test select nextval('test_id_seq'), 'test' from generate_series(1,500000) \n\nran in just a few seconds. \n\nWell, I cut the #recs down to 50k and here is profile trace:\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 10.78 0.62 0.62 50001 0.00 0.00 yyparse\n 5.39 0.93 0.31 5101422 0.00 0.00 AllocSetAlloc\n 4.52 1.19 0.26 799970 0.00 0.00 base_yylex\n 2.78 1.35 0.16 299998 0.00 0.00 SearchCatCache\n 2.43 1.49 0.14 554245 0.00 0.00 hash_search\n 2.26 1.62 0.13 49998 0.00 0.00 XLogInsert\n 1.74 1.72 0.10 453363 0.00 0.00 LWLockAcquire\n 1.74 1.82 0.10 299988 0.00 0.00 ScanKeywordLookup\n\nThis makes me wonder if we are looking in the wrong place. Maybe the problem is coming from psql? More results to follow.\n\nMerlin\n", "msg_date": "Tue, 6 Sep 2005 12:16:42 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> This makes me wonder if we are looking in the wrong place. Maybe the\n> problem is coming from psql? More results to follow.\n\nproblem is not coming from psql. \n\nOne thing I did notice that in a 250k insert transaction the insert time\ngrows with #recs inserted. Time to insert first 50k recs is about 27\nsec and last 50 k recs is 77 sec. I also confimed that size of table is\nnot playing a role here.\n\nMarc, can you do select timeofday() every 50k recs from linux? Also a\ngprof trace from linux would be helpful.\n\nMerlin\n", "msg_date": "Tue, 6 Sep 2005 13:11:18 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" }, { "msg_contents": "On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:\n> > This makes me wonder if we are looking in the wrong place. Maybe the\n> > problem is coming from psql? More results to follow.\n>\n> problem is not coming from psql.\n>\n> One thing I did notice that in a 250k insert transaction the insert time\n> grows with #recs inserted. Time to insert first 50k recs is about 27\n> sec and last 50 k recs is 77 sec. I also confimed that size of table is\n> not playing a role here.\n>\n> Marc, can you do select timeofday() every 50k recs from linux? Also a\n> gprof trace from linux would be helpful.\n>\n\nHere's the timeofday ... i'll do the gprof as soon as I can.\nEvery 50000 rows...\n\nWed Sep 07 13:58:13.860378 2005 CEST\nWed Sep 07 13:58:20.926983 2005 CEST\nWed Sep 07 13:58:27.928385 2005 CEST\nWed Sep 07 13:58:35.472813 2005 CEST\nWed Sep 07 13:58:42.825709 2005 CEST\nWed Sep 07 13:58:50.789486 2005 CEST\nWed Sep 07 13:58:57.553869 2005 CEST\nWed Sep 07 13:59:04.298136 2005 CEST\nWed Sep 07 13:59:11.066059 2005 CEST\nWed Sep 07 13:59:19.368694 2005 CEST\n\n\n\n\n\n> Merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n", "msg_date": "Wed, 7 Sep 2005 14:02:02 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert performance for win32" } ]
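For anyone reproducing the measurement, the per-50,000-row timestamps above can be captured by interleaving timeofday() calls into the generated script, along these lines:

BEGIN;
SELECT timeofday();
INSERT INTO test (col2) VALUES ('test');
-- ... 49,999 more INSERTs ...
SELECT timeofday();
-- ... repeat for each 50,000-row batch ...
COMMIT;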
[ { "msg_contents": "Hi, \n \nWe want to discover how to improve the performance of an application and with that intention I turned on log_duration, log_statement=all and the time stamp escape character (%t) of log_line_prefix. \n \nSubtracting the time stamp of the last SQL statement from the first one I discovered that the whole application takes about 10 seconds to run. Almost the same time we have at the client workstation.\n \nAdding all the log_duration times I found almost 3 seconds (30% of the total time). \n \nSo, I realized that to improve performance it will be better to discover who is spending the 7 remaining seconds than making changes in database structure or SQL syntax.\n \nHow could I discover who is using the 7 remaining seconds? Network? ODBC? Application?\n \nThanks in advance!\n \nReimer\n", "msg_date": "Tue, 6 Sep 2005 17:34:12 -0300 (ART)", "msg_from": "Carlos Henrique Reimer <[email protected]>", "msg_from_op": true, "msg_subject": "log_duration times " } ]
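For reference, the kind of logging setup described above uses these 8.0 parameter names; log_min_duration_statement is shown as an optional extra, an assumption about what to try next if only the slower statements are of interest.

# postgresql.conf
log_statement = 'all'
log_duration = on
log_line_prefix = '%t [%p] '           # timestamp and backend PID on every line
#log_min_duration_statement = 100      # ms; log only statements slower than this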
[ { "msg_contents": "Andrew, Matthew, thanks to you both four your advice. I'm sorry I couldn't provide more details to the situation, I will post again as soon I get them. \n\nTime to share your insights with the colleagues :)\n\nBest Regards,\nChris\n\n-----Ursprüngliche Nachricht-----\nVon: Paul Ramsey [mailto:[email protected]] \nGesendet: Dienstag, 06. September 2005 06:13\nAn: Kastner Christian; Kastner Christian\nBetreff: Re: [PERFORM] Postgresql Hardware - Recommendations\n\nFor a database, I would almost always prioritize:\n- I/O\n- RAM\n- CPU\n\nSo, fast drives (SCSI 10000RPM or better in a RAID configuration, \nmore spindles == more throughput), then memory (more memory == more \nof the database off disk in cache == faster response), then more CPU \n(more concurrent request handling).\n\nPaul\n\nOn 5-Sep-05, at 6:50 AM, <[email protected]> \n<[email protected]> wrote:\n\n> Hello,\n>\n> My company has decided to migrate our Oracle database to \n> postgresql8. We\n> will aquire a new server for this, and would very much appreciate your\n> advice.\n>\n> NOTE: The applications accessing the database are developed and\n> maintained externally, and unfortunately, the developers have not yet\n> given us detailed information on their requirements. The only info \n> I can\n> give so far is that the database size is about 60GB, and that it \n> will be\n> frequently accessed by multiple users (about 100 will be connected\n> during business hours). The applications accessing the database are\n> mostly reporting tools.\n>\n> I know that the performance question will ultimately boil down to \"it\n> depends what you want to do with it\", but at the moment I'm very much\n> interested if there are any general issues we should look out for.\n>\n> The questions we are asking us now are:\n>\n> 1) Intel or AMD (or alternate Platform)\n> Are we better of with Xeons or Opterons? Should we consider the IBM\n> OpenPower platform?\n>\n> 2) CPUs vs cache\n> Would you rather have more CPUs or more cache? Eg: 4x Xeon 1MB vs 2x\n> Xeon 8MB\n>\n> 3) CPUs vs Memory\n> Would you rather have 4x CPUs and 8GB of memory, or 2x CPUs with \n> 16GB of\n> memory?\n>\n> Thanks in advance for all your replies!\n>\n> Best Regards,\n> Christian Kastner\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n", "msg_date": "Wed, 7 Sep 2005 11:04:43 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql Hardware - Recommendations" } ]
[ { "msg_contents": "> > One thing I did notice that in a 250k insert transaction the insert\ntime\n> > grows with #recs inserted. Time to insert first 50k recs is about\n27\n> > sec and last 50 k recs is 77 sec. I also confimed that size of\ntable is\n> > not playing a role here.\n> >\n> > Marc, can you do select timeofday() every 50k recs from linux? Also\na\n> > gprof trace from linux would be helpful.\n> >\n> \n> Here's the timeofday ... i'll do the gprof as soon as I can.\n> Every 50000 rows...\n> \nWere those all in a single transaction?\n\nMerlin\n", "msg_date": "Wed, 7 Sep 2005 08:04:15 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert performance for win32" } ]
[ { "msg_contents": "> On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:\n> Here's the timeofday ... i'll do the gprof as soon as I can.\n> Every 50000 rows...\n> \n> Wed Sep 07 13:58:13.860378 2005 CEST\n> Wed Sep 07 13:58:20.926983 2005 CEST\n> Wed Sep 07 13:58:27.928385 2005 CEST\n> Wed Sep 07 13:58:35.472813 2005 CEST\n> Wed Sep 07 13:58:42.825709 2005 CEST\n> Wed Sep 07 13:58:50.789486 2005 CEST\n> Wed Sep 07 13:58:57.553869 2005 CEST\n> Wed Sep 07 13:59:04.298136 2005 CEST\n> Wed Sep 07 13:59:11.066059 2005 CEST\n> Wed Sep 07 13:59:19.368694 2005 CEST\n\nok, I've been in crunching profile profile graphs, and so far have been\nonly been able to draw following conclusions.\n\nFor bulk, 'in-transaction' insert:\n1. win32 is slower than linux. win32 time for each insert grows with #\ninserts in xact, linux does not (or grows much slower). Win32 starts\nout about 3x slower and grows to 10x slower after 250k inserts.\n\n2. ran a 50k profile vs. 250k profile. Nothing jumps out as being\nslower or faster: most time is spent in yyparse on either side. From\nthis my preliminary conclusion is that there is something going on in\nthe win32 api which is not showing in the profile.\n\n3. The mingw gprof cumulative seconds does not show measurable growth in\ncpu time/insert in 50k/250k profile.\n\nI'm now talking suggestions about where to look for performance problems\n:(.\nMerlin\n", "msg_date": "Wed, 7 Sep 2005 09:08:18 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] insert performance for win32" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> ok, I've been in crunching profile profile graphs, and so far have been\n> only been able to draw following conclusions.\n\n> For bulk, 'in-transaction' insert:\n> 1. win32 is slower than linux. win32 time for each insert grows with #\n> inserts in xact, linux does not (or grows much slower). Win32 starts\n> out about 3x slower and grows to 10x slower after 250k inserts.\n\nJust to be clear: what you were testing was\n\tBEGIN;\n\tINSERT ... VALUES (...);\n\trepeat insert many times\n\tCOMMIT;\nwith each statement issued as a separate PQexec() operation, correct?\nWas this set up as a psql script, or specialized C code? (If a psql\nscript, I wonder whether it's psql that's chewing the time.)\n\n> 2. ran a 50k profile vs. 250k profile. Nothing jumps out as being\n> slower or faster: most time is spent in yyparse on either side. From\n> this my preliminary conclusion is that there is something going on in\n> the win32 api which is not showing in the profile.\n\nHmm. Client/server data transport maybe? It would be interesting to\ntry inserting the same data in other ways:\n\t* COPY from client\n\t* COPY from disk file\n\t* INSERT/SELECT from another table\nand see whether you see a similar slowdown.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Sep 2005 11:02:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] insert performance for win32 " } ]
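The three alternative loads Tom suggests could be run roughly as follows; the table, column and file names are illustrative only and assume the same test table as earlier in the thread.

-- COPY from the client
\copy test (col2) from 'data.txt'

-- COPY read directly by the server process
COPY test (col2) FROM '/tmp/data.txt';

-- INSERT ... SELECT from another table already loaded with the same rows
INSERT INTO test (col2) SELECT col2 FROM test_source;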
[ { "msg_contents": "Hello,\n\nI'm a newbie in postgresql, I've installed it on a Windows XP machine\n( I can't use linux, it's a company machine ), I'm courious why this\nquery takes so long\n\nSELECT \"Rut Cliente\"\nFROM \"Internet_Abril\"\nWHERE \"Rut Cliente\" NOT IN ((SELECT \"Rut Cliente\" FROM\n\"Internet_Enero\") UNION (SELECT \"Rut Cliente\" FROM\n\"Internet_Febrero\") UNION (SELECT \"Rut Cliente\" FROM\n\"Internet_Marzo\"));\n\nit takes about 100 minutes to complete the query.\nAll tables has index created ( Rut Cliente is a VarChar ), and tables\nhas 50.000 records each.\n\nThe explain for the query tells the following\n\n\"QUERY PLAN\n Seq Scan on \"Internet_Abril\" (cost=19406.67..62126112.70 rows=24731 width=13)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=19406.67..21576.07 rows=136740 width=13)\n -> Unique (cost=17784.23..18467.93 rows=136740 width=13)\n -> Sort (cost=17784.23..18126.08 rows=136740 width=13)\n Sort\nKey: \"Rut Cliente\"\n -> Append (cost=0.00..3741.80 rows=136740 width=13)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1233.38\nrows=45069 width=13)\n -> Seq Scan on \"Internet_Enero\" (cost=0.00..782.69\nrows=45069 width=13)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1104.06\nrows=40353 width=13)\n -> Seq Scan on \"Internet_Febrero\" (cost=0.00..700.53\nrows=40353 width=13)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..1404.36\nrows=51318 width=13)\n -> Seq Scan on \"Internet_Marzo\" (cost=0.00..891.18\nrows=51318 width=13)\n\nAny help will be apreciated, It's for my thesis\n\n\nsaludos\nChristian\n", "msg_date": "Wed, 7 Sep 2005 12:22:27 -0400", "msg_from": "Christian Compagnon <[email protected]>", "msg_from_op": true, "msg_subject": "Query take 101 minutes, help, please" }, { "msg_contents": "On Wed, Sep 07, 2005 at 12:22:27PM -0400, Christian Compagnon wrote:\n> I'm a newbie in postgresql, I've installed it on a Windows XP machine\n> ( I can't use linux, it's a company machine ), I'm courious why this\n> query takes so long\n\nIt sounds like you've set work_mem too low; increasing it might help. Also\ntry rewriting your query to\n\n SELECT \"Rut Cliente\"\n FROM \"Internet_Abril\"\n WHERE\n \"Rut Cliente\" NOT IN ( SELECT \"Rut Cliente\" FROM \"Internet_Enero\" )\n AND \"Rut Cliente\" NOT IN ( SELECT \"Rut Cliente\" FROM \"Internet_Febrero\" )\n AND \"Rut Cliente\" NOT IN ( SELECT \"Rut Cliente\" FROM \"Internet_Marzo\" )\n\n(I'm not sure how optimized UNION inside an IN/NOT IN is.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 7 Sep 2005 18:57:54 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query take 101 minutes, help, please" }, { "msg_contents": "PG is creating the union of January, February and March tables first and \nthat doesn't have an index on it. 
If you're going to do many queries using \nthe union of those three tables, you might want to place their contents into \none table and create an index on it.\n\nOtherwise, try something like this:\n\nSELECT \"Rut Cliente\"\nFROM \"Internet_Abril\"\nWHERE \"Rut Cliente\" NOT IN (SELECT \"Rut Cliente\" FROM\n\"Internet_Enero\")\nAND \"Rut Cliente\" NOT IN (SELECT \"Rut Cliente\" FROM\n\"Internet_Febrero\")\nAND \"Rut Cliente\" NOT IN (SELECT \"Rut Cliente\" FROM\n\"Internet_Marzo\");\n\nYou could also compare the performance of that to this and choose the one \nthat works the best:\n\nSELECT \"Rut Cliente\"\nFROM \"Internet_Abril\" a\nLEFT JOIN \"Internet_Enero\" e ON a.\"Rut Cliente\" = e.\"Rut Cliente\"\nLEFT JOIN \"Internet_Febrero\" f ON a.\"Rut Cliente\" = f.\"Rut Cliente\"\nLEFT JOIN \"Internet_Marzo\" m ON a.\"Rut Cliente\" = m.\"Rut Cliente\"\nWHERE e.\"Rut Cliente\" IS NULL AND f.\"Rut Cliente\" IS NULL and m.\"Rut \nCliente\" IS NULL;\n\nMeetesh\n\nOn 9/7/05, Christian Compagnon <[email protected]> wrote:\n> \n> Hello,\n> \n> I'm a newbie in postgresql, I've installed it on a Windows XP machine\n> ( I can't use linux, it's a company machine ), I'm courious why this\n> query takes so long\n> \n> SELECT \"Rut Cliente\"\n> FROM \"Internet_Abril\"\n> WHERE \"Rut Cliente\" NOT IN ((SELECT \"Rut Cliente\" FROM\n> \"Internet_Enero\") UNION (SELECT \"Rut Cliente\" FROM\n> \"Internet_Febrero\") UNION (SELECT \"Rut Cliente\" FROM\n> \"Internet_Marzo\"));\n> \n> it takes about 100 minutes to complete the query.\n> All tables has index created ( Rut Cliente is a VarChar ), and tables\n> has 50.000 records each.\n> \n> The explain for the query tells the following\n> \n> \"QUERY PLAN\n> Seq Scan on \"Internet_Abril\" (cost=19406.67..62126112.70 rows=24731 \n> width=13)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=19406.67..21576.07 rows=136740 width=13)\n> -> Unique (cost=17784.23..18467.93 rows=136740 width=13)\n> -> Sort (cost=17784.23..18126.08 rows=136740 width=13)\n> Sort\n> Key: \"Rut Cliente\"\n> -> Append (cost=0.00..3741.80 rows=136740 width=13)\n> -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1233.38\n> rows=45069 width=13)\n> -> Seq Scan on \"Internet_Enero\" (cost=0.00..782.69\n> rows=45069 width=13)\n> -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1104.06\n> rows=40353 width=13)\n> -> Seq Scan on \"Internet_Febrero\" (cost=0.00..700.53\n> rows=40353 width=13)\n> -> Subquery Scan \"*SELECT* 3\" (cost=0.00..1404.36\n> rows=51318 width=13)\n> -> Seq Scan on \"Internet_Marzo\" (cost=0.00..891.18\n> rows=51318 width=13)\n> \n> Any help will be apreciated, It's for my thesis\n> \n> \n> saludos\n> Christian\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nPG is creating the union of January, February and March tables first\nand that doesn't have an index on it.  
", "msg_date": "Wed, 7 Sep 2005 19:09:39 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query take 101 minutes, help, please" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> (I'm not sure how optimized UNION inside an IN/NOT IN is.)\n\nNOT IN is pretty nonoptimal, period. It'd help a lot to boost work_mem\nto the point where the planner figures it can use a hashtable (look for\nEXPLAIN to say \"hashed subplan\" rather than just \"subplan\"). 
Of course,\nif there's enough stuff in the UNION that that drives you into swapping,\nit's gonna be painful anyway.\n\nUsing UNION ALL instead of UNION might save a few cycles too.\n\nIf you're willing to rewrite the query wholesale, you could try the old\ntrick of a LEFT JOIN where you discard rows for which there's a match,\nie, the righthand join value isn't NULL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Sep 2005 13:57:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query take 101 minutes, help, please " }, { "msg_contents": "On Wed, 7 Sep 2005, Meetesh Karia wrote:\n\n> PG is creating the union of January, February and March tables first and\n> that doesn't have an index on it. If you're going to do many queries using\n> the union of those three tables, you might want to place their contents into\n> one table and create an index on it.\n>\n> Otherwise, try something like this:\n>\n> SELECT \"Rut Cliente\"\n> FROM \"Internet_Abril\"\n> WHERE \"Rut Cliente\" NOT IN (SELECT \"Rut Cliente\" FROM\n> \"Internet_Enero\")\n> AND \"Rut Cliente\" NOT IN (SELECT \"Rut Cliente\" FROM\n> \"Internet_Febrero\")\n> AND \"Rut Cliente\" NOT IN (SELECT \"Rut Cliente\" FROM\n> \"Internet_Marzo\");\n\nYou may also wish to try:\n\nSELECT \"Rut Cliente\"\nFROM \"Internet_Abril\"\nWHERE NOT EXISTS\n (SELECT 1 FROM \"Internet_Enero\"\n WHERE \"Internet_Enero\".\"Rut Cliente\"=\"Internet_Abril\".\"Rut Cliente\")\nAND NOT EXISTS\n (SELECT 1 FROM \"Internet_Febrero\"\n WHERE \"Internet_Febrero\".\"Rut Cliente\"=\"Internet_Abril\".\"Rut Cliente\")\nAND NOT EXISTS\n (SELECT 1 FROM \"Internet_Marzo\"\n WHERE \"Internet_Marzo\".\"Rut Cliente\"=\"Internet_Abril\".\"Rut Cliente\")\n\nwhich will probably scan the indexes on the January, February and March\nindexes once for each row in the April table.\n\n", "msg_date": "Wed, 7 Sep 2005 19:40:57 +0100 (BST)", "msg_from": "Alex Hayward <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query take 101 minutes, help, please" } ]
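A quick way to test the work_mem suggestion before rewriting anything; the 64MB figure is only an example, and 8.0 takes the setting in kB.

SET work_mem = 65536;   -- 64MB, for this session only
EXPLAIN
SELECT "Rut Cliente"
FROM "Internet_Abril"
WHERE "Rut Cliente" NOT IN (
      (SELECT "Rut Cliente" FROM "Internet_Enero")
      UNION ALL
      (SELECT "Rut Cliente" FROM "Internet_Febrero")
      UNION ALL
      (SELECT "Rut Cliente" FROM "Internet_Marzo"));
-- look for "hashed subplan" rather than plain "subplan" in the output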
[ { "msg_contents": "please help me ,\ncomment on postgresql (8.x.x) performance on cpu AMD, INTEL\nand why i should use 32 bit or 64 cpu ? (what the performance difference)\n\nthank you\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE! \nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n", "msg_date": "Fri, 09 Sep 2005 05:54:15 +0000", "msg_from": "\"wisan watcharinporn\" <[email protected]>", "msg_from_op": true, "msg_subject": "please comment on cpu 32 bit or 64 bit" }, { "msg_contents": "\"wisan watcharinporn\" <[email protected]> writes:\n> comment on postgresql (8.x.x) performance on cpu AMD, INTEL\n> and why i should use 32 bit or 64 cpu ? (what the performance difference)\n\nFor most database applications, you're better off spending your money\non faster disk drives and/or more RAM than on a sexier CPU.\n\nMaybe your application doesn't follow that general rule --- but since\nyou told us exactly zero about what your application is, this advice\nis worth what you paid for it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Sep 2005 02:58:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: please comment on cpu 32 bit or 64 bit " }, { "msg_contents": "[email protected] (\"wisan watcharinporn\") writes:\n> please help me ,\n> comment on postgresql (8.x.x) performance on cpu AMD, INTEL\n> and why i should use 32 bit or 64 cpu ? (what the performance difference)\n\nGenerally speaking, the width of your I/O bus will be more important\nto performance than the width of the processor bus.\n\nThat is, having more and better disk will have more impact on\nperformance than getting a better CPU.\n\nThat being said, if you plan to have a system with significantly more\nthan 2GB of memory, there seem to be pretty substantial benefits to\nthe speed of AMD memory bus access, and that can be quite significant,\ngiven that if you have a lot of memory, and thus are often operating\nout of cache, and are slinging around big queries, THAT implies a lot\nof shoving data around in memory. AMD/Opteron has a way faster memory\nbus than the Intel/Xeon systems.\n\nBut this is only likely to be significant if you're doing processing\nintense enough that you commonly have >> 4GB of memory in use.\n\nIf not, then you'd better focus on I/O speed, which is typically\npretty independent of the CPU...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"ntlug.org\")\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\n\"Anyway I know how to not be bothered by consing on the fly.\"\n-- Dave Moon\n", "msg_date": "Sat, 10 Sep 2005 01:21:09 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: please comment on cpu 32 bit or 64 bit" } ]
[ { "msg_contents": "I need to generate unused random id with format is ID[0-9]{4}\nso i write below query but it seems to be too slow\n\nSELECT * FROM ( \n SELECT user_id FROM (\n SELECT 'ID' || LPAD(r, 4, '0') AS user_id \n FROM generate_series(1, 9999) as r) AS s \n EXCEPT\n SELECT user_id FROM account ) AS t \nORDER BY random() \nLIMIT 1\n\nand I execute explain analyze query.\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=318.17..318.17 rows=1 width=32) (actual\ntime=731.703..731.707 rows=1 loops=1)\n -> Sort (cost=318.17..318.95 rows=312 width=32) (actual\ntime=731.693..731.693 rows=1 loops=1)\n Sort Key: random()\n -> Subquery Scan t (cost=285.79..305.24 rows=312 width=32)\n(actual time=424.299..659.193 rows=9999 loops=1)\n -> SetOp Except (cost=285.79..301.35 rows=311\nwidth=16) (actual time=424.266..566.254 rows=9999 loops=1)\n -> Sort (cost=285.79..293.57 rows=3112\nwidth=16) (actual time=424.139..470.529 rows=12111 loops=1)\n Sort Key: user_id\n -> Append (cost=0.00..105.24 rows=3112\nwidth=16) (actual time=5.572..276.485 rows=12111 loops=1)\n -> Subquery Scan \"*SELECT* 1\" \n(cost=0.00..30.00 rows=1000 width=4) (actual time=5.565..149.615\nrows=9999 loops=1)\n -> Function Scan on\ngenerate_series r (cost=0.00..20.00 rows=1000 width=4) (actual\ntime=5.553..63.224 rows=9999 loops=1)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..75.24 rows=2112 width=16) (actual time=0.030..28.473\nrows=2112 loops=1)\n -> Seq Scan on account \n(cost=0.00..54.12 rows=2112 width=16) (actual time=0.019..10.155\nrows=2112 loops=1)\nTotal runtime: 738.809 ms\n\n\ndo you have any idea for optimize?\n-- \nChoe, Cheng-Dae(최정대)\nBlog: http://www.comdongin.com/\n", "msg_date": "Fri, 9 Sep 2005 15:32:21 +0900", "msg_from": "\"Choe, Cheng-Dae\" <[email protected]>", "msg_from_op": true, "msg_subject": "Too slow query, do you have an idea to optimize?" 
}, { "msg_contents": "Generate them all into a table and just delete them as you use them.\nIt's only 10000 rows...\n\nChris\n\nChoe, Cheng-Dae wrote:\n> I need to generate unused random id with format is ID[0-9]{4}\n> so i write below query but it seems to be too slow\n> \n> SELECT * FROM ( \n> SELECT user_id FROM (\n> SELECT 'ID' || LPAD(r, 4, '0') AS user_id \n> FROM generate_series(1, 9999) as r) AS s \n> EXCEPT\n> SELECT user_id FROM account ) AS t \n> ORDER BY random() \n> LIMIT 1\n> \n> and I execute explain analyze query.\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=318.17..318.17 rows=1 width=32) (actual\n> time=731.703..731.707 rows=1 loops=1)\n> -> Sort (cost=318.17..318.95 rows=312 width=32) (actual\n> time=731.693..731.693 rows=1 loops=1)\n> Sort Key: random()\n> -> Subquery Scan t (cost=285.79..305.24 rows=312 width=32)\n> (actual time=424.299..659.193 rows=9999 loops=1)\n> -> SetOp Except (cost=285.79..301.35 rows=311\n> width=16) (actual time=424.266..566.254 rows=9999 loops=1)\n> -> Sort (cost=285.79..293.57 rows=3112\n> width=16) (actual time=424.139..470.529 rows=12111 loops=1)\n> Sort Key: user_id\n> -> Append (cost=0.00..105.24 rows=3112\n> width=16) (actual time=5.572..276.485 rows=12111 loops=1)\n> -> Subquery Scan \"*SELECT* 1\" \n> (cost=0.00..30.00 rows=1000 width=4) (actual time=5.565..149.615\n> rows=9999 loops=1)\n> -> Function Scan on\n> generate_series r (cost=0.00..20.00 rows=1000 width=4) (actual\n> time=5.553..63.224 rows=9999 loops=1)\n> -> Subquery Scan \"*SELECT* 2\" \n> (cost=0.00..75.24 rows=2112 width=16) (actual time=0.030..28.473\n> rows=2112 loops=1)\n> -> Seq Scan on account \n> (cost=0.00..54.12 rows=2112 width=16) (actual time=0.019..10.155\n> rows=2112 loops=1)\n> Total runtime: 738.809 ms\n> \n> \n> do you have any idea for optimize?\n\n", "msg_date": "Fri, 09 Sep 2005 15:00:23 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too slow query, do you have an idea to optimize?" } ]
[ { "msg_contents": "\nWhich is faster, where the list involved is fixed? My thought is that \nsince it doesn't have to check a seperate table, the CHECK itself should \nbe the faster of the two, but I can't find anything that seems to validate \nthat theory ...\n\nThe case is where I just want to check that a value being inserted is one \nof a few possible values, with that list of values rarely (if ever) \nchanging, so havng a 'flexible list' REFERENCED seems relatively overkill \n...\n\nThoughts, or pointers to a doc that disproves, or proves, what I believe?\n\nThanks ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n", "msg_date": "Sat, 10 Sep 2005 00:23:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "CHECK vs REFERENCES" }, { "msg_contents": "On Sat, Sep 10, 2005 at 12:23:19AM -0300, Marc G. Fournier wrote:\n> Which is faster, where the list involved is fixed? My thought is that \n> since it doesn't have to check a seperate table, the CHECK itself should \n> be the faster of the two, but I can't find anything that seems to validate \n> that theory ...\n\nWhy not just benchmark each method as you intend to use them? Here's\na simplistic example:\n\nCREATE TABLE test_none (\n val integer NOT NULL\n);\n\nCREATE TABLE test_check (\n val integer NOT NULL CHECK (val IN (1, 2, 3, 4, 5))\n);\n\nCREATE TABLE test_vals (\n id integer PRIMARY KEY\n);\nINSERT INTO test_vals SELECT * FROM generate_series(1, 5);\n\nCREATE TABLE test_fk (\n val integer NOT NULL REFERENCES test_vals\n);\n\n\\timing\n\nINSERT INTO test_none SELECT 1 FROM generate_series(1, 100000);\nINSERT 0 100000\nTime: 3109.089 ms\n\nINSERT INTO test_check SELECT 1 FROM generate_series(1, 100000);\nINSERT 0 100000\nTime: 3492.344 ms\n\nINSERT INTO test_fk SELECT 1 FROM generate_series(1, 100000);\nINSERT 0 100000\nTime: 23578.853 ms\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 9 Sep 2005 21:49:07 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHECK vs REFERENCES" }, { "msg_contents": "On Fri, 9 Sep 2005, Michael Fuhr wrote:\n\n> On Sat, Sep 10, 2005 at 12:23:19AM -0300, Marc G. Fournier wrote:\n>> Which is faster, where the list involved is fixed? My thought is that\n>> since it doesn't have to check a seperate table, the CHECK itself should\n>> be the faster of the two, but I can't find anything that seems to validate\n>> that theory ...\n>\n> Why not just benchmark each method as you intend to use them? Here's\n> a simplistic example:\n>\n> CREATE TABLE test_none (\n> val integer NOT NULL\n> );\n>\n> CREATE TABLE test_check (\n> val integer NOT NULL CHECK (val IN (1, 2, 3, 4, 5))\n> );\n>\n> CREATE TABLE test_vals (\n> id integer PRIMARY KEY\n> );\n> INSERT INTO test_vals SELECT * FROM generate_series(1, 5);\n>\n> CREATE TABLE test_fk (\n> val integer NOT NULL REFERENCES test_vals\n> );\n>\n> \\timing\n>\n> INSERT INTO test_none SELECT 1 FROM generate_series(1, 100000);\n> INSERT 0 100000\n> Time: 3109.089 ms\n>\n> INSERT INTO test_check SELECT 1 FROM generate_series(1, 100000);\n> INSERT 0 100000\n> Time: 3492.344 ms\n>\n> INSERT INTO test_fk SELECT 1 FROM generate_series(1, 100000);\n> INSERT 0 100000\n> Time: 23578.853 ms\n\nYowch, I expected CHECK to be better ... but not so significantly ... I \nfigured I'd be saving milliseconds, which, on a busy server, would add up \nfast ... 
but not 10k' of milliseconds ...\n\nThanks, that definitely shows a major benefit ...\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n", "msg_date": "Sat, 10 Sep 2005 01:03:03 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CHECK vs REFERENCES" }, { "msg_contents": "On Sat, Sep 10, 2005 at 01:03:03AM -0300, Marc G. Fournier wrote:\n> On Fri, 9 Sep 2005, Michael Fuhr wrote:\n> >INSERT INTO test_check SELECT 1 FROM generate_series(1, 100000);\n> >INSERT 0 100000\n> >Time: 3492.344 ms\n> >\n> >INSERT INTO test_fk SELECT 1 FROM generate_series(1, 100000);\n> >INSERT 0 100000\n> >Time: 23578.853 ms\n> \n> Yowch, I expected CHECK to be better ... but not so significantly ... I \n> figured I'd be saving milliseconds, which, on a busy server, would add up \n> fast ... but not 10k' of milliseconds ...\n\nResults will differ depending on the table structure: if you're\nindexing ten columns and have five triggers then the foreign key\ncheck will have less of an overall impact.\n\n-- \nMichael Fuhr\n", "msg_date": "Sat, 10 Sep 2005 07:06:27 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHECK vs REFERENCES" }, { "msg_contents": "\nOn Sep 9, 2005, at 11:23 PM, Marc G. Fournier wrote:\n\n> The case is where I just want to check that a value being inserted \n> is one of a few possible values, with that list of values rarely \n> (if ever) changing, so havng a 'flexible list' REFERENCED seems \n> relatively overkill ...\n>\n\nThat's what I thought until the first time that list needed to be \naltered. At this point, it becomes a royal pain.\n\npoint to take: do it right the first time, or you have to do it over, \nand over, and over...\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n", "msg_date": "Wed, 21 Sep 2005 11:35:02 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHECK vs REFERENCES" } ]
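For the maintenance point above, a sketch of what changing the "fixed" list later looks like in each case. Table and value names follow Michael's benchmark; the added value 6 and the constraint name are assumptions.

-- CHECK variant: name the constraint up front so it can be replaced later.
CREATE TABLE test_check (
    val integer NOT NULL
        CONSTRAINT test_check_val CHECK (val IN (1, 2, 3, 4, 5))
);

-- Widening the list means dropping and re-adding the constraint, and the
-- ADD re-validates every existing row in the table.
ALTER TABLE test_check DROP CONSTRAINT test_check_val;
ALTER TABLE test_check
    ADD CONSTRAINT test_check_val CHECK (val IN (1, 2, 3, 4, 5, 6));

-- REFERENCES variant: the same change is a single insert into the lookup table.
INSERT INTO test_vals VALUES (6);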
[ { "msg_contents": "All,\n\nIn the psql output below, I would expect the second query to run faster, because \nthe b-tree index on two columns knows the values of 'b' for any given value of \n'a', and hence does not need to fetch a row from the actual table. I am not \nseeing a speed-up, however, so I expect my understanding of the index mechanism \nis wrong. Could anyone enlighten me?\n\nSpecifically, I would expect the first query to walk the b-tree looking for \nvalues of 'a' equal to 1, 100001, etc., and then a dereference over to the main \ntable to fetch the value for column 'b'.\n\nBut I would expect the second query to walk the b-tree looking for values of 'a' \nequal to 1, 100001, etc., and then find on that same page in the b-tree the \nvalue of 'b', thereby avoiding the dereference and extra page fetch.\n\nIs the problem that the two-column b-tree contains more data, is spread across \nmore disk pages, and is hence slower to access, canceling out the performance \ngain of not having to fetch from the main table? Or is the query system not \nusing the second column information from the index and doing the table fetch \nanyway? Or does the index store the entire row from the main table regardless \nof the column being indexed?\n\nI am running postgresql 8.0.3 on a Pentium 4 with ide hard drives and the \ndefault configuration file settings.\n\nThanks in advance,\n\nmark\n\n\n\nmark=# create sequence test_id_seq;\nCREATE SEQUENCE\nmark=# create table test (a integer not null default nextval('test_id_seq'), b \ninteger not null);\nCREATE TABLE\nmark=# create function testfunc () returns void as $$\nmark$# declare\nmark$# i integer;\nmark$# begin\nmark$# for i in 1..1000000 loop\nmark$# insert into test (b) values (i);\nmark$# end loop;\nmark$# return;\nmark$# end;\nmark$# $$ language plpgsql;\nCREATE FUNCTION\nmark=# select * from testfunc();\n testfunc\n----------\n\n(1 row)\n\nmark=# select count(*) from test;\n count\n---------\n 1000000\n(1 row)\n\nmark=# create index test_single_idx on test(a);\nCREATE INDEX\nmark=# vacuum full;\nVACUUM\nmark=# analyze;\nANALYZE\nmark=# explain analyze select b from test where a in (1, 100001, 200001, 300001, \n400001, 500001, 600001, 700001, 800001, 900001);\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_single_idx, test_single_idx, test_single_idx, \ntest_single_idx, test_single_idx, test_single_idx, test_single_idx, \ntest_single_idx, test_single_idx, test_single_idx on test (cost=0.00..30.36 \nrows=10 width=4) (actual time=0.145..0.917 rows=10 loops=1)\n Index Cond: ((a = 1) OR (a = 100001) OR (a = 200001) OR (a = 300001) OR (a = \n400001) OR (a = 500001) OR (a = 600001) OR (a = 700001) OR (a = 800001) OR(a = \n900001))\n Total runtime: 1.074 ms\n(3 rows)\n\nmark=# drop index test_single_idx;\nDROP INDEX\nmark=# create index test_double_idx on test(a,b);\nCREATE INDEX\nmark=# vacuum full;\nVACUUM\nmark=# analyze;\nANALYZE\nmark=# explain analyze select b from test where a in (1, 100001, 200001, 300001, \n400001, 500001, 600001, 700001, 800001, 900001);\n \n QUERY 
PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_double_idx, test_double_idx, test_double_idx, \ntest_double_idx, test_double_idx, test_double_idx, test_double_idx, \ntest_double_idx, test_double_idx, test_double_idx on test (cost=0.00..43.48 \nrows=10 width=4) (actual time=0.283..1.119 rows=10 loops=1)\n Index Cond: ((a = 1) OR (a = 100001) OR (a = 200001) OR (a = 300001) OR (a = \n400001) OR (a = 500001) OR (a = 600001) OR (a = 700001) OR (a = 800001) OR(a = \n900001))\n Total runtime: 1.259 ms\n(3 rows)\n", "msg_date": "Sun, 11 Sep 2005 00:43:18 -0700", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "performance discrepancy indexing one column versus two columns" }, { "msg_contents": "On Sun, 11 Sep 2005, Mark Dilger wrote:\n\n> All,\n>\n> In the psql output below, I would expect the second query to run faster,\n> because the b-tree index on two columns knows the values of 'b' for any\n> given value of 'a', and hence does not need to fetch a row from the\n> actual table. I am not seeing a speed-up, however, so I expect my\n> understanding of the index mechanism is wrong. Could anyone enlighten\n> me?\n\nA common but incorrect assumption. We must consult the underlying table\nwhen we do an index scan so that we can check visibility information. The\nreason it is stored there in the table is so that we have only one place\nto check for tuple visibility and therefore avoid race conditions.\n\nA brief explanation of this system is described here:\nhttp://www.postgresql.org/docs/8.0/static/mvcc.html.\n\nand this page shows what information we store in the to do visibility\nchecks:\n\nhttp://www.postgresql.org/docs/8.0/static/storage-page-layout.html\n\nThanks,\n\nGavin\n", "msg_date": "Sun, 11 Sep 2005 18:05:03 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance discrepancy indexing one column versus" } ]
[ { "msg_contents": "Hi.\n\nI have a performance problem with prepared statements (JDBC prepared \nstatement).\n\nThis query:\n\nPreparedStatement st = conn.prepareStatement(\"SELECT id FROM \ndga_dienstleister WHERE plz like '45257'\");\n\ndoes use an index.\n\nThis query:\n\n String plz = \"45257\";\n PreparedStatement st = conn.prepareStatement(\"SELECT id FROM \ndga_dienstleister WHERE plz like ?\");\n st.setString(1, plz);\n\ndoes NOT use an index.\n\nAs it should in the end result in absolutely the same statement, the \nindex should be used all the time. I have to set the \nprotocolVersion=2 and use the JDBC2 driver to get it working (but \nthen the statements are created like in the first query, so no \nsolution, only a workaround).\n\nI'm not sure whether this is a bug (I think it is) or a problem of \nunderstanding.\n\nKnown problem?\n\nI have tried PG 8.0.1, 8.0.3, 8.1beta with the JDBC-drivers\n\n- postgresql-8.0-312.jdbc2.jar --> okay with protocolVersion=2 in the \nURL\n- postgresql-8.0-312.jdbc3.jar --> not okay whatever I do\n\nI'm on Mac OS X, if that matters.\n\ncug\n", "msg_date": "Sun, 11 Sep 2005 10:29:09 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": true, "msg_subject": "Index not used with prepared statement" }, { "msg_contents": "Guido Neitzer schrob:\n\n> I have a performance problem with prepared statements (JDBC prepared \n> statement).\n>\n> This query:\n>\n> PreparedStatement st = conn.prepareStatement(\"SELECT id FROM \n> dga_dienstleister WHERE plz like '45257'\");\n>\n> does use an index.\n>\n> This query:\n>\n> String plz = \"45257\";\n> PreparedStatement st = conn.prepareStatement(\"SELECT id FROM \n> dga_dienstleister WHERE plz like ?\");\n> st.setString(1, plz);\n>\n> does NOT use an index.\n>\n> As it should in the end result in absolutely the same statement, the \n> index should be used all the time.\n\nI'm not perfectly sure, but since the index could only be used with a\nsubset of all possible parameters (the pattern for like has to be\nleft-anchored), I could imagine the planner has to avoid the index in\norder to produce an universal plan (the thing behind a prepared\nstatement).\n\nIs there a reason you are using the like operator at all? IMO using\nthe =-operator instead in your example should produce an \"index-using\nprepared statement\".\n\nHTH\nAndreas\n-- \n", "msg_date": "Sun, 11 Sep 2005 11:03:23 +0200", "msg_from": "Andreas Seltenreich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used with prepared statement" }, { "msg_contents": "On 11.09.2005, at 11:03 Uhr, Andreas Seltenreich wrote:\n\n> I'm not perfectly sure, but since the index could only be used with a\n> subset of all possible parameters (the pattern for like has to be\n> left-anchored), I could imagine the planner has to avoid the index in\n> order to produce an universal plan (the thing behind a prepared\n> statement).\n\nHmm. Now I get it. So I have to look that my framework doesn't \nproduce a preparedStatement, instead build a complete statement \nstring. Weird.\n\n> Is there a reason you are using the like operator at all? IMO using\n> the =-operator instead in your example should produce an \"index-using\n> prepared statement\".\n\nYes, you are right, but then I can't pass anything like '45%' to the \nquery. 
It will just return nothing.\n\nI use the \"like\" because I build the queries on the fly and add a % \nat the end where necessary.\n\nAnd, to be clear: this is a minimal example, most of my queries are \ngenerated by a framework. This was an example to test the behaviour.\n\nOkay, I had problems with the understanding of prepared statements on \nthe client and the server side. What I thought was, that I get a \npreparedStatement by JDBC which also inserts the values into the \nstring and this is executed on the server side.\n\ncug\n", "msg_date": "Sun, 11 Sep 2005 12:35:09 +0200", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index not used with prepared statement" } ]
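One possible workaround for the prepared-statement case, assuming plz holds digit-only postal codes so a prefix search can be written as a plain range; the bound values shown are illustrative.

-- With LIKE ? the server must plan without knowing the pattern is
-- left-anchored, so it cannot use the btree index.  A range on the known
-- prefix stays indexable even in a generic prepared plan:
SELECT id
FROM dga_dienstleister
WHERE plz >= ?      -- e.g. '45'
  AND plz <  ?;     -- e.g. '46' (the prefix with its last character incremented)

-- For exact matches, plain equality is enough and is also indexable:
SELECT id FROM dga_dienstleister WHERE plz = ?;

For general text columns the prefix-to-range rewrite depends on the collation; there, an index built with varchar_pattern_ops plus a literal pattern (the non-prepared path) is the safer route.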
[ { "msg_contents": "Hi list,\n\nI don't have much experience with Postgres optimization, somehow I was\nhappily avoiding anything more difficult than simple select statement,\nand it was working all right.\n\nNow LEFT JOIN must be used, and I am not happy with the performance:\nIt takes about 5 seconds to run very simple LEFT JOIN query on a table\n\"user_\" with ~ 13.000 records left joined to table \"church\" with ~ 300\nrecords on Powerbook PPC 1.67 GHz with 1.5 GB ram.\nIs it normal?\n\nSome details:\n\ntest=# explain select * from user_ left join church on user_.church_id\n= church.id;\n QUERY PLAN \n---------------------------------------------------------------------\n Hash Left Join (cost=6.44..7626.69 rows=12763 width=325)\n Hash Cond: (\"outer\".church_id = \"inner\".id)\n -> Seq Scan on user_ (cost=0.00..7430.63 rows=12763 width=245)\n -> Hash (cost=5.75..5.75 rows=275 width=80)\n -> Seq Scan on church (cost=0.00..5.75 rows=275 width=80)\n(5 rows)\n\n\n From what I understand, it doesn't use foreign key index on user_\ntable. So I tried:\n\nmydb=# set enable_seqscan='false';\nSET\nmydb=# explain select * from user_ left join church on user_.church_id\n= church.id;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Merge Right Join (cost=0.00..44675.77 rows=12763 width=325)\n Merge Cond: (\"outer\".id = \"inner\".church_id)\n -> Index Scan using chirch_pkey on church (cost=0.00..17.02\nrows=275 width=80)\n -> Index Scan using user__church_id on user_ (cost=0.00..44500.34\nrows=12763 width=245)\n(4 rows)\n\n\nIt's my first time reading Query plans, but from wat I understand, it\ndoesn't make the query faster..\n\nAny tips are greatly appreciated.\n-- \nKsenia\n", "msg_date": "Sun, 11 Sep 2005 20:12:58 +0300", "msg_from": "Ksenia Marasanova <[email protected]>", "msg_from_op": true, "msg_subject": "LEFT JOIN optimization" }, { "msg_contents": "* Ksenia Marasanova ([email protected]) wrote:\n> Any tips are greatly appreciated.\n\nEXPLAIN ANALYZE of the same queries would be much more useful.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Sun, 11 Sep 2005 17:00:36 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN optimization" }, { "msg_contents": "2005/9/12, Stephen Frost <[email protected]>:\n> * Ksenia Marasanova ([email protected]) wrote:\n> > Any tips are greatly appreciated.\n> \n> EXPLAIN ANALYZE of the same queries would be much more useful.\n\nThanks, here it is:\n\ntest=# explain analyze select * from user_ left join church on\nuser_.church_id = church.id;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=6.44..7626.69 rows=12763 width=325) (actual\ntime=388.573..2016.929 rows=12763 loops=1)\n Hash Cond: (\"outer\".church_id = \"inner\".id)\n -> Seq Scan on user_ (cost=0.00..7430.63 rows=12763 width=245)\n(actual time=360.431..1120.012 rows=12763 loops=1)\n -> Hash (cost=5.75..5.75 rows=275 width=80) (actual\ntime=27.985..27.985 rows=0 loops=1)\n -> Seq Scan on church (cost=0.00..5.75 rows=275 width=80)\n(actual time=0.124..26.953 rows=275 loops=1)\n Total runtime: 2025.946 ms\n(6 rows)\n\ntest=# set enable_seqscan='false';\nSET\ntest=# explain analyze select * from user_ left join church on\nuser_.church_id = church.id;\n \nQUERY 
PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=0.00..44675.77 rows=12763 width=325) (actual\ntime=0.808..2119.099 rows=12763 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".church_id)\n -> Index Scan using chirch_pkey on church (cost=0.00..17.02\nrows=275 width=80) (actual time=0.365..5.471 rows=275 loops=1)\n -> Index Scan using user__church_id on user_ (cost=0.00..44500.34\nrows=12763 width=245) (actual time=0.324..1243.348 rows=12763 loops=1)\n Total runtime: 2131.364 ms\n(5 rows)\n\n\nI followed some tips on the web and vacuum-ed database, I think the\nquery is faster now, almost acceptable, but still interesting to know\nif it possible to optimize it...\n\nThanks again,\n-- \nKsenia\n", "msg_date": "Mon, 12 Sep 2005 00:47:57 +0300", "msg_from": "Ksenia Marasanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LEFT JOIN optimization" }, { "msg_contents": "* Ksenia Marasanova ([email protected]) wrote:\n> test=# explain analyze select * from user_ left join church on\n> user_.church_id = church.id;\n[...]\n> Total runtime: 2025.946 ms\n> (6 rows)\n> \n> test=# set enable_seqscan='false';\n> SET\n> test=# explain analyze select * from user_ left join church on\n> user_.church_id = church.id;\n> \n[...]\n> Total runtime: 2131.364 ms\n> (5 rows)\n> \n> \n> I followed some tips on the web and vacuum-ed database, I think the\n> query is faster now, almost acceptable, but still interesting to know\n> if it possible to optimize it...\n\nI have to say that it does seem a bit slow for only 12,000 rows..\nWhat's the datatype of user_.church_id and church.id? Are you sure you\nreally want all 12,000 rows every time you run that query? Perhaps\nthere's a 'where' clause you could apply with an associated index to\nlimit the query to just what you actually need?\n\nYou'll noticed from above, though, that the non-index scan is faster.\nI'd expect that when using a left-join query: you have to go through the\nentire table on an open left-join like that, a sequencial scan is going\nto be the best way to do that. The fact that church.id is hashed makes\nthe solution the planner came up with almost certainly the best one\npossible.\n\nAre you sure a left-join is what you want? Sounds like maybe you've\nmoved (for some reason) from a regular join to a left join with a\nfiltering in the application which is probably a bad move... If you can\nuse at least some filtering in the database I expect that'd help..\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Sun, 11 Sep 2005 20:34:50 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN optimization" }, { "msg_contents": "On Mon, 12 Sep 2005 00:47:57 +0300, Ksenia Marasanova\n<[email protected]> wrote:\n> -> Seq Scan on user_ (cost=0.00..7430.63 rows=12763 width=245)\n>(actual time=360.431..1120.012 rows=12763 loops=1)\n\nIf 12000 rows of the given size are stored in more than 7000 pages, then\nthere is a lot of free space in these pages. Try VACUUM FULL ...\n\nServus\n Manfred\n", "msg_date": "Mon, 12 Sep 2005 12:41:28 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LEFT JOIN optimization" } ]
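A short sketch of the two suggestions above, using the column names from the thread; last_login is a made-up column standing in for whatever the application really restricts on.

-- Manfred's point: 12763 rows behind a seq-scan cost of ~7400 suggests a lot
-- of dead space, so compact and re-analyze the table.
VACUUM FULL ANALYZE user_;

-- Stephen's point: filter in the database instead of fetching all 12763
-- joined rows, so the existing index on user_.church_id (or an index on the
-- filter column) can do the work.
SELECT u.*, c.*
FROM user_ u
LEFT JOIN church c ON c.id = u.church_id
WHERE u.last_login > current_date - 30;   -- illustrative condition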
[ { "msg_contents": "Hello everyone\n\nI must be doing something very wrong here so help please! I have two tables\n\ntableA has 300,000 recs\ntableB has 20,000 recs\n\nI need to set the value of a field in table A to a value in table B depending on the existence of the record in table B. So what I have done is\n\nUPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE a.key1=b.key1;\n\nThe primary key of tableA is key1 and that of tableB is key1 ie the join is on primary keys.\n\nThe \"optimizer\" has elected to d a sequential scan on tableA to determine which fields to update rather than the query being driveb by tableB and it is taking forever. Surely I must be able to force the system to read down tableB in preference to reading down tableA?\n\n(Please don't ask why tableA and tableB are not amalgamated - that's another story altogether!!!)\n\nMany thanks in advance\nHilary\n\n\nHilary Forbes\nThe DMR Information and Technology Group (www.dmr.co.uk)\nDirect tel 01689 889950 Fax 01689 860330 \nDMR is a UK registered trade mark of DMR Limited\n**********************************************************\n\n", "msg_date": "Mon, 12 Sep 2005 10:14:25 +0100", "msg_from": "Hilary Forbes <[email protected]>", "msg_from_op": true, "msg_subject": "Slow update" }, { "msg_contents": "Hilary Forbes wrote:\n> \n> I need to set the value of a field in table A to a value in table B depending on the existence of the record in table B. So what I have done is\n> \n> UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE a.key1=b.key1;\n\nCheck the EXPLAIN carefully, are you sure the tableA in \"UPDATE\" is the \nsame as that in your \"FROM\" clause. If so, why are you SETting a.val1?\n\nIf not, you've probably got an unconstrained join.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 12 Sep 2005 11:51:17 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update" }, { "msg_contents": "On Mon, Sep 12, 2005 at 10:14:25 +0100,\n Hilary Forbes <[email protected]> wrote:\n> Hello everyone\n> \n> I must be doing something very wrong here so help please! I have two tables\n> \n> tableA has 300,000 recs\n> tableB has 20,000 recs\n> \n> I need to set the value of a field in table A to a value in table B depending on the existence of the record in table B. So what I have done is\n> \n> UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE a.key1=b.key1;\n> \n> The primary key of tableA is key1 and that of tableB is key1 ie the join is on primary keys.\n> \n> The \"optimizer\" has elected to d a sequential scan on tableA to determine which fields to update rather than the query being driveb by tableB and it is taking forever. Surely I must be able to force the system to read down tableB in preference to reading down tableA?\n\nIt would help to see the exact query and the explain analyze output. Hopefully\nyou didn't really write the query similar to above, since it is using illegal\nsyntax and the if it was changed slightly to become legal than it would do a\ncross join of table A with the inner join of tableA and tableB, which isn't\nwhat you want.\n", "msg_date": "Mon, 12 Sep 2005 08:34:11 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update" }, { "msg_contents": "Hilary Forbes <[email protected]> writes:\n> I need to set the value of a field in table A to a value in table B depending on the existence of the record in table B. 
So what I have done is\n\n> UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE a.key1=b.key1;\n\nYou've written an unconstrained join to a second copy of tableA.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Sep 2005 10:07:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow update " } ]
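Putting the replies together, the statement was presumably meant along these lines: the target table appears only once, and only tableB goes in the FROM list.

UPDATE tableA
SET val1 = b.somefield
FROM tableB b
WHERE tableA.key1 = b.key1;

This removes the unconstrained join to a second copy of tableA that Tom points out, and with key1 being the primary key on both sides the planner can drive the update from whichever table is cheaper.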
[ { "msg_contents": "I'm in the process of developing an application which uses PostgreSQL for \ndata storage. Our database traffic is very atypical, and as a result it has \nbeen rather challenging to figure out how to best tune PostgreSQL on what \ndevelopment hardware we have, as well as to figure out exactly what we \nshould be evaluating and eventually buying for production hardware.\n\nThe vast, overwhelming majority of our database traffic is pretty much a \nnon-stop stream of INSERTs filling up tables. It is akin to data \nacquisition. Several thousand clients are sending once-per-minute updates \nfull of timestamped numerical data at our central server, which in turn \nperforms INSERTs into several distinct tables as part of the transaction for \nthat client. We're talking on the order of ~100 transactions per second, \neach containing INSERTs to multiple tables (which contain only integer and \nfloating point columns and a timestamp column - the primary key (and only \nindex) is on a unique integer ID for the client and the timestamp). The \ntransaction load is spread evenly over time by having the clients send their \nper-minute updates at random times rather than on the exact minute mark.\n\nThere will of course be users using a web-based GUI to extract data from \nthese tables and display them in graphs and whatnot, but the SELECT query \ntraffic will always be considerably less frequent and intensive than the \nincessant INSERTs, and it's not that big a deal if the large queries take a \nlittle while to run.\n\nThis data also expires - rows with timestamps older than X days will be \nDELETEd periodically (once an hour or faster), such that the tables will \nreach a relatively stable size (pg_autovacuum is handling vacuuming for now, \nbut considering our case, we're thinking of killing pg_autovacuum in favor \nof having the periodic DELETE process also do a vacuum of affected tables \nright after the DELETE, and then have it vacuum the other low traffic tables \nonce a day while it's at it).\n\nThere is an aggregation layer in place which proxies the inbound data from \nthe clients into a small(er) number of persistent postgresql backend \nprocesses. Right now we're doing one aggregator per 128 clients (so instead \nof 128 seperate database connections over the course of a minute for a small \ntransaction each, there is a single database backend that is constantly \ncommitting transactions at a rate of ~ 2/second). At a test load of ~1,000 \nclients, we would have 8 aggregators running and 8 postgresql backends. \nTesting has seemed to indicate we should aggregate even harder - the planned \nproduction load is ~5,000 clients initially, but will grow to almost double \nthat in the not-too-distant future, and that would mean ~40 backends at 128 \nclients each initially. Even on 8 cpus, I'm betting 40 concurrent backends \ndoing 2 tps is much worse off than 10 backends doing 8 tps.\n\nTest hardware right now is a dual Opteron with 4G of ram, which we've barely \ngotten 1,000 clients running against. Current disk hardware in testing is \nwhatever we could scrape together (4x 3-ware PCI hardware RAID controllers, \nwith 8 SATA drives in a RAID10 array off of each - aggregated up in a 4-way \nstripe with linux md driver and then formatted as ext3 with an appropriate \nstride parameter and data=writeback). 
Production will hopefully be a 4-8-way \nOpteron, 16 or more G of RAM, and a fiberchannel hardware raid array or two \n(~ 1TB available RAID10 storage) with 15krpm disks and battery-backed write \ncache.\n\nI know I haven't provided a whole lot of application-level detail here, but \ndoes anyone have any general advice on tweaking postgresql to deal with a \nvery heavy load of concurrent and almost exclusively write-only \ntransactions? Increasing shared_buffers seems to always help, even out to \nhalf of the dev box's ram (2G). A 100ms commit_delay seemed to help, but \ntuning it (and _siblings) has been difficult. We're using 8.0 with the \ndefault 8k blocksize, but are strongly considering both developing against \n8.1 (seems it might handle the heavy concurrency better), and re-compiling \nwith 32k blocksize since our storage arrays will inevitably be using fairly \nwide stripes. Any advice on any of this (other than drop the project while \nyou're still a little bit sane)?\n\n--Brandon\n\n\nI'm in the process of developing an application which uses PostgreSQL\nfor data storage.  Our database traffic is very atypical, and as a\nresult it has been rather challenging to figure out how to best tune\nPostgreSQL on what development hardware we have, as well as to figure\nout exactly what we should be evaluating and eventually buying for\nproduction hardware.\n\nThe vast, overwhelming majority of our database traffic is pretty much\na non-stop stream of INSERTs filling up tables.  It is akin to\ndata acquisition.  Several thousand clients are sending\nonce-per-minute updates full of timestamped numerical data at our\ncentral server, which in turn performs INSERTs into several distinct\ntables as part of the transaction for that client.  We're talking\non the order of ~100 transactions per second, each containing INSERTs\nto multiple tables (which contain only integer and floating point\ncolumns and a timestamp column - the primary key (and only index) is on\na unique integer ID for the client and the timestamp).  The\ntransaction load is spread evenly over time by having the clients send\ntheir per-minute updates at random times rather than on the exact\nminute mark.\n\nThere will of course be users using a web-based GUI to extract data\nfrom these tables and display them in graphs and whatnot, but the\nSELECT query traffic will always be considerably less frequent and\nintensive than the incessant INSERTs, and it's not that big a deal if\nthe large queries take a little while to run.\n\nThis data also expires - rows with timestamps older than X days will be\nDELETEd periodically (once an hour or faster), such that the tables\nwill reach a relatively stable size (pg_autovacuum is handling\nvacuuming for now, but considering our case, we're thinking of killing\npg_autovacuum in favor of having the periodic DELETE process also do a\nvacuum of affected tables right after the DELETE, and then have it\nvacuum the other low traffic tables once a day while it's at it).\n\nThere is an aggregation layer in place which proxies the inbound data\nfrom the clients into a small(er) number of persistent postgresql\nbackend processes.  Right now we're doing one aggregator per 128\nclients (so instead of 128 seperate database connections over the\ncourse of a minute for a small transaction each, there is a single\ndatabase backend that is constantly committing transactions at a rate\nof ~ 2/second).  At a test load of ~1,000 clients, we would have 8\naggregators running and 8 postgresql backends.  
Testing has seemed\nto indicate we should aggregate even harder - the planned production\nload is ~5,000 clients initially, but will grow to almost double that\nin the not-too-distant future, and that would mean ~40 backends at 128\nclients each initially.  Even on 8 cpus, I'm betting 40 concurrent\nbackends doing 2 tps is much worse off than 10 backends doing 8 tps.\n\n\nTest hardware right now is a dual Opteron with 4G of ram, which we've\nbarely gotten 1,000 clients running against.  Current disk hardware in\ntesting is whatever we could scrape together (4x 3-ware PCI hardware\nRAID controllers, with 8 SATA drives in a RAID10 array off of each -\naggregated up in a 4-way stripe with linux md driver and then formatted\nas ext3 with an appropriate stride parameter and data=writeback). \nProduction will hopefully be a 4-8-way Opteron, 16 or more G of RAM,\nand a fiberchannel hardware raid array or two (~ 1TB available RAID10 storage) with 15krpm disks and\nbattery-backed write cache.\n\nI know I haven't provided a whole lot of application-level detail here,\nbut does anyone have any general advice on tweaking postgresql to deal\nwith a very heavy load of concurrent and almost exclusively write-only\ntransactions?  Increasing shared_buffers seems to always help,\neven out to half of the dev box's ram (2G).  A 100ms commit_delay\nseemed to help, but tuning it (and _siblings) has been difficult. \nWe're using 8.0 with the default 8k blocksize, but are strongly\nconsidering both developing against 8.1 (seems it might handle the\nheavy concurrency better), and re-compiling with 32k blocksize since\nour storage arrays will inevitably be using fairly wide stripes. \nAny advice on any of this (other than drop the project while you're\nstill a little bit sane)?\n\n--Brandon", "msg_date": "Mon, 12 Sep 2005 16:04:06 -0500", "msg_from": "Brandon Black <[email protected]>", "msg_from_op": true, "msg_subject": "Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "\n\n> I know I haven't provided a whole lot of application-level detail here,\n\n\tYou did !\n\n\tWhat about :\n\n\t- using COPY instead of INSERT ?\n\t\t(should be easy to do from the aggregators)\n\n\t- using Bizgres ? \t\n\t\t(which was designed for your use case)\n\n\t- splitting the xlog and the data on distinct physical drives or arrays\n\n\t- benchmarking something else than ext3\n\t\t(xfs ? reiser3 ?)\n", "msg_date": "Mon, 12 Sep 2005 23:24:24 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "Split your system into multiple partitions of RAID 10s.For max performance, \nten drive RAID 10 for pg_xlog (This will max out a PCI-X bus) on Bus A, \nmultiple 4/6Drive RAID 10s for tablespaces on Bus B. For max performance I \nwould recommend using one RAID 10 for raw data tables, one for aggregate \ntables and one for indexes. More RAM will only help you with queries against \nyour data, if you are pre-aggregating, then you may not need all that much \nRAM.\n\nYou can easily get 100 tansacts per second with even less hardware with a \nlittle data partitioning.\n\nChoose your controller carefully as many don't co-operate with linux well.\n\nAlex Turner\nNetEconomist\n\nOn 9/12/05, Brandon Black <[email protected]> wrote:\n> \n> \n> I'm in the process of developing an application which uses PostgreSQL for \n> data storage. 
Our database traffic is very atypical, and as a result it has \n> been rather challenging to figure out how to best tune PostgreSQL on what \n> development hardware we have, as well as to figure out exactly what we \n> should be evaluating and eventually buying for production hardware.\n> \n> The vast, overwhelming majority of our database traffic is pretty much a \n> non-stop stream of INSERTs filling up tables. It is akin to data \n> acquisition. Several thousand clients are sending once-per-minute updates \n> full of timestamped numerical data at our central server, which in turn \n> performs INSERTs into several distinct tables as part of the transaction for \n> that client. We're talking on the order of ~100 transactions per second, \n> each containing INSERTs to multiple tables (which contain only integer and \n> floating point columns and a timestamp column - the primary key (and only \n> index) is on a unique integer ID for the client and the timestamp). The \n> transaction load is spread evenly over time by having the clients send their \n> per-minute updates at random times rather than on the exact minute mark.\n> \n> There will of course be users using a web-based GUI to extract data from \n> these tables and display them in graphs and whatnot, but the SELECT query \n> traffic will always be considerably less frequent and intensive than the \n> incessant INSERTs, and it's not that big a deal if the large queries take a \n> little while to run.\n> \n> This data also expires - rows with timestamps older than X days will be \n> DELETEd periodically (once an hour or faster), such that the tables will \n> reach a relatively stable size (pg_autovacuum is handling vacuuming for now, \n> but considering our case, we're thinking of killing pg_autovacuum in favor \n> of having the periodic DELETE process also do a vacuum of affected tables \n> right after the DELETE, and then have it vacuum the other low traffic tables \n> once a day while it's at it).\n> \n> There is an aggregation layer in place which proxies the inbound data from \n> the clients into a small(er) number of persistent postgresql backend \n> processes. Right now we're doing one aggregator per 128 clients (so instead \n> of 128 seperate database connections over the course of a minute for a small \n> transaction each, there is a single database backend that is constantly \n> committing transactions at a rate of ~ 2/second). At a test load of ~1,000 \n> clients, we would have 8 aggregators running and 8 postgresql backends. \n> Testing has seemed to indicate we should aggregate even harder - the planned \n> production load is ~5,000 clients initially, but will grow to almost double \n> that in the not-too-distant future, and that would mean ~40 backends at 128 \n> clients each initially. Even on 8 cpus, I'm betting 40 concurrent backends \n> doing 2 tps is much worse off than 10 backends doing 8 tps.\n> \n> Test hardware right now is a dual Opteron with 4G of ram, which we've \n> barely gotten 1,000 clients running against. Current disk hardware in \n> testing is whatever we could scrape together (4x 3-ware PCI hardware RAID \n> controllers, with 8 SATA drives in a RAID10 array off of each - aggregated \n> up in a 4-way stripe with linux md driver and then formatted as ext3 with an \n> appropriate stride parameter and data=writeback). 
Production will hopefully \n> be a 4-8-way Opteron, 16 or more G of RAM, and a fiberchannel hardware raid \n> array or two (~ 1TB available RAID10 storage) with 15krpm disks and \n> battery-backed write cache.\n> \n> I know I haven't provided a whole lot of application-level detail here, \n> but does anyone have any general advice on tweaking postgresql to deal with \n> a very heavy load of concurrent and almost exclusively write-only \n> transactions? Increasing shared_buffers seems to always help, even out to \n> half of the dev box's ram (2G). A 100ms commit_delay seemed to help, but \n> tuning it (and _siblings) has been difficult. We're using 8.0 with the \n> default 8k blocksize, but are strongly considering both developing against \n> 8.1 (seems it might handle the heavy concurrency better), and re-compiling \n> with 32k blocksize since our storage arrays will inevitably be using fairly \n> wide stripes. Any advice on any of this (other than drop the project while \n> you're still a little bit sane)?\n> \n> --Brandon\n> \n>\n\nSplit your system into multiple partitions of RAID 10s.For max\nperformance,  ten drive RAID 10 for pg_xlog (This will max out a\nPCI-X bus) on Bus A, multiple 4/6Drive RAID 10s for tablespaces on Bus\nB. For max performance I would recommend using one RAID 10 for raw data\ntables, one for aggregate tables and one for indexes.  More RAM\nwill only help you with queries against your data, if you are\npre-aggregating, then you may not need all that much RAM.\n\nYou can easily get 100 tansacts per second with even less hardware with a little data partitioning.\n\nChoose your controller carefully as many don't co-operate with linux well.\nAlex Turner\nNetEconomist\nOn 9/12/05, Brandon Black <[email protected]> wrote:\n\nI'm in the process of developing an application which uses PostgreSQL\nfor data storage.  Our database traffic is very atypical, and as a\nresult it has been rather challenging to figure out how to best tune\nPostgreSQL on what development hardware we have, as well as to figure\nout exactly what we should be evaluating and eventually buying for\nproduction hardware.\n\nThe vast, overwhelming majority of our database traffic is pretty much\na non-stop stream of INSERTs filling up tables.  It is akin to\ndata acquisition.  Several thousand clients are sending\nonce-per-minute updates full of timestamped numerical data at our\ncentral server, which in turn performs INSERTs into several distinct\ntables as part of the transaction for that client.  We're talking\non the order of ~100 transactions per second, each containing INSERTs\nto multiple tables (which contain only integer and floating point\ncolumns and a timestamp column - the primary key (and only index) is on\na unique integer ID for the client and the timestamp).  
The\ntransaction load is spread evenly over time by having the clients send\ntheir per-minute updates at random times rather than on the exact\nminute mark.\n\nThere will of course be users using a web-based GUI to extract data\nfrom these tables and display them in graphs and whatnot, but the\nSELECT query traffic will always be considerably less frequent and\nintensive than the incessant INSERTs, and it's not that big a deal if\nthe large queries take a little while to run.\n\nThis data also expires - rows with timestamps older than X days will be\nDELETEd periodically (once an hour or faster), such that the tables\nwill reach a relatively stable size (pg_autovacuum is handling\nvacuuming for now, but considering our case, we're thinking of killing\npg_autovacuum in favor of having the periodic DELETE process also do a\nvacuum of affected tables right after the DELETE, and then have it\nvacuum the other low traffic tables once a day while it's at it).\n\nThere is an aggregation layer in place which proxies the inbound data\nfrom the clients into a small(er) number of persistent postgresql\nbackend processes.  Right now we're doing one aggregator per 128\nclients (so instead of 128 seperate database connections over the\ncourse of a minute for a small transaction each, there is a single\ndatabase backend that is constantly committing transactions at a rate\nof ~ 2/second).  At a test load of ~1,000 clients, we would have 8\naggregators running and 8 postgresql backends.  Testing has seemed\nto indicate we should aggregate even harder - the planned production\nload is ~5,000 clients initially, but will grow to almost double that\nin the not-too-distant future, and that would mean ~40 backends at 128\nclients each initially.  Even on 8 cpus, I'm betting 40 concurrent\nbackends doing 2 tps is much worse off than 10 backends doing 8 tps.\n\n\nTest hardware right now is a dual Opteron with 4G of ram, which we've\nbarely gotten 1,000 clients running against.  Current disk hardware in\ntesting is whatever we could scrape together (4x 3-ware PCI hardware\nRAID controllers, with 8 SATA drives in a RAID10 array off of each -\naggregated up in a 4-way stripe with linux md driver and then formatted\nas ext3 with an appropriate stride parameter and data=writeback). \nProduction will hopefully be a 4-8-way Opteron, 16 or more G of RAM,\nand a fiberchannel hardware raid array or two (~ 1TB available RAID10 storage) with 15krpm disks and\nbattery-backed write cache.\n\nI know I haven't provided a whole lot of application-level detail here,\nbut does anyone have any general advice on tweaking postgresql to deal\nwith a very heavy load of concurrent and almost exclusively write-only\ntransactions?  Increasing shared_buffers seems to always help,\neven out to half of the dev box's ram (2G).  A 100ms commit_delay\nseemed to help, but tuning it (and _siblings) has been difficult. \nWe're using 8.0 with the default 8k blocksize, but are strongly\nconsidering both developing against 8.1 (seems it might handle the\nheavy concurrency better), and re-compiling with 32k blocksize since\nour storage arrays will inevitably be using fairly wide stripes. 
\nAny advice on any of this (other than drop the project while you're\nstill a little bit sane)?\n\n--Brandon", "msg_date": "Mon, 12 Sep 2005 17:45:21 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On 9/12/05, PFC <[email protected]> wrote:\n> \n> \n> \n> > I know I haven't provided a whole lot of application-level detail here,\n> \n> You did !\n> \n> What about :\n> \n> - using COPY instead of INSERT ?\n> (should be easy to do from the aggregators)\n\n\nPossibly, although it would kill the current design of returning the \ndatabase transaction status for a single client packet back to the client on \ntransaction success/failure. The aggregator could put several clients' data \ninto a series of delayed multi-row copy statements.\n\n- using Bizgres ?\n> (which was designed for your use case)\n\n\nI only briefly scanned their \"About\" page, but they didn't sound \nparticularly suited to my case at the time (it sounded kinds buzzwordy \nactually, which I suppose is great for business apps people :) ). We're more \nof a sciency shop. I'll go look at the project in more detail tonight in \nlight of your recommendation.\n\n- splitting the xlog and the data on distinct physical drives or arrays\n\n\nThat would almost definitely help, I haven't tried it yet. Speaking of the \nxlog, anyone know anything specific about the WAL tuning parameters for \nheavy concurrent write traffic? What little I could dig up on WAL tuning was \ncontradictory, and testing some random changes to the parameters hasn't been \nvery conclusive yet. I would imagine the WAL buffers stuff could potentially \nhave a large effect for us.\n\n- benchmarking something else than ext3\n> (xfs ? reiser3 ?)\n> \n\nWe've had bad experiences under extreme and/or strange workloads with XFS \nhere in general, although this is the first major postgresql project - the \nrest were with other applications writing to XFS. Bad experiences like XFS \nfilesystems \"detecting internal inconsistencies\" at runtime and unmounting \nthemselves from within the kernel module (much to the dismay of applications \nwith open files on the filesystem), on machines with validated good \nhardware. It has made me leary of using anything other than ext3 for fear of \nstability problems. Reiser3 might be worth taking a look at though.\n\nThanks for the ideas,\n-- Brandon\n\nOn 9/12/05, PFC <[email protected]> wrote:\n> I know I haven't provided a whole lot of application-level detail here,        You did !        What about :        - using COPY instead of INSERT ?                (should\nbe easy to do from the aggregators)\nPossibly, although it would kill the current design of returning the\ndatabase transaction status for a single client packet back to the\nclient on transaction success/failure.   The aggregator could put\nseveral clients' data into a series of delayed multi-row copy\nstatements.\n        - using Bizgres ?                (which\nwas designed for your use case)\nI only briefly scanned their \"About\" page, but they didn't sound\nparticularly suited to my case at the time (it sounded kinds buzzwordy\nactually, which I suppose is great for business apps people :) ). \nWe're more of a sciency shop.  
I'll go look at the project in more\ndetail tonight in light of your recommendation.\n        - splitting the xlog and the data on distinct physical drives or arrays\n\nThat would almost definitely help, I haven't tried it yet. \nSpeaking of the xlog, anyone know anything specific about the WAL\ntuning parameters for heavy concurrent write traffic?  What little\nI could dig up on WAL tuning was contradictory, and testing some random\nchanges to the parameters hasn't been very conclusive yet.  I\nwould imagine the WAL buffers stuff could potentially have a large\neffect for us.\n        - benchmarking something else than ext3                (xfs ? reiser3 ?)\n\nWe've had bad experiences under extreme and/or strange workloads with\nXFS here in general, although this is the first major postgresql\nproject - the rest were with other applications writing to XFS. \nBad experiences like XFS filesystems \"detecting internal\ninconsistencies\" at runtime and unmounting themselves from within the\nkernel module (much to the dismay of applications with open files on\nthe filesystem), on machines with validated good hardware.  It has\nmade me leary of using anything other than ext3 for fear of stability\nproblems.  Reiser3 might be worth taking a look at though.\n\nThanks for the ideas,\n-- Brandon", "msg_date": "Mon, 12 Sep 2005 17:02:30 -0500", "msg_from": "Brandon Black <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "\"Brandon Black\" <[email protected]> wrote ...\n\n Increasing shared_buffers seems to always help, even out to half of the dev box's ram (2G). \n\n Though officially PG does not prefer huge shared_buffers size, I did see several times that performance was boosted in case IO is the bottleneck. Also, if you want to use big bufferpool setting, make sure your version has Tom's split BufMgrLock patch\n (http://archives.postgresql.org/pgsql-committers/2005-03/msg00025.php), which might already in 8.0.x somewhere. And if you want to use bufferpool bigger than 2G on 64-bit machine, you may need 8.1 (http://archives.postgresql.org/pgsql-committers/2005-08/msg00221.php).\n\n Regards,\n Qingqing\n\n\n\n\n\n\n \n\n\"Brandon Black\" <[email protected]> wrote ...\nIncreasing shared_buffers seems to always help, even out to half of \n the dev box's ram (2G).  \n \nThough officially PG does not prefer huge \n shared_buffers size, I did see several times that performance was \n boosted in case IO is the bottleneck. Also, if you want to use big bufferpool \n setting, make sure your version has Tom's split BufMgrLock patch\n(http://archives.postgresql.org/pgsql-committers/2005-03/msg00025.php), \n which might already in 8.0.x somewhere. And if you want to use bufferpool \n bigger than 2G on 64-bit machine, you may need 8.1 (http://archives.postgresql.org/pgsql-committers/2005-08/msg00221.php).\n \nRegards,\nQingqing", "msg_date": "Mon, 12 Sep 2005 17:37:19 -0700", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "Brandon Black wrote:\n\n>\n>\n> On 9/12/05, *PFC* <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n>\n> - benchmarking something else than ext3\n> (xfs ? 
reiser3 ?)\n>\n>\n> We've had bad experiences under extreme and/or strange workloads with \n> XFS here in general, although this is the first major postgresql \n> project - the rest were with other applications writing to XFS. Bad \n> experiences like XFS filesystems \"detecting internal inconsistencies\" \n> at runtime and unmounting themselves from within the kernel module \n> (much to the dismay of applications with open files on the \n> filesystem), on machines with validated good hardware. It has made me \n> leary of using anything other than ext3 for fear of stability \n> problems. Reiser3 might be worth taking a look at though.\n\nJust one tidbit. We tried XFS on a very active system similar to what \nyou describe. Dual opterons, 8GB memory, fiber channel drives, 2.6 \nkernel, etc. And the reliability was awful. We spent a lot of time \nmaking changes one at a time to try and isolate the cause; when we \nswitched out from XFS to ReiserFS our stability problems went away.\n\nIt may be the case that the XFS problems have all been corrected in \nnewer kernels, but I'm not going to put too much effort into trying that \nagain.\n\n\nI recently built a postgres with 32KB block sizes and have been doing \nsome testing. For our particular workloads it has been a win.\n\n-- Alan\n", "msg_date": "Mon, 12 Sep 2005 20:38:34 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "\nBrandon Black <[email protected]> writes:\n\n> The vast, overwhelming majority of our database traffic is pretty much a \n> non-stop stream of INSERTs filling up tables. \n\nThat part Postgres should handle pretty well. It should be pretty much limited\nby your I/O bandwidth so big raid 1+0 arrays are ideal. Putting the WAL on a\ndedicated array would also be critical.\n\nThe WAL parameters like commit_delay and commit_siblings are a bit of a\nmystery. Nobody has done any extensive testing of them. It would be quite\nhelpful if you find anything conclusive and post it. It would also be\nsurprising if they had a very large effect. They almost got chopped recently\nbecause they weren't believed to be useful.\n\nYou might also ponder whether you need to by issuing a commit for every datum.\nIf you only need to commit periodically you can get much better throughput. I\nsuppose that's the same as commit_siblings. It would be interesting to know if\nyou can get those parameters to perform as well as batching up records\nyourself.\n\n> There will of course be users using a web-based GUI to extract data from \n> these tables and display them in graphs and whatnot, but the SELECT query \n> traffic will always be considerably less frequent and intensive than the \n> incessant INSERTs, and it's not that big a deal if the large queries take a \n> little while to run.\n\nI do fear these queries. Even if they aren't mostly terribly intensive if\nyou're pushing the edges of your write I/O bandwidth then a single seek to\nsatisfy one of these selects could really hurt your throughput. \n\nThat said, as long as your WAL is on a dedicated drive Postgres's architecture\nshould in theory be ideal and allow you do run these things with impunity. 
The\nWAL is purely write-only and it's the only thing your inserts will be blocking\non.\n\n> This data also expires - rows with timestamps older than X days will be \n> DELETEd periodically (once an hour or faster), such that the tables will \n> reach a relatively stable size (pg_autovacuum is handling vacuuming for now, \n> but considering our case, we're thinking of killing pg_autovacuum in favor \n> of having the periodic DELETE process also do a vacuum of affected tables \n> right after the DELETE, and then have it vacuum the other low traffic tables \n> once a day while it's at it).\n\nAy, there's the rub.\n\nTaking this approach means you have vacuums running which have to scan your\nentire table and your inserts are being sprayed all over the disk. \n\nAn alternative you may consider is using partitioned tables. Then when you\nwant to expire old records you simply drop the oldest partition. Or in your\ncase you might rotate through the partitions, so you just truncate the oldest\none and start inserting into it.\n\nUnfortunately there's no built-in support for partitioned tables in Postgres.\nYou get to roll your own using UNION ALL or using inherited tables. Various\npeople have described success with each option though they both have\ndownsides too.\n\nUsing partitioned tables you would never need to delete any records except for\nwhen you delete all of them. So you would never need to run vacuum except on\nnewly empty partitions. That avoids having to scan through all those pages\nthat you know will never have holes. If you use TRUNCATE (or VACUUM ALL or\nCLUSTER) that would mean your inserts are always sequential too (though it\nalso means lots of block allocations along the way, possibly not an\nadvantage).\n\nThis may be a lot more work to set up and maintain but I think it would be a\nbig performance win. It would directly speed up the WAL writes by avoiding\nthose big full page dumps. And it would also cut out all the I/O traffic\nvacuum generates.\n\n\n> Increasing shared_buffers seems to always help, even out to half of the dev\n> box's ram (2G).\n\nHalf should be a pessimal setting. It means virtually everything is buffered\ntwice. Once in the kernel and once in Postgres. Effectively halving your\nmemory. If that's really helping try raising it even further, to something\nlike 90% of your memory. But the conventional dogma is that shared buffers\nshould never be more than about 10k no matter how much memory you have. Let\nthe kernel do the bulk of the buffering.\n\nThat said it's even more mysterious in your situation. Why would a large\nshared buffers help insert-only performance much at all? My guess is that it's\nletting the vacuums complete quicker. Perhaps try raising work_mem?\n\n-- \ngreg\n\n", "msg_date": "12 Sep 2005 23:07:49 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On 9/12/05, Brandon Black <[email protected]> wrote:\n> \n> I'm in the process of developing an application which uses PostgreSQL for\n> data storage. 
Our database traffic is very atypical, and as a result it has\n> been rather challenging to figure out how to best tune PostgreSQL on what\n> development hardware we have, as well as to figure out exactly what we\n> should be evaluating and eventually buying for production hardware.\n\nA few suggestions...\n\n1) Switch to COPY if you can, it's anywhere from 10-100x faster than\nINSERT, but it does not necessarily fit your idea of updating multiple\ntables. In that case, try and enlarge the transaction's scope and do\nmultiple INSERTs in the same transaction. Perhaps batching once per\nsecond, or 5 seconds, and returning the aggregate result ot the\nclients.\n\n2) Tune ext3. The default configuration wrecks high-write situations.\n Look into data=writeback for mounting, turning off atime (I hope\nyou've done this already) updates, and also modifying the scheduler to\nthe elevator model. This is poorly documented in Linux (like just\nabout everything), but it's crtical.\n\n3) Use 8.1 and strongly look at Bizgres. The data partitioning is critical.\n\n4) Make sure you are not touching more data than you need, and don't\nhave any extraneous indexes. Use the planner to make sure every index\nis used, as it substantially increases the write load.\n\nI've worked on a few similar applications, and this is a hard thing in\nany database, even Oracle.\n\nChris \n\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Mon, 12 Sep 2005 23:39:14 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On 12 Sep 2005 23:07:49 -0400, Greg Stark <[email protected]> wrote:\n\n> \n> The WAL parameters like commit_delay and commit_siblings are a bit of a\n> mystery. Nobody has done any extensive testing of them. It would be quite\n> helpful if you find anything conclusive and post it. It would also be\n> surprising if they had a very large effect. They almost got chopped \n> recently\n> because they weren't believed to be useful.\n> \n> You might also ponder whether you need to by issuing a commit for every \n> datum.\n> If you only need to commit periodically you can get much better \n> throughput. I\n> suppose that's the same as commit_siblings. It would be interesting to \n> know if\n> you can get those parameters to perform as well as batching up records\n> yourself.\n\n\nIdeally I'd like to commit the data seperately, as the data could contain \nerrors which abort the transaction, but it may come down to batching it and \ncoding things such that I can catch and discard the offending row and retry \nthe transaction if it fails (which should be fairly rare I would hope). I \nwas hoping that the commit_delay/commit_siblings stuff would allow me to \nmaintain simplistic transaction failure isolation while giving some of the \nbenefits of batching things up, as you've said. I have seen performance \ngains with it set at 100ms and a 3-6 siblings with 8 backends running, but I \nhaven't been able to extensively tune these values, they were mostly random \nguesses that seemed to work. My cycles of performance testing take a while, \nat least a day or two per change being tested, and the differences can even \nthen be hard to see due to variability in the testing load (as it's not \nreally a static test load, but a window on reality). 
On top of that, with \nthe time it takes, I've succumbed more than once to the temptation of \ntweaking more than one thing per performance run, which really muddies the \nresults.\n\n> Increasing shared_buffers seems to always help, even out to half of the \n> dev\n> > box's ram (2G).\n> \n> Half should be a pessimal setting. It means virtually everything is \n> buffered\n> twice. Once in the kernel and once in Postgres. Effectively halving your\n> memory. If that's really helping try raising it even further, to something\n> like 90% of your memory. But the conventional dogma is that shared buffers\n> should never be more than about 10k no matter how much memory you have. \n> Let\n> the kernel do the bulk of the buffering.\n> \n> That said it's even more mysterious in your situation. Why would a large\n> shared buffers help insert-only performance much at all? My guess is that \n> it's\n> letting the vacuums complete quicker. Perhaps try raising work_mem?\n\n\nI find it odd as well. After reading the standard advice on shared_buffers, \nI had only intended on raising it slightly. But seeing ever-increasing \nperformance gains, I just kept tuning it upwards all the way to the 2G \nlimit, and saw noticeable gains every time. During at least some of the test \ncycles, there was no deleting or vacuuming going on, just insert traffic. I \nguessed that shared_buffers' caching strategy must have been much more \neffective than the OS cache at something or other, but I don't know what \nexactly. The only important read traffic that comes to mind is the index \nwhich is both being constantly updated and constantly checked for primary \nkey uniqueness violations.\n\nAll of these replies here on the list (and a private one or two) have given \nme a long list of things to try, and I feel confident that at least some of \nthem will gain me enough performance to comfortably deploy this application \nin the end on somewhat reasonable hardware. Thanks to everyone here on the \nlist for all the suggestions, it has been very helpful in giving me \ndirections to go with this that I hadn't thought of before.\n\nWhen I finally get all of this sorted out and working reasonably optimally, \nI'll be sure to come back and report what techniques/settings did and didn't \nwork for this workload.\n\n-- Brandon\n\nOn 12 Sep 2005 23:07:49 -0400, Greg Stark <[email protected]> wrote:\nThe WAL parameters like commit_delay and commit_siblings are a bit of amystery. Nobody has done any extensive testing of them. It would be quite\nhelpful if you find anything conclusive and post it. It would also besurprising if they had a very large effect. They almost got chopped recentlybecause they weren't believed to be useful.You might also ponder whether you need to by issuing a commit for every datum.\nIf you only need to commit periodically you can get much better throughput. Isuppose that's the same as commit_siblings. It would be interesting to know ifyou can get those parameters to perform as well as batching up records\nyourself.\nIdeally I'd like to commit the data seperately, as the data could\ncontain errors which abort the transaction, but it may come down to\nbatching it and coding things such that I can catch and discard the\noffending row and retry the transaction if it fails (which should be\nfairly rare I would hope).  I was hoping that the\ncommit_delay/commit_siblings stuff would allow me to maintain\nsimplistic transaction failure isolation while giving some of the\nbenefits of batching things up, as you've said.  
I have seen\nperformance gains with it set at 100ms and a 3-6 siblings with 8\nbackends running, but I haven't been able to extensively tune these\nvalues, they were mostly random guesses that seemed to work.  My\ncycles of performance testing take a while, at least a day or two per\nchange being tested, and the differences can even then be hard to see\ndue to variability in the testing load (as it's not really a static\ntest load, but a window on reality).  On top of that, with the\ntime it takes, I've succumbed more than once to the temptation of\ntweaking more than one thing per performance run, which really muddies\nthe results.\n\n> Increasing shared_buffers seems to always help, even out to half of the dev> box's ram (2G).\nHalf should be a pessimal setting. It means virtually everything is bufferedtwice. Once in the kernel and once in Postgres. Effectively halving yourmemory. If that's really helping try raising it even further, to something\nlike 90% of your memory. But the conventional dogma is that shared buffersshould never be more than about 10k no matter how much memory you have. Letthe kernel do the bulk of the buffering.That said it's even more mysterious in your situation. Why would a large\nshared buffers help insert-only performance much at all? My guess is that it'sletting the vacuums complete quicker. Perhaps try raising work_mem?\nI find it odd as well.  After reading the standard advice on\nshared_buffers, I had only intended on raising it slightly.  But\nseeing ever-increasing performance gains, I just kept tuning it upwards\nall the way to the 2G limit, and saw noticeable gains every time. \nDuring at least some of the test cycles, there was no deleting or\nvacuuming going on, just insert traffic.  I guessed that\nshared_buffers' caching strategy must have been much more effective\nthan the OS cache at something or other, but I don't know what\nexactly.  The only important read traffic that comes to mind is\nthe index which is both being constantly updated and constantly checked\nfor primary key uniqueness violations.\n\nAll of these replies here on the list (and a private one or two) have\ngiven me a long list of things to try, and I feel confident that at\nleast some of them will gain me enough performance to comfortably\ndeploy this application in the end on somewhat reasonable\nhardware.  Thanks to everyone here on the list for all the\nsuggestions, it has been very helpful in giving me directions to go\nwith this that I hadn't thought of before.\n\nWhen I finally get all of this sorted out and working reasonably\noptimally, I'll be sure to come back and report what\ntechniques/settings did and didn't work for this workload.\n\n-- Brandon", "msg_date": "Mon, 12 Sep 2005 22:44:35 -0500", "msg_from": "Brandon Black <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On 9/12/05, Christopher Petrilli <[email protected]> wrote:\n> \n> \n> 2) Tune ext3. The default configuration wrecks high-write situations.\n> Look into data=writeback for mounting, turning off atime (I hope\n> you've done this already) updates, and also modifying the scheduler to\n> the elevator model. This is poorly documented in Linux (like just\n> about everything), but it's crtical.\n\n\nI'm using noatime and data=writeback already. 
I changed my scheduler from \nthe default anticipatory to deadline and saw an improvement, but I haven't \nyet delved into playing with specific elevator tunable values per-device.\n\n3) Use 8.1 and strongly look at Bizgres. The data partitioning is critical.\n\n\nI've just started down the path of getting 8.1 running with a larger block \nsize, and tommorow I'm going to look at Bizgres's partitioning as opposed to \nsome manual schemes. Will the current Bizgres have a lot of the performance \nenhancements of 8.1 already (or will 8.1 or 8.2 eventually get Bizgres's \npartitioning?)?\n\n-- Brandon\n\nOn 9/12/05, Christopher Petrilli <[email protected]> wrote:\n2) Tune ext3.  The default configuration wrecks high-write situations. Look into data=writeback for mounting, turning off atime (I hopeyou've done this already) updates, and also modifying the scheduler to\nthe elevator model.  This is poorly documented in Linux (like justabout everything), but it's crtical.\nI'm using noatime and data=writeback already.  I changed my\nscheduler from the default anticipatory to deadline and saw an\nimprovement, but I haven't yet delved into playing with specific\nelevator tunable values per-device.\n3) Use 8.1 and strongly look at Bizgres. The data partitioning is critical.\n\nI've just started down the path of getting 8.1 running with a larger\nblock size, and tommorow I'm going to look at Bizgres's partitioning as\nopposed to some manual schemes.  Will the current Bizgres have a\nlot of the performance enhancements of 8.1 already (or will 8.1 or 8.2\neventually get Bizgres's partitioning?)?\n-- Brandon", "msg_date": "Mon, 12 Sep 2005 22:53:55 -0500", "msg_from": "Brandon Black <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "* Brandon Black ([email protected]) wrote:\n> Ideally I'd like to commit the data seperately, as the data could contain \n> errors which abort the transaction, but it may come down to batching it and \n> coding things such that I can catch and discard the offending row and retry \n> the transaction if it fails (which should be fairly rare I would hope). I \n\nDon't really know if it'd be better, or worse, or what, but another\nthought to throw out there is to perhaps use savepoints instead of full\ntransactions? Not sure if that's more expensive or cheaper than doing\nfull commits but it might be something to consider.\n\n> When I finally get all of this sorted out and working reasonably optimally, \n> I'll be sure to come back and report what techniques/settings did and didn't \n> work for this workload.\n\nThat'd be great, many thanks,\n\n\tStephen", "msg_date": "Tue, 13 Sep 2005 10:10:31 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On Mon, 12 Sep 2005 16:04:06 -0500\nBrandon Black <[email protected]> wrote:\n\nI've seen serveral tests PostgreSQL on JFS file system, it runs faster than using ext3.\nOur production server works using JFS and RAID10,\nwe have 250K+ transactions per day and everything is OK.\nTry switching to separate RAID10 arrays and use one for xlog, and others for data.\nIf you are using indexes, try to put them on separate RAID10 array.\n\n> Test hardware right now is a dual Opteron with 4G of ram, which we've barely \n> gotten 1,000 clients running against. 
Current disk hardware in testing is \n> whatever we could scrape together (4x 3-ware PCI hardware RAID controllers, \n> with 8 SATA drives in a RAID10 array off of each - aggregated up in a 4-way \n> stripe with linux md driver and then formatted as ext3 with an appropriate \n> stride parameter and data=writeback). Production will hopefully be a 4-8-way \n> Opteron, 16 or more G of RAM, and a fiberchannel hardware raid array or two \n> (~ 1TB available RAID10 storage) with 15krpm disks and battery-backed write \n> cache.\n\n-- \nEvgeny Gridasov\nSoftware Developer\nI-Free, Russia\n", "msg_date": "Tue, 13 Sep 2005 19:26:38 +0400", "msg_from": "evgeny gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT" }, { "msg_contents": "On 9/12/05, Christopher Petrilli <[email protected]> wrote:\n> \n> \n> 3) Use 8.1 and strongly look at Bizgres. The data partitioning is \n> critical.\n\n\nI started looking closer at my options for partitioning (inheritance, union \nall), and at Bizgres today. Bizgres partitioning appears to be basically the \nsame kind of inheritance partitioning one can do in mainline PostgreSQL. Am \nI correct in thinking that the main difference is that they've coded support \nfor \"enable_constraint_exclusion=true\" so that the query planner can be more \neffective at taking advantage of the partitioning when you've specified \nCHECK constraints on the child tables? I may go for 8.1 instead in that \ncase, as the main win I'm looking for is that with inheritance I'll be doing \ninserts into smaller tables instead of ones that grow to unmanageable sizes \n(and that I can drop old child tables instead of delete/vacuum).\n\nOn 9/12/05, Christopher Petrilli <[email protected]> wrote:\n3) Use 8.1 and strongly look at Bizgres. The data partitioning is critical.\n\nI started looking closer at my options for partitioning (inheritance,\nunion all), and at Bizgres today.  Bizgres partitioning appears to be\nbasically the same kind of inheritance partitioning one can do in\nmainline PostgreSQL.  Am I correct in thinking that the main difference\nis that they've coded support for \"enable_constraint_exclusion=true\" so\nthat the query planner can be more effective at taking advantage of the\npartitioning when you've specified CHECK constraints on the child\ntables?  I may go for 8.1 instead in that case, as the main win I'm\nlooking for is that with inheritance I'll be doing inserts into smaller\ntables instead of ones that grow to unmanageable sizes (and that I can\ndrop old child tables instead of delete/vacuum).", "msg_date": "Tue, 13 Sep 2005 10:30:59 -0500", "msg_from": "Brandon Black <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On Tue, 2005-09-13 at 11:30, Brandon Black wrote:\n> I started looking closer at my options for partitioning (inheritance,\n> union all), and at Bizgres today. Bizgres partitioning appears to be\n> basically the same kind of inheritance partitioning one can do in\n> mainline PostgreSQL. Am I correct in thinking that the main\n> difference is that they've coded support for\n> \"enable_constraint_exclusion=true\" so that the query planner can be\n> more effective at taking advantage of the partitioning when you've\n> specified CHECK constraints on the child tables? 
I may go for 8.1\n> instead in that case, as the main win I'm looking for is that with\n> inheritance I'll be doing inserts into smaller tables instead of ones\n> that grow to unmanageable sizes (and that I can drop old child tables\n> instead of delete/vacuum).\n\nPerhaps I missed something in this thread, but don't forget\nyou still need vacuum to reclaim XIDs.\n\n\t--Ian\n\n\n", "msg_date": "Tue, 13 Sep 2005 12:16:55 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT" }, { "msg_contents": "On Tue, Sep 13, 2005 at 12:16:55PM -0400, Ian Westmacott wrote:\n> On Tue, 2005-09-13 at 11:30, Brandon Black wrote:\n> > I started looking closer at my options for partitioning (inheritance,\n> > union all), and at Bizgres today. Bizgres partitioning appears to be\n> > basically the same kind of inheritance partitioning one can do in\n> > mainline PostgreSQL. Am I correct in thinking that the main\n> > difference is that they've coded support for\n> > \"enable_constraint_exclusion=true\" so that the query planner can be\n> > more effective at taking advantage of the partitioning when you've\n> > specified CHECK constraints on the child tables? I may go for 8.1\n> > instead in that case, as the main win I'm looking for is that with\n> > inheritance I'll be doing inserts into smaller tables instead of ones\n> > that grow to unmanageable sizes (and that I can drop old child tables\n> > instead of delete/vacuum).\n> \n> Perhaps I missed something in this thread, but don't forget\n> you still need vacuum to reclaim XIDs.\n\nYes, but if you are going to drop the partition before 1 billion\ntransactions, you can skip vacuuming it completely.\n\n-- \nAlvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com\n\"Es fil�sofo el que disfruta con los enigmas\" (G. Coli)\n", "msg_date": "Tue, 13 Sep 2005 13:22:43 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT" }, { "msg_contents": "On Sep 12, 2005, at 6:02 PM, Brandon Black wrote:\n\n> - splitting the xlog and the data on distinct physical \n> drives or arrays\n>\n> That would almost definitely help, I haven't tried it yet. \n> Speaking of the xlog, anyone know anything specific about the WAL \n> tuning parameters for heavy concurrent write traffic? What little \n> I could dig up on WAL tuning was contradictory, and testing some \n> random changes to the parameters hasn't been very conclusive yet. \n> I would imagine the WAL buffers stuff could potentially have a \n> large effect for us.\n>\n\nyou will want to make your pg_xlog RAID volume BIG, and then tell \npostgres to use that space: bump up checkpoint_segments (and suitably \nthe checkpoint timeouts). I run with 256 segments and a timeout of 5 \nminutes. The timeout refletcs your expected crash recovery time, so \nadjust it wisely....\n\nAlso, you should consider how you split your drives across your RAID \ndata channels on your test machine: I put each pair of the RAID10 \nmirrors on opposite channels, so both channels of my RAID controller \nare pretty evenly loaded during write.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n\nOn Sep 12, 2005, at 6:02 PM, Brandon Black wrote:        - splitting the xlog and the data on distinct physical drives or arraysThat would almost definitely help, I haven't tried it yet.  
Speaking of the xlog, anyone know anything specific about the WAL tuning parameters for heavy concurrent write traffic?  What little I could dig up on WAL tuning was contradictory, and testing some random changes to the parameters hasn't been very conclusive yet.  I would imagine the WAL buffers stuff could potentially have a large effect for us.you will want to make your pg_xlog RAID volume BIG, and then tell postgres to use that space: bump up checkpoint_segments (and suitably the checkpoint timeouts).  I run with 256 segments and a timeout of 5 minutes.  The timeout refletcs your  expected crash recovery time, so adjust it wisely....Also, you should consider how you split your drives across your RAID data channels on your test machine: I put each pair of the RAID10 mirrors on opposite channels, so both channels of my RAID controller are pretty evenly loaded during write. Vivek Khera, Ph.D. +1-301-869-4449 x806", "msg_date": "Wed, 21 Sep 2005 11:57:42 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" }, { "msg_contents": "On Sep 12, 2005, at 6:02 PM, Brandon Black wrote:\n\n> - using COPY instead of INSERT ?\n> (should be easy to do from the aggregators)\n>\n> Possibly, although it would kill the current design of returning \n> the database transaction status for a single client packet back to \n> the client on transaction success/failure. The aggregator could \n> put several clients' data into a series of delayed multi-row copy \n> statements.\n>\n\nbuffer through the file system on your aggregator. once you \"commit\" \nto local disk file, return back to your client that you got the \ndata. then insert into the actual postgres DB in large batches of \ninserts inside a single Postgres transaction.\n\nwe have our web server log certain tracking requests to a local \nfile. with file locks and append mode, it is extremely quick and has \nlittle contention delays. then every so often, we lock the file, \nrename it, release the lock, then process it at our leisure to do \nthe inserts to Pg in one big transaction.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806\n\n\n\nOn Sep 12, 2005, at 6:02 PM, Brandon Black wrote:        - using COPY instead of INSERT ?                (should be easy to do from the aggregators)Possibly, although it would kill the current design of returning the database transaction status for a single client packet back to the client on transaction success/failure.   The aggregator could put several clients' data into a series of delayed multi-row copy statements.buffer through the file system on your aggregator.  once you \"commit\" to local disk file, return back to your client that you got the data.  then insert into the actual postgres DB in large batches of inserts inside a single Postgres transaction.we have our web server log certain tracking requests to a local file.  with file locks and append mode, it is extremely quick and has little contention delays. then every so often, we lock the file, rename  it, release the lock, then process it at our leisure to do the inserts to Pg in one big transaction. Vivek Khera, Ph.D. +1-301-869-4449 x806", "msg_date": "Wed, 21 Sep 2005 12:01:48 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance considerations for very heavy INSERT traffic" } ]
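As a rough illustration of the batching approach suggested above (spool rows on the aggregator, then apply them in one transaction, or bulk-load them with COPY), here is a minimal SQL sketch. The table name client_data and its columns are hypothetical stand-ins, not taken from the original application.

-- Many small INSERTs wrapped in one transaction, so there is a single
-- WAL flush per batch instead of one per row:
BEGIN;
INSERT INTO client_data (client_id, logdate, reading) VALUES (101, now(), 4.2);
INSERT INTO client_data (client_id, logdate, reading) VALUES (102, now(), 7.9);
-- ... the rest of the rows the aggregator collected this interval ...
COMMIT;

-- Or load a spooled batch file in one step (COPY from a server-side file
-- needs superuser; psql's \copy does the same thing from the client side):
COPY client_data (client_id, logdate, reading) FROM '/var/spool/agg/batch-0001.txt';

Either way, a failed row aborts the whole batch, which is the trade-off against per-row commits discussed earlier in the thread.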
[ { "msg_contents": ">From: Brandon Black <[email protected]>\n>Sent: Sep 12, 2005 5:04 PM\n>To: [email protected]\n>Subject: [PERFORM] Performance considerations for very heavy INSERT traffic\n\n>I'm in the process of developing an application which uses PostgreSQL for \n>data storage. Our database traffic is very atypical, and as a result it has \n>been rather challenging to figure out how to best tune PostgreSQL on what \n>development hardware we have, as well as to figure out exactly what we \n>should be evaluating and eventually buying for production hardware.\n\n>The vast, overwhelming majority of our database traffic is pretty much a \n>non-stop stream of INSERTs filling up tables. It is akin to data \n>acquisition. Several thousand clients are sending once-per-minute updates \n>full of timestamped numerical data at our central server, which in turn \n>performs INSERTs into several distinct tables as part of the transaction for \n>that client. We're talking on the order of ~100 transactions per second, \n>each containing INSERTs to multiple tables (which contain only integer and \n>floating point columns and a timestamp column - the primary key (and only \n>index) is on a unique integer ID for the client and the timestamp). The \n>transaction load is spread evenly over time by having the clients send their \n>per-minute updates at random times rather than on the exact minute mark.\n\nI have built two such systems. TBF, neither used PostgreSQL. OTOH, the principles are the same.\n\nOne perhaps non-obvious point: You definitely are going to want a way to adjust exactly when a specific client is sending its approximately once-per-minute update via functionality similar to adjtime(). Such functionality will be needed to smooth the traffic across the clients as much as possible over the 1 minute polling period given the real-world vagracies of WAN connections.\n\nPut your current active data acquisition table on its own RAID 10 array, and keep it _very_ small and simple in structure (see below for a specific suggestion regarding this). If you must have indexes for this table, put them on a _different_ array. 
Basically, you are going to treat the active table like you would an log file: you want as little HD head movement as possible so appending to the table is effectively sequential IO.\n\nAs in a log file, you will get better performance if you batch your updates to HD into reasonably sized chunks rather than writing to HD every INSERT.\n\nThe actual log file will have to be treated in the same way to avoid it becoming a performance bottleneck.\n\n\n>There will of course be users using a web-based GUI to extract data from \n>these tables and display them in graphs and whatnot, but the SELECT query \n>traffic will always be considerably less frequent and intensive than the \n>incessant INSERTs, and it's not that big a deal if the large queries take a \n>little while to run.\n\nMore details here would be helpful.\n\n\n>This data also expires - rows with timestamps older than X days will be \n>DELETEd periodically (once an hour or faster), such that the tables will \n>reach a relatively stable size (pg_autovacuum is handling vacuuming for now, \n>but considering our case, we're thinking of killing pg_autovacuum in favor \n>of having the periodic DELETE process also do a vacuum of affected tables \n>right after the DELETE, and then have it vacuum the other low traffic tables \n>once a day while it's at it).\n\nDesign Idea: split the data into tables where _at most_ the tables are of the size that all the data in the table expires at the same time and DROP the entire table rather than scanning a big table for deletes at the same time you want to do inserts to said. Another appropriate size for these tables may be related to the chunk you want to write INSERTS to HD in. \n\nThis will also have the happy side effect of breaking the data into smaller chunks that are more likely to be cached in their entirety when used. \n\n\n>There is an aggregation layer in place which proxies the inbound data from \n>the clients into a small(er) number of persistent postgresql backend \n>processes. Right now we're doing one aggregator per 128 clients (so instead \n>of 128 seperate database connections over the course of a minute for a small \n>transaction each, there is a single database backend that is constantly \n>committing transactions at a rate of ~ 2/second). At a test load of ~1,000 \n>clients, we would have 8 aggregators running and 8 postgresql backends. \n>Testing has seemed to indicate we should aggregate even harder - the planned \n>production load is ~5,000 clients initially, but will grow to almost double \n>that in the not-too-distant future, and that would mean ~40 backends at 128 \n>clients each initially. Even on 8 cpus, I'm betting 40 concurrent backends \n>doing 2 tps is much worse off than 10 backends doing 8 tps.\n\nExperience has taught me that the above is not likely to be the proper architecture for this kind of application.\n\nThe best exact approach is dependent on The Details, but in general you want to optimize the amount of data sent from NIC to CPU per transfer (multiple small copies and lots of interrupts _kill_ system performance) and use a combined Threading and Event Queue model with Processor Affinity being used to optimize the NIC <-> CPU path for a given NIC. Have each physical NIC+CPU Affinity set be hidden underneath an overall Virtual NIC+CPU abstraction for load balancing purposes. The traffic thinks it's talking to one NIC attached to one CPU.\n\nThis architecture allowed for a 4P Xeon system (Compaq Proliant 8500s) to handle 250K+ simultaneous active web connections. 
In 2000 using Windows 2K. As HW + SW has gotten better, it has scaled very well. It's a very high performance model.\n\nSuch an architecture using AMD64 CPUs running a 64b Linux 2.6 distro now in late 2005 should _easily_ handle your projected demand with plenty of room for growth.\n\n\n>Test hardware right now is a dual Opteron with 4G of ram, which we've barely \n>gotten 1,000 clients running against. Current disk hardware in testing is \n>whatever we could scrape together (4x 3-ware PCI hardware RAID controllers, \n>with 8 SATA drives in a RAID10 array off of each - aggregated up in a 4-way \n>stripe with linux md driver and then formatted as ext3 with an appropriate \n>stride parameter and data=writeback). Production will hopefully be a 4-8-way \n>Opteron, 16 or more G of RAM, and a fiberchannel hardware raid array or two \n>(~ 1TB available RAID10 storage) with 15krpm disks and battery-backed write \n>cache.\n\nFor deployment, get a 2P or 4P 16 DIMM slot mainboard and stuff it with at least 16GB of RAM. A 2P 16 DIMM slot mainboard like the IWill DK88 (Tyan and a few others also have such boards in the works) is IMHO your best bet.\n\nIME, you will not need CPU for this application as much as you will need RAM first and HD IO bandwidth second.\nGet a 1C2P AMD64 system and don't worry about CPU until you are CPU bound. I highly doubt you will need more CPU than a 2C2P (4 cores total) based on say 4400+'s, 4600+'s, or 4800+'s can provide.\n\n\n>I know I haven't provided a whole lot of application-level detail here, but \n>does anyone have any general advice on tweaking postgresql to deal with a \n>very heavy load of concurrent and almost exclusively write-only \n>transactions? Increasing shared_buffers seems to always help, even out to \n>half of the dev box's ram (2G). A 100ms commit_delay seemed to help, but \n>tuning it (and _siblings) has been difficult. We're using 8.0 with the \n>default 8k blocksize, but are strongly considering both developing against \n>8.1 (seems it might handle the heavy concurrency better), and re-compiling \n>with 32k blocksize since our storage arrays will inevitably be using fairly \n>wide stripes. Any advice on any of this (other than drop the project while \n>you're still a little bit sane)?\n\nSince you are doing more writing than reading, and those writes are going to be relatively small, you may not get as much out of block sizes larger than the average DB write.\n\nMy initial instinct on this point is that you should keep the size of the \"chunk\" the same from NIC to CPU to HD, and make said chunk as large as possible.\n\n\nHope this helps,\nRon Peacetree\n\nPS. Yes, I'm available under the proper circumstances for a consulting gig.\n\n", "msg_date": "Tue, 13 Sep 2005 01:02:16 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance considerations for very heavy INSERT" } ]
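To make the partition-and-drop idea above concrete, here is a minimal sketch using table inheritance with CHECK constraints (the 8.1-style approach mentioned earlier in the thread). The table and column names are hypothetical; only the pattern is the point.

CREATE TABLE measurements (
    client_id  integer      NOT NULL,
    logdate    timestamptz  NOT NULL,
    reading    float8
);

-- One child table per expiry window; the CHECK constraint is what lets
-- constraint exclusion (8.1) skip children that cannot match a query.
CREATE TABLE measurements_2005_09_13 (
    CHECK (logdate >= '2005-09-13' AND logdate < '2005-09-14')
) INHERITS (measurements);

CREATE INDEX measurements_2005_09_13_idx
    ON measurements_2005_09_13 (client_id, logdate);

-- Inserts go directly into the current child table; SELECTs against
-- "measurements" automatically include all children.

-- Expiring old data becomes a metadata operation instead of DELETE + VACUUM:
DROP TABLE measurements_2005_09_12;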
[ { "msg_contents": "Hello.\n\nI would like to build a shared repository for Enterprise Architect\n(http://www.sparxsystems.com.au/ea.htm) using PostgreSQL. I have done it\nbefore with Linux and FreeBSD servers and everything was working out of the\nbox. The repository is pretty simple database with less than 100 tables (the\nschema is at\nhttp://www.sparxsystems.com.au/downloads/corp/Postgres_Basemodel.sql).\n\nThe problem is that at the moment I have only a Windows XP \"server\" at my\ndisposal. I have installed PostgreSQL 8.0.3 for Windows and set the\nrepository up. Unfortunately the performance is unacceptable: every\noperation with the model stored in the repository is by the order of\nmagnitude slower than on the FreeBSD server with half as good hardware.\n(BTW CPU load is nearly 0, network load is under 5%, the machine has 1GB\nRAM and the database size is 14MB.)\n\nI have tried to:\n- tweak the postgresql.conf - no apparent change\n- kill all unnecessary services - no apparent change\n- install MySQL on the same machine to compare - it is as fast as PostgreSQL\n on FreeBSD (= way faster than PG on the machine)\n\nAnyway I believe the problem is in the Win PostgreSQL server but I have no\nidea where to look and neither do I have much time to spend.\n(Also I really do not want to run MySQL ;-)\n\nAny suggestions are welcome.\n\nThanks\n\nDalibor Sramek\n\nP.S.\n\nMore information about EA PGSQL repositories:\nhttp://sparxsystems.com.au/resources/corporate/\nhttp://sparxsystems.com.au/EAUserGuide/index.html?connecttoapostgresqlreposi.htm\nhttp://sparxsystems.com.au/EAUserGuide/index.html?setupapostgresqlodbcdriver.htm\n\n-- \nDalibor Sramek http://www.insula.cz/dali \\ In the eyes of cats\n / [email protected] \\ all things\n/ >H blog http://www.transhumanismus.cz/blog.php \\ belong to cats.\n", "msg_date": "Tue, 13 Sep 2005 13:05:09 +0200", "msg_from": "Dalibor Sramek <[email protected]>", "msg_from_op": true, "msg_subject": "Low performance on Windows problem" } ]
[ { "msg_contents": "Good afternoon,\nWe have an application based on opencms6 / Tomcat5. The data management of the content is kept in a PostgreSQL 8.0.2 base.\nThe pages' display is extremely slow (it can take up to several minutes per refresh). Thus, it would not be applicable in production.\nMaterial configuration details of the server hosting PostgreSQL :\n- Windows Server 2003\n- Bi-processor Intel Xeon 3.0 GHz\n- RAM 4 Go\n- Disks on SAN - access through optical fibre\nI have noticed that several sequential accesses happen on the tables of the cms during the questioning.\nMy questions :\nWhat would be the best configuration parameters of the postgresql.conf file (or another file) to enhance the performances ?\nWhat could we do on Opencms' side to improve the data interrogation?\nIs the creation of the tables (index) through opencms optimum ?\nYour advices are most welcome.\nSylvie\n\n\n", "msg_date": "Tue, 13 Sep 2005 13:17:27 +0200", "msg_from": "\"Bouchard Sylvie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Opencms6 ans Postgres8" } ]
[ { "msg_contents": "> Hello.\n> \n> I would like to build a shared repository for Enterprise Architect\n> (http://www.sparxsystems.com.au/ea.htm) using PostgreSQL. I have done\nit\n> before with Linux and FreeBSD servers and everything was working out\nof\n> the\n> box. The repository is pretty simple database with less than 100\ntables\n> (the\n> schema is at\n> http://www.sparxsystems.com.au/downloads/corp/Postgres_Basemodel.sql).\n> \n> The problem is that at the moment I have only a Windows XP \"server\" at\nmy\n> disposal. I have installed PostgreSQL 8.0.3 for Windows and set the\n> repository up. Unfortunately the performance is unacceptable: every\n> operation with the model stored in the repository is by the order of\n> magnitude slower than on the FreeBSD server with half as good\nhardware.\n> (BTW CPU load is nearly 0, network load is under 5%, the machine has\n1GB\n> RAM and the database size is 14MB.)\n> \n> I have tried to:\n> - tweak the postgresql.conf - no apparent change\n> - kill all unnecessary services - no apparent change\n> - install MySQL on the same machine to compare - it is as fast as\n> PostgreSQL\n> on FreeBSD (= way faster than PG on the machine)\n \nCan you give specific examples of cases that are not performing like you\nexpect? If possible, give a few queries with explain analyze times and\nall that.\n\nAre you syncing your data? Win32 fsync is about 1/3 as fast as linux\nfsync, although this was changed to fsync_writethrough for clarification\npurposes.\n\nMerlin\n", "msg_date": "Tue, 13 Sep 2005 07:58:20 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low performance on Windows problem" }, { "msg_contents": "On Tue, Sep 13, 2005 at 07:58:20AM -0400, Merlin Moncure wrote:\n> Can you give specific examples of cases that are not performing like you\n> expect? If possible, give a few queries with explain analyze times and\n> all that.\n\nO.K. I have found one particular problem:\n\n2005-09-13 14:43:02 LOG: statement: declare SQL_CUR03949008 cursor for\nSELECT * FROM t_umlpattern\n2005-09-13 14:43:02 LOG: duration: 0.000 ms\n2005-09-13 14:43:02 LOG: statement: fetch 1000 in SQL_CUR03949008\n2005-09-13 14:43:22 LOG: duration: 20185.000 ms\n\nThis command is executed while a model is loaded from the repository.\n\nThe table definition is:\nCREATE TABLE t_umlpattern ( \n\tPatternID INTEGER DEFAULT nextval('\"patternid_seq\"'::text) NOT NULL\nPRIMARY KEY,\n\tPatternCategory VARCHAR(100),\n\tPatternName VARCHAR(150),\n\tStyle VARCHAR(250),\n\tNotes TEXT,\n\tPatternXML TEXT,\n\tVersion VARCHAR(50)\n);\n\nIt has just 23 rows but the PatternXML column is rather large. The table\ndump has over 900 kB.\n\nNow\nselect * from t_umlpattern limit 2\n\ntakes 1500+ msec on the Windows machine and 60 on a comparable Linux\nmachine. 
Both selects performed from remote PgAdmin.\nThe same select performed localy on the windows machine takes 60 msec.\n\nSo I guess the problem is in the transfer of the bigger amount of data from\nthe Windows server.\n\nI put the dump at http://www.insula.cz/dali/misc/table.zip\n\nCould anybody confirm the difference?\n\nThanks for any suggestions.\n\nDalibor Sramek\n\n-- \nDalibor Sramek http://www.insula.cz/dali \\ In the eyes of cats\n / [email protected] \\ all things\n/ >H blog http://www.transhumanismus.cz/blog.php \\ belong to cats.\n", "msg_date": "Tue, 13 Sep 2005 15:49:51 +0200", "msg_from": "Dalibor Sramek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low performance on Windows problem" }, { "msg_contents": "Dalibor Sramek <[email protected]> writes:\n> select * from t_umlpattern limit 2\n> takes 1500+ msec on the Windows machine and 60 on a comparable Linux\n> machine. Both selects performed from remote PgAdmin.\n> The same select performed localy on the windows machine takes 60 msec.\n\nSo it's a networking issue. I haven't paid real close attention to\nWindows problems, but I recall that we've heard a couple of reports\nof Windows performance problems that were resolved by removing various\nthird-party network filters and/or installing Windows service pack\nupdates. Check through the list archives ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Sep 2005 11:32:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low performance on Windows problem " }, { "msg_contents": "On Tue, Sep 13, 2005 at 11:32:02AM -0400, Tom Lane wrote:\n> So it's a networking issue. I haven't paid real close attention to\n> ...\n> updates. Check through the list archives ...\n\nThis one\nhttp://archives.postgresql.org/pgsql-performance/2005-06/msg00593.php\n\nseems to be very similar to my problem. Somebody suggested that setting\nTCP_NODELAY option to the TCP connection may help. Before I dive into the\nsource: could some win-pg guru tell me if the Windows server tries to set\nthis option? Is it possible to change it via configuration? Is there a way\nto find out if the TCP_NODELAY option was actually used for a connection?\n\nAnyway thank you all. I believe I am getting closer to a solution.\n\nDalibor Sramek \n\n-- \nDalibor Sramek http://www.insula.cz/dali \\ In the eyes of cats\n / [email protected] \\ all things\n/ >H blog http://www.transhumanismus.cz/blog.php \\ belong to cats.\n", "msg_date": "Tue, 13 Sep 2005 21:21:20 +0200", "msg_from": "Dalibor Sramek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low performance on Windows problem" } ]
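A quick way to reproduce the comparison above and separate server-side execution time from wire-transfer time is to run the following from psql on the remote client (\timing is psql's client-side stopwatch):

\timing
EXPLAIN ANALYZE SELECT * FROM t_umlpattern;          -- executes the query, returns only the plan
SELECT * FROM t_umlpattern;                          -- execution plus the full result over the network
SELECT substr(patternxml, 1, 10) FROM t_umlpattern;  -- same rows, tiny payload

If the EXPLAIN ANALYZE and the substr() variant are fast remotely while the full SELECT is slow, the time is going into transferring the large PatternXML values, which matches the observation above.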
[ { "msg_contents": "> On Tue, Sep 13, 2005 at 07:58:20AM -0400, Merlin Moncure wrote:\n> This command is executed while a model is loaded from the repository.\n> \n> The table definition is:\n> CREATE TABLE t_umlpattern (\n> \tPatternID INTEGER DEFAULT nextval('\"patternid_seq\"'::text) NOT\nNULL\n> PRIMARY KEY,\n> \tPatternCategory VARCHAR(100),\n> \tPatternName VARCHAR(150),\n> \tStyle VARCHAR(250),\n> \tNotes TEXT,\n> \tPatternXML TEXT,\n> \tVersion VARCHAR(50)\n> );\n> \n> It has just 23 rows but the PatternXML column is rather large. The\ntable\n> dump has over 900 kB.\n> \n> Now\n> select * from t_umlpattern limit 2\n> \n> takes 1500+ msec on the Windows machine and 60 on a comparable Linux\n> machine. Both selects performed from remote PgAdmin.\n> The same select performed localy on the windows machine takes 60 msec.\n> \n> So I guess the problem is in the transfer of the bigger amount of data\n> from\n> the Windows server.\n> \n> I put the dump at http://www.insula.cz/dali/misc/table.zip\n> \n> Could anybody confirm the difference?\n \nI loaded your dump and was able to select entire table in trivial time\nfrom both pgAdmin and psql shell. I am suspecting some type of tcp\nproblem here. Can you confirm slow times on unloaded server?\n\nMerlin\n", "msg_date": "Tue, 13 Sep 2005 10:20:05 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low performance on Windows problem" }, { "msg_contents": "On Tue, Sep 13, 2005 at 10:20:05AM -0400, Merlin Moncure wrote:\n> I loaded your dump and was able to select entire table in trivial time\n> from both pgAdmin and psql shell. I am suspecting some type of tcp\n> problem here. Can you confirm slow times on unloaded server?\n\nDid you run the select remotely on a Windows server?\n\nYes the server load is practically 0. Note the difference between local and\nremote execution of the command. I think you are right about the network\nproblem possibility. But it is bound to PostgreSQL. MySQL on the same\nmachine (and same database content) had no problem.\n\nSo are there any known issues with PostgreSQL on Windows sending data to\nremote hosts connected via ODBC?\nWhat should I do to find out more debug info?\n\nThanks\n\nDalibor Sramek\n\n-- \nDalibor Sramek http://www.insula.cz/dali \\ In the eyes of cats\n / [email protected] \\ all things\n/ >H blog http://www.transhumanismus.cz/blog.php \\ belong to cats.\n", "msg_date": "Tue, 13 Sep 2005 16:34:02 +0200", "msg_from": "Dalibor Sramek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low performance on Windows problem" } ]
[ { "msg_contents": "This is sounding suspiciously similar to behavior I've seen with other types of TCP database connections when the tcp-no-delay option is not on. Is it possible that the ODBC driver for Windows is not successfully setting this up?\n \n-Kevin\n \n \n>>> Dalibor Sramek <[email protected]> 09/13/05 9:34 AM >>>\nOn Tue, Sep 13, 2005 at 10:20:05AM -0400, Merlin Moncure wrote:\n> I loaded your dump and was able to select entire table in trivial time\n> from both pgAdmin and psql shell. I am suspecting some type of tcp\n> problem here. Can you confirm slow times on unloaded server?\n\nDid you run the select remotely on a Windows server?\n\nYes the server load is practically 0. Note the difference between local and\nremote execution of the command. I think you are right about the network\nproblem possibility. But it is bound to PostgreSQL. MySQL on the same\nmachine (and same database content) had no problem.\n\nSo are there any known issues with PostgreSQL on Windows sending data to\nremote hosts connected via ODBC?\nWhat should I do to find out more debug info?\n\n", "msg_date": "Tue, 13 Sep 2005 10:01:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low performance on Windows problem" } ]
[ { "msg_contents": "> Did you run the select remotely on a Windows server?\n\nyes.\n \n> Yes the server load is practically 0. Note the difference between\nlocal\n> and\n> remote execution of the command. I think you are right about the\nnetwork\n> problem possibility. But it is bound to PostgreSQL. MySQL on the same\n> machine (and same database content) had no problem.\n> \n> So are there any known issues with PostgreSQL on Windows sending data\nto\n> remote hosts connected via ODBC?\n> What should I do to find out more debug info?\n\n1. turn on all your logging and make sure we looking at the right place\n(planner stats, etc).\n2. run explain analyze and compare timings (which returns only explain\noutput).\n3. do a select max(patternxml) test.t_umlpattern and observe the time.\n4. do a select substr(patternxml, 1, 10) from test.t_umlpattern and\nobserve the time.\n5. do select array_accum(q::text) from generate_series(1,10000) q;\n\nif array_accum errors out, do:\n\nCREATE AGGREGATE public.array_accum (\n sfunc = array_append,\n basetype = anyelement,\n stype = anyarray,\n initcond = '{}'\n);\n\nand observe the time.\n\nMerlin\n", "msg_date": "Tue, 13 Sep 2005 11:05:00 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low performance on Windows problem" }, { "msg_contents": "On Tue, Sep 13, 2005 at 11:05:00AM -0400, Merlin Moncure wrote:\n> 5. do select array_accum(q::text) from generate_series(1,10000) q;\n\nI made the tests you suggested and the pattern is clear. The difference\nbetween local and remote command execution is caused by moving data over\nthe network. E.g. the command above takes 700 ms locally and 1500 ms\nremotely. Remote explain analyze takes exactly the 700 ms.\n\nI downloaded PCATTCP - http://www.pcausa.com/Utilities/pcattcp.htm\nand the measured throughput between the two machines is over 10000 kB/s.\nPCATTCP allows setting TCP_NODELAY but it had no effect on the transfer\nspeed. So the difference between local and remote execution should IMHO stay\nin the 10 ms range. Definitely not 800 ms. The 8.1 has the same problem.\n\nJust for the record: the server PC is Dell Precision 330 with 3Com 3C920\nintegrated network card. OS MS Windows Professional 2002 with service pack\n2. There is Symantec Antivirus installed - which I have (hopefully)\ncompletely disabled.\n\nThanks for any help\n\nDalibor Sramek\n\n-- \nDalibor Sramek http://www.insula.cz/dali \\ In the eyes of cats\n / [email protected] \\ all things\n/ >H blog http://www.transhumanismus.cz/blog.php \\ belong to cats.\n", "msg_date": "Wed, 14 Sep 2005 15:02:22 +0200", "msg_from": "Dalibor Sramek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low performance on Windows problem" } ]
[ { "msg_contents": "I have a database of hundreds of millions of web links (between sites)\nin Postgres. For each link, we record the url, the referer, and the\nmost recent date the link exists. I'm having some serious performance\nissues when it comes to writing new data into the database.\n\nOne machine is simply not going to be able to scale with the quantities\nof links we hope to store information about and we want to move to some\nkind of cluster. Because of the quantities of data, it seems to make\nsense to go for a cluster setup such that in a 4 machine cluster, each\nmachine has a quarter of the data (is this \"Share nothing,\" or, \"Share\neverything\"?). To that end, we figured a good first step was to\npartition the data on one machine into multiple tables defining the\nlogic which would find the appropriate table given a piece of data.\nThen, we assumed, adding the logic to find the appropriate machine and\ndatabase in our cluster would only be an incremental upgrade.\n\nWe implemented a partitioning scheme that segments the data by the\nreferring domain of each link. This is clearly not the most regular\n(in terms of even distribution) means of partitioning, but the data in\neach table is most likely related to each other, so queries would hit\nthe smallest number of tables. We currently have around 400,000 tables\nand I would estimate that the vast majority of these tables are\nrelatively small (less than 200 rows).\n\nOur queries use UNION ALL to combine data from multiple tables (when\nthat's applicable, never more than 1000 tables at once, usually much\nfewer). When writing to the database, the table for the referring\ndomain is locked while data is added and updated for the whole\nreferring domain at once. We only store one copy of each link, so when\nloading we have to do a SELECT (for the date) then INSERT or UPDATE\nwhere applicable for each link.\n\nAt this point, the primary performance bottleneck is in adding\nadditional data to the database. Our loader program (we load text\nfiles of link information) is currently getting about 40 rows a second,\nwhich is nowhere near the performance we need to be seeing. In theory,\nwe want to be able to re-write our entire archive of data within on a\n1-2 month cycle, so this is a very heavy write application (though\nwe're also constantly generating reports from the data, so its not\nwrite only).\n\nIs the total number of tables prohibitively affecting our write speed\nor is that an IO problem that can only be addressed by better drive\npartitioning (all data is on one drive, which I've already read is a\nproblem)? Is this approach to data partitioning one which makes any\nsense for performance, or should we move to a more normal distribution\nof links across fewer tables which house more rows each?\n\nThanks in advance for your advice.\n\n-matt\n\n", "msg_date": "13 Sep 2005 13:23:52 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "How many tables is too many tables?" }, { "msg_contents": "\n<[email protected]> wrote\n>\n> One machine is simply not going to be able to scale with the quantities\n> of links we hope to store information about and we want to move to some\n> kind of cluster. Because of the quantities of data, it seems to make\n> sense to go for a cluster setup such that in a 4 machine cluster, each\n> machine has a quarter of the data (is this \"Share nothing,\" or, \"Share\n> everything\"?). 
To that end, we figured a good first step was to\n> partition the data on one machine into multiple tables defining the\n> logic which would find the appropriate table given a piece of data.\n> Then, we assumed, adding the logic to find the appropriate machine and\n> database in our cluster would only be an incremental upgrade.\n>\n\nSo you set up 4 separate copies of PG in 4 machines? This is neither SN or \nSE.\n\nThe partition is good for performance if you distribute IOs and CPUs. In \nyour design, I believe IO is distributed (to 4 machines), but since you \nsliced data into too small pieces, you will get penality from other places. \nFor example, each table has to maintain separate indices (index becomes an \nuseless burden when table is too small), so there will be so many Btree root \n... System tables (pg_class/pg_attribute, etc) has to contains many rows to \nrecord your tables ... though we cached system table rows, but the memory \nspace is limited ...\n\nIn short, too many tables. To design your new partition method, jsut keep in \nmind that database access data in a page-wise IO.\n\nRegards,\nQingqing \n\n\n", "msg_date": "Wed, 14 Sep 2005 18:49:06 -0700", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many tables is too many tables?" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> We currently have around 400,000 tables\n> and I would estimate that the vast majority of these tables are\n> relatively small (less than 200 rows).\n\nStop right there, and go redesign your schema. This is unbelievably\nwrong :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Sep 2005 23:21:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many tables is too many tables? " }, { "msg_contents": "[email protected] wrote:\n> I have a database of hundreds of millions of web links (between sites)\n> in Postgres. For each link, we record the url, the referer, and the\n> most recent date the link exists. I'm having some serious performance\n> issues when it comes to writing new data into the database.\n>\n> One machine is simply not going to be able to scale with the quantities\n> of links we hope to store information about and we want to move to some\n> kind of cluster. Because of the quantities of data, it seems to make\n> sense to go for a cluster setup such that in a 4 machine cluster, each\n> machine has a quarter of the data (is this \"Share nothing,\" or, \"Share\n> everything\"?). To that end, we figured a good first step was to\n> partition the data on one machine into multiple tables defining the\n> logic which would find the appropriate table given a piece of data.\n> Then, we assumed, adding the logic to find the appropriate machine and\n> database in our cluster would only be an incremental upgrade.\n\nIn a database app, you generally don't win by going to a cluster,\nbecause you are almost always bound by your I/O. Which means that a\nsingle machine, just with more disks, is going to outperform a group of\nmachines.\n\nAs Tom mentioned, your schema is not very good. So lets discuss what a\nbetter schema would be, and also how you might be able to get decent\nperformance with a cluster.\n\nFirst, 200rows * 400,000 tables = 80M rows. Postgres can handle this in\na single table without too much difficulty. 
It all depends on the\nselectivity of your indexes, etc.\n\nI'm not sure how you are trying to normalize your data, but it sounds\nlike you want a url table, so that each entry in the main table can be a\nsimple integer rather than the full path, considering that you are likely\nto have a bunch of repeated information.\n\nThis makes your main table something like 2 integers, plus the\ninteresting stuff (from url, to url, data).\n\nIf you are finding you are running into I/O problems, you probably could\nuse this layout to move your indexes off onto their own spindles, and\nmaybe separate the main table from the url tables.\n\nWhat is your hardware? What are you trying to do that you don't think\nwill scale?\n\nIf you were SELECT bound, then maybe a cluster would help you, because\nyou could off-load the SELECTs onto slave machines, and leave your\nprimary machine available for INSERTs and replication.\n\n>\n...\n\n>\n> At this point, the primary performance bottleneck is in adding\n> additional data to the database. Our loader program (we load text\n> files of link information) is currently getting about 40 rows a second,\n> which is nowhere near the performance we need to be seeing. In theory,\n> we want to be able to re-write our entire archive of data within on a\n> 1-2 month cycle, so this is a very heavy write application (though\n> we're also constantly generating reports from the data, so its not\n> write only).\n\nAre you VACUUMing enough? If you are rewriting all of the data, postgres\nneeds you to clean up afterwards. It is pessimistic, and leaves old rows\nin their place.\n\n>\n> Is the total number of tables prohibitively affecting our write speed\n> or is that an IO problem that can only be addressed by better drive\n> partitioning (all data is on one drive, which I've already read is a\n> problem)? Is this approach to data partitioning one which makes any\n> sense for performance, or should we move to a more normal distribution\n> of links across fewer tables which house more rows each?\n\nIf all data is on a single drive, you are nowhere near needing a cluster\nto improve your database. What you need is a 14-drive RAID array. It's\nprobably cheaper than 4x powerful machines, and will provide you with\nmuch better performance. And put all of your tables back into one.\n\nJohn\n=:->\n\n>\n> Thanks in advance for your advice.\n>\n> -matt\n>", "msg_date": "Tue, 20 Sep 2005 00:22:20 -0400", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many tables is too many tables?" } ]
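A minimal sketch of the single-table layout John describes, with the 400,000 per-domain tables folded back into one link table and urls factored into a lookup table. The table and column names here are hypothetical illustrations, not taken from the thread:

    CREATE TABLE url (
        url_id  serial PRIMARY KEY,
        url     text NOT NULL UNIQUE
    );

    CREATE TABLE link (
        from_url_id integer NOT NULL REFERENCES url (url_id),  -- the referring page
        to_url_id   integer NOT NULL REFERENCES url (url_id),  -- the linked-to page
        last_seen   date    NOT NULL,                          -- most recent date the link was observed
        PRIMARY KEY (from_url_id, to_url_id)
    );

    -- the primary key already covers lookups by referring url; reverse lookups get their own index
    CREATE INDEX link_to_idx ON link (to_url_id);

With a layout along these lines, the loader's SELECT-then-INSERT-or-UPDATE touches one table through one index instead of one of 400,000 small tables, and the indexes can be moved to separate spindles as suggested above.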
[ { "msg_contents": "Hello All,\n\nWe are struggling with a specific query that is killing us. When doing \nexplain analyze on the entire query, we *seem* to be getting killed by the \nestimated number of rows on a case statement calculation.\n\nI've included a snippet from the explain analyze of the much larger query. The \nline in question, (cost=0.00..106.52 rows=1 width=16) (actual \ntime=0.048..67.728 rows=4725 loops=1) shows that it returned 4700 rows \ninstead of 1 which when coupled with a later join causes the statement to run \nover 3 minutes.[1] \n\nIt seems that it thinks that the scan on role_id is going to return 1 row, but \nin reality returns 4725 rows. The case statement causing the problem uses \ntodays date to see if a particular row is still active. Here is a test case \nshowing how far off the estimate is from the reality. [2]\n\nI'm not too surprised to see that the estimate is off because it is \ncalculated, but does anyone know either how to make the estimate more \naccurate so it picks a better plan, or is there a better way to do a \"status\" \nfunction based off of the current date so that it is more efficient? I've \nplayed with statistics on this table (racheting them up to 1000) with no \nchange in the plan.\n\nAny thoughts?\n\n-Chris\n\n[1] explain analyze snippet from larger query\n-> Nested Loop (cost=0.00..955.70 rows=1 width=204) (actual \ntime=3096.689..202704.649 rows=17 loops=1)\n Join Filter: (\"inner\".nameid = \"outer\".name_id)\n -> Nested Loop (cost=0.00..112.25 rows=1 width=33) (actual \ntime=0.271..90.760 rows=4725 loops=1)\n -> Index Scan using role_definition_description_idx on \nrole_definition rdf (cost=0.00..5.72 rows=1 width=21) (actual \ntime=0.215..0.218 rows=1 loops=1)\n Index Cond: (description = 'Participant'::text)\n Filter: (program_id = 120)\n -> Index Scan using roles_role_id_idx on roles rol \n(cost=0.00..106.52 rows=1 width=16) (actual time=0.048..67.728 rows=4725 \nloops=1)\n Index Cond: (rol.role_id = \"outer\".role_id)\n Filter: (CASE WHEN (role_id IS NULL) THEN NULL::text WHEN \n((\"begin\" IS NOT NULL) AND (\"end\" IS NOT NULL)) THEN CASE WHEN \n((('now'::text)::date >= \"begin\") AND (('now'::text)::date <= \"end\")) THEN \n'Active'::text ELSE 'Inactive'::text END WHEN (\"begin\" IS NOT NULL) THEN CASE \nWHEN (('now'::text)::date >= \"begin\") THEN 'Active'::text ELSE \n'Inactive'::text END WHEN (\"end\" IS NOT NULL) THEN CASE WHEN \n(('now'::text)::date <= \"end\") THEN 'Active'::text ELSE 'Inactive'::text END \nELSE 'Active'::text END = 'Active'::text)\n -> Nested Loop Left Join (cost=0.00..842.19 rows=97 width=175) (actual \ntime=6.820..42.863 rows=21 loops=4725)\n -> Index Scan using namemaster_programid_idx on namemaster dem \n(cost=0.00..470.12 rows=97 width=164) (actual time=6.811..42.654 rows=21 \nloops=4725)\n Index Cond: (programid = 120)\n Filter: ((name_float_lfm ~~* '%clark%'::text) OR \n(metaphone(name_float_lfm, 4) = 'KLRK'::text) OR (soundex(name_float_lfm) = \n'C462'::text))\n -> Index Scan using validanswerid_pk on validanswer ina \n(cost=0.00..3.82 rows=1 width=19) (actual time=0.003..0.004 rows=1 \nloops=99225)\n Index Cond: (ina.validanswerid = \"outer\".inactive)\n\n---------------------\n[2] A much simpler statement triggers the incorrect row counts here.\n\nexplain analyze\nselect * \nfrom roles rol\nwhere \n\n CASE\n WHEN rol.role_id IS NULL\n THEN NULL\n WHEN rol.\"begin\" IS NOT NULL and rol.\"end\" IS NOT NULL\n THEN\n CASE WHEN TIMESTAMP 'now'>=rol.\"begin\" and TIMESTAMP \n'now'<=rol.\"end\"\n 
THEN 'Active'\n ELSE 'Inactive' END\n WHEN rol.\"begin\" IS NOT NULL\n THEN\n CASE WHEN TIMESTAMP 'now'>=rol.\"begin\"\n THEN 'Active'\n ELSE 'Inactive' END\n WHEN rol.\"end\" IS NOT NULL\n THEN\n CASE WHEN TIMESTAMP 'now'<=rol.\"end\"\n THEN 'Active'\n ELSE 'Inactive' END\n ELSE 'Active'\n END = 'Active'\n\nSeq Scan on roles rol (cost=0.00..2368.54 rows=413 width=20) (actual \ntime=0.046..562.635 rows=79424 loops=1)\n Filter: (CASE WHEN (role_id IS NULL) THEN NULL::text WHEN ((\"begin\" IS NOT \nNULL) AND (\"end\" IS NOT NULL)) THEN CASE WHEN (('2005-09-13 \n16:43:18.721214'::timestamp without time zone >= \"begin\") AND ('2005-09-13 \n16:43:18.721214'::timestamp without time zone <= \"end\")) THEN 'Active'::text \nELSE 'Inactive'::text END WHEN (\"begin\" IS NOT NULL) THEN CASE WHEN \n('2005-09-13 16:43:18.721214'::timestamp without time zone >= \"begin\") THEN \n'Active'::text ELSE 'Inactive'::text END WHEN (\"end\" IS NOT NULL) THEN CASE \nWHEN ('2005-09-13 16:43:18.721214'::timestamp without time zone <= \"end\") \nTHEN 'Active'::text ELSE 'Inactive'::text END ELSE 'Active'::text END = \n'Active'::text)\n Total runtime: 884.456 ms\n(3 rows)\n-- \nChris Kratz\n", "msg_date": "Tue, 13 Sep 2005 17:08:26 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Help with performance on current status column" }, { "msg_contents": "Chris Kratz wrote:\n> Hello All,\n> \n> We are struggling with a specific query that is killing us. When doing \n> explain analyze on the entire query, we *seem* to be getting killed by the \n> estimated number of rows on a case statement calculation.\n> \n> I've included a snippet from the explain analyze of the much larger query. The \n> line in question, (cost=0.00..106.52 rows=1 width=16) (actual \n> time=0.048..67.728 rows=4725 loops=1) shows that it returned 4700 rows \n> instead of 1 which when coupled with a later join causes the statement to run \n> over 3 minutes.[1] \n> \n> It seems that it thinks that the scan on role_id is going to return 1 row, but \n> in reality returns 4725 rows. The case statement causing the problem uses \n> todays date to see if a particular row is still active. Here is a test case \n> showing how far off the estimate is from the reality. [2]\n\n> [2] A much simpler statement triggers the incorrect row counts here.\n> \n> explain analyze\n> select * \n> from roles rol\n> where \n> \n> CASE\n> WHEN rol.role_id IS NULL\n> THEN NULL\n> WHEN rol.\"begin\" IS NOT NULL and rol.\"end\" IS NOT NULL\n> THEN\n> CASE WHEN TIMESTAMP 'now'>=rol.\"begin\" and TIMESTAMP \n> 'now'<=rol.\"end\"\n> THEN 'Active'\n> ELSE 'Inactive' END\n> WHEN rol.\"begin\" IS NOT NULL\n> THEN\n> CASE WHEN TIMESTAMP 'now'>=rol.\"begin\"\n> THEN 'Active'\n> ELSE 'Inactive' END\n> WHEN rol.\"end\" IS NOT NULL\n> THEN\n> CASE WHEN TIMESTAMP 'now'<=rol.\"end\"\n> THEN 'Active'\n> ELSE 'Inactive' END\n> ELSE 'Active'\n> END = 'Active'\n\nAside #1 - I'm not entirely clear how role_id can be null since you \nseemed to be joining against it in the real query.\n\nAside #2 - You're probably better off with CURRENT_DATE since begin/end \nseem to be dates, rather than TIMESTAMP 'now' - and in any case you \nwanted \"timestamp with time zone\"\n\nOK, I think the root of your problem is your use of null to mean \"not \nended\" or \"not started\" (whatever 'not started' means). PostgreSQL has \nthe handy timestamptz value \"infinity\", but only for timestamps and not \nfor dates. 
I'd probably cheat a little and use an end date of \n'9999-12-31' or similar to simulate \"infinity\". Then your test is simply:\n\nWHERE\n ...\n AND (rol.begin <= CURRENT_DATE AND rol.end >= CURRENT_DATE)\n\nThat should estimate simply enough.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 14 Sep 2005 10:13:58 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with performance on current status column" }, { "msg_contents": "Hello Richard,\n\nThank you for the response. I did forget to mention that the columns have the \nfollowing meanings.\n\nOne, if a begin or end date is null, it means that the role is open ended in \nthat direction. For example, if there is no end date, that means currently \nthe role will go on forever beginning with the start date. Your idea of \nusing minimal and maximum dates is an interesting one and not one that I had \nconsidered. I will do some testing later today and see if that makes a \ndifference.\n\nThe other option I am toying with is simply having a status column which is \nupdated nightly via a cron job. This will probably be the most efficient and \ncan be indexed.\n\nI also forgot to say that we have seen this behavior on 2 boxes both on Linux \n(Red Hat ES & Mandrake) and both are running Postgres 8.0 (8.0.1 and 8.0.3). \nStrangely, after playing with statistics some yesterday (setting from 10 to \n100 to 1000 and back to 10 and analyzing), the 8.0.1 machine picks a \ndifferent plan and runs in a 101.104ms. The larger machine (dual proc Opt, 6 \ndisk raid 10, etc) with 8.0.3 still takes 3-5minutes to run the same query \nwith the same data set even after playing with statistics and repeated \nanalyze on the same table. It just seems odd. It seems it is picking the \nincorrect plan based off of an overly optimistic estimate of rows returned \nfrom the calculation.\n\nThe other frustration with this is that this sql is machine generated which is \nwhy we have some of the awkwardness in the calculation. That calc gets used \nfor a lot of different things including column definitions when people want \nto see the column on screen.\n\nThanks,\n\n-Chris\n\nOn Wednesday 14 September 2005 05:13 am, Richard Huxton wrote:\n> Chris Kratz wrote:\n> > Hello All,\n> >\n> > We are struggling with a specific query that is killing us. When doing\n> > explain analyze on the entire query, we *seem* to be getting killed by\n> > the estimated number of rows on a case statement calculation.\n> >\n> > I've included a snippet from the explain analyze of the much larger\n> > query. The line in question, (cost=0.00..106.52 rows=1 width=16) (actual\n> > time=0.048..67.728 rows=4725 loops=1) shows that it returned 4700 rows\n> > instead of 1 which when coupled with a later join causes the statement to\n> > run over 3 minutes.[1]\n> >\n> > It seems that it thinks that the scan on role_id is going to return 1\n> > row, but in reality returns 4725 rows. The case statement causing the\n> > problem uses todays date to see if a particular row is still active. 
\n> > Here is a test case showing how far off the estimate is from the reality.\n> > [2]\n> >\n> > [2] A much simpler statement triggers the incorrect row counts here.\n> >\n> > explain analyze\n> > select *\n> > from roles rol\n> > where\n> >\n> > CASE\n> > WHEN rol.role_id IS NULL\n> > THEN NULL\n> > WHEN rol.\"begin\" IS NOT NULL and rol.\"end\" IS NOT NULL\n> > THEN\n> > CASE WHEN TIMESTAMP 'now'>=rol.\"begin\" and TIMESTAMP\n> > 'now'<=rol.\"end\"\n> > THEN 'Active'\n> > ELSE 'Inactive' END\n> > WHEN rol.\"begin\" IS NOT NULL\n> > THEN\n> > CASE WHEN TIMESTAMP 'now'>=rol.\"begin\"\n> > THEN 'Active'\n> > ELSE 'Inactive' END\n> > WHEN rol.\"end\" IS NOT NULL\n> > THEN\n> > CASE WHEN TIMESTAMP 'now'<=rol.\"end\"\n> > THEN 'Active'\n> > ELSE 'Inactive' END\n> > ELSE 'Active'\n> > END = 'Active'\n>\n> Aside #1 - I'm not entirely clear how role_id can be null since you\n> seemed to be joining against it in the real query.\n>\n> Aside #2 - You're probably better off with CURRENT_DATE since begin/end\n> seem to be dates, rather than TIMESTAMP 'now' - and in any case you\n> wanted \"timestamp with time zone\"\n>\n> OK, I think the root of your problem is your use of null to mean \"not\n> ended\" or \"not started\" (whatever 'not started' means). PostgreSQL has\n> the handy timestamptz value \"infinity\", but only for timestamps and not\n> for dates. I'd probably cheat a little and use an end date of\n> '9999-12-31' or similar to simulate \"infinity\". Then your test is simply:\n>\n> WHERE\n> ...\n> AND (rol.begin <= CURRENT_DATE AND rol.end >= CURRENT_DATE)\n>\n> That should estimate simply enough.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n-- \nChris Kratz\n", "msg_date": "Wed, 14 Sep 2005 09:35:00 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with performance on current status column" } ]
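A sketch of the sentinel-date variant Richard suggests, so that the generated CASE collapses into a plain range predicate the planner can estimate. The roles, "begin" and "end" names come from the thread; the particular sentinel values, and the assumption that no real data needs NULL bounds, are added here:

    -- replace open-ended NULL bounds with far-past / far-future dates
    UPDATE roles SET "begin" = DATE '0001-01-01' WHERE "begin" IS NULL;
    UPDATE roles SET "end"   = DATE '9999-12-31' WHERE "end" IS NULL;
    ALTER TABLE roles ALTER COLUMN "begin" SET DEFAULT '0001-01-01';
    ALTER TABLE roles ALTER COLUMN "end"   SET DEFAULT '9999-12-31';

    -- the status test then becomes an ordinary, estimable predicate
    SELECT *
      FROM roles rol
     WHERE rol."begin" <= CURRENT_DATE
       AND rol."end"   >= CURRENT_DATE;

Chris's alternative of a cron-maintained status column would make the filter directly indexable; the sentinel-date form at least gives the planner a predicate it can estimate from ordinary column statistics.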
[ { "msg_contents": "Hello all!\n\nOn my master course, I'm studying the PostgreSQL's optimizer.\nI don't know if anyone in this list have been participated from the \nPostgreSQL's Optimizer development, but maybe someone can help me on this \nquestion.\nPostgreSQL generates all possible plans of executing the query (using an \nalmost exhaustive search), then gives a cost to each plan and finally the \ncheapest one is selected for execution.\nThere are other methods for query optimization, one of them is based on plan \ntransformations (for example, using A-Star algorithm) instead of plan \nconstructions used by PostgreSQL. \nDoes anyone know why this method was choosen? Are there any papers or \nresearches about it?\n\nThank's a lot,\nPryscila.\n\nHello all!\n\nOn my master course, I'm studying the PostgreSQL's optimizer.\nI don't know if anyone in this list have been participated from the\nPostgreSQL's Optimizer development, but maybe someone can help me on\nthis question.\nPostgreSQL generates all possible plans of executing the query (using\nan almost exhaustive search), then gives a cost to each plan and\nfinally the cheapest one is selected for execution.\nThere are other methods for query optimization, one of them is based on\nplan transformations (for example, using A-Star algorithm) instead of\nplan constructions used by PostgreSQL. \nDoes anyone know why this method was choosen? Are there any papers or researches about it?\n\nThank's a lot,\nPryscila.", "msg_date": "Tue, 13 Sep 2005 19:50:42 -0300", "msg_from": "Pryscila B Guttoski <[email protected]>", "msg_from_op": true, "msg_subject": "About method of PostgreSQL's Optimizer" }, { "msg_contents": "Pryscila B Guttoski wrote:\n> On my master course, I'm studying the PostgreSQL's optimizer.\n> I don't know if anyone in this list have been participated from the \n> PostgreSQL's Optimizer development, but maybe someone can help me on this \n> question.\n\npgsql-hackers might be more appropriate.\n\n> PostgreSQL generates all possible plans of executing the query (using an \n> almost exhaustive search), then gives a cost to each plan and finally the \n> cheapest one is selected for execution.\n> There are other methods for query optimization, one of them is based on plan \n> transformations (for example, using A-Star algorithm) instead of plan \n> constructions used by PostgreSQL. \n\nRight, the main query planner uses a nearly-exhaustive search. For \nqueries with many joins (when the cost of an exhaustive search would be \nprohibitive), \"GEQO\" is used, which uses a genetic algorithm to avoid an \nexhaustive search of the solution space.\n\n> Does anyone know why this method was choosen?\n\nAs far as I know, the main planner algorithm is fairly standard and is \nmainly different from System R's canonical algorithm in the details, \nlike whether non-left-deep plans are pruned.\n\n> Are there any papers or researches about it?\n\nThere are many papers on the System R algorithm and similar techniques, \nwhich should explain the basic motivations for the design. 
I'm not aware \nof any papers specifically on the PostgreSQL query optimizer, although \nthere have been a few presentations on it:\n\nhttp://neilc.treehou.se/optimizer.pdf\nhttp://conferences.oreillynet.com/presentations/os2003/lane_tom.pdf\n\n-Neil\n", "msg_date": "Tue, 13 Sep 2005 19:16:26 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Pryscila B Guttoski wrote:\n> Hello all!\n> \n> On my master course, I'm studying the PostgreSQL's optimizer.\n> I don't know if anyone in this list have been participated from the \n> PostgreSQL's Optimizer development, but maybe someone can help me on \n> this question.\n> PostgreSQL generates all possible plans of executing the query (using an \n> almost exhaustive search), then gives a cost to each plan and finally \n> the cheapest one is selected for execution.\n> There are other methods for query optimization, one of them is based on \n> plan transformations (for example, using A-Star algorithm) instead of \n> plan constructions used by PostgreSQL.\n> Does anyone know why this method was choosen? Are there any papers or \n> researches about it?\n\nYou may want to pass this question over to pgsql-hackers.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Thank's a lot,\n> Pryscila.\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Tue, 13 Sep 2005 16:23:02 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "I know you almost had read this, but I think it is a good paper to start with... \n\nhttp://lca2005.linux.org.au/Papers/Neil%20Conway/Inside%20the%20PostgreSQL%20Query%20Optimizer/pg_query_optimizer.pdf\n\nAnyway, do you know where could I get more info and theory about database optimizer plan? (in general) I like that topic, thanks a lot man!\n ----- Original Message ----- \n From: Pryscila B Guttoski \n To: [email protected] \n Sent: Tuesday, September 13, 2005 4:50 PM\n Subject: [PERFORM] About method of PostgreSQL's Optimizer\n\n\n Hello all!\n\n On my master course, I'm studying the PostgreSQL's optimizer.\n I don't know if anyone in this list have been participated from the PostgreSQL's Optimizer development, but maybe someone can help me on this question.\n PostgreSQL generates all possible plans of executing the query (using an almost exhaustive search), then gives a cost to each plan and finally the cheapest one is selected for execution.\n There are other methods for query optimization, one of them is based on plan transformations (for example, using A-Star algorithm) instead of plan constructions used by PostgreSQL. \n Does anyone know why this method was choosen? Are there any papers or researches about it?\n\n Thank's a lot,\n Pryscila.\n\n\n\n\n\n\n\nI know you almost had read this, but I think it is \na good paper to start with... \n \nhttp://lca2005.linux.org.au/Papers/Neil%20Conway/Inside%20the%20PostgreSQL%20Query%20Optimizer/pg_query_optimizer.pdf\n \nAnyway, do you know where could I get more info and \ntheory about database optimizer plan? 
(in general) I like that topic, thanks a \nlot man!\n\n----- Original Message ----- \nFrom:\nPryscila B Guttoski \nTo: [email protected]\n\nSent: Tuesday, September 13, 2005 4:50 \n PM\nSubject: [PERFORM] About method of \n PostgreSQL's Optimizer\nHello all!On my master course, I'm studying the \n PostgreSQL's optimizer.I don't know if anyone in this list have been \n participated from the PostgreSQL's Optimizer development, but maybe someone \n can help me on this question.PostgreSQL generates all possible plans of \n executing the query (using an almost exhaustive search), then gives a cost to \n each plan and finally the cheapest one is selected for execution.There are \n other methods for query optimization, one of them is based on plan \n transformations (for example, using A-Star algorithm) instead of plan \n constructions used by PostgreSQL. Does anyone know why this method was \n choosen? Are there any papers or researches about it?Thank's a \n lot,Pryscila.", "msg_date": "Tue, 13 Sep 2005 17:31:54 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Thank's guys!\nI'll send to pgsql-hackers...\n\n[]'s\nPryscila\n\nOn 9/13/05, Neil Conway <[email protected]> wrote:\n> \n> Pryscila B Guttoski wrote:\n> > On my master course, I'm studying the PostgreSQL's optimizer.\n> > I don't know if anyone in this list have been participated from the\n> > PostgreSQL's Optimizer development, but maybe someone can help me on \n> this\n> > question.\n> \n> pgsql-hackers might be more appropriate.\n> \n> > PostgreSQL generates all possible plans of executing the query (using an\n> > almost exhaustive search), then gives a cost to each plan and finally \n> the\n> > cheapest one is selected for execution.\n> > There are other methods for query optimization, one of them is based on \n> plan\n> > transformations (for example, using A-Star algorithm) instead of plan\n> > constructions used by PostgreSQL.\n> \n> Right, the main query planner uses a nearly-exhaustive search. For\n> queries with many joins (when the cost of an exhaustive search would be\n> prohibitive), \"GEQO\" is used, which uses a genetic algorithm to avoid an\n> exhaustive search of the solution space.\n> \n> > Does anyone know why this method was choosen?\n> \n> As far as I know, the main planner algorithm is fairly standard and is\n> mainly different from System R's canonical algorithm in the details,\n> like whether non-left-deep plans are pruned.\n> \n> > Are there any papers or researches about it?\n> \n> There are many papers on the System R algorithm and similar techniques,\n> which should explain the basic motivations for the design. 
I'm not aware\n> of any papers specifically on the PostgreSQL query optimizer, although\n> there have been a few presentations on it:\n> \n> http://neilc.treehou.se/optimizer.pdf\n> http://conferences.oreillynet.com/presentations/os2003/lane_tom.pdf\n> \n> -Neil\n>\n\nThank's guys!\nI'll send to pgsql-hackers...\n\n[]'s\nPryscilaOn 9/13/05, Neil Conway <[email protected]> wrote:\nPryscila B Guttoski wrote:> On my master course, I'm studying the PostgreSQL's optimizer.> I don't know if anyone in this list have been participated from the> PostgreSQL's Optimizer development, but maybe someone can help me on this\n> question.pgsql-hackers might be more appropriate.> PostgreSQL generates all possible plans of executing the query (using an> almost exhaustive search), then gives a cost to each plan and finally the\n> cheapest one is selected for execution.> There are other methods for query optimization, one of them is based on plan> transformations (for example, using A-Star algorithm) instead of plan> constructions used by PostgreSQL.\nRight, the main query planner uses a nearly-exhaustive search. Forqueries with many joins (when the cost of an exhaustive search would beprohibitive), \"GEQO\" is used, which uses a genetic algorithm to avoid an\nexhaustive search of the solution space.> Does anyone know why this method was choosen?As far as I know, the main planner algorithm is fairly standard and ismainly different from System R's canonical algorithm in the details,\nlike whether non-left-deep plans are pruned.> Are there any papers or researches about it?There are many papers on the System R algorithm and similar techniques,which should explain the basic motivations for the design. I'm not aware\nof any papers specifically on the PostgreSQL query optimizer, althoughthere have been a few presentations on it:http://neilc.treehou.se/optimizer.pdf\nhttp://conferences.oreillynet.com/presentations/os2003/lane_tom.pdf-Neil", "msg_date": "Tue, 13 Sep 2005 21:40:45 -0300", "msg_from": "Pryscila B Guttoski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Hello all!\n\nOn my master course, I'm studying the PostgreSQL's optimizer.\nI don't know if anyone in this list have been participated from the \nPostgreSQL's Optimizer development, but maybe someone can help me on this \nquestion.\nPostgreSQL generates all possible plans of executing the query (using an \nalmost exhaustive search), then gives a cost to each plan and finally the \ncheapest one is selected for execution.\nThere are other methods for query optimization, one of them is based on plan \ntransformations (for example, using A-Star algorithm) instead of plan \nconstructions used by PostgreSQL. \nDoes anyone know why this method was choosen? Are there any papers or \nresearches about it?\n\nThank's a lot,\nPryscila.\n\nHello all!\n\nOn my master course, I'm studying the PostgreSQL's optimizer.\nI don't know if anyone in this list have been participated from the\nPostgreSQL's Optimizer development, but maybe someone can help me on\nthis question.\nPostgreSQL generates all possible plans of executing the query (using\nan almost exhaustive search), then gives a cost to each plan and\nfinally the cheapest one is selected for execution.\nThere are other methods for query optimization, one of them is based on\nplan transformations (for example, using A-Star algorithm) instead of\nplan constructions used by PostgreSQL. \nDoes anyone know why this method was choosen? 
Are there any papers or researches about it?\n\nThank's a lot,\nPryscila.", "msg_date": "Tue, 13 Sep 2005 21:41:56 -0300", "msg_from": "Pryscila B Guttoski <[email protected]>", "msg_from_op": true, "msg_subject": "About method of PostgreSQL's Optimizer" }, { "msg_contents": "Cristian Prieto wrote:\n> Anyway, do you know where could I get more info and theory about\n> database optimizer plan? (in general)\n\nPersonally I like this survey paper on query optimization:\n\n http://citeseer.csail.mit.edu/371707.html\n\nThe paper also cites a lot of other papers that cover specific \ntechniques in more detail.\n\n-Neil\n", "msg_date": "Tue, 13 Sep 2005 21:34:50 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> Pryscila B Guttoski wrote:\n>> On my master course, I'm studying the PostgreSQL's optimizer.\n>> I don't know if anyone in this list have been participated from the \n>> PostgreSQL's Optimizer development, but maybe someone can help me on this \n>> question.\n\n> pgsql-hackers might be more appropriate.\n\nAFAIK the basic code goes back to Berkeley days. Elein might possibly\nremember something about it, but no one else that's on the project now\nwas involved then. The right place to look would be in the Berkeley\nproject's publications:\n\nhttp://db.cs.berkeley.edu//papers/\n\nI agree with Neil's point that it's a spiritual descendant of System R\nand there's plenty of material about that in the general database\nliterature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Sep 2005 22:26:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer " }, { "msg_contents": "Pryscila,\n\nWhile I haven't been too involved in the open source PostgreSQL optimizer, I \nhave done some work on it and optimizers in other database systems.\n\nBased on my work, it is my opinion that PostgreSQL, as-well-as other \ndatabases which use a cost-based optimizer, prefer a breadth-first algorithm \nbecause one cannot determine the \"real\" cost of each node at run-time \nwithout systematically examining all possibilities through calculation. This \nis the opposite of a rule-based optimizer which defines heuristics which can \nbe evaulated by a best-first algorithm such as A*.\n\nIn a cost-based optimizer, the system must calculate the \"cost\" of each path \nbased on data that changes during run-time including indexing, cardinality, \ntuple size, available memory, CPU usage, disk access times, etc. To a \ncost-based optimizer, every query is unique and therefore cannot follow a \nweighted path in the same fashion. 
I can certainly see A* being used in a \nrule-based optimizer but not in a real-time cost-based optimizer.\n\nPerhaps Tom, Bruce, et al have more specifics on PostgreSQL's \nimplementation.\n\n-Jonah\n\n\n\nOn 9/13/05, Pryscila B Guttoski <[email protected]> wrote:\n> \n> Hello all!\n> \n> On my master course, I'm studying the PostgreSQL's optimizer.\n> I don't know if anyone in this list have been participated from the \n> PostgreSQL's Optimizer development, but maybe someone can help me on this \n> question.\n> PostgreSQL generates all possible plans of executing the query (using an \n> almost exhaustive search), then gives a cost to each plan and finally the \n> cheapest one is selected for execution.\n> There are other methods for query optimization, one of them is based on \n> plan transformations (for example, using A-Star algorithm) instead of plan \n> constructions used by PostgreSQL. \n> Does anyone know why this method was choosen? Are there any papers or \n> researches about it?\n> \n> Thank's a lot,\n> Pryscila.\n> \n\n\n\n-- \nRespectfully,\n\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n\nPryscila,\n\nWhile I haven't been too involved in the open source PostgreSQL\noptimizer, I have done some work on it and optimizers in other database\nsystems.\n\nBased on my work, it is my opinion that PostgreSQL, as-well-as other\ndatabases which use a cost-based optimizer, prefer a breadth-first\nalgorithm because one cannot determine the \"real\" cost of each node at\nrun-time without systematically examining all possibilities through\ncalculation.  This is the opposite of a rule-based optimizer which\ndefines heuristics which can be evaulated by a best-first algorithm\nsuch as A*.\n\nIn a cost-based optimizer, the system must calculate the \"cost\" of each\npath based on data that changes during run-time including indexing,\ncardinality, tuple size, available memory, CPU usage, disk access\ntimes, etc.  To a cost-based optimizer, every query is unique and\ntherefore cannot follow a weighted path in the same fashion.  I\ncan certainly see A* being used in a rule-based optimizer but not in a\nreal-time cost-based optimizer.\n\nPerhaps Tom, Bruce, et al have more specifics on PostgreSQL's implementation.\n\n-Jonah\n\n\nOn 9/13/05, Pryscila B Guttoski <[email protected]> wrote:\nHello all!\n\nOn my master course, I'm studying the PostgreSQL's optimizer.\nI don't know if anyone in this list have been participated from the\nPostgreSQL's Optimizer development, but maybe someone can help me on\nthis question.\nPostgreSQL generates all possible plans of executing the query (using\nan almost exhaustive search), then gives a cost to each plan and\nfinally the cheapest one is selected for execution.\nThere are other methods for query optimization, one of them is based on\nplan transformations (for example, using A-Star algorithm) instead of\nplan constructions used by PostgreSQL. \nDoes anyone know why this method was choosen? Are there any papers or researches about it?\n\nThank's a lot,\nPryscila.\n\n-- Respectfully,Jonah H. Harris, Database Internals ArchitectEnterpriseDB Corporationhttp://www.enterprisedb.com/", "msg_date": "Tue, 13 Sep 2005 23:08:31 -0400", "msg_from": "\"Jonah H. 
Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Hi guys,\n\nI really appreciate your suggestions abouts papers, specially this one: \nhttp://citeseer.csail.mit.edu/371707.html\n\nI found some answers on it, like this:\n\nQ: Why the main query planner uses a nearly-exhaustive search?\nA: (Page 20 - 4.2.2) ... up to about ten joins, dynamic programming is \npreferred over the randomized algorithms because it is faster and it \nguarantees finding the optimal plan. For larger queries, the situation is \nreversed, and despite the probabilistic nature of the randomized algorithms, \ntheir efficiency makes them the algorithms of choice. \n\nAlso in this paper, there is something about the A* algorithm very \ninteresting for my research.\n\nI have one more question, sorry for doing it on this list, but only here I \nhad answers...\nDoes anybody hear anything about using PDDL (\"Planning Domain Definition \nLanguage\") for query optimization?\n\n[]'s,\nPryscila\n\nOn 9/13/05, Tom Lane <[email protected]> wrote:\n> \n> Neil Conway <[email protected]> writes:\n> > Pryscila B Guttoski wrote:\n> >> On my master course, I'm studying the PostgreSQL's optimizer.\n> >> I don't know if anyone in this list have been participated from the\n> >> PostgreSQL's Optimizer development, but maybe someone can help me on \n> this\n> >> question.\n> \n> > pgsql-hackers might be more appropriate.\n> \n> AFAIK the basic code goes back to Berkeley days. Elein might possibly\n> remember something about it, but no one else that's on the project now\n> was involved then. The right place to look would be in the Berkeley\n> project's publications:\n> \n> http://db.cs.berkeley.edu//papers/\n> \n> I agree with Neil's point that it's a spiritual descendant of System R\n> and there's plenty of material about that in the general database\n> literature.\n> \n> regards, tom lane\n>\n\nHi guys,\n\nI really appreciate your suggestions abouts papers, specially this one: http://citeseer.csail.mit.edu/371707.html\n\n\nI found some answers on it, like this:\n\nQ: Why the main query planner uses a nearly-exhaustive search?\nA: (Page 20 - 4.2.2) ... up to about ten joins, dynamic programming is\npreferred over the randomized algorithms because it is faster and it\nguarantees finding the optimal plan. For larger queries, the situation\nis reversed, and despite the probabilistic nature of the randomized\nalgorithms, their efficiency makes them the algorithms of choice. \n\nAlso in this paper, there is something about the A* algorithm very interesting for my research.\n\nI have one more question, sorry for doing it on this list, but only here I had answers...\nDoes anybody hear anything about using PDDL (\"Planning Domain Definition Language\") for query optimization?\n\n[]'s,\nPryscila\nOn 9/13/05, Tom Lane <[email protected]> wrote:\nNeil Conway <[email protected]> writes:> Pryscila B Guttoski wrote:>> On my master course, I'm studying the PostgreSQL's optimizer.>> I don't know if anyone in this list have been participated from the\n>> PostgreSQL's Optimizer development, but maybe someone can help me on this>> question.> pgsql-hackers might be more appropriate.AFAIK the basic code goes back to Berkeley days.  Elein might possibly\nremember something about it, but no one else that's on the project nowwas involved then.  
The right place to look would be in the Berkeleyproject's publications:\nhttp://db.cs.berkeley.edu//papers/I agree with Neil's point that it's a spiritual descendant of System Rand there's plenty of material about that in the general databaseliterature.                        regards,\ntom lane", "msg_date": "Wed, 14 Sep 2005 00:20:19 -0300", "msg_from": "Pryscila B Guttoski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Pryscila,\n\n> > There are other methods for query optimization, one of them is based on\n> > plan transformations (for example, using A-Star algorithm) instead of\n> > plan constructions used by PostgreSQL.\n\nWe do certainly need a specific optimization for large star-schema joins. I'm \nnot certain that A* is suitable for our cost model, though; I think we might \nneed to work up something more particular to Postgres.\n\n> > Does anyone know why this method was choosen? Are there any papers or\n> > researches about it?\n\nThere probably are on ACM but I've not read them. Ours is a pretty \nstraightforward implementation of a cost-based optimizer. You can always \nread the code ;-)\n\nMark Kirkwood put together this nice paper on planner statistics:\nhttp://www.powerpostgresql.com/PlanStats\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 14 Sep 2005 08:44:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Hi Jonah,\n\nThank's for your email, I really appreciate your opinions.\n\nIs it interesting to use both techniques? For example:\nGiven a query, an optimizer:\n1. Generates one of the possible execution plans.\n2. Does transformations on the original plan, based on rules and\nheuristics, resulting in new alternative plans.\n3. Evaluates the cost of generated plans by using statistics.\n4. Keeps plans that have lower cost than the original plan\n5. Repeat 2-4 over the new alternative plans.\nWhat do you think about it? Are there any restrictions that I haven't seen?\n\nAbout other method...\nHave you heard about using PDDL (\"Planning Domain Definition\nLanguage\") for query optimization?\n\n[]'s\nPryscila\n\n\nOn 9/14/05, Jonah H. Harris <[email protected]> wrote:\n> Pryscila,\n> \n> While I haven't been too involved in the open source PostgreSQL optimizer,\n> I have done some work on it and optimizers in other database systems.\n> \n> Based on my work, it is my opinion that PostgreSQL, as-well-as other\n> databases which use a cost-based optimizer, prefer a breadth-first algorithm\n> because one cannot determine the \"real\" cost of each node at run-time\n> without systematically examining all possibilities through calculation. \n> This is the opposite of a rule-based optimizer which defines heuristics\n> which can be evaulated by a best-first algorithm such as A*.\n> \n> In a cost-based optimizer, the system must calculate the \"cost\" of each\n> path based on data that changes during run-time including indexing,\n> cardinality, tuple size, available memory, CPU usage, disk access times,\n> etc. To a cost-based optimizer, every query is unique and therefore cannot\n> follow a weighted path in the same fashion. 
I can certainly see A* being\n> used in a rule-based optimizer but not in a real-time cost-based optimizer.\n> \n> Perhaps Tom, Bruce, et al have more specifics on PostgreSQL's\n> implementation.\n> \n> -Jonah\n> \n> \n> \n> \n> On 9/13/05, Pryscila B Guttoski <[email protected]> wrote:\n> > Hello all!\n> > \n> > On my master course, I'm studying the PostgreSQL's optimizer.\n> > I don't know if anyone in this list have been participated from the\n> PostgreSQL's Optimizer development, but maybe someone can help me on this\n> question.\n> > PostgreSQL generates all possible plans of executing the query (using an\n> almost exhaustive search), then gives a cost to each plan and finally the\n> cheapest one is selected for execution.\n> > There are other methods for query optimization, one of them is based on\n> plan transformations (for example, using A-Star algorithm) instead of plan\n> constructions used by PostgreSQL. \n> > Does anyone know why this method was choosen? Are there any papers or\n> researches about it?\n> > \n> > Thank's a lot,\n> > Pryscila.\n> > \n> \n> \n> \n> -- \n> Respectfully,\n> \n> Jonah H. Harris, Database Internals Architect\n> EnterpriseDB Corporation\n> http://www.enterprisedb.com/ \n>\n", "msg_date": "Wed, 14 Sep 2005 12:52:50 -0300", "msg_from": "Pryscila B Guttoski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Pryscila,\n\nStep 2 is basically where you find the difference between a cost-based \noptimizer (CBO) and a rule-based optimizer (RBO). A CBO is based on the \ncomputed execution cost of the query whereas an RBO uses more generalized \nheuristics.\n\nLet's get an example of what you're proposing and see if we can work it out \nfrom there.\n\nSay we have the following (this is a generalized CBO approach, not \nPostgreSQL specific):\n\nOracle's SCOTT.EMP table with cardinality of 1 million and an index on empno \nand ename. For storage purposes say that the empno index takes up 3600 \nblocks, the ename index takes up 7800 blocks, and the table itself takes up \n17000 blocks. We'll also say that we have a 256 megabyte buffer cache of \nwhich we have cached 50% of the empno index, 10% of the ename index, and 5% \nof the emp table data.\n\nA user then issues the following query:\n\nSELECT empno, ename FROM emp;\n\nA cost-based optimizer will see the following:\n1. See that the query is a full table scan (FTS) and calculate the cost of \nretrieving all 17000 blocks from disk.\n2. See that the query is a FTS and that it can retrieve all data from the \nindexes (11400 blocks) and join the data (which join algorithm?)\n\nWithout performing a breadth-first algorithm, how can one evaluate both \noptions in a way that would allow you to perform heuristic transformations \ndynamically? What transformation/heuristic/rule can you use? A CBO \nimplementation has to calculate the amount of I/O needed on each plan based \non several statistics such as what's *potentially* in the cache, what's the \naccess time for block I/O (including prefetching if the storage manager has \nit), and other factors. If you could name a database that uses a best-first \nalgorithm, such as A*, please send me the link to their docs; I'd be \ninterested in reading the implementation.\n\nAs for using both in the same optimizer, I could only see an algorithm such \nas a customized-A* being used to planning *some* large queries. 
The reason I \nsay this is because the cost calculation, which would still need to be \nbreadth-first, could calculate and cache the cost of most nodes thereby \nallowing you to possibly perform transformations at the tail of calculation.\n\nAs for references to query optimization possibly using best-first \nalgorithms, I think I saw several types of algorithms used in work from a \nuniversity query optimization engine. I can't remember if it was Cornell, \nStanford, or Wisconsin... I'll try and get you a link to their info.\n\n-Jonah\n\nOn 9/14/05, Pryscila B Guttoski <[email protected]> wrote:\n> \n> Hi Jonah,\n> \n> Thank's for your email, I really appreciate your opinions.\n> \n> Is it interesting to use both techniques? For example:\n> Given a query, an optimizer:\n> 1. Generates one of the possible execution plans.\n> 2. Does transformations on the original plan, based on rules and\n> heuristics, resulting in new alternative plans.\n> 3. Evaluates the cost of generated plans by using statistics.\n> 4. Keeps plans that have lower cost than the original plan\n> 5. Repeat 2-4 over the new alternative plans.\n> What do you think about it? Are there any restrictions that I haven't \n> seen?\n> \n> About other method...\n> Have you heard about using PDDL (\"Planning Domain Definition\n> Language\") for query optimization?\n> \n> []'s\n> Pryscila\n> \n> \n> On 9/14/05, Jonah H. Harris <[email protected]> wrote:\n> > Pryscila,\n> >\n> > While I haven't been too involved in the open source PostgreSQL \n> optimizer,\n> > I have done some work on it and optimizers in other database systems.\n> >\n> > Based on my work, it is my opinion that PostgreSQL, as-well-as other\n> > databases which use a cost-based optimizer, prefer a breadth-first \n> algorithm\n> > because one cannot determine the \"real\" cost of each node at run-time\n> > without systematically examining all possibilities through calculation.\n> > This is the opposite of a rule-based optimizer which defines heuristics\n> > which can be evaulated by a best-first algorithm such as A*.\n> >\n> > In a cost-based optimizer, the system must calculate the \"cost\" of each\n> > path based on data that changes during run-time including indexing,\n> > cardinality, tuple size, available memory, CPU usage, disk access times,\n> > etc. To a cost-based optimizer, every query is unique and therefore \n> cannot\n> > follow a weighted path in the same fashion. I can certainly see A* being\n> > used in a rule-based optimizer but not in a real-time cost-based \n> optimizer.\n> >\n> > Perhaps Tom, Bruce, et al have more specifics on PostgreSQL's\n> > implementation.\n> >\n> > -Jonah\n> >\n> >\n> >\n> >\n> > On 9/13/05, Pryscila B Guttoski <[email protected]> wrote:\n> > > Hello all!\n> > >\n> > > On my master course, I'm studying the PostgreSQL's optimizer.\n> > > I don't know if anyone in this list have been participated from the\n> > PostgreSQL's Optimizer development, but maybe someone can help me on \n> this\n> > question.\n> > > PostgreSQL generates all possible plans of executing the query (using \n> an\n> > almost exhaustive search), then gives a cost to each plan and finally \n> the\n> > cheapest one is selected for execution.\n> > > There are other methods for query optimization, one of them is based \n> on\n> > plan transformations (for example, using A-Star algorithm) instead of \n> plan\n> > constructions used by PostgreSQL.\n> > > Does anyone know why this method was choosen? 
Are there any papers or\n> > researches about it?\n> > >\n> > > Thank's a lot,\n> > > Pryscila.\n> > >\n> >\n> >\n> >\n> > --\n> > Respectfully,\n> >\n> > Jonah H. Harris, Database Internals Architect\n> > EnterpriseDB Corporation\n> > http://www.enterprisedb.com/\n> >\n> \n\n\n\n-- \nRespectfully,\n\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n\nPryscila,\n\nStep 2 is basically where you find the difference between a cost-based\noptimizer (CBO) and a rule-based optimizer (RBO).  A CBO is based\non the computed execution cost of the query whereas an RBO uses more\ngeneralized heuristics.\n\nLet's get an example of what you're proposing and see if we can work it out from there.\n\nSay we have the following (this is a generalized CBO approach, not PostgreSQL specific):\n\nOracle's SCOTT.EMP table with cardinality of 1 million and an index on\nempno and ename.  For storage purposes say that the empno index\ntakes up 3600 blocks, the ename index takes up 7800 blocks, and the\ntable itself takes up 17000 blocks.  We'll also say that we have a\n256 megabyte buffer cache of which we have cached 50% of the empno\nindex, 10% of the ename index, and 5% of the emp table data.\n\nA user then issues the following query:\n\nSELECT empno, ename FROM emp;\n\nA cost-based optimizer will see the following:\n1. See that the query is a full table scan (FTS) and calculate the cost of retrieving all 17000 blocks from disk.\n2. See that the query is a FTS and that it can retrieve all data from\nthe indexes (11400 blocks) and join the data (which join algorithm?)\n\nWithout performing a breadth-first algorithm, how can one evaluate both\noptions in a way that would allow you to perform heuristic\ntransformations dynamically?  What transformation/heuristic/rule\ncan you use?  A CBO implementation has to calculate the amount of\nI/O needed on each plan based on several statistics such as what's\n*potentially* in the cache, what's the access time for block I/O\n(including prefetching if the storage manager has it), and other\nfactors.  If you could name a database that uses a best-first\nalgorithm, such as A*, please send me the link to their docs; I'd be\ninterested in reading the implementation.\n\nAs for using both in the same optimizer, I could only see an algorithm\nsuch as a customized-A* being used to planning *some* large\nqueries.  The reason I say this is because the cost calculation,\nwhich would still need to be breadth-first, could calculate and cache\nthe cost of most nodes thereby allowing you to possibly perform\ntransformations at the tail of calculation.\n\nAs for references to query optimization possibly using best-first\nalgorithms, I think I saw several types of algorithms used in work from\na university query optimization engine.  I can't remember if it\nwas Cornell, Stanford, or Wisconsin... I'll try and get you a link to\ntheir info.\n\n-JonahOn 9/14/05, Pryscila B Guttoski <[email protected]> wrote:\nHi Jonah,Thank's for your email, I really appreciate your opinions.Is it interesting to use both techniques? For example:Given a query, an optimizer:1. Generates one of the possible execution plans.\n2. Does transformations on the original plan, based on rules andheuristics, resulting in new alternative plans.3. Evaluates the cost of generated plans by using statistics.4. Keeps plans that have lower cost than the original plan\n5. Repeat 2-4 over the new alternative plans.What do you think about it? 
Are there any restrictions that I haven't seen?About other method...Have you heard about using PDDL (\"Planning Domain Definition\nLanguage\") for query optimization?[]'sPryscilaOn 9/14/05, Jonah H. Harris <[email protected]> wrote:> Pryscila,>>  While I haven't been too involved in the open source PostgreSQL optimizer,\n> I have done some work on it and optimizers in other database systems.>>  Based on my work, it is my opinion that PostgreSQL, as-well-as other> databases which use a cost-based optimizer, prefer a breadth-first algorithm\n> because one cannot determine the \"real\" cost of each node at run-time> without systematically examining all possibilities through calculation.> This is the opposite of a rule-based optimizer which defines heuristics\n> which can be evaulated by a best-first algorithm such as A*.>>  In a cost-based optimizer, the system must calculate the \"cost\" of each> path based on data that changes during run-time including indexing,\n> cardinality, tuple size, available memory, CPU usage, disk access times,> etc.  To a cost-based optimizer, every query is unique and therefore cannot> follow a weighted path in the same fashion.  I can certainly see A* being\n> used in a rule-based optimizer but not in a real-time cost-based optimizer.>>  Perhaps Tom, Bruce, et al have more specifics on PostgreSQL's> implementation.>>  -Jonah>\n>>>> On 9/13/05, Pryscila B Guttoski <[email protected]> wrote:> > Hello all!> >> > On my master course, I'm studying the PostgreSQL's optimizer.\n> > I don't know if anyone in this list have been participated from the> PostgreSQL's Optimizer development, but maybe someone can help me on this> question.> > PostgreSQL generates all possible plans of executing the query (using an\n> almost exhaustive search), then gives a cost to each plan and finally the> cheapest one is selected for execution.> > There are other methods for query optimization, one of them is based on> plan transformations (for example, using A-Star algorithm) instead of plan\n> constructions used by PostgreSQL.> > Does anyone know why this method was choosen? Are there any papers or> researches about it?> >> > Thank's a lot,> > Pryscila.\n> >>>>> --> Respectfully,>> Jonah H. Harris, Database Internals Architect> EnterpriseDB Corporation> http://www.enterprisedb.com/\n>-- Respectfully,Jonah H. Harris, Database Internals ArchitectEnterpriseDB Corporationhttp://www.enterprisedb.com/", "msg_date": "Wed, 14 Sep 2005 12:36:37 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "\"Jonah H. Harris\" <[email protected]> writes:\n> As for using both in the same optimizer, I could only see an algorithm such \n> as a customized-A* being used to planning *some* large queries. The reason I \n> say this is because the cost calculation, which would still need to be \n> breadth-first, could calculate and cache the cost of most nodes thereby \n> allowing you to possibly perform transformations at the tail of calculation.\n\nWe do already have two different plan search algorithms: the strict\nbottom-up dynamic programming approach (System R style) and the GEQO\noptimizer, which we switch to when there are too many joins needed to\nallow exhaustive search. 
The GEQO code still depends on the normal\nplan cost estimation code, but it doesn't consider every possible plan.\n\nI've never been very happy with the GEQO code: the random component of\nthe algorithm means you get unpredictable (and sometimes awful) plans,\nand the particular variant that we are using is really designed to solve\ntraveling-salesman problems. It's at best a poor fit to the join\nplanning problem.\n\nSo it seems interesting to me to think about replacing GEQO with a\nrule-based optimizer for large join search spaces.\n\nThere are previous discussions about this in the archives, I believe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Sep 2005 13:54:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer " }, { "msg_contents": "Tom,\n\nI agree. There have been several occasions where GEQO has performed poorly \nfor me. I'll search the archives for the past discussions.\n\nsorry for sending this to you twice Tom... forgot to hit reply all :(\n\nOn 9/14/05, Tom Lane <[email protected]> wrote:\n> \n> \"Jonah H. Harris\" <[email protected]> writes:\n> > As for using both in the same optimizer, I could only see an algorithm \n> such\n> > as a customized-A* being used to planning *some* large queries. The \n> reason I\n> > say this is because the cost calculation, which would still need to be\n> > breadth-first, could calculate and cache the cost of most nodes thereby\n> > allowing you to possibly perform transformations at the tail of \n> calculation.\n> \n> We do already have two different plan search algorithms: the strict\n> bottom-up dynamic programming approach (System R style) and the GEQO\n> optimizer, which we switch to when there are too many joins needed to\n> allow exhaustive search. The GEQO code still depends on the normal\n> plan cost estimation code, but it doesn't consider every possible plan.\n> \n> I've never been very happy with the GEQO code: the random component of\n> the algorithm means you get unpredictable (and sometimes awful) plans,\n> and the particular variant that we are using is really designed to solve\n> traveling-salesman problems. It's at best a poor fit to the join\n> planning problem.\n> \n> So it seems interesting to me to think about replacing GEQO with a\n> rule-based optimizer for large join search spaces.\n> \n> There are previous discussions about this in the archives, I believe.\n> \n> regards, tom lane\n> \n\n\n\n-- \nRespectfully,\n\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n\nTom,\n\n\nI agree.  There have been several occasions where GEQO has\nperformed poorly for me.  I'll search the archives for the past\ndiscussions.\n\nsorry for sending this to you twice Tom... forgot to hit reply all :(On 9/14/05, Tom Lane <[email protected]\n> wrote:\"Jonah H. Harris\" <[email protected]\n> writes:> As for using both in the same optimizer, I could only see an algorithm such> as a customized-A* being used to planning *some* large queries. The reason I> say this is because the cost calculation, which would still need to be\n> breadth-first, could calculate and cache the cost of most nodes thereby> allowing you to possibly perform transformations at the tail of calculation.We do already have two different plan search algorithms: the strict\nbottom-up dynamic programming approach (System R style) and the GEQOoptimizer, which we switch to when there are too many joins needed toallow exhaustive search.  
The GEQO code still depends on the normalplan cost estimation code, but it doesn't consider every possible plan.\nI've never been very happy with the GEQO code: the random component ofthe algorithm means you get unpredictable (and sometimes awful) plans,and the particular variant that we are using is really designed to solve\ntraveling-salesman problems.  It's at best a poor fit to the joinplanning problem.So it seems interesting to me to think about replacing GEQO with arule-based optimizer for large join search spaces.\nThere are previous discussions about this in the archives, I believe.                        regards,\ntom lane-- Respectfully,Jonah H. Harris, Database Internals ArchitectEnterpriseDB Corporationhttp://www.enterprisedb.com/", "msg_date": "Thu, 15 Sep 2005 00:39:55 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" }, { "msg_contents": "Pryscila,\n\nFor research reference, you may want to look at the work done on the \nColumbia Query Optimization Framework. As I recall, I think it (or its \npredecessors) had both cost and rule-based optimization. If you need the \ncode to it, I can dig it up on one of my old systems.\n\nAlbeit dated, another good reference for optimizer implementation is the \ncascades query optimization framework.\n\n\nOn 9/15/05, Jonah H. Harris <[email protected]> wrote:\n> \n> Tom,\n> \n> I agree. There have been several occasions where GEQO has performed poorly \n> for me. I'll search the archives for the past discussions.\n> \n> sorry for sending this to you twice Tom... forgot to hit reply all :(\n> \n> On 9/14/05, Tom Lane <[email protected] > wrote:\n> > \n> > \"Jonah H. Harris\" <[email protected] > writes:\n> > > As for using both in the same optimizer, I could only see an algorithm \n> > such\n> > > as a customized-A* being used to planning *some* large queries. The \n> > reason I\n> > > say this is because the cost calculation, which would still need to be \n> > \n> > > breadth-first, could calculate and cache the cost of most nodes \n> > thereby\n> > > allowing you to possibly perform transformations at the tail of \n> > calculation.\n> > \n> > We do already have two different plan search algorithms: the strict \n> > bottom-up dynamic programming approach (System R style) and the GEQO\n> > optimizer, which we switch to when there are too many joins needed to\n> > allow exhaustive search. The GEQO code still depends on the normal\n> > plan cost estimation code, but it doesn't consider every possible plan. \n> > \n> > I've never been very happy with the GEQO code: the random component of\n> > the algorithm means you get unpredictable (and sometimes awful) plans,\n> > and the particular variant that we are using is really designed to solve \n> > \n> > traveling-salesman problems. It's at best a poor fit to the join\n> > planning problem.\n> > \n> > So it seems interesting to me to think about replacing GEQO with a\n> > rule-based optimizer for large join search spaces.\n> > \n> > There are previous discussions about this in the archives, I believe.\n> > \n> > regards, tom lane\n> > \n> \n> \n> \n> -- \n> Respectfully,\n> \n> Jonah H. Harris, Database Internals Architect\n> EnterpriseDB Corporation\n> http://www.enterprisedb.com/ \n> \n\n\n\n-- \nRespectfully,\n\nJonah H. 
Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n\nPryscila,\n\nFor research reference, you may want to look at the work done on the\nColumbia Query Optimization Framework.  As I recall, I think it\n(or its predecessors) had both cost and rule-based optimization. \nIf you need the code to it, I can dig it up on one of my old systems.\n\nAlbeit dated, another good reference for optimizer implementation is the cascades query optimization framework.\n\nOn 9/15/05, Jonah H. Harris <[email protected]> wrote:\nTom,\n\n\nI agree.  There have been several occasions where GEQO has\nperformed poorly for me.  I'll search the archives for the past\ndiscussions.\n\nsorry for sending this to you twice Tom... forgot to hit reply all :(On 9/14/05, Tom Lane <\[email protected]\n> wrote:\"Jonah H. Harris\" <\[email protected]\n> writes:> As for using both in the same optimizer, I could only see an algorithm such> as a customized-A* being used to planning *some* large queries. The reason I> say this is because the cost calculation, which would still need to be\n> breadth-first, could calculate and cache the cost of most nodes thereby> allowing you to possibly perform transformations at the tail of calculation.We do already have two different plan search algorithms: the strict\nbottom-up dynamic programming approach (System R style) and the GEQOoptimizer, which we switch to when there are too many joins needed toallow exhaustive search.  The GEQO code still depends on the normal\nplan cost estimation code, but it doesn't consider every possible plan.\nI've never been very happy with the GEQO code: the random component ofthe algorithm means you get unpredictable (and sometimes awful) plans,and the particular variant that we are using is really designed to solve\ntraveling-salesman problems.  It's at best a poor fit to the joinplanning problem.So it seems interesting to me to think about replacing GEQO with arule-based optimizer for large join search spaces.\nThere are previous discussions about this in the archives, I believe.                        regards,\ntom lane-- Respectfully,Jonah H. Harris, Database Internals ArchitectEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n\n-- Respectfully,Jonah H. Harris, Database Internals ArchitectEnterpriseDB Corporationhttp://www.enterprisedb.com/", "msg_date": "Thu, 15 Sep 2005 01:08:47 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About method of PostgreSQL's Optimizer" } ]
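A small practical footnote to the exchange above: the point at which the planner abandons the exhaustive System R style search and falls back to GEQO is governed by a configuration parameter, so plan instability on big joins can often be reproduced or avoided straight from a psql session. A minimal sketch (the threshold value shown is only illustrative, not a recommendation):

    -- See whether GEQO is enabled and how many FROM items trigger it.
    SHOW geqo;
    SHOW geqo_threshold;

    -- Keep, say, a 13-way join in the exhaustive search for this session only
    -- (planning gets slower, but the chosen plan becomes deterministic).
    SET geqo_threshold = 14;
    -- ...re-run EXPLAIN on the problem query here and compare the plans...

    -- Or turn GEQO off for the session to see what the exhaustive planner picks.
    SET geqo TO off;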
[ { "msg_contents": "Hi, I've reading around there about some way to help pgsql with the data caching using memcached inside the sps in the database (not in the application), does anybody have success with that?\n\nThanks a lot!\n\n\n\n\n\n\nHi, I've reading around there about some way to \nhelp pgsql with the data caching using memcached inside the sps in the database \n(not in the application), does anybody have success with that?\n \nThanks a lot!", "msg_date": "Tue, 13 Sep 2005 17:34:36 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": true, "msg_subject": "Any other idea for better performance?" } ]
[ { "msg_contents": "(1) Latency and throughput don't necessarily correlate well. When blasting\nquantities of data to test throughput, TCP_NODELAY might not matter\nmuch -- a full buffer will be sent without a delay anyway. What do you get\non a ping while running the throughput test?\n \n(2) Besides the TCP_NODELAY issue, another issue which has caused\nsimilar problems is a mismatch between half duplex and full duplex in the\nconfiguration of the switch and the server. Sometimes auto-negotiate\ndoesn't work as advertised; you might want to try setting the configuration\nexplicitly, if you aren't already doing so.\n \n-Kevin\n \n \n>>> Dalibor Sramek <[email protected]> 09/14/05 8:02 AM >>>\nOn Tue, Sep 13, 2005 at 11:05:00AM -0400, Merlin Moncure wrote:\n> 5. do select array_accum(q::text) from generate_series(1,10000) q;\n\nI made the tests you suggested and the pattern is clear. The difference\nbetween local and remote command execution is caused by moving data over\nthe network. E.g. the command above takes 700 ms locally and 1500 ms\nremotely. Remote explain analyze takes exactly the 700 ms.\n\nI downloaded PCATTCP - http://www.pcausa.com/Utilities/pcattcp.htm\nand the measured throughput between the two machines is over 10000 kB/s.\nPCATTCP allows setting TCP_NODELAY but it had no effect on the transfer\nspeed. So the difference between local and remote execution should IMHO stay\nin the 10 ms range. Definitely not 800 ms. The 8.1 has the same problem.\n\nJust for the record: the server PC is Dell Precision 330 with 3Com 3C920\nintegrated network card. OS MS Windows Professional 2002 with service pack\n2. There is Symantec Antivirus installed - which I have (hopefully)\ncompletely disabled.\n\nThanks for any help\n\nDalibor Sramek\n\n", "msg_date": "Wed, 14 Sep 2005 09:23:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low performance on Windows problem" } ]
[ { "msg_contents": "> in the 10 ms range. Definitely not 800 ms. The 8.1 has the same\nproblem.\n> \n> Just for the record: the server PC is Dell Precision 330 with 3Com\n3C920\n> integrated network card. OS MS Windows Professional 2002 with service\npack\n> 2. There is Symantec Antivirus installed - which I have (hopefully)\n> completely disabled.\n\nTry throwing in another network card and see if it helps. Next step is\nto try twinking tcp settings\n(http://support.microsoft.com/default.aspx?scid=kb;en-us;314053) and see\nif that helps. Beyond that, try playing the update driver game. If you\nare still having problems, try receiving bigger and bigger results to\nsee where problem occurs. 1-2k range suggests mtu problem, 4-8k range\nsuggests tcp receive window problem.\n\nBeyond that, I'm stumped, uh, buy Opteron? :)\n\nMerlin\n\n", "msg_date": "Wed, 14 Sep 2005 13:10:32 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low performance on Windows problem" } ]
[ { "msg_contents": "Folks,\n\tI'm getting a new server for our database, and I have a quick question\nabout RAID controllers with a battery backed cache. I understand that the\ncache will allow the cache to be written out if the power fails to the box,\nwhich allows it to report a write as committed safely when it's not actually\ncommitted.\n\tMy question is, if the power goes off, and the drives stop, how does the\nbattery backed cache save things out to the dead drives? Is there another\ncomponent that is implied that will provide power to the drives that I\nshould be looking into as well?\nThanks,\nPeter Darley\n\n", "msg_date": "Wed, 14 Sep 2005 11:25:38 -0700", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Battery Backed Cache for RAID" }, { "msg_contents": "On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:\n> \tI'm getting a new server for our database, and I have a quick question\n> about RAID controllers with a battery backed cache. I understand that the\n> cache will allow the cache to be written out if the power fails to the box,\n> which allows it to report a write as committed safely when it's not actually\n> committed.\n\nActually the cache will just hold its contents while the power is out.\nWhen the power is restored, the RAID controller will complete the writes\nto disk. If the battery does not last through the outage, the data is\nlost.\n\n> \tMy question is, if the power goes off, and the drives stop, how does the\n> battery backed cache save things out to the dead drives? Is there another\n> component that is implied that will provide power to the drives that I\n> should be looking into as well?\n\nA UPS would allow you to do an orderly shutdown and write contents to\ndisk during a power failure. However a UPS can be an extra point of\nfailure.\n\n-jwb\n", "msg_date": "Wed, 14 Sep 2005 11:28:43 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Battery Backed Cache for RAID" }, { "msg_contents": "On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:\n> On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:\n> > \tI'm getting a new server for our database, and I have a quick question\n> > about RAID controllers with a battery backed cache. I understand that the\n> > cache will allow the cache to be written out if the power fails to the box,\n> > which allows it to report a write as committed safely when it's not actually\n> > committed.\n> \n> Actually the cache will just hold its contents while the power is out.\n> When the power is restored, the RAID controller will complete the writes\n> to disk. If the battery does not last through the outage, the data is\n> lost.\n\nJust curious: how long are the batteries supposed to last?\n\n-- \nAlvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com\nHi! I'm a .signature virus!\ncp me into your .signature file to help me spread!\n", "msg_date": "Wed, 14 Sep 2005 16:03:24 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Battery Backed Cache for RAID" }, { "msg_contents": "Alvaro Herrera wrote:\n> On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:\n>\n>>On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:\n>>\n>>>\tI'm getting a new server for our database, and I have a quick question\n>>>about RAID controllers with a battery backed cache. 
I understand that the\n>>>cache will allow the cache to be written out if the power fails to the box,\n>>>which allows it to report a write as committed safely when it's not actually\n>>>committed.\n>>\n>>Actually the cache will just hold its contents while the power is out.\n>>When the power is restored, the RAID controller will complete the writes\n>>to disk. If the battery does not last through the outage, the data is\n>>lost.\n>\n>\n> Just curious: how long are the batteries supposed to last?\n>\n\nThe recent *cheap* version of a ramdisk had battery backup for 16 hours.\n(Very expensive ramdisks actually have enough battery power to power a\nsmall hard-drive to dump the contents into).\n\nI'm guessing for a RAID controller, the time would be in the max 1 day\nrange.\n\nJohn\n=:->", "msg_date": "Wed, 14 Sep 2005 16:40:25 -0400", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Battery Backed Cache for RAID" }, { "msg_contents": "On 14-9-2005 22:03, Alvaro Herrera wrote:\n> On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:\n> \n>>On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:\n>>\n>>Actually the cache will just hold its contents while the power is out.\n>>When the power is restored, the RAID controller will complete the writes\n>>to disk. If the battery does not last through the outage, the data is\n>>lost.\n> \n> \n> Just curious: how long are the batteries supposed to last?\n\nFor the LSI-Logic MegaRaid 320-2e its about 72 hours for the standard \n128MB version. Their SATA2-solution offers 32 and 72 hour-options. So I \nassume its \"in the order of days\" for most RAID controllers.\n\nBest regards,\n\nArjen van der Meijden\n", "msg_date": "Wed, 14 Sep 2005 22:47:35 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Battery Backed Cache for RAID" }, { "msg_contents": "Bear in mind you will lose data if the raid controller itself fails (or the\ncache memory module). Many solutions have mirrored cache for this reason. But\nthat's more $$, depending on the risks you want to take.\n\nQuoting Arjen van der Meijden <[email protected]>:\n\n> On 14-9-2005 22:03, Alvaro Herrera wrote:\n> > On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:\n> > \n> >>On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:\n> >>\n> >>Actually the cache will just hold its contents while the power is out.\n> >>When the power is restored, the RAID controller will complete the writes\n> >>to disk. If the battery does not last through the outage, the data is\n> >>lost.\n> > \n> > \n> > Just curious: how long are the batteries supposed to last?\n> \n> For the LSI-Logic MegaRaid 320-2e its about 72 hours for the standard \n> 128MB version. Their SATA2-solution offers 32 and 72 hour-options. So I \n> assume its \"in the order of days\" for most RAID controllers.\n> \n> Best regards,\n> \n> Arjen van der Meijden\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n", "msg_date": "Wed, 14 Sep 2005 14:58:28 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Battery Backed Cache for RAID" } ]
[ { "msg_contents": "> On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:\n> > On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:\n> > > \tI'm getting a new server for our database, and I have a quick\n> question\n> > > about RAID controllers with a battery backed cache. I understand\nthat\n> the\n> > > cache will allow the cache to be written out if the power fails to\nthe\n> box,\n> > > which allows it to report a write as committed safely when it's\nnot\n> actually\n> > > committed.\n> >\n> > Actually the cache will just hold its contents while the power is\nout.\n> > When the power is restored, the RAID controller will complete the\nwrites\n> > to disk. If the battery does not last through the outage, the data\nis\n> > lost.\n> \n> Just curious: how long are the batteries supposed to last?\n \nFor the length of time it will take for you to get fired for not getting\nthe server running plus one hour :).\n\nMerlin\n", "msg_date": "Wed, 14 Sep 2005 16:17:06 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Battery Backed Cache for RAID" } ]
[ { "msg_contents": "\nJohn A Meinel wrote:\n>The recent *cheap* version of a ramdisk had battery backup for 16 hours.\n>(Very expensive ramdisks actually have enough battery power to power a\n>small hard-drive to dump the contents into).\n\n>I'm guessing for a RAID controller, the time would be in the max 1 day\n>range.\n\ni think some will go a bit longer. i have seen an IBM ServeRaid (rebranded\nmylex in this particular case) keep its memory after being pulled for a\nremarkably long period of time.\n\nno guarantees, though, so i'm not actually going to say how long so that nobody\ngets unreasonable expectations.\n\nrichard\n", "msg_date": "Wed, 14 Sep 2005 16:46:47 -0400", "msg_from": "\"Welty, Richard\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Battery Backed Cache for RAID" } ]
[ { "msg_contents": "Well, pg being a multi-process architecture, on a 64 bit system you get\nthe advantages of extra memory for cache all day long. I don't thing a\n2gb mem limit/backend is not a wall people are hitting very often even\non high end systems. Performance wise, 32 vs. 64 bit is a tug of war\nbetween extra registers & faster 64 bit ops on one side vs. smaller\npointers and better memory footprint on the other.\n \nNote I am assuming Opteron here where the 32/64 performance is basically\nthe same.\n \nMerlin\n \n\t \n\tThat being said, from what I'm hearing it may be moot because\nit's probably best to run postgres as 32 bit on a 64 bit operating\nsystem performance wise.\n\t \n\t \nOh? why's that then?\n \n/D\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWell, pg being a multi-process\narchitecture, on a 64 bit system you get the advantages of extra memory for\ncache all day long.  I don’t\nthing a 2gb mem\nlimit/backend is not a wall people are hitting very often even on high end\nsystems.  Performance wise, 32 vs.\n64 bit is a tug of war between extra registers & faster 64 bit ops on one\nside vs. smaller pointers and better memory footprint on the other.\n \nNote I am assuming Opteron here where the\n32/64 performance is basically the same.\n \nMerlin\n \n\n\n \nThat being said, from what I’m\nhearing it may be moot because it’s probably best to run postgres as 32\nbit on a 64 bit operating system performance wise.\n \n \n\nOh? why's that then?\n \n/D", "msg_date": "Thu, 15 Sep 2005 10:01:12 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ODBC] ODBC Driver on Windows 64 bit" } ]
[ { "msg_contents": "Hi Everyone\n\nThe machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k\ndisks.\n\n2 disks are in RAID1 and hold the OS, SWAP & pg_xlog\n4 disks are in RAID10 and hold the Cluster itself.\n\nthe DB will have two major tables 1 with 10 million rows and one with\n100 million rows.\nAll the activities against this tables will be SELECT.\n\nCurrently the strip size is 8k. I read in many place this is a poor\nsetting.\n\nAm i right ?\n\n", "msg_date": "16 Sep 2005 04:51:43 -0700", "msg_from": "\"bm\\\\mbn\" <[email protected]>", "msg_from_op": true, "msg_subject": "RAID Stripe size" }, { "msg_contents": "bm\\mbn wrote:\n> Hi Everyone\n>\n> The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k\n> disks.\n>\n> 2 disks are in RAID1 and hold the OS, SWAP & pg_xlog\n> 4 disks are in RAID10 and hold the Cluster itself.\n>\n> the DB will have two major tables 1 with 10 million rows and one with\n> 100 million rows.\n> All the activities against this tables will be SELECT.\n\nWhat type of SELECTs will you be doing? Mostly sequential reads of a\nbunch of data, or indexed lookups of random pieces?\n\n>\n> Currently the strip size is 8k. I read in many place this is a poor\n> setting.\n\n>From what I've heard of RAID, if you are doing large sequential\ntransfers, larger stripe sizes (128k, 256k) generally perform better.\nFor postgres, though, when you are writing, having the stripe size be\naround the same size as your page size (8k) could be advantageous, as\nwhen postgres reads a page, it only reads a single stripe. So if it were\nreading a series of pages, each one would come from a different disk.\n\nI may be wrong about that, though.\n\nJohn\n=:->\n\n>\n> Am i right ?", "msg_date": "Tue, 20 Sep 2005 00:24:59 -0400", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "Hi\n\n\nJohn A Meinel wrote:\n\n>bm\\mbn wrote:\n> \n>\n>>Hi Everyone\n>>\n>>The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k\n>>disks.\n>>\n>>2 disks are in RAID1 and hold the OS, SWAP & pg_xlog\n>>4 disks are in RAID10 and hold the Cluster itself.\n>>\n>>the DB will have two major tables 1 with 10 million rows and one with\n>>100 million rows.\n>>All the activities against this tables will be SELECT.\n>> \n>>\n>\n>What type of SELECTs will you be doing? Mostly sequential reads of a\n>bunch of data, or indexed lookups of random pieces?\n> \n>\nAll of them. some Rtree some btree some without using indexes.\n\n> \n>\n>>Currently the strip size is 8k. I read in many place this is a poor\n>>setting.\n>> \n>>\n>\n>>From what I've heard of RAID, if you are doing large sequential\n>transfers, larger stripe sizes (128k, 256k) generally perform better.\n>For postgres, though, when you are writing, having the stripe size be\n>around the same size as your page size (8k) could be advantageous, as\n>when postgres reads a page, it only reads a single stripe. So if it were\n>reading a series of pages, each one would come from a different disk.\n>\n>I may be wrong about that, though.\n> \n>\nI must admit im a bit amazed how such important parameter is so \nambiguous. an optimal strip size can improve the performance of the db \nsignificantly. 
I bet that the difference in performance between a poor \nstripe setting to an optimal one is more important then how much RAM or \nCPU you have.\nI hope to run some tests soon thugh i have limited time on the \nproduction server to do such tests.\n\n>John\n>=:->\n>\n> \n>\n>>Am i right ?\n>> \n>>\n>\n> \n>\n\n-- \n--------------------------\nCanaan Surfing Ltd.\nInternet Service Providers\nBen-Nes Michael - Manager\nTel: 972-4-6991122\nCel: 972-52-8555757\nFax: 972-4-6990098\nhttp://www.canaan.net.il\n--------------------------\n\n", "msg_date": "Tue, 20 Sep 2005 10:51:41 +0300", "msg_from": "Michael Ben-Nes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "On Tue, Sep 20, 2005 at 10:51:41AM +0300, Michael Ben-Nes wrote:\n>I must admit im a bit amazed how such important parameter is so \n>ambiguous. an optimal strip size can improve the performance of the db \n>significantly. \n\nIt's configuration dependent. IME, it has an insignificant effect. If\nanything, changing it from the vendor default may make performance worse\n(maybe the firmware on the array is tuned for a particular size?)\n\n>I bet that the difference in performance between a poor stripe setting\n>to an optimal one is more important then how much RAM or CPU you have.\n\nI'll take that bet, because I've benched it. If something so trivial\n(and completely free) was the single biggest factor in performance, do\nyou really think it would be an undiscovered secret?\n\n>I hope to run some tests soon thugh i have limited time on the\n>production server to do such tests.\n\nWell, benchmarking your data on your hardware is the first thing you\nshould do, not something you should try to cram in late in the game. You\ncan't get a valid answer to a \"what's the best configuration\" question\nuntil you've tested some configurations.\n\nMike Stone\n", "msg_date": "Tue, 20 Sep 2005 05:41:09 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "Hi Everybody!\n\nI've got a spare machine which is 2xXEON 3.2GHz, 4Gb RAM\n14x140Gb SCSI 10k (LSI MegaRaid 320U). It is going into production in 3-5months.\nI do have free time to run tests on this machine, and I could test different stripe sizes\nif somebody prepares a test script and data for that.\n\nI could also test different RAID modes 0,1,5 and 10 for this script.\n\nI guess the community needs these results.\n\nOn 16 Sep 2005 04:51:43 -0700\n\"bm\\\\mbn\" <[email protected]> wrote:\n\n> Hi Everyone\n> \n> The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k\n> disks.\n> \n> 2 disks are in RAID1 and hold the OS, SWAP & pg_xlog\n> 4 disks are in RAID10 and hold the Cluster itself.\n> \n> the DB will have two major tables 1 with 10 million rows and one with\n> 100 million rows.\n> All the activities against this tables will be SELECT.\n> \n> Currently the strip size is 8k. I read in many place this is a poor\n> setting.\n> \n> Am i right ?\n\n-- \nEvgeny Gridasov\nSoftware Developer\nI-Free, Russia\n", "msg_date": "Tue, 20 Sep 2005 14:53:07 +0400", "msg_from": "evgeny gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "Typically your stripe size impacts read and write.\n\nIn Solaris, the trick is to match it with your maxcontig parameter. If \nyou set maxcontig to 128 pages which is 128* 8 = 1024k (1M) then your \noptimal stripe size is 128 * 8 / (number of spindles in LUN).. 
Assuming \nnumber of spindles is 6 then you get an odd number. In such cases either \nyour current io or the next sequential io is going to be little bit \ninefficient depending on what you select (as a rule of thumb however \njust take the closest stripe size). However if your number of spindles \nmatches 8 then you get a perfect 128 and hence makes sense to select \n128K. (Maxcontig is a paramter in Solaris which defines the max \ncontiguous space allocated to a block which really helps in case of \nsequential io operations).\n\nBut as you see this was maxcontig dependent in my case. What if your \nmaxcontig is way off track. This can happen if your io pattern is more \nand more random. In such cases maxcontig is better at lower numbers to \nreduce space wastage and in effect reducing your stripe size reduces \nyour responde time.\n\nThis means now it is Workload dependent... Random IOs or Sequential IOs \n(atleast where IOs can be clubbed together).\n\nAs you can see stripe size in Solaris is eventually dependent on your \nWorkload. Typically my guess is on any other platform, the stripe size \nis dependent on your Workload and how it will access the data. Lower \nstripe size helps smaller IOs perform better but lack total throughtput \nefficiency. While larger stripe size increases throughput efficiency at \nthe cost of response time of your small IO requirements.\n\nDon't forget many file systems will buffer your IOs and can club them \ntogether if it finds them sequential from its point of view. Hence in \nsuch cases the effective IO size is what matters for raid sizes.\n\nIf you effective IO sizes are big then go for higher raid size.\nIf your effective IO sizes are small and response time is critical go \nfor smaller raid sizes\n\nRegards,\nJignesh\n\nevgeny gridasov wrote:\n\n>Hi Everybody!\n>\n>I've got a spare machine which is 2xXEON 3.2GHz, 4Gb RAM\n>14x140Gb SCSI 10k (LSI MegaRaid 320U). It is going into production in 3-5months.\n>I do have free time to run tests on this machine, and I could test different stripe sizes\n>if somebody prepares a test script and data for that.\n>\n>I could also test different RAID modes 0,1,5 and 10 for this script.\n>\n>I guess the community needs these results.\n>\n>On 16 Sep 2005 04:51:43 -0700\n>\"bm\\\\mbn\" <[email protected]> wrote:\n>\n> \n>\n>>Hi Everyone\n>>\n>>The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k\n>>disks.\n>>\n>>2 disks are in RAID1 and hold the OS, SWAP & pg_xlog\n>>4 disks are in RAID10 and hold the Cluster itself.\n>>\n>>the DB will have two major tables 1 with 10 million rows and one with\n>>100 million rows.\n>>All the activities against this tables will be SELECT.\n>>\n>>Currently the strip size is 8k. I read in many place this is a poor\n>>setting.\n>>\n>>Am i right ?\n>> \n>>\n>\n> \n>\n\n-- \n______________________________\n\nJignesh K. Shah\nMTS Software Engineer, \nMDE - Horizontal Technologies \nSun Microsystems, Inc\nPhone: (781) 442 3052\nEmail: [email protected]\n______________________________\n\n\n", "msg_date": "Tue, 20 Sep 2005 10:01:44 -0400", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "I have benched different sripe sizes with different file systems, and the \nperfmance differences can be quite dramatic.\n\nTheoreticaly a smaller stripe is better for OLTP as you can write more small \ntransactions independantly to more different disks more often than not, but \na large stripe size is good for Data warehousing as you are often doing very \nlarge sequential reads, and a larger stripe size is going to exploit the \non-drive cache as you request larger single chunks from the disk at a time.\n\nIt also seems that different controllers are partial to different defaults \nthat can affect their performance, so I would suggest that testing this on \ntwo different controller cards man be less than optimal.\n\nI would also recommend looking at file system. For us JFS worked \nsignifcantly faster than resier for large read loads and large write loads, \nso we chose JFS over ext3 and reiser.\n\nI found that lower stripe sizes impacted performance badly as did overly \nlarge stripe sizes.\n\nAlex Turner\nNetEconomist\n\nOn 16 Sep 2005 04:51:43 -0700, bmmbn <[email protected]> wrote:\n> \n> Hi Everyone\n> \n> The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k\n> disks.\n> \n> 2 disks are in RAID1 and hold the OS, SWAP & pg_xlog\n> 4 disks are in RAID10 and hold the Cluster itself.\n> \n> the DB will have two major tables 1 with 10 million rows and one with\n> 100 million rows.\n> All the activities against this tables will be SELECT.\n> \n> Currently the strip size is 8k. I read in many place this is a poor\n> setting.\n> \n> Am i right ?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nI have benched different sripe sizes with different file systems, and the perfmance differences can be quite dramatic.\n\nTheoreticaly a smaller stripe is better for OLTP as you can write more\nsmall transactions independantly to more different disks more often\nthan not, but a large stripe size is good for Data warehousing as you\nare often doing very large sequential reads, and a larger stripe size\nis going to exploit the on-drive cache as you request larger single\nchunks from the disk at a time.\n\nIt also seems that different controllers are partial to different\ndefaults that can affect their performance, so I would suggest that\ntesting this on two different controller cards man be less than optimal.\n\nI would also recommend looking at file system.  For us JFS worked\nsignifcantly faster than resier for large read loads and large write\nloads, so we chose JFS over ext3 and reiser.\n\nI found that lower stripe sizes impacted performance badly as did overly large stripe sizes.\n\nAlex Turner\nNetEconomistOn 16 Sep 2005 04:51:43 -0700, bmmbn <[email protected]> wrote:\nHi EveryoneThe machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15kdisks.2 disks are in RAID1 and hold the OS, SWAP & pg_xlog4 disks are in RAID10 and hold the Cluster itself.\nthe DB will have two major tables 1 with 10 million rows and one with100 million rows.All the activities against this tables will be SELECT.Currently the strip size is 8k. 
I read in many place this is a poor\nsetting.Am i right ?---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster", "msg_date": "Tue, 20 Sep 2005 11:13:15 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" } ]
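For evgeny's offer of test hardware, a deliberately rough SQL-only sketch of the two access patterns the thread keeps contrasting (large sequential reads vs. small random probes). The table, its size, and the row counts are all made up, and a synthetic test like this only says something if it is run identically on each stripe-size configuration, with caches treated the same way between runs:

    -- Throwaway test table; adjust the row count to comfortably exceed RAM.
    CREATE TABLE stripe_test AS
        SELECT g AS id, md5(g::text) AS padding
        FROM generate_series(1, 10000000) AS g;
    CREATE INDEX stripe_test_id_idx ON stripe_test (id);
    VACUUM ANALYZE stripe_test;

    -- Sequential-read pattern (large stripes usually shine here):
    EXPLAIN ANALYZE SELECT count(*) FROM stripe_test;

    -- Random-read pattern; enable_seqscan is switched off only to force
    -- index probes for the synthetic test.
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM (SELECT (random() * 10000000)::int AS id
          FROM generate_series(1, 5000)) AS probes
    JOIN stripe_test USING (id);
    SET enable_seqscan = on;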
[ { "msg_contents": "Hello all,\nMostly Postgres makes sense to me. But now and then it does something\nthat boggles my brain. Take the statements below. I have a table\n(agent) with 5300 rows. The primary key is agent_id. I can do SELECT\nagent_id FROM agent and it returns all PK values in less than half a\nsecond (dual Opteron box, 4G ram, SATA Raid 10 drive system).\n\nBut when I do a DELETE on two rows with an IN statement, using the primary\nkey index (as stated by EXPLAIN) it take almost 4 minutes.\npg_stat_activity shows nine other connections, all idle.\n\nIf someone can explain this to me it will help restore my general faith in\norder and consistancy in the universe.\n\nMartin\n\n\n-- Executing query:\nSELECT count(*) from agent;\nTotal query runtime: 54 ms.\nData retrieval runtime: 31 ms.\n1 rows retrieved.\nResult: 5353\n\n-- Executing query:\nVACUUM ANALYZE agent;\n\n-- Executing query:\nDELETE FROM agent WHERE agent_id IN (15395, 15394);\nQuery returned successfully: 2 rows affected, 224092 ms execution time.\n\n-- Executing query:\nEXPLAIN DELETE FROM agent WHERE agent_id IN (15395, 15394);\nIndex Scan using agent2_pkey, agent2_pkey on agent (cost=0.00..7.27\nrows=2 width=6)\nIndex Cond: ((agent_id = 15395) OR (agent_id = 15394))\n\nHere's my table\nCREATE TABLE agent\n(\n agent_id int4 NOT NULL DEFAULT nextval('agent_id_seq'::text),\n office_id int4 NOT NULL,\n lastname varchar(25),\n firstname varchar(25),\n...other columns... \n CONSTRAINT agent2_pkey PRIMARY KEY (agent_id),\n CONSTRAINT agent_office_fk FOREIGN KEY (office_id) REFERENCES office (office_id) ON UPDATE RESTRICT ON DELETE RESTRICT\n) \nWITHOUT OIDS;\n\n\n", "msg_date": "Fri, 16 Sep 2005 20:34:14 -0500", "msg_from": "Martin Nickel <[email protected]>", "msg_from_op": true, "msg_subject": "How can this be?" }, { "msg_contents": "On Fri, Sep 16, 2005 at 08:34:14PM -0500, Martin Nickel wrote:\n\n> Hello all,\n> Mostly Postgres makes sense to me. But now and then it does something\n> that boggles my brain. Take the statements below. I have a table\n> (agent) with 5300 rows. The primary key is agent_id. I can do SELECT\n> agent_id FROM agent and it returns all PK values in less than half a\n> second (dual Opteron box, 4G ram, SATA Raid 10 drive system).\n> \n> But when I do a DELETE on two rows with an IN statement, using the primary\n> key index (as stated by EXPLAIN) it take almost 4 minutes.\n> pg_stat_activity shows nine other connections, all idle.\n> \n> If someone can explain this to me it will help restore my general faith in\n> order and consistancy in the universe.\n\nWhen you delete a row from agent PG needs to find any matching rows in\noffice. Is office large? Is office(office_id) indexed?\n\n> -- Executing query:\n> DELETE FROM agent WHERE agent_id IN (15395, 15394);\n> Query returned successfully: 2 rows affected, 224092 ms execution time.\n> \n> -- Executing query:\n> EXPLAIN DELETE FROM agent WHERE agent_id IN (15395, 15394);\n> Index Scan using agent2_pkey, agent2_pkey on agent (cost=0.00..7.27\n> rows=2 width=6)\n> Index Cond: ((agent_id = 15395) OR (agent_id = 15394))\n> \n> Here's my table\n> CREATE TABLE agent\n> (\n> agent_id int4 NOT NULL DEFAULT nextval('agent_id_seq'::text),\n> office_id int4 NOT NULL,\n> lastname varchar(25),\n> firstname varchar(25),\n> ...other columns... 
\n> CONSTRAINT agent2_pkey PRIMARY KEY (agent_id),\n> CONSTRAINT agent_office_fk FOREIGN KEY (office_id) REFERENCES office (office_id) ON UPDATE RESTRICT ON DELETE RESTRICT\n> ) \n> WITHOUT OIDS;\n\nCheers,\n Steve\n", "msg_date": "Mon, 19 Sep 2005 16:02:09 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can this be?" }, { "msg_contents": "On Fri, 16 Sep 2005, Martin Nickel wrote:\n\n> Hello all,\n> Mostly Postgres makes sense to me. But now and then it does something\n> that boggles my brain. Take the statements below. I have a table\n> (agent) with 5300 rows. The primary key is agent_id. I can do SELECT\n> agent_id FROM agent and it returns all PK values in less than half a\n> second (dual Opteron box, 4G ram, SATA Raid 10 drive system).\n>\n> But when I do a DELETE on two rows with an IN statement, using the primary\n> key index (as stated by EXPLAIN) it take almost 4 minutes.\n> pg_stat_activity shows nine other connections, all idle.\n\nAre there any tables that reference agent or other triggers? My first\nguess would be that there's a foreign key check for something else that's\nreferencing agent.agent_id for which an index scan isn't being used.\n", "msg_date": "Mon, 19 Sep 2005 16:07:44 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can this be?" } ]
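Both replies point at the same likely culprit: a foreign key in some other table that references agent, with no index on the referencing column, so each deleted agent row forces a sequential scan of the referencing table. A hedged sketch of how to confirm and fix it; the contract table below is purely hypothetical, only the pg_constraint query is generic:

    -- List the tables whose foreign keys point at agent:
    SELECT conname, conrelid::regclass AS referencing_table
    FROM pg_constraint
    WHERE confrelid = 'agent'::regclass
      AND contype = 'f';

    -- Each referencing column then needs its own index, e.g. if a
    -- (hypothetical) contract table carries an agent_id column:
    CREATE INDEX contract_agent_id_idx ON contract (agent_id);
    ANALYZE contract;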
[ { "msg_contents": "Hello,\n\nWe are using postgresql in a search engine on an intranet handling \nthrousand of documents.\nBut we ave a big problem when users use more than two search key.\n\nThere are more tables around, but the heart of the search engine is \nmade of three tables :\n\nfiches (f_id int4, f_title varchar) 52445 rows\nengine (f_id int4, k_id int4, weight ) 11761700 rows\nkeywords(k_id, keyword) 1072600 rows\n\nA \"fiche\" is linked to any kind of document.\nThe engine table counts how many times a keyword appears in a document.\n\nA query to search on one or two keywords is quick to execute (the \nfront-end creates thoses queries):\n\n---------------------------------------------------------------------\nselect count (distinct f.f_id) as results\nFROM\nfiches f\n INNER JOIN engine e1 INNER JOIN keywords k1 USING (k_id) USING (f_id)\n INNER JOIN engine e2 INNER JOIN keywords k2 USING (k_id) USING (f_id)\n\nWHERE TRUE\nAND k1.keyword like 'maintenance%'\nAND k2.keyword like 'exploitation%'\n;\n\nQUERY PLAN\nAggregate (cost=3953.00..3953.00 rows=1 width=4) (actual \ntime=525.243..525.243 rows=1 loops=1)\n -> Nested Loop (cost=1974.79..3952.99 rows=1 width=4) (actual \ntime=211.570..513.758 rows=6879 loops=1)\n -> Hash Join (cost=1974.79..3949.62 rows=1 width=8) (actual \ntime=211.483..389.340 rows=6879 loops=1)\n Hash Cond: (\"outer\".f_id = \"inner\".f_id)\n -> Nested Loop (cost=0.00..1974.76 rows=11 width=4) \n(actual time=0.132..155.499 rows=9520 loops=1)\n -> Index Scan using keyword_pattern_key on keywords \nk2 (cost=0.00..3.51 rows=1 width=4) (actual time=0.078..1.887 rows=75 \nloops=1)\n Index Cond: (((keyword)::text ~>=~ \n'exploitation'::character varying) AND ((keyword)::text ~<~ \n'exploitatioo'::character varying))\n Filter: ((keyword)::text ~~ 'exploitation%'::text)\n -> Index Scan using k_id_key on engine e2 \n(cost=0.00..1954.93 rows=1306 width=8) (actual time=0.049..1.842 \nrows=127 loops=75)\n Index Cond: (e2.k_id = \"outer\".k_id)\n -> Hash (cost=1974.76..1974.76 rows=11 width=4) (actual \ntime=211.203..211.203 rows=0 loops=1)\n -> Nested Loop (cost=0.00..1974.76 rows=11 \nwidth=4) (actual time=0.296..197.590 rows=11183 loops=1)\n -> Index Scan using keyword_pattern_key on \nkeywords k1 (cost=0.00..3.51 rows=1 width=4) (actual time=0.189..1.351 \nrows=73 loops=1)\n Index Cond: (((keyword)::text ~>=~ \n'maintenance'::character varying) AND ((keyword)::text ~<~ \n'maintenancf'::character varying))\n Filter: ((keyword)::text ~~ \n'maintenance%'::text)\n -> Index Scan using k_id_key on engine e1 \n(cost=0.00..1954.93 rows=1306 width=8) (actual time=0.029..2.406 \nrows=153 loops=73)\n Index Cond: (e1.k_id = \"outer\".k_id)\n -> Index Scan using fiches_pkey on fiches f (cost=0.00..3.36 \nrows=1 width=4) (actual time=0.013..0.014 rows=1 loops=6879)\n Index Cond: (f.f_id = \"outer\".f_id)\nTotal runtime: 525.511 ms\n--------------------------------------------------------------------------\n\n\nBut when there are three keywords or more, the planner chooses to \nperform a very costly nested loop :\n\n--------------------------------------------------------------------------\nselect count (distinct f.f_id) as results\nFROM\nfiches f\n INNER JOIN engine e1 INNER JOIN keywords k1 USING (k_id) USING (f_id)\n INNER JOIN engine e2 INNER JOIN keywords k2 USING (k_id) USING (f_id)\n INNER JOIN engine e3 INNER JOIN keywords k3 USING (k_id) USING (f_id)\n\nWHERE TRUE\nAND k1.keyword like 'maintenance%'\nAND k2.keyword like 'exploitation%'\nAND k3.keyword like 
'numerique%'\n;\n\nQUERY PLAN\nAggregate (cost=5927.90..5927.90 rows=1 width=4) (actual \ntime=673048.168..673048.169 rows=1 loops=1)\n -> Nested Loop (cost=1974.79..5927.90 rows=1 width=4) (actual \ntime=1853.789..673038.065 rows=2929 loops=1)\n -> Nested Loop (cost=1974.79..5924.52 rows=1 width=12) (actual \ntime=1853.719..672881.725 rows=2929 loops=1)\n Join Filter: (\"inner\".f_id = \"outer\".f_id)\n -> Hash Join (cost=1974.79..3949.62 rows=1 width=8) \n(actual time=198.845..441.947 rows=6879 loops=1)\n Hash Cond: (\"outer\".f_id = \"inner\".f_id)\n -> Nested Loop (cost=0.00..1974.76 rows=11 \nwidth=4) (actual time=0.129..199.895 rows=9520 loops=1)\n -> Index Scan using keyword_pattern_key on \nkeywords k2 (cost=0.00..3.51 rows=1 width=4) (actual time=0.077..1.918 \nrows=75 loops=1)\n Index Cond: (((keyword)::text ~>=~ \n'exploitation'::character varying) AND ((keyword)::text ~<~ \n'exploitatioo'::character varying))\n Filter: ((keyword)::text ~~ \n'exploitation%'::text)\n -> Index Scan using k_id_key on engine e2 \n(cost=0.00..1954.93 rows=1306 width=8) (actual time=0.035..2.342 \nrows=127 loops=75)\n Index Cond: (e2.k_id = \"outer\".k_id)\n -> Hash (cost=1974.76..1974.76 rows=11 width=4) \n(actual time=198.650..198.650 rows=0 loops=1)\n -> Nested Loop (cost=0.00..1974.76 rows=11 \nwidth=4) (actual time=0.174..187.216 rows=11183 loops=1)\n -> Index Scan using keyword_pattern_key \non keywords k1 (cost=0.00..3.51 rows=1 width=4) (actual \ntime=0.113..1.222 rows=73 loops=1)\n Index Cond: (((keyword)::text ~>=~ \n'maintenance'::character varying) AND ((keyword)::text ~<~ \n'maintenancf'::character varying))\n Filter: ((keyword)::text ~~ \n'maintenance%'::text)\n -> Index Scan using k_id_key on engine \ne1 (cost=0.00..1954.93 rows=1306 width=8) (actual time=0.029..2.311 \nrows=153 loops=73)\n Index Cond: (e1.k_id = \"outer\".k_id)\n -> Nested Loop (cost=0.00..1974.76 rows=11 width=4) \n(actual time=0.087..90.165 rows=9553 loops=6879)\n -> Index Scan using keyword_pattern_key on keywords \nk3 (cost=0.00..3.51 rows=1 width=4) (actual time=0.049..0.628 rows=49 \nloops=6879)\n Index Cond: (((keyword)::text ~>=~ \n'numerique'::character varying) AND ((keyword)::text ~<~ \n'numeriquf'::character varying))\n Filter: ((keyword)::text ~~ 'numerique%'::text)\n -> Index Scan using k_id_key on engine e3 \n(cost=0.00..1954.93 rows=1306 width=8) (actual time=0.023..1.544 \nrows=195 loops=337071)\n Index Cond: (e3.k_id = \"outer\".k_id)\n -> Index Scan using fiches_pkey on fiches f (cost=0.00..3.36 \nrows=1 width=4) (actual time=0.041..0.043 rows=1 loops=2929)\n Index Cond: (f.f_id = \"outer\".f_id)\nTotal runtime: 673048.405 ms\n----------------------------------------------------------------------\nMore than 10 minutes !\n\nIs there a specific reason the planner chooses this way ?\nCan whe do something on the postgresql configuration to avoid this ?\nCan whe force the planner to use a hash join as it does for the first \njoins ?\n\nRegards,\nAntoine Bajolet\n\n\n", "msg_date": "Sat, 17 Sep 2005 17:47:18 +0200", "msg_from": "Antoine Bajolet <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loop trouble : Execution time increases more 1000 time (long)" }, { "msg_contents": "On Sat, 2005-09-17 at 17:47 +0200, Antoine Bajolet wrote:\n\n> There are more tables around, but the heart of the search engine is \n> made of three tables :\n> \n> fiches (f_id int4, f_title varchar) 52445 rows\n> engine (f_id int4, k_id int4, weight ) 11761700 rows\n> keywords(k_id, keyword) 1072600 rows\n> \n> A 
\"fiche\" is linked to any kind of document.\n> The engine table counts how many times a keyword appears in a document.\n> \n> A query to search on one or two keywords is quick to execute (the \n> front-end creates thoses queries):\n> \n\n> Is there a specific reason the planner chooses this way ?\n\nYes, you have an additional join for each new keyword, so there is more\nwork to do.\n\nRecode your SQL with an IN subselect that retrieves all possible\nkeywords before it accesses the larger table.\n\nThat way you should have only one join for each new keyword.\n\n> Can whe do something on the postgresql configuration to avoid this ?\n> Can whe force the planner to use a hash join as it does for the first \n> joins ?\n\nNot required, IMHO.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Thu, 22 Sep 2005 09:24:10 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop trouble : Execution time increases more" }, { "msg_contents": "Antoine Bajolet <[email protected]> writes:\n> We are using postgresql in a search engine on an intranet handling \n> throusand of documents.\n> But we ave a big problem when users use more than two search key.\n\nI think you need to increase the statistics targets for your keywords\ntable --- the estimates of numbers of matching rows are much too small:\n\n> -> Index Scan using keyword_pattern_key on keywords \n> k2 (cost=0.00..3.51 rows=1 width=4) (actual time=0.078..1.887 rows=75 \n> loops=1)\n> Index Cond: (((keyword)::text ~>=~ \n> 'exploitation'::character varying) AND ((keyword)::text ~<~ \n> 'exploitatioo'::character varying))\n> Filter: ((keyword)::text ~~ 'exploitation%'::text)\n\nA factor-of-75 error is quite likely to mislead the planner into\nchoosing a bad join plan.\n\nBTW, have you looked into using a real full-text-search engine (eg,\ntsearch2) instead of rolling your own like this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Sep 2005 12:17:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop trouble : Execution time increases more 1000 time\n\t(long)" }, { "msg_contents": "Hello,\n\nTom Lane a �crit :\n\n>Antoine Bajolet <[email protected]> writes:\n> \n>\n>>We are using postgresql in a search engine on an intranet handling \n>>throusand of documents.\n>>But we ave a big problem when users use more than two search key.\n>> \n>>\n>\n>I think you need to increase the statistics targets for your keywords\n>table --- the estimates of numbers of matching rows are much too small:\n> \n>\nWhat value you think i could put into a ALTER TABLE SET STATISTICS \nstatment ?\n\nAlso, the solution given by Simon Riggs works well.\n<quote>\n\nRecode your SQL with an IN subselect that retrieves all possible \nkeywords before it accesses the larger table.\n</quote>\n\nBut i will try the old ones increasing the statistics parameter and compare performance.\n\n\n> \n>\n>> -> Index Scan using keyword_pattern_key on keywords \n>>k2 (cost=0.00..3.51 rows=1 width=4) (actual time=0.078..1.887 rows=75 \n>>loops=1)\n>> Index Cond: (((keyword)::text ~>=~ \n>>'exploitation'::character varying) AND ((keyword)::text ~<~ \n>>'exploitatioo'::character varying))\n>> Filter: ((keyword)::text ~~ 'exploitation%'::text)\n>> \n>>\n>\n>A factor-of-75 error is quite likely to mislead the planner into\n>choosing a bad join plan.\n>\n>BTW, have you looked into using a real full-text-search engine (eg,\n>tsearch2) instead of rolling your own like this?\n> \n>\nIt seems a quite 
good contrib, but...\nThe first version of this search engine was developped in 2000... \ntsearch2 nor tsearch existed at this time.\nAlso, there are some developpement works around this search engine \n(pertinence algorithm, filtering with users rights, ponderating keywords \nwith specific rules to each type of document, etc.) and adapting all to \nwork in the similar way with tsearch2 seems to be a bit heavy.\nAt the end, each document indexed are quite big and the choosen method \nreduces disk storage : 1 Go of text content traduces to ~100 Mo of table \nspace.\n\nBest Regards,\nAntoine Bajolet\n\n\n", "msg_date": "Thu, 22 Sep 2005 19:12:36 +0200", "msg_from": "Antoine Bajolet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested Loop trouble : Execution time increases more" }, { "msg_contents": "Re,\n\nWith modifing parameters like this :\n\nALTER TABLE keywords ALTER keyword SET STATISTICS 100;\nALTER TABLE keywords ALTER k_id SET STATISTICS 100;\nALTER TABLE engine ALTER k_id SET STATISTICS 100;\nALTER TABLE engine ALTER f_id SET STATISTICS 100;\n\nvacuuming both tables\nand rewriting the queries using sub-selects :\n\nselect count (distinct f.f_id) as results\nFROM\nfiches f\nINNER JOIN (SELECT distinct f_id FROM keywords,engine WHERE engine.k_id \n= keywords.k_id AND keyword like 'exploitation%') as e1 USING(f_id)\nINNER JOIN (SELECT distinct f_id FROM keywords,engine WHERE engine.k_id \n= keywords.k_id AND keyword like 'maintenance%') as e2 USING(f_id)\nINNER JOIN (SELECT distinct f_id FROM keywords,engine WHERE engine.k_id \n= keywords.k_id AND keyword like 'numerique%') as e3 USING(f_id)\n\nThe query time is less than 600 ms, and increases only a little adding \nmore keywords.\n\nThanks to Tom Lane and Simon Riggs.\n\nBest regards,\nAntoine Bajolet\n\nAntoine Bajolet a �crit :\n\n> Hello,\n>\n> Tom Lane a �crit :\n>\n>> Antoine Bajolet <[email protected]> writes:\n>> \n>>\n>>> We are using postgresql in a search engine on an intranet handling \n>>> throusand of documents.\n>>> But we ave a big problem when users use more than two search key.\n>>> \n>>\n>>\n>> I think you need to increase the statistics targets for your keywords\n>> table --- the estimates of numbers of matching rows are much too small:\n>> \n>>\n> What value you think i could put into a ALTER TABLE SET STATISTICS \n> statment ?\n>\n> Also, the solution given by Simon Riggs works well.\n> <quote>\n>\n> Recode your SQL with an IN subselect that retrieves all possible \n> keywords before it accesses the larger table.\n> </quote>\n>\n> But i will try the old ones increasing the statistics parameter and \n> compare performance.\n>\n\n", "msg_date": "Thu, 22 Sep 2005 19:52:54 +0200", "msg_from": "Antoine Bajolet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested Loop trouble : Execution time increases more" } ]
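One step the thread leaves implicit: after raising the statistics targets and re-running ANALYZE, it is worth confirming that the planner's view of the columns actually changed before crediting the query rewrite alone. A small sketch using the tables from this thread:

    -- What the planner now knows about the join columns:
    SELECT tablename, attname, n_distinct, null_frac
    FROM pg_stats
    WHERE tablename IN ('keywords', 'engine');

    -- And whether the row estimates now line up with reality:
    EXPLAIN ANALYZE
    SELECT k_id FROM keywords WHERE keyword LIKE 'exploitation%';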
[ { "msg_contents": "I have a query that looks roughly like this (I've removed irrelevant \nSELECT clause material and obfuscated names, trying to keep them \nconsistent where altered in EXPLAIN output):\n\nSELECT u.emma_member_id, h.action_ts\nFROM user as u, history as h\nWHERE u.user_id = h.user_id\nAND h.action_id = '$constant_data'\nORDER BY h.action_ts DESC LIMIT 100 OFFSET 0\n\nThe user table has ~25,000 rows. The history table has ~750,000 rows. \nCurrently, there is an index on history.action_ts and a separate one \non history.action_id. There's also a PRIMARY KEY on user.user_id. If \nI run the query as such, I get a plan like this:\n\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--------------------------------------------------------\nLimit (cost=0.00..2196.30 rows=100 width=925) (actual \ntime=947.208..3178.775 rows=3 loops=1)\n -> Nested Loop (cost=0.00..83898.65 rows=3820 width=925) \n(actual time=947.201..3178.759 rows=3 loops=1)\n -> Index Scan Backward using h_action_ts_idx on history h \n(cost=0.00..60823.53 rows=3820 width=480) (actual \ntime=946.730..3177.953 rows=3 loops=1)\n Filter: (action_id = $constant_data::bigint)\n -> Index Scan using user_pkey on user u (cost=0.00..6.01 \nrows=1 width=445) (actual time=0.156..0.161 rows=1 loops=3)\n Index Cond: (u.user_id = \"outer\".user_id)\nTotal runtime: 3179.143 ms\n(7 rows)\n\nIf I drop the index on the timestamp field, I get a plan like this:\n\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------------------------------------------\nLimit (cost=17041.41..17041.66 rows=100 width=925) (actual \ntime=201.725..201.735 rows=3 loops=1)\n -> Sort (cost=17041.41..17050.96 rows=3820 width=925) (actual \ntime=201.719..201.722 rows=3 loops=1)\n Sort Key: h.action_ts\n -> Merge Join (cost=13488.15..16814.13 rows=3820 \nwidth=925) (actual time=7.306..201.666 rows=3 loops=1)\n Merge Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Index Scan using user_pkey on user u \n(cost=0.00..3134.82 rows=26802 width=445) (actual time=0.204..151.351 \nrows=24220 loops=1)\n -> Sort (cost=13488.15..13497.70 rows=3820 \nwidth=480) (actual time=0.226..0.234 rows=3 loops=1)\n Sort Key: h.user_id\n -> Index Scan using h_action_id_idx on history \nh (cost=0.00..13260.87 rows=3820 width=480) (actual \ntime=0.184..0.195 rows=3 loops=1)\n Index Cond: (action_id = \n$constant_data::bigint)\nTotal runtime: 202.089 ms\n(11 rows)\n\nClearly, if the index on the timestamp field is there, postgres wants \nto use it for the ORDER BY, even though the performance is worse. How \nis this preference made internally? If both indexes exist, will \npostgres always prefer the index on an ordered column? If I need the \nindex on the timestamp field for other queries, is my best bet just \nto increase sort_mem for this query?\n\nHere's my version string:\nPostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n\n", "msg_date": "Mon, 19 Sep 2005 18:00:33 -0500", "msg_from": "\"Thomas F. O'Connell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index Selection: ORDER BY vs. 
PRIMARY KEY" }, { "msg_contents": "\"Thomas F. O'Connell\" <[email protected]> writes:\n> Clearly, if the index on the timestamp field is there, postgres wants \n> to use it for the ORDER BY, even though the performance is worse. How \n> is this preference made internally? If both indexes exist, will \n> postgres always prefer the index on an ordered column? If I need the \n> index on the timestamp field for other queries, is my best bet just \n> to increase sort_mem for this query?\n\nIf you suppose that Postgres has a \"preference\" for one index over\nanother, you're already fatally off track. It's all about estimated\ncosts. In this case, the plan with h_action_ts_idx is preferred because\nit has a lower estimated cost (2196.30) than the other plan (17041.66).\nThe way to think about this is not that Postgres \"prefers\" one index\nover another, but that the estimated costs aren't in line with reality.\n\nIt looks from the plans that there are a number of estimation errors\ngiving you trouble, but the one that seems most easily fixable is\nhere:\n\n -> Index Scan using h_action_id_idx on history h (cost=0.00..13260.87 rows=3820 width=480) (actual time=0.184..0.195 rows=3 loops=1)\n Index Cond: (action_id = $constant_data::bigint)\n\nEstimating 3820 rows matching $constant_data when there are really only\n3 is a pretty serious estimation error :-( ... certainly more than\nenough to explain a factor-of-100 error in the total estimated costs.\n\nHow recently did you last ANALYZE the history file? If the ANALYZE\nstats are up-to-date and it's still blowing the rowcount estimate by\na factor of 1000, maybe you need to increase the statistics target for\nthis column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Sep 2005 23:05:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Selection: ORDER BY vs. PRIMARY KEY " }, { "msg_contents": "\nOn Sep 19, 2005, at 10:05 PM, Tom Lane wrote:\n\n> \"Thomas F. O'Connell\" <[email protected]> writes:\n>\n>> Clearly, if the index on the timestamp field is there, postgres wants\n>> to use it for the ORDER BY, even though the performance is worse. How\n>> is this preference made internally? If both indexes exist, will\n>> postgres always prefer the index on an ordered column? If I need the\n>> index on the timestamp field for other queries, is my best bet just\n>> to increase sort_mem for this query?\n>\n> If you suppose that Postgres has a \"preference\" for one index over\n> another, you're already fatally off track. It's all about estimated\n> costs. In this case, the plan with h_action_ts_idx is preferred \n> because\n> it has a lower estimated cost (2196.30) than the other plan \n> (17041.66).\n> The way to think about this is not that Postgres \"prefers\" one index\n> over another, but that the estimated costs aren't in line with \n> reality.\n>\n> It looks from the plans that there are a number of estimation errors\n> giving you trouble, but the one that seems most easily fixable is\n> here:\n>\n> -> Index Scan using h_action_id_idx on history h \n> (cost=0.00..13260.87 rows=3820 width=480) (actual time=0.184..0.195 \n> rows=3 loops=1)\n> Index Cond: (action_id = $constant_data::bigint)\n>\n> Estimating 3820 rows matching $constant_data when there are really \n> only\n> 3 is a pretty serious estimation error :-( ... certainly more than\n> enough to explain a factor-of-100 error in the total estimated costs.\n>\n> How recently did you last ANALYZE the history file? 
If the ANALYZE\n> stats are up-to-date and it's still blowing the rowcount estimate by\n> a factor of 1000, maybe you need to increase the statistics target for\n> this column.\n>\n> regards, tom lane\n\nThanks for the guidance, Tom. I don't know why I was \"fatally off \ntrack\" on this one. It was indeed statistics related. pg_autovacuum \nhadn't visited this table for a long enough window to have an impact \non the estimates. A sad case of the should've-known-betters...\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n\n", "msg_date": "Tue, 20 Sep 2005 00:36:51 -0500", "msg_from": "\"Thomas F. O'Connell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index Selection: ORDER BY vs. PRIMARY KEY " } ]
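The fix Tom describes comes down to raising the per-column statistics target and re-analyzing. A minimal sketch, reusing the (obfuscated) table and column names from the thread; the target of 200 and the constant in the last statement are arbitrary placeholders:

    ALTER TABLE history ALTER COLUMN action_id SET STATISTICS 200;
    ANALYZE history;
    -- re-check estimated vs. actual rows for the problem predicate
    EXPLAIN ANALYZE SELECT * FROM history WHERE action_id = 42;

The per-column target overrides default_statistics_target the next time the table is analyzed, whether that happens by hand or via pg_autovacuum.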
[ { "msg_contents": "Alex Turner wrote:\n\n> I would also recommend looking at file system. For us JFS worked signifcantly \n> faster than resier for large read loads and large write loads, so we chose JFS \n> over ext3 and reiser.\n \nhas jfs been reliable for you? there seems to be a lot of conjecture about instability,\nbut i find jfs a potentially attractive alternative for a number of reasons.\n\nrichard\n", "msg_date": "Tue, 20 Sep 2005 11:21:44 -0400", "msg_from": "\"Welty, Richard\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "I have found JFS to be just fine. We have been running a medium load on this \nserver for 9 months with no unscheduled down time. Datbase is about 30gig on \ndisk, and we get about 3-4 requests per second that generate results sets in \nthe thousands from about 8am to about 11pm.\n\nI have foudn that JFS barfs if you put a million files in a directory and \ntry to do an 'ls', but then so did reiser, only Ext3 handled this test \nsuccesfully. Fortunately with a database, this is an atypical situation, so \nJFS has been fine for DB for us so far.\n\nWe have had severe problems with Ext3 when file systems hit 100% usage, they \nget all kinds of unhappy, we haven't had the same problem with JFS.\n\nAlex Turner\nNetEconomist\n\nOn 9/20/05, Welty, Richard <[email protected]> wrote:\n> \n> Alex Turner wrote:\n> \n> > I would also recommend looking at file system. For us JFS worked \n> signifcantly\n> > faster than resier for large read loads and large write loads, so we \n> chose JFS\n> > over ext3 and reiser.\n> \n> has jfs been reliable for you? there seems to be a lot of conjecture about \n> instability,\n> but i find jfs a potentially attractive alternative for a number of \n> reasons.\n> \n> richard\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nI have found JFS to be just fine.  We have been running a medium\nload on this server for 9 months with no unscheduled down time. \nDatbase is about 30gig on disk, and we get about 3-4 requests per\nsecond that generate results sets in the thousands from about 8am to\nabout 11pm.\n\nI have foudn that JFS barfs if you put a million files in a directory\nand try to do an 'ls', but then so did reiser, only Ext3 handled this\ntest succesfully.  Fortunately with a database, this is an\natypical situation, so JFS has been fine for DB for us so far.\n\nWe have had severe problems with Ext3 when file systems hit 100% usage,\nthey get all kinds of unhappy, we haven't had the same problem with JFS.\n\nAlex Turner\nNetEconomistOn 9/20/05, Welty, Richard <[email protected]> wrote:\nAlex Turner  wrote:> I would also recommend looking at file system.  For us JFS worked signifcantly>  faster than resier for large read loads and large write loads, so we chose JFS>  over ext3 and reiser.\nhas jfs been reliable for you? 
there seems to be a lot of conjecture about instability,but i find jfs a potentially attractive alternative for a number of reasons.richard---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not       match", "msg_date": "Tue, 20 Sep 2005 11:33:29 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "We have a production server(8.0.2) running 24x7, 300k+ transactions per day.\nLinux 2.6.11 / JFS file system.\nNo problems. It works faster than ext3.\n\n> Alex Turner wrote:\n> \n> > I would also recommend looking at file system. For us JFS worked signifcantly \n> > faster than resier for large read loads and large write loads, so we chose JFS \n> > over ext3 and reiser.\n> \n> has jfs been reliable for you? there seems to be a lot of conjecture about instability,\n> but i find jfs a potentially attractive alternative for a number of reasons.\n> \n> richard\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n-- \nEvgeny Gridasov\nSoftware Developer\nI-Free, Russia\n", "msg_date": "Tue, 20 Sep 2005 20:02:10 +0400", "msg_from": "evgeny gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID Stripe size" }, { "msg_contents": "Hi Everybody.\n\nI am going to replace some 'select count(*) from ... where ...' queries\nwhich run on large tables (10M+ rows) with something like\n'explain select * from ... where ....' and parse planner output after that\nto find out its forecast about number of rows the query is going to retrieve.\n\nSince my users do not need exact row count for large tables, this will\nboost performance for my application. I ran some queries with explain and\nexplain analyze then. If i set statistics number for the table about 200-300\nthe planner forecast seems to be working very fine.\n\nMy questions are:\n1. Is there a way to interact with postgresql planner, other than 'explain ...'? An aggregate query like 'select estimate_count(*) from ...' would really help =))\n2. How precise is the planner row count forecast given for a complex query (select with 3-5 joint tables,aggregates,subselects, etc...)?\n\n\n-- \nEvgeny Gridasov\nSoftware Developer\nI-Free, Russia\n", "msg_date": "Tue, 20 Sep 2005 22:12:55 +0400", "msg_from": "evgeny gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Planner statistics vs. count(*)" }, { "msg_contents": "evgeny gridasov wrote:\n> Hi Everybody.\n> \n> I am going to replace some 'select count(*) from ... where ...' queries\n> which run on large tables (10M+ rows) with something like\n> 'explain select * from ... where ....' and parse planner output after that\n> to find out its forecast about number of rows the query is going to retrieve.\n> \n> Since my users do not need exact row count for large tables, this will\n> boost performance for my application. I ran some queries with explain and\n> explain analyze then. If i set statistics number for the table about 200-300\n> the planner forecast seems to be working very fine.\n> \n> My questions are:\n> 1. Is there a way to interact with postgresql planner, other than 'explain ...'? An aggregate query like 'select estimate_count(*) from ...' 
would really help =))\n> 2. How precise is the planner row count forecast given for a complex query (select with 3-5 joint tables,aggregates,subselects, etc...)?\n> \n> \nI think that this has been done before. Check the list archives (I believe it\nmay have been Michael Fuhr?)\n\nah, check this:\n\nhttp://archives.postgresql.org/pgsql-sql/2005-08/msg00046.php\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n", "msg_date": "Tue, 20 Sep 2005 11:29:31 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner statistics vs. count(*)" } ]
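The archive post referenced above is built around the same trick being asked about here: run EXPLAIN from PL/pgSQL and scrape the top node's row estimate. A rough sketch of that idea (not the exact function from the archive), assuming 8.0 or later so dollar quoting is available; the result is only as good as the planner's statistics, so expect it to drift for complex multi-join queries:

    CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS integer AS $$
    DECLARE
        plan_line record;
        n_rows    integer;
    BEGIN
        -- the first EXPLAIN output line carrying rows= is the top plan node
        FOR plan_line IN EXECUTE 'EXPLAIN ' || query LOOP
            n_rows := substring(plan_line."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
            EXIT WHEN n_rows IS NOT NULL;
        END LOOP;
        RETURN n_rows;
    END;
    $$ LANGUAGE plpgsql STRICT;

    -- usage (table and predicate are placeholders):
    -- SELECT count_estimate('SELECT * FROM bigtable WHERE some_col = 42');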
[ { "msg_contents": "unsubscribe\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\nNelba Sánchez R.\nServicios Tecnológicos de Gestión (STG)\nPontificia Universidad Católica de Chile\nfono : (56-2) 686 2316\nfax : (56-2) 222 9487\nmailto:[email protected]\n\n", "msg_date": "Tue, 20 Sep 2005 14:41:30 -0400", "msg_from": "Nelba =?iso-8859-1?Q?S=E1nchez?= Rojas <[email protected]>", "msg_from_op": true, "msg_subject": "unsubscribe " }, { "msg_contents": "I have a table that is purged by 25% each night. I'd like to do a\nvacuum nightly after the purge to reclaim the space, but I think I'll\nstill need to do a vacuum full weekly.\n\nWould there be any benefit to doing a cluster instead of the vacuum?\n\n", "msg_date": "Tue, 20 Sep 2005 14:53:19 -0400", "msg_from": "Markus Benne <[email protected]>", "msg_from_op": false, "msg_subject": "VACUUM FULL vs CLUSTER" }, { "msg_contents": "On Tue, Sep 20, 2005 at 14:53:19 -0400,\n Markus Benne <[email protected]> wrote:\n> I have a table that is purged by 25% each night. I'd like to do a\n> vacuum nightly after the purge to reclaim the space, but I think I'll\n> still need to do a vacuum full weekly.\n> \n> Would there be any benefit to doing a cluster instead of the vacuum?\n\nIf you have a proper FSM setting you shouldn't need to do vacuum fulls\n(unless you have an older version of postgres where index bloat might\nbe an issue).\n", "msg_date": "Fri, 23 Sep 2005 09:20:27 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" }, { "msg_contents": "Bruno Wolff III mentioned :\n=> If you have a proper FSM setting you shouldn't need to do vacuum fulls\n=> (unless you have an older version of postgres where index bloat might\n=> be an issue).\n\nWhat version of postgres was the last version that had\nthe index bloat problem?\n", "msg_date": "Fri, 23 Sep 2005 18:16:44 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" }, { "msg_contents": "On Fri, Sep 23, 2005 at 18:16:44 +0200,\n Stef <[email protected]> wrote:\n> Bruno Wolff III mentioned :\n> => If you have a proper FSM setting you shouldn't need to do vacuum fulls\n> => (unless you have an older version of postgres where index bloat might\n> => be an issue).\n> \n> What version of postgres was the last version that had\n> the index bloat problem?\n\nYou can check the release notes to be sure, but my memory is that the\nunbounded bloat problem was fixed in 7.4. There still are usage patterns\nthat can result in bloating, but it is limited to some constant multiplier\nof the minimum index size.\n", "msg_date": "Fri, 23 Sep 2005 11:59:52 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" }, { "msg_contents": "On Fri, Sep 23, 2005 at 06:16:44PM +0200, Stef wrote:\n> Bruno Wolff III mentioned :\n> => If you have a proper FSM setting you shouldn't need to do vacuum fulls\n> => (unless you have an older version of postgres where index bloat might\n> => be an issue).\n> \n> What version of postgres was the last version that had\n> the index bloat problem?\n\nThe worst problems were solved in 7.4. 
There are problems in certain\nlimited circumstances even with current releases.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34\n\"The ability to monopolize a planet is insignificant\nnext to the power of the source\"\n", "msg_date": "Fri, 23 Sep 2005 13:03:05 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" }, { "msg_contents": "\nBruno Wolff III mentioned :\n=> > => If you have a proper FSM setting you shouldn't need to do vacuum fulls\n=> > => (unless you have an older version of postgres where index bloat might\n=> > => be an issue).\n\nThanks Alvaro and Bruno\n\nI just want to clarify something that I also couldn't \nfind a clear cut answer for before. \n\nWhat is a proper fsm setting? \n\nSomeone told me to set max_fsm_relations to the number of\nrelations in pg_class plus a few more to allow for new relations.\nAnd max_fsm_pages to the number of rows in the biggest table I\nwant to vacuum, plus a few 1000's for extra room?\n\nWhere does this free space map sit? On the disk somewhere,\nor in memory, or both.\n\nI once set the max_fsm_pages very high by mistake, and postgres\nthen started up and used a _lot_ of shared memory, and I had to\nincrease shmmax. Is there abything to watch out for when bumping this\nsetting up a lot?\n\nKind Regards\nStefan \n", "msg_date": "Fri, 23 Sep 2005 19:18:03 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" }, { "msg_contents": "[email protected] (Stef) writes:\n> Bruno Wolff III mentioned :\n> => If you have a proper FSM setting you shouldn't need to do vacuum fulls\n> => (unless you have an older version of postgres where index bloat might\n> => be an issue).\n>\n> What version of postgres was the last version that had\n> the index bloat problem?\n\nI believe that was fixed in 7.3; it was certainly resolved by 7.4...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"cbbrowne.com\")\nhttp://www.ntlug.org/~cbbrowne/spiritual.html\nMICROS~1 has brought the microcomputer OS to the point where it is\nmore bloated than even OSes from what was previously larger classes of\nmachines altogether. This is perhaps Bill's single greatest\naccomplishment.\n", "msg_date": "Fri, 23 Sep 2005 13:48:32 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" }, { "msg_contents": "you can see that at the end of vacuum log (sorry for my english)\n...\nINFO: free space map: 930 relations, 48827 pages stored; 60240 total pages\nneeded -- NEEDED!\n-- I have already configured in postgresql.conf, you can see it below\nDETAIL: Allocated FSM size: 1000 relations + 70000 pages = 475 kB shared\nmemory. 
-- ALLOCATED ACCORDING TO max_fsm_pages , etc\nVACUUM\n\nYou probably must adjust your shared memory, coz the database need it, but\nit depends on your database...\n\n(I could be wrong, I'm learning postgresql, please, feel free to correct me)\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Stef\nEnviado el: viernes, 23 de septiembre de 2005 14:18\nPara: Bruno Wolff III\nCC: Markus Benne; [email protected]\nAsunto: Re: [PERFORM] VACUUM FULL vs CLUSTER\n\n\n\nBruno Wolff III mentioned :\n=> > => If you have a proper FSM setting you shouldn't need to do vacuum\nfulls\n=> > => (unless you have an older version of postgres where index bloat\nmight\n=> > => be an issue).\n\nThanks Alvaro and Bruno\n\nI just want to clarify something that I also couldn't\nfind a clear cut answer for before.\n\nWhat is a proper fsm setting?\n\nSomeone told me to set max_fsm_relations to the number of\nrelations in pg_class plus a few more to allow for new relations.\nAnd max_fsm_pages to the number of rows in the biggest table I\nwant to vacuum, plus a few 1000's for extra room?\n\nWhere does this free space map sit? On the disk somewhere,\nor in memory, or both.\n\nI once set the max_fsm_pages very high by mistake, and postgres\nthen started up and used a _lot_ of shared memory, and I had to\nincrease shmmax. Is there abything to watch out for when bumping this\nsetting up a lot?\n\nKind Regards\nStefan\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 27 Sep 2005 13:21:07 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM FULL vs CLUSTER" } ]
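A concrete way to answer the sizing question, along the lines of Dario's log above: let a database-wide VACUUM VERBOSE report what the free space map actually needs (the map sits in shared memory, which is why an oversized setting forced a bigger shmmax), then keep the limits comfortably above those numbers. The values below are simply the ones from the quoted log, not recommendations:

    VACUUM VERBOSE;
    -- the tail of the output looks like:
    --   INFO:  free space map: 930 relations, 48827 pages stored; 60240 total pages needed
    --   DETAIL:  Allocated FSM size: 1000 relations + 70000 pages = 475 kB shared memory.
    -- then in postgresql.conf (postmaster restart required):
    --   max_fsm_pages = 70000        -- keep above "total pages needed"
    --   max_fsm_relations = 1000     -- keep above the reported relation count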
[ { "msg_contents": "Background: We are running a web application on apache with database\nserver as PostgreSQL. The application is a expense claim application\nwith workflow. The software versions are specified below:\n\n \n\nRed Hat Linux release 7.3\n\nApache 1.3.20\n\nPostgreSQL 7.1.3\n\n \n\nProblem: When the application is accessed by many users like say 40, the\nPostgreSQL database freezes. \n\n \n\nDescription of Process happening behind: The application mainly puts\nload on one table (worklist) where the steps were created for each\nexpense claim initiated. While creating each step for claim there is\nexclusive row lock and at the end inserts a new step. Both these\nstatements are in one transaction. When the apache hangs in between the\ndeadlock remains. In this way there are many deadlocks created which\nmakes the database to finally freeze. To resolve this we were restarting\nthe PostgreSQl db. Sometime the apache also hangs. Then we were\nrestarting the apache. There is no log created in postgreSQL. Whereas\nour application records an error log: 'Failed to gain exclusive table\nrow lock'\n\n \n\nWe were guessing that the database hanging is due to deadlock issue. But\nnot sure of it.\n\n \n\nI have attached the postgreSQL.conf file for your reference to check the\nsettings.\n\n \n\nPlease let me know what might be the reason and how to check and resolve\nit.\n\n \n\nThanks and Best Regards,\n\nAnu", "msg_date": "Wed, 21 Sep 2005 11:44:54 +1000", "msg_from": "\"Anu Kucharlapati\" <[email protected]>", "msg_from_op": true, "msg_subject": "Deadlock Issue with PostgreSQL" }, { "msg_contents": "\"Anu Kucharlapati\" <[email protected]> writes:\n> Red Hat Linux release 7.3\n> Apache 1.3.20\n> PostgreSQL 7.1.3\n\nI'm not sure about Apache, but both the RHL and Postgres versions\nyou are using are stone age --- *please* update. Red Hat stopped\nsupporting that release years ago, and the PG community isn't\nsupporting 7.1.* anymore either. There are too many known problems\nin 7.1.* that are unfixable without a major-version upgrade.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Sep 2005 22:15:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deadlock Issue with PostgreSQL " } ]
[ { "msg_contents": "I currently have a Postgres 7.3 database running under WIN2K using cygwin\nand want to move to Postgres 8.0.3 (native windows version).\nI am finding most simple queries are significantly faster on the native\nwindows version compared to 7.3 (under cygwin).\nHowever, for a complex query, that involve multiple JOINs, the 7.3 version\nis actually faster (about 2X faster).\n\nThe query that I am running was optimized to run under 7.3. It was\nspecifically modified to control the planner with explicit JOINs.\nWhen I run the same query on the 8.0.3 version with the join_collapse_limit\nset to 1 the query is slower.\n\nCan someone tell me why setting the join_collapse_limit to 1 in the 8.0\nversion does not produce similar results to the 7.3 version?\nDoes anyone have any suggestions on what I can do? Do I have to rewrite the\nquery?\n\n\nHere are the results of an explain analyze on the query.\n\nExplain analyze Postgres 7.3 running on WIN2K using cygwin.\n\nHash Join (cost=21808.27..1946264.80 rows=2982 width=1598) (actual\ntime=2186.00..2320.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_internalparentomxref = \"inner\".doc_documentid)\n -> Hash Join (cost=20948.78..1945323.29 rows=2982 width=1534) (actual\ntime=2110.00..2227.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_internalrootxref = \"inner\".doc_documentid)\n -> Hash Join (cost=20089.29..1944381.79 rows=2982 width=1484)\n(actual time=2067.00..2179.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_documentid = \"inner\".doc_documentid)\n Join Filter: (\"inner\".dc_doccontacttype = 'FROM'::character\nvarying)\n -> Hash Join (cost=7455.14..1928613.59 rows=2982\nwidth=1138) (actual time=1216.00..1539.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_documentid =\n\"inner\".doc_documentid)\n Join Filter: (\"inner\".dc_doccontacttype =\n'TO'::character varying)\n -> Hash Join (cost=183.49..1918519.06 rows=2860\nwidth=792) (actual time=64.00..301.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_documentid =\n\"inner\".doc_documentid)\n -> Seq Scan on document finaldoc\n(cost=0.00..1918256.94 rows=2860 width=717) (actual time=13.00..254.00\nrows=50 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=335.27..335.27\nrows=50 width=160) (actual time=0.00..0.01 rows=50 loops=5719)\n -> Limit (cost=0.00..335.27\nrows=50 width=160) (actual time=3.00..8.00 rows=50 loops=1)\n -> Nested Loop\n(cost=0.00..38347.95 rows=5719 width=160) (actual time=3.00..8.00 rows=51\nloops=1)\n -> Merge Join\n(cost=0.00..3910.14 rows=5719 width=120) (actual time=3.00..3.00 rows=51\nloops=1)\n Merge Cond:\n(\"outer\".doc_documentid = \"inner\".doc_documentid)\n -> Index Scan\nusing pk_document on document doc (cost=0.00..3256.48 rows=5719 width=80)\n(actual time=1.00..1.00 rows=51 loops=1)\n -> Index Scan\nusing pk_folder_document on folder_document (cost=0.00..553.91 rows=5719\nwidth=40) (actual time=2.00..2.00 rows=51 loops=1)\n -> Index Scan using\npk_document on document root (cost=0.00..6.01 rows=1 width=40) (actual\ntime=0.10..0.10 rows=1 loops=51)\n Index Cond:\n(\"outer\".doc_internalrootxref = root.doc_documentid)\n -> Hash (cost=169.19..169.19 rows=5719\nwidth=75) (actual time=31.00..31.00 rows=0 loops=1)\n -> Seq Scan on folder_document\n(cost=0.00..169.19 rows=5719 width=75) (actual time=0.00..11.00 rows=5719\nloops=1)\n -> Hash (cost=1328.80..1328.80 rows=34280 width=346)\n(actual time=846.00..846.00 rows=0 loops=1)\n -> Seq Scan on doccontact dcto\n(cost=0.00..1328.80 rows=34280 width=346) (actual time=0.00..175.00\nrows=34280 
loops=1)\n -> Hash (cost=1328.80..1328.80 rows=34280 width=346)\n(actual time=445.00..445.00 rows=0 loops=1)\n -> Seq Scan on doccontact dcfrom (cost=0.00..1328.80\nrows=34280 width=346) (actual time=0.00..223.00 rows=34280 loops=1)\n -> Hash (cost=845.19..845.19 rows=5719 width=50) (actual\ntime=42.00..42.00 rows=0 loops=1)\n -> Seq Scan on document root (cost=0.00..845.19 rows=5719\nwidth=50) (actual time=0.00..2.00 rows=5719 loops=1)\n -> Hash (cost=845.19..845.19 rows=5719 width=64) (actual\ntime=73.00..73.00 rows=0 loops=1)\n -> Seq Scan on document parentom (cost=0.00..845.19 rows=5719\nwidth=64) (actual time=0.00..30.00 rows=5719 loops=1)\n SubPlan\n -> Limit (cost=0.00..5.56 rows=1 width=40) (actual time=0.06..0.06\nrows=0 loops=50)\n -> Result (cost=0.00..7.20 rows=1 width=40) (actual\ntime=0.06..0.06 rows=0 loops=50)\n One-Time Filter: ($0 = true)\n -> Index Scan using documentevent_index on documentevent\nde (cost=0.00..7.20 rows=1 width=40) (actual time=0.07..0.07 rows=0\nloops=44)\n Index Cond: (($1 = doc_documentid) AND\n(de_processedflag = false) AND (de_documenteventstatus = 'ERROR'::character\nvarying))\n -> Limit (cost=0.00..3.86 rows=1 width=40) (actual time=0.10..0.10\nrows=0 loops=50)\n\nExplain analyze Postgres 8.0.3 running natively under WIN2K.\n\nHash IN Join (cost=5293.09..7121.89 rows=50 width=1369) (actual\ntime=1062.000..5558.000 rows=50 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=4798.24..6199.29 rows=5741 width=1369) (actual\ntime=751.000..4236.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_internalparentomxref)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=3956.48..5271.41 rows=5741 width=1345)\n(actual time=541.000..3105.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_internalrootxref)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=3114.72..4343.53 rows=5741\nwidth=1335) (actual time=501.000..2313.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=1649.92..2721.09 rows=5741\nwidth=1039) (actual time=180.000..1342.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=185.13..1098.65\nrows=5741 width=743) (actual time=40.000..592.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text\n= (\"inner\".doc_documentid)::text)\n -> Seq Scan on document finaldoc\n(cost=0.00..827.41 rows=5741 width=708) (actual time=0.000..41.000 rows=5719\nloops=1)\n -> Hash (cost=170.70..170.70 rows=5770\nwidth=75) (actual time=40.000..40.000 rows=0 loops=1)\n -> Seq Scan on folder_document\n(cost=0.00..170.70 rows=5770 width=75) (actual time=0.000..10.000 rows=5719\nloops=1)\n -> Hash (cost=1450.50..1450.50 rows=5718\nwidth=336) (actual time=140.000..140.000 rows=0 loops=1)\n -> Seq Scan on doccontact dcto\n(cost=0.00..1450.50 rows=5718 width=336) (actual time=0.000..130.000\nrows=5718 loops=1)\n Filter: ((dc_doccontacttype)::text =\n'TO'::text)\n -> Hash (cost=1450.50..1450.50 rows=5718 width=336)\n(actual time=321.000..321.000 rows=0 loops=1)\n -> Seq Scan on doccontact dcfrom\n(cost=0.00..1450.50 rows=5718 width=336) (actual time=10.000..291.000\nrows=5718 loops=1)\n Filter: ((dc_doccontacttype)::text =\n'FROM'::text)\n -> Hash (cost=827.41..827.41 rows=5741 width=50) (actual\ntime=40.000..40.000 rows=0 loops=1)\n -> Seq Scan on document root (cost=0.00..827.41\nrows=5741 
width=50) (actual time=0.000..30.000 rows=5719 loops=1)\n -> Hash (cost=827.41..827.41 rows=5741 width=64) (actual\ntime=210.000..210.000 rows=0 loops=1)\n -> Seq Scan on document parentom (cost=0.00..827.41\nrows=5741 width=64) (actual time=0.000..160.000 rows=5719 loops=1)\n -> Hash (cost=494.73..494.73 rows=50 width=42) (actual\ntime=261.000..261.000 rows=0 loops=1)\n -> Subquery Scan \"IN_subquery\" (cost=185.13..494.73 rows=50\nwidth=42) (actual time=101.000..261.000 rows=50 loops=1)\n -> Limit (cost=185.13..494.23 rows=50 width=40) (actual\ntime=101.000..261.000 rows=50 loops=1)\n -> Nested Loop Left Join (cost=185.13..35676.18\nrows=5741 width=40) (actual time=101.000..261.000 rows=50 loops=1)\n -> Hash Left Join (cost=185.13..1098.65\nrows=5741 width=80) (actual time=91.000..91.000 rows=50 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text\n= (\"inner\".doc_documentid)::text)\n -> Seq Scan on document doc\n(cost=0.00..827.41 rows=5741 width=80) (actual time=10.000..10.000 rows=50\nloops=1)\n -> Hash (cost=170.70..170.70 rows=5770\nwidth=40) (actual time=81.000..81.000 rows=0 loops=1)\n -> Seq Scan on folder_document\n(cost=0.00..170.70 rows=5770 width=40) (actual time=10.000..61.000 rows=5719\nloops=1)\n -> Index Scan using pk_document on document root\n(cost=0.00..6.01 rows=1 width=40) (actual time=3.400..3.400 rows=1 loops=50)\n Index Cond:\n((\"outer\".doc_internalrootxref)::text = (root.doc_documentid)::text)\n SubPlan\n -> Limit (cost=0.00..1.96 rows=1 width=40) (actual time=0.400..0.400\nrows=0 loops=50)\n -> Seq Scan on followup_document fd (cost=0.00..3.91 rows=2\nwidth=40) (actual time=0.400..0.400 rows=0 loops=50)\n Filter: (($1)::text = (doc_documentid)::text)\n -> Limit (cost=0.00..6.01 rows=1 width=40) (actual\ntime=17.620..17.620 rows=0 loops=50)\n -> Result (cost=0.00..6.01 rows=1 width=40) (actual\ntime=17.620..17.620 rows=0 loops=50)\n One-Time Filter: ($0 = true)\n -> Index Scan using documentevent_index on documentevent\nde (cost=0.00..6.01 rows=1 width=40) (actual time=28.419..28.419 rows=0\nloops=31)\n Index Cond: ((($1)::text = (doc_documentid)::text)\nAND (de_processedflag = false) AND ((de_documenteventstatus)::text =\n'ERROR'::text))\n Total runtime: 5558.000 ms\n\n", "msg_date": "Wed, 21 Sep 2005 12:38:20 -0700", "msg_from": "\"Gurpreet Aulakh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query slower on 8.0.3 (Windows) vs 7.3 (cygwin)" }, { "msg_contents": "I have started to break my query down and analyze each piece.\nWhat I have discovered is very interesting.\n\nFirst here is a small piece of my query.\n\nEXPLAIN ANALYZE SELECT doc.doc_documentid FROM document AS doc\n\tLEFT JOIN document as root ON doc.doc_internalRootXref =\nroot.doc_documentId\n\tLEFT JOIN folder_document ON doc.doc_documentid =\nfolder_document.doc_documentId LIMIT 500 OFFSET 0\n\nWhen I run this on Postgres 8.0.3 running under windows this is the result\n\nQUERY PLAN\nLimit (cost=183.49..753.41 rows=500 width=40) (actual time=47.000..79.000\nrows=500 loops=1)\n -> Hash Left Join (cost=183.49..6702.23 rows=5719 width=40) (actual\ntime=47.000..79.000 rows=500 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Merge Left Join (cost=0.00..6432.96 rows=5719 width=40) (actual\ntime=0.000..16.000 rows=500 loops=1)\n Merge Cond: ((\"outer\".doc_internalrootxref)::text =\n(\"inner\".doc_documentid)::text)\n -> Index Scan using doc_internalrootxref_index on document\ndoc (cost=0.00..3172.64 rows=5719 width=80) (actual 
time=0.000..0.000\nrows=500 loops=1)\n -> Index Scan using pk_document on document root\n(cost=0.00..3174.53 rows=5719 width=40) (actual time=0.000..0.000 rows=863\nloops=1)\n -> Hash (cost=169.19..169.19 rows=5719 width=40) (actual\ntime=47.000..47.000 rows=0 loops=1)\n -> Seq Scan on folder_document (cost=0.00..169.19 rows=5719\nwidth=40) (actual time=0.000..16.000 rows=5719 loops=1)\nTotal runtime: 79.000 ms\n\nHere is the result of running the same query on the Postgres 7.3 running\nunder Cygwin\n\nQUERY PLAN\nLimit (cost=183.49..775.31 rows=500 width=160) (actual time=13.00..44.00\nrows=500 loops=1)\n -> Hash Join (cost=183.49..6952.79 rows=5719 width=160) (actual\ntime=13.00..44.00 rows=501 loops=1)\n Hash Cond: (\"outer\".doc_documentid = \"inner\".doc_documentid)\n -> Merge Join (cost=0.00..6612.03 rows=5719 width=120) (actual\ntime=0.00..29.00 rows=775 loops=1)\n Merge Cond: (\"outer\".doc_internalrootxref =\n\"inner\".doc_documentid)\n -> Index Scan using doc_internalrootxref_index on document\ndoc (cost=0.00..3254.39 rows=5719 width=80) (actual time=0.00..7.00\nrows=775 loops=1)\n -> Index Scan using pk_document on document root\n(cost=0.00..3257.88 rows=5719 width=40) (actual time=0.00..15.00 rows=1265\nloops=1)\n -> Hash (cost=169.19..169.19 rows=5719 width=40) (actual\ntime=12.00..12.00 rows=0 loops=1)\n -> Seq Scan on folder_document (cost=0.00..169.19 rows=5719\nwidth=40) (actual time=0.00..9.00 rows=5719 loops=1)\nTotal runtime: 45.00 msec\n\nWhat is really interesting is the time it takes for the Hash to occur. For\nthe first hash, on the 7.3 it takes only 12ms while on the 8.0 it takes\n47ms.\nNow the databases are created from the same data and I have run\nvacuumdb -f -z on the databases.\n\nNow I have read something on the archives that stated that perhaps the data\nis in the filesystem (not database) cache. Would this be the case?. If so\nhow would I improve the performance under WIN2K?\n\nAnyone have any ideas?\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Gurpreet\nAulakh\nSent: September 21, 2005 12:38 PM\nTo: [email protected]\nSubject: [PERFORM] Query slower on 8.0.3 (Windows) vs 7.3 (cygwin)\n\n\nI currently have a Postgres 7.3 database running under WIN2K using cygwin\nand want to move to Postgres 8.0.3 (native windows version).\nI am finding most simple queries are significantly faster on the native\nwindows version compared to 7.3 (under cygwin).\nHowever, for a complex query, that involve multiple JOINs, the 7.3 version\nis actually faster (about 2X faster).\n\nThe query that I am running was optimized to run under 7.3. It was\nspecifically modified to control the planner with explicit JOINs.\nWhen I run the same query on the 8.0.3 version with the join_collapse_limit\nset to 1 the query is slower.\n\nCan someone tell me why setting the join_collapse_limit to 1 in the 8.0\nversion does not produce similar results to the 7.3 version?\nDoes anyone have any suggestions on what I can do? 
Do I have to rewrite the\nquery?\n\n\nHere are the results of an explain analyze on the query.\n\nExplain analyze Postgres 7.3 running on WIN2K using cygwin.\n\nHash Join (cost=21808.27..1946264.80 rows=2982 width=1598) (actual\ntime=2186.00..2320.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_internalparentomxref = \"inner\".doc_documentid)\n -> Hash Join (cost=20948.78..1945323.29 rows=2982 width=1534) (actual\ntime=2110.00..2227.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_internalrootxref = \"inner\".doc_documentid)\n -> Hash Join (cost=20089.29..1944381.79 rows=2982 width=1484)\n(actual time=2067.00..2179.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_documentid = \"inner\".doc_documentid)\n Join Filter: (\"inner\".dc_doccontacttype = 'FROM'::character\nvarying)\n -> Hash Join (cost=7455.14..1928613.59 rows=2982\nwidth=1138) (actual time=1216.00..1539.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_documentid =\n\"inner\".doc_documentid)\n Join Filter: (\"inner\".dc_doccontacttype =\n'TO'::character varying)\n -> Hash Join (cost=183.49..1918519.06 rows=2860\nwidth=792) (actual time=64.00..301.00 rows=50 loops=1)\n Hash Cond: (\"outer\".doc_documentid =\n\"inner\".doc_documentid)\n -> Seq Scan on document finaldoc\n(cost=0.00..1918256.94 rows=2860 width=717) (actual time=13.00..254.00\nrows=50 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=335.27..335.27\nrows=50 width=160) (actual time=0.00..0.01 rows=50 loops=5719)\n -> Limit (cost=0.00..335.27\nrows=50 width=160) (actual time=3.00..8.00 rows=50 loops=1)\n -> Nested Loop\n(cost=0.00..38347.95 rows=5719 width=160) (actual time=3.00..8.00 rows=51\nloops=1)\n -> Merge Join\n(cost=0.00..3910.14 rows=5719 width=120) (actual time=3.00..3.00 rows=51\nloops=1)\n Merge Cond:\n(\"outer\".doc_documentid = \"inner\".doc_documentid)\n -> Index Scan\nusing pk_document on document doc (cost=0.00..3256.48 rows=5719 width=80)\n(actual time=1.00..1.00 rows=51 loops=1)\n -> Index Scan\nusing pk_folder_document on folder_document (cost=0.00..553.91 rows=5719\nwidth=40) (actual time=2.00..2.00 rows=51 loops=1)\n -> Index Scan using\npk_document on document root (cost=0.00..6.01 rows=1 width=40) (actual\ntime=0.10..0.10 rows=1 loops=51)\n Index Cond:\n(\"outer\".doc_internalrootxref = root.doc_documentid)\n -> Hash (cost=169.19..169.19 rows=5719\nwidth=75) (actual time=31.00..31.00 rows=0 loops=1)\n -> Seq Scan on folder_document\n(cost=0.00..169.19 rows=5719 width=75) (actual time=0.00..11.00 rows=5719\nloops=1)\n -> Hash (cost=1328.80..1328.80 rows=34280 width=346)\n(actual time=846.00..846.00 rows=0 loops=1)\n -> Seq Scan on doccontact dcto\n(cost=0.00..1328.80 rows=34280 width=346) (actual time=0.00..175.00\nrows=34280 loops=1)\n -> Hash (cost=1328.80..1328.80 rows=34280 width=346)\n(actual time=445.00..445.00 rows=0 loops=1)\n -> Seq Scan on doccontact dcfrom (cost=0.00..1328.80\nrows=34280 width=346) (actual time=0.00..223.00 rows=34280 loops=1)\n -> Hash (cost=845.19..845.19 rows=5719 width=50) (actual\ntime=42.00..42.00 rows=0 loops=1)\n -> Seq Scan on document root (cost=0.00..845.19 rows=5719\nwidth=50) (actual time=0.00..2.00 rows=5719 loops=1)\n -> Hash (cost=845.19..845.19 rows=5719 width=64) (actual\ntime=73.00..73.00 rows=0 loops=1)\n -> Seq Scan on document parentom (cost=0.00..845.19 rows=5719\nwidth=64) (actual time=0.00..30.00 rows=5719 loops=1)\n SubPlan\n -> Limit (cost=0.00..5.56 rows=1 width=40) (actual time=0.06..0.06\nrows=0 loops=50)\n -> Result (cost=0.00..7.20 rows=1 width=40) (actual\ntime=0.06..0.06 rows=0 
loops=50)\n One-Time Filter: ($0 = true)\n -> Index Scan using documentevent_index on documentevent\nde (cost=0.00..7.20 rows=1 width=40) (actual time=0.07..0.07 rows=0\nloops=44)\n Index Cond: (($1 = doc_documentid) AND\n(de_processedflag = false) AND (de_documenteventstatus = 'ERROR'::character\nvarying))\n -> Limit (cost=0.00..3.86 rows=1 width=40) (actual time=0.10..0.10\nrows=0 loops=50)\n\nExplain analyze Postgres 8.0.3 running natively under WIN2K.\n\nHash IN Join (cost=5293.09..7121.89 rows=50 width=1369) (actual\ntime=1062.000..5558.000 rows=50 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=4798.24..6199.29 rows=5741 width=1369) (actual\ntime=751.000..4236.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_internalparentomxref)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=3956.48..5271.41 rows=5741 width=1345)\n(actual time=541.000..3105.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_internalrootxref)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=3114.72..4343.53 rows=5741\nwidth=1335) (actual time=501.000..2313.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=1649.92..2721.09 rows=5741\nwidth=1039) (actual time=180.000..1342.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text =\n(\"inner\".doc_documentid)::text)\n -> Hash Left Join (cost=185.13..1098.65\nrows=5741 width=743) (actual time=40.000..592.000 rows=5719 loops=1)\n Hash Cond: ((\"outer\".doc_documentid)::text\n= (\"inner\".doc_documentid)::text)\n -> Seq Scan on document finaldoc\n(cost=0.00..827.41 rows=5741 width=708) (actual time=0.000..41.000 rows=5719\nloops=1)\n -> Hash (cost=170.70..170.70 rows=5770\nwidth=75) (actual time=40.000..40.000 rows=0 loops=1)\n -> Seq Scan on folder_document\n(cost=0.00..170.70 rows=5770 width=75) (actual time=0.000..10.000 rows=5719\nloops=1)\n -> Hash (cost=1450.50..1450.50 rows=5718\nwidth=336) (actual time=140.000..140.000 rows=0 loops=1)\n -> Seq Scan on doccontact dcto\n(cost=0.00..1450.50 rows=5718 width=336) (actual time=0.000..130.000\nrows=5718 loops=1)\n Filter: ((dc_doccontacttype)::text =\n'TO'::text)\n -> Hash (cost=1450.50..1450.50 rows=5718 width=336)\n(actual time=321.000..321.000 rows=0 loops=1)\n -> Seq Scan on doccontact dcfrom\n(cost=0.00..1450.50 rows=5718 width=336) (actual time=10.000..291.000\nrows=5718 loops=1)\n Filter: ((dc_doccontacttype)::text =\n'FROM'::text)\n -> Hash (cost=827.41..827.41 rows=5741 width=50) (actual\ntime=40.000..40.000 rows=0 loops=1)\n -> Seq Scan on document root (cost=0.00..827.41\nrows=5741 width=50) (actual time=0.000..30.000 rows=5719 loops=1)\n -> Hash (cost=827.41..827.41 rows=5741 width=64) (actual\ntime=210.000..210.000 rows=0 loops=1)\n -> Seq Scan on document parentom (cost=0.00..827.41\nrows=5741 width=64) (actual time=0.000..160.000 rows=5719 loops=1)\n -> Hash (cost=494.73..494.73 rows=50 width=42) (actual\ntime=261.000..261.000 rows=0 loops=1)\n -> Subquery Scan \"IN_subquery\" (cost=185.13..494.73 rows=50\nwidth=42) (actual time=101.000..261.000 rows=50 loops=1)\n -> Limit (cost=185.13..494.23 rows=50 width=40) (actual\ntime=101.000..261.000 rows=50 loops=1)\n -> Nested Loop Left Join (cost=185.13..35676.18\nrows=5741 width=40) (actual time=101.000..261.000 rows=50 loops=1)\n -> Hash Left Join (cost=185.13..1098.65\nrows=5741 width=80) (actual time=91.000..91.000 rows=50 loops=1)\n Hash Cond: 
((\"outer\".doc_documentid)::text\n= (\"inner\".doc_documentid)::text)\n -> Seq Scan on document doc\n(cost=0.00..827.41 rows=5741 width=80) (actual time=10.000..10.000 rows=50\nloops=1)\n -> Hash (cost=170.70..170.70 rows=5770\nwidth=40) (actual time=81.000..81.000 rows=0 loops=1)\n -> Seq Scan on folder_document\n(cost=0.00..170.70 rows=5770 width=40) (actual time=10.000..61.000 rows=5719\nloops=1)\n -> Index Scan using pk_document on document root\n(cost=0.00..6.01 rows=1 width=40) (actual time=3.400..3.400 rows=1 loops=50)\n Index Cond:\n((\"outer\".doc_internalrootxref)::text = (root.doc_documentid)::text)\n SubPlan\n -> Limit (cost=0.00..1.96 rows=1 width=40) (actual time=0.400..0.400\nrows=0 loops=50)\n -> Seq Scan on followup_document fd (cost=0.00..3.91 rows=2\nwidth=40) (actual time=0.400..0.400 rows=0 loops=50)\n Filter: (($1)::text = (doc_documentid)::text)\n -> Limit (cost=0.00..6.01 rows=1 width=40) (actual\ntime=17.620..17.620 rows=0 loops=50)\n -> Result (cost=0.00..6.01 rows=1 width=40) (actual\ntime=17.620..17.620 rows=0 loops=50)\n One-Time Filter: ($0 = true)\n -> Index Scan using documentevent_index on documentevent\nde (cost=0.00..6.01 rows=1 width=40) (actual time=28.419..28.419 rows=0\nloops=31)\n Index Cond: ((($1)::text = (doc_documentid)::text)\nAND (de_processedflag = false) AND ((de_documenteventstatus)::text =\n'ERROR'::text))\n Total runtime: 5558.000 ms\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\n", "msg_date": "Wed, 21 Sep 2005 17:11:17 -0700", "msg_from": "\"Gurpreet Aulakh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin)" }, { "msg_contents": "\"Gurpreet Aulakh\" <[email protected]> writes:\n> What is really interesting is the time it takes for the Hash to occur. For\n> the first hash, on the 7.3 it takes only 12ms while on the 8.0 it takes\n> 47ms.\n\nYou haven't told us a thing about the column datatypes involved (much\nless what the query actually is) ... but I wonder if this is a textual\ndatatype and the 8.0 installation is using a non-C locale where the 7.3\ninstallation is using C locale. 
That could account for a considerable\nslowdown in text comparison speeds.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Sep 2005 23:12:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin) " }, { "msg_contents": "Hi,\n\nHere is the information that you requested.\n\nThe sub query that I am using is\n\nEXPLAIN ANALYZE SELECT doc.doc_documentid FROM document AS doc\n\tLEFT JOIN document as root\n\t\tON doc.doc_internalRootXref = root.doc_documentId\n\tLEFT JOIN folder_document ON doc.doc_documentid =\nfolder_document.doc_documentId\nLIMIT 500 OFFSET 0\n\n\nThe column doc_documentid is character varying(48) on both tables (document,\nfolder_document).\nThe column doc_internalRootXref is also character varying(48)\ndoc_documentid and doc_internalRootXref are UUIDs that is 36 chars long.\n\nThe document table has 58 columns.\n\t31 columns are varchar ranging from size 8 to 80\n\t7 booleans\n\t4 numeric(12,2)\n\t8 timestamp with time zone\n\t1 integer\n\t1 bigint\n\t5 text\n\nThe folder_documen table has 6 columns\n\t4 varchar (2 of length 16 2 of length 48)\n\nThe following indexes are on the document table\n\t pk_document primary key btree (doc_documentid),\n document_pk unique btree (doc_documentid),\n doc_deliverydate_index btree (doc_deliverydate),\n doc_externalxref_index btree (doc_externalxref),\n doc_internalparentomxref_index btree (doc_internalparentomxref),\n doc_internalrootxref_index btree (doc_internalrootxref)\nThe following indexes are on the folder_document table\n\tpk_folder_document primary key btree (doc_documentid)\n fk_folder_document1 FOREIGN KEY (fld_folderid) REFERENCES\nfolder(fld_folderid)\n\t\tON UPDATE RESTRICT ON DELETE CASCADE,\n\tfk_folder_document2 FOREIGN KEY (doc_documentid) REFERENCES\ndocument(doc_documentid)\n\t\tON UPDATE RESTRICT ON DELETE CASCADE\n\nAfter reading your hint about locale settings, I reinstalled postgres and\nmade sure the locale was set\nto C and that the encoding was SQL_ASCII. (these are the settings on the\ncygwin installation).\n\nI still get the same results in the last post.\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: September 21, 2005 8:13 PM\nTo: Gurpreet Aulakh\nCc: [email protected]\nSubject: Re: [PERFORM] Query slower on 8.0.3 (Windows) vs 7.3 (cygwin)\n\n\n\"Gurpreet Aulakh\" <[email protected]> writes:\n> What is really interesting is the time it takes for the Hash to occur. For\n> the first hash, on the 7.3 it takes only 12ms while on the 8.0 it takes\n> 47ms.\n\nYou haven't told us a thing about the column datatypes involved (much\nless what the query actually is) ... but I wonder if this is a textual\ndatatype and the 8.0 installation is using a non-C locale where the 7.3\ninstallation is using C locale. That could account for a considerable\nslowdown in text comparison speeds.\n\n\t\t\tregards, tom lane\n\n\n\n", "msg_date": "Thu, 22 Sep 2005 11:54:11 -0700", "msg_from": "\"Gurpreet Aulakh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin) " }, { "msg_contents": "After further investigation I have found that the reason why the query is\nslower on 8.0.3 is that the hash and hash joins are slower on the 8.0.3.\n\nSo the question comes down to : Why are hash and hash joins slower? Is this\na postgres configuration setting that I am missing? Is the locale still\nscrewing me up? I have set the locale to 'C' without any improvements. 
Is it\nbecause the column type is a varchar that the hash is slower?\n\n\n\n", "msg_date": "Fri, 23 Sep 2005 10:15:36 -0700", "msg_from": "\"Gurpreet Aulakh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin) " }, { "msg_contents": "\"Gurpreet Aulakh\" <[email protected]> writes:\n> After further investigation I have found that the reason why the query is\n> slower on 8.0.3 is that the hash and hash joins are slower on the 8.0.3.\n> So the question comes down to : Why are hash and hash joins slower?\n\nI looked into this a bit and determined that the problem seems to have\nbeen introduced here:\n\n2002-12-30 10:21 tgl\n\n\t* src/: backend/executor/nodeHash.c,\n\tbackend/executor/nodeHashjoin.c, backend/optimizer/path/costsize.c,\n\tinclude/executor/nodeHash.h: Better solution to integer overflow\n\tproblem in hash batch-number computation: reduce the bucket number\n\tmod nbatch. This changes the association between original bucket\n\tnumbers and batches, but that doesn't matter. Minor other cleanups\n\tin hashjoin code to help centralize decisions.\n\n(which means it's present in 7.4 as well as 8.0). The code now\ngroups tuples into hash batches according to\n\t(hashvalue % totalbuckets) % nbatch\nWhen a tuple that is not in the first batch is reloaded, it is placed\ninto a bucket according to\n\t(hashvalue % nbuckets)\nThis means that if totalbuckets, nbatch, and nbuckets have a common\nfactor F, the buckets won't be evenly used; in fact, only one in every F\nbuckets will be used at all, the rest remaining empty. The ones that\nare used accordingly will contain about F times more tuples than\nintended. The slowdown comes from having to compare these extra tuples\nagainst the outer-relation tuples.\n\n7.3 uses a different algorithm for grouping tuples that avoids this\nproblem, but it has performance issues of its own (in particular, to\navoid integer overflow we have to limit the number of batches we can\nhave). So just reverting this patch doesn't seem very attractive.\n\nThe problem no longer exists in 8.1 because of rewrites undertaken for\nanother purpose, so I'm sort of tempted to do nothing. To fix this in\nthe back branches we'd have to develop new code that won't ever go into\nCVS tip and thus will never get beta-tested. The risk of breaking\nthings seems higher than I'd like.\n\nIf we did want to fix it, my first idea is to increment nbatch looking\nfor a value that has no common factor with nbuckets.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 17:12:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin) " }, { "msg_contents": "\n\tHello fellow Postgresql'ers.\n\n\tI've been stumbled on this RAID card which looks nice. It is a PCI-X SATA \nRaid card with 6 channels, and does RAID 0,1,5,10,50.\n\tIt is a HP card with an Adaptec chip on it, and 64 MB cache.\n\n\tHP Part # : 372953-B21\n\tAdaptec Part # : AAR-2610SA/64MB/HP\n\n\tThere' even a picture :\n\thttp://megbytes.free.fr/Sata/DSC05970.JPG\n\n\tI know it isn't as good as a full SCSI system. I just want to know if \nsome of you have had experiences with these, and if this cards belong to \nthe \"slower than no RAID\" camp, like some DELL card we often see mentioned \nhere, or to the \"decent performance for the price\" camp. 
It is to run on a \nLinux.\n\n\tThanks in advance for your time and information.\n", "msg_date": "Sat, 24 Sep 2005 10:34:15 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Advice on RAID card" }, { "msg_contents": "I would consider Software Raid\n\n\nPFC wrote:\n\n>\n> Hello fellow Postgresql'ers.\n>\n> I've been stumbled on this RAID card which looks nice. It is a \n> PCI-X SATA Raid card with 6 channels, and does RAID 0,1,5,10,50.\n> It is a HP card with an Adaptec chip on it, and 64 MB cache.\n>\n> HP Part # : 372953-B21\n> Adaptec Part # : AAR-2610SA/64MB/HP\n>\n> There' even a picture :\n> http://megbytes.free.fr/Sata/DSC05970.JPG\n>\n> I know it isn't as good as a full SCSI system. I just want to know \n> if some of you have had experiences with these, and if this cards \n> belong to the \"slower than no RAID\" camp, like some DELL card we \n> often see mentioned here, or to the \"decent performance for the \n> price\" camp. It is to run on a Linux.\n>\n> Thanks in advance for your time and information.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n-- \n--------------------------\nCanaan Surfing Ltd.\nInternet Service Providers\nBen-Nes Michael - Manager\nTel: 972-4-6991122\nCel: 972-52-8555757\nFax: 972-4-6990098\nhttp://www.canaan.net.il\n--------------------------\n\n", "msg_date": "Sun, 25 Sep 2005 13:17:27 +0300", "msg_from": "Michael Ben-Nes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "I would think software raid would be quite inappropriate considering \npostgres when it is working is taking a fair amount of CPU as would \nsoftware RAID. Does anyone know if this is really the case ?\n\nDave\nOn 25-Sep-05, at 6:17 AM, Michael Ben-Nes wrote:\n\n> I would consider Software Raid\n>\n>\n> PFC wrote:\n>\n>\n>>\n>> Hello fellow Postgresql'ers.\n>>\n>> I've been stumbled on this RAID card which looks nice. It is a \n>> PCI-X SATA Raid card with 6 channels, and does RAID 0,1,5,10,50.\n>> It is a HP card with an Adaptec chip on it, and 64 MB cache.\n>>\n>> HP Part # : 372953-B21\n>> Adaptec Part # : AAR-2610SA/64MB/HP\n>>\n>> There' even a picture :\n>> http://megbytes.free.fr/Sata/DSC05970.JPG\n>>\n>> I know it isn't as good as a full SCSI system. I just want to \n>> know if some of you have had experiences with these, and if this \n>> cards belong to the \"slower than no RAID\" camp, like some DELL \n>> card we often see mentioned here, or to the \"decent performance \n>> for the price\" camp. 
It is to run on a Linux.\n>>\n>> Thanks in advance for your time and information.\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>\n>\n> -- \n> --------------------------\n> Canaan Surfing Ltd.\n> Internet Service Providers\n> Ben-Nes Michael - Manager\n> Tel: 972-4-6991122\n> Cel: 972-52-8555757\n> Fax: 972-4-6990098\n> http://www.canaan.net.il\n> --------------------------\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>\n\n", "msg_date": "Sun, 25 Sep 2005 10:57:56 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "On 9/25/05, Dave Cramer <[email protected]> wrote:\n> I would think software raid would be quite inappropriate considering\n> postgres when it is working is taking a fair amount of CPU as would\n> software RAID. Does anyone know if this is really the case ?\n>\n\nI attempted to get some extra speed out of my Compaq/HP SA6404 card by\nusing software RAID1 across to hardware RAID10 sets. It didn't help,\nbut there was no noticeable load or drop in performance because of it.\n Granted, this was on a 4-way Opteron, but, anecdotally speaking, the\nlinux software RAID has surprisingly low overhead.\n\nMy $0.02, hope it helps.\n\n--\nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Sun, 25 Sep 2005 15:27:31 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "Dave Cramer wrote:\n\n> I would think software raid would be quite inappropriate considering \n> postgres when it is working is taking a fair amount of CPU as would \n> software RAID. Does anyone know if this is really the case ?\n\nThe common explanation is that CPUs are so fast now that it doesn't make \na difference.\n From my experience software raid works very, very well. However I have \nnever put\nsoftware raid on anything that is very heavily loaded.\n\nI would still use hardware raid if it is very heavily loaded.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n", "msg_date": "Sun, 25 Sep 2005 08:42:51 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "On Sun, Sep 25, 2005 at 10:57:56AM -0400, Dave Cramer wrote:\n>I would think software raid would be quite inappropriate considering \n>postgres when it is working is taking a fair amount of CPU as would \n>software RAID. Does anyone know if this is really the case ?\n\nIt's not. 
Modern cpu's can handle raid operations without even noticing.\nAt the point where your raid ops become a significant fraction of the\ncpu you'll be i/o bound anyway.\n\nMike Stone\n", "msg_date": "Sun, 25 Sep 2005 11:59:09 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "\n\n> The common explanation is that CPUs are so fast now that it doesn't make \n> a difference.\n> From my experience software raid works very, very well. However I have \n> never put\n> software raid on anything that is very heavily loaded.\n\n\tEven for RAID5 ? it uses a bit more CPU for the parity calculations.\n\tAn advantage of software raid, is that if the RAID card dies, you have to \nbuy the same one ; whether I think that you can transfer a bunch of \nsoftware RAID5 disks to another machine if the machine they're in dies...\n", "msg_date": "Sun, 25 Sep 2005 18:08:27 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "\n>\n> Even for RAID5 ? it uses a bit more CPU for the parity calculations.\n\nI honestly can't speak to RAID 5. I don't (and won't) use it. RAID 5 is \na little brutal when under\nheavy write load. I use either 1, or 10.\n\n> An advantage of software raid, is that if the RAID card dies, you \n> have to buy the same one ; whether I think that you can transfer a \n> bunch of software RAID5 disks to another machine if the machine \n> they're in dies...\n\nThere is a huge advantage to software raid on all kinds of levels. If \nyou have the CPU then I suggest\nit. However you will never get the performance out of software raid on \nthe high level (think 1 gig of cache)\nthat you would on a software raid setup.\n\nIt is a bit of a tradeoff but for most installations software raid is \nmore than adequate.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n", "msg_date": "Sun, 25 Sep 2005 09:22:02 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "\n> There is a huge advantage to software raid on all kinds of levels. If \n> you have the CPU then I suggest\n> it. However you will never get the performance out of software raid on \n> the high level (think 1 gig of cache)\n> that you would on a software raid setup.\n>\n> It is a bit of a tradeoff but for most installations software raid is \n> more than adequate.\n\n\tWhich makes me think that I will use Software Raid 5 and convert the \nprice of the card into RAM.\n\tThis should be nice for a budget server.\n\tGonna investigate now if Linux software RAID5 is rugged enough. Can \nalways buy the a card later if not.\n\n\tThanks all for the advice, you were really helpful.\n", "msg_date": "Sun, 25 Sep 2005 18:53:57 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "PFC <[email protected]> writes:\n\n> \tWhich makes me think that I will use Software Raid 5 and convert the\n> price of the card into RAM.\n> \tThis should be nice for a budget server.\n> \tGonna investigate now if Linux software RAID5 is rugged enough. 
Can\n> always buy the a card later if not.\n\nRaid 5 is perhaps the exception here. For Raid 5 a substantial amount of CPU\npower is needed.\n\nAlso, Raid 5 is particularly inappropriate for write-heavy Database traffic.\nRaid 5 actually hurts write latency dramatically and Databases are very\nsensitive to latency.\n\nOn the other hand if your database is primarily read-only then Raid 5 may not\nbe a problem and may be faster than raid 1+0.\n\n-- \ngreg\n\n", "msg_date": "25 Sep 2005 13:41:06 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "On Sun, Sep 25, 2005 at 01:41:06PM -0400, Greg Stark wrote:\n>Also, Raid 5 is particularly inappropriate for write-heavy Database traffic.\n>Raid 5 actually hurts write latency dramatically and Databases are very\n>sensitive to latency.\n\nSoftware raid 5 actually may have an advantage here. The main cause for\nhigh raid5 write latency is the necessity of having blocks from each\ndisk available to calculate the parity. The chances of a pc with several\ngigs of ram having all the blocks cached (thus not requiring any reads)\nare higher than on a hardware raid with several hundred megs of ram. \n\nMike Stone\n", "msg_date": "Sun, 25 Sep 2005 14:13:52 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "On Sun, Sep 25, 2005 at 06:53:57PM +0200, PFC wrote:\n> \tGonna investigate now if Linux software RAID5 is rugged enough. Can \n> always buy the a card later if not.\n\nNote that 2.6.13 and 2.6.14 have several improvements to the software RAID\ncode, some with regard to ruggedness. You might want to read the changelogs.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 25 Sep 2005 22:54:46 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "[email protected] (\"Joshua D. Drake\") writes:\n> There is a huge advantage to software raid on all kinds of\n> levels. If you have the CPU then I suggest it. However you will\n> never get the performance out of software raid on the high level\n> (think 1 gig of cache) that you would on a software raid setup.\n\nThis appears to be a case where the \"ludicrous MHz increases\" on\ndesktop CPUs has actually provided a material benefit.\n\nThe sorts of embedded controllers typically used on RAID controllers\nare StrongARMs and i960s, and, well, 250MHz is actually fast for\nthese.\n\nWhen AMD and Intel fight over adding gigahertz and megabytes of cache\nto their chips, this means that the RAID work can get pushed over to\none's \"main CPU\" without chewing up terribly much of its bandwidth.\n\nThat says to me that in the absence of battery backed cache, it's not\nworth having a \"bottom-end\" RAID controller. Particularly if the\ndeath of the controller would be likely to kill your data.\n\nBattery-backed cache changes the value proposition, of course...\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://cbbrowne.com/info/linuxdistributions.html\nAll generalizations are false, including this one. 
\n", "msg_date": "Sun, 25 Sep 2005 20:10:00 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "Thanks for your help Tom.\n\nWhile testing 8.1, I found that simple joins take longer in 8.1 than 8.0.\nFor example the sub query\n\nSELECT doc.doc_documentid FROM document AS doc LEFT JOIN folder_document ON\ndoc.doc_documentid = folder_document.doc_documentId LEFT JOIN document as\nroot ON doc.doc_internalRootXref = root.doc_documentId\n\nis actually slower on 8.1 than 8.0.\n\nHowever, the full query that I will be running is much faster. In my\nevaluation I found the same pattern. That simple joins were slower but\ncomplex joins were faster.\n\nOverall though, 8.1 is faster and we will probably be moving to it when it's\nofficially released.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: September 23, 2005 2:13 PM\nTo: Gurpreet Aulakh\nCc: [email protected]\nSubject: Re: [PERFORM] Query slower on 8.0.3 (Windows) vs 7.3 (cygwin)\n\n\n\"Gurpreet Aulakh\" <[email protected]> writes:\n> After further investigation I have found that the reason why the query is\n> slower on 8.0.3 is that the hash and hash joins are slower on the 8.0.3.\n> So the question comes down to : Why are hash and hash joins slower?\n\nI looked into this a bit and determined that the problem seems to have\nbeen introduced here:\n\n2002-12-30 10:21 tgl\n\n\t* src/: backend/executor/nodeHash.c,\n\tbackend/executor/nodeHashjoin.c, backend/optimizer/path/costsize.c,\n\tinclude/executor/nodeHash.h: Better solution to integer overflow\n\tproblem in hash batch-number computation: reduce the bucket number\n\tmod nbatch. This changes the association between original bucket\n\tnumbers and batches, but that doesn't matter. Minor other cleanups\n\tin hashjoin code to help centralize decisions.\n\n(which means it's present in 7.4 as well as 8.0). The code now\ngroups tuples into hash batches according to\n\t(hashvalue % totalbuckets) % nbatch\nWhen a tuple that is not in the first batch is reloaded, it is placed\ninto a bucket according to\n\t(hashvalue % nbuckets)\nThis means that if totalbuckets, nbatch, and nbuckets have a common\nfactor F, the buckets won't be evenly used; in fact, only one in every F\nbuckets will be used at all, the rest remaining empty. The ones that\nare used accordingly will contain about F times more tuples than\nintended. The slowdown comes from having to compare these extra tuples\nagainst the outer-relation tuples.\n\n7.3 uses a different algorithm for grouping tuples that avoids this\nproblem, but it has performance issues of its own (in particular, to\navoid integer overflow we have to limit the number of batches we can\nhave). So just reverting this patch doesn't seem very attractive.\n\nThe problem no longer exists in 8.1 because of rewrites undertaken for\nanother purpose, so I'm sort of tempted to do nothing. To fix this in\nthe back branches we'd have to develop new code that won't ever go into\nCVS tip and thus will never get beta-tested. 
The risk of breaking\nthings seems higher than I'd like.\n\nIf we did want to fix it, my first idea is to increment nbatch looking\nfor a value that has no common factor with nbuckets.\n\n\t\t\tregards, tom lane\n\n\n\n", "msg_date": "Mon, 26 Sep 2005 10:10:56 -0700", "msg_from": "\"Gurpreet Aulakh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin) " }, { "msg_contents": "\"Gurpreet Aulakh\" <[email protected]> writes:\n> While testing 8.1, I found that simple joins take longer in 8.1 than 8.0.\n> For example the sub query\n> SELECT doc.doc_documentid FROM document AS doc LEFT JOIN folder_document ON\n> doc.doc_documentid = folder_document.doc_documentId LEFT JOIN document as\n> root ON doc.doc_internalRootXref = root.doc_documentId\n> is actually slower on 8.1 than 8.0.\n\nWith no more detail than that, this report is utterly unhelpful. Let's\nsee the table schemas and the EXPLAIN ANALYZE results in both cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Sep 2005 13:41:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slower on 8.0.3 (Windows) vs 7.3 (cygwin) " } ]
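A hedged follow-up sketch, not taken from the thread above: the batch-numbering slowdown Tom describes can only bite when the hash join spills into more than one batch, so one quick test on 8.0/8.1 is to give the session a larger work_mem, re-run EXPLAIN ANALYZE on the simplified join, and see whether the hash join time changes. The memory value below is only an example; on 7.3 the setting is sort_mem instead.

-- session-local test; value is in kilobytes on 8.0/8.1
SET work_mem = 65536;
EXPLAIN ANALYZE
SELECT doc.doc_documentid
FROM document AS doc
LEFT JOIN folder_document
  ON doc.doc_documentid = folder_document.doc_documentId
LEFT JOIN document AS root
  ON doc.doc_internalRootXref = root.doc_documentId;
RESET work_mem;

If the plan keeps the same hash join but runs much faster with the larger setting, the multi-batch behaviour (rather than the join strategy itself) is the likely culprit.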
[ { "msg_contents": "I have about 419804 rows in my article table. I have installed tsearch2 and\nits gist index correctly. \n\nMy table structure is:\n\nCREATE TABLE tbarticles\n\n(\n\n articleid int4 NOT NULL,\n\n title varchar(250),\n\n mediaid int4,\n\n datee date,\n\n content text,\n\n contentvar text,\n\n mmcol float4 NOT NULL,\n\n sirkulasi float4,\n\n page varchar(10),\n\n tglisidata date,\n\n namapc varchar(12),\n\n usere varchar(12),\n\n file_pdf varchar(255),\n\n file_pdf2 varchar(50),\n\n kolom int4,\n\n size_jpeg int4,\n\n journalist varchar(120),\n\n ratebw float4,\n\n ratefc float4,\n\n fti tsvector, \n\n CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n\n) WITHOUT OIDS;\n\nCreate index fti_idx1 on tbarticles using gist (fti);\n\nCreate index fti_idx2 on tbarticles using gist (datee, fti);\n\n \n\nBut when I search something like:\n\nSelect articleid, title, datee from tbarticles where fti @@\nto_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n\nIt takes about 30 sec. I run explain analyze and the index is used\ncorrectly.\n\n \n\nThen I try multi column index to filter by date, and my query something\nlike:\n\nSelect articleid, title, datee from tbarticles where fti @@\nto_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >= '2002-01-01'\nand datee <= current_date\n\nAn it still run about 25 sec. I do run explain analyze and my multicolumn\nindex is used correctly.\n\nThis is not acceptable if want to publish my website if the search took very\nlonger.\n\n \n\nI have run vacuum full analyze before doing such query. What going wrong\nwith my query?? Is there any way to make this faster?\n\nI have try to tune my postgres configuration, but it seem helpless. My linux\nbox is Redhat 4 AS, and \n\nthe hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and configure as\nRAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n\n \n\nPlease.help.help.\n\n\n\n\n\n\n\n\n\n\nI have about 419804 rows in my article table. I have\ninstalled tsearch2 and its gist index correctly. \nMy table structure is:\nCREATE TABLE tbarticles\n(\n  articleid int4 NOT NULL,\n  title varchar(250),\n  mediaid int4,\n  datee date,\n  content text,\n  contentvar text,\n  mmcol float4 NOT NULL,\n  sirkulasi float4,\n  page varchar(10),\n  tglisidata date,\n  namapc varchar(12),\n  usere varchar(12),\n  file_pdf varchar(255),\n  file_pdf2 varchar(50),\n  kolom int4,\n  size_jpeg int4,\n  journalist varchar(120),\n  ratebw float4,\n  ratefc float4,\n  fti tsvector, \n  CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n) WITHOUT OIDS;\nCreate index fti_idx1 on tbarticles using gist (fti);\nCreate index fti_idx2 on tbarticles using gist (datee, fti);\n \nBut when I search something like:\nSelect articleid, title, datee from tbarticles where fti @@\nto_tsquery(‘susilo&bambang&yudhoyono&jusuf&kalla’);\nIt takes about 30 sec. I run explain analyze and the index\nis used correctly.\n \nThen I try multi column index to filter by date, and my\nquery something like:\nSelect articleid, title, datee from tbarticles where fti @@\nto_tsquery(‘susilo&bambang&yudhoyono&jusuf&kalla’)\nand datee >= '2002-01-01' and datee <= current_date\nAn it still run about 25 sec. I do run explain analyze and\nmy multicolumn index is used correctly.\nThis is not acceptable if want to publish my website if the\nsearch took very longer.\n \nI have run vacuum full analyze before doing such query. What\ngoing wrong with my query?? 
Is there any way to make this faster?\nI have try to tune my postgres configuration, but it seem\nhelpless. My linux box is Redhat 4 AS, and \nthe hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and\nconfigure as RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200\nRPM.\n \nPlease…help…help…", "msg_date": "Thu, 22 Sep 2005 05:08:12 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": true, "msg_subject": "tsearch2 seem very slow" }, { "msg_contents": "Ahmad,\n\nhow fast is repeated runs ? First time system could be very slow.\nAlso, have you checked my page\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\nand some info about tsearch2 internals\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n\n \tOleg\nOn Thu, 22 Sep 2005, Ahmad Fajar wrote:\n\n> I have about 419804 rows in my article table. I have installed tsearch2 and\n> its gist index correctly.\n>\n> My table structure is:\n>\n> CREATE TABLE tbarticles\n>\n> (\n>\n> articleid int4 NOT NULL,\n>\n> title varchar(250),\n>\n> mediaid int4,\n>\n> datee date,\n>\n> content text,\n>\n> contentvar text,\n>\n> mmcol float4 NOT NULL,\n>\n> sirkulasi float4,\n>\n> page varchar(10),\n>\n> tglisidata date,\n>\n> namapc varchar(12),\n>\n> usere varchar(12),\n>\n> file_pdf varchar(255),\n>\n> file_pdf2 varchar(50),\n>\n> kolom int4,\n>\n> size_jpeg int4,\n>\n> journalist varchar(120),\n>\n> ratebw float4,\n>\n> ratefc float4,\n>\n> fti tsvector,\n>\n> CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n>\n> ) WITHOUT OIDS;\n>\n> Create index fti_idx1 on tbarticles using gist (fti);\n>\n> Create index fti_idx2 on tbarticles using gist (datee, fti);\n>\n>\n>\n> But when I search something like:\n>\n> Select articleid, title, datee from tbarticles where fti @@\n> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n>\n> It takes about 30 sec. I run explain analyze and the index is used\n> correctly.\n>\n>\n>\n> Then I try multi column index to filter by date, and my query something\n> like:\n>\n> Select articleid, title, datee from tbarticles where fti @@\n> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >= '2002-01-01'\n> and datee <= current_date\n>\n> An it still run about 25 sec. I do run explain analyze and my multicolumn\n> index is used correctly.\n>\n> This is not acceptable if want to publish my website if the search took very\n> longer.\n>\n>\n>\n> I have run vacuum full analyze before doing such query. What going wrong\n> with my query?? Is there any way to make this faster?\n>\n> I have try to tune my postgres configuration, but it seem helpless. My linux\n> box is Redhat 4 AS, and\n>\n> the hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and configure as\n> RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n>\n>\n>\n> Please.help.help.\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Fri, 23 Sep 2005 11:35:48 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch2 seem very slow" } ]
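A rough test sketch along the lines Oleg suggests (the extra index name and the repetition advice are illustrative assumptions, not something from the thread): time a cold run against two or three immediately repeated runs to separate the GiST search cost from disk I/O, and give the planner a plain btree index on datee as an alternative to the multicolumn GiST index.

\timing
EXPLAIN ANALYZE
SELECT articleid, title, datee
FROM tbarticles
WHERE fti @@ to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla')
  AND datee >= '2002-01-01' AND datee <= current_date;
-- repeat the same statement a few times: if only the first run is slow,
-- the time is going into reading cold index/heap pages, not into tsearch2
CREATE INDEX fti_idx3 ON tbarticles (datee);
ANALYZE tbarticles;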
[ { "msg_contents": "> >I previously posted the following as a sequel to my SELECT DISTINCT\n> >Performance Issue question. We would most appreciate any clue or\n> >suggestions on how to overcome this show-stopping issue. We are using\n> >8.0.3 on Windows.\n> >\n> >Is it a known limitation when using a view with SELECT ... LIMIT 1?\n> >\n> >Would the forthcoming performance enhancement with MAX help when used\n> >within a view, as in:\n> >\n> >create or replace view VCurPlayer as select * from Player a\n> >where a.AtDate = (select Max(b.AtDate) from Player b where a.PlayerID\n=\n> >b.PlayerID);\n\nHere is a trick I use sometimes with views, etc. This may or may not be\neffective to solve your problem but it's worth a shot. Create one small\nSQL function taking date, etc. and returning the values and define it\nimmutable. Now in-query it is treated like a constant.\n\nAnother useful application for this feature is when you have nested\nviews (view 1 queries view 2) and you need to filter records based on\nfields from view 2 which are not returned in view 1. Impossible? \n\nin view 2 add clause where v2.f between f_min() and f_max(), them being\nimmutable functions which can grab filter criteria based on inputs or\nvalues from a table.\n\nMerlin\n", "msg_date": "Thu, 22 Sep 2005 10:37:08 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "At 22:37 05/09/22, Merlin Moncure wrote:\n\n> > >create or replace view VCurPlayer as select * from Player a\n> > >where a.AtDate = (select Max(b.AtDate) from Player b where a.PlayerID=\n> > >b.PlayerID);\n>\n>Here is a trick I use sometimes with views, etc. This may or may not be\n>effective to solve your problem but it's worth a shot. Create one small\n>SQL function taking date, etc. and returning the values and define it\n>immutable. Now in-query it is treated like a constant.\n\nWe don't use functions as a rule, but I would be glad to give it a try.\nI would most appreciate if you could define a sample function and rewrite \nthe VCurPlayer view above. Both PlayerID and AtDate are varchar fields.\n\n>Another useful application for this feature is when you have nested\n>views (view 1 queries view 2) and you need to filter records based on\n>fields from view 2 which are not returned in view 1. Impossible?\n>\n>in view 2 add clause where v2.f between f_min() and f_max(), them being\n>immutable functions which can grab filter criteria based on inputs or\n>values from a table.\n>\n>Merlin\n\nBest regards,\nKC. \n\n", "msg_date": "Thu, 22 Sep 2005 22:56:47 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" } ]
[ { "msg_contents": "> >Here is a trick I use sometimes with views, etc. This may or may not\nbe\n> >effective to solve your problem but it's worth a shot. Create one\nsmall\n> >SQL function taking date, etc. and returning the values and define it\n> >immutable. Now in-query it is treated like a constant.\n> \n> We don't use functions as a rule, but I would be glad to give it a\ntry.\n> I would most appreciate if you could define a sample function and\nrewrite\n> the VCurPlayer view above. Both PlayerID and AtDate are varchar\nfields.\n\n> esdt=> explain analyze select PlayerID,AtDate from Player a\n> where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> where b.PlayerID = '22220' order by b.PlayerID desc, b.AtDate desc\nLIMIT 1\ntry:\n\ncreate function player_max_at_date (varchar) returns date as \n$$\n\tselect atdate from player where playerid = $1 order by playerid\ndesc, AtDate desc limit 1;\n$$ language sql immutable;\n\ncreate view v as select playerid, player_max_at_date(playerid) from\nplayer;\nselect * from v where playerid = 'x'; --etc\n\nnote: this function is not really immutable. try with both 'immutable'\nand 'stable' if performance is same, do stable.\n\nYou're welcome in advance, ;)\nMerlin\n\n\n", "msg_date": "Thu, 22 Sep 2005 14:07:38 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "Thank you all for your suggestions. I' tried, with some variations too, but \nstill no success. The times given are the best of a few repeated tries on \nan 8.1 beta 2 db freshly migrated from 8.0.3 on Windows.\n\nFor reference, only the following gets the record quickly:\n\nesdt=> explain analyze select PlayerID,AtDate from Player a\n where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n where b.PlayerID = '22220' order by b.PlayerID desc, b.AtDate desc LIMIT 1);\n\n Index Scan using pk_player on player a (cost=0.75..4.26 rows=1 width=23) \n(actual time=0.054..0.057 rows=1 loops=1)\n Index Cond: (((playerid)::text = '22220'::text) AND ((atdate)::text = \n($0)::text))\n InitPlan\n -> Limit (cost=0.00..0.75 rows=1 width=23) (actual \ntime=0.027..0.028 rows=1 loops=1)\n -> Index Scan Backward using pk_player on player \nb (cost=0.00..1323.05 rows=1756 width=23) (actual time=0.023..0.023 rows=1 \nloops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Total runtime: 0.132 ms\n\nAt 02:19 05/09/23, Kevin Grittner wrote:\n>Have you tried the \"best choice\" pattern -- where you select the set of\n>candidate rows and then exclude those for which a better choice\n>exists within the set? 
I often get better results with this pattern than\n>with the alternatives.\n\nesdt=> explain analyze select PlayerID,AtDate from Player a where \nPlayerID='22220'\nand not exists (select * from Player b where b.PlayerID = a.PlayerID and \nb.AtDate > a.AtDate);\n\n Index Scan using pk_player on player a (cost=0.00..3032.46 rows=878 \nwidth=23)\n(actual time=35.820..35.823 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using pk_player on player b (cost=0.00..378.68 \nrows=389 width=776) (actual time=0.013..0.013 rows=1 loops=1743)\n Index Cond: (((playerid)::text = ($0)::text) AND \n((atdate)::text > ($1)::text))\n Total runtime: 35.950 ms\n\nNote that it is faster than the LIMIT 1:\n\nesdt=> explain analyze select PlayerID,AtDate from Player a where \nPlayerID='22220' and AtDate = (select b.AtDate from Pl\nayer b where b.PlayerID = a.PlayerID order by b.PlayerID desc, b.AtDate \ndesc LIMIT 1);\n\n Index Scan using pk_player on player a (cost=0.00..2789.07 rows=9 \nwidth=23) (actual time=41.366..41.371 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = ((subplan))::text)\n SubPlan\n -> Limit (cost=0.00..0.83 rows=1 width=23) (actual \ntime=0.013..0.014 rows=1 loops=1743)\n -> Index Scan Backward using pk_player on player \nb (cost=0.00..970.53 rows=1166 width=23) (actual time=0.008..0.008 rows=1 \nloops=1743)\n Index Cond: ((playerid)::text = ($0)::text)\n Total runtime: 41.490 ms\n\nAt 02:07 05/09/23, Merlin Moncure wrote:\n> > >Here is a trick I use sometimes with views, etc. This may or may not be\n> > >effective to solve your problem but it's worth a shot. Create one small\n> > >SQL function taking date, etc. and returning the values and define it\n> > >immutable. 
Now in-query it is treated like a constant.\n\nesdt=> create or replace function player_max_atdate (varchar(32)) returns \nvarchar(32) as $$\nesdt$> select atdate from player where playerid = $1 order by playerid \ndesc, AtDate desc limit 1;\nesdt$> $$ language sql immutable;\nCREATE FUNCTION\nesdt=> create or replace view VCurPlayer3 as select * from Player where \nAtDate = player_max_atdate(PlayerID);\nCREATE VIEW\nesdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \nPlayerID='22220';\n\n Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \nwidth=23) (actual time=65.434..65.439 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n Total runtime: 65.508 ms\n\nWhile it says loops=1, the time suggests that it is going through all 1743 \nrecords for that PlayerID.\n\nI tried to simulate the fast subquery inside the function, but it is taking \nalmost twice as much time:\n\nesdt=> create or replace function player_max_atdate (varchar(32)) returns \nvarchar(32) as $$\nesdt$> select atdate from player a where playerid = $1 and AtDate = \n(select b.AtDate from Player b\nesdt$> where b.PlayerID = $1 order by b.PlayerID desc, b.AtDate desc LIMIT 1);\nesdt$> $$ language sql immutable;\nCREATE FUNCTION\nesdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \nPlayerID='22220';\n\n Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \nwidth=23) (actual time=119.369..119.373 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n Total runtime: 119.441 ms\n\nAdding another LIMIT 1 inside the function makes it even slower:\n\nesdt=> create or replace function player_max_atdate (varchar(32)) returns \nvarchar(32) as $$\nesdt$> select atdate from player where playerid = $1 and AtDate = (select \nb.AtDate from Player b\nesdt$> where b.PlayerID = $1 order by b.PlayerID desc, b.AtDate desc LIMIT 1)\nesdt$> order by PlayerID desc, AtDate desc LIMIT 1;\nesdt$> $$ language sql immutable;\nCREATE FUNCTION\nesdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \nPlayerID='22220';\n\n Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \nwidth=23) (actual time=129.858..129.863 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n Total runtime: 129.906 ms\n\nAt 00:16 05/09/23, Simon Riggs wrote:\n>If the current value is used so often, use two tables - one with a\n>current view only of the row maintained using UPDATE. Different\n>performance issues maybe, but at least not correlated subquery ones.\n\nMany of our tables have similar construct and it would be a huge task to \nduplicate and maintain all these tables throughout the system. We would \nprefer a solution with SQL or function at the view or db level, or better \nstill, a fix, if this problem is considered general enough.\n\n>You're welcome in advance, ;)\n>Merlin\n\nThank you all in advance for any further ideas.\nKC.\n\n", "msg_date": "Fri, 23 Sep 2005 16:53:55 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "K C Lau wrote:\n> Thank you all for your suggestions. I' tried, with some variations too, \n> but still no success. 
The times given are the best of a few repeated \n> tries on an 8.1 beta 2 db freshly migrated from 8.0.3 on Windows.\n> \n\nA small denormalization, where you mark the row with the latest atdate \nfor each playerid may get you the performance you want.\n\ne.g: (8.1beta1)\n\nALTER TABLE player ADD islastatdate boolean;\n\nUPDATE player SET islastatdate = true where (playerid,atdate) IN\n(SELECT playerid, atdate FROM vcurplayer);\n\nCREATE OR REPLACE VIEW vcurplayer AS\nSELECT * FROM player a\nWHERE islastatdate;\n\nCREATE INDEX player_id_lastatdate ON player(playerid, islastatdate)\nWHERE islastatdate;\n\nANALYZE player;\n\nGenerating some test data produced:\n\nEXPLAIN ANALYZE\nSELECT playerid,atdate\nFROM vcurplayer\nWHERE playerid='22220';\n\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------------------------------\n Index Scan using player_id_lastatdate on player a (cost=0.00..4.33 \nrows=1 width=13) (actual time=0.142..0.149 rows=1 loops=1)\n Index Cond: ((playerid = '22220'::text) AND (lastatdate = true))\n Filter: lastatdate\n Total runtime: 0.272 ms\n(4 rows)\n\nWhereas with the original view definition:\n\nCREATE OR REPLACE VIEW vcurplayer AS\nSELECT * FROM player a\nWHERE a.atdate =\n( SELECT max(b.atdate) FROM player b\n WHERE a.playerid = b.playerid);\n\nEXPLAIN ANALYZE\nSELECT playerid,atdate\nFROM vcurplayer\nWHERE playerid='22220';\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using player_id_date on player a (cost=0.00..7399.23 \nrows=11 width=13) (actual time=121.738..121.745 rows=1 loops=1)\n Index Cond: (playerid = '22220'::text)\n Filter: (atdate = (subplan))\n SubPlan\n -> Result (cost=1.72..1.73 rows=1 width=0) (actual \ntime=0.044..0.047 rows=1 loops=2000)\n InitPlan\n -> Limit (cost=0.00..1.72 rows=1 width=4) (actual \ntime=0.028..0.031 rows=1 loops=2000)\n -> Index Scan Backward using player_id_date on \nplayer b (cost=0.00..3787.94 rows=2198 width=4) (actual \ntime=0.019..0.019 rows=1 loops=2000)\n Index Cond: ($0 = playerid)\n Filter: (atdate IS NOT NULL)\n Total runtime: 121.916 ms\n(11 rows)\n\nNote that my generated data has too many rows for each playerid, but the \n difference in performance should illustrate the idea.\n\nCheers\n\nMark\n", "msg_date": "Sat, 24 Sep 2005 13:40:25 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "Dear Mark,\n\nThank you. That seems like a more manageable alternative if nothing else \nworks out. It should cover many of the OLTP update transactions. 
But it \ndoes mean quite a bit of programming changes and adding another index on \nall such tables, and it would not cover those cases when we need to get the \nlatest record before a certain time, for example.\n\nI'm wondering if this performance issue is common enough for other users to \nmerit a fix in pg, especially as it seems that with MVCC, each of the data \nrecords need to be accessed in addition to scanning the index.\n\nBest regards,\nKC.\n\nAt 09:40 05/09/24, Mark Kirkwood wrote:\n>A small denormalization, where you mark the row with the latest atdate for \n>each playerid may get you the performance you want.\n>\n>e.g: (8.1beta1)\n>\n>ALTER TABLE player ADD islastatdate boolean;\n>\n>UPDATE player SET islastatdate = true where (playerid,atdate) IN\n>(SELECT playerid, atdate FROM vcurplayer);\n>\n>CREATE OR REPLACE VIEW vcurplayer AS\n>SELECT * FROM player a\n>WHERE islastatdate;\n>\n>CREATE INDEX player_id_lastatdate ON player(playerid, islastatdate)\n>WHERE islastatdate;\n>\n>ANALYZE player;\n>\n>Generating some test data produced:\n>\n>EXPLAIN ANALYZE\n>SELECT playerid,atdate\n>FROM vcurplayer\n>WHERE playerid='22220';\n>\n> QUERY PLAN\n>--------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using player_id_lastatdate on player a (cost=0.00..4.33 \n> rows=1 width=13) (actual time=0.142..0.149 rows=1 loops=1)\n> Index Cond: ((playerid = '22220'::text) AND (lastatdate = true))\n> Filter: lastatdate\n> Total runtime: 0.272 ms\n>(4 rows)\n>\n>Whereas with the original view definition:\n>\n>CREATE OR REPLACE VIEW vcurplayer AS\n>SELECT * FROM player a\n>WHERE a.atdate =\n>( SELECT max(b.atdate) FROM player b\n> WHERE a.playerid = b.playerid);\n>\n>EXPLAIN ANALYZE\n>SELECT playerid,atdate\n>FROM vcurplayer\n>WHERE playerid='22220';\n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using player_id_date on player a (cost=0.00..7399.23 rows=11 \n> width=13) (actual time=121.738..121.745 rows=1 loops=1)\n> Index Cond: (playerid = '22220'::text)\n> Filter: (atdate = (subplan))\n> SubPlan\n> -> Result (cost=1.72..1.73 rows=1 width=0) (actual \n> time=0.044..0.047 rows=1 loops=2000)\n> InitPlan\n> -> Limit (cost=0.00..1.72 rows=1 width=4) (actual \n> time=0.028..0.031 rows=1 loops=2000)\n> -> Index Scan Backward using player_id_date on player \n> b (cost=0.00..3787.94 rows=2198 width=4) (actual time=0.019..0.019 \n> rows=1 loops=2000)\n> Index Cond: ($0 = playerid)\n> Filter: (atdate IS NOT NULL)\n> Total runtime: 121.916 ms\n>(11 rows)\n>\n>Note that my generated data has too many rows for each playerid, but \n>the difference in performance should illustrate the idea.\n>\n>Cheers\n>\n>Mark\n\n", "msg_date": "Sat, 24 Sep 2005 11:18:24 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "K C Lau wrote:\n\n> I'm wondering if this performance issue is common enough for other users \n> to merit a fix in pg, especially as it seems that with MVCC, each of the \n> data records need to be accessed in addition to scanning the index.\n> \n\nYes - there are certainly cases where index only access (or something \nsimilar, like b+tree tables) would be highly desirable.\n\n From what I have understood from previous discussions, there are \ndifficulties involved with producing 
a design that does not cause new \nproblems...\n\nregards\n\nMark\n", "msg_date": "Sat, 24 Sep 2005 17:14:59 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "On Fri, Sep 23, 2005 at 04:53:55PM +0800, K C Lau wrote:\n> Thank you all for your suggestions. I' tried, with some variations too, but \n> still no success. The times given are the best of a few repeated tries on \n> an 8.1 beta 2 db freshly migrated from 8.0.3 on Windows.\n> \n> For reference, only the following gets the record quickly:\n> \n> esdt=> explain analyze select PlayerID,AtDate from Player a\n> where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n> where b.PlayerID = '22220' order by b.PlayerID desc, b.AtDate desc LIMIT \n> 1);\n> \n> Index Scan using pk_player on player a (cost=0.75..4.26 rows=1 width=23) \n> (actual time=0.054..0.057 rows=1 loops=1)\n> Index Cond: (((playerid)::text = '22220'::text) AND ((atdate)::text = \n> ($0)::text))\n> InitPlan\n> -> Limit (cost=0.00..0.75 rows=1 width=23) (actual \n> time=0.027..0.028 rows=1 loops=1)\n> -> Index Scan Backward using pk_player on player \n> b (cost=0.00..1323.05 rows=1756 width=23) (actual time=0.023..0.023 rows=1 \n> loops=1)\n> Index Cond: ((playerid)::text = '22220'::text)\n> Total runtime: 0.132 ms\n\nIf you're doing that, you should try something like the following:\ndecibel=# explain analyze select * from t where ctid=(select ctid from rrs order by rrs_id desc limit 1);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Tid Scan on t (cost=0.44..4.45 rows=1 width=42) (actual time=0.750..0.754 rows=1 loops=1)\n Filter: (ctid = $0)\n InitPlan\n -> Limit (cost=0.00..0.44 rows=1 width=10) (actual time=0.548..0.549 rows=1 loops=1)\n -> Index Scan Backward using rrs_rrs__rrs_id on rrs (cost=0.00..3.08 rows=7 width=10) (actual time=0.541..0.541 rows=1 loops=1)\n Total runtime: 1.061 ms\n(6 rows)\n\ndecibel=# select count(*) from t; count \n--------\n 458752\n\nNote that that's on my nice slow laptop to boot (the count took like 10\nseconds).\n\nJust remember that ctid *is not safe outside of a transaction*!! So you can't\ndo something like\n\nSELECT ctid FROM ...\nstore that in some variable...\nSELECT * FROM table WHERE ctid = variable\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 4 Oct 2005 16:31:54 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" } ]
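A hedged sketch of how the islastatdate denormalization suggested above could be kept up to date by the database itself rather than by application changes. It assumes plpgsql is installed and that new Player rows always arrive carrying the newest AtDate for their PlayerID; if rows can arrive out of order, the trigger would have to compare dates first. Only the column names come from the example above; everything else is illustrative.

CREATE OR REPLACE FUNCTION player_mark_latest() RETURNS trigger AS $$
BEGIN
    -- clear the flag on the previous "current" row for this player
    UPDATE player
       SET islastatdate = false
     WHERE playerid = NEW.playerid
       AND islastatdate;
    NEW.islastatdate := true;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER player_mark_latest_trg
    BEFORE INSERT ON player
    FOR EACH ROW EXECUTE PROCEDURE player_mark_latest();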
[ { "msg_contents": "Have you tried the \"best choice\" pattern -- where you select the set of\ncandidate rows and then exclude those for which a better choice\nexists within the set? I often get better results with this pattern than\nwith the alternatives. Transmuting your query to use this patter gives:\n \nselect PlayerID,AtDate from Player a where PlayerID='22220'\n and not exists\n (select * from Player b\n where b.PlayerID = a.PlayerID and b.AtDate > a.AtDate);\n \n>>> K C Lau <[email protected]> 09/21/05 11:21 PM >>>\n\nselect PlayerID,AtDate from Player a\n where PlayerID='22220' and AtDate = (select b.AtDate from Player b\n where b.PlayerID = a.PlayerID order by b.PlayerID desc, b.AtDate desc \nLIMIT 1);\n\n", "msg_date": "Thu, 22 Sep 2005 13:19:56 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" } ]
[ { "msg_contents": "Hi,\n\nI've got many queries running much slower on 8.1 beta2 than on 8.0.1\nHere is a simplified one that takes 484 ms on 8.1 and 32 ms on 8.0.1.\n\nselect\n 0\nfrom\n Content C\n\n left outer join Supplier S\n on C.SupplierId = S.SupplierId\n\n left outer join Price P\n on C.PriceId = P.PriceId;\n\nAny ideas why it's slower?\n\nThanks\nJean-Pierre Pelletier\ne-djuster\n\n======================================================\n\ncreate table Price (\n PriceId INTEGER NOT NULL DEFAULT NEXTVAL('PriceId'),\n ItemId INTEGER NOT NULL,\n SupplierId INTEGER NOT NULL,\n LocationId SMALLINT NULL,\n FromDate DATE NOT NULL DEFAULT CURRENT_DATE,\n UnitValue DECIMAL NOT NULL,\n InsertedByPersonId INTEGER NOT NULL,\n LastUpdatedByPersonId INTEGER NULL,\n InsertTimestamp TIMESTAMP(0) NOT NULL DEFAULT CURRENT_TIMESTAMP,\n LastUpdateTimeStamp TIMESTAMP(0) NULL\n);\n\nalter table price add primary key (priceid);\n\ncreate table Supplier (\n SupplierId INTEGER NOT NULL DEFAULT NEXTVAL('SupplierId'),\n SupplierDescription VARCHAR(50) NOT NULL,\n InsertTimestamp TIMESTAMP(0) NULL DEFAULT CURRENT_TIMESTAMP,\n ApprovalDate DATE NULL\n);\n\nalter table supplier add primary key (supplierid);\n\n-- I've only put one row in table Content because it was sufficient to\nproduce\n-- the slowdown\n\ncreate table content (contentid integer not null, supplierid integer,\npriceid integer);\ninsert into content VALUES (148325, 12699, 388026);\n\nvacuum analyze content; -- 1 row\nvacuum analyze price; -- 581475 rows\nvacuum analyze supplier; -- 10139 rows\n\n======================================================\nHere are the query plans:\n\nOn \"PostgreSQL 8.1beta2 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)\n3.4.2 (mingw-special)\"\n\nexplain select 0 from Content C LEFT OUTER JOIN Supplier S ON\nC.SupplierId = S.SupplierId LEFT OUTER JOIN Price P ON C.PriceId =\nP.PriceId;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..18591.77 rows=1 width=0)\n Join Filter: (\"outer\".priceid = \"inner\".priceid)\n -> Nested Loop Left Join (cost=0.00..5.59 rows=1 width=4)\n -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8)\n -> Index Scan using \"Supplier Id\" on supplier s (cost=0.00..4.56\nrows=1 width=4)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n\n\n\"PostgreSQL 8.0.1 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2\n(mingw-special)\"\n\nexplain select 0 from Content C LEFT OUTER JOIN Supplier S ON\nC.SupplierId = S.SupplierId LEFT OUTER JOIN Price P ON C.PriceId =\nP.PriceId;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..11.08 rows=1 width=0)\n -> Nested Loop Left Join (cost=0.00..5.53 rows=1 width=4)\n -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8)\n -> Index Scan using \"Supplier Id\" on supplier s (cost=0.00..4.51\nrows=1 width=4)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n -> Index Scan using price_pkey on price p (cost=0.00..5.53 rows=1\nwidth=4)\n Index Cond: (\"outer\".priceid = p.priceid)\n\n", "msg_date": "Thu, 22 Sep 2005 17:20:04 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "What stood out to me the most was:\n\nOn Sep 22, 2005, at 2:20 PM, Jean-Pierre Pelletier 
wrote:\n\n> -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n\na) is the index there, b) have you analyzed, c) perhaps the planners \nhave different default values for when to use an index vrs a \nseqscan... if you turn off seqscan, are the timings similar?\n\nGavin M. Roy\n800 Pound Gorilla\[email protected]\n\n\n\nWhat stood out to me the most was:On Sep 22, 2005, at 2:20 PM, Jean-Pierre Pelletier wrote:  ->  Seq Scan on price p  (cost=0.00..11317.75 rows=581475 width=4) a) is the index there, b) have you analyzed, c) perhaps the planners have different default values for when to use an index vrs a seqscan...  if you turn off seqscan, are the timings similar? Gavin M. Roy800 Pound [email protected]", "msg_date": "Thu, 22 Sep 2005 14:32:21 -0700", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "All indexes are there, and I've analyzed the three tables.\n\nI turned off seq scan, the query plans became identical but the performance\nwas not better.\n\n----- Original Message ----- \n From: Gavin M. Roy \n To: Jean-Pierre Pelletier \n Cc: [email protected] \n Sent: Thursday, September 22, 2005 5:32 PM\n Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n What stood out to me the most was:\n\n\n On Sep 22, 2005, at 2:20 PM, Jean-Pierre Pelletier wrote:\n\n\n -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n\n\n\n a) is the index there, b) have you analyzed, c) perhaps the planners have different default values for when to use an index vrs a seqscan... if you turn off seqscan, are the timings similar?\n\n\n Gavin M. Roy\n 800 Pound Gorilla\n [email protected]\n\n\n\n\n\n\n\n\n\n\nAll indexes are there, and I've analyzed the three \ntables.\n \nI turned off seq scan, the query plans became \nidentical but the performance\nwas not better.\n \n----- Original Message ----- \n\nFrom:\nGavin M. Roy \nTo: Jean-Pierre Pelletier \nCc: [email protected]\n\nSent: Thursday, September 22, 2005 5:32 \n PM\nSubject: Re: [PERFORM] Queries 15 times \n slower on 8.1 beta 2 than on 8.0\nWhat stood out to me the most was:\n \n\nOn Sep 22, 2005, at 2:20 PM, Jean-Pierre Pelletier wrote:\n\n  ->  Seq Scan on price p  (cost=0.00..11317.75 rows=581475 \n width=4)\na) is the index there, b) have you analyzed, c) perhaps the planners have \n different default values for when to use an index vrs a seqscan...  if \n you turn off seqscan, are the timings similar?\n\nGavin M. Roy\n800 Pound Gorilla\[email protected]", "msg_date": "Thu, 22 Sep 2005 17:38:40 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Jean-Pierre Pelletier wrote:\n> Hi,\n> \n> I've got many queries running much slower on 8.1 beta2 than on 8.0.1\n> Here is a simplified one that takes 484 ms on 8.1 and 32 ms on 8.0.1.\n> \n> select\n> 0\n> from\n> Content C\n> \n> left outer join Supplier S\n> on C.SupplierId = S.SupplierId\n> \n> left outer join Price P\n> on C.PriceId = P.PriceId;\n> \n> Any ideas why it's slower?\n\nYou really have to post the results of \"EXPLAIN ANALYZE\" not just\nexplain. 
So that we can tell what the planner is expecting, versus what\nreally happened.\n\nJohn\n=:->\n\n> \n> Thanks\n> Jean-Pierre Pelletier\n> e-djuster\n>", "msg_date": "Thu, 22 Sep 2005 16:48:22 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Here are the explain analyze:\n\nOn 8.1 beta2:\n\n\"Nested Loop Left Join (cost=0.00..18591.77 rows=1 width=0) (actual \ntime=1320.302..2439.066 rows=1 loops=1)\"\n\" Join Filter: (\"outer\".priceid = \"inner\".priceid)\"\n\" -> Nested Loop Left Join (cost=0.00..5.59 rows=1 width=4) (actual \ntime=0.044..0.058 rows=1 loops=1)\"\n\" -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) (actual \ntime=0.009..0.011 rows=1 loops=1)\"\n\" -> Index Scan using \"Supplier Id\" on supplier s (cost=0.00..4.56 \nrows=1 width=4) (actual time=0.016..0.022 rows=1 loops=1)\"\n\" Index Cond: (\"outer\".supplierid = s.supplierid)\"\n\" -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4) \n(actual time=0.004..1143.720 rows=581475 loops=1)\"\n\"Total runtime: 2439.211 ms\"\n\nOn 8.0.1:\n\n\"Nested Loop Left Join (cost=0.00..11.02 rows=1 width=0) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" -> Nested Loop Left Join (cost=0.00..5.48 rows=1 width=4) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) (actual \ntime=0.000..0.000 rows=1 loops=1)\"\n\" -> Index Scan using \"Supplier Id\" on supplier s (cost=0.00..4.46 \nrows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\"\n\" Index Cond: (\"outer\".supplierid = s.supplierid)\"\n\" -> Index Scan using \"Price Id\" on price p (cost=0.00..5.53 rows=1 \nwidth=4) (actual time=0.000..0.000 rows=1 loops=1)\"\n\" Index Cond: (\"outer\".priceid = p.priceid)\"\n\"Total runtime: 0.000 ms\"\n\n----- Original Message ----- \nFrom: \"John Arbash Meinel\" <[email protected]>\nTo: \"Jean-Pierre Pelletier\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, September 22, 2005 5:48 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n", "msg_date": "Thu, 22 Sep 2005 17:58:49 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Jean-Pierre Pelletier wrote:\n> Here are the explain analyze:\n\nWhat is the explain analyze if you use \"set enable_seqscan to off\"?\n\nAlso, can you post the output of:\n\\d supplier\n\\d price\n\\d content\n\nMostly I just want to see what the indexes are, in the case that you\ndon't want to show us your schema.\n\nJohn\n=:->", "msg_date": "Thu, 22 Sep 2005 17:03:16 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Jean-Pierre,\n\nFirst off, you're on Windows?\n\n> \" -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n> (actual time=0.004..1143.720 rows=581475 loops=1)\"\n\nWell, this is your pain point. Can we see the index scan plan on 8.1? 
\nGiven that it's *expecting* only one row, I can't understand why it's \nusing a seq scan ...\n\n> \"Nested Loop Left Join (cost=0.00..11.02 rows=1 width=0) (actual\n> time=0.000..0.000 rows=1 loops=1)\"\n> \" -> Nested Loop Left Join (cost=0.00..5.48 rows=1 width=4) (actual\n> time=0.000..0.000 rows=1 loops=1)\"\n> \"Total runtime: 0.000 ms\"\n\nFeh, this looks like the \"windows does not report times\" bug, which makes \nit hard to compare ...\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 22 Sep 2005 15:19:05 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "With enable-seq-scan = off, it runs in 350 ms so better than 484 ms\nbut still much slower than 32 ms in 8.0.1.\n\n==============================================\n\n Table \"public.content\"\n Column | Type | Modifiers\n------------+---------+-----------\n contentid | integer | not null\n supplierid | integer |\n priceid | integer |\n\n Table \"public.price\"\n Column | Type | Modifiers\n-----------------------+--------------------------------+-----------\n priceid | integer | not null\n itemid | integer |\n supplierid | integer |\n locationid | smallint |\n fromdate | date |\n unitvalue | numeric |\n insertedbypersonid | integer |\n lastupdatedbypersonid | integer |\n inserttimestamp | timestamp(0) without time zone |\n lastupdatetimestamp | timestamp(0) without time zone |\nIndexes:\n \"price_pkey\" PRIMARY KEY, btree (priceid)\n\n Table \"public.supplier\"\n Column | Type | \nModifie\nrs\n---------------------+--------------------------------+-------------------------\n---------------------\n supplierid | integer | not null default \nnextval\n('SupplierId'::text)\n supplierdescription | character varying(50) | not null\n inserttimestamp | timestamp(0) without time zone | default now()\n approvaldate | date |\nIndexes:\n \"Supplier Id\" PRIMARY KEY, btree (supplierid)\n \"Supplier Description\" UNIQUE, btree (upper(supplierdescription::text))\n \"Supplier.InsertTimestamp\" btree (inserttimestamp)\nCheck constraints:\n \"Supplier Name cannot be empty\" CHECK (btrim(supplierdescription::text) \n<> ''::tex\n\n================================================================================\n\n\nExplan analyze with enable-seq-scan = off on 8.1 beta2\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n------------------------------------------------------------\n Merge Left Join (cost=100000005.60..101607964.74 rows=1 width=0) (actual \ntime=\n729.067..729.078 rows=1 loops=1)\n Merge Cond: (\"outer\".priceid = \"inner\".priceid)\n -> Sort (cost=100000005.60..100000005.60 rows=1 width=4) (actual \ntime=0.064\n..0.067 rows=1 loops=1)\n Sort Key: c.priceid\n -> Nested Loop Left Join (cost=100000000.00..100000005.59 rows=1 \nwidt\nh=4) (actual time=0.038..0.049 rows=1 loops=1)\n -> Seq Scan on content c (cost=100000000.00..100000001.01 \nro\nws=1 width=8) (actual time=0.008..0.011 rows=1 loops=1)\n -> Index Scan using \"Supplier Id\" on supplier s \n(cost=0.00..4.5\n6 rows=1 width=4) (actual time=0.016..0.019 rows=1 loops=1)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n -> Index Scan using \"Price Id\" on price p (cost=0.00..1606505.44 \nrows=58147\n5 width=4) (actual time=0.008..370.854 rows=164842 loops=1)\n Total runtime: 729.192 ms\n\n----- Original Message ----- \nFrom: \"John Arbash Meinel\" <[email protected]>\nTo: 
\"Jean-Pierre Pelletier\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, September 22, 2005 6:03 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n", "msg_date": "Thu, 22 Sep 2005 18:28:29 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "On Thu, Sep 22, 2005 at 03:19:05PM -0700, Josh Berkus wrote:\n> > \" -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n> > (actual time=0.004..1143.720 rows=581475 loops=1)\"\n> \n> Well, this is your pain point. Can we see the index scan plan on 8.1? \n> Given that it's *expecting* only one row, I can't understand why it's \n> using a seq scan ...\n\nI've created a simplified, self-contained test case for this:\n\nCREATE TABLE price (\n priceid integer PRIMARY KEY\n);\n\nCREATE TABLE supplier (\n supplierid integer PRIMARY KEY\n);\n\nCREATE TABLE content (\n contentid integer PRIMARY KEY,\n supplierid integer NOT NULL REFERENCES supplier,\n priceid integer NOT NULL REFERENCES price\n);\n\nINSERT INTO price (priceid) SELECT * FROM generate_series(1, 50000);\nINSERT INTO supplier (supplierid) SELECT * FROM generate_series(1, 10000);\nINSERT INTO content (contentid, supplierid, priceid) VALUES (1, 1, 50000);\n\nANALYZE price;\nANALYZE supplier;\nANALYZE content;\n\nEXPLAIN ANALYZE\nSELECT 0\nFROM content c\nLEFT OUTER JOIN supplier s ON c.supplierid = s.supplierid\nLEFT OUTER JOIN price p ON c.priceid = p.priceid;\n\nHere's the EXPLAIN ANALYZE from 8.0.3:\n\n Nested Loop Left Join (cost=0.00..7.06 rows=1 width=0) (actual time=0.180..0.232 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..4.04 rows=1 width=4) (actual time=0.105..0.133 rows=1 loops=1)\n -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) (actual time=0.021..0.029 rows=1 loops=1)\n -> Index Scan using supplier_pkey on supplier s (cost=0.00..3.01 rows=1 width=4) (actual time=0.052..0.059 rows=1 loops=1)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n -> Index Scan using price_pkey on price p (cost=0.00..3.01 rows=1 width=4) (actual time=0.046..0.055 rows=1 loops=1)\n Index Cond: (\"outer\".priceid = p.priceid)\n Total runtime: 0.582 ms\n\nHere it is from 8.1beta2:\n\n Merge Right Join (cost=4.05..1054.06 rows=1 width=0) (actual time=676.863..676.895 rows=1 loops=1)\n Merge Cond: (\"outer\".priceid = \"inner\".priceid)\n -> Index Scan using price_pkey on price p (cost=0.00..925.00 rows=50000 width=4) (actual time=0.035..383.345 rows=50000 loops=1)\n -> Sort (cost=4.05..4.05 rows=1 width=4) (actual time=0.152..0.159 rows=1 loops=1)\n Sort Key: c.priceid\n -> Nested Loop Left Join (cost=0.00..4.04 rows=1 width=4) (actual time=0.082..0.111 rows=1 loops=1)\n -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) (actual time=0.016..0.024 rows=1 loops=1)\n -> Index Scan using supplier_pkey on supplier s (cost=0.00..3.01 rows=1 width=4) (actual time=0.039..0.047 rows=1 loops=1)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n Total runtime: 677.563 ms\n\nIf we change content's priceid then we get the same plan but faster results:\n\nUPDATE content SET priceid = 1;\n\n Merge Right Join (cost=4.05..1054.06 rows=1 width=0) (actual time=0.268..0.303 rows=1 loops=1)\n Merge Cond: (\"outer\".priceid = \"inner\".priceid)\n -> Index Scan using price_pkey on price p (cost=0.00..925.00 rows=50000 width=4) (actual time=0.049..0.061 rows=2 loops=1)\n -> Sort (cost=4.05..4.05 rows=1 width=4) 
(actual time=0.187..0.192 rows=1 loops=1)\n Sort Key: c.priceid\n -> Nested Loop Left Join (cost=0.00..4.04 rows=1 width=4) (actual time=0.099..0.128 rows=1 loops=1)\n -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) (actual time=0.025..0.033 rows=1 loops=1)\n -> Index Scan using supplier_pkey on supplier s (cost=0.00..3.01 rows=1 width=4) (actual time=0.046..0.053 rows=1 loops=1)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n Total runtime: 0.703 ms\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 22 Sep 2005 16:54:32 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "I don't know if it makes a difference but in my tables,\ncontent.supplierid and content.priceid were nullable.\n\n----- Original Message ----- \nFrom: \"Michael Fuhr\" <[email protected]>\nTo: \"Josh Berkus\" <[email protected]>\nCc: <[email protected]>; \"Jean-Pierre Pelletier\" \n<[email protected]>; \"John Arbash Meinel\" <[email protected]>\nSent: Thursday, September 22, 2005 6:54 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n> On Thu, Sep 22, 2005 at 03:19:05PM -0700, Josh Berkus wrote:\n>> > \" -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n>> > (actual time=0.004..1143.720 rows=581475 loops=1)\"\n>>\n>> Well, this is your pain point. Can we see the index scan plan on 8.1?\n>> Given that it's *expecting* only one row, I can't understand why it's\n>> using a seq scan ...\n>\n> I've created a simplified, self-contained test case for this:\n>\n> CREATE TABLE price (\n> priceid integer PRIMARY KEY\n> );\n>\n> CREATE TABLE supplier (\n> supplierid integer PRIMARY KEY\n> );\n>\n> CREATE TABLE content (\n> contentid integer PRIMARY KEY,\n> supplierid integer NOT NULL REFERENCES supplier,\n> priceid integer NOT NULL REFERENCES price\n> );\n>\n> INSERT INTO price (priceid) SELECT * FROM generate_series(1, 50000);\n> INSERT INTO supplier (supplierid) SELECT * FROM generate_series(1, 10000);\n> INSERT INTO content (contentid, supplierid, priceid) VALUES (1, 1, 50000);\n>\n> ANALYZE price;\n> ANALYZE supplier;\n> ANALYZE content;\n>\n> EXPLAIN ANALYZE\n> SELECT 0\n> FROM content c\n> LEFT OUTER JOIN supplier s ON c.supplierid = s.supplierid\n> LEFT OUTER JOIN price p ON c.priceid = p.priceid;\n>\n> Here's the EXPLAIN ANALYZE from 8.0.3:\n>\n> Nested Loop Left Join (cost=0.00..7.06 rows=1 width=0) (actual \n> time=0.180..0.232 rows=1 loops=1)\n> -> Nested Loop Left Join (cost=0.00..4.04 rows=1 width=4) (actual \n> time=0.105..0.133 rows=1 loops=1)\n> -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) \n> (actual time=0.021..0.029 rows=1 loops=1)\n> -> Index Scan using supplier_pkey on supplier s (cost=0.00..3.01 \n> rows=1 width=4) (actual time=0.052..0.059 rows=1 loops=1)\n> Index Cond: (\"outer\".supplierid = s.supplierid)\n> -> Index Scan using price_pkey on price p (cost=0.00..3.01 rows=1 \n> width=4) (actual time=0.046..0.055 rows=1 loops=1)\n> Index Cond: (\"outer\".priceid = p.priceid)\n> Total runtime: 0.582 ms\n>\n> Here it is from 8.1beta2:\n>\n> Merge Right Join (cost=4.05..1054.06 rows=1 width=0) (actual \n> time=676.863..676.895 rows=1 loops=1)\n> Merge Cond: (\"outer\".priceid = \"inner\".priceid)\n> -> Index Scan using price_pkey on price p (cost=0.00..925.00 \n> rows=50000 width=4) (actual time=0.035..383.345 rows=50000 loops=1)\n> -> Sort (cost=4.05..4.05 rows=1 width=4) (actual time=0.152..0.159 \n> rows=1 loops=1)\n> Sort 
Key: c.priceid\n> -> Nested Loop Left Join (cost=0.00..4.04 rows=1 width=4) \n> (actual time=0.082..0.111 rows=1 loops=1)\n> -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) \n> (actual time=0.016..0.024 rows=1 loops=1)\n> -> Index Scan using supplier_pkey on supplier s \n> (cost=0.00..3.01 rows=1 width=4) (actual time=0.039..0.047 rows=1 loops=1)\n> Index Cond: (\"outer\".supplierid = s.supplierid)\n> Total runtime: 677.563 ms\n>\n> If we change content's priceid then we get the same plan but faster \n> results:\n>\n> UPDATE content SET priceid = 1;\n>\n> Merge Right Join (cost=4.05..1054.06 rows=1 width=0) (actual \n> time=0.268..0.303 rows=1 loops=1)\n> Merge Cond: (\"outer\".priceid = \"inner\".priceid)\n> -> Index Scan using price_pkey on price p (cost=0.00..925.00 \n> rows=50000 width=4) (actual time=0.049..0.061 rows=2 loops=1)\n> -> Sort (cost=4.05..4.05 rows=1 width=4) (actual time=0.187..0.192 \n> rows=1 loops=1)\n> Sort Key: c.priceid\n> -> Nested Loop Left Join (cost=0.00..4.04 rows=1 width=4) \n> (actual time=0.099..0.128 rows=1 loops=1)\n> -> Seq Scan on content c (cost=0.00..1.01 rows=1 width=8) \n> (actual time=0.025..0.033 rows=1 loops=1)\n> -> Index Scan using supplier_pkey on supplier s \n> (cost=0.00..3.01 rows=1 width=4) (actual time=0.046..0.053 rows=1 loops=1)\n> Index Cond: (\"outer\".supplierid = s.supplierid)\n> Total runtime: 0.703 ms\n>\n> -- \n> Michael Fuhr \n\n", "msg_date": "Thu, 22 Sep 2005 19:07:41 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "my settings are:\n\neffective_cache_size = 1000\nrandom_page_cost = 4\nwork_mem = 20000\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Jean-Pierre Pelletier\" <[email protected]>\nSent: Thursday, September 22, 2005 6:58 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n> Jean-Pierre,\n> \n>> How do I produce an \"Index scan plan\" ?\n> \n> You just did. What's your effective_cache_size set to? \n> random_page_cost? 
work_mem?\n> \n> -- \n> --Josh\n> \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n", "msg_date": "Thu, 22 Sep 2005 19:10:25 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Michael Fuhr <[email protected]> writes:\n> I've created a simplified, self-contained test case for this:\n\nI see the problem --- I broke best_inner_indexscan() for some cases\nwhere the potential indexscan clause is an outer-join ON clause.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Sep 2005 19:12:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0 " }, { "msg_contents": "Jean-Pierre,\n\n> effective_cache_size = 1000\n\nTry setting this to 16,384 as a test.\n\n> random_page_cost = 4\n\nTry setting this to 2.5 as a test.\n\n> work_mem = 20000\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 22 Sep 2005 16:16:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "On Thu, Sep 22, 2005 at 07:07:41PM -0400, Jean-Pierre Pelletier wrote:\n> I don't know if it makes a difference but in my tables,\n> content.supplierid and content.priceid were nullable.\n\nThat makes no difference in the tests I've done.\n\nTom Lane says he's found the problem; I expect he'll be committing\na fix shortly.\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 22 Sep 2005 17:17:46 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Thanks everybody for your help, I'll be awaiting the fix.\n\nI've also noticed that pg_stat_activity is always empty even if\nstats_start_collector = on\n\n----- Original Message ----- \nFrom: \"Michael Fuhr\" <[email protected]>\nTo: \"Jean-Pierre Pelletier\" <[email protected]>\nCc: \"Josh Berkus\" <[email protected]>; <[email protected]>; \n\"John Arbash Meinel\" <[email protected]>\nSent: Thursday, September 22, 2005 7:17 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n> On Thu, Sep 22, 2005 at 07:07:41PM -0400, Jean-Pierre Pelletier wrote:\n>> I don't know if it makes a difference but in my tables,\n>> content.supplierid and content.priceid were nullable.\n>\n> That makes no difference in the tests I've done.\n>\n> Tom Lane says he's found the problem; I expect he'll be committing\n> a fix shortly.\n>\n> -- \n> Michael Fuhr\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly \n\n", "msg_date": "Thu, 22 Sep 2005 19:26:30 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Jean-Pierre,\n\n> Thanks everybody for your help, I'll be awaiting the fix.\n>\n> I've also noticed that pg_stat_activity is always empty even if\n> stats_start_collector = on\n\nYes, I believe that this is a know Windows issue. 
Not sure if it's fixed \nin 8.1.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 22 Sep 2005 16:35:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" }, { "msg_contents": "Michael Fuhr <[email protected]> writes:\n> Tom Lane says he's found the problem; I expect he'll be committing\n> a fix shortly.\n\nThe attached patch allows it to generate the expected plan, at least\nin the test case I tried.\n\n\t\t\tregards, tom lane\n\n*** src/backend/optimizer/path/indxpath.c.orig\tSun Aug 28 18:47:20 2005\n--- src/backend/optimizer/path/indxpath.c\tThu Sep 22 19:17:41 2005\n***************\n*** 955,969 ****\n \t/*\n \t * Examine each joinclause in the joininfo list to see if it matches any\n \t * key of any index. If so, add the clause's other rels to the result.\n- \t * (Note: we consider only actual participants, not extraneous rels\n- \t * possibly mentioned in required_relids.)\n \t */\n \tforeach(l, rel->joininfo)\n \t{\n \t\tRestrictInfo *joininfo = (RestrictInfo *) lfirst(l);\n \t\tRelids\tother_rels;\n \n! \t\tother_rels = bms_difference(joininfo->clause_relids, rel->relids);\n \t\tif (matches_any_index(joininfo, rel, other_rels))\n \t\t\touter_relids = bms_join(outer_relids, other_rels);\n \t\telse\n--- 955,967 ----\n \t/*\n \t * Examine each joinclause in the joininfo list to see if it matches any\n \t * key of any index. If so, add the clause's other rels to the result.\n \t */\n \tforeach(l, rel->joininfo)\n \t{\n \t\tRestrictInfo *joininfo = (RestrictInfo *) lfirst(l);\n \t\tRelids\tother_rels;\n \n! \t\tother_rels = bms_difference(joininfo->required_relids, rel->relids);\n \t\tif (matches_any_index(joininfo, rel, other_rels))\n \t\t\touter_relids = bms_join(outer_relids, other_rels);\n \t\telse\n", "msg_date": "Thu, 22 Sep 2005 19:50:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0 " }, { "msg_contents": "Explain analyze on my 8.0.1 installation does report the time for\nslower queries but for this small query it reports 0.000 ms\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: <[email protected]>\nCc: \"Jean-Pierre Pelletier\" <[email protected]>; \"John Arbash \nMeinel\" <[email protected]>\nSent: Thursday, September 22, 2005 6:19 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n> Jean-Pierre,\n>\n> First off, you're on Windows?\n>\n>> \" -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n>> (actual time=0.004..1143.720 rows=581475 loops=1)\"\n>\n> Well, this is your pain point. 
Can we see the index scan plan on 8.1?\n> Given that it's *expecting* only one row, I can't understand why it's\n> using a seq scan ...\n>\n>> \"Nested Loop Left Join (cost=0.00..11.02 rows=1 width=0) (actual\n>> time=0.000..0.000 rows=1 loops=1)\"\n>> \" -> Nested Loop Left Join (cost=0.00..5.48 rows=1 width=4) (actual\n>> time=0.000..0.000 rows=1 loops=1)\"\n>> \"Total runtime: 0.000 ms\"\n>\n> Feh, this looks like the \"windows does not report times\" bug, which makes\n> it hard to compare ...\n>\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq \n\n", "msg_date": "Thu, 22 Sep 2005 21:41:37 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" } ]
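A quick way to tell whether this missing inner-indexscan path (rather than configuration) is what is hurting a given 8.1 installation is to run Michael Fuhr's simplified test case and take the merge join away from the planner for one session. This is only a diagnostic sketch: it assumes the price/supplier/content test tables quoted above exist and have been ANALYZEd, and the setting change is discarded at ROLLBACK.

```sql
-- Diagnostic only: if the fast nested-loop plan with an index scan on price
-- still does not appear even with the merge join disabled, the missing index
-- path (the bug patched above) is the likely cause, not cost settings.
BEGIN;
SET enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT 0
FROM content c
LEFT OUTER JOIN supplier s ON c.supplierid = s.supplierid
LEFT OUTER JOIN price p ON c.priceid = p.priceid;
ROLLBACK;  -- a SET issued inside the transaction does not persist past this
```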
[ { "msg_contents": "Hello!\n\n\tGot a DB with traffic statictics stored. And a SELECT statement which shows traffic volume per days also divided by regions - local traffic and global.\n Thus SELECT statement returns about some (in about 10-20) rows paired like this:\n\nttype (text)| volume (int)| tdate (date)\n----------------------------------------\nlocal | xxxxx | some-date\nglobal | xxxxx | some-date\n\n\tWhen executing this SELECT (see SELECT.A above) it executes in about 700 ms, but when I want wipe out all info about local traffic, with query like this:\n SELECT * FROM ( SELECT.A ) a WHERE type = 'global';\nIt executes about 10000 ms - more then 10 TIMES SLOWER!\n\n Why this could be?\n\n\n\n-------------------------------------------------\nInitial Query - SELECT.A (executes about 700 ms)\n\nSELECT \n CASE is_local(aa.uaix) WHEN true THEN 'local' \n ELSE 'global' END AS TType, \n aa.cDate AS TDate,\n SUM(aa.data) AS Value \nFROM (\n SELECT \n a.uaix AS uaix, \n cDate AS cDate, \n SUM(a.data) AS data \n FROM (\n\t (\n SELECT toIP AS uaix, \n cDate AS cDate, \n SUM(packetSize) AS data\n\t FROM vw_stats\n WHERE interface <> 'inet'\n AND cdate = '01.09.2005'\n AND fromIP << '192.168.0.0/16'\n AND NOT (toIP << '192.168.0.0/16')\n GROUP BY 1,2\n\t )\n UNION \n (\n SELECT fromIP AS uaix, \n cDate AS cDate, \n SUM(packetSize) AS data\n FROM vw_stats\n WHERE interface <> 'inet'\n AND cdate = '01.09.2005'\n AND toIP << '192.168.0.0/16'\n AND NOT (fromIP << '192.168.0.0/16')\n GROUP BY 1,2\n )\n ) a\n GROUP BY 1,2\n) aa\nGROUP BY 1,2\nORDER BY 1,2\n\n-----------------------------------------------------------\nQuery with local info filtered (executes about 10000 ms)\n\nSELECT * FROM (\n<HERE PLACED SELECT.A>\n) aaa WHERE aaa.TType = 'global';\n\n\n-----------------------------------------------------------\n\nRunning Postgresql 8.0.3 on FreeBSD 5.3\n\n \n\n-- \nBest regards,\n eVl mailto:[email protected]\n\n\n", "msg_date": "Fri, 23 Sep 2005 01:27:16 +0300", "msg_from": "eVl <[email protected]>", "msg_from_op": true, "msg_subject": "optimization downgrade perfomance?" }, { "msg_contents": "eVl <[email protected]> writes:\n> \tWhen executing this SELECT (see SELECT.A above) it executes in about 700 ms, but when I want wipe out all info about local traffic, with query like this:\n> SELECT * FROM ( SELECT.A ) a WHERE type = 'global';\n> It executes about 10000 ms - more then 10 TIMES SLOWER!\n\n> Why this could be?\n\nYou tell us --- let's see EXPLAIN ANALYZE results for both cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 09:40:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimization downgrade perfomance? " }, { "msg_contents": "eVl <[email protected]> writes:\n>> You tell us --- let's see EXPLAIN ANALYZE results for both cases.\n\n> Here EXPLAIN ANALYZE results for both queries attached.\n\nThe problem seems to be that the is_uaix() function is really slow\n(somewhere around 4 msec per call it looks like). 
Look at the\nfirst scan over stats:\n\n -> Index Scan using cdate_cluster on stats s (cost=0.00..201.51 rows=6 width=25) (actual time=5.231..2165.145 rows=418 loops=1)\n Index Cond: (cdate = '2005-09-01'::date)\n Filter: ((fromip << '192.168.0.0/16'::inet) AND (NOT (toip << '192.168.0.0/16'::inet)) AND (CASE is_uaix(toip) WHEN true THEN 'local'::text ELSE 'global'::text END = 'global'::text))\n\nversus\n\n -> Index Scan using cdate_cluster on stats s (cost=0.00..165.94 rows=1186 width=25) (actual time=0.131..43.258 rows=578 loops=1)\n Index Cond: (cdate = '2005-09-01'::date)\n Filter: ((fromip << '192.168.0.0/16'::inet) AND (NOT (toip << '192.168.0.0/16'::inet)))\n\nThe 578 evaluations of the CASE are adding over 2100msec. There's\nanother 1600 evaluations needed in the other arm of the UNION...\n\nBetter look at exactly what is_uaix() is doing, because the CASE structure\nis surely not that slow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 23:09:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimization downgrade perfomance? " } ]
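A rough way to confirm how much of the runtime goes into is_uaix() itself is to time the same restricted scan with and without the call. The table and column names below are taken from the plans quoted above and may need adjusting to the real schema; the comparison only gives a ballpark per-call cost.

```sql
-- Same date-restricted scan, once without and once with the function call;
-- the difference in total runtime divided by the row count approximates the
-- per-call cost of is_uaix().
EXPLAIN ANALYZE SELECT count(*)             FROM stats WHERE cdate = '2005-09-01';
EXPLAIN ANALYZE SELECT count(is_uaix(toip)) FROM stats WHERE cdate = '2005-09-01';
```

If the per-call cost really is in the millisecond range, rewriting is_uaix() as a simple SQL function, or as a lookup against a small indexed table of local networks, is likely to buy far more than any planner tuning.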
[ { "msg_contents": " \nHave tried adjusting the effective_cache_size so that you don't the\nplanner may produce a better explain plan for you and not needing to set\nseqscan to off.\n\n\n-- \n Husam \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jean-Pierre\nPelletier\nSent: Thursday, September 22, 2005 3:28 PM\nTo: John Arbash Meinel\nCc: [email protected]\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\nWith enable-seq-scan = off, it runs in 350 ms so better than 484 ms\nbut still much slower than 32 ms in 8.0.1.\n\n==============================================\n\n Table \"public.content\"\n Column | Type | Modifiers\n------------+---------+-----------\n contentid | integer | not null\n supplierid | integer |\n priceid | integer |\n\n Table \"public.price\"\n Column | Type | Modifiers\n-----------------------+--------------------------------+-----------\n priceid | integer | not null\n itemid | integer |\n supplierid | integer |\n locationid | smallint |\n fromdate | date |\n unitvalue | numeric |\n insertedbypersonid | integer |\n lastupdatedbypersonid | integer |\n inserttimestamp | timestamp(0) without time zone |\n lastupdatetimestamp | timestamp(0) without time zone |\nIndexes:\n \"price_pkey\" PRIMARY KEY, btree (priceid)\n\n Table \"public.supplier\"\n Column | Type | \nModifie\nrs\n---------------------+--------------------------------+-----------------\n--------\n---------------------\n supplierid | integer | not null default\n\nnextval\n('SupplierId'::text)\n supplierdescription | character varying(50) | not null\n inserttimestamp | timestamp(0) without time zone | default now()\n approvaldate | date |\nIndexes:\n \"Supplier Id\" PRIMARY KEY, btree (supplierid)\n \"Supplier Description\" UNIQUE, btree\n(upper(supplierdescription::text))\n \"Supplier.InsertTimestamp\" btree (inserttimestamp)\nCheck constraints:\n \"Supplier Name cannot be empty\" CHECK\n(btrim(supplierdescription::text) \n<> ''::tex\n\n========================================================================\n========\n\n\nExplan analyze with enable-seq-scan = off on 8.1 beta2\n QUERY\nPLAN\n\n------------------------------------------------------------------------\n--------\n------------------------------------------------------------\n Merge Left Join (cost=100000005.60..101607964.74 rows=1 width=0)\n(actual \ntime=\n729.067..729.078 rows=1 loops=1)\n Merge Cond: (\"outer\".priceid = \"inner\".priceid)\n -> Sort (cost=100000005.60..100000005.60 rows=1 width=4) (actual \ntime=0.064\n..0.067 rows=1 loops=1)\n Sort Key: c.priceid\n -> Nested Loop Left Join (cost=100000000.00..100000005.59\nrows=1 \nwidt\nh=4) (actual time=0.038..0.049 rows=1 loops=1)\n -> Seq Scan on content c\n(cost=100000000.00..100000001.01 \nro\nws=1 width=8) (actual time=0.008..0.011 rows=1 loops=1)\n -> Index Scan using \"Supplier Id\" on supplier s \n(cost=0.00..4.5\n6 rows=1 width=4) (actual time=0.016..0.019 rows=1 loops=1)\n Index Cond: (\"outer\".supplierid = s.supplierid)\n -> Index Scan using \"Price Id\" on price p (cost=0.00..1606505.44 \nrows=58147\n5 width=4) (actual time=0.008..370.854 rows=164842 loops=1)\n Total runtime: 729.192 ms\n\n----- Original Message ----- \nFrom: \"John Arbash Meinel\" <[email protected]>\nTo: \"Jean-Pierre Pelletier\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, September 22, 2005 6:03 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n\n---------------------------(end of 
broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n**********************************************************************\nThis message contains confidential information intended only for the \nuse of the addressee(s) named above and may contain information that \nis legally privileged. If you are not the addressee, or the person \nresponsible for delivering it to the addressee, you are hereby \nnotified that reading, disseminating, distributing or copying this \nmessage is strictly prohibited. If you have received this message by \nmistake, please immediately notify us by replying to the message and \ndelete the original message immediately thereafter.\n\nThank you. FADLD Tag\n**********************************************************************\n\n", "msg_date": "Thu, 22 Sep 2005 15:37:14 -0700", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" } ]
[ { "msg_contents": "----- Original Message ----- \nFrom: \"Jean-Pierre Pelletier\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, September 22, 2005 6:37 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n> How do I produce an \"Index scan plan\" ?\n>\n> ----- Original Message ----- \n> From: \"Josh Berkus\" <[email protected]>\n> To: <[email protected]>\n> Cc: \"Jean-Pierre Pelletier\" <[email protected]>; \"John Arbash \n> Meinel\" <[email protected]>\n> Sent: Thursday, September 22, 2005 6:19 PM\n> Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n>\n>\n>> Jean-Pierre,\n>>\n>> First off, you're on Windows?\n>>\n>>> \" -> Seq Scan on price p (cost=0.00..11317.75 rows=581475 width=4)\n>>> (actual time=0.004..1143.720 rows=581475 loops=1)\"\n>>\n>> Well, this is your pain point. Can we see the index scan plan on 8.1?\n>> Given that it's *expecting* only one row, I can't understand why it's\n>> using a seq scan ...\n>>\n>>> \"Nested Loop Left Join (cost=0.00..11.02 rows=1 width=0) (actual\n>>> time=0.000..0.000 rows=1 loops=1)\"\n>>> \" -> Nested Loop Left Join (cost=0.00..5.48 rows=1 width=4) (actual\n>>> time=0.000..0.000 rows=1 loops=1)\"\n>>> \"Total runtime: 0.000 ms\"\n>>\n>> Feh, this looks like the \"windows does not report times\" bug, which makes\n>> it hard to compare ...\n>>\n>> -- \n>> --Josh\n>>\n>> Josh Berkus\n>> Aglio Database Solutions\n>> San Francisco\n> \n\n", "msg_date": "Thu, 22 Sep 2005 18:43:31 -0400", "msg_from": "\"Jean-Pierre Pelletier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fw: Queries 15 times slower on 8.1 beta 2 than on 8.0" } ]
[ { "msg_contents": " \nThe recommendation for effective_cache_size is about 2/3 of your\nserver's physical RAM (if the server is dedicated only for postgres).\nThis should have a significant impact on whether Postgres planner\nchooses indexes over sequential scans. \n\n-- \n Husam \n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jean-Pierre\nPelletier\nSent: Thursday, September 22, 2005 4:10 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\nmy settings are:\n\neffective_cache_size = 1000\nrandom_page_cost = 4\nwork_mem = 20000\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Jean-Pierre Pelletier\" <[email protected]>\nSent: Thursday, September 22, 2005 6:58 PM\nSubject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0\n\n\n> Jean-Pierre,\n> \n>> How do I produce an \"Index scan plan\" ?\n> \n> You just did. What's your effective_cache_size set to? \n> random_page_cost? work_mem?\n> \n> -- \n> --Josh\n> \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n**********************************************************************\nThis message contains confidential information intended only for the \nuse of the addressee(s) named above and may contain information that \nis legally privileged. If you are not the addressee, or the person \nresponsible for delivering it to the addressee, you are hereby \nnotified that reading, disseminating, distributing or copying this \nmessage is strictly prohibited. If you have received this message by \nmistake, please immediately notify us by replying to the message and \ndelete the original message immediately thereafter.\n\nThank you. FADLD Tag\n**********************************************************************\n\n", "msg_date": "Thu, 22 Sep 2005 16:18:54 -0700", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries 15 times slower on 8.1 beta 2 than on 8.0" } ]
[ { "msg_contents": "Hello Tom,\n\n \n\nThanks a lot for your quick response. Which version do you think is the\nmore stable one that we should upgrade to?\n\n \n\nPlease provide us with the Upgrade instructions/documentation to be\nfollowed for both red hat and PostgreSQL. \n\n \n\nThanks and Best Regards,\n\nAnu\n\n \n\n \n\n-----Original Message-----\n\nFrom: Tom Lane [mailto:[email protected]] \n\nSent: Wednesday, September 21, 2005 12:15 PM\n\nTo: Anu Kucharlapati\n\nCc: [email protected]; Owen Blizzard\n\nSubject: Re: [PERFORM] Deadlock Issue with PostgreSQL \n\n \n\n\"Anu Kucharlapati\" <[email protected]> writes:\n\n> Red Hat Linux release 7.3\n\n> Apache 1.3.20\n\n> PostgreSQL 7.1.3\n\n \n\nI'm not sure about Apache, but both the RHL and Postgres versions\n\nyou are using are stone age --- *please* update. Red Hat stopped\n\nsupporting that release years ago, and the PG community isn't\n\nsupporting 7.1.* anymore either. There are too many known problems\n\nin 7.1.* that are unfixable without a major-version upgrade.\n\n \n\n regards, tom lane\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\nHello Tom,\n \nThanks a lot for your quick response. Which version do you think is the\nmore stable one that we should upgrade to?\n \nPlease provide us with the Upgrade instructions/documentation to be\nfollowed for both red hat and PostgreSQL. \n \nThanks and Best Regards,\nAnu\n \n \n-----Original Message-----\nFrom: Tom Lane\n[mailto:[email protected]] \nSent: Wednesday, September 21, 2005 12:15 PM\nTo: Anu Kucharlapati\nCc: [email protected]; Owen Blizzard\nSubject: Re: [PERFORM] Deadlock Issue with PostgreSQL \n \n\"Anu Kucharlapati\" <[email protected]> writes:\n> Red Hat Linux release 7.3\n> Apache 1.3.20\n> PostgreSQL 7.1.3\n \nI'm not sure about Apache, but both the RHL and Postgres versions\nyou are using are stone age --- *please* update.  Red Hat stopped\nsupporting that release years ago, and the PG community isn't\nsupporting 7.1.* anymore either.  There are too many known problems\nin 7.1.* that are unfixable without a major-version upgrade.\n \n                  regards, tom lane", "msg_date": "Fri, 23 Sep 2005 10:21:33 +1000", "msg_from": "\"Anu Kucharlapati\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Deadlock Issue with PostgreSQL " }, { "msg_contents": "Anu,\n\n> Thanks a lot for your quick response. Which version do you think is the\n> more stable one that we should upgrade to?\n\n8.0.3\n\n> Please provide us with the Upgrade instructions/documentation to be\n> followed for both red hat and PostgreSQL.\n\nSee the PostgreSQL documentation for upgrade instructions. Given how old \nyour version is, you might need to go through an intermediate version, \nlike 7.3.\n\nRed Hat upgrades are between you and Red Hat. They sell support for a \nreason ...\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 22 Sep 2005 19:49:18 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Deadlock Issue with PostgreSQL" } ]
[ { "msg_contents": "Hi,\n\nI must convert an old table into a new table. The conversion goes at ~\n100 records per second. Given the fact that I must convert 40 million\nrecords, it takes too much time: more hours than the 48 hour weekend I\nhave for the conversion;-).\n\nThe tables are rather simple: both tables only have a primary key\nconstraint (of type text) and no other indexes. I only copy 3 columns. I\nuse Java for the conversion. For the exact code see below.\n\nDuring the conversion my processor load is almost non existant. The\nharddisk throughput is ~ 6 megabyte/second max (measured with iostat).\n\nMy platform is Debian Sarge AMD64. My hardware is a Tyan Thunder K8W\n2885 motherboard, 2 Opteron 248 processors, 2 GB RAM, a SATA bootdisk\nwith / and swap, and a 3Ware 9500S-8 RAID-5 controller with 5 attached\nSATA disks with /home and /var. /var contains *all* PostgreSQL log and\ndatabase files (default Debian installation).\n\nOutput of hdparm -Tt /dev/sdb (sdb is the RAID opartition)\n\n/dev/sdb:\n Timing cached reads: 1696 MB in 2.00 seconds = 846.86 MB/sec\n Timing buffered disk reads: 246 MB in 3.01 seconds = 81.79 MB/sec\n\n\nI want to determine the cause of my performance problem (if it is one).\n\n1. Is this a performance I can expect?\n2. If not, how can I determine the cause?\n3. Can I anyhow improve the performance without replacing my hardware,\ne.g. by tweaking the software?\n4. Is there a Linux (Debian) tool that I can use to benchmark write\nperformance?\n\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\nThe Java code I use for the conversion :\n\n//////////////// ....\nResultSet resultSet = selectStatement.executeQuery(\n\"select ordernummer, orderdatum, klantnummer from odbc.orders order by\nordernummer\");\n\t\t\t\nconnection.setAutoCommit(false);\n\t\t\t\nPreparedStatement ordersInsertStatement = \nconnection.prepareStatement(\"insert into prototype.orders\n(objectid,ordernumber,orderdate,customernumber) values (?,?,?,?)\");\t\t\t\n\t\t\t\nwhile( resultSet.next() )\n{\n\nif( (++record % 100) == 0){\n\tSystem.err.println( \"handling record: \" + record);\n}\n\t\t\t\t\n// the next line can do > 1.000.000 objectId/sec\nString orderObjectId = ObjectIdGenerator.newObjectId();\nordersInsertStatement.setString(1,orderObjectId);\nordersInsertStatement.setInt(2,resultSet.getInt(\"ordernummer\")); \nordersInsertStatement.setDate(3,resultSet.getDate(\"orderdatum\")); \nordersInsertStatement.setInt(4,resultSet.getInt(\"klantnummer\")); \n\t\t\nordersInsertStatement.execute();\n\t\t\t\t\n}\t\n\t\t\t\nconnection.commit();\n\n", "msg_date": "Fri, 23 Sep 2005 08:49:27 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "How to determine cause of performance problem?" }, { "msg_contents": "Hi Joost,\n\nwhy do you convert programmatically? I would do something like\n\ncreate sequence s_objectid;\n\ninsert into \nprototype.orders(objectid,ordernumber,orderdate,customernumber)\nselect next_val('s_objectid'),ordernummer, orderdatum, klantnummer from\nodbc.orders\n\n\nSounds a lot faster to me.\n\n\n/Ulrich\n", "msg_date": "Fri, 23 Sep 2005 11:31:25 +0200", "msg_from": "Ulrich Wisser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On Fri, Sep 23, 2005 at 08:49:27AM +0200, Joost Kraaijeveld wrote:\n>3. 
Can I anyhow improve the performance without replacing my hardware,\n>e.g. by tweaking the software?\n\nIt's not clear what your object id generator does. If it's just a\nsequence, it's not clear that you need this program at all--just use a\nSELECT INTO and make the object id a SERIAL.\n\nIf you do need to control the object id or do some other processing\nbefore putting the data into the new table, rewrite to use a COPY\ninstead of an INSERT.\n\nMike Stone\n", "msg_date": "Fri, 23 Sep 2005 05:55:17 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On Fri, 2005-09-23 at 05:55 -0400, Michael Stone wrote:\n> It's not clear what your object id generator does. If it's just a\n> sequence, it's not clear that you need this program at all--just use a\n> SELECT INTO and make the object id a SERIAL.\nIt generates a GUID (and no, I do not want to turn this in a discussion\nabout GUIDs). As in the Java code comment: it is not the generation of\nthe GUID that is the problem (that is, I can generate millions of them\nper second.)\n\n> If you do need to control the object id or do some other processing\n> before putting the data into the new table, rewrite to use a COPY\n> instead of an INSERT.\nIt is actually the shortest piece of code that gives me a poor\nperformance. The conversion problem is much, much larger and much much\nmore complicated. \n\nI suspect that either my hardware is to slow (but then again, see the\nspecs), or my Debian is to slow, or my PostgreSQL settings are wrong.\n\nBut I have no clue where to begin with determining the bottleneck (it\neven may be a normal performance for all I know: I have no experience\nwith converting such (large) database).\n\nAny suggestions?\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Fri, 23 Sep 2005 12:21:15 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:\n>On Fri, 2005-09-23 at 05:55 -0400, Michael Stone wrote:\n>> It's not clear what your object id generator does. If it's just a\n>> sequence, it's not clear that you need this program at all--just use a\n>> SELECT INTO and make the object id a SERIAL.\n>It generates a GUID (and no, I do not want to turn this in a discussion\n>about GUIDs). As in the Java code comment: it is not the generation of\n>the GUID that is the problem (that is, I can generate millions of them\n>per second.)\n\nI didn't say it was, did I? If you use a SELECT INTO instead of\nSELECTing each record and then reINSERTing it you avoid a round trip\nlatency for each row. There's a reason I said \"if it's just a sequence\".\n\n>> If you do need to control the object id or do some other processing\n>> before putting the data into the new table, rewrite to use a COPY\n>> instead of an INSERT.\n>It is actually the shortest piece of code that gives me a poor\n>performance. The conversion problem is much, much larger and much much\n>more complicated. 
\n\nOk, that's great, but you didn't respond to the suggestion of using COPY\nINTO instead of INSERT.\n\n>But I have no clue where to begin with determining the bottleneck (it\n>even may be a normal performance for all I know: I have no experience\n>with converting such (large) database).\n>\n>Any suggestions?\n\nRespond to the first suggestion?\n\nMike Stone\n", "msg_date": "Fri, 23 Sep 2005 07:05:00 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On 23-9-2005 13:05, Michael Stone wrote:\n> On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:\n> \n> Ok, that's great, but you didn't respond to the suggestion of using COPY\n> INTO instead of INSERT.\n> \n>> But I have no clue where to begin with determining the bottleneck (it\n>> even may be a normal performance for all I know: I have no experience\n>> with converting such (large) database).\n>>\n>> Any suggestions?\n> \n> \n> Respond to the first suggestion?\n\nAnother suggestion:\nHow many indexes and constraints are on the new table?\nDrop all of them and recreate them once the table is filled. Of course \nthat only works if you know your data will be ok (which is normal for \nimports of already conforming data like database dumps of existing tables).\nThis will give major performance improvements, if you have indexes and \nsuch on the new table.\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 23 Sep 2005 13:19:55 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "Joost,\n\nI presume you are using a relatively new jdbc driver. Make sure you \nhave added prepareThreshold=1 to the url to that it will use a named \nserver side prepared statement\n\nYou could also use your mod 100 code block to implement batch \nprocessing of the inserts.\n\nsee addBatch, in jdbc specs\n\nDave\n\nOn 23-Sep-05, at 2:49 AM, Joost Kraaijeveld wrote:\n\n> Hi,\n>\n> I must convert an old table into a new table. The conversion goes at ~\n> 100 records per second. Given the fact that I must convert 40 million\n> records, it takes too much time: more hours than the 48 hour weekend I\n> have for the conversion;-).\n>\n> The tables are rather simple: both tables only have a primary key\n> constraint (of type text) and no other indexes. I only copy 3 \n> columns. I\n> use Java for the conversion. For the exact code see below.\n>\n> During the conversion my processor load is almost non existant. The\n> harddisk throughput is ~ 6 megabyte/second max (measured with iostat).\n>\n> My platform is Debian Sarge AMD64. My hardware is a Tyan Thunder K8W\n> 2885 motherboard, 2 Opteron 248 processors, 2 GB RAM, a SATA bootdisk\n> with / and swap, and a 3Ware 9500S-8 RAID-5 controller with 5 attached\n> SATA disks with /home and /var. /var contains *all* PostgreSQL log and\n> database files (default Debian installation).\n>\n> Output of hdparm -Tt /dev/sdb (sdb is the RAID opartition)\n>\n> /dev/sdb:\n> Timing cached reads: 1696 MB in 2.00 seconds = 846.86 MB/sec\n> Timing buffered disk reads: 246 MB in 3.01 seconds = 81.79 MB/sec\n>\n>\n> I want to determine the cause of my performance problem (if it is \n> one).\n>\n> 1. Is this a performance I can expect?\n> 2. If not, how can I determine the cause?\n> 3. Can I anyhow improve the performance without replacing my hardware,\n> e.g. by tweaking the software?\n> 4. 
Is there a Linux (Debian) tool that I can use to benchmark write\n> performance?\n>\n>\n>\n> -- \n> Groeten,\n>\n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> e-mail: [email protected]\n> web: www.askesis.nl\n>\n>\n> The Java code I use for the conversion :\n>\n> //////////////// ....\n> ResultSet resultSet = selectStatement.executeQuery(\n> \"select ordernummer, orderdatum, klantnummer from odbc.orders order by\n> ordernummer\");\n>\n> connection.setAutoCommit(false);\n>\n> PreparedStatement ordersInsertStatement =\n> connection.prepareStatement(\"insert into prototype.orders\n> (objectid,ordernumber,orderdate,customernumber) values (?,?,?,?)\");\n>\n> while( resultSet.next() )\n> {\n>\n> if( (++record % 100) == 0){\n> System.err.println( \"handling record: \" + record);\n> }\n>\n> // the next line can do > 1.000.000 objectId/sec\n> String orderObjectId = ObjectIdGenerator.newObjectId();\n> ordersInsertStatement.setString(1,orderObjectId);\n> ordersInsertStatement.setInt(2,resultSet.getInt(\"ordernummer\"));\n> ordersInsertStatement.setDate(3,resultSet.getDate(\"orderdatum\"));\n> ordersInsertStatement.setInt(4,resultSet.getInt(\"klantnummer\"));\n>\n> ordersInsertStatement.execute();\n>\n> }\n>\n> connection.commit();\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n\n", "msg_date": "Fri, 23 Sep 2005 07:30:59 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On Fri, 2005-09-23 at 13:19 +0200, Arjen van der Meijden wrote:\n> Another suggestion:\n> How many indexes and constraints are on the new table?\nAs mentioned in the first mail: in this tables only primary key\nconstraints, no other indexes or constraints.\n\n> Drop all of them and recreate them once the table is filled. Of course \n> that only works if you know your data will be ok (which is normal for \n> imports of already conforming data like database dumps of existing tables).\n> This will give major performance improvements, if you have indexes and \n> such on the new table.\nI will test this a for perfomance improvement, but still, I wonder if ~\n100 inserts/second is a reasonable performance for my software/hardware\ncombination.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Fri, 23 Sep 2005 15:35:58 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On Fri, 2005-09-23 at 07:05 -0400, Michael Stone wrote: \n> On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:\n> >On Fri, 2005-09-23 at 05:55 -0400, Michael Stone wrote:\n> I didn't say it was, did I? \nNo, you did not. But only last week someon'es head was (luckely for him\nonly virtually) almost chopped off for suggesting the usage of GUIDs ;-)\n\n\n> Ok, that's great, but you didn't respond to the suggestion of using COPY\n> INTO instead of INSERT.\nPart of the code I left out are some data conversions (e.g. from\npath-to-file to blob, from text to date (not castable because of the\nhomebrew original format)). 
I don't believe that I can do these in a SQL\nstatement, can I (my knowledge of SQL as a langage is not that good)? .\nHowever I will investigate if I can do the conversion in two steps and\ncheck if it is faster.\n\nBut still, I wonder if ~100 inserts/second is a reasonable performance\nfor my software/hardware combination.\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Fri, 23 Sep 2005 15:49:25 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "On 23-9-2005 15:35, Joost Kraaijeveld wrote:\n> On Fri, 2005-09-23 at 13:19 +0200, Arjen van der Meijden wrote:\n>>Drop all of them and recreate them once the table is filled. Of course \n>>that only works if you know your data will be ok (which is normal for \n>>imports of already conforming data like database dumps of existing tables).\n>>This will give major performance improvements, if you have indexes and \n>>such on the new table.\n> \n> I will test this a for perfomance improvement, but still, I wonder if ~\n> 100 inserts/second is a reasonable performance for my software/hardware\n> combination.\n\nFor the hardware: no, I don't think it is for such a simple table/small \nrecordsize.\nI did a few batch-inserts with indexes on tables and was very \ndisappointed about the time it took. But with no indexes and constraints \nleft it flew and the table of 7 million records (of 3 ints and 2 \nbigints) was imported in 75 seconds, on a bit simpler hardware. That was \ndone using a simple pg_dump-built sql-file which was then fed to psql as \ninput. And of course that used the local unix socket, not the local \nnetwork interface (I don't know which jdbc takes).\nBut generating a single transaction (as you do) with inserts shouldn't \nbe that much slower.\n\nSo I don't think its your hardware, nor your postgresql, although a bit \nextra maintenance_work_mem may help, if you haven't touched that.\nLeaving the queries, the application and the driver. But I don't have \nthat much experience with jdbc and postgresql-performance. In php I \nwouldn't select all the 40M records at once, the resultset would be in \nthe clients-memory and that may actually cause trouble. But I don't know \nhow that is implemented in JDBC, it may of course be using cursors and \nit would be less of a problem than perhaps.\nYou could try writing the inserts to file and see how long that takes, \nto eliminate the possibility of your application being slow on other \nparts than the inserting of data. If that is fast enough, a last resort \nmay be to write a csv-file from java and use that with a copy-statement \nin psql ;)\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 23 Sep 2005 16:06:54 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" 
}, { "msg_contents": "Joost Kraaijeveld <[email protected]> writes:\n> I will test this a for perfomance improvement, but still, I wonder if ~\n> 100 inserts/second is a reasonable performance for my software/hardware\n> combination.\n\nIs the client code running on the same machine as the database server?\nIf not, what's the network delay and latency between them?\n\nThe major problem you're going to have here is at least one network\nround trip per row inserted --- possibly more, if the jdbc driver is\ndoing \"helpful\" stuff behind your back like starting/committing\ntransactions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 10:33:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem? " }, { "msg_contents": "On Fri, 2005-09-23 at 10:33 -0400, Tom Lane wrote:\n> Is the client code running on the same machine as the database server?\n> If not, what's the network delay and latency between them?\nYes, it is running on the same machine.\n\n\n> The major problem you're going to have here is at least one network\n> round trip per row inserted --- possibly more, if the jdbc driver is\n> doing \"helpful\" stuff behind your back like starting/committing\n> transactions.\nOK, I will look into that.\n\nBut do you maybe know a pointer to info, or tools that can measure, what\nmy machine is doing during all the time it is doing nothing? Something\nlike the performance monitor in Windows but than for Linux?\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Fri, 23 Sep 2005 16:47:04 +0200", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to determine cause of performance problem?" }, { "msg_contents": "Joost Kraaijeveld <[email protected]> writes:\n> But do you maybe know a pointer to info, or tools that can measure, what\n> my machine is doing during all the time it is doing nothing? Something\n> like the performance monitor in Windows but than for Linux?\n\ntop, vmstat, iostat, sar, strace, oprofile, ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 10:50:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem? " }, { "msg_contents": "On Fri, Sep 23, 2005 at 03:49:25PM +0200, Joost Kraaijeveld wrote:\n>On Fri, 2005-09-23 at 07:05 -0400, Michael Stone wrote: \n>> Ok, that's great, but you didn't respond to the suggestion of using COPY\n>> INTO instead of INSERT.\n>Part of the code I left out are some data conversions (e.g. from\n>path-to-file to blob, from text to date (not castable because of the\n>homebrew original format)). I don't believe that I can do these in a SQL\n>statement, can I (my knowledge of SQL as a langage is not that good)? .\n>However I will investigate if I can do the conversion in two steps and\n>check if it is faster.\n\nI'm not sure what you're trying to say. \n\nYou're currently putting rows into the table by calling \"INSERT INTO\"\nfor each row. The sample code you send could be rewritten to use \"COPY\nINTO\" instead. For bulk inserts like you're doing, the copy approach\nwill be a lot faster. Instead of inserting one row, waiting for a\nreply, and inserting the next row, you just cram data down a pipe to the\nserver. 
\n\nSee:\nhttp://www.postgresql.org/docs/8.0/interactive/sql-copy.html\nhttp://www.faqs.org/docs/ppbook/x5504.htm\n\nMike Stone\n", "msg_date": "Fri, 23 Sep 2005 11:31:18 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine cause of performance problem?" } ]
[ { "msg_contents": "I have concerns about whether we are overallocating memory for use in\nexternal sorts. (All code relating to this is in tuplesort.c)\n\nWhen we begin a sort we allocate (work_mem | maintenance_work_mem) and\nattempt to do the sort in memory. If the sort set is too big to fit in\nmemory we then write to disk and begin an external sort. The same memory\nallocation is used for both types of sort, AFAICS.\n\nThe external sort algorithm benefits from some memory but not much.\nKnuth says that the amount of memory required is very low, with a value\ntypically less than 1 kB. I/O overheads mean that there is benefit from\nhaving longer sequential writes, so the optimum is much larger than\nthat. I've not seen any data that indicates that a setting higher than\n16 MB adds any value at all to a large external sort. I have some\nindications from private tests that very high memory settings may\nactually hinder performance of the sorts, though I cannot explain that\nand wonder whether it is the performance tests themselves that have\nissues.\n\nDoes anyone have any clear data that shows the value of large settings\nof work_mem when the data to be sorted is much larger than memory? (I am\nwell aware of the value of setting work_mem higher for smaller sorts, so\nany performance data needs to reflect only very large sorts). \n\nIf not, I would propose that when we move from qsort to tapesort mode we\nfree the larger work_mem setting (if one exists) and allocate only a\nlower, though still optimal setting for the tapesort. That way the\nmemory can be freed for use by other users or the OS while the tapesort\nproceeds (which is usually quite a while...).\n\nFeedback, please.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 23 Sep 2005 10:37:12 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Releasing memory during External sorting?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> If not, I would propose that when we move from qsort to tapesort mode we\n> free the larger work_mem setting (if one exists) and allocate only a\n> lower, though still optimal setting for the tapesort. That way the\n> memory can be freed for use by other users or the OS while the tapesort\n> proceeds (which is usually quite a while...).\n\nOn most platforms it's quite unlikely that any memory would actually get\nreleased back to the OS before transaction end, because the memory\nblocks belonging to the tuplesort context will be intermixed with blocks\nbelonging to other contexts. So I think this is pretty pointless.\n(If you can't afford to have the sort using all of sort_mem, you've set\nsort_mem too large, anyway.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 10:09:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Releasing memory during External sorting? " }, { "msg_contents": "On Fri, 2005-09-23 at 10:09 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > If not, I would propose that when we move from qsort to tapesort mode we\n> > free the larger work_mem setting (if one exists) and allocate only a\n> > lower, though still optimal setting for the tapesort. 
That way the\n> > memory can be freed for use by other users or the OS while the tapesort\n> > proceeds (which is usually quite a while...).\n> \n> On most platforms it's quite unlikely that any memory would actually get\n> released back to the OS before transaction end, because the memory\n> blocks belonging to the tuplesort context will be intermixed with blocks\n> belonging to other contexts. So I think this is pretty pointless.\n\nI take it you mean pointless because of the way the memory allocation\nworks, rather than because giving memory back isn't worthwhile ?\n\nSurely the sort memory would be allocated in contiguous chunks? In some\ncases we might be talking about more than a GB of memory, so it'd be\ngood to get that back ASAP. I'm speculating....\n\n> (If you can't afford to have the sort using all of sort_mem, you've set\n> sort_mem too large, anyway.)\n\nSort takes care to allocate only what it needs as starts up. All I'm\nsuggesting is to take the same care when the sort mode changes. If the\nabove argument held water then we would just allocate all the memory in\none lump at startup, \"because we can afford to\", so I don't buy that. \n\nSince we know the predicted size of the sort set prior to starting the\nsort node, could we not use that information to allocate memory\nappropriately? i.e. if sort size is predicted to be more than twice the\nsize of work_mem, then just move straight to the external sort algorithm\nand set the work_mem down at the lower limit?\n\nThat is, unless somebody has evidence that having a very large memory\nhas any performance benefit for external sorting?\n\nBest Regards, Simon Riggs\n\n\n\n\n", "msg_date": "Fri, 23 Sep 2005 16:06:18 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Releasing memory during External sorting?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Since we know the predicted size of the sort set prior to starting the\n> sort node, could we not use that information to allocate memory\n> appropriately? i.e. if sort size is predicted to be more than twice the\n> size of work_mem, then just move straight to the external sort algorithm\n> and set the work_mem down at the lower limit?\n\nHave you actually read the sort code?\n\nDuring the run-forming phase it's definitely useful to eat all the\nmemory you can: that translates directly to longer initial runs and\nhence fewer merge passes. During the run-merging phase it's possible\nthat using less memory would not hurt performance any, but as already\nstated, I don't think it will actually end up cutting the backend's\nmemory footprint --- the sbrk point will be established during the run\nforming phase and it's unlikely to move back much until transaction end.\n\nAlso, if I recall the development of that code correctly, the reason for\nusing more than minimum memory during the merge phase is that writing or\nreading lots of tuples at once improves sequentiality of access to the\ntemp files. So I'm not sure that cutting down the memory wouldn't hurt\nperformance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 11:31:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Releasing memory during External sorting? 
" }, { "msg_contents": "> On most platforms it's quite unlikely that any memory would \n> actually get\n> released back to the OS before transaction end, because the memory\n> blocks belonging to the tuplesort context will be intermixed with \n> blocks\n> belonging to other contexts. So I think this is pretty pointless.\n> (If you can't afford to have the sort using all of sort_mem, you've \n> set\n> sort_mem too large, anyway.)\nOn OpenBSD 3.8 malloc use mmap(2) and no more sbrk.\nSo, as soon as the bloc is free, it returns to the OS.\nAccess to the freed pointer crashs immediatly.\n\nCordialement,\nJean-G�rard Pailloncy\n\n", "msg_date": "Fri, 23 Sep 2005 18:39:35 +0200", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Releasing memory during External sorting? " }, { "msg_contents": "On Fri, Sep 23, 2005 at 06:39:35PM +0200, Pailloncy Jean-Gerard wrote:\n> >On most platforms it's quite unlikely that any memory would actually\n> >get released back to the OS before transaction end, because the\n> >memory blocks belonging to the tuplesort context will be intermixed\n> >with blocks belonging to other contexts. So I think this is pretty\n> >pointless. (If you can't afford to have the sort using all of\n> >sort_mem, you've set sort_mem too large, anyway.)\n\n> On OpenBSD 3.8 malloc use mmap(2) and no more sbrk.\n> So, as soon as the bloc is free, it returns to the OS.\n> Access to the freed pointer crashs immediatly.\n\nInteresting point. Glibc also uses mmap() but only for allocations\ngreater than a few K, otherwise it's a waste of space.\n\nI guess you would have to look into the postgresql allocator to see if\nit doesn't divide the mmap()ed space up between multiple contexts.\nLarge allocations certainly appear to be passed off to malloc() but I\ndon't think execSort allocates all it's space in one go, it just counts\nthe space allocated by palloc().\n\nSo, unless someone goes and adds changes the tuplesort code to allocate\nbig blocks and use them only for tuples, I think you're going to run\ninto issues with data interleaved, meaning not much to give back to the\nOS...\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 23 Sep 2005 20:32:46 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Releasing memory during External sorting?" }, { "msg_contents": "Calculating Optimal memory for disk based sort is based only on minimizing\nIO.\nA previous post stated we can merge as many subfiles as we want in a single\npass,\nthis is not accurate, as we want to eliminate disk seeks also in the merge\nphase,\nalso the merging should be done by reading blocks of data from each subfile,\nif we have data of size N and M memory, then we will have K=N/M subfiles to\nmerge\nafter sorting each.\nin the merge operation if we want to merge all blocks in one pass we will\nread\nM/K data from each subfile into memory and begin merging, we will read\nanother M/K block\nwhen the buffer from a subfile is empty,\nwe would like disk seek time to be irrelavant when comparing to sequential\nIO time.\nWe notice that we are performing IO in blocks of N/K^2 which is M/(N/M)^2\nlet us assume that sequeential IO is done at 100MB/s and that\na random seek requires ~15ms. 
and we want seek time to be irrelevant by one order of magnitude;\nwe get that in the time of one random seek we can read 1.5MB of data,\nand we would get optimal performance if we perform IO in blocks of 15MB.\nSince in the merge algorithm shown above we perform IO in blocks of M/K,\nwe would like M/K > 15MB, i.e. M > K*15MB, which results in a very large memory requirement:\nM^2 > N*15MB\nM > sqrt(N*15MB)\nFor example, for sorting 10GB of data we would like M > 380MB\nfor optimal performance.\n\nAlternatively, we can choose a different algorithm in which we merge only a\nconstant number of subfiles together at a time, but then we will require\nmultiple passes to merge the entire file. We will require log(K) passes over\nthe entire data, and this approach obviously improves with an increase of memory.\n\nThe first approach requires 2 passes over the entire data and K^2+K random seeks;\nthe second approach (when merging l subfiles at a time) requires log(l,K)\npasses over the data and K*l+K random seeks.\n\n\nOn 9/23/05, Simon Riggs <[email protected]> wrote:\n>\n> I have concerns about whether we are overallocating memory for use in\n> external sorts. (All code relating to this is in tuplesort.c)\n>\n> When we begin a sort we allocate (work_mem | maintenance_work_mem) and\n> attempt to do the sort in memory. If the sort set is too big to fit in\n> memory we then write to disk and begin an external sort. The same memory\n> allocation is used for both types of sort, AFAICS.\n>\n> The external sort algorithm benefits from some memory but not much.\n> Knuth says that the amount of memory required is very low, with a value\n> typically less than 1 kB. I/O overheads mean that there is benefit from\n> having longer sequential writes, so the optimum is much larger than\n> that. I've not seen any data that indicates that a setting higher than\n> 16 MB adds any value at all to a large external sort. I have some\n> indications from private tests that very high memory settings may\n> actually hinder performance of the sorts, though I cannot explain that\n> and wonder whether it is the performance tests themselves that have\n> issues.\n>\n> Does anyone have any clear data that shows the value of large settings\n> of work_mem when the data to be sorted is much larger than memory? (I am\n> well aware of the value of setting work_mem higher for smaller sorts, so\n> any performance data needs to reflect only very large sorts).\n>\n> If not, I would propose that when we move from qsort to tapesort mode we\n> free the larger work_mem setting (if one exists) and allocate only a\n> lower, though still optimal setting for the tapesort. That way the\n> memory can be freed for use by other users or the OS while the tapesort\n> proceeds (which is usually quite a while...).\n>\n> Feedback, please.\n>\n> Best Regards, Simon Riggs\n>\n", "msg_date": "Sat, 24 Sep 2005 08:24:01 +0300", "msg_from": "Meir Maor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Releasing memory during External sorting?" 
}, { "msg_contents": "On Fri, 2005-09-23 at 11:31 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > Since we know the predicted size of the sort set prior to starting the\n> > sort node, could we not use that information to allocate memory\n> > appropriately? i.e. if sort size is predicted to be more than twice the\n> > size of work_mem, then just move straight to the external sort algorithm\n> > and set the work_mem down at the lower limit?\n> \n> Have you actually read the sort code?\n\nYes and Knuth too. Your research and code are incredible, almost\nuntouchable. Yet sort performance is important and empirical evidence\nsuggests that this can be improved upon significantly, so I am and will\nbe spending time trying to improve upon that. Another time...\n\nThis thread was aiming to plug a problem I saw with 8.1's ability to use\nvery large work_mem settings. I felt that either my performance numbers\nwere wrong or we needed to do something; I've not had anybody show me\nperformance numbers that prove mine doubtful, yet.\n\n> During the run-forming phase it's definitely useful to eat all the\n> memory you can: that translates directly to longer initial runs and\n> hence fewer merge passes. \n\nSounds good, but maybe that is not the dominant effect. I'll retest, on\nthe assumption that there is a benefit, but there's something wrong with\nmy earlier tests.\n\n> During the run-merging phase it's possible\n> that using less memory would not hurt performance any, but as already\n> stated, I don't think it will actually end up cutting the backend's\n> memory footprint --- the sbrk point will be established during the run\n> forming phase and it's unlikely to move back much until transaction end.\n\n> Also, if I recall the development of that code correctly, the reason for\n> using more than minimum memory during the merge phase is that writing or\n> reading lots of tuples at once improves sequentiality of access to the\n> temp files. So I'm not sure that cutting down the memory wouldn't hurt\n> performance.\n\nCutting memory below about 16 MB does definitely hurt external sort\nperformance; I explain that as being the effect of sequential access. I\nhaven't looked to nail down the breakpoint exactly since it seemed more\nimportant simply to say that there looked like there was one.. Its just\nthat raising it above that mark doesn't help much, according to my\ncurrent results.\n\nI'll get some more test results and repost them, next week. I will be\nvery happy if the results show that more memory helps.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 25 Sep 2005 18:45:22 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Releasing memory during External sorting?" } ]
[ { "msg_contents": "On Fri, 23 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n> I didn't deny on the third repeat or more, it can reach < 600 msec. It is\n> only because the result still in postgres cache, but how about in the first\n> run? I didn't dare, the values is un-acceptable. Because my table will grows\n> rapidly, it's about 100000 rows per-week. And the visitor will search\n> anything that I don't know, whether it's the repeated search or new search,\n> or whether it's in postgres cache or not.\n\nif you have enoush shared memory postgresql will keep index pages there.\n\n\n>\n> I just compare with http://www.postgresql.org, the search is quite fast, and\n> I don't know whether the site uses tsearch2 or something else. But as fas as\n> I know, if the rows reach >100 milion (I have try for 200 milion rows and it\n> seem very slow), even if don't use tsearch2, only use simple query like:\n> select f1, f2 from table1 where f2='blabla',\n> and f2 is indexes, my postgres still slow on the first time, about >10 sec.\n> because of this I tried something brand new to fullfill my needs. I have\n> used fti, and tsearch2 but still slow.\n>\n> I don't know what's going wrong with my postgres, what configuration must I\n> do to perform the query get fast result. Or must I use enterprisedb 2005 or\n> pervasive postgres (both uses postgres), I don't know very much about these\n> two products.\n\nyou didn't show us your configuration (hardware,postgresql and tsearch2),\nexplain analyze of your queries, so we can't help you.\nHow big is your database, tsearch2 index size ?\n\n\n>\n> Regards,\n> ahmad fajar\n>\n>\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:[email protected]]\n> Sent: Jumat, 23 September 2005 14:36\n> To: Ahmad Fajar\n> Cc: [email protected]\n> Subject: Re: [PERFORM] tsearch2 seem very slow\n>\n> Ahmad,\n>\n> how fast is repeated runs ? First time system could be very slow.\n> Also, have you checked my page\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n> and some info about tsearch2 internals\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>\n> \tOleg\n> On Thu, 22 Sep 2005, Ahmad Fajar wrote:\n>\n>> I have about 419804 rows in my article table. I have installed tsearch2\n> and\n>> its gist index correctly.\n>>\n>> My table structure is:\n>>\n>> CREATE TABLE tbarticles\n>>\n>> (\n>>\n>> articleid int4 NOT NULL,\n>>\n>> title varchar(250),\n>>\n>> mediaid int4,\n>>\n>> datee date,\n>>\n>> content text,\n>>\n>> contentvar text,\n>>\n>> mmcol float4 NOT NULL,\n>>\n>> sirkulasi float4,\n>>\n>> page varchar(10),\n>>\n>> tglisidata date,\n>>\n>> namapc varchar(12),\n>>\n>> usere varchar(12),\n>>\n>> file_pdf varchar(255),\n>>\n>> file_pdf2 varchar(50),\n>>\n>> kolom int4,\n>>\n>> size_jpeg int4,\n>>\n>> journalist varchar(120),\n>>\n>> ratebw float4,\n>>\n>> ratefc float4,\n>>\n>> fti tsvector,\n>>\n>> CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n>>\n>> ) WITHOUT OIDS;\n>>\n>> Create index fti_idx1 on tbarticles using gist (fti);\n>>\n>> Create index fti_idx2 on tbarticles using gist (datee, fti);\n>>\n>>\n>>\n>> But when I search something like:\n>>\n>> Select articleid, title, datee from tbarticles where fti @@\n>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n>>\n>> It takes about 30 sec. 
I run explain analyze and the index is used\n>> correctly.\n>>\n>>\n>>\n>> Then I try multi column index to filter by date, and my query something\n>> like:\n>>\n>> Select articleid, title, datee from tbarticles where fti @@\n>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >=\n> '2002-01-01'\n>> and datee <= current_date\n>>\n>> An it still run about 25 sec. I do run explain analyze and my multicolumn\n>> index is used correctly.\n>>\n>> This is not acceptable if want to publish my website if the search took\n> very\n>> longer.\n>>\n>>\n>>\n>> I have run vacuum full analyze before doing such query. What going wrong\n>> with my query?? Is there any way to make this faster?\n>>\n>> I have try to tune my postgres configuration, but it seem helpless. My\n> linux\n>> box is Redhat 4 AS, and\n>>\n>> the hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and configure\n> as\n>> RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n>>\n>>\n>>\n>> Please.help.help.\n>>\n>>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Fri, 23 Sep 2005 15:25:37 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "Hi Oleg,\n\nFor single index I try this query:\nexplain analyze \nselect articleid, title, datee from articles\nwhere fti @@ to_tsquery('bank&indonesia');\n\nanalyze result:\n----------------\n\"Index Scan using fti_idx on articles (cost=0.00..862.97 rows=420 width=51)\n(actual time=0.067..183761.324 rows=46186 loops=1)\"\n\" Index Cond: (fti @@ '\\'bank\\' & \\'indonesia\\''::tsquery)\"\n\"Total runtime: 183837.826 ms\"\n\nAnd for multicolumn index I try this query:\nexplain analyze \nselect articleid, title, datee from articles\nwhere fti @@ to_tsquery('bank&mega');\n\nanalyze result:\n----------------\n\"Index Scan using articles_x1 on articles (cost=0.00..848.01 rows=410\nwidth=51) (actual time=52.204..37914.135 rows=1841 loops=1)\"\n\" Index Cond: ((datee >= '2002-01-01'::date) AND (datee <=\n('now'::text)::date) AND (fti @@ '\\'bank\\' & \\'mega\\''::tsquery))\"\n\"Total runtime: 37933.757 ms\"\n\nThe table structure is as mention on the first talk. If you wanna know how\nmuch table in my database, it's about 100 tables or maybe more. Now I\ndevelop the version 2 of my web application, you can take a look at:\nhttp://www.mediatrac.net, so it will hold many datas. But the biggest table\nis article's table. On develop this version 2 I just use half data of the\narticle's table (about 419804 rows). May be if I import all of the article's\ntable data it will have 1 million rows. The article's table grows rapidly,\nabout 100000 rows per-week. My developing database size is 28 GB (not real\ndatabase, coz I still develop the version 2 and I use half of the data for\nplay around). 
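For the index-size question raised earlier in the thread, a quick sketch against the system catalogs; the table and index names below are the ones mentioned in this thread, so adjust them to the real ones. relpages is counted in 8 kB blocks and is only refreshed by VACUUM or ANALYZE, so run one of those first.

SELECT relname, relkind, reltuples, relpages,
       relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relname IN ('articles', 'tbarticles', 'fti_idx', 'fti_idx1', 'fti_idx2', 'articles_x1')
ORDER BY relpages DESC;

A GiST tsvector index many times larger than shared_buffers would go a long way toward explaining the slow cold-cache searches reported above.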
I just wanna to perform quick search (fulltext search) on my\narticle's table not other table. On version 1, the current running version I\nuse same hardware spesification as mention below, but there is no fulltext\nsearch. So I develop the new version with new features, new interface and\ninclude the fulltext search.\n\nI do know, if the application finish, I must use powerfull hardware. But how\ncan I guarantee the application will run smooth, if I do fulltext search on\n419804 rows in a table it took a long time to get the result. \n\nCould you or friends in this maling-list help me....plz..plzz\n\nTsearch2 configuration:\n-------------------------\nI use default configuration, english stop word file as tsearch2 provide,\nstem dictionary as default (coz I don't know how to configure and add new\ndata to stem dictionary) and I add some words to the english stop word file.\n\nPostgresql configuration\n-------------------------\nmax_connections = 32\nshared_buffers = 32768\nsort_mem = 8192\nvacuum_mem = 65536\nwork_mem = 16384\nmaintenance_work_mem = 65536\nmax_fsm_pages = 30000\nmax_fsm_relations = 1000\nmax_files_per_process = 100000\ncheckpoint_segments = 15\neffective_cache_size = 192000\nrandom_page_cost = 2\ngeqo = true\ngeqo_threshold = 50\ngeqo_effort = 5\ngeqo_pool_size = 0\ngeqo_generations = 0\ngeqo_selection_bias = 2.0\nfrom_collapse_limit = 10\njoin_collapse_limit = 15\n\nOS configuration:\n------------------\nI use Redhat 4 AS, kernel 2.6.9-11\nkernel.shmmax=1073741824\nkernel.sem=250 32000 100 128\nfs.aio-max-nr=5242880\nthe server I configure just only for postgresql, no other service is running\nlike: www, samba, ftp, email, firewall \n\nhardware configuration:\n------------------------\nMotherboard ASUS P5GD1\nProcessor P4 3,2 GHz\nMemory 2 GB DDR 400, \n2x200 GB Serial ATA 7200 RPM UltraATA/133, configure as RAID0 for postgresql\ndata and the partition is EXT3\n1x80 GB EIDE 7200 RPM configure for system and home directory and the\npartiton is EXT3\n\nDid I miss something?\n\nRegards,\nahmad fajar\n\n\n-----Original Message-----\nFrom: Oleg Bartunov [mailto:[email protected]] \nSent: Jumat, 23 September 2005 18:26\nTo: Ahmad Fajar\nCc: [email protected]\nSubject: RE: [PERFORM] tsearch2 seem very slow\n\nOn Fri, 23 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n> I didn't deny on the third repeat or more, it can reach < 600 msec. It is\n> only because the result still in postgres cache, but how about in the\nfirst\n> run? I didn't dare, the values is un-acceptable. Because my table will\ngrows\n> rapidly, it's about 100000 rows per-week. And the visitor will search\n> anything that I don't know, whether it's the repeated search or new\nsearch,\n> or whether it's in postgres cache or not.\n\nif you have enoush shared memory postgresql will keep index pages there.\n\n\n>\n> I just compare with http://www.postgresql.org, the search is quite fast,\nand\n> I don't know whether the site uses tsearch2 or something else. But as fas\nas\n> I know, if the rows reach >100 milion (I have try for 200 milion rows and\nit\n> seem very slow), even if don't use tsearch2, only use simple query like:\n> select f1, f2 from table1 where f2='blabla',\n> and f2 is indexes, my postgres still slow on the first time, about >10\nsec.\n> because of this I tried something brand new to fullfill my needs. I have\n> used fti, and tsearch2 but still slow.\n>\n> I don't know what's going wrong with my postgres, what configuration must\nI\n> do to perform the query get fast result. 
Or must I use enterprisedb 2005\nor\n> pervasive postgres (both uses postgres), I don't know very much about\nthese\n> two products.\n\nyou didn't show us your configuration (hardware,postgresql and tsearch2),\nexplain analyze of your queries, so we can't help you.\nHow big is your database, tsearch2 index size ?\n\n\n>\n> Regards,\n> ahmad fajar\n>\n>\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:[email protected]]\n> Sent: Jumat, 23 September 2005 14:36\n> To: Ahmad Fajar\n> Cc: [email protected]\n> Subject: Re: [PERFORM] tsearch2 seem very slow\n>\n> Ahmad,\n>\n> how fast is repeated runs ? First time system could be very slow.\n> Also, have you checked my page\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n> and some info about tsearch2 internals\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>\n> \tOleg\n> On Thu, 22 Sep 2005, Ahmad Fajar wrote:\n>\n>> I have about 419804 rows in my article table. I have installed tsearch2\n> and\n>> its gist index correctly.\n>>\n>> My table structure is:\n>>\n>> CREATE TABLE tbarticles\n>>\n>> (\n>>\n>> articleid int4 NOT NULL,\n>>\n>> title varchar(250),\n>>\n>> mediaid int4,\n>>\n>> datee date,\n>>\n>> content text,\n>>\n>> contentvar text,\n>>\n>> mmcol float4 NOT NULL,\n>>\n>> sirkulasi float4,\n>>\n>> page varchar(10),\n>>\n>> tglisidata date,\n>>\n>> namapc varchar(12),\n>>\n>> usere varchar(12),\n>>\n>> file_pdf varchar(255),\n>>\n>> file_pdf2 varchar(50),\n>>\n>> kolom int4,\n>>\n>> size_jpeg int4,\n>>\n>> journalist varchar(120),\n>>\n>> ratebw float4,\n>>\n>> ratefc float4,\n>>\n>> fti tsvector,\n>>\n>> CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n>>\n>> ) WITHOUT OIDS;\n>>\n>> Create index fti_idx1 on tbarticles using gist (fti);\n>>\n>> Create index fti_idx2 on tbarticles using gist (datee, fti);\n>>\n>>\n>>\n>> But when I search something like:\n>>\n>> Select articleid, title, datee from tbarticles where fti @@\n>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n>>\n>> It takes about 30 sec. I run explain analyze and the index is used\n>> correctly.\n>>\n>>\n>>\n>> Then I try multi column index to filter by date, and my query something\n>> like:\n>>\n>> Select articleid, title, datee from tbarticles where fti @@\n>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >=\n> '2002-01-01'\n>> and datee <= current_date\n>>\n>> An it still run about 25 sec. I do run explain analyze and my multicolumn\n>> index is used correctly.\n>>\n>> This is not acceptable if want to publish my website if the search took\n> very\n>> longer.\n>>\n>>\n>>\n>> I have run vacuum full analyze before doing such query. What going wrong\n>> with my query?? Is there any way to make this faster?\n>>\n>> I have try to tune my postgres configuration, but it seem helpless. 
My\n> linux\n>> box is Redhat 4 AS, and\n>>\n>> the hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and configure\n> as\n>> RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n>>\n>>\n>>\n>> Please.help.help.\n>>\n>>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 23 Sep 2005 23:40:05 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "Ahmad,\n\nwhat's about the number of unique words ? I mean stat() function.\nSometimes, it helps to identify garbage words.\nHow big is your articles (average length) ?\n\nplease, cut'n paste queries and output from psql ! How fast are \nnext queries ?\n\n \tOleg\nOn Fri, 23 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n> For single index I try this query:\n> explain analyze\n> select articleid, title, datee from articles\n> where fti @@ to_tsquery('bank&indonesia');\n>\n> analyze result:\n> ----------------\n> \"Index Scan using fti_idx on articles (cost=0.00..862.97 rows=420 width=51)\n> (actual time=0.067..183761.324 rows=46186 loops=1)\"\n> \" Index Cond: (fti @@ '\\'bank\\' & \\'indonesia\\''::tsquery)\"\n> \"Total runtime: 183837.826 ms\"\n>\n> And for multicolumn index I try this query:\n> explain analyze\n> select articleid, title, datee from articles\n> where fti @@ to_tsquery('bank&mega');\n>\n> analyze result:\n> ----------------\n> \"Index Scan using articles_x1 on articles (cost=0.00..848.01 rows=410\n> width=51) (actual time=52.204..37914.135 rows=1841 loops=1)\"\n> \" Index Cond: ((datee >= '2002-01-01'::date) AND (datee <=\n> ('now'::text)::date) AND (fti @@ '\\'bank\\' & \\'mega\\''::tsquery))\"\n> \"Total runtime: 37933.757 ms\"\n>\n> The table structure is as mention on the first talk. If you wanna know how\n> much table in my database, it's about 100 tables or maybe more. Now I\n> develop the version 2 of my web application, you can take a look at:\n> http://www.mediatrac.net, so it will hold many datas. But the biggest table\n> is article's table. On develop this version 2 I just use half data of the\n> article's table (about 419804 rows). May be if I import all of the article's\n> table data it will have 1 million rows. The article's table grows rapidly,\n> about 100000 rows per-week. My developing database size is 28 GB (not real\n> database, coz I still develop the version 2 and I use half of the data for\n> play around). I just wanna to perform quick search (fulltext search) on my\n> article's table not other table. On version 1, the current running version I\n> use same hardware spesification as mention below, but there is no fulltext\n> search. So I develop the new version with new features, new interface and\n> include the fulltext search.\n>\n> I do know, if the application finish, I must use powerfull hardware. 
But how\n> can I guarantee the application will run smooth, if I do fulltext search on\n> 419804 rows in a table it took a long time to get the result.\n>\n> Could you or friends in this maling-list help me....plz..plzz\n>\n> Tsearch2 configuration:\n> -------------------------\n> I use default configuration, english stop word file as tsearch2 provide,\n> stem dictionary as default (coz I don't know how to configure and add new\n> data to stem dictionary) and I add some words to the english stop word file.\n>\n> Postgresql configuration\n> -------------------------\n> max_connections = 32\n> shared_buffers = 32768\n> sort_mem = 8192\n> vacuum_mem = 65536\n> work_mem = 16384\n> maintenance_work_mem = 65536\n> max_fsm_pages = 30000\n> max_fsm_relations = 1000\n> max_files_per_process = 100000\n> checkpoint_segments = 15\n> effective_cache_size = 192000\n> random_page_cost = 2\n> geqo = true\n> geqo_threshold = 50\n> geqo_effort = 5\n> geqo_pool_size = 0\n> geqo_generations = 0\n> geqo_selection_bias = 2.0\n> from_collapse_limit = 10\n> join_collapse_limit = 15\n>\n> OS configuration:\n> ------------------\n> I use Redhat 4 AS, kernel 2.6.9-11\n> kernel.shmmax=1073741824\n> kernel.sem=250 32000 100 128\n> fs.aio-max-nr=5242880\n> the server I configure just only for postgresql, no other service is running\n> like: www, samba, ftp, email, firewall\n>\n> hardware configuration:\n> ------------------------\n> Motherboard ASUS P5GD1\n> Processor P4 3,2 GHz\n> Memory 2 GB DDR 400,\n> 2x200 GB Serial ATA 7200 RPM UltraATA/133, configure as RAID0 for postgresql\n> data and the partition is EXT3\n> 1x80 GB EIDE 7200 RPM configure for system and home directory and the\n> partiton is EXT3\n>\n> Did I miss something?\n>\n> Regards,\n> ahmad fajar\n>\n>\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:[email protected]]\n> Sent: Jumat, 23 September 2005 18:26\n> To: Ahmad Fajar\n> Cc: [email protected]\n> Subject: RE: [PERFORM] tsearch2 seem very slow\n>\n> On Fri, 23 Sep 2005, Ahmad Fajar wrote:\n>\n>> Hi Oleg,\n>>\n>> I didn't deny on the third repeat or more, it can reach < 600 msec. It is\n>> only because the result still in postgres cache, but how about in the\n> first\n>> run? I didn't dare, the values is un-acceptable. Because my table will\n> grows\n>> rapidly, it's about 100000 rows per-week. And the visitor will search\n>> anything that I don't know, whether it's the repeated search or new\n> search,\n>> or whether it's in postgres cache or not.\n>\n> if you have enoush shared memory postgresql will keep index pages there.\n>\n>\n>>\n>> I just compare with http://www.postgresql.org, the search is quite fast,\n> and\n>> I don't know whether the site uses tsearch2 or something else. But as fas\n> as\n>> I know, if the rows reach >100 milion (I have try for 200 milion rows and\n> it\n>> seem very slow), even if don't use tsearch2, only use simple query like:\n>> select f1, f2 from table1 where f2='blabla',\n>> and f2 is indexes, my postgres still slow on the first time, about >10\n> sec.\n>> because of this I tried something brand new to fullfill my needs. I have\n>> used fti, and tsearch2 but still slow.\n>>\n>> I don't know what's going wrong with my postgres, what configuration must\n> I\n>> do to perform the query get fast result. 
Or must I use enterprisedb 2005\n> or\n>> pervasive postgres (both uses postgres), I don't know very much about\n> these\n>> two products.\n>\n> you didn't show us your configuration (hardware,postgresql and tsearch2),\n> explain analyze of your queries, so we can't help you.\n> How big is your database, tsearch2 index size ?\n>\n>\n>>\n>> Regards,\n>> ahmad fajar\n>>\n>>\n>> -----Original Message-----\n>> From: Oleg Bartunov [mailto:[email protected]]\n>> Sent: Jumat, 23 September 2005 14:36\n>> To: Ahmad Fajar\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] tsearch2 seem very slow\n>>\n>> Ahmad,\n>>\n>> how fast is repeated runs ? First time system could be very slow.\n>> Also, have you checked my page\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n>> and some info about tsearch2 internals\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>>\n>> \tOleg\n>> On Thu, 22 Sep 2005, Ahmad Fajar wrote:\n>>\n>>> I have about 419804 rows in my article table. I have installed tsearch2\n>> and\n>>> its gist index correctly.\n>>>\n>>> My table structure is:\n>>>\n>>> CREATE TABLE tbarticles\n>>>\n>>> (\n>>>\n>>> articleid int4 NOT NULL,\n>>>\n>>> title varchar(250),\n>>>\n>>> mediaid int4,\n>>>\n>>> datee date,\n>>>\n>>> content text,\n>>>\n>>> contentvar text,\n>>>\n>>> mmcol float4 NOT NULL,\n>>>\n>>> sirkulasi float4,\n>>>\n>>> page varchar(10),\n>>>\n>>> tglisidata date,\n>>>\n>>> namapc varchar(12),\n>>>\n>>> usere varchar(12),\n>>>\n>>> file_pdf varchar(255),\n>>>\n>>> file_pdf2 varchar(50),\n>>>\n>>> kolom int4,\n>>>\n>>> size_jpeg int4,\n>>>\n>>> journalist varchar(120),\n>>>\n>>> ratebw float4,\n>>>\n>>> ratefc float4,\n>>>\n>>> fti tsvector,\n>>>\n>>> CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n>>>\n>>> ) WITHOUT OIDS;\n>>>\n>>> Create index fti_idx1 on tbarticles using gist (fti);\n>>>\n>>> Create index fti_idx2 on tbarticles using gist (datee, fti);\n>>>\n>>>\n>>>\n>>> But when I search something like:\n>>>\n>>> Select articleid, title, datee from tbarticles where fti @@\n>>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n>>>\n>>> It takes about 30 sec. I run explain analyze and the index is used\n>>> correctly.\n>>>\n>>>\n>>>\n>>> Then I try multi column index to filter by date, and my query something\n>>> like:\n>>>\n>>> Select articleid, title, datee from tbarticles where fti @@\n>>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >=\n>> '2002-01-01'\n>>> and datee <= current_date\n>>>\n>>> An it still run about 25 sec. I do run explain analyze and my multicolumn\n>>> index is used correctly.\n>>>\n>>> This is not acceptable if want to publish my website if the search took\n>> very\n>>> longer.\n>>>\n>>>\n>>>\n>>> I have run vacuum full analyze before doing such query. What going wrong\n>>> with my query?? Is there any way to make this faster?\n>>>\n>>> I have try to tune my postgres configuration, but it seem helpless. 
My\n>> linux\n>>> box is Redhat 4 AS, and\n>>>\n>>> the hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and configure\n>> as\n>>> RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n>>>\n>>>\n>>>\n>>> Please.help.help.\n>>>\n>>>\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> Sternberg Astronomical Institute, Moscow University (Russia)\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Sat, 24 Sep 2005 10:07:42 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "Hi Oleg,\n\nSorry for my late. From the stat() function I got 1,5 million rows, although\nI've added garbage words to the stop word file, there seem still have\ngarbage words. So I ask for my team to identify the garbage words and add to\nstop words and I will update the articles after that. And about my articles,\nit is quite big enough, the average length is about 2900 characters. And I\nthink, I have to tune tsearch2 and concentrate to the garbage words. The\nmost articles are indonesian language. What others way to tune the tsearch2\nbeside the garbage words?\n\nBeside that, I still have problem, if I do a simple query like:\nSelect ids, keywords from dict where keywords='blabla' ('blabla' is a single\nword); The table have 200 million rows, I have index the keywords field. On\nthe first time my query seem to slow to get the result, about 15-60 sec to\nget the result. I use latest pgAdmin3 to test all queries. But if I repeat\nthe query I will get fast result. My question is why on the first time the\nquery seem to slow. \n\nI try to cluster the table base on keyword index, but after 15 hours waiting\nand it doesn't finish I stop clustering. Now I think I have to change the\nfile system for postgresql data. Do you have any idea what best for\npostgresql, JFS or XFS? I will not try reiserfs, because there are some\nrumors about reiserfs stability, although reiserfs is fast enough for\npostgresql. And must I down grade my postgresql from version 8.0.3 to 7.4.8?\n\n\n\nRegards,\nahmad fajar\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Oleg Bartunov\nSent: Saturday, September 24, 2005 1:08 PM\nTo: Ahmad Fajar\nCc: [email protected]\nSubject: Re: [PERFORM] tsearch2 seem very slow\n\nAhmad,\n\nwhat's about the number of unique words ? I mean stat() function.\nSometimes, it helps to identify garbage words.\nHow big is your articles (average length) ?\n\nplease, cut'n paste queries and output from psql ! 
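One way to check whether those first-run times are simply cache misses is the statistics collector; a hedged sketch, assuming block-level stats are enabled (stats_block_level = on in an 8.0-era postgresql.conf) and using the dict table named in the query above:

-- per-table buffer cache hits vs. reads
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
WHERE relname = 'dict';

-- overall hit ratio for the current database
SELECT datname, blks_read, blks_hit,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE datname = current_database();

If idx_blks_read jumps on the first run of a query and stays flat on repeats, the slowness is just the index being faulted in from disk.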
How fast are \nnext queries ?\n\n \tOleg\nOn Fri, 23 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n> For single index I try this query:\n> explain analyze\n> select articleid, title, datee from articles\n> where fti @@ to_tsquery('bank&indonesia');\n>\n> analyze result:\n> ----------------\n> \"Index Scan using fti_idx on articles (cost=0.00..862.97 rows=420\nwidth=51)\n> (actual time=0.067..183761.324 rows=46186 loops=1)\"\n> \" Index Cond: (fti @@ '\\'bank\\' & \\'indonesia\\''::tsquery)\"\n> \"Total runtime: 183837.826 ms\"\n>\n> And for multicolumn index I try this query:\n> explain analyze\n> select articleid, title, datee from articles\n> where fti @@ to_tsquery('bank&mega');\n>\n> analyze result:\n> ----------------\n> \"Index Scan using articles_x1 on articles (cost=0.00..848.01 rows=410\n> width=51) (actual time=52.204..37914.135 rows=1841 loops=1)\"\n> \" Index Cond: ((datee >= '2002-01-01'::date) AND (datee <=\n> ('now'::text)::date) AND (fti @@ '\\'bank\\' & \\'mega\\''::tsquery))\"\n> \"Total runtime: 37933.757 ms\"\n>\n> The table structure is as mention on the first talk. If you wanna know how\n> much table in my database, it's about 100 tables or maybe more. Now I\n> develop the version 2 of my web application, you can take a look at:\n> http://www.mediatrac.net, so it will hold many datas. But the biggest\ntable\n> is article's table. On develop this version 2 I just use half data of the\n> article's table (about 419804 rows). May be if I import all of the\narticle's\n> table data it will have 1 million rows. The article's table grows rapidly,\n> about 100000 rows per-week. My developing database size is 28 GB (not real\n> database, coz I still develop the version 2 and I use half of the data for\n> play around). I just wanna to perform quick search (fulltext search) on my\n> article's table not other table. On version 1, the current running version\nI\n> use same hardware spesification as mention below, but there is no fulltext\n> search. So I develop the new version with new features, new interface and\n> include the fulltext search.\n>\n> I do know, if the application finish, I must use powerfull hardware. 
But\nhow\n> can I guarantee the application will run smooth, if I do fulltext search\non\n> 419804 rows in a table it took a long time to get the result.\n>\n> Could you or friends in this maling-list help me....plz..plzz\n>\n> Tsearch2 configuration:\n> -------------------------\n> I use default configuration, english stop word file as tsearch2 provide,\n> stem dictionary as default (coz I don't know how to configure and add new\n> data to stem dictionary) and I add some words to the english stop word\nfile.\n>\n> Postgresql configuration\n> -------------------------\n> max_connections = 32\n> shared_buffers = 32768\n> sort_mem = 8192\n> vacuum_mem = 65536\n> work_mem = 16384\n> maintenance_work_mem = 65536\n> max_fsm_pages = 30000\n> max_fsm_relations = 1000\n> max_files_per_process = 100000\n> checkpoint_segments = 15\n> effective_cache_size = 192000\n> random_page_cost = 2\n> geqo = true\n> geqo_threshold = 50\n> geqo_effort = 5\n> geqo_pool_size = 0\n> geqo_generations = 0\n> geqo_selection_bias = 2.0\n> from_collapse_limit = 10\n> join_collapse_limit = 15\n>\n> OS configuration:\n> ------------------\n> I use Redhat 4 AS, kernel 2.6.9-11\n> kernel.shmmax=1073741824\n> kernel.sem=250 32000 100 128\n> fs.aio-max-nr=5242880\n> the server I configure just only for postgresql, no other service is\nrunning\n> like: www, samba, ftp, email, firewall\n>\n> hardware configuration:\n> ------------------------\n> Motherboard ASUS P5GD1\n> Processor P4 3,2 GHz\n> Memory 2 GB DDR 400,\n> 2x200 GB Serial ATA 7200 RPM UltraATA/133, configure as RAID0 for\npostgresql\n> data and the partition is EXT3\n> 1x80 GB EIDE 7200 RPM configure for system and home directory and the\n> partiton is EXT3\n>\n> Did I miss something?\n>\n> Regards,\n> ahmad fajar\n>\n>\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:[email protected]]\n> Sent: Jumat, 23 September 2005 18:26\n> To: Ahmad Fajar\n> Cc: [email protected]\n> Subject: RE: [PERFORM] tsearch2 seem very slow\n>\n> On Fri, 23 Sep 2005, Ahmad Fajar wrote:\n>\n>> Hi Oleg,\n>>\n>> I didn't deny on the third repeat or more, it can reach < 600 msec. It is\n>> only because the result still in postgres cache, but how about in the\n> first\n>> run? I didn't dare, the values is un-acceptable. Because my table will\n> grows\n>> rapidly, it's about 100000 rows per-week. And the visitor will search\n>> anything that I don't know, whether it's the repeated search or new\n> search,\n>> or whether it's in postgres cache or not.\n>\n> if you have enoush shared memory postgresql will keep index pages there.\n>\n>\n>>\n>> I just compare with http://www.postgresql.org, the search is quite fast,\n> and\n>> I don't know whether the site uses tsearch2 or something else. But as fas\n> as\n>> I know, if the rows reach >100 milion (I have try for 200 milion rows and\n> it\n>> seem very slow), even if don't use tsearch2, only use simple query like:\n>> select f1, f2 from table1 where f2='blabla',\n>> and f2 is indexes, my postgres still slow on the first time, about >10\n> sec.\n>> because of this I tried something brand new to fullfill my needs. I have\n>> used fti, and tsearch2 but still slow.\n>>\n>> I don't know what's going wrong with my postgres, what configuration must\n> I\n>> do to perform the query get fast result. 
Or must I use enterprisedb 2005\n> or\n>> pervasive postgres (both uses postgres), I don't know very much about\n> these\n>> two products.\n>\n> you didn't show us your configuration (hardware,postgresql and tsearch2),\n> explain analyze of your queries, so we can't help you.\n> How big is your database, tsearch2 index size ?\n>\n>\n>>\n>> Regards,\n>> ahmad fajar\n>>\n>>\n>> -----Original Message-----\n>> From: Oleg Bartunov [mailto:[email protected]]\n>> Sent: Jumat, 23 September 2005 14:36\n>> To: Ahmad Fajar\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] tsearch2 seem very slow\n>>\n>> Ahmad,\n>>\n>> how fast is repeated runs ? First time system could be very slow.\n>> Also, have you checked my page\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n>> and some info about tsearch2 internals\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>>\n>> \tOleg\n>> On Thu, 22 Sep 2005, Ahmad Fajar wrote:\n>>\n>>> I have about 419804 rows in my article table. I have installed tsearch2\n>> and\n>>> its gist index correctly.\n>>>\n>>> My table structure is:\n>>>\n>>> CREATE TABLE tbarticles\n>>>\n>>> (\n>>>\n>>> articleid int4 NOT NULL,\n>>>\n>>> title varchar(250),\n>>>\n>>> mediaid int4,\n>>>\n>>> datee date,\n>>>\n>>> content text,\n>>>\n>>> contentvar text,\n>>>\n>>> mmcol float4 NOT NULL,\n>>>\n>>> sirkulasi float4,\n>>>\n>>> page varchar(10),\n>>>\n>>> tglisidata date,\n>>>\n>>> namapc varchar(12),\n>>>\n>>> usere varchar(12),\n>>>\n>>> file_pdf varchar(255),\n>>>\n>>> file_pdf2 varchar(50),\n>>>\n>>> kolom int4,\n>>>\n>>> size_jpeg int4,\n>>>\n>>> journalist varchar(120),\n>>>\n>>> ratebw float4,\n>>>\n>>> ratefc float4,\n>>>\n>>> fti tsvector,\n>>>\n>>> CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n>>>\n>>> ) WITHOUT OIDS;\n>>>\n>>> Create index fti_idx1 on tbarticles using gist (fti);\n>>>\n>>> Create index fti_idx2 on tbarticles using gist (datee, fti);\n>>>\n>>>\n>>>\n>>> But when I search something like:\n>>>\n>>> Select articleid, title, datee from tbarticles where fti @@\n>>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n>>>\n>>> It takes about 30 sec. I run explain analyze and the index is used\n>>> correctly.\n>>>\n>>>\n>>>\n>>> Then I try multi column index to filter by date, and my query something\n>>> like:\n>>>\n>>> Select articleid, title, datee from tbarticles where fti @@\n>>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >=\n>> '2002-01-01'\n>>> and datee <= current_date\n>>>\n>>> An it still run about 25 sec. I do run explain analyze and my\nmulticolumn\n>>> index is used correctly.\n>>>\n>>> This is not acceptable if want to publish my website if the search took\n>> very\n>>> longer.\n>>>\n>>>\n>>>\n>>> I have run vacuum full analyze before doing such query. What going wrong\n>>> with my query?? Is there any way to make this faster?\n>>>\n>>> I have try to tune my postgres configuration, but it seem helpless. 
My\n>> linux\n>>> box is Redhat 4 AS, and\n>>>\n>>> the hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and\nconfigure\n>> as\n>>> RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n>>>\n>>>\n>>>\n>>> Please.help.help.\n>>>\n>>>\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> Sternberg Astronomical Institute, Moscow University (Russia)\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Mon, 26 Sep 2005 01:14:46 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "On Mon, 26 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n> Sorry for my late. From the stat() function I got 1,5 million rows, although\n> I've added garbage words to the stop word file, there seem still have\n> garbage words. So I ask for my team to identify the garbage words and add to\n\nwhat king of garbage ? Probably you index not needed token types, for\nexample, email address, file names....\n\n> stop words and I will update the articles after that. And about my articles,\n> it is quite big enough, the average length is about 2900 characters. And I\n> think, I have to tune tsearch2 and concentrate to the garbage words. The\n> most articles are indonesian language. What others way to tune the tsearch2\n> beside the garbage words?\n\ndo you need proximity ? If no, use strip(tsvector) function to remove\ncoordinate information from tsvector.\n\ndon't index default configuration and index only needed tokens, for \nexample, to index only 3 type of tokens, first create 'qq' configuration\nand specify tokens to index.\n\ninsert into pg_ts_cfg values('qq','default','en_US');\n-- tokens to index\ninsert into pg_ts_cfgmap values('qq','lhword','{en_ispell,en_stem}');\ninsert into pg_ts_cfgmap values('qq','lword','{en_ispell,en_stem}');\ninsert into pg_ts_cfgmap values('qq','lpart_hword','{en_ispell,en_stem}');\n\n\n>\n> Beside that, I still have problem, if I do a simple query like:\n> Select ids, keywords from dict where keywords='blabla' ('blabla' is a single\n> word); The table have 200 million rows, I have index the keywords field. On\n> the first time my query seem to slow to get the result, about 15-60 sec to\n> get the result. I use latest pgAdmin3 to test all queries. But if I repeat\n> the query I will get fast result. 
My question is why on the first time the\n> query seem to slow.\n\nbecause index pages should be readed from disk into shared buffers, so next\nquery will benefit from that. You need enough shared memory to get real\nbenefit. You may get postgresql stats and look on cache hit ration.\n\nbtw, how does your query ( keywords='blabla') relates to tsearch2 ?\n\n>\n> I try to cluster the table base on keyword index, but after 15 hours waiting\n> and it doesn't finish I stop clustering. Now I think I have to change the\n\ndon't use cluster for big tables ! simple\n select * into clustered_foo from foo order by indexed_field\nwould be faster and does the same job.\n\n> file system for postgresql data. Do you have any idea what best for\n> postgresql, JFS or XFS? I will not try reiserfs, because there are some\n> rumors about reiserfs stability, although reiserfs is fast enough for\n> postgresql. And must I down grade my postgresql from version 8.0.3 to 7.4.8?\n>\n\nI'm not experienced with filesystems :)\n\n\n>\n>\n> Regards,\n> ahmad fajar\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Oleg Bartunov\n> Sent: Saturday, September 24, 2005 1:08 PM\n> To: Ahmad Fajar\n> Cc: [email protected]\n> Subject: Re: [PERFORM] tsearch2 seem very slow\n>\n> Ahmad,\n>\n> what's about the number of unique words ? I mean stat() function.\n> Sometimes, it helps to identify garbage words.\n> How big is your articles (average length) ?\n>\n> please, cut'n paste queries and output from psql ! How fast are\n> next queries ?\n>\n> \tOleg\n> On Fri, 23 Sep 2005, Ahmad Fajar wrote:\n>\n>> Hi Oleg,\n>>\n>> For single index I try this query:\n>> explain analyze\n>> select articleid, title, datee from articles\n>> where fti @@ to_tsquery('bank&indonesia');\n>>\n>> analyze result:\n>> ----------------\n>> \"Index Scan using fti_idx on articles (cost=0.00..862.97 rows=420\n> width=51)\n>> (actual time=0.067..183761.324 rows=46186 loops=1)\"\n>> \" Index Cond: (fti @@ '\\'bank\\' & \\'indonesia\\''::tsquery)\"\n>> \"Total runtime: 183837.826 ms\"\n>>\n>> And for multicolumn index I try this query:\n>> explain analyze\n>> select articleid, title, datee from articles\n>> where fti @@ to_tsquery('bank&mega');\n>>\n>> analyze result:\n>> ----------------\n>> \"Index Scan using articles_x1 on articles (cost=0.00..848.01 rows=410\n>> width=51) (actual time=52.204..37914.135 rows=1841 loops=1)\"\n>> \" Index Cond: ((datee >= '2002-01-01'::date) AND (datee <=\n>> ('now'::text)::date) AND (fti @@ '\\'bank\\' & \\'mega\\''::tsquery))\"\n>> \"Total runtime: 37933.757 ms\"\n>>\n>> The table structure is as mention on the first talk. If you wanna know how\n>> much table in my database, it's about 100 tables or maybe more. Now I\n>> develop the version 2 of my web application, you can take a look at:\n>> http://www.mediatrac.net, so it will hold many datas. But the biggest\n> table\n>> is article's table. On develop this version 2 I just use half data of the\n>> article's table (about 419804 rows). May be if I import all of the\n> article's\n>> table data it will have 1 million rows. The article's table grows rapidly,\n>> about 100000 rows per-week. My developing database size is 28 GB (not real\n>> database, coz I still develop the version 2 and I use half of the data for\n>> play around). I just wanna to perform quick search (fulltext search) on my\n>> article's table not other table. 
On version 1, the current running version\n> I\n>> use same hardware spesification as mention below, but there is no fulltext\n>> search. So I develop the new version with new features, new interface and\n>> include the fulltext search.\n>>\n>> I do know, if the application finish, I must use powerfull hardware. But\n> how\n>> can I guarantee the application will run smooth, if I do fulltext search\n> on\n>> 419804 rows in a table it took a long time to get the result.\n>>\n>> Could you or friends in this maling-list help me....plz..plzz\n>>\n>> Tsearch2 configuration:\n>> -------------------------\n>> I use default configuration, english stop word file as tsearch2 provide,\n>> stem dictionary as default (coz I don't know how to configure and add new\n>> data to stem dictionary) and I add some words to the english stop word\n> file.\n>>\n>> Postgresql configuration\n>> -------------------------\n>> max_connections = 32\n>> shared_buffers = 32768\n>> sort_mem = 8192\n>> vacuum_mem = 65536\n>> work_mem = 16384\n>> maintenance_work_mem = 65536\n>> max_fsm_pages = 30000\n>> max_fsm_relations = 1000\n>> max_files_per_process = 100000\n>> checkpoint_segments = 15\n>> effective_cache_size = 192000\n>> random_page_cost = 2\n>> geqo = true\n>> geqo_threshold = 50\n>> geqo_effort = 5\n>> geqo_pool_size = 0\n>> geqo_generations = 0\n>> geqo_selection_bias = 2.0\n>> from_collapse_limit = 10\n>> join_collapse_limit = 15\n>>\n>> OS configuration:\n>> ------------------\n>> I use Redhat 4 AS, kernel 2.6.9-11\n>> kernel.shmmax=1073741824\n>> kernel.sem=250 32000 100 128\n>> fs.aio-max-nr=5242880\n>> the server I configure just only for postgresql, no other service is\n> running\n>> like: www, samba, ftp, email, firewall\n>>\n>> hardware configuration:\n>> ------------------------\n>> Motherboard ASUS P5GD1\n>> Processor P4 3,2 GHz\n>> Memory 2 GB DDR 400,\n>> 2x200 GB Serial ATA 7200 RPM UltraATA/133, configure as RAID0 for\n> postgresql\n>> data and the partition is EXT3\n>> 1x80 GB EIDE 7200 RPM configure for system and home directory and the\n>> partiton is EXT3\n>>\n>> Did I miss something?\n>>\n>> Regards,\n>> ahmad fajar\n>>\n>>\n>> -----Original Message-----\n>> From: Oleg Bartunov [mailto:[email protected]]\n>> Sent: Jumat, 23 September 2005 18:26\n>> To: Ahmad Fajar\n>> Cc: [email protected]\n>> Subject: RE: [PERFORM] tsearch2 seem very slow\n>>\n>> On Fri, 23 Sep 2005, Ahmad Fajar wrote:\n>>\n>>> Hi Oleg,\n>>>\n>>> I didn't deny on the third repeat or more, it can reach < 600 msec. It is\n>>> only because the result still in postgres cache, but how about in the\n>> first\n>>> run? I didn't dare, the values is un-acceptable. Because my table will\n>> grows\n>>> rapidly, it's about 100000 rows per-week. And the visitor will search\n>>> anything that I don't know, whether it's the repeated search or new\n>> search,\n>>> or whether it's in postgres cache or not.\n>>\n>> if you have enoush shared memory postgresql will keep index pages there.\n>>\n>>\n>>>\n>>> I just compare with http://www.postgresql.org, the search is quite fast,\n>> and\n>>> I don't know whether the site uses tsearch2 or something else. But as fas\n>> as\n>>> I know, if the rows reach >100 milion (I have try for 200 milion rows and\n>> it\n>>> seem very slow), even if don't use tsearch2, only use simple query like:\n>>> select f1, f2 from table1 where f2='blabla',\n>>> and f2 is indexes, my postgres still slow on the first time, about >10\n>> sec.\n>>> because of this I tried something brand new to fullfill my needs. 
I have\n>>> used fti, and tsearch2 but still slow.\n>>>\n>>> I don't know what's going wrong with my postgres, what configuration must\n>> I\n>>> do to perform the query get fast result. Or must I use enterprisedb 2005\n>> or\n>>> pervasive postgres (both uses postgres), I don't know very much about\n>> these\n>>> two products.\n>>\n>> you didn't show us your configuration (hardware,postgresql and tsearch2),\n>> explain analyze of your queries, so we can't help you.\n>> How big is your database, tsearch2 index size ?\n>>\n>>\n>>>\n>>> Regards,\n>>> ahmad fajar\n>>>\n>>>\n>>> -----Original Message-----\n>>> From: Oleg Bartunov [mailto:[email protected]]\n>>> Sent: Jumat, 23 September 2005 14:36\n>>> To: Ahmad Fajar\n>>> Cc: [email protected]\n>>> Subject: Re: [PERFORM] tsearch2 seem very slow\n>>>\n>>> Ahmad,\n>>>\n>>> how fast is repeated runs ? First time system could be very slow.\n>>> Also, have you checked my page\n>>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\n>>> and some info about tsearch2 internals\n>>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>>>\n>>> \tOleg\n>>> On Thu, 22 Sep 2005, Ahmad Fajar wrote:\n>>>\n>>>> I have about 419804 rows in my article table. I have installed tsearch2\n>>> and\n>>>> its gist index correctly.\n>>>>\n>>>> My table structure is:\n>>>>\n>>>> CREATE TABLE tbarticles\n>>>>\n>>>> (\n>>>>\n>>>> articleid int4 NOT NULL,\n>>>>\n>>>> title varchar(250),\n>>>>\n>>>> mediaid int4,\n>>>>\n>>>> datee date,\n>>>>\n>>>> content text,\n>>>>\n>>>> contentvar text,\n>>>>\n>>>> mmcol float4 NOT NULL,\n>>>>\n>>>> sirkulasi float4,\n>>>>\n>>>> page varchar(10),\n>>>>\n>>>> tglisidata date,\n>>>>\n>>>> namapc varchar(12),\n>>>>\n>>>> usere varchar(12),\n>>>>\n>>>> file_pdf varchar(255),\n>>>>\n>>>> file_pdf2 varchar(50),\n>>>>\n>>>> kolom int4,\n>>>>\n>>>> size_jpeg int4,\n>>>>\n>>>> journalist varchar(120),\n>>>>\n>>>> ratebw float4,\n>>>>\n>>>> ratefc float4,\n>>>>\n>>>> fti tsvector,\n>>>>\n>>>> CONSTRAINT pk_tbarticles PRIMARY KEY (articleid)\n>>>>\n>>>> ) WITHOUT OIDS;\n>>>>\n>>>> Create index fti_idx1 on tbarticles using gist (fti);\n>>>>\n>>>> Create index fti_idx2 on tbarticles using gist (datee, fti);\n>>>>\n>>>>\n>>>>\n>>>> But when I search something like:\n>>>>\n>>>> Select articleid, title, datee from tbarticles where fti @@\n>>>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla');\n>>>>\n>>>> It takes about 30 sec. I run explain analyze and the index is used\n>>>> correctly.\n>>>>\n>>>>\n>>>>\n>>>> Then I try multi column index to filter by date, and my query something\n>>>> like:\n>>>>\n>>>> Select articleid, title, datee from tbarticles where fti @@\n>>>> to_tsquery('susilo&bambang&yudhoyono&jusuf&kalla') and datee >=\n>>> '2002-01-01'\n>>>> and datee <= current_date\n>>>>\n>>>> An it still run about 25 sec. I do run explain analyze and my\n> multicolumn\n>>>> index is used correctly.\n>>>>\n>>>> This is not acceptable if want to publish my website if the search took\n>>> very\n>>>> longer.\n>>>>\n>>>>\n>>>>\n>>>> I have run vacuum full analyze before doing such query. What going wrong\n>>>> with my query?? Is there any way to make this faster?\n>>>>\n>>>> I have try to tune my postgres configuration, but it seem helpless. 
My\n>>> linux\n>>>> box is Redhat 4 AS, and\n>>>>\n>>>> the hardware: 2 GB RAM DDR 400, 2x200 GB Serial ATA 7200RPM and\n> configure\n>>> as\n>>>> RAID0 (just for postgres data), my sistem run at EIDE 80GB 7200 RPM.\n>>>>\n>>>>\n>>>>\n>>>> Please.help.help.\n>>>>\n>>>>\n>>>\n>>> \tRegards,\n>>> \t\tOleg\n>>> _____________________________________________________________\n>>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>>> Sternberg Astronomical Institute, Moscow University (Russia)\n>>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>>> phone: +007(095)939-16-83, +007(095)939-23-83\n>>>\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> Sternberg Astronomical Institute, Moscow University (Russia)\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Sun, 25 Sep 2005 22:33:02 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "Hi Oleg, \n\n> what king of garbage ? Probably you index not needed token types, for\n> example, email address, file names....\n\n> do you need proximity ? If no, use strip(tsvector) function to remove\n> coordinate information from tsvector.\n\nI need proximity. Some time I have to rank my article and make a chart for\nthat.\n\n> don't index default configuration and index only needed tokens, for \n> example, to index only 3 type of tokens, first create 'qq' configuration\n> and specify tokens to index.\n\n> insert into pg_ts_cfg values('qq','default','en_US');\n-- tokens to index\n> insert into pg_ts_cfgmap values('qq','lhword','{en_ispell,en_stem}');\n> insert into pg_ts_cfgmap values('qq','lword','{en_ispell,en_stem}');\n> insert into pg_ts_cfgmap values('qq','lpart_hword','{en_ispell,en_stem}');\n\nI still don't understand about tsearch2 configuration, so until now I just\nuse default configuration. I will try your suggestion. But how can I get the\nen_ispell? Does my system will know if I use: ....,'{en_ispell,en_stem}';\n>From default configuration I only see: ..., '{en_stem}';\n\n> Beside that, I still have problem, if I do a simple query like:\n> Select ids, keywords from dict where keywords='blabla' ('blabla' is a\nsingle\n> word); The table have 200 million rows, I have index the keywords field.\nOn\n> the first time my query seem to slow to get the result, about 15-60 sec to\n> get the result. I use latest pgAdmin3 to test all queries. 
But if I repeat\n> the query I will get fast result. My question is why on the first time the\n> query seem to slow.\n\n> because index pages should be readed from disk into shared buffers, so \n> next query will benefit from that. You need enough shared memory to get \n> real benefit. You may get postgresql stats and look on cache hit ration.\n\n> btw, how does your query ( keywords='blabla') relates to tsearch2 ?\n\n(Keywords='blabla') isn't related to tsearch2, I just got an idea from\ntsearch2 and try different approach. But I stuck on the query result speed.\nVery slow to get result on the first query. \nAnd how to see postgresql stats and look on cache hit ratio? I still don't\nknow how to get it.\n\n> I try to cluster the table base on keyword index, but after 15 hours \n> waiting and it doesn't finish I stop clustering. \n\n> don't use cluster for big tables ! simple\n> select * into clustered_foo from foo order by indexed_field\n> would be faster and does the same job.\n\nWhat the use of clustered_foo table? And how to use it?\nI think it will not distinct duplicate rows. And the clustered_foo table\nstill not have an index, so if query to this table, I think the query will\nbe very slow to get a result.\n\nRegards,\nahmad fajar\n\n", "msg_date": "Mon, 26 Sep 2005 02:14:38 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "Ahmad,\n\nOn Mon, 26 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n>> what king of garbage ? Probably you index not needed token types, for\n>> example, email address, file names....\n>\n>> do you need proximity ? If no, use strip(tsvector) function to remove\n>> coordinate information from tsvector.\n>\n> I need proximity. Some time I have to rank my article and make a chart for\n> that.\n>\n>> don't index default configuration and index only needed tokens, for\n>> example, to index only 3 type of tokens, first create 'qq' configuration\n>> and specify tokens to index.\n>\n>> insert into pg_ts_cfg values('qq','default','en_US');\n> -- tokens to index\n>> insert into pg_ts_cfgmap values('qq','lhword','{en_ispell,en_stem}');\n>> insert into pg_ts_cfgmap values('qq','lword','{en_ispell,en_stem}');\n>> insert into pg_ts_cfgmap values('qq','lpart_hword','{en_ispell,en_stem}');\n>\n> I still don't understand about tsearch2 configuration, so until now I just\n> use default configuration. I will try your suggestion. But how can I get the\n> en_ispell? Does my system will know if I use: ....,'{en_ispell,en_stem}';\n>> From default configuration I only see: ..., '{en_stem}';\n\nI think you should read documentation. I couldn't explain you things already\nwritten.\n\n>\n>> Beside that, I still have problem, if I do a simple query like:\n>> Select ids, keywords from dict where keywords='blabla' ('blabla' is a\n> single\n>> word); The table have 200 million rows, I have index the keywords field.\n> On\n>> the first time my query seem to slow to get the result, about 15-60 sec to\n>> get the result. I use latest pgAdmin3 to test all queries. But if I repeat\n>> the query I will get fast result. My question is why on the first time the\n>> query seem to slow.\n>\n>> because index pages should be readed from disk into shared buffers, so\n>> next query will benefit from that. You need enough shared memory to get\n>> real benefit. 
You may get postgresql stats and look on cache hit ration.\n>\n>> btw, how does your query ( keywords='blabla') relates to tsearch2 ?\n>\n> (Keywords='blabla') isn't related to tsearch2, I just got an idea from\n> tsearch2 and try different approach. But I stuck on the query result speed.\n> Very slow to get result on the first query.\n> And how to see postgresql stats and look on cache hit ratio? I still don't\n> know how to get it.\n>\n\nlearn from http://www.postgresql.org/docs/8.0/static/monitoring-stats.html\n\n>> I try to cluster the table base on keyword index, but after 15 hours\n>> waiting and it doesn't finish I stop clustering.\n>\n>> don't use cluster for big tables ! simple\n>> select * into clustered_foo from foo order by indexed_field\n>> would be faster and does the same job.\n>\n> What the use of clustered_foo table? And how to use it?\n> I think it will not distinct duplicate rows. And the clustered_foo table\n> still not have an index, so if query to this table, I think the query will\n> be very slow to get a result.\n\noh guy, you certainly need to read documentation\nhttp://www.postgresql.org/docs/8.0/static/sql-cluster.html\n\n\n>\n> Regards,\n> ahmad fajar\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Mon, 26 Sep 2005 00:11:33 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tsearch2 seem very slow" }, { "msg_contents": "Hi Oleg,\n\nThanks, I will read your documentation again, and try to understand what I\nmiss. And about pgmanual, it is very help me. I'll take attention on that.\n\nRegards,\nahmad fajar\n\n-----Original Message-----\nFrom: Oleg Bartunov [mailto:[email protected]] \nSent: Monday, September 26, 2005 3:12 AM\nTo: Ahmad Fajar\nCc: [email protected]\nSubject: RE: [PERFORM] tsearch2 seem very slow\n\nAhmad,\n\nOn Mon, 26 Sep 2005, Ahmad Fajar wrote:\n\n> Hi Oleg,\n>\n>> what king of garbage ? Probably you index not needed token types, for\n>> example, email address, file names....\n>\n>> do you need proximity ? If no, use strip(tsvector) function to remove\n>> coordinate information from tsvector.\n>\n> I need proximity. Some time I have to rank my article and make a chart for\n> that.\n>\n>> don't index default configuration and index only needed tokens, for\n>> example, to index only 3 type of tokens, first create 'qq' configuration\n>> and specify tokens to index.\n>\n>> insert into pg_ts_cfg values('qq','default','en_US');\n> -- tokens to index\n>> insert into pg_ts_cfgmap values('qq','lhword','{en_ispell,en_stem}');\n>> insert into pg_ts_cfgmap values('qq','lword','{en_ispell,en_stem}');\n>> insert into pg_ts_cfgmap\nvalues('qq','lpart_hword','{en_ispell,en_stem}');\n>\n> I still don't understand about tsearch2 configuration, so until now I just\n> use default configuration. I will try your suggestion. But how can I get\nthe\n> en_ispell? Does my system will know if I use: ....,'{en_ispell,en_stem}';\n>> From default configuration I only see: ..., '{en_stem}';\n\nI think you should read documentation. 
I couldn't explain you things already\nwritten.\n\n>\n>> Beside that, I still have problem, if I do a simple query like:\n>> Select ids, keywords from dict where keywords='blabla' ('blabla' is a\n> single\n>> word); The table have 200 million rows, I have index the keywords field.\n> On\n>> the first time my query seem to slow to get the result, about 15-60 sec\nto\n>> get the result. I use latest pgAdmin3 to test all queries. But if I\nrepeat\n>> the query I will get fast result. My question is why on the first time\nthe\n>> query seem to slow.\n>\n>> because index pages should be readed from disk into shared buffers, so\n>> next query will benefit from that. You need enough shared memory to get\n>> real benefit. You may get postgresql stats and look on cache hit ration.\n>\n>> btw, how does your query ( keywords='blabla') relates to tsearch2 ?\n>\n> (Keywords='blabla') isn't related to tsearch2, I just got an idea from\n> tsearch2 and try different approach. But I stuck on the query result\nspeed.\n> Very slow to get result on the first query.\n> And how to see postgresql stats and look on cache hit ratio? I still don't\n> know how to get it.\n>\n\nlearn from http://www.postgresql.org/docs/8.0/static/monitoring-stats.html\n\n>> I try to cluster the table base on keyword index, but after 15 hours\n>> waiting and it doesn't finish I stop clustering.\n>\n>> don't use cluster for big tables ! simple\n>> select * into clustered_foo from foo order by indexed_field\n>> would be faster and does the same job.\n>\n> What the use of clustered_foo table? And how to use it?\n> I think it will not distinct duplicate rows. And the clustered_foo table\n> still not have an index, so if query to this table, I think the query will\n> be very slow to get a result.\n\noh guy, you certainly need to read documentation\nhttp://www.postgresql.org/docs/8.0/static/sql-cluster.html\n\n\n>\n> Regards,\n> ahmad fajar\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 26 Sep 2005 04:07:08 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch2 seem very slow" } ]
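A short illustration of the cache-hit-ratio check Ahmad asked about above. This is only a sketch: it assumes block-level statistics collection is enabled (stats_block_level = true in postgresql.conf) and uses the pg_stat_database view as it exists in the 7.4/8.0 era.

-- assumes stats_block_level = true so blks_read / blks_hit are populated
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();

A ratio that stays well below roughly 0.90 under normal load suggests the working set does not fit in shared_buffers plus the OS cache; the rest of the statistics views are described in the monitoring-stats chapter Oleg linked.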
[ { "msg_contents": "> At 02:07 05/09/23, Merlin Moncure wrote:\n> > > >Here is a trick I use sometimes with views, etc. This may or may\nnot\n> be\n> > > >effective to solve your problem but it's worth a shot. Create\none\n> small\n> > > >SQL function taking date, etc. and returning the values and\ndefine it\n> > > >immutable. Now in-query it is treated like a constant.\n> \n> esdt=> create or replace function player_max_atdate (varchar(32))\nreturns\n> varchar(32) as $$\n> esdt$> select atdate from player where playerid = $1 order by\nplayerid\n> desc, AtDate desc limit 1;\n> esdt$> $$ language sql immutable;\n\nCan you time just the execution of this function and compare vs. pure\nSQL version? If the times are different, can you do a exaplain analyze\nof a prepared version of above?\n\nprepare test(character varying) as select atdate from player where\nplayerid = $1 order by playerid desc, AtDate desc limit 1;\n\nexplain analyze execute test('22220');\n\n> CREATE FUNCTION\n> esdt=> create or replace view VCurPlayer3 as select * from Player\nwhere\n> AtDate = player_max_atdate(PlayerID);\n> CREATE VIEW\n\nThis is wrong, it should have been \ncreate or replace view VCurPlayer3 as select *,\nplayer_max_atdate(PlayerID) as max_date from Player;\n\nI did a test on a table with 124k records and a two part key, ID & date.\nesp# select count(*) from parts_order_file;\ncount\n--------\n 124158\n(1 row)\n\n\nesp=# select count(*) from parts_order_file where pr_dealer_no =\n'000500';\n count\n-------\n 27971\n(1 row)\n\ncreated same function, view v, etc.\nesp=# explain analyze select * from v where pr_dealer_no = '000500'\nlimit 1;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n----------------------------\n----------------------------------------------------------------\n Limit (cost=0.00..3.87 rows=1 width=10) (actual time=1.295..1.297\nrows=1 loops=1)\n -> Index Scan using parts_order_file_pr_dealer_no_key on\nparts_order_file (cost=0.00..109369.15\n rows=28226 width=10) (actual time=1.287..1.287 rows=1 loops=1)\n Index Cond: (pr_dealer_no = '000500'::bpchar)\n Total runtime: 1.413 ms\n(4 rows)\n\nSomething is not jiving here. However, if the server plan still does\nnot come out correct, try the following (p.s. why is function returning\nvarchar(32) and not date?):\n\ncreate or replace function player_max_atdate (varchar(32)) returns date\nas\n$$\n DECLARE\n player_record record;\n return date date;\n BEGIN\n for player_record in execute\n 'select atdate from player where playerid = \\'' || $1 || '\\'\norder by playerid desc, AtDate desc limit 1;' loop\n return_date = player_record.atdate; \n end loop;\n \n return return_date;\n END;\n$ language plpgsql immutable;\n\nMerlin\n", "msg_date": "Fri, 23 Sep 2005 08:34:21 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" }, { "msg_contents": "Dear Merlin,\n\nAt 20:34 05/09/23, Merlin Moncure wrote:\n>Can you time just the execution of this function and compare vs. pure\n>SQL version? 
If the times are different, can you do a exaplain analyze\n>of a prepared version of above?\n\nesdt=> prepare test(character varying) as select atdate from player where\nesdt-> playerid = $1 order by playerid desc, AtDate desc limit 1;\nPREPARE\nesdt=> explain analyze execute test('22220');\n Limit (cost=0.00..0.83 rows=1 width=23) (actual time=0.032..0.033 rows=1 \nloops=1)\n -> Index Scan Backward using pk_player on player (cost=0.00..970.53 \nrows=1166 width=23) (actual time=0.027..0.027 rows=1 loops=1)\n Index Cond: ((playerid)::text = ($1)::text)\n Total runtime: 0.088 ms\n\nThe prepared SQL timing is similar to that of a direct SQL.\n\n> > esdt=> create or replace view VCurPlayer3 as select * from Player where\n> > AtDate = player_max_atdate(PlayerID);\n>\n>This is wrong, it should have been\n>create or replace view VCurPlayer3 as select *,\n>player_max_atdate(PlayerID) as max_date from Player;\n\nYour suggestion returns all the records plus a max AtDate column for each \nPlayerID.\nWhat I want to get with the view is the record that has the max value of \nAtDate for each PlayerID.\nThe AtDate is a varchar(23) field containing a string date of format \n'yyyymmddhh', not the SQL Date field. Sorry if that confused you.\n\n>Something is not jiving here. However, if the server plan still does\n>not come out correct, try the following (p.s. why is function returning\n>varchar(32) and not date?):\n\nesdt=> create or replace function player_max_atdate (varchar(32)) returns \nvarchar(32) as $$\nesdt$> DECLARE\nesdt$> player_record record;\nesdt$> return_date varchar(32);\nesdt$> BEGIN\nesdt$> for player_record in execute\nesdt$> 'select atdate from player where playerid = \\'' || $1 || \n'\\' order by playerid desc, AtDate desc limit 1;' loop\nesdt$> return_date = player_record.atdate;\nesdt$> end loop;\nesdt$> return return_date;\nesdt$> END;\nesdt$> $$ language plpgsql immutable;\nCREATE FUNCTION\nesdt=> create or replace view VCurPlayer3 as select * from Player where \nAtDate = player_max_atdate(PlayerID);\nCREATE VIEW\nesdt=> explain analyze select PlayerID,AtDate from VCurPlayer3 where \nPlayerID='22220';\n\n Index Scan using pk_player on player (cost=0.00..1331.83 rows=9 \nwidth=23) (actual time=849.021..849.025 rows=1 loops=1)\n Index Cond: ((playerid)::text = '22220'::text)\n Filter: ((atdate)::text = (player_max_atdate(playerid))::text)\n Total runtime: 849.078 ms\n\nYour suggested plpgsql function seems to be even slower, with a best time \nof 849 ms after several tries. Is that expected?\n\nThanks again and best regards,\nKC.\n\n", "msg_date": "Fri, 23 Sep 2005 23:08:20 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue" } ]
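One hedged alternative for the "current row per player" view discussed above, not taken from the thread itself: DISTINCT ON expresses "latest AtDate per PlayerID" without calling a function for every row. Table and column names follow the thread; the view name VCurPlayerD is invented here, and whether the planner pushes an outer WHERE PlayerID = ... qualifier into the view efficiently should be confirmed with EXPLAIN ANALYZE on the version in use.

-- sketch only; verify the resulting plan before relying on it
CREATE OR REPLACE VIEW VCurPlayerD AS
SELECT DISTINCT ON (PlayerID) *
FROM Player
ORDER BY PlayerID DESC, AtDate DESC;

SELECT PlayerID, AtDate FROM VCurPlayerD WHERE PlayerID = '22220';

DISTINCT ON keeps the first row of each PlayerID group in the given ordering, so each player retains only the row with the highest AtDate.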
[ { "msg_contents": "Hi\n\n \n\nWe are experiencing consistent slowness on the database for one\napplication. This is more a reporting type of application, heavy on the\nbytea data type usage (gets rendered into PDFs in the app server). A lot\nof queries, mostly selects and a few random updates, get accumulated on\nthe server - with increasing volume of users on the application. Below\nis a snapshot of top, with about 80 selects and 3 or 4 updates. Things\nget better eventually if I cancel (SIGINT) some of the oldest queries. I\nalso see a few instances of shared locks not being granted during this\ntime...I don't even see high iowait or memory starvation during these\ntimes, as indicated by top.\n\n \n\n-bash-2.05b$ psql -c \"select * from pg_locks;\" dbname | grep f\n\n | | 77922136 | 16761 | ShareLock | f\n\n \n\n \n\n \n\nWe (development) are looking into the query optimization (explain\nanalyze, indexes, etc), and my understanding is that the queries when\nrun for explain analyze execute fast, but during busy times, they become\nquite slow, taking from a few seconds to a few minutes to execute. I do\nsee in the log that almost all queries do have either ORDER BY, or GROUP\nBY, or DISTINCT. Does it hurt to up the sort_mem to 3MB or 4MB? Should I\nup the effective_cache_size to 5 or 6GB? The app is does not need a lot\nof connections on the database, I can reduce it down from 600.\n\n \n\nBased on the description above and the configuration below does any\nthing appear bad in config? Is there anything I can try in the\nconfiguration to improve performance?\n\n \n\n \n\nThe database size is about 4GB. \n\nThis is PG 7.4.7, RHAS3.0 (u5), Local 4 spindle RAID10 (15KRPM), and\nlogs on a separate set of drives, RAID10. 6650 server, 4 x XEON, 12GB\nRAM.\n\nVacuum is done every night, full vacuum done once a week.\n\nI had increased the shared_buffers and sort_memory recently, which\ndidn't help.\n\n \n\nThanks,\nAnjan\n\n \n\n \n\n \n\n \n\n10:44:51 up 14 days, 13:38, 2 users, load average: 0.98, 1.14, 1.12\n\n264 processes: 257 sleeping, 7 running, 0 zombie, 0 stopped\n\nCPU states: cpu user nice system irq softirq iowait idle\n\n total 14.4% 0.0% 7.4% 0.0% 0.0% 0.0% 77.9%\n\n cpu00 15.7% 0.0% 5.7% 0.0% 0.1% 0.0% 78.2%\n\n cpu01 15.1% 0.0% 7.5% 0.0% 0.0% 0.1% 77.0%\n\n cpu02 10.5% 0.0% 5.9% 0.0% 0.0% 0.0% 83.4%\n\n cpu03 9.9% 0.0% 5.9% 0.0% 0.0% 0.0% 84.0%\n\n cpu04 7.9% 0.0% 3.7% 0.0% 0.0% 0.0% 88.2%\n\n cpu05 19.3% 0.0% 12.3% 0.0% 0.0% 0.0% 68.3%\n\n cpu06 20.5% 0.0% 9.5% 0.0% 0.0% 0.1% 69.7%\n\n cpu07 16.1% 0.0% 8.5% 0.0% 0.1% 0.3% 74.7%\n\nMem: 12081736k av, 7881972k used, 4199764k free, 0k shrd,\n82372k buff\n\n 4823496k actv, 2066260k in_d, 2036k in_c\n\nSwap: 4096532k av, 0k used, 4096532k free 6888900k\ncached\n\n \n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU\nCOMMAND\n\n16773 postgres 15 0 245M 245M 240M S 0.0 2.0 1:16 7\npostmaster\n\n16880 postgres 15 0 245M 245M 240M S 0.1 2.0 0:49 6\npostmaster\n\n16765 postgres 15 0 245M 245M 240M S 0.0 2.0 1:16 0\npostmaster\n\n16825 postgres 15 0 245M 245M 240M S 0.0 2.0 1:02 5\npostmaster\n\n16774 postgres 15 0 245M 245M 240M S 0.1 2.0 1:16 0\npostmaster\n\n16748 postgres 15 0 245M 245M 240M S 0.0 2.0 1:19 5\npostmaster\n\n16881 postgres 15 0 245M 245M 240M S 0.1 2.0 0:50 7\npostmaster\n\n16762 postgres 15 0 245M 245M 240M S 0.0 2.0 1:14 4\npostmaster\n\n...\n\n...\n\n \n\n \n\nmax_connections = 600\n\n \n\nshared_buffers = 30000 #=234MB, up from 21760=170MB min 16, at least\nmax_connections*2, 8KB each\n\nsort_mem = 2048 # min 
64, size in KB\n\nvacuum_mem = 32768 # up from 16384 min 1024, size in KB\n\n \n\n# - Free Space Map -\n\n \n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n \n\n#fsync = true # turns forced synchronization on or off\n\n#wal_sync_method = fsync # the default varies across platforms:\n\n # fsync, fdatasync, open_sync, or\nopen_datasync\n\n#wal_buffers = 8 # min 4, 8KB each\n\n \n\n# - Checkpoints -\n\n \n\ncheckpoint_segments = 125 # in logfile segments, min 1, 16MB each\n\ncheckpoint_timeout = 600 # range 30-3600, in seconds\n\n#checkpoint_warning = 30 # 0 is off, in seconds\n\n#commit_delay = 0 # range 0-100000, in microseconds\n\n#commit_siblings = 5 # range 1-1000\n\n \n\n \n\n \n\n# - Planner Method Enabling -\n\n \n\n#enable_hashagg = true\n\n#enable_hashjoin = true\n\n#enable_indexscan = true\n\n#enable_mergejoin = true\n\n#enable_nestloop = true\n\n#enable_seqscan = true\n\n#enable_sort = true\n\n#enable_tidscan = true\n\n \n\n# - Planner Cost Constants -\n\n \n\neffective_cache_size = 262144 # =2GB typically 8KB each\n\n#random_page_cost = 4 # units are one sequential page fetch\ncost\n\n#cpu_tuple_cost = 0.01 # (same)\n\n#cpu_index_tuple_cost = 0.001 # (same)\n\n#cpu_operator_cost = 0.0025 # (same)\n\n \n\n# - Genetic Query Optimizer -\n\n \n\n#geqo = true\n\n#geqo_threshold = 11\n\n#geqo_effort = 1\n\n#geqo_generations = 0\n\n#geqo_pool_size = 0 # default based on tables in statement,\n\n # range 128-1024\n\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n \n\n# - Other Planner Options -\n\n \n\n#default_statistics_target = 10 # range 1-1000\n\n#from_collapse_limit = 8\n\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\nJOINs\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHi\n \nWe are experiencing consistent slowness on the database for\none application. This is more a reporting type of application, heavy on the\nbytea data type usage (gets rendered into PDFs in the app server). A lot of\nqueries, mostly selects and a few random updates, get accumulated on the server\n– with increasing volume of users on the application. Below is a snapshot\nof top, with about 80 selects and 3 or 4 updates. Things get better eventually if\nI cancel (SIGINT) some of the oldest queries. I also see a few instances of\nshared locks not being granted during this time…I don’t even see high\niowait or memory starvation during these times, as indicated by top.\n \n-bash-2.05b$ psql -c \"select * from pg_locks;\" dbname\n| grep f\n          |          |    77922136 | 16761 |\nShareLock        | f\n \n \n \nWe (development) are looking into the query optimization\n(explain analyze, indexes, etc), and my understanding is that the queries when\nrun for explain analyze execute fast, but during busy times, they become quite\nslow, taking from a few seconds to a few minutes to execute. I do see in the\nlog that almost all queries do have either ORDER BY, or GROUP BY, or DISTINCT.\nDoes it hurt to up the sort_mem to 3MB or 4MB? Should I up the\neffective_cache_size to 5 or 6GB? The app is does not need a lot of connections\non the database, I can reduce it down from 600.\n \nBased on the description above and the configuration below\ndoes any thing appear bad in config? Is there anything I can try in the\nconfiguration to improve performance?\n \n \nThe database size is about 4GB. \nThis is PG 7.4.7, RHAS3.0 (u5), Local 4 spindle RAID10\n(15KRPM), and logs on a separate set of drives, RAID10. 
6650 server, 4 x XEON,\n12GB RAM.\nVacuum is done every night, full vacuum done once a week.\nI had increased the shared_buffers and sort_memory recently,\nwhich didn’t help.\n \nThanks,\nAnjan\n \n \n \n \n10:44:51  up 14 days, 13:38,  2 users,  load average: 0.98,\n1.14, 1.12\n264 processes: 257 sleeping, 7 running, 0 zombie, 0 stopped\nCPU states:  cpu    user    nice  system    irq  softirq \niowait    idle\n           total   14.4%    0.0%    7.4%   0.0%     0.0%   \n0.0%   77.9%\n           cpu00   15.7%    0.0%    5.7%   0.0%     0.1%   \n0.0%   78.2%\n           cpu01   15.1%    0.0%    7.5%   0.0%     0.0%   \n0.1%   77.0%\n           cpu02   10.5%    0.0%    5.9%   0.0%     0.0%   \n0.0%   83.4%\n           cpu03    9.9%    0.0%    5.9%   0.0%     0.0%   \n0.0%   84.0%\n           cpu04    7.9%    0.0%    3.7%   0.0%     0.0%   \n0.0%   88.2%\n           cpu05   19.3%    0.0%   12.3%   0.0%     0.0%   \n0.0%   68.3%\n           cpu06   20.5%    0.0%    9.5%   0.0%     0.0%   \n0.1%   69.7%\n           cpu07   16.1%    0.0%    8.5%   0.0%     0.1%   \n0.3%   74.7%\nMem:  12081736k av, 7881972k used, 4199764k free,       0k\nshrd,   82372k buff\n                   4823496k actv, 2066260k in_d,    2036k\nin_c\nSwap: 4096532k av,       0k used, 4096532k\nfree                 6888900k cached\n \n  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM  \nTIME CPU COMMAND\n16773 postgres  15   0  245M 245M  240M S     0.0  2.0  \n1:16   7 postmaster\n16880 postgres  15   0  245M 245M  240M S     0.1  2.0  \n0:49   6 postmaster\n16765 postgres  15   0  245M 245M  240M S     0.0  2.0  \n1:16   0 postmaster\n16825 postgres  15   0  245M 245M  240M S     0.0  2.0  \n1:02   5 postmaster\n16774 postgres  15   0  245M 245M  240M S     0.1  2.0  \n1:16   0 postmaster\n16748 postgres  15   0  245M 245M  240M S     0.0  2.0  \n1:19   5 postmaster\n16881 postgres  15   0  245M 245M  240M S     0.1  2.0  \n0:50   7 postmaster\n16762 postgres  15   0  245M 245M  240M S     0.0  2.0  \n1:14   4 postmaster\n…\n…\n \n \nmax_connections = 600\n \nshared_buffers = 30000  #=234MB, up from 21760=170MB min 16,\nat least max_connections*2, 8KB each\nsort_mem = 2048         # min 64, size in KB\nvacuum_mem = 32768              # up from 16384 min 1024,\nsize in KB\n \n# - Free Space Map -\n \n#max_fsm_pages = 20000          # min max_fsm_relations*16,\n6 bytes each\n#max_fsm_relations = 1000       # min 100, ~50 bytes each\n \n#fsync = true                   # turns forced\nsynchronization on or off\n#wal_sync_method = fsync        # the default varies across\nplatforms:\n                                # fsync, fdatasync,\nopen_sync, or open_datasync\n#wal_buffers = 8                # min 4, 8KB each\n \n# - Checkpoints -\n \ncheckpoint_segments = 125       # in logfile segments, min\n1, 16MB each\ncheckpoint_timeout = 600        # range 30-3600, in seconds\n#checkpoint_warning = 30        # 0 is off, in seconds\n#commit_delay = 0               # range 0-100000, in\nmicroseconds\n#commit_siblings = 5            # range 1-1000\n \n \n \n# - Planner Method Enabling -\n \n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n \n# - Planner Cost Constants -\n \neffective_cache_size = 262144   # =2GB typically 8KB each\n#random_page_cost = 4           # units are one sequential\npage fetch cost\n#cpu_tuple_cost = 0.01          # (same)\n#cpu_index_tuple_cost = 0.001   # 
(same)\n#cpu_operator_cost = 0.0025     # (same)\n \n# - Genetic Query Optimizer -\n \n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0             # default based on tables in\nstatement,\n                                # range 128-1024\n#geqo_selection_bias = 2.0      # range 1.5-2.0\n \n# - Other Planner Options -\n \n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8        # 1 disables collapsing of\nexplicit JOINs", "msg_date": "Fri, 23 Sep 2005 12:02:29 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "slow database, queries accumulating" }, { "msg_contents": "I have read that 600 connections are a LOT (somebody correct me please if\nI'm wrong), since each connections requires a process and your server must\nserve this. Besides the overhead involved, you will end up with 1200\nmegabytes of sort_mem allocated (probably idle most of time)...\n\npgpool allows you to reuse process (similar to oracle shared servers). Fact:\nI didn't have the need to use it. AFAICS, it's easy to use. (I'll try to\nmake it work and I'll share tests, but dunno know when)\n\nlong life, little spam and prosperity\n\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Anjan Dave\nEnviado el: viernes, 23 de septiembre de 2005 13:02\nPara: [email protected]\nAsunto: [PERFORM] slow database, queries accumulating\n\n\nHi\n\nWe are experiencing consistent slowness on the database for one application.\nThis is more a reporting type of application, heavy on the bytea data type\nusage (gets rendered into PDFs in the app server). A lot of queries, mostly\nselects and a few random updates, get accumulated on the server - with\nincreasing volume of users on the application. Below is a snapshot of top,\nwith about 80 selects and 3 or 4 updates. Things get better eventually if I\ncancel (SIGINT) some of the oldest queries. I also see a few instances of\nshared locks not being granted during this time.I don't even see high iowait\nor memory starvation during these times, as indicated by top.\n\n-bash-2.05b$ psql -c \"select * from pg_locks;\" dbname | grep f\n | | 77922136 | 16761 | ShareLock | f\n\n\n\nWe (development) are looking into the query optimization (explain analyze,\nindexes, etc), and my understanding is that the queries when run for explain\nanalyze execute fast, but during busy times, they become quite slow, taking\nfrom a few seconds to a few minutes to execute. I do see in the log that\nalmost all queries do have either ORDER BY, or GROUP BY, or DISTINCT. Does\nit hurt to up the sort_mem to 3MB or 4MB? Should I up the\neffective_cache_size to 5 or 6GB? The app is does not need a lot of\nconnections on the database, I can reduce it down from 600.\n\nBased on the description above and the configuration below does any thing\nappear bad in config? Is there anything I can try in the configuration to\nimprove performance?\n\n\nThe database size is about 4GB.\nThis is PG 7.4.7, RHAS3.0 (u5), Local 4 spindle RAID10 (15KRPM), and logs on\na separate set of drives, RAID10. 
6650 server, 4 x XEON, 12GB RAM.\nVacuum is done every night, full vacuum done once a week.\nI had increased the shared_buffers and sort_memory recently, which didn't\nhelp.\n\nThanks,\nAnjan\n\n\n\n\n10:44:51 up 14 days, 13:38, 2 users, load average: 0.98, 1.14, 1.12\n264 processes: 257 sleeping, 7 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 14.4% 0.0% 7.4% 0.0% 0.0% 0.0% 77.9%\n cpu00 15.7% 0.0% 5.7% 0.0% 0.1% 0.0% 78.2%\n cpu01 15.1% 0.0% 7.5% 0.0% 0.0% 0.1% 77.0%\n cpu02 10.5% 0.0% 5.9% 0.0% 0.0% 0.0% 83.4%\n cpu03 9.9% 0.0% 5.9% 0.0% 0.0% 0.0% 84.0%\n cpu04 7.9% 0.0% 3.7% 0.0% 0.0% 0.0% 88.2%\n cpu05 19.3% 0.0% 12.3% 0.0% 0.0% 0.0% 68.3%\n cpu06 20.5% 0.0% 9.5% 0.0% 0.0% 0.1% 69.7%\n cpu07 16.1% 0.0% 8.5% 0.0% 0.1% 0.3% 74.7%\nMem: 12081736k av, 7881972k used, 4199764k free, 0k shrd, 82372k\nbuff\n 4823496k actv, 2066260k in_d, 2036k in_c\nSwap: 4096532k av, 0k used, 4096532k free 6888900k\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n16773 postgres 15 0 245M 245M 240M S 0.0 2.0 1:16 7 postmaster\n16880 postgres 15 0 245M 245M 240M S 0.1 2.0 0:49 6 postmaster\n16765 postgres 15 0 245M 245M 240M S 0.0 2.0 1:16 0 postmaster\n16825 postgres 15 0 245M 245M 240M S 0.0 2.0 1:02 5 postmaster\n16774 postgres 15 0 245M 245M 240M S 0.1 2.0 1:16 0 postmaster\n16748 postgres 15 0 245M 245M 240M S 0.0 2.0 1:19 5 postmaster\n16881 postgres 15 0 245M 245M 240M S 0.1 2.0 0:50 7 postmaster\n16762 postgres 15 0 245M 245M 240M S 0.0 2.0 1:14 4 postmaster\n.\n.\n\n\nmax_connections = 600\n\nshared_buffers = 30000 #=234MB, up from 21760=170MB min 16, at least\nmax_connections*2, 8KB each\nsort_mem = 2048 # min 64, size in KB\nvacuum_mem = 32768 # up from 16384 min 1024, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 125 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 600 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n\n# - Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 262144 # =2GB typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n\n", "msg_date": "Tue, 27 Sep 2005 12:12:53 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow database, queries accumulating" } ]
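A small sketch for watching the pile-up described above while it is happening. It assumes stats_command_string = true so that current_query is populated, and uses the 7.4/8.0-era column names (procpid, current_query).

-- sessions waiting on ungranted locks, joined to what they are running
SELECT l.pid,
       l.mode,
       l.relation::regclass AS locked_relation,
       a.usename,
       a.query_start,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted
ORDER BY a.query_start;

On the sort_mem question: it is allocated per sort operation rather than per connection, so with roughly 80 concurrent reporting queries a 4 MB setting implies on the order of 300 MB or more of additional potential memory use at that concurrency; modest on a 12 GB machine, but it scales with the number of simultaneous sorts, not with max_connections alone.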
[ { "msg_contents": "From: Simon Riggs <[email protected]>\nSent: Sep 23, 2005 5:37 AM\nSubject: [PERFORM] Releasing memory during External sorting?\n\n>I have concerns about whether we are overallocating memory for use in\n>external sorts. (All code relating to this is in tuplesort.c)\n>\nA decent external sorting algorithm, say a Merge Sort + Radix (or\nDistribution Counting) hybrid with appropriate optimizations for small sub-\nfiles, should become more effective / efficient the more RAM you give it. \n\n\n>The external sort algorithm benefits from some memory but not much.\n>\nThat's probably an artifact of the psql external sorting code and _not_\ndue to some fundamental external sorting issue.\n\n\n>Knuth says that the amount of memory required is very low, with a value\n>typically less than 1 kB.\n>\n\"Required\" means the external sort can operate on that little memory. How\nMuch memory is required for optimal performance is another matter.\n\n\n>I/O overheads mean that there is benefit from having longer sequential\n>writes, so the optimum is much larger than that. I've not seen any data\n>that indicates that a setting higher than 16 MB adds any value at all to a \n>large external sort.\n>\nIt should. A first pass upper bound would be the amount of RAM needed for\nReplacement Selection to create a run (ie sort) of the whole file. That should\nbe ~ the amount of RAM to hold 1/2 the file in a Replacement Selection pass.\n\nAt the simplest, for any file over 32MB the optimum should be more than \n16MB.\n\n\n> I have some indications from private tests that very high memory settings\n>may actually hinder performance of the sorts, though I cannot explain that\n>and wonder whether it is the performance tests themselves that have issues.\n>\nHmmm. Are you talking about amounts so high that you are throwing the OS\ninto paging and swapping thrash behavior? If not, then the above is weird.\n\n\n>Does anyone have any clear data that shows the value of large settings\n>of work_mem when the data to be sorted is much larger than memory? (I am\n>well aware of the value of setting work_mem higher for smaller sorts, so\n>any performance data needs to reflect only very large sorts). \n>\nThis is not PostgreSQL specific, but it does prove the point that the performance\nof external sorts benefits greatly from large amounts of RAM being available:\n\nhttp://research.microsoft.com/barc/SortBenchmark/\n\nLooking at the particulars of the algorithms listed there should shed a lot of light\non what a \"good\" external sorting algorithm looks like:\n1= HD IO matters the most.\n 1a= Seeking behavior is the largest factor in poor performance.\n2= No optimal external sorting algorithm should use more than 2 passes.\n3= Optimal external sorting algorithms should use 1 pass if at all possible.\n4= Use as much RAM as possible, and use it as efficiently as possible.\n5= The amount of RAM needed to hide the latency of a HD subsytem goes up as\nthe _square_ of the difference between the bandwidth of the HD subsystem and\nmemory.\n6= Be cache friendly.\n7= For large numbers of records whose sorting key is substantially smaller than\nthe record itself, use a pointer + compressed key representation and write the data\nto HD in sorted order (Replace HD seeks with RAM seeks. 
Minimize RAM seeks).\n8= Since your performance will be constrained by HD IO first and RAM IO second,\nup to a point it is worth it to spend more CPU cycles to save on IO.\n\nGiven the large and growing gap between CPU IO, RAM IO, and HD IO, these issues\nare becoming more important for _internal_ sorts as well. \n\n\n>Feedback, please.\n>\n>Best Regards, Simon Riggs\n>\nHope this is useful,\nRon\n", "msg_date": "Fri, 23 Sep 2005 12:48:35 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting?" }, { "msg_contents": "Ron Peacetree <[email protected]> writes:\n> 2= No optimal external sorting algorithm should use more than 2 passes.\n> 3= Optimal external sorting algorithms should use 1 pass if at all possible.\n\nA comparison-based sort must use at least N log N operations, so it\nwould appear to me that if you haven't got approximately log N passes\nthen your algorithm doesn't work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 13:17:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting? " }, { "msg_contents": "operations != passes. If you were clever, you could probably write a\nmodified bubble-sort algorithm that only made 2 passes. A pass is a\ndisk scan, operations are then performed (hopefully in memory) on what\nyou read from the disk. So there's no theoretical log N lower-bound on\nthe number of disk passes.\n\nNot that I have anything else useful to add to this discussion, just a\ntidbit I remembered from my CS classes back in college :)\n\n-- Mark\n\nOn Fri, 2005-09-23 at 13:17 -0400, Tom Lane wrote:\n> Ron Peacetree <[email protected]> writes:\n> > 2= No optimal external sorting algorithm should use more than 2 passes.\n> > 3= Optimal external sorting algorithms should use 1 pass if at all possible.\n> \n> A comparison-based sort must use at least N log N operations, so it\n> would appear to me that if you haven't got approximately log N passes\n> then your algorithm doesn't work.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Fri, 23 Sep 2005 10:43:02 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting?" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n> operations != passes. If you were clever, you could probably write a\n> modified bubble-sort algorithm that only made 2 passes. A pass is a\n> disk scan, operations are then performed (hopefully in memory) on what\n> you read from the disk. So there's no theoretical log N lower-bound on\n> the number of disk passes.\n\nGiven infinite memory that might be true, but I don't think I believe it\nfor limited memory. If you have room for K tuples in memory then it's\nimpossible to perform more than K*N useful comparisons per pass (ie, as\neach tuple comes off the disk you can compare it to all the ones\ncurrently in memory; anything more is certainly redundant work). 
So if\nK < logN it's clearly not gonna work.\n\nIt's possible that you could design an algorithm that works in a fixed\nnumber of passes if you are allowed to assume you can hold O(log N)\ntuples in memory --- and in practice that would probably work fine,\nif the constant factor implied by the O() isn't too big. But it's not\nreally solving the general external-sort problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 14:15:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting? " }, { "msg_contents": "On Fri, 2005-09-23 at 12:48 -0400, Ron Peacetree wrote:\n\n> > I have some indications from private tests that very high memory settings\n> >may actually hinder performance of the sorts, though I cannot explain that\n> >and wonder whether it is the performance tests themselves that have issues.\n> >\n> Hmmm. Are you talking about amounts so high that you are throwing the OS\n> into paging and swapping thrash behavior? If not, then the above is weird.\n\nThanks for your thoughts. I'll retest, on the assumption that there is a\nbenefit, but there's something wrong with my earlier tests.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Sun, 25 Sep 2005 18:45:30 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting?" } ]
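For anyone wanting to repeat Simon's retest on their own data, a minimal per-session experiment. The table and column names below are placeholders; the setting is work_mem on 8.x (sort_mem on 7.4), and the value is in kilobytes.

-- placeholder table/column names; the sort must be large enough to spill to disk
SET work_mem = 16384;      -- 16 MB
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY payload;

SET work_mem = 524288;     -- 512 MB
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY payload;

Later releases also ship a trace_sort developer option (check the docs for the version in use) that logs how the external sort proceeded, which makes it easier to see whether the extra memory changed the number of merge passes.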
[ { "msg_contents": "Yep. Also, bear in mind that the lg(n!)= ~ nlgn - n lower bound on\nthe number of comparisions:\na= says nothing about the amount of data movement used.\nb= only holds for generic comparison based sorting algorithms.\n\nAs Knuth says (vol 3, p180), Distribution Counting sorts without\never comparing elements to each other at all, and so does Radix\nSort. Similar comments can be found in many algorithms texts.\n\nAny time we know that the range of the data to be sorted is substantially\nrestricted compared to the number of items to be sorted, we can sort in\nless than O(lg(n!)) time. DB fields tend to take on few values and are\ntherefore \"substantially restricted\".\n\nGiven the proper resources and algorithms, O(n) sorts are very plausible\nwhen sorting DB records.\n\nAll of the fastest external sorts of the last decade or so take advantage of\nthis. Check out that URL I posted.\n\nRon\n\n\n-----Original Message-----\nFrom: Mark Lewis <[email protected]>\nSent: Sep 23, 2005 1:43 PM\nTo: Tom Lane <[email protected]>\nSubject: Re: [PERFORM] Releasing memory during External sorting?\n\noperations != passes. If you were clever, you could probably write a\nmodified bubble-sort algorithm that only made 2 passes. A pass is a\ndisk scan, operations are then performed (hopefully in memory) on what\nyou read from the disk. So there's no theoretical log N lower-bound on\nthe number of disk passes.\n\nNot that I have anything else useful to add to this discussion, just a\ntidbit I remembered from my CS classes back in college :)\n\n-- Mark\n\nOn Fri, 2005-09-23 at 13:17 -0400, Tom Lane wrote:\n> Ron Peacetree <[email protected]> writes:\n> > 2= No optimal external sorting algorithm should use more than 2 passes.\n> > 3= Optimal external sorting algorithms should use 1 pass if at all possible.\n> \n> A comparison-based sort must use at least N log N operations, so it\n> would appear to me that if you haven't got approximately log N passes\n> then your algorithm doesn't work.\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Sep 2005 14:40:39 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting?" } ]
[ { "msg_contents": "From: Tom Lane <[email protected]>\nSent: Sep 23, 2005 2:15 PM\nSubject: Re: [PERFORM] Releasing memory during External sorting? \n\n>Mark Lewis <[email protected]> writes:\n>> operations != passes. If you were clever, you could probably write a\n>> modified bubble-sort algorithm that only made 2 passes. A pass is a\n>> disk scan, operations are then performed (hopefully in memory) on what\n>> you read from the disk. So there's no theoretical log N lower-bound on\n>> the number of disk passes.\n\n>Given infinite memory that might be true, but I don't think I believe it\n>for limited memory. If you have room for K tuples in memory then it's\n>impossible to perform more than K*N useful comparisons per pass (ie, as\n>each tuple comes off the disk you can compare it to all the ones\n>currently in memory; anything more is certainly redundant work). So if\n>K < logN it's clearly not gonna work.\n>\nActually, it's far better than that. I recall a paper I saw in one of the\nalgorithms journals 15+ years ago that proved that if you knew the range\nof the data, regardless of what that range was, and had n^2 space, you\ncould sort n items in O(n) time.\n\nTurns out that with very modest constraints on the range of the data and\nsubstantially less extra space (about the same as you'd need for\nReplacement Selection + External Merge Sort), you can _still_ sort in\nO(n) time. \n\n\n>It's possible that you could design an algorithm that works in a fixed\n>number of passes if you are allowed to assume you can hold O(log N)\n>tuples in memory --- and in practice that would probably work fine,\n>if the constant factor implied by the O() isn't too big. But it's not\n>really solving the general external-sort problem.\n>\nIf you know nothing about the data to be sorted and must guard against\nthe worst possible edge cases, AKA the classic definition of \"the general\nexternal sorting problem\", then one can't do better than some variant\nof Replacement Selection + Unbalanced Multiway Merge.\n\nOTOH, ITRW things are _not_ like that. We know the range of the data\nin our DB fields or we can safely assume it to be relatively constrained.\nThis allows us access to much better external sorting algorithms.\n\nFor example Postman Sort (the 2005 winner of the PennySort benchmark)\nis basically an IO optimized version of an external Radix Sort.\n\n\nRon\n", "msg_date": "Fri, 23 Sep 2005 15:44:54 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting?" } ]
[ { "msg_contents": "Hello pals, I have the following table in Postgresql 8.0.1\n\nMydb# \\d geoip_block\nTable \"public.geoip_block\"\n Column | Type | Modifiers\n-------------+--------+-----------\n locid | bigint |\n start_block | inet |\n end_block | inet |\n\nmydb# explain analyze select locid from geoip_block where\n'216.230.158.50'::inet between start_block and end_block;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------\n Seq Scan on geoip_block (cost=0.00..142772.86 rows=709688 width=8) (actual\ntime=14045.384..14706.927 rows=1 loops=1)\n Filter: (('216.230.158.50'::inet >= start_block) AND\n('216.230.158.50'::inet <= end_block))\n Total runtime: 14707.038 ms\n\nOk, now I decided to create a index to \"speed\" a little the query\n\nMydb# create index idx_ipblocks on geoip_block(start_block, end_block);\nCREATE INDEX\n\nclickad=# explain analyze select locid from geoip_block where\n'216.230.158.50'::inet between start_block and end_block;\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------\n Seq Scan on geoip_block (cost=0.00..78033.96 rows=230141 width=8) (actual\ntime=12107.919..12610.199 rows=1 loops=1)\n Filter: (('216.230.158.50'::inet >= start_block) AND\n('216.230.158.50'::inet <= end_block))\n Total runtime: 12610.329 ms\n(3 rows)\n\nI guess the planner is doing a sequential scan in the table, why not use the\ncompound index? Do you have any idea in how to speed up this query?\n\nThanks a lot!\n\n", "msg_date": "Fri, 23 Sep 2005 16:03:11 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index use in BETWEEN statement..." }, { "msg_contents": "\nHello pals, I have the following table in Postgresql 8.0.1\n\nMydb# \\d geoip_block\nTable \"public.geoip_block\"\n Column | Type | Modifiers\n-------------+--------+-----------\n locid | bigint |\n start_block | inet |\n end_block | inet |\n\nmydb# explain analyze select locid from geoip_block where\n'216.230.158.50'::inet between start_block and end_block;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------\n Seq Scan on geoip_block (cost=0.00..142772.86 rows=709688 width=8) (actual\ntime=14045.384..14706.927 rows=1 loops=1)\n Filter: (('216.230.158.50'::inet >= start_block) AND\n('216.230.158.50'::inet <= end_block))\n Total runtime: 14707.038 ms\n\nOk, now I decided to create a index to \"speed\" a little the query\n\nMydb# create index idx_ipblocks on geoip_block(start_block, end_block);\nCREATE INDEX\n\nclickad=# explain analyze select locid from geoip_block where\n'216.230.158.50'::inet between start_block and end_block;\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------\n Seq Scan on geoip_block (cost=0.00..78033.96 rows=230141 width=8) (actual\ntime=12107.919..12610.199 rows=1 loops=1)\n Filter: (('216.230.158.50'::inet >= start_block) AND\n('216.230.158.50'::inet <= end_block))\n Total runtime: 12610.329 ms\n(3 rows)\n\nI guess the planner is doing a sequential scan in the table, why not use the\ncompound index? 
Do you have any idea in how to speed up this query?\n\nThanks a lot!\n\n", "msg_date": "Mon, 26 Sep 2005 09:26:43 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index use in BETWEEN statement..." }, { "msg_contents": "On 9/26/05 11:26 AM, \"Cristian Prieto\" <[email protected]> wrote:\n\n> \n> Hello pals, I have the following table in Postgresql 8.0.1\n> \n> Mydb# \\d geoip_block\n> Table \"public.geoip_block\"\n> Column | Type | Modifiers\n> -------------+--------+-----------\n> locid | bigint |\n> start_block | inet |\n> end_block | inet |\n> \n> mydb# explain analyze select locid from geoip_block where\n> '216.230.158.50'::inet between start_block and end_block;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -------------------------------------------\n> Seq Scan on geoip_block (cost=0.00..142772.86 rows=709688 width=8) (actual\n> time=14045.384..14706.927 rows=1 loops=1)\n> Filter: (('216.230.158.50'::inet >= start_block) AND\n> ('216.230.158.50'::inet <= end_block))\n> Total runtime: 14707.038 ms\n> \n> Ok, now I decided to create a index to \"speed\" a little the query\n> \n> Mydb# create index idx_ipblocks on geoip_block(start_block, end_block);\n> CREATE INDEX\n> \n> clickad=# explain analyze select locid from geoip_block where\n> '216.230.158.50'::inet between start_block and end_block;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ------------------------------------------\n> Seq Scan on geoip_block (cost=0.00..78033.96 rows=230141 width=8) (actual\n> time=12107.919..12610.199 rows=1 loops=1)\n> Filter: (('216.230.158.50'::inet >= start_block) AND\n> ('216.230.158.50'::inet <= end_block))\n> Total runtime: 12610.329 ms\n> (3 rows)\n> \n> I guess the planner is doing a sequential scan in the table, why not use the\n> compound index? Do you have any idea in how to speed up this query?\n\nDid you vacuum analyze the table after creating the index?\n\nSean\n\n", "msg_date": "Mon, 26 Sep 2005 12:24:00 -0400", "msg_from": "Sean Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use in BETWEEN statement..." 
}, { "msg_contents": "\nmydb=# explain analyze select locid from geoip_block where\n'216.230.158.50'::inet between start_block and end_block;\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------\n Seq Scan on geoip_block (cost=0.00..78033.96 rows=230141 width=8) (actual\ntime=13015.538..13508.708 rows=1 loops=1)\n Filter: (('216.230.158.50'::inet >= start_block) AND\n('216.230.158.50'::inet <= end_block))\n Total runtime: 13508.905 ms\n(3 rows)\n\nmydb=# alter table geoip_block add constraint pkey_geoip_block primary key\n(start_block, end_block);\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n\"pkey_geoip_block\" for table \"geoip_block\"\nALTER TABLE\n\nmydb=# vacuum analyze geoip_block; \n\nmydb=# explain analyze select locid from geoip_block where\n'216.230.158.50'::inet between start_block and end_block;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------\n Seq Scan on geoip_block (cost=0.00..101121.01 rows=308324 width=8) (actual\ntime=12128.190..12631.550 rows=1 loops=1)\n Filter: (('216.230.158.50'::inet >= start_block) AND\n('216.230.158.50'::inet <= end_block))\n Total runtime: 12631.679 ms\n(3 rows)\n\nmydb=#\n\n\nAs you see it still using a sequential scan in the table and ignores the\nindex, any other suggestion?\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Sean Davis\nSent: Lunes, 26 de Septiembre de 2005 10:24 a.m.\nTo: Cristian Prieto; [email protected]\nSubject: Re: [GENERAL] Index use in BETWEEN statement...\n\nOn 9/26/05 11:26 AM, \"Cristian Prieto\" <[email protected]> wrote:\n\n> \n> Hello pals, I have the following table in Postgresql 8.0.1\n> \n> Mydb# \\d geoip_block\n> Table \"public.geoip_block\"\n> Column | Type | Modifiers\n> -------------+--------+-----------\n> locid | bigint |\n> start_block | inet |\n> end_block | inet |\n> \n> mydb# explain analyze select locid from geoip_block where\n> '216.230.158.50'::inet between start_block and end_block;\n> QUERY PLAN\n>\n----------------------------------------------------------------------------\n> -------------------------------------------\n> Seq Scan on geoip_block (cost=0.00..142772.86 rows=709688 width=8)\n(actual\n> time=14045.384..14706.927 rows=1 loops=1)\n> Filter: (('216.230.158.50'::inet >= start_block) AND\n> ('216.230.158.50'::inet <= end_block))\n> Total runtime: 14707.038 ms\n> \n> Ok, now I decided to create a index to \"speed\" a little the query\n> \n> Mydb# create index idx_ipblocks on geoip_block(start_block, end_block);\n> CREATE INDEX\n> \n> clickad=# explain analyze select locid from geoip_block where\n> '216.230.158.50'::inet between start_block and end_block;\n> QUERY PLAN\n>\n----------------------------------------------------------------------------\n> ------------------------------------------\n> Seq Scan on geoip_block (cost=0.00..78033.96 rows=230141 width=8) (actual\n> time=12107.919..12610.199 rows=1 loops=1)\n> Filter: (('216.230.158.50'::inet >= start_block) AND\n> ('216.230.158.50'::inet <= end_block))\n> Total runtime: 12610.329 ms\n> (3 rows)\n> \n> I guess the planner is doing a sequential scan in the table, why not use\nthe\n> compound index? 
Do you have any idea in how to speed up this query?\n\nDid you vacuum analyze the table after creating the index?\n\nSean\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Mon, 26 Sep 2005 11:49:59 -0600", "msg_from": "\"Cristian Prieto\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index use in BETWEEN statement..." }, { "msg_contents": "\"Cristian Prieto\" <[email protected]> writes:\n> mydb=# explain analyze select locid from geoip_block where\n> '216.230.158.50'::inet between start_block and end_block;\n\n> As you see it still using a sequential scan in the table and ignores the\n> index, any other suggestion?\n\nThat two-column index is entirely useless for this query; in fact btree\nindexes of any sort are pretty useless. You really need some sort of\nmultidimensional index type like rtree or gist. There was discussion\njust a week or three ago of how to optimize searches for intervals\noverlapping a specified point, which is identical to your problem.\nCan't remember if the question was about timestamp intervals or plain\nintervals, but try checking the list archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Sep 2005 15:17:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use in BETWEEN statement... " }, { "msg_contents": "\n\nCristian Prieto wrote:\n\n>mydb=# explain analyze select locid from geoip_block where\n>'216.230.158.50'::inet between start_block and end_block;\n> QUERY PLAN\n>----------------------------------------------------------------------------\n>------------------------------------------\n> Seq Scan on geoip_block (cost=0.00..78033.96 rows=230141 width=8) (actual\n>time=13015.538..13508.708 rows=1 loops=1)\n> Filter: (('216.230.158.50'::inet >= start_block) AND\n>('216.230.158.50'::inet <= end_block))\n> Total runtime: 13508.905 ms\n>(3 rows)\n>\n>mydb=# alter table geoip_block add constraint pkey_geoip_block primary key\n>(start_block, end_block);\n>NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n>\"pkey_geoip_block\" for table \"geoip_block\"\n>ALTER TABLE\n>\n>mydb=# vacuum analyze geoip_block; \n>\n>mydb=# explain analyze select locid from geoip_block where\n>'216.230.158.50'::inet between start_block and end_block;\n> QUERY PLAN\n>----------------------------------------------------------------------------\n>-------------------------------------------\n> Seq Scan on geoip_block (cost=0.00..101121.01 rows=308324 width=8) (actual\n>time=12128.190..12631.550 rows=1 loops=1)\n> Filter: (('216.230.158.50'::inet >= start_block) AND\n>('216.230.158.50'::inet <= end_block))\n> Total runtime: 12631.679 ms\n>(3 rows)\n>\n>mydb=#\n>\n>\n>As you see it still using a sequential scan in the table and ignores the\n>index, any other suggestion?\n>\n>Cristian,\n> \n>\nPlease note that the planner thinks 308324 rows are being returned, \nwhile there is actually only 1 (one!). You might try altering statistics \nfor the relevant column(s), analyzing the table, and then try again. If \nthat doesn't give you a more accurate row estimate, though, it won't help.\n\nDon\n\n", "msg_date": "Mon, 26 Sep 2005 14:44:42 -0500", "msg_from": "Don Isgitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use in BETWEEN statement..." 
}, { "msg_contents": "On 9/27/05 7:45 AM, \"Yonatan Ben-Nes\" <[email protected]> wrote:\n\n> Tom Lane wrote:\n>> \"Cristian Prieto\" <[email protected]> writes:\n>> \n>>> mydb=# explain analyze select locid from geoip_block where\n>>> '216.230.158.50'::inet between start_block and end_block;\n>> \n>> \n>>> As you see it still using a sequential scan in the table and ignores the\n>>> index, any other suggestion?\n>> \n>> \n>> That two-column index is entirely useless for this query; in fact btree\n>> indexes of any sort are pretty useless. You really need some sort of\n>> multidimensional index type like rtree or gist. There was discussion\n>> just a week or three ago of how to optimize searches for intervals\n>> overlapping a specified point, which is identical to your problem.\n>> Can't remember if the question was about timestamp intervals or plain\n>> intervals, but try checking the list archives.\n>> \n>> regards, tom lane\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>> \n>> http://archives.postgresql.org\n> \n> I think that Tom is talking about a discussion which I started entitled\n> \"Planner create a slow plan without an available index\" search for it\n> maybe it will help you.\n> At the end I created an RTREE index and it did solved my problem though\n> my data was 2 INT fields and not INET fields as yours so im not sure how\n> can you work with that... To solve my problem I created boxes from the 2\n> numbers and with them I did overlapping.\n\nThere is some code in this thread that shows the box approach explicitly:\n\nhttp://archives.postgresql.org/pgsql-sql/2005-09/msg00189.php\n\nSean\n\n", "msg_date": "Tue, 27 Sep 2005 06:54:51 -0400", "msg_from": "Sean Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use in BETWEEN statement..." }, { "msg_contents": "Tom Lane wrote:\n> \"Cristian Prieto\" <[email protected]> writes:\n> \n>>mydb=# explain analyze select locid from geoip_block where\n>>'216.230.158.50'::inet between start_block and end_block;\n> \n> \n>>As you see it still using a sequential scan in the table and ignores the\n>>index, any other suggestion?\n> \n> \n> That two-column index is entirely useless for this query; in fact btree\n> indexes of any sort are pretty useless. You really need some sort of\n> multidimensional index type like rtree or gist. There was discussion\n> just a week or three ago of how to optimize searches for intervals\n> overlapping a specified point, which is identical to your problem.\n> Can't remember if the question was about timestamp intervals or plain\n> intervals, but try checking the list archives.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\nI think that Tom is talking about a discussion which I started entitled \n\"Planner create a slow plan without an available index\" search for it \nmaybe it will help you.\nAt the end I created an RTREE index and it did solved my problem though \nmy data was 2 INT fields and not INET fields as yours so im not sure how \ncan you work with that... To solve my problem I created boxes from the 2 \nnumbers and with them I did overlapping.\n", "msg_date": "Tue, 27 Sep 2005 13:45:41 +0200", "msg_from": "Yonatan Ben-Nes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use in BETWEEN statement..." } ]
[ { "msg_contents": "From: Dann Corbit <[email protected]>\nSent: Sep 23, 2005 5:38 PM\nSubject: RE: [HACKERS] [PERFORM] Releasing memory during External sorting?\n\n>_C Unleashed_ also explains how to use a callback function to perform\n>arbitrary radix sorts (you simply need a method that returns the\n>[bucketsize] most significant bits for a given data type, for the length\n>of the key).\n>\n>So you can sort fairly arbitrary data in linear time (of course if the\n>key is long then O(n*log(n)) will be better anyway.)\n>\n>But in any case, if we are talking about external sorting, then disk\n>time will be so totally dominant that the choice of algorithm is\n>practically irrelevant.\n>\nHorsefeathers. Jim Gray's sorting contest site:\nhttp://research.microsoft.com/barc/SortBenchmark/\n\nproves that the choice of algorithm can have a profound affect on\nperformance. After all, the amount of IO done is the most\nimportant of the things that you should be optimizing for in\nchoosing an external sorting algorithm.\n\nClearly, if we know or can assume the range of the data in question\nthe theoretical minimum amount of IO is one pass through all of the\ndata (otherwise, we are back in O(lg(n!)) land ). Equally clearly, for\nHD's that one pass should involve as few seeks as possible.\n\nIn fact, such a principle can be applied to _all_ forms of IO: HD,\nRAM, and CPU cache. The absolute best that any sort can\npossibly do is to make one pass through the data and deduce the\nproper ordering of the data during that one pass.\n\nIt's usually also important that our algorithm be Stable, preferably\nWholly Stable.\n\nLet's call such a sort Optimal External Sort (OES). Just how much\nfaster would it be than current practice?\n\nThe short answer is the difference between how long it currently\ntakes to sort a file vs how long it would take to \"cat\" the contents\nof the same file to a RAM buffer (_without_ displaying it). IOW, \nthere's SIGNIFICANT room for improvement over current\nstandard practice in terms of sorting performance, particularly\nexternal sorting performance.\n\nSince sorting is a fundamental operation in many parts of a DBMS,\nthis is a Big Deal.\n \nThis discussion has gotten my creative juices flowing. I'll post\nsome Straw Man algorithm sketches after I've done some more\nthought.\n\nRon\n\n> -----Original Message-----\n> From: Dann Corbit <[email protected]>\n> Sent: Friday, September 23, 2005 2:21 PM\n> Subject: Re: [HACKERS] [PERFORM] Releasing memory during ...\n> \n>For the subfiles, load the top element of each subfile into a priority\n>queue. Extract the min element and write it to disk. If the next\n>value is the same, then the queue does not need to be adjusted.\n>If the next value in the subfile changes, then adjust it.\n> \n>Then, when the lowest element in the priority queue changes, adjust\n>the queue.\n> \n>Keep doing that until the queue is empty.\n> \n>You can create all the subfiles in one pass over the data.\n> \n>You can read all the subfiles, merge them, and write them out in a\n>second pass (no matter how many of them there are).\n> \nThe Gotcha with Priority Queues is that their performance depends\nentirely on implementation. 
In naive implementations either Enqueue()\nor Dequeue() takes O(n) time, which reduces sorting time to O(n^2).\n\nThe best implementations I know of need O(lglgn) time for those\noperations, allowing sorting to be done in O(nlglgn) time.\nUnfortunately, there's a lot of data manipulation going on in the \nprocess and two IO passes are required to sort any given file.\nPriority Queues do not appear to be very \"IO friendly\".\n\nI know of no sorting performance benchmark contest winner based on\nPriority Queues.\n\n\n>Replacement selection is not a good idea any more, since obvious\n>better ideas should take over. Longer runs are of no value if you do not\n>have to do multiple merge passes.\n> \nJudging from the literature and the contest winners, Replacement\nSelection is still a viable and important technique. Besides Priority\nQueues, what \"obvious better ideas\" have you heard of?\n\n\n>I have explained this general technique in the book \"C Unleashed\",\n>chapter 13.\n> \n>Sample code is available on the book's home page.\n>\nURL please? \n", "msg_date": "Sat, 24 Sep 2005 06:30:47 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Releasing memory during External sorting?" } ]
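A small, hedged illustration from the SQL side of the discussion above (the table and column names are invented): work_mem is the setting that decides when a sort has to go external in the first place, and it is allocated per sort operation, per backend, so large values are best set per session.

-- Value is in kB in 8.0-era servers (unit suffixes such as '4MB' came later).
SET work_mem = 4096;      -- 4 MB: a big sort will spill to disk and merge runs
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;

SET work_mem = 262144;    -- 256 MB: the same sort may now complete in memory
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;

Comparing the two runtimes gives a rough, practical feel for how much of the cost being debated here is external-merge I/O versus in-memory comparison work.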
[ { "msg_contents": "It looks like a rebranded low end Adaptec 64MB PCI-X <-> SATA RAID card.\nLooks like the 64MB buffer is not upgradable.\nLooks like it's SATA, not SATA II\n\nThere are much better ways to spend your money.\n\nThese are the products with the current best price/performance ratio:\nhttp://www.areca.us/products/html/pcix-sata.htm\n\nAssuming you are not building 1U boxes, get one of the full height\ncards and order it with the maximum size buffer you can afford.\nThe cards take 1 SODIMM, so that will be a max of 1GB or 2GB\ndepending on whether 2GB SODIMMs are available to you yet.\n\nRon\n\n-----Original Message-----\nFrom: PFC <[email protected]>\nSent: Sep 24, 2005 4:34 AM\nTo: [email protected]\nSubject: [PERFORM] Advice on RAID card\n\n\n\tHello fellow Postgresql'ers.\n\n\tI've been stumbled on this RAID card which looks nice. It is a PCI-X SATA \nRaid card with 6 channels, and does RAID 0,1,5,10,50.\n\tIt is a HP card with an Adaptec chip on it, and 64 MB cache.\n\n\tHP Part # : 372953-B21\n\tAdaptec Part # : AAR-2610SA/64MB/HP\n\n\tThere' even a picture :\n\thttp://megbytes.free.fr/Sata/DSC05970.JPG\n\n\tI know it isn't as good as a full SCSI system. I just want to know if \nsome of you have had experiences with these, and if this cards belong to \nthe \"slower than no RAID\" camp, like some DELL card we often see mentioned \nhere, or to the \"decent performance for the price\" camp. It is to run on a \nLinux.\n\n\tThanks in advance for your time and information.\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Sat, 24 Sep 2005 11:55:53 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "\n> It looks like a rebranded low end Adaptec 64MB PCI-X <-> SATA RAID card.\n> Looks like the 64MB buffer is not upgradable.\n> Looks like it's SATA, not SATA II\n\n\tYeah, that's exactly what it is. I can get one for 150 Euro, the Areca is \nat least 600. This is for a budget server so while it would be nice to \nhave all the high-tech stuff, it's not the point. My question was raher, \nis it one of the crap RAID5 cards which are actually SLOWER than plain IDE \ndisks, or is it decent, even though low-end (and cheap), and worth it \ncompared to software RAID5 ?\n\n> Assuming you are not building 1U boxes, get one of the full height\n> cards and order it with the maximum size buffer you can afford.\n> The cards take 1 SODIMM, so that will be a max of 1GB or 2GB\n> depending on whether 2GB SODIMMs are available to you yet.\n\n\tIt's for a budget dev server which should have RAID5 for reliability, but \nnot necessarily stellar performance (and price). I asked about this card \nbecause I can get one at a good price.\n\n\tThanks for taking the time to answer.\n", "msg_date": "Sat, 24 Sep 2005 18:27:36 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" } ]
[ { "msg_contents": "When I need to insert a few hundred or thousand things in\na table from a 3-tier application, it seems I'm much better\noff creating a big string of semicolon separated insert\nstatements rather than sending them one at a time - even\nwhen I use the obvious things like wrapping the statements\nin a transaction and using the library's prepared statements.\n\n\n\nI tried both Ruby/DBI and C#/Npgsql; and in both cases\nsets of inserts that took 3 seconds when run individually\ntook about 0.7 seconds when concatenated together.\n\nIs it expected that I'd be better off sending big\nconcatenated strings like\n \"insert into tbl (c1,c2) values (v1,v2);insert into tbl (c1,c2) values (v3,v4);...\"\ninstead of sending them one at a time?\n\n\n\n\n\ndb.ExecuteSQL(\"BEGIN\");\nsql = new System.Text.StringBulder(10000);\nfor ([a lot of data elements]) {\n sql.Append(\n \"insert into user_point_features (col1,col2)\"+\n \" values (\" +obj.val1 +\",\"+obj.val2+\");\"\n );\n}\ndb.ExecuteSQL(sql.ToString());\ndb.ExecuteSQL(\"COMMIT\");\n", "msg_date": "Sat, 24 Sep 2005 13:51:16 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Multiple insert performance trick or performance misunderstanding?" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> Is it expected that I'd be better off sending big\n> concatenated strings like\n> \"insert into tbl (c1,c2) values (v1,v2);insert into tbl (c1,c2) values (v3,v4);...\"\n> instead of sending them one at a time?\n\nIt's certainly possible, if the network round trip from client to server\nis slow. I do not think offhand that there is any material advantage\nfor the processing within the server (assuming you've wrapped the whole\nthing into one transaction in both cases); if anything, the\nconcatenated-statement case is probably a bit worse inside the server\nbecause it will transiently eat more memory. But network latency or\nclient-side per-command overhead could well cause the results you see.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Sep 2005 17:15:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple insert performance trick or performance\n\tmisunderstanding?" } ]
[ { "msg_contents": ">> Even for RAID5 ? it uses a bit more CPU for the parity calculations.\n\n> I honestly can't speak to RAID 5. I don't (and won't) use it. RAID 5 is \n> a little brutal when under\n> heavy write load. I use either 1, or 10.\n\nYes, for RAID5 software RAID is better than HW RAID today - the modern general purpose CPUs are *much* faster at the ECC calculations than the CPUs on most modern hardware SCSI RAID cards.\n\nNote that there is a trend toward SATA RAID, and the newer crop of SATA RAID adapters from companies like 3Ware are starting to be much faster than software RAID with lower CPU consumption.\n\nUse non-RAID SCSI controllers if you want high performance and low CPU consumption with software RAID. The write-combining and TCQ of SCSI is well suited to SW RAID. Note that if you use HW RAID controllers for SW RAID, expect slightly better performance than in their HW RAID mode, but much higher CPU consumption, as they make for terrible JBOD SCSI controllers. This is especially true of the HP smartarray controllers with their Linux drivers.\n\n- Luke\nGreenplum\n\n", "msg_date": "Sun, 25 Sep 2005 14:09:30 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advice on RAID card" } ]
[ { "msg_contents": "If I do a simple query like:\n\nSelect ids, keywords from dict where keywords='blabla' ('blabla' is a single\nword); \n\nThe table have 200 million rows, I have index the keywords field. On the\nfirst time my query seem to slow to get the result, about 15-60 sec to get\nthe result. But if I repeat the query I will get fast result. My question is\nwhy on the first time the query seem very slow. \n\nTable structure is quite simple:\n\nIds bigint, keywords varchar(150), weight varchar(1), dpos int.\n\n \n\nI use latest pgAdmin3 to test all queries. My linux box is Redhat 4 AS,\nkernel 2.6.9-11, postgresql version 8.0.3, 2x200 GB SATA 7200 RPM configure\nas RAID0 with ext3 file system for postgresql data only. 80 GB EIDE 7200 RPM\nwith ext3 file system for OS only. The server has 2 GB RAM with P4 3,2 GHz.\n\n \n\nIf I do this query on mssql server, with the same hardware spesification and\nsame data, mssql server beat postgresql, the query about 0-4 sec to get the\nresult. What wrong with my postgresql.\n\n \n\nwassalam,\n\nahmad fajar\n\n \n\n\n\n\n\n\n\n\n\n\nIf I do a simple query like:\nSelect ids, keywords from dict where\nkeywords='blabla' ('blabla' is a single word); \nThe table have 200 million rows, I have index the\nkeywords field. On the first time my query seem to slow to get the result,\nabout 15-60 sec to get the result. But if I repeat the query I will get fast\nresult. My question is why on the first time the query seem very slow. \nTable structure is quite simple:\nIds bigint, keywords varchar(150), weight\nvarchar(1), dpos int.\n \nI use latest pgAdmin3 to test all queries. My linux\nbox is Redhat 4 AS, kernel 2.6.9-11, postgresql version 8.0.3, 2x200 GB SATA 7200\nRPM configure as RAID0 with ext3 file system for postgresql data only. 80 GB\nEIDE 7200 RPM with ext3 file system for OS only. The server has 2 GB RAM with\nP4 3,2 GHz.\n \nIf I do this query on mssql server, with the same\nhardware spesification and same data, mssql server beat postgresql, the query\nabout 0-4 sec to get the result. What wrong with my postgresql.\n \n\nwassalam,\nahmad fajar", "msg_date": "Mon, 26 Sep 2005 01:42:59 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query seem to slow if table have more than 200 million rows" }, { "msg_contents": "\n\"\"Ahmad Fajar\"\" <[email protected]> wrote\n>\n> Select ids, keywords from dict where keywords='blabla' ('blabla' is a \n> single\n> word);\n>\n> The table have 200 million rows, I have index the keywords field. On the\n> first time my query seem to slow to get the result, about 15-60 sec to get\n> the result. But if I repeat the query I will get fast result. My question \n> is\n> why on the first time the query seem very slow.\n>\n> Table structure is quite simple:\n>\n> Ids bigint, keywords varchar(150), weight varchar(1), dpos int.\n>\n\nThe first slowness is obviously caused by disk IOs. The second time is \nfaster because all data pages it requires are already in buffer pool. 200 \nmillion rows is not a problem for btree index, even if your client tool \nappends some spaces to your keywords at your insertion time, the ideal btree \nis 5 to 6 layers high at most. Can you show the iostats of index from your \nstatistics view? 
\nhttp://www.postgresql.org/docs/8.0/static/monitoring-stats.html#MONITORING-STATS-VIEWS\n\nRegards,\nQingqing\n\n\n", "msg_date": "Mon, 26 Sep 2005 18:43:14 -0700", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query seem to slow if table have more than 200 million rows" }, { "msg_contents": "Hi Qingqing,\n\nI don't know whether the statistic got is bad or good, this is the\nstatistic:\nscooby=# select a.relid, a.relname, b.indexrelid, b.indexrelname,\nc.idx_scan, c.idx_tup_read, c.idx_tup_fetch,\nscooby-# a.heap_blks_read, a.heap_blks_hit, a.idx_blks_read, a.idx_blks_hit,\nscooby-# a.toast_blks_read, a.toast_blks_hit, a.tidx_blks_read,\na.tidx_blks_hit, b.idx_blks_read, b.idx_blks_hit\nscooby-# from pg_statio_user_tables a, pg_statio_user_indexes b,\npg_stat_all_indexes c\nscooby-# where a.relid=b.relid and a.relid=c.relid and\nb.indexrelid=c.indexrelid and a.relname=b.relname and\nscooby-# a.relname=c.relname and a.relname='fti_dict1';\n relid | relname | indexrelid | indexrelname | idx_scan | idx_tup_read\n| idx_tup_fetch | heap_blks_read | heap_blks_hit | idx\n_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit |\ntidx_blks_read | tidx_blks_hit | idx_blks_read | idx_blks_hit\n----------+-----------+------------+--------------+----------+--------------\n+---------------+----------------+---------------+----\n-----------+--------------+-----------------+----------------+--------------\n--+---------------+---------------+--------------\n 22880226 | fti_dict1 | 22880231 | idx_dict3 | 0 | 0\n| 0 | 0 | 0 |\n 0 | 0 | | |\n| | 0 | 0\n 22880226 | fti_dict1 | 22880230 | idx_dict2 | 7 | 592799\n| 592799 | 0 | 0 |\n 0 | 0 | | |\n| | 0 | 0\n 22880226 | fti_dict1 | 22880229 | idx_dict1 | 0 | 0\n| 0 | 0 | 0 |\n 0 | 0 | | |\n| | 0 | 0\n(3 rows)\n\nI have try several time the query below with different keyword, but I just\ngot idx_tup_read and idx_tup_fetch changed, others keep zero. \nThe Index are:\nIds (Idx_dict1), \nkeywords (idx_dict2 varchar_ops),\nkeywords (idx_dict3 varchar_pattern_ops) ==> I use this index for query ...\nkeywords like 'blabla%', just for testing purpose\n\nRegards,\nahmad fajar\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Qingqing Zhou\nSent: Selasa, 27 September 2005 8:43\nTo: [email protected]\nSubject: Re: [PERFORM] Query seem to slow if table have more than 200\nmillion rows\n\n\n\"\"Ahmad Fajar\"\" <[email protected]> wrote\n>\n> Select ids, keywords from dict where keywords='blabla' ('blabla' is a \n> single\n> word);\n>\n> The table have 200 million rows, I have index the keywords field. On the\n> first time my query seem to slow to get the result, about 15-60 sec to get\n> the result. But if I repeat the query I will get fast result. My question \n> is\n> why on the first time the query seem very slow.\n>\n> Table structure is quite simple:\n>\n> Ids bigint, keywords varchar(150), weight varchar(1), dpos int.\n>\n\nThe first slowness is obviously caused by disk IOs. The second time is \nfaster because all data pages it requires are already in buffer pool. 200 \nmillion rows is not a problem for btree index, even if your client tool \nappends some spaces to your keywords at your insertion time, the ideal btree\n\nis 5 to 6 layers high at most. Can you show the iostats of index from your \nstatistics view? 
\nhttp://www.postgresql.org/docs/8.0/static/monitoring-stats.html#MONITORING-S\nTATS-VIEWS\n\nRegards,\nQingqing\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n", "msg_date": "Tue, 27 Sep 2005 16:39:55 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query seem to slow if table have more than 200 million rows" }, { "msg_contents": "\n\"\"Ahmad Fajar\"\" <[email protected]> wrote\n> Hi Qingqing,\n>\n> I don't know whether the statistic got is bad or good, this is the\n> statistic:\n\nPlease do it in this way:\n\n1. Start postmaster with \"stats_start_collector=true\" and \n\"stats_block_level=true\".\n\n2. Use psql connect it, do something like this:\n\ntest=# select pg_stat_reset();\n pg_stat_reset\n---------------\n t\n(1 row)\n\ntest=# select * from pg_statio_user_indexes ;\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read | \nidx_\nblks_hit\n-------+------------+------------+---------+--------------+---------------+-----\n---------\n 16385 | 16390 | public | test | test_idx | 0 |\n 0\n(1 row)\n\ntest=# select count(*) from test where a <= 1234;\n count\n-------\n 7243\n(1 row)\n\ntest=# select * from pg_statio_user_indexes ;\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read | \nidx_\nblks_hit\n-------+------------+------------+---------+--------------+---------------+-----\n---------\n 16385 | 16390 | public | test | test_idx | 55 |\n 0\n(1 row)\n\n\nThis gives us that to get \"select count(*) from test where a <= 1234\", I \nhave to read 55 index blocks (no index block hit since I just restart \npostmaster so the bufferpool is empty).\n\n\nRegards,\nQingqing\n\n\n", "msg_date": "Mon, 3 Oct 2005 16:04:50 -0700", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query seem to slow if table have more than 200 million rows" } ]
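One further option, not raised in the thread itself and offered here only as a hedged suggestion: since the first-run cost is cold random heap I/O, physically ordering the table by the keyword index keeps equal keywords on adjacent pages, so a cold lookup touches far fewer pages. The table and index names below are taken from the statistics posted above (fti_dict1, idx_dict2); note that CLUSTER rewrites the table under an exclusive lock and the physical ordering decays as new rows arrive.

-- 8.0-era syntax: re-order the heap of fti_dict1 to follow the keyword index
CLUSTER idx_dict2 ON fti_dict1;
ANALYZE fti_dict1;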
[ { "msg_contents": "While I understand being economical, at some point one crosses the line\nto being penny wise and pound foolish.\n\nHow much is the data on this server going to be worth?\nHow much much will it cost you to recover or restore it (assuming that\nis even possible if you lose it)?\n\nIf your data is worth nothing or the cost to recover or restore it is\nnegligible, then you don't need (nor should want) a DB server. You'll\nget higher performance at less cost via a number of other methods.\n\nOTOH, if you data _does_ have value by any of the above metrics,\nthen it is worth it to pay attention to reliable, safe, fast, physical IO.\n\nBattery backed HD caches of appropriate size are usually well worth\nthe $, as they pay for themselves (and then some) with the first data\nloss they prevent.\n\nRAID 5 means you are _always_ only 2 HDs from data loss, and 1 HD\nfrom a serious performance hit. Part of the trade-off with using SATA\nHDs that cost 1/3-1/4 their U320 15Krpm brethren is that such\ncircumstances are +FAR+ more likely with SATA HDs. \n\nIf you are not going to use RAID 10 because of cost issues, then\nspend the $ to get the biggest battery backed cache you can afford\nand justify as being cheaper than what the proper RAID 6 or RAID 10\nsetup would cost you. Even if you are going to use SW RAID and the\ncontroller will just be a JBOD controller.\n\nOn the general subject of costs... \n\nAt this writing, SODIMM RAM costs ~$100 (US) per GB. Standard\nDIMMs cost ~$75 per GB unless you buy 4GB ones, in which case\nthey cost ~$100 per GB.\n\nThe \"sweet spot\" in SATA HD pricing is ~$160 for 320GB at 7200rpm\n(don't buy the 36GB or 74GB WD Raptors, they are no longer worth\nit). If you are careful you can get SATA HD's with 16MB rather than\n8MB buffers for that price. Each such HD will give you ~50MB/s of\nraw Average Sustained Transfer Rate.\n\nDecent x86 compatible CPUs are available for ~$200-$400 apiece.\nRarely will a commodity HW DB server need a more high end CPU.\n\nSome of the above numbers rate to either fall to 1/2 cost or 2x in value\nfor the dollar within the next 6-9 months, and all of them will within the\nnext 18 months. And so will RAID controller costs.\n\nYour salary will hopefully not degrade at that rate, and it is unlikely that\nyour value for the dollar will increase at that rate. Nor is it likely that\ndata worth putting on a DB server will do so.\n\nFigure out what your required performance and reliability for the next 18\nmonths is going to be, and buy the stuff from the above list that will\nsustain that. No matter what.\n\nAnything less rates _highly_ to end up costing you and your organization\nmore money within the next 18months than you will \"save\" in initial\nacquisition cost.\n\nRon\n\n \n\n\n\n-----Original Message-----\nFrom: PFC <[email protected]>\nSent: Sep 24, 2005 12:27 PM\nSubject: Re: [PERFORM] Advice on RAID card\n\n\n> It looks like a rebranded low end Adaptec 64MB PCI-X <-> SATA RAID card.\n> Looks like the 64MB buffer is not upgradable.\n> Looks like it's SATA, not SATA II\n\n\tYeah, that's exactly what it is. I can get one for 150 Euro, the Areca is \nat least 600. This is for a budget server so while it would be nice to \nhave all the high-tech stuff, it's not the point. 
My question was raher, \nis it one of the crap RAID5 cards which are actually SLOWER than plain IDE \ndisks, or is it decent, even though low-end (and cheap), and worth it \ncompared to software RAID5 ?\n\n> Assuming you are not building 1U boxes, get one of the full height\n> cards and order it with the maximum size buffer you can afford.\n> The cards take 1 SODIMM, so that will be a max of 1GB or 2GB\n> depending on whether 2GB SODIMMs are available to you yet.\n\n\tIt's for a budget dev server which should have RAID5 for reliability, but \nnot necessarily stellar performance (and price). I asked about this card \nbecause I can get one at a good price.\n\n\tThanks for taking the time to answer.\n\n", "msg_date": "Mon, 26 Sep 2005 07:01:59 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Advice on RAID card" }, { "msg_contents": "I think the answer is simple\n\n\nif the question is low end Raid card or software ? go on the software \nand youll get better performance.\n\nIf this is a high end server i wouldnt think twice. HW RAID is a must \nand not only because the performance but because the easynes ( hot swap \nand such ) and the battery\n\n\nRon Peacetree wrote:\n\n>While I understand being economical, at some point one crosses the line\n>to being penny wise and pound foolish.\n>\n>How much is the data on this server going to be worth?\n>How much much will it cost you to recover or restore it (assuming that\n>is even possible if you lose it)?\n>\n>If your data is worth nothing or the cost to recover or restore it is\n>negligible, then you don't need (nor should want) a DB server. You'll\n>get higher performance at less cost via a number of other methods.\n>\n>OTOH, if you data _does_ have value by any of the above metrics,\n>then it is worth it to pay attention to reliable, safe, fast, physical IO.\n>\n>Battery backed HD caches of appropriate size are usually well worth\n>the $, as they pay for themselves (and then some) with the first data\n>loss they prevent.\n>\n>RAID 5 means you are _always_ only 2 HDs from data loss, and 1 HD\n>from a serious performance hit. Part of the trade-off with using SATA\n>HDs that cost 1/3-1/4 their U320 15Krpm brethren is that such\n>circumstances are +FAR+ more likely with SATA HDs. \n>\n>If you are not going to use RAID 10 because of cost issues, then\n>spend the $ to get the biggest battery backed cache you can afford\n>and justify as being cheaper than what the proper RAID 6 or RAID 10\n>setup would cost you. Even if you are going to use SW RAID and the\n>controller will just be a JBOD controller.\n>\n>On the general subject of costs... \n>\n>At this writing, SODIMM RAM costs ~$100 (US) per GB. Standard\n>DIMMs cost ~$75 per GB unless you buy 4GB ones, in which case\n>they cost ~$100 per GB.\n>\n>The \"sweet spot\" in SATA HD pricing is ~$160 for 320GB at 7200rpm\n>(don't buy the 36GB or 74GB WD Raptors, they are no longer worth\n>it). If you are careful you can get SATA HD's with 16MB rather than\n>8MB buffers for that price. Each such HD will give you ~50MB/s of\n>raw Average Sustained Transfer Rate.\n>\n>Decent x86 compatible CPUs are available for ~$200-$400 apiece.\n>Rarely will a commodity HW DB server need a more high end CPU.\n>\n>Some of the above numbers rate to either fall to 1/2 cost or 2x in value\n>for the dollar within the next 6-9 months, and all of them will within the\n>next 18 months. 
And so will RAID controller costs.\n>\n>Your salary will hopefully not degrade at that rate, and it is unlikely that\n>your value for the dollar will increase at that rate. Nor is it likely that\n>data worth putting on a DB server will do so.\n>\n>Figure out what your required performance and reliability for the next 18\n>months is going to be, and buy the stuff from the above list that will\n>sustain that. No matter what.\n>\n>Anything less rates _highly_ to end up costing you and your organization\n>more money within the next 18months than you will \"save\" in initial\n>acquisition cost.\n>\n>Ron\n>\n> \n>\n>\n>\n>-----Original Message-----\n>From: PFC <[email protected]>\n>Sent: Sep 24, 2005 12:27 PM\n>Subject: Re: [PERFORM] Advice on RAID card\n>\n>\n> \n>\n>>It looks like a rebranded low end Adaptec 64MB PCI-X <-> SATA RAID card.\n>>Looks like the 64MB buffer is not upgradable.\n>>Looks like it's SATA, not SATA II\n>> \n>>\n>\n>\tYeah, that's exactly what it is. I can get one for 150 Euro, the Areca is \n>at least 600. This is for a budget server so while it would be nice to \n>have all the high-tech stuff, it's not the point. My question was raher, \n>is it one of the crap RAID5 cards which are actually SLOWER than plain IDE \n>disks, or is it decent, even though low-end (and cheap), and worth it \n>compared to software RAID5 ?\n>\n> \n>\n>>Assuming you are not building 1U boxes, get one of the full height\n>>cards and order it with the maximum size buffer you can afford.\n>>The cards take 1 SODIMM, so that will be a max of 1GB or 2GB\n>>depending on whether 2GB SODIMMs are available to you yet.\n>> \n>>\n>\n>\tIt's for a budget dev server which should have RAID5 for reliability, but \n>not necessarily stellar performance (and price). I asked about this card \n>because I can get one at a good price.\n>\n>\tThanks for taking the time to answer.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n>\n\n-- \n--------------------------\nCanaan Surfing Ltd.\nInternet Service Providers\nBen-Nes Michael - Manager\nTel: 972-4-6991122\nCel: 972-52-8555757\nFax: 972-4-6990098\nhttp://www.canaan.net.il\n--------------------------\n\n", "msg_date": "Thu, 29 Sep 2005 17:25:49 +0300", "msg_from": "Michael Ben-Nes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice on RAID card" } ]
[ { "msg_contents": ">From: Ron Peacetree <[email protected]>\n>Sent: Sep 24, 2005 6:30 AM\n>Subject: Re: [HACKERS] [PERFORM] Releasing memory during External sorting?\n>\n>... the amount of IO done is the most\n>important of the things that you should be optimizing for in\n>choosing an external sorting algorithm.\n>\n> <snip>\n>\n>Since sorting is a fundamental operation in many parts of a DBMS,\n>this is a Big Deal.\n> \n>This discussion has gotten my creative juices flowing. I'll post\n>some Straw Man algorithm sketches after I've done some more\n>thought.\n>\nAs a thought exeriment, I've been considering the best way to sort 1TB\n(2^40B) of 2-4KB (2^11-2^12B) records. That's 2^28-2^29 records.\n\nPart I: A Model of the System\nThe performance of such external sorts is limited by HD IO, then\nmemory IO, and finally CPU throughput.\n\nOn commodity HW, single HD IO is ~1/2048 (single HD realistic worst\ncase) to ~1/128 (single HD best case. No more than one seek every\n~14.7ms for a ~50MB/s 7200rpm SATA II HD) the throughtput of RAM.\n\nRAID HD IO will be in the range from as low as a single HD (RAID 1) to\n~1/8 (a RAID system saturating the external IO bus) the throughput of\nRAM.\n\nRAM is ~1/8-1/16 the throughput and ~128x the latency of the data\npathways internal to the CPU.\n\nThis model suggests that HD IO will greatly dominate every other\nfactor, particuarly if we are talking about a single HD rather than a\nperipheral bus saturating RAID subsystem. If at all possible, we want\nto access the HD subsystem only once for each data item, and we want\nto avoid seeking more than the critical number of seeks implied above\nwhen doing it. It also suggests that at a minimum, it's worth it to\nspend ~8 memory operations or ~64 CPU operations to avoid a HD access.\nFar more than that if we are talking about a single random access.\n\nIt's worth spending ~128 CPU operations to avoid a single random RAM\naccess, and literally 10's or even 100's of thousands of CPU operations to\navoid a random HD access. In addition, there are many indications in\ncurrent ECE and IT literature that the performance gaps between these\npieces of computer systems are increasing and expected to continue to do\nso for the forseeable future. In short, _internal_ sorts have some, and are\ngoing to increasingly have more, of the same IO problems usually\nassociated with external sorts.\n\n\nPart II: a Suggested Algorithm\nThe simplest case is one where we have to order the data using a key that\nonly has two values.\n\nGiven 2^40B of data using 2KB or 4KB per record, the most compact\nrepresentation we can make of such a data set is to assign a 32b= 4B RID\nor Rptr for location + a 1b key for each record. Just the RID's would take up\n1.25GB (250M records) or 2.5GB (500M records). Enough space that even\nan implied ordering of records may not fit into RAM.\n\nStill, sorting 1.25GB or 2.5GB of RIDs is considerably less expensive in terms\nof IO operations than sorting the actual 1TB of data.\n\nThat IO cost can be lowered even further if instead of actually physically\nsorting the RIDs, we assign a RID to the appropriate catagory inside the CPU\nas we scan the data set and append the entries in a catagory from CPU cache\nto a RAM file in one IO burst whenever said catagory gets full inside the CPU.\nWe can do the same with either RAM file to HD whenever they get full. 
The\nsorted order of the data is found by concatenating the appropriate files at the\nend of the process.\n\nAs simple as this example is, it has many of the characteristics we are looking for:\nA= We access each piece of data once on HD and in RAM.\nB= We do the minimum amount of RAM and HD IO, and almost no random IO in\neither case.\nC= We do as much work as possible within the CPU.\nD= This process is stable. Equal keys stay in the original order they are encountered.\n\nTo generalize this method, we first need our 1b Key to become a sufficiently large\nenough Key or KeyPrefix to be useful, yet not so big as to be CPU cache unfriendly.\n\nCache lines (also sometimes called \"blocks\") are usually 64B= 512b in size.\nTherefore our RID+Key or KeyPrefix should never be larger than this. For a 2^40B\ndata set, a 5B RID leaves us with potentially as much as 59B of Key or KeyPrefix.\nSince the data can't take on more than 40b worth different values (actually 500M= 29b\nfor our example), we have more than adequate space for Key or KeyPrefix. We just\nhave to figure out how to use it effectively.\nA typical CPU L2 cache can hold 10's or 100's of thousands of such cache lines.\nThat's enough that we should be able to do a significant amount of useful work within\nthe CPU w/o having to go off-die.\n\nThe data structure we are using to represent the sorted data also needs to be\ngeneralized. We want a space efficient DS that allows us to find any given element in\nas few accesses as possible and that allows us to insert new elements or reorganize\nthe DS as efficiently as possible. This being a DB discussion list, a B+ tree seems like\na fairly obvious suggestion ;-)\n\nA B+ tree where each element is no larger than a cache line and no node is larger than\nwhat fits into L2 cache can be created dynamically as we scan the data set via any of\nthe fast, low IO methods well known for doing so. Since the L2 cache can hold 10's of\nthousands of cache lines, it should be easy to make sure that the B+ tree has something\nlike 1000 elements per node (making the base of the logarithm for access being at least\n1000). The log base 1000 of 500M is ~2.9, so that means that even in the absolute \nworst case where every one of the 500M records is unique we can find any given\nelement in less than 3 accesses of the B+ tree. Increasing the order of the B+ tree is\nan option to reduce average accesses even further.\n\nSince the DS representing the sorted order of the data is a B+ tree, it's very \"IO friendly\"\nif we need to store part or all of it on HD.\n\nIn an multiprocessor environment, we can assign chunks of the data set to different\nCPUs, let them build their independant B+ trees to represent the data in sorted order from\ntheir POV, and then merge the B+ trees very efficiently into one overall DS to represent\nthe sorted order of the entire data set.\n\nFinally, since these are B+ trees, we can keep them around and easily update them at will\nfor frequent used sorting conditions.\n\nWhat do people think?\n\nRon\n", "msg_date": "Mon, 26 Sep 2005 13:47:24 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORM] A Better External Sort?" } ]
[ { "msg_contents": "Is there an performance benefit to using int2 (instead of int4) in cases\nwhere i know i will be well within its numeric range? I want to conserve\nstorage space and gain speed anywhere i can, but i know some apps simply end\nup casting 2byte data to 4byte (like Java int/short).\n\nThese int2 values will be used in primary and foreign key fields and I know\nthat i must explicitly use tick marks (ex: where int2_column = '12') in\norder to make use of indexes, but my question is IS IT WORTH IT? IS THERE\nANY REAL GAIN FOR DOING THIS?\n\nAn simple scenario would be:\n\nSongs\n-------\nsong_id serial pkey\ngenre int2 fkey\ntitle varchar\n...\n\n\nGenres\n-------\ngenreid int2 pkey\nname varchar\ndescription varchar\n\n\n\nI KNOW that I am not going to have anywhere near 32,000+ different genres in\nmy genre table so why use int4? Would that squeeze a few more milliseconds\nof performance out of a LARGE song table query with a genre lookup?\n\n\nThanks,\n\n-Aaron\n\n", "msg_date": "Mon, 26 Sep 2005 12:54:05 -0500", "msg_from": "\"Announce\" <[email protected]>", "msg_from_op": true, "msg_subject": "int2 vs int4 in Postgres" }, { "msg_contents": "On Mon, Sep 26, 2005 at 12:54:05PM -0500, Announce wrote:\n> Is there an performance benefit to using int2 (instead of int4) in cases\n> where i know i will be well within its numeric range? I want to conserve\n> storage space and gain speed anywhere i can, but i know some apps simply end\n> up casting 2byte data to 4byte (like Java int/short).\n> \n> These int2 values will be used in primary and foreign key fields and I know\n> that i must explicitly use tick marks (ex: where int2_column = '12') in\n> order to make use of indexes, but my question is IS IT WORTH IT? IS THERE\n> ANY REAL GAIN FOR DOING THIS?\n\n> An simple scenario would be:\n> \n> Songs\n> -------\n> song_id serial pkey\n> genre int2 fkey\n> title varchar\n\nNot in this case, because the varchar column that follows the int2\ncolumn needs 4-byte alignment, so after the int2 column there must be 2\nbytes of padding.\n\nIf you had two consecutive int2 fields you would save some the space.\nOr int2/bool/bool (bool has 1-byte alignment), etc.\n\nThis assumes you are in a tipical x86 environment ... in other\nenvironments the situation may be different.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 17.7\", W 73� 14' 26.8\"\nVoy a acabar con todos los humanos / con los humanos yo acabar�\nvoy a acabar con todos / con todos los humanos acabar� (Bender)\n", "msg_date": "Mon, 26 Sep 2005 14:42:53 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int2 vs int4 in Postgres" }, { "msg_contents": "On Mon, 2005-26-09 at 12:54 -0500, Announce wrote:\n> Is there an performance benefit to using int2 (instead of int4) in cases\n> where i know i will be well within its numeric range?\n\nint2 uses slightly less storage space (2 bytes rather than 4). Depending\non alignment and padding requirements, as well as the other columns in\nthe table, that may translate into requiring fewer disk pages and\ntherefore slightly better performance and lower storage requirements.\n\n-Neil\n\n\n", "msg_date": "Mon, 26 Sep 2005 15:48:30 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int2 vs int4 in Postgres" }, { "msg_contents": "[email protected] (\"Announce\") writes:\n> I KNOW that I am not going to have anywhere near 32,000+ different\n> genres in my genre table so why use int4? 
Would that squeeze a few\n> more milliseconds of performance out of a LARGE song table query\n> with a genre lookup?\n\nIf the field is immaterial in terms of the size of the table, then it\nwon't help materially.\n\nIf you were going to index on it, however, THAT would make it\nsignificant for indices involving the \"genre\" column. Fitting more\ntuples into each page is a big help, and this would help.\n\nI doubt it'll be material, but I'd think it a good thing to apply what\nrestrictions to your data types that you can, a priori, so I'd be\ninclined to use \"int2\" for this...\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/nonrdbms.html\nRules of the Evil Overlord #136. \"If I build a bomb, I will simply\nremember which wire to cut if it has to be deactivated and make every\nwire red.\" <http://www.eviloverlord.com/>\n", "msg_date": "Mon, 26 Sep 2005 17:45:56 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int2 vs int4 in Postgres" }, { "msg_contents": "[email protected] (\"Announce\") writes:\n> I KNOW that I am not going to have anywhere near 32,000+ different\n> genres in my genre table so why use int4? Would that squeeze a few\n> more milliseconds of performance out of a LARGE song table query\n> with a genre lookup?\n\nBy the way, I see a lot of queries on tables NOT optimized in this\nfashion that run in less than a millisecond, so it would seem\nremarkable to me if there were milliseconds to be squeezed out in the\nfirst place...\n-- \noutput = reverse(\"moc.enworbbc\" \"@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nWhy do we drive on parkways and park on driveways?\n", "msg_date": "Mon, 26 Sep 2005 17:47:19 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int2 vs int4 in Postgres" }, { "msg_contents": "Chris Browne <[email protected]> writes:\n> If the field is immaterial in terms of the size of the table, then it\n> won't help materially.\n> If you were going to index on it, however, THAT would make it\n> significant for indices involving the \"genre\" column. Fitting more\n> tuples into each page is a big help, and this would help.\n\nFor a multicolumn index it might help to replace int4 by int2. For a\nsingle-column index, alignment constraints on the index entries will\nprevent you from saving anything :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Sep 2005 19:00:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int2 vs int4 in Postgres " } ]
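A small sketch to make the alignment point above concrete; the disc_no column is invented purely to illustrate packing, and the exact savings depend on the platform's alignment rules.

-- int2 followed by a 4-byte-aligned column (varchar) leaves 2 bytes of padding:
CREATE TABLE songs_padded (
    song_id serial PRIMARY KEY,
    genre   int2,      -- 2 bytes used + 2 bytes of padding before "title"
    title   varchar
);

-- Two 2-byte columns back to back share the same 4-byte slot, so nothing is lost:
CREATE TABLE songs_packed (
    song_id serial PRIMARY KEY,
    genre   int2,
    disc_no int2,      -- hypothetical column, only here to show the packing
    title   varchar
);

On a wide row the 2 bytes hardly matter; as the replies above note, the effect is mostly interesting for narrow rows and multicolumn indexes.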
[ { "msg_contents": ">From: Dann Corbit <[email protected]>\n>Sent: Sep 26, 2005 5:13 PM\n>To: Ron Peacetree <[email protected]>, [email protected], \n>\[email protected]\n>Subject: RE: [HACKERS] [PERFORM] A Better External Sort?\n>\n>I think that the btrees are going to be O(n*log(n)) in construction of\n>the indexes in disk access unless you memory map them [which means you\n>would need stupendous memory volume] and so I cannot say that I really\n>understand your idea yet.\n>\nTraditional algorithms for the construction of Btree variants (B, B+, B*, ...)\ndon't require O(nlgn) HD accesses. These shouldn't either.\n\nLet's start by assuming that an element is <= in size to a cache line and a\nnode fits into L1 DCache. To make the discussion more concrete, I'll use a\n64KB L1 cache + a 1MB L2 cache only as an example.\n\nSimplest case: the Key has few enough distinct values that all Keys or\nKeyPrefixes fit into L1 DCache (for a 64KB cache with 64B lines, that's\n <= 1000 different values. More if we can fit more than 1 element into\neach cache line.).\n\nAs we scan the data set coming in from HD, we compare the Key or KeyPrefix\nto the sorted list of Key values in the node. This can be done in O(lgn) using\nBinary Search or O(lglgn) using a variation of Interpolation Search. \nIf the Key value exists, we append this RID to the list of RIDs having the\nsame Key:\n If the RAM buffer of this list of RIDs is full we append it and the current\n RID to the HD list of these RIDs.\nElse we insert this new key value into its proper place in the sorted list of Key\nvalues in the node and start a new list for this value of RID.\n\nWe allocate room for a CPU write buffer so we can schedule RAM writes to\nthe RAM lists of RIDs so as to minimize the randomness of them.\n\nWhen we are finished scanning the data set from HD, the sorted node with\nRID lists for each Key value contains the sort order for the whole data set.\n\nNotice that almost all of the random data access is occuring within the CPU\nrather than in RAM or HD, and that we are accessing RAM or HD only when\nabsolutely needed.\n\nNext simplest case: Multiple nodes, but they all fit in the CPU cache(s).\nIn the given example CPU, we will be able to fit at least 1000 elements per\nnode and 2^20/2^16= up to 16 such nodes in this CPU. We use a node's\nworth of space as a RAM write buffer, so we end up with room for 15 such\nnodes in this CPU. This is enough for a 2 level index to at least 15,000\ndistinct Key value lists.\n\nAll of the traditional tricks for splitting a Btree node and redistributing\nelements within them during insertion or splitting for maximum node\nutilization can be used here.\n\nThe most general case: There are too many nodes to fit within the CPU\ncache(s). The root node now points to a maximum of at least 1000 nodes\nsince each element in the root node points to another node. A full 2 level\nindex is now enough to point to at least 10^6 distinct Key value lists, and\n3 levels will index more distinct Key values than is possible in our 1TB, \n500M record example.\n\nWe can use some sort of node use prediction algorithm like LFU to decide\nwhich node should be moved out of CPU when we have to replace one of\nthe nodes in the CPU. 
The nodes in RAM or on HD can be arranged to\nmaximize streaming IO behavior and minimize random access IO\nbehavior.\n\nAs you can see, both the RAM and HD IO are as minimized as possible,\nand what such IO there is has been optimized for streaming behavior.\n\n \n>Can you draw a picture of it for me? (I am dyslexic and understand things\n>far better when I can visualize it).\n>\nNot much for pictures. Hopefully the explanation helps?\n\nRon\n", "msg_date": "Mon, 26 Sep 2005 21:10:47 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] A Better External Sort?" }, { "msg_contents": "Ron Peacetree <[email protected]> writes:\n> Let's start by assuming that an element is <= in size to a cache line and a\n> node fits into L1 DCache. [ much else snipped ] \n\nSo far, you've blithely assumed that you know the size of a cache line,\nthe sizes of L1 and L2 cache, and that you are working with sort keys\nthat you can efficiently pack into cache lines. And that you know the\nrelative access speeds of the caches and memory so that you can schedule\ntransfers, and that the hardware lets you get at that transfer timing.\nAnd that the number of distinct key values isn't very large.\n\nI don't see much prospect that anything we can actually use in a\nportable fashion is going to emerge from this line of thought.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Sep 2005 21:42:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] A Better External Sort? " }, { "msg_contents": "Ron,\n\nI've somehow missed part of this thread, which is a shame since this is \nan area of primary concern for me.\n\nYour suggested algorithm seems to be designed to relieve I/O load by \nmaking more use of the CPU. (if I followed it correctly). However, \nthat's not PostgreSQL's problem; currently for us external sort is a \n*CPU-bound* operation, half of which is value comparisons. (oprofiles \navailable if anyone cares)\n\nSo we need to look, instead, at algorithms which make better use of \nwork_mem to lower CPU activity, possibly even at the expense of I/O.\n\n--Josh Berkus\n", "msg_date": "Tue, 27 Sep 2005 09:15:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] A Better External Sort?" } ]
[ { "msg_contents": "SECOND ATTEMPT AT POST. Web mailer appears to have\neaten first one. I apologize in advance if anyone gets two\nversions of this post.\n=r\n\n>From: Tom Lane <[email protected]>\n>Sent: Sep 26, 2005 9:42 PM\n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort? \n>\n>So far, you've blithely assumed that you know the size of a cache line,\n>the sizes of L1 and L2 cache,\n>\nNO. I used exact values only as examples. Realistic examples drawn\nfrom an extensive survey of past, present, and what I could find out\nabout future systems; but only examples nonetheless. For instance,\nHennessy and Patterson 3ed points out that 64B cache lines are\noptimally performing for caches between 16KB and 256KB. The same\nsource as well as sources specifically on CPU memory hierarchy\ndesign points out that we are not likely to see L1 caches larger than\n256KB in the forseeable future.\n\nThe important point was the idea of an efficient Key, rather than\nRecord, sort using a CPU cache friendly data structure with provably\ngood space and IO characteristics based on a reasonable model of\ncurrent and likely future single box computer architecture (although\nit would be fairly easy to extend it to include the effects of\nnetworking.)\n\nNo apriori exact or known values are required for the method to work.\n\n\n>and that you are working with sort keys that you can efficiently pack\n>into cache lines.\n>\nNot \"pack\". \"map\". n items can not take on more than n values. n\nvalues can be represented in lgn bits. Less efficient mappings can\nalso work. Either way I demonstrated that we have plenty of space in\na likely and common cache line size. Creating a mapping function\nto represent m values in lgm bits is a well known hack, and if we keep\ntrack of minimum and maximum values for fields during insert and\ndelete operations, we can even create mapping functions fairly easily.\n(IIRC, Oracle does keep track of minimum and maximum field\nvalues.)\n\n\n>And that you know the relative access speeds of the caches and\n>memory so that you can schedule transfers,\n>\nAgain, no. I created a reasonable model of a computer system that\nholds remarkably well over a _very_ wide range of examples. I\ndon't need the numbers to be exactly right to justify my approach\nto this problem or understand why other approaches may have\ndownsides. I just have to get the relative performance of the\nsystem components and the relative performance gap between them\nreasonably correct. The stated model does that very well.\n\nPlease don't take my word for it. Go grab some random box:\nlaptop, desktop, unix server, etc and try it for yourself. Part of the\nreason I published the model was so that others could examine it.\n \n\n>and that the hardware lets you get at that transfer timing.\n>\nNever said anything about this, and in fact I do not need any such.\n\n\n>And that the number of distinct key values isn't very large.\n>\nQuite the opposite in fact. I went out of my way to show that the\nmethod still works well even if every Key is distinct. It is _more\nefficient_ when the number of distinct keys is small compared to\nthe number of data items, but it works as well as any other Btree\nwould when all n of the Keys are distinct. This is just a CPU cache\nand more IO friendly Btree, not some magical and unheard of\ntechnique. 
It's just as general purpose as Btrees usually are.\n\nI'm simply looking at the current and likely future state of computer\nsystems architecture and coming up with a slight twist on how to use\nalready well known and characterized techniques. not trying to start\na revolution.\n\n\nI'm trying very hard NOT to waste anyone's time around here.\nIncluding my own\nRon \n", "msg_date": "Tue, 27 Sep 2005 01:09:19 -0400 (EDT)", "msg_from": "Ron Peacetree <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] A Better External Sort?" }, { "msg_contents": "Ron,\n\nAgain, if you feel strongly enough about the theory to argue it, I recommend\nthat you spend your time constructively; create an implemenation of it.\nCiting academics is cool and all, but code speaks louder than theory in this\ncase. As Tom mentioned, this has to be portable. Making assumptions about\ncomputing architectures (especially those in the future), is fine for\ntheory, but not practical for something that needs to be maintained in the\nreal-world. Go forth and write thy code.\n\n-Jonah\n\nOn 9/27/05, Ron Peacetree <[email protected]> wrote:\n>\n> SECOND ATTEMPT AT POST. Web mailer appears to have\n> eaten first one. I apologize in advance if anyone gets two\n> versions of this post.\n> =r\n>\n> >From: Tom Lane <[email protected]>\n> >Sent: Sep 26, 2005 9:42 PM\n> >Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> >\n> >So far, you've blithely assumed that you know the size of a cache line,\n> >the sizes of L1 and L2 cache,\n> >\n> NO. I used exact values only as examples. Realistic examples drawn\n> from an extensive survey of past, present, and what I could find out\n> about future systems; but only examples nonetheless. For instance,\n> Hennessy and Patterson 3ed points out that 64B cache lines are\n> optimally performing for caches between 16KB and 256KB. The same\n> source as well as sources specifically on CPU memory hierarchy\n> design points out that we are not likely to see L1 caches larger than\n> 256KB in the forseeable future.\n>\n> The important point was the idea of an efficient Key, rather than\n> Record, sort using a CPU cache friendly data structure with provably\n> good space and IO characteristics based on a reasonable model of\n> current and likely future single box computer architecture (although\n> it would be fairly easy to extend it to include the effects of\n> networking.)\n>\n> No apriori exact or known values are required for the method to work.\n>\n>\n> >and that you are working with sort keys that you can efficiently pack\n> >into cache lines.\n> >\n> Not \"pack\". \"map\". n items can not take on more than n values. n\n> values can be represented in lgn bits. Less efficient mappings can\n> also work. Either way I demonstrated that we have plenty of space in\n> a likely and common cache line size. Creating a mapping function\n> to represent m values in lgm bits is a well known hack, and if we keep\n> track of minimum and maximum values for fields during insert and\n> delete operations, we can even create mapping functions fairly easily.\n> (IIRC, Oracle does keep track of minimum and maximum field\n> values.)\n>\n>\n> >And that you know the relative access speeds of the caches and\n> >memory so that you can schedule transfers,\n> >\n> Again, no. I created a reasonable model of a computer system that\n> holds remarkably well over a _very_ wide range of examples. 
I\n> don't need the numbers to be exactly right to justify my approach\n> to this problem or understand why other approaches may have\n> downsides. I just have to get the relative performance of the\n> system components and the relative performance gap between them\n> reasonably correct. The stated model does that very well.\n>\n> Please don't take my word for it. Go grab some random box:\n> laptop, desktop, unix server, etc and try it for yourself. Part of the\n> reason I published the model was so that others could examine it.\n>\n>\n> >and that the hardware lets you get at that transfer timing.\n> >\n> Never said anything about this, and in fact I do not need any such.\n>\n>\n> >And that the number of distinct key values isn't very large.\n> >\n> Quite the opposite in fact. I went out of my way to show that the\n> method still works well even if every Key is distinct. It is _more\n> efficient_ when the number of distinct keys is small compared to\n> the number of data items, but it works as well as any other Btree\n> would when all n of the Keys are distinct. This is just a CPU cache\n> and more IO friendly Btree, not some magical and unheard of\n> technique. It's just as general purpose as Btrees usually are.\n>\n> I'm simply looking at the current and likely future state of computer\n> systems architecture and coming up with a slight twist on how to use\n> already well known and characterized techniques. not trying to start\n> a revolution.\n>\n>\n> I'm trying very hard NOT to waste anyone's time around here.\n> Including my own\n> Ron\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n\n--\nRespectfully,\n\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n", "msg_date": "Tue, 27 Sep 2005 08:49:22 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] A Better External Sort?" } ]
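The "map, not pack" point above is the part of the proposal that is easiest to misread, so here is a minimal, self-contained C sketch of the general idea: replace each key by its rank in a sorted dictionary of the distinct values, so that m distinct values need only about log2(m) bits (a single byte here) and dozens of codes fit in a typical 64-byte cache line. This is only an illustration of the well-known dictionary/rank trick the post refers to — it is not Ron's actual design and not PostgreSQL code, and the row counts, the 64-byte figure and all names are assumptions made for the example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a, y = *(const int *) b;
    return (x > y) - (x < y);
}

static int cmp_byte(const void *a, const void *b)
{
    return (int) *(const unsigned char *) a - (int) *(const unsigned char *) b;
}

int main(void)
{
    /* Hypothetical low-cardinality column: many rows, few distinct keys. */
    enum { NROWS = 1000, NDISTINCT = 32 };
    int keys[NROWS];
    for (int i = 0; i < NROWS; i++)
        keys[i] = ((i * 7919) % NDISTINCT) * 100;   /* raw key values */

    /* Build a sorted dictionary of the distinct values. */
    int dict[NROWS];
    memcpy(dict, keys, sizeof(keys));
    qsort(dict, NROWS, sizeof(int), cmp_int);
    int ndict = 0;
    for (int i = 0; i < NROWS; i++)
        if (ndict == 0 || dict[i] != dict[ndict - 1])
            dict[ndict++] = dict[i];

    /* Map every key to its rank: ndict values need ceil(log2(ndict)) bits,
     * so one byte is enough here and a 64-byte cache line holds 64 codes
     * instead of 8-16 full-width keys. */
    unsigned char codes[NROWS];
    for (int i = 0; i < NROWS; i++)
    {
        int lo = 0, hi = ndict - 1;
        while (lo < hi)                      /* binary search for the rank */
        {
            int mid = (lo + hi) / 2;
            if (dict[mid] < keys[i]) lo = mid + 1; else hi = mid;
        }
        codes[i] = (unsigned char) lo;
    }

    /* Sorting the compact codes orders the rows by key, because the
     * dictionary itself is sorted; dict[code] recovers the original value. */
    qsort(codes, NROWS, sizeof(unsigned char), cmp_byte);
    printf("%d distinct values mapped to 1-byte codes; first code %d -> key %d\n",
           ndict, codes[0], dict[codes[0]]);
    return 0;
}

The mapping only pays off if the dictionary (or the min/max tracking mentioned in the post) is cheap to maintain under inserts and deletes, which is exactly the practical question the thread is arguing about.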
[ { "msg_contents": "Hello all,\n\n I have a table ma_data that contains about 300000 rows.\n This table has a primary key id and a field alias_id.\n I created a btree index on this field.\n Set statistics:\n\n ALTER TABLE \"public\".\"ma_data\"\n ALTER COLUMN \"alias_id\" SET STATISTICS 998;\n\n So, when I do something like\n SELECT alias_id FROM ma_data GROUP BY alias_id\n and get (with seq_scan off):\n \n Group (cost=0.00..1140280.63 rows=32 width=4) (actual time=0.159..2640.090 rows=32 loops=1)\n -> Index Scan using reference_9_fk on ma_data (cost=0.00..1139526.57 rows=301624 width=4) (actual time=0.120..1471.128 rows=301624 loops=1)\n Total runtime: 2640.407 ms\n (3 rows)\n\n As I understand it there are some problems with visibility of records,\n but some other DBMSs use indexes without problems (for example\n Firebird). Or maybe some other information would be helpful for me and\n the community.\n\n-- \nWith best regards,\n Andrey Repko mailto:[email protected]\n\n", "msg_date": "Tue, 27 Sep 2005 12:14:31 +0300", "msg_from": "Andrey Repko <[email protected]>", "msg_from_op": true, "msg_subject": "Index not used on group by" }, { "msg_contents": "Andrey Repko wrote:\n> \n> I have a table ma_data that contains about 300000 rows.\n> This table has a primary key id and a field alias_id.\n> I created a btree index on this field.\n> Set statistics:\n> \n> ALTER TABLE \"public\".\"ma_data\"\n> ALTER COLUMN \"alias_id\" SET STATISTICS 998;\n> \n> So, when I do something like\n> SELECT alias_id FROM ma_data GROUP BY alias_id\n\nWhy are you using GROUP BY without any aggregate functions?\n\nWhat happens if you use something like\n SELECT DISTINCT alias_id FROM ma_data;\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 27 Sep 2005 11:48:15 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used on group by" }, { "msg_contents": "Hello Richard,\n\nTuesday, September 27, 2005, 1:48:15 PM, you wrote:\n\nRH> Andrey Repko wrote:\n>> \n>> I have a table ma_data that contains about 300000 rows.\n>> This table has a primary key id and a field alias_id.\n>> I created a btree index on this field.\n>> Set statistics:\n>> \n>> ALTER TABLE \"public\".\"ma_data\"\n>> ALTER COLUMN \"alias_id\" SET STATISTICS 998;\n>> \n>> So, when I do something like\n>> SELECT alias_id FROM ma_data GROUP BY alias_id\n\nRH> Why are you using GROUP BY without any aggregate functions?\n\nRH> What happens if you use something like\nRH> SELECT DISTINCT alias_id FROM ma_data;\nsart_ma=# EXPLAIN ANALYZE SELECT DISTINCT alias_id FROM ma_data;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=65262.63..66770.75 rows=32 width=4) (actual time=16780.214..18250.761 rows=32 loops=1)\n -> Sort (cost=65262.63..66016.69 rows=301624 width=4) (actual time=16780.204..17255.129 rows=301624 loops=1)\n Sort Key: alias_id\n -> Seq Scan on ma_data (cost=0.00..37811.24 rows=301624 width=4) (actual time=6.896..15321.023 rows=301624 loops=1)\n Total runtime: 18292.542 ms\n(5 rows)\n\nsart_ma=# EXPLAIN ANALYZE SELECT alias_id FROM ma_data GROUP BY alias_id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=38565.30..38565.62 rows=32 width=4) (actual 
time=15990.863..15990.933 rows=32 loops=1)\n -> Seq Scan on ma_data (cost=0.00..37811.24 rows=301624 width=4) (actual time=3.446..14572.141 rows=301624 loops=1)\n Total runtime: 15991.244 ms\n(3 rows)\n\n-- \nWith best regards,\n Andrey Repko mailto:[email protected]\n\n", "msg_date": "Tue, 27 Sep 2005 13:57:16 +0300", "msg_from": "Andrey Repko\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used on group by" }, { "msg_contents": "Андрей Репко wrote:\n> RH> What happens if you use something like\n> RH> SELECT DISTINCT alias_id FROM ma_data;\n> sart_ma=# EXPLAIN ANALYZE SELECT DISTINCT alias_id FROM ma_data;\n> QUERY PLAN\n> \n> -------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=65262.63..66770.75 rows=32 width=4) (actual time=16780.214..18250.761 rows=32 loops=1)\n> -> Sort (cost=65262.63..66016.69 rows=301624 width=4) (actual time=16780.204..17255.129 rows=301624 loops=1)\n> Sort Key: alias_id\n> -> Seq Scan on ma_data (cost=0.00..37811.24 rows=301624 width=4) (actual time=6.896..15321.023 rows=301624 loops=1)\n> Total runtime: 18292.542 ms\n\n> sart_ma=# EXPLAIN ANALYZE SELECT alias_id FROM ma_data GROUP BY alias_id;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=38565.30..38565.62 rows=32 width=4) (actual time=15990.863..15990.933 rows=32 loops=1)\n> -> Seq Scan on ma_data (cost=0.00..37811.24 rows=301624 width=4) (actual time=3.446..14572.141 rows=301624 loops=1)\n> Total runtime: 15991.244 ms\n\nOK - the planner thinks it's doing the right thing, your cost estimates \nare way off. If you look back at where you got an index-scan, its cost \nwas 1.1 million.\n Index Scan using reference_9_fk on ma_data (cost=0.00..1139526.57\n\nThat's way above the numbers for seq-scan+hash/sort, so if the cost \nestimate was right PG would be making the right choice. Looks like you \nneed to check your configuration settings. Have you read:\n http://www.powerpostgresql.com/PerfList\nor\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n--\n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Tue, 27 Sep 2005 12:08:31 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used on group by" }, { "msg_contents": "Hello Richard,\n\nTuesday, September 27, 2005, 2:08:31 PM, you wrote:\n\n\n>> sart_ma=# EXPLAIN ANALYZE SELECT alias_id FROM ma_data GROUP BY alias_id;\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=38565.30..38565.62 rows=32 width=4)\n>> (actual time=15990.863..15990.933 rows=32 loops=1)\n>> -> Seq Scan on ma_data (cost=0.00..37811.24 rows=301624\n>> width=4) (actual time=3.446..14572.141 rows=301624 loops=1)\n>> Total runtime: 15991.244 ms\n\nRH> OK - the planner thinks it's doing the right thing, your cost estimates\nRH> are way off. If you look back at where you got an index-scan, its cost\nRH> was 1.1 million.\nRH> Index Scan using reference_9_fk on ma_data (cost=0.00..1139526.57\nBut why does PG scan _all_ the records in the table? 
As I understand it, we can\n\"just\" select the information from the index, not scanning the whole table? Of\ncourse if we select ALL records from the table the index can't help us.\nIf I write something like:\nSELECT (SELECT alias_id FROM ma_data WHERE alias_id =1 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =2 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =3 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =4 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =5 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =6 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =7 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =8 LIMIT 1)\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =9 LIMIT 1)\n...\nUNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id=max_alias_id LIMIT 1)\nIt works better, much better.\n\nRH> That's way above the numbers for seq-scan+hash/sort, so if the cost\nRH> estimate was right PG would be making the right choice. Looks like you\nRH> need to check your configuration settings. Have you read:\nRH> http://www.powerpostgresql.com/PerfList\nRH> or\nRH> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nThanks.\n\n\n-- \nWith best regards,\n Andrey Repko mailto:[email protected]\n\n", "msg_date": "Tue, 27 Sep 2005 14:37:31 +0300", "msg_from": "Andrey Repko\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used on group by" }, { "msg_contents": "Андрей Репко wrote:\n> Hello Richard,\n> \n> Tuesday, September 27, 2005, 2:08:31 PM, you wrote:\n> \n> \n> \n>>>sart_ma=# EXPLAIN ANALYZE SELECT alias_id FROM ma_data GROUP BY alias_id;\n>>> QUERY PLAN\n>>>-------------------------------------------------------------------------------------------------------------------------\n>>> HashAggregate (cost=38565.30..38565.62 rows=32 width=4)\n>>>(actual time=15990.863..15990.933 rows=32 loops=1)\n>>> -> Seq Scan on ma_data (cost=0.00..37811.24 rows=301624\n>>>width=4) (actual time=3.446..14572.141 rows=301624 loops=1)\n>>> Total runtime: 15991.244 ms\n> \n> \n> RH> OK - the planner thinks it's doing the right thing, your cost estimates\n> RH> are way off. If you look back at where you got an index-scan, its cost\n> RH> was 1.1 million.\n> RH> Index Scan using reference_9_fk on ma_data (cost=0.00..1139526.57\n> But why does PG scan _all_ the records in the table? As I understand it, we can\n> \"just\" select the information from the index, not scanning the whole table? Of\n> course if we select ALL records from the table the index can't help us.\n\nActually, if you select more than 5-10% of the rows (in general) you are \nbetter off using a seq-scan.\n\nPostgreSQL estimates the total cost of possible query plans and picks \nthe cheapest. In your case your configuration settings seem to be \npushing the cost of an index scan much higher than it is. 
So, it picks \nthe sequential-scan.\n\n> If I write something like:\n> SELECT (SELECT alias_id FROM ma_data WHERE alias_id =1 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =2 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =3 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =4 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =5 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =6 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =7 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =8 LIMIT 1)\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id =9 LIMIT 1)\n> ...\n> UNION ALL SELECT (SELECT alias_id FROM ma_data WHERE alias_id=max_alias_id LIMIT 1)\n> It works better, much better.\n\nOf course - it will always choose index queries here - it can see you \nare only fetching one row in each subquery.\n\nCorrect your configuration settings so PG estimates the cost of an index \n query correctly and all should be well.\n--\n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Tue, 27 Sep 2005 15:34:02 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used on group by" } ]
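Andrey's UNION ALL rewrite in this thread is, in effect, a hand-rolled "loose" (skip) scan: instead of reading all ~300,000 index entries, it jumps from one distinct value to the next. The following standalone C sketch only illustrates why that access pattern is cheap — a sorted array stands in for the index's leaf level, the row counts are simply borrowed from the EXPLAIN output above, and none of this is PostgreSQL internals.

#include <stdio.h>

/* First position in a[lo..n) whose value is greater than v. */
static size_t upper_bound(const int *a, size_t lo, size_t n, int v)
{
    size_t hi = n;
    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] <= v)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

int main(void)
{
    enum { NROWS = 301624, NDISTINCT = 32 };
    static int alias_id[NROWS];     /* sorted stand-in for the index leaf level */

    for (size_t i = 0; i < (size_t) NROWS; i++)
        alias_id[i] = (int) (i / (NROWS / NDISTINCT + 1));

    /* Loose scan: emit the value under the cursor, then binary-search past
     * all of its duplicates to land on the next distinct value.  Cost is
     * roughly NDISTINCT * log2(NROWS) comparisons instead of the NROWS
     * reads a full scan needs. */
    size_t pos = 0, ndistinct = 0;
    while (pos < (size_t) NROWS)
    {
        int v = alias_id[pos];
        ndistinct++;
        pos = upper_bound(alias_id, pos, NROWS, v);
    }

    printf("%zu distinct alias_id values found without reading all %d rows\n",
           ndistinct, (int) NROWS);
    return 0;
}

As far as I know, the planner of that era had no built-in loose-scan path for DISTINCT or GROUP BY, which is why Richard's advice comes down to fixing the cost settings so the seq-scan versus index-scan comparison is realistic, and why the manual UNION ALL emulation was the only way to get this access pattern.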
[ { "msg_contents": "Hi All,\n\nCan anyone please tell/point me where I can get the postgresql system layout\n(I've an interest to contribute). I would also like to know the files\ninvolved for performing each task ( for eg when doing a select operation\nwhat is exactly happening in postgres along with the files).\n\nI was wandering inside the source for a while and I couldn't get a start\npoint to go with.\n\nNeed a clarification in copydir.c file of src/port directory, In the\nfollowing snippet the destination directory is created first then the source\ndirectory is read. Suppose if I don't have permission to read the source,\neven then the destination directory would be created.\nI just want to know whether there is any reason for doing so?\n\nif (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)\nereport(ERROR,\n(errcode_for_file_access(),\nerrmsg(\"could not create directory \\\"%s\\\": %m\", todir)));\n\nxldir = AllocateDir(fromdir);\nif (xldir == NULL)\nereport(ERROR,\n(errcode_for_file_access(),\nerrmsg(\"could not open directory \\\"%s\\\": %m\", fromdir)));\n\n\n\n--\nwith thanks & regards,\nS.Gnanavel\nSatyam Computer Services Ltd.\n", "msg_date": "Tue, 27 Sep 2005 15:20:05 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL overall design" }, { "msg_contents": "At 2005-09-27 15:20:05 +0530, [email protected] wrote:\n>\n> Can anyone please tell/point me where I can get the postgresql system\n> layout (I've an interest to contribute).\n\nhttp://www.postgresql.org/developer/coding\n\nAnd, in particular:\n\nhttp://www.postgresql.org/docs/faqs.FAQ_DEV.html\n\n-- ams\n", "msg_date": "Tue, 27 Sep 2005 15:57:01 +0530", "msg_from": "Abhijit Menon-Sen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL overall design" }, { "msg_contents": "On 9/27/05, Abhijit Menon-Sen <[email protected]> wrote:\n>\n> At 2005-09-27 15:20:05 +0530, [email protected] wrote:\n> >\n> > Can anyone please tell/point me where I can get the postgresql system\n> > layout (I've an interest to contribute).\n>\n> http://www.postgresql.org/developer/coding\n>\n> And, in particular:\n>\n> http://www.postgresql.org/docs/faqs.FAQ_DEV.html\n>\n> 
-- ams\n>\n\nThanks. I'll go thru' the documentation.\n\n\n--\nwith regards,\nS.Gnanavel\n", "msg_date": "Tue, 27 Sep 2005 17:35:58 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL overall design" }, { "msg_contents": "Were you looking for a call graph?\n\nOn 9/27/05, Abhijit Menon-Sen <[email protected]> wrote:\n>\n> At 2005-09-27 15:20:05 +0530, [email protected] wrote:\n> >\n> > Can anyone please tell/point me where I can get the postgresql system\n> > layout (I've an interest to contribute).\n>\n> http://www.postgresql.org/developer/coding\n>\n> And, in particular:\n>\n> http://www.postgresql.org/docs/faqs.FAQ_DEV.html\n>\n> -- ams\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n\n--\nRespectfully,\n\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n", "msg_date": "Tue, 27 Sep 2005 09:00:35 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL overall design" }, { "msg_contents": "\n[ -performance removed ]\n\nGnanavel S wrote:\n\n>\n> Need a clarification in copydir.c file of src/port directory, In the \n> following snippet the destination directory is created first then the \n> source directory is read. Suppose if I don't have permission to read \n> the source, even then the destination directory would be created.\n> I just want to know whether there is any reason for doing so?\n>\n> \n\n\nUnder what circumstances do you imagine this will happen, since the \npostmaster user owns all the files and directories?\n\ncheers\n\nandrew\n", "msg_date": "Tue, 27 Sep 2005 09:08:31 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL overall design" }, { "msg_contents": "On 9/27/05, Jonah H. Harris <[email protected]> wrote:\n>\n> Were you looking for a call graph?\n\n\nYes. 
I want to know the list and sequence of files involved during a call.\n\nOn 9/27/05, Abhijit Menon-Sen <[email protected]> wrote:\n> >\n> > At 2005-09-27 15:20:05 +0530, [email protected] wrote:\n> > >\n> > > Can anyone please tell/point me where I can get the postgresql system\n> > > layout (I've an interest to contribute).\n> >\n> > http://www.postgresql.org/developer/coding\n> >\n> > And, in particular:\n> >\n> > http://www.postgresql.org/docs/faqs.FAQ_DEV.html\n> >\n> > -- ams\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n>\n> --\n> Respectfully,\n>\n> Jonah H. Harris, Database Internals Architect\n> EnterpriseDB Corporation\n> http://www.enterprisedb.com/\n>\n\n\n\n--\nwith regards,\nS.Gnanavel\nSatyam Computer Services Ltd.\n", "msg_date": "Tue, 27 Sep 2005 19:00:14 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL overall design" }, { "msg_contents": "On 9/27/05, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> [ -performance removed ]\n>\n> Gnanavel S wrote:\n>\n> >\n> > Need a clarification in copydir.c file of src/port directory, In the\n> > following snippet the destination directory is created first then the\n> > source directory is read. Suppose if I don't have permission to read\n> > the source, even then the destination directory would be created.\n> > I just want to know whether there is any reason for doing so?\n> >\n> >\n>\n>\n> Under what circumstances do you imagine this will happen, since the\n> postmaster user owns all the files and directories?\n\n\nUnderstood. 
But can you explain why it is done in that way as what I said\nseems to be standard way of doing it (correct me if I'm wrong).\n\n\n\n--\nwith regards,\nS.Gnanavel\n", "msg_date": "Tue, 27 Sep 2005 19:16:09 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL overall design" }, { "msg_contents": "\nHave you read the developers FAQ?\n\n---------------------------------------------------------------------------\n\nGnanavel S wrote:\n> Hi All,\n> \n> Can anyone please tell/point me where I can get the postgresql system layout\n> (I've an interest to contribute). I would also like to know the files\n> involved for performing each task ( for eg when doing a select operation\n> what is exactly happening in postgres along with the files).\n> \n> I was wandering inside the source for a while and I couldn't get a start\n> point to go with.\n> \n> Need a clarification in copydir.c file of src/port directory, In the\n> following snippet the destination directory is created first then the source\n> directory is read. Suppose if I don't have permission to read the source,\n> even then the destination directory would be created.\n> I just want to know whether there is any reason for doing so?\n> \n> if (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)\n> ereport(ERROR,\n> (errcode_for_file_access(),\n> errmsg(\"could not create directory \\\"%s\\\": %m\", todir)));\n> \n> xldir = AllocateDir(fromdir);\n> if (xldir == NULL)\n> ereport(ERROR,\n> (errcode_for_file_access(),\n> errmsg(\"could not open directory \\\"%s\\\": %m\", fromdir)));\n> \n> \n> \n> --\n> with thanks & regards,\n> S.Gnanavel\n> Satyam Computer Services Ltd.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Sep 2005 10:12:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] PostgreSQL overall design" }, { "msg_contents": "On 9/27/05, Bruce Momjian <[email protected]> wrote:\n>\n>\n> Have you read the developers FAQ?\n\n\nThanks Bruce. I'm going through that.\n\n---------------------------------------------------------------------------\n>\n> Gnanavel S wrote:\n> > Hi All,\n> >\n> > Can anyone please tell/point me where I can get the postgresql system\n> layout\n> > (I've an interest to contribute). I would also like to know the files\n> > involved for performing each task ( for eg when doing a select operation\n> > what is exactly happening in postgres along with the files).\n> >\n> > I was wandering inside the source for a while and I couldn't get a start\n> > point to go with.\n> >\n> > Need a clarification in copydir.c file of src/port directory, In the\n> > following snippet the destination directory is created first then the\n> source\n> > directory is read. 
Suppose if I don't have permission to read the\n> source,\n> > even then the destination directory would be created.\n> > I just want to know whether there is any reason for doing so?\n> >\n> > if (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)\n> > ereport(ERROR,\n> > (errcode_for_file_access(),\n> > errmsg(\"could not create directory \\\"%s\\\": %m\", todir)));\n> >\n> > xldir = AllocateDir(fromdir);\n> > if (xldir == NULL)\n> > ereport(ERROR,\n> > (errcode_for_file_access(),\n> > errmsg(\"could not open directory \\\"%s\\\": %m\", fromdir)));\n> >\n> >\n> >\n> > --\n> > with thanks & regards,\n> > S.Gnanavel\n> > Satyam Computer Services Ltd.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n\n\n--\nwith regards,\nS.Gnanavel\n", "msg_date": "Tue, 27 Sep 2005 19:50:35 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] PostgreSQL overall design" }, { "msg_contents": "On Tue, Sep 27, 2005 at 07:00:14PM +0530, Gnanavel S wrote:\n> On 9/27/05, Jonah H. Harris <[email protected]> wrote:\n> >\n> > Were you looking for a call graph?\n> \n> \n> Yes. I want to know the list and sequence of files involved during a call.\n\nTotal non-coder question, but is there an open-source utility that's\ncapable of generating that? Seems like a useful piece of documentation\nto have. Also seems like it'd be completely impractical to maintain by\nhand.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 30 Sep 2005 18:19:08 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL overall design" } ]
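On the copydir.c ordering question raised in this thread, here is a small, standalone POSIX-style sketch of the alternative order Gnanavel expected: check that the source directory is readable before creating the destination, so a permission failure does not leave an empty destination directory behind. This is only an illustration with made-up paths — it is not the PostgreSQL function (which uses AllocateDir and ereport and, as Andrew points out, normally runs as the postmaster user that owns both trees, so the existing order is harmless in practice).

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>

static int copy_dir_sketch(const char *fromdir, const char *todir)
{
    /* Open (and thereby permission-check) the source first ... */
    DIR *xldir = opendir(fromdir);
    if (xldir == NULL)
    {
        fprintf(stderr, "could not open directory \"%s\": %s\n",
                fromdir, strerror(errno));
        return -1;
    }

    /* ... and only then create the destination, so a read failure
     * cannot leave a half-created destination behind. */
    if (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)
    {
        fprintf(stderr, "could not create directory \"%s\": %s\n",
                todir, strerror(errno));
        closedir(xldir);
        return -1;
    }

    /* A real implementation would loop over readdir(xldir) here and
     * copy each entry from fromdir into todir. */

    closedir(xldir);
    return 0;
}

int main(void)
{
    /* Hypothetical paths, purely to exercise the sketch. */
    return copy_dir_sketch("/tmp/copydir_src", "/tmp/copydir_dst") == 0 ? 0 : 1;
}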