[
{
"msg_contents": "I wonder whether the current versions of postgres (i.e. either 8.2 or 8.3) are able to utilize multiple cores for the execution of a single query?\n\nThis is one thing that systems like SQL Server and Oracle have been able to do for quite some time. I haven't seen much in the documentation that hints that this may be possible in PG, nor did I find much in the mailinglists about this. The only thing I found was a topic that discussed some patches that may eventually lead to a sequence scan being handled by multiple cores.\n\nCould someone shed some light on the current or future abilities of PG for making use of multiple cores to execute a single query?\n\nThanks in advance\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\nI wonder whether the current versions of postgres (i.e. either 8.2 or 8.3) are able to utilize multiple cores for the execution of a single query?This is one thing that systems like SQL Server and Oracle have been able to do for quite some time. I haven't seen much in the documentation that hints that this may be possible in PG, nor did I find much in the mailinglists about this. The only thing I found was a topic that discussed some patches that may eventually lead to a sequence scan being handled by multiple cores.Could someone shed some light on the current or future abilities of PG for making use of multiple cores to execute a single query?Thanks in advanceExpress yourself instantly with MSN Messenger! MSN Messenger",
"msg_date": "Sat, 1 Dec 2007 14:21:21 +0100",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Utilizing multiple cores for one query"
},
{
"msg_contents": "On Dec 1, 2007 8:21 AM, henk de wit <[email protected]> wrote:\n> I wonder whether the current versions of postgres (i.e. either 8.2 or 8.3)\n> are able to utilize multiple cores for the execution of a single query?\n\nNope.\n\n> This is one thing that systems like SQL Server and Oracle have been able to\n> do for quite some time. I haven't seen much in the documentation that hints\n> that this may be possible in PG, nor did I find much in the mailinglists\n> about this. The only thing I found was a topic that discussed some patches\n> that may eventually lead to a sequence scan being handled by multiple cores.\n\nI believe the threads you're talking about were related to scanning,\nnot parallel query. Though, when Qingqing and I were discussing\nparallel query a little over a year ago, I do seem to recall several\nuninformed opinions stating that sequential scans were the only thing\nit could be useful for.\n\n> Could someone shed some light on the current or future abilities of PG for\n> making use of multiple cores to execute a single query?\n\nCurrently, the only way to parallelize a query in Postgres is to use pgpool-II.\n\nhttp://pgpool.projects.postgresql.org/\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Sat, 1 Dec 2007 08:31:13 -0500",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Utilizing multiple cores for one query"
},
{
"msg_contents": "> > I wonder whether the current versions of postgres (i.e. either 8.2 or 8.3)\n> > are able to utilize multiple cores for the execution of a single query?\n> Nope.\n\nI see, thanks for the clarification.\n\nBtw, in this thread: http://archives.postgresql.org/pgsql-performance/2007-10/msg00159.php\n\nthe following is said:\n\n>You can determine what runs in parellel based on the\n>indentation of the output.\n>Items at the same indentation level under the same\n>\"parent\" line will run in parallel\n\nWouldn't this offer some opportunities for running things on multiple cores? Based on the above, many people already seem to think that PG is able to utilize multiple cores for 1 query. Of course, it can be easily \"proved\" that this does not happen by simply watching at the CPU utilization graphs when executing a query. Nevertheless, those people may wonder why (some of) those items that already run in parallel not actually run in parallel using multiple cores?\n\n> Currently, the only way to parallelize a query in Postgres is to use pgpool-II.\n>\n> http://pgpool.projects.postgresql.org/\n\nYes, I noticed this project before. At the time it was not really clear how stable and/or how well supported this is. It indeed seems to support parallel queries automatically by being able to rewrite standard queries. It does seem it needs different DB nodes and is thus probably not able to use multiple cores of a single DBMS. Also, I could not really find how well pgpool-II is doing at making judgments of the level of parallelization it's going to use. E.g. when there are 16 nodes in the system with a currently low utilization, a single query may be split into 16 pieces. On the other hand, when 8 of these nodes are heavily utilized, splitting to 8 pieces might be better. etc.\n\nAnyway, are there any plans for postgresql to support parallelizing queries natively? \n\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> > I wonder whether the current versions of postgres (i.e. either 8.2 or 8.3)> > are able to utilize multiple cores for the execution of a single query?> Nope.I see, thanks for the clarification.Btw, in this thread: http://archives.postgresql.org/pgsql-performance/2007-10/msg00159.phpthe following is said:>You can determine what runs in parellel based on the>indentation of the output.>Items at the same indentation level under the same>\"parent\" line will run in parallelWouldn't this offer some opportunities for running things on multiple cores? Based on the above, many people already seem to think that PG is able to utilize multiple cores for 1 query. Of course, it can be easily \"proved\" that this does not happen by simply watching at the CPU utilization graphs when executing a query. Nevertheless, those people may wonder why (some of) those items that already run in parallel not actually run in parallel using multiple cores?> Currently, the only way to parallelize a query in Postgres is to use pgpool-II.>> http://pgpool.projects.postgresql.org/Yes, I noticed this project before. At the time it was not really clear how stable and/or how well supported this is. It indeed seems to support parallel queries automatically by being able to rewrite standard queries. It does seem it needs different DB nodes and is thus probably not able to use multiple cores of a single DBMS. 
Also, I could not really find how well pgpool-II is doing at making judgments of the level of parallelization it's going to use. E.g. when there are 16 nodes in the system with a currently low utilization, a single query may be split into 16 pieces. On the other hand, when 8 of these nodes are heavily utilized, splitting to 8 pieces might be better. etc.Anyway, are there any plans for postgresql to support parallelizing queries natively? Express yourself instantly with MSN Messenger! MSN Messenger",
"msg_date": "Sat, 1 Dec 2007 15:42:36 +0100",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Utilizing multiple cores for one query"
},
{
"msg_contents": "On Dec 1, 2007 9:42 AM, henk de wit <[email protected]> wrote:\n> Wouldn't this offer some opportunities for running things on multiple cores?\n\nNo, it's not actually parallel in the same sense.\n\n> Yes, I noticed this project before. At the time it was not really clear how\n> stable and/or how well supported this is. It indeed seems to support\n> parallel queries automatically by being able to rewrite standard queries. It\n> does seem it needs different DB nodes and is thus probably not able to use\n> multiple cores of a single DBMS.\n\nI've seen it actually set up to use multiple connections to the same\nDBMS. How well it would work is pretty much dependent on your\napplication and the amount of parallelization you could actually gain.\n\n\n> Also, I could not really find how well\n> pgpool-II is doing at making judgments of the level of parallelization it's\n> going to use. E.g. when there are 16 nodes in the system with a currently\n> low utilization, a single query may be split into 16 pieces. On the other\n> hand, when 8 of these nodes are heavily utilized, splitting to 8 pieces\n> might be better. etc.\n\nIIRC, it doesn't plan parallelization that way. It looks at what is\npartitioned (by default) on different nodes and parallelizes based on\nthat. As I said earlier, you can partition a single node and put\npgpool-II on top of it to gain some parallelization. Unfortunately,\nit isn't capable of handling things like parallel index builds or\nother useful maintenance features... but it can do fairly good query\nresult parallelization.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Sat, 1 Dec 2007 11:39:15 -0500",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Utilizing multiple cores for one query"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nhenk de wit wrote:\n>> > I wonder whether the current versions of postgres (i.e. either 8.2\n> or 8.3)\n>> > are able to utilize multiple cores for the execution of a single query?\n>> Nope.\n> \n> I see, thanks for the clarification.\n> \n> Btw, in this thread:\n> http://archives.postgresql.org/pgsql-performance/2007-10/msg00159.php\n> \n> the following is said:\n> \n>>You can determine what runs in parellel based on the\n>>indentation of the output.\n>>Items at the same indentation level under the same\n>>\"parent\" line will run in parallel\n> \n> Wouldn't this offer some opportunities for running things on multiple\n> cores? Based on the above, many people already seem to think that PG is\n> able to utilize multiple cores for 1 query.\n\nOf course, it depends on just what you mean. Since postgresql is a\nclient-server system, the client can run on one processor and the server on\nanother. And that _is_ parallelism in a way. For me in one application, my\nclient uses about 20% of a processor and the server uses around 80%. But in\nmore detail,\n\nVIRT RES SHR SWAP %MEM %CPU TIME+ P COMMAND\n\n2019m 94m 93m 1.9g 1.2 79 2:29.97 3 postgres: jdbeyer stock [local]\nINSERT\n2019m 813m 813m 1.2g 10.2 2 23:38.67 0 postgres: writer process\n\n2018m 29m 29m 1.9g 0.4 0 4:07.59 3 /usr/bin/postmaster -p 5432 -D ...\n 8624 652 264 7972 0.0 0 0:00.10 2 postgres: logger process\n\n 9624 1596 204 8028 0.0 0 0:01.07 2 postgres: stats buffer process\n\n 8892 840 280 8052 0.0 0 0:00.74 1 postgres: stats collector process\n 6608 2320 1980 4288 0.0 22 1:56.27 0 /home/jdbeyer/bin/enter\n\nThe P column shows the processor the process last ran on. In this case, I\nmight get away with using one processor, it is clearly using all four.\n\nNow this is not processing a single query on multiple cores (in this case,\nthe \"query\" is running on core #3 only), but the ancillary stuff is running\non multiple cores and some of it should be charged to the query. And the OS\nkernel takes time for IO and stuff as well.\n\n> Of course, it can be easily\n> \"proved\" that this does not happen by simply watching at the CPU\n> utilization graphs when executing a query. Nevertheless, those people\n> may wonder why (some of) those items that already run in parallel not\n> actually run in parallel using multiple cores?\n> \n>\n- --\n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 11:40:01 up 1 day, 2:02, 5 users, load average: 4.15, 4.14, 4.15\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (GNU/Linux)\nComment: Using GnuPG with CentOS - http://enigmail.mozdev.org\n\niD8DBQFHUZP/Ptu2XpovyZoRAn2BAKDLCyDrRiSo40u15M5GwY4OkxGlngCfbNHI\n7hjIcP1ozr+KYPr43Pck9TA=\n=Fawa\n-----END PGP SIGNATURE-----\n",
"msg_date": "Sat, 01 Dec 2007 12:03:59 -0500",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Utilizing multiple cores for one query"
},
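A minimal sketch (not part of the original thread) for reproducing Jean-David's observation; the PID in the comment is hypothetical and top is assumed to be available:

-- Find the PID of the backend serving the current session, then watch that one
-- process (e.g. "top -p 12345") while a long query runs: the backend stays on a
-- single core, while the writer and stats processes account for the other cores.
SELECT pg_backend_pid();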
{
"msg_contents": "On Sat, 1 Dec 2007, Jonah H. Harris wrote:\n> I believe the threads you're talking about were related to scanning,\n> not parallel query. Though, when Qingqing and I were discussing\n> parallel query a little over a year ago, I do seem to recall several\n> uninformed opinions stating that sequential scans were the only thing\n> it could be useful for.\n\nI would imagine sorting a huge set of results would benefit from\nmulti-threading, because it can be split up into separate tasks. Heck,\nPostgres *already* splits sorting up into multiple chunks when the results\nto sort are bigger than fit in memory.\n\nThis would benefit a lot of multi-table joins, because being able to sort\na table faster would enable merge joins to be used at lower cost. That's\nparticularly valuable when you're doing a large summary multi-table join\nthat uses most of the database contents.\n\nMatthew\n\n-- \nBeware of bugs in the above code; I have only proved it correct, not\ntried it. --Donald Knuth\n",
"msg_date": "Tue, 4 Dec 2007 13:24:35 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Utilizing multiple cores for one query"
},
{
"msg_contents": "On 12/1/07, Jonah H. Harris <[email protected]> wrote:\n> Currently, the only way to parallelize a query in Postgres is to use pgpool-II.\n\nFYI: plproxy issues queries for several nodes in parallel too.\n\n-- \nmarko\n",
"msg_date": "Mon, 10 Dec 2007 13:51:17 +0200",
"msg_from": "\"Marko Kreen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Utilizing multiple cores for one query"
}
]
[
{
"msg_contents": " Hello,\n\n Started to work with big tables (like 300GB) and performance problems started to appear. :(\n\n To simplify things - table has an index on From an index on To columns. And it also have several other not indexed columns. There are 100000+ of different values for From and the same for To.\n\n I execute simple query \"select * from bigtable where From='something'\". Query returns like 1000 rows and takes 5++ seconds to complete. As far as I understand the query is slow because:\n - first it has to retrieve pointers to rows with data from index. That goes fast.\n - retrieve all the rows one by one. There we have 100% random read because rows with the same From is distributed evenly through all the 300GB and most probably nothing is cached. So there we are limited by _one_ disk performance independently of how many disks we have in storage? And in case storage consists of 15k rpm Cheetahs with 3.5ms average read seek time we should expect not more than ~285 rows per second?\n\n I feel that I'm overlooking something here. But I'm new into data warehousing. :)\n\n Also this query should greatly benefit from parallel execution or async IO. Storage (seeks/second) scales almost linearly when it has a lot of disks. And query is completely IO bound so it should scale well on single server.\n\n And I cannot use some index organized table or table partitioned by From :) because there are at least 2 similar indexes by which queries can be executed - From and To.\n\n Ideas for improvement? Greenplum or EnterpriseDB? Or I forgot something from PostgreSQL features.\n\n Thanks,\n\n Mindaugas\n",
"msg_date": "Sun, 02 Dec 2007 12:26:17 +0200 (EET)",
"msg_from": "\"Mindaugas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dealing with big tables"
},
{
"msg_contents": "Hi,\n\nmy answer may be out of topic since you might be looking for a\npostgres-only solution.. But just in case....\n\nWhat are you trying to achieve exactly ? Is there any way you could\nre-work your algorithms to avoid selects and use a sequential scan\n(consider your postgres data as one big file) to retrieve each of the\nrows, analyze / compute them (possibly in a distributed manner), and\njoin the results at the end ? \n\nA few pointers :\nhttp://lucene.apache.org/hadoop/\nhttp://www.gridgain.com/\n\nRegards,\nSami Dalouche\n\n\n\nOn Sun, 2007-12-02 at 12:26 +0200, Mindaugas wrote:\n> Hello,\n> \n> Started to work with big tables (like 300GB) and performance problems started to appear. :(\n> \n> To simplify things - table has an index on From an index on To columns. And it also have several other not indexed columns. There are 100000+ of different values for From and the same for To.\n> \n> I execute simple query \"select * from bigtable where From='something'\". Query returns like 1000 rows and takes 5++ seconds to complete. As far as I understand the query is slow because:\n> - first it has to retrieve pointers to rows with data from index. That goes fast.\n> - retrieve all the rows one by one. There we have 100% random read because rows with the same From is distributed evenly through all the 300GB and most probably nothing is cached. So there we are limited by _one_ disk performance independently of how many disks we have in storage? And in case storage consists of 15k rpm Cheetahs with 3.5ms average read seek time we should expect not more than ~285 rows per second?\n> \n> I feel that I'm overlooking something here. But I'm new into data warehousing. :)\n> \n> Also this query should greatly benefit from parallel execution or async IO. Storage (seeks/second) scales almost linearly when it has a lot of disks. And query is completely IO bound so it should scale well on single server.\n> \n> And I cannot use some index organized table or table partitioned by From :) because there are at least 2 similar indexes by which queries can be executed - From and To.\n> \n> Ideas for improvement? Greenplum or EnterpriseDB? Or I forgot something from PostgreSQL features.\n> \n> Thanks,\n> \n> Mindaugas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sun, 02 Dec 2007 12:05:35 +0100",
"msg_from": "Sami Dalouche <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dealing with big tables"
},
{
"msg_contents": "\n> my answer may be out of topic since you might be looking for a\n> postgres-only solution.. But just in case....\n\n I'd like to stay with SQL.\n\n> What are you trying to achieve exactly ? Is there any way you could\n> re-work your algorithms to avoid selects and use a sequential scan\n> (consider your postgres data as one big file) to retrieve each of the\n> rows, analyze / compute them (possibly in a distributed manner), and\n> join the results at the end ?\n\n I'm trying to improve performance - get answer from mentioned query \nfaster.\n\n And since cardinality is high (100000+ different values) I doubt that it \nwould be possible to reach select speed with reasonable number of nodes of \nsequential scan nodes.\n\n Mindaugas\n",
"msg_date": "Sun, 02 Dec 2007 13:37:56 +0200 (EET)",
"msg_from": "\"Mindaugas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dealing with big tables"
},
{
"msg_contents": "On Dec 2, 2007 11:26 AM, Mindaugas <[email protected]> wrote:\n> I execute simple query \"select * from bigtable where From='something'\". Query returns like 1000 rows and takes 5++ seconds to complete. As far as I understand the query is slow because:\n\nCan you post an EXPLAIN ANALYZE? Which version of PostgreSQL do you use?\n\n--\nGuillaume\n",
"msg_date": "Sun, 2 Dec 2007 12:55:34 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dealing with big tables"
},
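A sketch of the output Guillaume is asking for, using the hypothetical table and column names from Mindaugas's description (the real schema is not shown in the thread):

-- "From" is quoted because it is a reserved word when used as a column name.
EXPLAIN ANALYZE
SELECT * FROM bigtable WHERE "From" = 'something';
-- The plan shows whether the index on "From" is used and how much of the runtime
-- goes into fetching the ~1000 matching heap rows at random.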
{
"msg_contents": "\n\"Mindaugas\" <[email protected]> writes:\n\n> I execute simple query \"select * from bigtable where From='something'\".\n> Query returns like 1000 rows and takes 5++ seconds to complete. \n\nAs you pointed out that's not terribly slow for 1000 random accesses. It\nsounds like your drive has nearly 5ms seek time which is pretty common.\n\nWhat exactly is your goal? Do you need this query to respond in under a\nspecific limit? What limit? Do you need to be able to execute many instances\nof this query in less than 5s * the number of executions? Or do you have more\ncomplex queries that you're really worried about?\n\nI do have an idea of how to improve Postgres for this case but it has to wait\nuntil we're done with 8.3 and the tree opens for 8.4.\n\n> Ideas for improvement? Greenplum or EnterpriseDB? Or I forgot something\n> from PostgreSQL features.\n\nBoth Greenplum and EnterpriseDB have products in this space which let you\nbreak the query up over several servers but at least in EnterpriseDB's case\nit's targeted towards running complex queries which take longer than this to\nrun. I doubt you would see much benefit for a 5s query after the overhead of\nsending parts of the query out to different machines and then reassembling the\nresults. If your real concern is with more complex queries they may make sense\nthough. It's also possible that paying someone to come look at your database\nwill find other ways to speed it up.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n",
"msg_date": "Sun, 02 Dec 2007 12:07:37 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dealing with big tables"
},
{
"msg_contents": "\n> What exactly is your goal? Do you need this query to respond in under a\n> specific limit? What limit? Do you need to be able to execute many instances\n> of this query in less than 5s * the number of executions? Or do you have more\n> complex queries that you're really worried about?\n\n I'd like this query to respond under a specific time limit. 5s now is OK but 50s later for 10000 rows is too slow.\n\n> Both Greenplum and EnterpriseDB have products in this space which let you\n> break the query up over several servers but at least in EnterpriseDB's case\n> it's targeted towards running complex queries which take longer than this to\n> run. I doubt you would see much benefit for a 5s query after the overhead of\n> sending parts of the query out to different machines and then reassembling the\n> results. If your real concern is with more complex queries they may make sense\n> though. It's also possible that paying someone to come look at your database\n> will find other ways to speed it up.\n\n I see. This query also should benefit alot even when run in parallel on one server. Since anyway most time it spends in waiting for storage to respond.\n\n Also off list I was pointed out about covering indexes in MySQL. But they are not supported in PostgreSQL, aren't they?\n\n Mindaugas\n",
"msg_date": "Sun, 02 Dec 2007 14:35:45 +0200 (EET)",
"msg_from": "\"Mindaugas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dealing with big tables"
},
{
"msg_contents": "Mindaugas wrote:\n> \n>\n> And I cannot use some index organized table or table partitioned by From :) because there are at least 2 similar indexes by which queries can be executed - From and To.\n>\n> \n\nThis makes things a bit tough. One trick is to vertically partition the \ntable into two new tables - with \"From\" in one and \"To\" in the other... \nthen you can (horizontally) partition or cluster on each of these \ncolumns separately.\n\nYou can make it reasonably transparent by using a view to combine the \ncolumns again to get something that looks like the original table.\n\nCheers\n\nMark\n",
"msg_date": "Mon, 03 Dec 2007 12:13:07 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dealing with big tables"
},
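A sketch of the vertical split Mark describes, with hypothetical table, column, and index names (the original schema is not given in the thread):

-- Two narrow tables, each of which can be clustered or partitioned on its own key.
CREATE TABLE msg_from (msg_id bigint PRIMARY KEY, "From" text, payload text);
CREATE TABLE msg_to   (msg_id bigint PRIMARY KEY, "To" text);

CREATE INDEX msg_from_idx ON msg_from ("From");
CREATE INDEX msg_to_idx   ON msg_to ("To");

-- A view stitches the columns back together so existing queries still see
-- something that looks like the original wide table.
CREATE VIEW bigtable AS
  SELECT f.msg_id, f."From", t."To", f.payload
  FROM msg_from f
  JOIN msg_to t USING (msg_id);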
{
"msg_contents": "On Mon, 3 Dec 2007, Mark Kirkwood wrote:\n\n> > And I cannot use some index organized table or table partitioned by\n> > From :) because there are at least 2 similar indexes by which queries\n> > can be executed - From and To.\n\n> This makes things a bit tough. One trick is to vertically partition the\n> table into two new tables - with \"From\" in one and \"To\" in the other...\n> then you can (horizontally) partition or cluster on each of these\n> columns separately.\n\nOr you could even commit a capital sin and have several copies of the same\ntable, sorted by different columns. Just remember to select from the\ncorrect table to get the performance, and to write all changes to all the\ntables! Kind of messes up transactions and locking a little though.\n\nMatthew\n\n-- \nNo, C++ isn't equal to D. 'C' is undeclared, so we assume it's an int,\nwith a default value of zero. Hence, C++ should really be called 1.\n-- met24, commenting on the quote \"C++ -- shouldn't it be called D?\"\n",
"msg_date": "Mon, 3 Dec 2007 10:35:27 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dealing with big tables"
},
{
"msg_contents": "On Dec 2, 2007 7:35 AM, Mindaugas <[email protected]> wrote:\n> I'd like this query to respond under a specific time limit. 5s now is OK but 50s later for 10000 rows is too slow.\n>\n> Also off list I was pointed out about covering indexes in MySQL. But they are not supported in PostgreSQL, aren't they?\n\nThe PostgreSQL architecture does not allow that...some of the MVCC\ninfo is stored in the tuple. An all index access strategy would only\nhelp anyways if all the information being looked up was in the\nindex...this probably isn't practical in really big table anyways (I\nthink said feature is overrated). What would help, at least\ntemporarily, would be to cluster the table on your index so at least\nyour lookups would have some locality. Just be aware that cluster is\na one shot deal...if there are changes to the table it will gradually\nfall out of order. Otherwise it would be an ordered table, which is\nwhat everyone wants...some of the big commercial databases have this.\n\nIn any case, you should still at least post an explain analyze to make\nsure something funky isn't going on. Barring that, I would be looking\nat some type of caching to optimize around the problem...perhaps look\nat the table design?\n\nmerlin\n",
"msg_date": "Mon, 3 Dec 2007 17:14:21 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dealing with big tables"
}
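A sketch of the CLUSTER approach Merlin suggests, with hypothetical names and the 8.2-era syntax (CLUSTER indexname ON tablename); note that the physical ordering decays as the table changes:

CREATE INDEX bigtable_from_idx ON bigtable ("From");
CLUSTER bigtable_from_idx ON bigtable;  -- rewrites the table in "From" order
ANALYZE bigtable;
-- Lookups by "From" now touch far fewer pages, but only one of the two access
-- paths ("From" or "To") can be favoured this way, and CLUSTER has to be re-run
-- periodically as rows are inserted or updated.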
]
[
{
"msg_contents": "Hi all,\n\nI'm busy evaluating PostgreSQL and I'm having performance problems on one of\nmy servers. I have a very simple one table database, and the client using\nMono 1.2.5.1 is running a loop doing INSERTs on the table. Initially I\ntested this on my development PC, an old P4 system with 2GB RAM and 10,000\nINSERTs took ~12 secs on average, which I was fairly satisfied with. I then\nmoved everything over to our test server, a new Dell 1950 server with quad\ncore Xeon processors, 4GB RAM and SCSI hdd expecting to see better\nperformance, but instead performance dropped to ~44 secs for 10,000 INSERTs.\nThis obviously is not acceptable. Both the PC and server are running the\nexact same PostgreSQL version, Mono version, client application and both\ntests were run under very low load and on an empty table. I noticed that CPU\nutilization on the Dell server is very low, 1-2% utilization, so obviously\nit's not a load problem. Only the test application is accessing the\ndatabase.\n\nSo my question is, can anyone please give me some tips on what commands or\ntools I can use to try and pin down where exactly the performance drop is\ncoming from? I'm obviously new to PostgreSQL so even basic checks can be\nrelevant.\n\nKind regards\n\nBeyers Cronje\n\nHi all,I'm busy evaluating PostgreSQL and I'm having performance problems on one of my servers. I have a very simple one table database, and the client using Mono 1.2.5.1 is running a loop doing INSERTs on the table. Initially I tested this on my development PC, an old P4 system with 2GB RAM and 10,000 INSERTs took ~12 secs on average, which I was fairly satisfied with. I then moved everything over to our test server, a new Dell 1950 server with quad core Xeon processors, 4GB RAM and SCSI hdd expecting to see better performance, but instead performance dropped to ~44 secs for 10,000 INSERTs. This obviously is not acceptable. Both the PC and server are running the exact same PostgreSQL version, Mono version, client application and both tests were run under very low load and on an empty table. I noticed that CPU utilization on the Dell server is very low, 1-2% utilization, so obviously it's not a load problem. Only the test application is accessing the database.\nSo my question is, can anyone please give me some tips on what commands or tools I can use to try and pin down where exactly the performance drop is coming from? I'm obviously new to PostgreSQL so even basic checks can be relevant.\nKind regardsBeyers Cronje",
"msg_date": "Sun, 2 Dec 2007 23:34:00 +0200",
"msg_from": "\"Beyers Cronje\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 8.2.5 slow performance on INSERT on Linux"
},
{
"msg_contents": "On 02/12/2007, Beyers Cronje <[email protected]> wrote:\n> Hi all,\n>\n> I'm busy evaluating PostgreSQL and I'm having performance problems on one of\n> my servers. I have a very simple one table database, and the client using\n> Mono 1.2.5.1 is running a loop doing INSERTs on the table. Initially I\n> tested this on my development PC, an old P4 system with 2GB RAM and 10,000\n> INSERTs took ~12 secs on average, which I was fairly satisfied with. I then\n> moved everything over to our test server, a new Dell 1950 server with quad\n> core Xeon processors, 4GB RAM and SCSI hdd expecting to see better\n> performance, but instead performance dropped to ~44 secs for 10,000 INSERTs.\n> This obviously is not acceptable. Both the PC and server are running the\n> exact same PostgreSQL version, Mono version, client application and both\n> tests were run under very low load and on an empty table. I noticed that CPU\n> utilization on the Dell server is very low, 1-2% utilization, so obviously\n> it's not a load problem. Only the test application is accessing the\n> database.\n>\n> So my question is, can anyone please give me some tips on what commands or\n> tools I can use to try and pin down where exactly the performance drop is\n> coming from? I'm obviously new to PostgreSQL so even basic checks can be\n> relevant.\n>\n> Kind regards\n>\n> Beyers Cronje\n>\n\nHello\n\na) use COPY instead INSERT (it's much faster) if it is possible\n\nb) check your configuration and read this article\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nRegards\nPavel Stehule\n",
"msg_date": "Sun, 2 Dec 2007 22:46:25 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 8.2.5 slow performance on INSERT on Linux"
},
{
"msg_contents": "On Sun, 2 Dec 2007, Beyers Cronje wrote:\n\n> Initially I tested this on my development PC, an old P4 system with 2GB \n> RAM and 10,000 INSERTs took ~12 secs on average, which I was fairly \n> satisfied with. I then moved everything over to our test server, a new \n> Dell 1950 server with quad core Xeon processors, 4GB RAM and SCSI hdd \n> expecting to see better performance, but instead performance dropped to \n> ~44 secs for 10,000 INSERTs.\n\nYour development system is probably running inexpensive IDE disks that \ncache writes, while the test server is not caching. If you loop over \nsingle inserts, PostgreSQL's default configuration will do a physical \ncommit to disk after every one of them, which limits performance to how \nfast the disk spins. If your server has 15K RPM drives, a single client \ncan commit at most 250 transactions per second to disk, which means 10,000 \ninserts done one at a time must take at least 40 seconds no matter how \nfast the server is.\n\nThere's a rambling discussion of this topic at \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm that \nshould fill in some background here.\n\nIf you use COPY instead of INSERT, that bypasses the WAL and you don't see \nthis. Also, if you adjust your loop to do multiple inserts as a single \ntransaction, that will change the behavior here as well.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sun, 2 Dec 2007 17:53:06 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 8.2.5 slow performance on INSERT on Linux"
},
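Two sketches of what Pavel and Greg recommend, against a hypothetical test_table (names and columns are assumptions):

-- Batch many INSERTs into one transaction so there is one WAL flush per batch,
-- not one per row.
BEGIN;
INSERT INTO test_table (id, val) VALUES (1, 'a');
INSERT INTO test_table (id, val) VALUES (2, 'b');
-- ... thousands more ...
COMMIT;

-- Or stream the rows in a single COPY, as typed in psql; tab-separated data
-- lines follow the command and the stream ends with a line containing only \.
COPY test_table (id, val) FROM STDIN;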
{
"msg_contents": ">\n> Your development system is probably running inexpensive IDE disks that\n> cache writes, while the test server is not caching. If you loop over\n> single inserts, PostgreSQL's default configuration will do a physical\n> commit to disk after every one of them, which limits performance to how\n> fast the disk spins. If your server has 15K RPM drives, a single client\n> can commit at most 250 transactions per second to disk, which means 10,000\n> inserts done one at a time must take at least 40 seconds no matter how\n> fast the server is.\n>\n> There's a rambling discussion of this topic at\n> http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm<http://www.westnet.com/%7Egsmith/content/postgresql/TuningPGWAL.htm>that\n> should fill in some background here.\n\n\nThis is exactly what is happening. Thank you for the above link, the article\nwas very informative.\n\nIf you use COPY instead of INSERT, that bypasses the WAL and you don't see\n> this. Also, if you adjust your loop to do multiple inserts as a single\n> transaction, that will change the behavior here as well.\n\n\nI will give COPY a go and see how it performs. For testing we specifically\nonly did one insert per transaction, we will obviously optimize the actual\napplication to do multiple insert per transaction where-ever possible.\n\nKind regards\n\nBeyers Cronje\n\nPS Thank you for the quick responses Greg and Pavel. It is always\nencouraging starting off with a new product and seeing there are people\npassionate about it.\n\nYour development system is probably running inexpensive IDE disks that\ncache writes, while the test server is not caching. If you loop oversingle inserts, PostgreSQL's default configuration will do a physicalcommit to disk after every one of them, which limits performance to how\nfast the disk spins. If your server has 15K RPM drives, a single clientcan commit at most 250 transactions per second to disk, which means 10,000inserts done one at a time must take at least 40 seconds no matter how\nfast the server is.There's a rambling discussion of this topic athttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\n thatshould fill in some background here.This is exactly what is happening. Thank you for the above link, the article was very informative. \nIf you use COPY instead of INSERT, that bypasses the WAL and you don't seethis. Also, if you adjust your loop to do multiple inserts as a singletransaction, that will change the behavior here as well.\nI will give COPY a go and see how it performs. For testing we specifically only did one insert per transaction, we will obviously optimize the actual application to do multiple insert per transaction where-ever possible.\nKind regardsBeyers CronjePS Thank you for the quick responses Greg and Pavel. It is always encouraging starting off with a new product and seeing there are people passionate about it.",
"msg_date": "Mon, 3 Dec 2007 01:32:21 +0200",
"msg_from": "\"Beyers Cronje\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 8.2.5 slow performance on INSERT on Linux"
}
]
[
{
"msg_contents": "I'd like to get confirmation that I'm correctly understanding the \ntimes given in EXPLAIN ANALYZE. Taking the example given in the Using \nExplain section of the docs,\n\nhttp://www.postgresql.org/docs/current/static/using-explain\n\nEXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < \n100 AND t1.unique2 = t2.unique2;\n\n QUERY PLAN\n------------------------------------------------------------------------ \n----------------------------------------------------------\nNested Loop (cost=2.37..553.11 rows=106 width=488) (actual \ntime=1.392..12.700 rows=100 loops=1)\n -> Bitmap Heap Scan on tenk1 t1 (cost=2.37..232.35 rows=106 \nwidth=244) (actual time=0.878..2.367 rows=100 loops=1)\n Recheck Cond: (unique1 < 100)\n -> Bitmap Index Scan on tenk1_unique1 (cost=0.00..2.37 \nrows=106 width=0) (actual time=0.546..0.546 rows=100 loops=1)\n Index Cond: (unique1 < 100)\n -> Index Scan using tenk2_unique2 on tenk2 t2 (cost=0.00..3.01 \nrows=1 width=244) (actual time=0.067..0.078 rows=1 loops=100)\n Index Cond: (\"outer\".unique2 = t2.unique2)\nTotal runtime: 14.452 ms\n\nI'm interested in figuring out what percentage of the total runtime \nis spent in each node. Here are my calculations.\n\nNested loop:\n actual time: 12.700 ms/loop * 1 loop = 12.700 ms\n percent of total runtime: 88%\n percent spent in subnodes: 16% + 54% = 70%\n percent spent in node: 18%\n\n Bitmap Heap Scan on tenk1:\n actual time: 2.367 ms/loop * 1 loop = 2.367 ms\n percent of total runtime: 16%\n time spent in subnodes: 4%\n time spent in node: 12%\n\n Bitmap Heap Scan on tenk1_unique1:\n actual time: 0.546 ms/loop * 1 loop = 0.546 ms: 4%\n time spent in subnodes: 0%\n time spent in node: 4%\n\n Index Scan total time:\n actual time: 0.078 ms/loop * 100 loops = 7.80 ms\n percent of total runtime: 54%\n percent spent in subnodes: 0%\n percent spent in node: 54%\n\nexecutor overhead: 14.452 ms - 12.700 ms = 1.752 ms: 12%\n\nIs this correct?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Sun, 2 Dec 2007 19:28:15 -0500",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN ANALYZE time calculations"
},
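The arithmetic Michael applies to each node can be summarized as follows (this restates his calculation rather than adding new information):

t_{total}(node) = t_{actual\,per\,loop} \times n_{loops}, \qquad t_{self}(node) = t_{total}(node) - \sum_{children} t_{total}(child), \qquad \%(node) = 100 \cdot t_{self}(node) / \text{Total runtime}

For example, the inner index scan contributes 0.078 ms \times 100 loops = 7.80 ms, i.e. roughly 54% of the 14.452 ms total.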
{
"msg_contents": "Michael Glaesemann <[email protected]> writes:\n> I'd like to get confirmation that I'm correctly understanding the \n> times given in EXPLAIN ANALYZE.\n> ...\n> Is this correct?\n\nLooks about right to me. Note that some of what you are calling\n\"executor overhead\" might also be classed as \"gettimeofday overhead\".\nThe measured difference between two successive gettimeofday readings\npresumably includes the actual userland runtime plus the equivalent\nof one gettimeofday call; but we actually did two calls. IOW the\nactual time to get in and out of a node is going to be a shade more\nthan is reported.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 02 Dec 2007 19:56:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE time calculations "
},
{
"msg_contents": "\nOn Dec 2, 2007, at 19:56 , Tom Lane wrote:\n\n> IOW the actual time to get in and out of a node is going to be a \n> shade more\n> than is reported.\n\nThanks, Tom. Should be close enough for jazz.\n\nWhen I was first going over the Using Explain section, I stumbled a \nbit on the startup time/total time/loops bit (which is why explain- \nanalyze.info times and percentages are currently miscalculated). I \ntook startup time to be the time to return the first row *of the \nfirst loop*. But it's actually the average startup time to return the \nfirst row *in each loop*, right?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Sun, 2 Dec 2007 20:06:07 -0500",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN ANALYZE time calculations "
},
{
"msg_contents": "Michael Glaesemann <[email protected]> writes:\n> I took startup time to be the time to return the first row *of the \n> first loop*. But it's actually the average startup time to return the \n> first row *in each loop*, right?\n\nCorrect, just as the total time and tuples returned are averages over all\nthe loops.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 02 Dec 2007 20:10:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE time calculations "
}
]
[
{
"msg_contents": "\nI have a question about how Postgres makes use of RAID arrays for\nperformance, because we are considering buying a 12-disc array for\nperformance reasons. I'm interested in how the performance scales with the\nnumber of discs in the array.\n\nNow, I know that for an OLTP workload (in other words, lots of small\nparallel random accesses), the performance will scale almost with the\nnumber of discs. However, I'm more interested in the performance of\nindividual queries, particularly those where Postgres has to do an index\nscan, which will result in a single query performing lots of random\naccesses to the disc system. Theoretically, this *can* scale with the\nnumber of discs too - my question is does it?\n\nDoes Postgres issue requests to each random access in turn, waiting for\neach one to complete before issuing the next request (in which case the\nperformance will not exceed that of a single disc), or does it use some\nclever asynchronous access method to send a queue of random access\nrequests to the OS that can be distributed among the available discs?\n\nAny knowledgable answers or benchmark proof would be appreciated,\n\nMatthew\n\n-- \n\"To err is human; to really louse things up requires root\n privileges.\" -- Alexander Pope, slightly paraphrased\n",
"msg_date": "Tue, 4 Dec 2007 12:23:02 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAID arrays and performance"
},
{
"msg_contents": "\"Matthew\" <[email protected]> writes:\n\n> Does Postgres issue requests to each random access in turn, waiting for\n> each one to complete before issuing the next request (in which case the\n> performance will not exceed that of a single disc), or does it use some\n> clever asynchronous access method to send a queue of random access\n> requests to the OS that can be distributed among the available discs?\n\nSorry, it does the former, at least currently.\n\nThat said, this doesn't really come up nearly as often as you might think.\nNormally queries fit mostly in either the large batch query domain or the\nsmall quick oltp query domain. For the former Postgres tries quite hard to do\nsequential i/o which the OS will do readahead for and you'll get good\nperformance. For the latter you're normally running many simultaneous such\nqueries and the raid array helps quite a bit.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n",
"msg_date": "Tue, 04 Dec 2007 12:40:32 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 4 Dec 2007, Gregory Stark wrote:\n> \"Matthew\" <[email protected]> writes:\n>\n> > Does Postgres issue requests to each random access in turn, waiting for\n> > each one to complete before issuing the next request (in which case the\n> > performance will not exceed that of a single disc), or does it use some\n> > clever asynchronous access method to send a queue of random access\n> > requests to the OS that can be distributed among the available discs?\n>\n> Sorry, it does the former, at least currently.\n>\n> That said, this doesn't really come up nearly as often as you might think.\n\nShame. It comes up a *lot* in my project. A while ago we converted a task\nthat processes a queue of objects to processing groups of a thousand\nobjects, which sped up the process considerably. So we run an awful lot of\nqueries with IN lists with a thousand values. They hit the indexes, then\nfetch the rows by random access. A full table sequential scan would take\nmuch longer. It'd be awfully nice to have those queries go twelve times\nfaster.\n\n> Normally queries fit mostly in either the large batch query domain or the\n> small quick oltp query domain. For the former Postgres tries quite hard to do\n> sequential i/o which the OS will do readahead for and you'll get good\n> performance. For the latter you're normally running many simultaneous such\n> queries and the raid array helps quite a bit.\n\nHaving twelve discs will certainly improve the sequential IO throughput!\n\nHowever, if this was implemented (and I have *no* idea whatsoever how hard\nit would be), then large index scans would scale with the number of discs\nin the system, which would be quite a win, I would imagine. Large index\nscans can't be that rare!\n\nMatthew\n\n-- \nSoftware suppliers are trying to make their software packages more\n'user-friendly'.... Their best approach, so far, has been to take all\nthe old brochures, and stamp the words, 'user-friendly' on the cover.\n-- Bill Gates\n",
"msg_date": "Tue, 4 Dec 2007 13:16:57 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Matthew wrote:\n> On Tue, 4 Dec 2007, Gregory Stark wrote:\n> \n>> \"Matthew\" <[email protected]> writes\n>>> Does Postgres issue requests to each random access in turn, waiting for\n>>> each one to complete before issuing the next request (in which case the\n>>> performance will not exceed that of a single disc), or does it use some\n>>> clever asynchronous access method to send a queue of random access\n>>> requests to the OS that can be distributed among the available discs?\n>>> \n>> Sorry, it does the former, at least currently.\n>> That said, this doesn't really come up nearly as often as you might think.\n>> \n> Shame. It comes up a *lot* in my project. A while ago we converted a task\n> that processes a queue of objects to processing groups of a thousand\n> objects, which sped up the process considerably. So we run an awful lot of\n> queries with IN lists with a thousand values. They hit the indexes, then\n> fetch the rows by random access. A full table sequential scan would take\n> much longer. It'd be awfully nice to have those queries go twelve times\n> faster.\n> \nThe bitmap scan method does ordered reads of the table, which can \npartially take advantage of sequential reads. Not sure whether bitmap \nscan is optimal for your situation or whether your situation would allow \nthis to be taken advantage of.\n\n>> Normally queries fit mostly in either the large batch query domain or the\n>> small quick oltp query domain. For the former Postgres tries quite hard to do\n>> sequential i/o which the OS will do readahead for and you'll get good\n>> performance. For the latter you're normally running many simultaneous such\n>> queries and the raid array helps quite a bit.\n>> \n> Having twelve discs will certainly improve the sequential IO throughput!\n>\n> However, if this was implemented (and I have *no* idea whatsoever how hard\n> it would be), then large index scans would scale with the number of discs\n> in the system, which would be quite a win, I would imagine. Large index\n> scans can't be that rare!\n> \nDo you know that there is a problem, or are you speculating about one? I \nthink your case would be far more compelling if you could show a \nproblem. :-)\n\nI would think that at a minimum, having 12 disks with RAID 0 or RAID 1+0 \nwould allow your insane queries to run concurrent with up to 12 other \nqueries. Unless your insane query is the only query in use on the \nsystem, I think you may be speculating about a nearly non-existence \nproblem. Just a suggestion...\n\nI recall talk of more intelligent table scanning algorithms, and the use \nof asynchronous I/O to benefit from RAID arrays, but the numbers \nprepared to convince people that the change would have effect have been \nless than impressive.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Tue, 04 Dec 2007 08:45:24 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
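Not part of the thread, but a quick way to check whether the planner already uses the bitmap scan Mark mentions for Matthew's IN-list pattern (table name and values are hypothetical):

EXPLAIN ANALYZE
SELECT * FROM objects WHERE id IN (1, 17, 42, 9001);  -- imagine ~1000 values here
-- A "Bitmap Heap Scan" node means the matching heap pages are visited in
-- physical order, which helps OS readahead a little even though the reads are
-- still issued one at a time.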
{
"msg_contents": "On Tue, 4 Dec 2007, Mark Mielke wrote:\n> The bitmap scan method does ordered reads of the table, which can\n> partially take advantage of sequential reads. Not sure whether bitmap\n> scan is optimal for your situation or whether your situation would allow\n> this to be taken advantage of.\n\nBitmap scan may help where several randomly-accessed pages are next to\neach other. However, I would expect the operating system's readahead and\nbuffer cache to take care of that one.\n\n> Do you know that there is a problem, or are you speculating about one? I\n> think your case would be far more compelling if you could show a\n> problem. :-)\n>\n> I would think that at a minimum, having 12 disks with RAID 0 or RAID 1+0\n> would allow your insane queries to run concurrent with up to 12 other\n> queries.\n\nYeah, I don't really care about concurrency. It's pretty obvious that\nrunning x concurrent queries on an x-disc RAID system will allow the\nutilisation of all the discs at once, therefore allowing the performance\nto scale by x. What I'm talking about is a single query running on an\nx-disc RAID array.\n\n> Unless your insane query is the only query in use on the system...\n\nThat's exactly the case.\n\n> I recall talk of more intelligent table scanning algorithms, and the use\n> of asynchronous I/O to benefit from RAID arrays, but the numbers\n> prepared to convince people that the change would have effect have been\n> less than impressive.\n\nI think a twelve-times speed increase is impressive. Granted, given\ngreatly-concurrent access, the benefits go away, but I think it'd be worth\nit for when there are few queries running on the system.\n\nI don't think you would have to create a more intelligent table scanning\nalgorithm. What you would need to do is take the results of the index,\nconvert that to a list of page fetches, then pass that list to the OS as\nan asynchronous \"please fetch all these into the buffer cache\" request,\nthen do the normal algorithm as is currently done. The requests would then\ncome out of the cache instead of from the disc. Of course, this is from a\nsimple Java programmer who doesn't know the OS interfaces for this sort of\nthing.\n\nMatthew\n\n-- \nHere we go - the Fairy Godmother redundancy proof.\n -- Computer Science Lecturer\n",
"msg_date": "Tue, 4 Dec 2007 14:11:25 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Matthew wrote:\n> On Tue, 4 Dec 2007, Mark Mielke wrote:\n> \n>> The bitmap scan method does ordered reads of the table, which can\n>> partially take advantage of sequential reads. Not sure whether bitmap\n>> scan is optimal for your situation or whether your situation would allow\n>> this to be taken advantage of.\n>> \n> Bitmap scan may help where several randomly-accessed pages are next to\n> each other. However, I would expect the operating system's readahead and\n> buffer cache to take care of that one.\n> \nThe disk head has less theoretical distance to travel if always moving \nin a single direction instead of randomly seeking back and forth.\n>> Do you know that there is a problem, or are you speculating about one? I\n>> think your case would be far more compelling if you could show a\n>> problem. :-)\n>>\n>> I would think that at a minimum, having 12 disks with RAID 0 or RAID 1+0\n>> would allow your insane queries to run concurrent with up to 12 other\n>> queries.\n>> \n> Yeah, I don't really care about concurrency. It's pretty obvious that\n> running x concurrent queries on an x-disc RAID system will allow the\n> utilisation of all the discs at once, therefore allowing the performance\n> to scale by x. What I'm talking about is a single query running on an\n> x-disc RAID array.\n> \nThe time to seek to a particular sector does not reduce 12X with 12 \ndisks. It is still approximately the same, only it can handle 12X the \nconcurrency. This makes RAID best for concurrent loads. In your \nscenario, you are trying to make a single query take advantage of this \nconcurrency by taking advantage of the system cache. I don't think full \n12X concurrency of a single query is possible for most loads, probably \nincluding yours. See later for details.\n>> I recall talk of more intelligent table scanning algorithms, and the use\n>> of asynchronous I/O to benefit from RAID arrays, but the numbers\n>> prepared to convince people that the change would have effect have been\n>> less than impressive.\n>> \n> I think a twelve-times speed increase is impressive. Granted, given\n> greatly-concurrent access, the benefits go away, but I think it'd be worth\n> it for when there are few queries running on the system.\n>\n> I don't think you would have to create a more intelligent table scanning\n> algorithm. What you would need to do is take the results of the index,\n> convert that to a list of page fetches, then pass that list to the OS as\n> an asynchronous \"please fetch all these into the buffer cache\" request,\n> then do the normal algorithm as is currently done. The requests would then\n> come out of the cache instead of from the disc. Of course, this is from a\n> simple Java programmer who doesn't know the OS interfaces for this sort of\n> thing.\n> \nThat's about how the talk went. :-)\n\nThe problem is that a 12X speed for 12 disks seems unlikely except under \nvery specific loads (such as a sequential scan of a single table). Each \nof the indexes may need to be scanned or searched in turn, then each of \nthe tables would need to be scanned or searched in turn, depending on \nthe query plan. There is no guarantee that the index rows or the table \nrows are equally spread across the 12 disks. CPU processing becomes \ninvolved with is currently limited to a single processor thread. 
I \nsuspect no database would achieve a 12X speedup for 12 disks unless a \nsimple sequential scan of a single table was required, in which case the \nreads could be fully parallelized with RAID 0 using standard sequential \nreads, and this is available today using built-in OS or disk read-ahead.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Tue, 04 Dec 2007 09:30:36 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "\n\"Mark Mielke\" <[email protected]> writes:\n\n> Matthew wrote:\n>\n>> I don't think you would have to create a more intelligent table scanning\n>> algorithm. What you would need to do is take the results of the index,\n>> convert that to a list of page fetches, then pass that list to the OS as\n>> an asynchronous \"please fetch all these into the buffer cache\" request,\n>> then do the normal algorithm as is currently done. The requests would then\n>> come out of the cache instead of from the disc. Of course, this is from a\n>> simple Java programmer who doesn't know the OS interfaces for this sort of\n>> thing.\n>\n> That's about how the talk went. :-)\n>\n> The problem is that a 12X speed for 12 disks seems unlikely except under very\n> specific loads (such as a sequential scan of a single table). Each of the\n> indexes may need to be scanned or searched in turn, then each of the tables\n> would need to be scanned or searched in turn, depending on the query plan.\n> There is no guarantee that the index rows or the table rows are equally spread\n> across the 12 disks. CPU processing becomes involved with is currently limited\n> to a single processor thread. I suspect no database would achieve a 12X speedup\n> for 12 disks unless a simple sequential scan of a single table was required, in\n> which case the reads could be fully parallelized with RAID 0 using standard\n> sequential reads, and this is available today using built-in OS or disk\n> read-ahead.\n\nI'm sure you would get something between 1x and 12x though...\n\nI'm rerunning my synthetic readahead tests now. That doesn't show the effect\nof the other cpu and i/o work being done in the meantime but surely if they're\nbeing evicted from cache too soon that just means your machine is starved for\ncache and you should add more RAM?\n\nAlso, it's true, you need to preread more than 12 blocks to handle a 12-disk\nraid. My offhand combinatorics analysis seems to indicate you would expect to\nneed to n(n-1)/2 blocks on average before you've hit all the blocks. There's\nlittle penalty to prereading unless you use up too much kernel resources or\nyou do unnecessary i/o which you never use, so I would expect doing n^2 capped\nat some reasonable number like 1,000 pages (enough to handle a 32-disk raid)\nwould be reasonable.\n\nThe real trick is avoiding doing prefetches that are never needed. The user\nmay never actually read all the tuples being requested. I think that means we\nshouldn't prefetch until the second tuple is read and then gradually increase\nthe prefetch distance as you read more and more of the results.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n",
"msg_date": "Tue, 04 Dec 2007 14:53:43 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
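The ramp-up Gregory describes - no prefetch until the second tuple is read, then a gradually increasing prefetch distance up to a cap - might look something like the sketch below. The names (prefetch_target, MAX_PREFETCH_DISTANCE) are illustrative, not PostgreSQL identifiers.

#define MAX_PREFETCH_DISTANCE 1000   /* cap, roughly enough for a 32-disk array */

static int prefetch_target = 0;      /* how many pages ahead to request */

static void
grow_prefetch_target(void)
{
    if (prefetch_target == 0)
        prefetch_target = 1;                       /* second tuple fetched */
    else if (prefetch_target * 2 <= MAX_PREFETCH_DISTANCE)
        prefetch_target *= 2;                      /* double each time */
    else
        prefetch_target = MAX_PREFETCH_DISTANCE;   /* stay at the cap */
}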
{
"msg_contents": "On Tue, 4 Dec 2007, Mark Mielke wrote:\n> The disk head has less theoretical distance to travel if always moving\n> in a single direction instead of randomly seeking back and forth.\n\nTrue... and false. The head can move pretty quickly, and it also has\nrotational latency and settling time to deal with. This means that there\nare cases where it is actually faster to move across the disc and read a\nblock, then move back and read a block than to read them in order.\n\nSo, if you hand requests one by one to the disc, it will almost always be\nfaster to order them. On the other hand, if you hand a huge long list of\nrequests to a decent SCSI or SATA-NCQ disc in one go, it will reorder the\nreads itself, and it will do it much better than you.\n\n> The time to seek to a particular sector does not reduce 12X with 12\n> disks. It is still approximately the same, only it can handle 12X the\n> concurrency. This makes RAID best for concurrent loads. In your\n> scenario, you are trying to make a single query take advantage of this\n> concurrency by taking advantage of the system cache.\n\nKind of. The system cache is just a method to make it simpler to explain -\nI don't know the operating system interfaces, but it's likely that the\nactual call is something more like \"transfer these blocks to these memory\nlocations and tell me when they're all finished.\" I'm trying to make a\nsingle query concurrent by using the knowledge of a *list* of accesses to\nbe made, and getting the operating system to do all of them concurrently.\n\n> The problem is that a 12X speed for 12 disks seems unlikely except under\n> very specific loads (such as a sequential scan of a single table).\n\nI'll grant you that 12X may not actually be reached, but it'll be\nsomewhere between 1X and 12X. I'm optimistic.\n\n> Each of the indexes may need to be scanned or searched in turn, then\n> each of the tables would need to be scanned or searched in turn,\n> depending on the query plan.\n\nYes, the indexes would also have to be accessed concurrently, and that\nwill undoubtedly be harder to code than accessing the tables concurrently.\n\n> There is no guarantee that the index rows or the table rows are equally\n> spread across the 12 disks.\n\nIndeed. However, you will get an advantage if they are spread out at all.\nStatistically, the larger the number of pages that need to be retrieved in\nthe set, the more equally spread between the discs they will be. It's the\ntimes when there are a large number of pages to retrieve that this will be\nmost useful.\n\n> CPU processing becomes involved with is currently limited to a single\n> processor thread.\n\nOn the contrary, this is a problem at the moment for sequential table\nscans, but would not be a problem for random accesses. If you have twelve\ndiscs all throwing 80MB/s at the CPU, it's understandable that the CPU\nwon't keep up. However, when you're making random accesses, with say a\n15,000 rpm disc, and retrieving a single 8k page on every access, each\ndisc will be producing a maximum of 2MB per second, which can be handled\nquite easily by modern CPUs. Index scans are limited by the disc, not the\nCPU.\n\nMathew\n\n-- \nA. Top Posters\n> Q. What's the most annoying thing in the world?\n",
"msg_date": "Tue, 4 Dec 2007 15:21:45 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 4 Dec 2007, Gregory Stark wrote:\n> Also, it's true, you need to preread more than 12 blocks to handle a 12-disk\n> raid. My offhand combinatorics analysis seems to indicate you would expect to\n> need to n(n-1)/2 blocks on average before you've hit all the blocks. There's\n> little penalty to prereading unless you use up too much kernel resources or\n> you do unnecessary i/o which you never use, so I would expect doing n^2 capped\n> at some reasonable number like 1,000 pages (enough to handle a 32-disk raid)\n> would be reasonable.\n\nIt's better than that, actually. Let's assume a RAID 0 striped set of\ntwelve discs. If you spread out twelve requests to twelve discs, then the\nexpected number of requests to each disc is one. The probablility that any\nsingle disc receives more than say three requests is rather small. As you\nincrease the number of requests, the longest reasonably-expected queue\nlength for a particular disc gets closer to the number of requests divided\nby the number of discs, as the requests get spread more and more evenly\namong the discs.\n\nThe larger the set of requests, the closer the performance will scale to\nthe number of discs.\n\nMatthew\n\n-- \nAll of this sounds mildly turgid and messy and confusing... but what the\nheck. That's what programming's all about, really\n -- Computer Science Lecturer\n",
"msg_date": "Tue, 4 Dec 2007 15:27:55 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
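A rough Monte-Carlo check of the claim above, under the simplifying assumption that requests land on a 12-way stripe uniformly and independently: estimate the chance that some disc receives more than three of twelve simultaneous requests. Real striping only approximates this model.

#include <stdio.h>
#include <stdlib.h>

#define DISKS  12
#define TRIALS 100000

int main(void)
{
    int over_three = 0;

    srand(12345);
    for (int t = 0; t < TRIALS; t++)
    {
        int queue[DISKS] = {0};
        int worst = 0;

        for (int r = 0; r < DISKS; r++)
        {
            int d = rand() % DISKS;          /* request hits a random disc */
            if (++queue[d] > worst)
                worst = queue[d];
        }
        if (worst > 3)
            over_three++;
    }
    printf("P(some disc gets > 3 of %d requests) ~ %.3f\n",
           DISKS, (double) over_three / TRIALS);
    return 0;
}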
{
"msg_contents": "\nFwiw, what made you bring up this topic now? You're the second person in about\ntwo days to bring up precisely this issue and it was an issue I had been\nplanning to bring up on -hackers as it was.\n\n\"Matthew\" <[email protected]> writes:\n\n> Kind of. The system cache is just a method to make it simpler to explain -\n> I don't know the operating system interfaces, but it's likely that the\n> actual call is something more like \"transfer these blocks to these memory\n> locations and tell me when they're all finished.\" I'm trying to make a\n> single query concurrent by using the knowledge of a *list* of accesses to\n> be made, and getting the operating system to do all of them concurrently.\n\nThere are two interfaces of which I'm aware of. posix_fadvise() which just\ntells the kernel you'll be needing the blocks soon. Linux at least initiates\nI/O on them into cache. libaio which lets you specify the location to read the\nblocks and how to notify you when they're ready.\n\nOn Linux posix_fadvise works great but libaio doesn't seem to gain any speed\nbonus, at least on 2.6.22 with glibc 2.6.1. I was under the impression there\nwas a kernel implementation somewhere but apparently it's not helping.\n\nOn Solaris apparently it doesn't have posix_fadvise but libaio works great. We\ncould use libaio as a kind of backdoor fadvise where we just initiate i/o on\nthe block but throw away the results assuming they'll stay in cache for the\nreal read or we could add an asynchronous interface to the buffer manager. The\nlatter is attractive but would be a much more invasive patch. I'm inclined to\nstart with the former.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n",
"msg_date": "Tue, 04 Dec 2007 15:40:24 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
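For reference, a minimal sketch of the posix_fadvise() approach described above: hint the kernel about every block the index says will be needed, then read them back through the normal synchronous path in the hope that they are already in cache. BLCKSZ and the helper names are illustrative stand-ins, not PostgreSQL code.

#define _XOPEN_SOURCE 600     /* for posix_fadvise() */
#include <fcntl.h>
#include <unistd.h>

#define BLCKSZ 8192

/* Hint the kernel to start reading a list of blocks (fire and forget). */
static void
prefetch_blocks(int fd, const long *blocks, int nblocks)
{
    for (int i = 0; i < nblocks; i++)
        (void) posix_fadvise(fd, (off_t) blocks[i] * BLCKSZ, BLCKSZ,
                             POSIX_FADV_WILLNEED);
}

/* The ordinary synchronous read; ideally the page is now in the OS cache. */
static ssize_t
read_block(int fd, long blkno, char *buf)
{
    return pread(fd, buf, BLCKSZ, (off_t) blkno * BLCKSZ);
}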
{
"msg_contents": "Matthew wrote:\n> On Tue, 4 Dec 2007, Gregory Stark wrote:\n> \n>> Also, it's true, you need to preread more than 12 blocks to handle a 12-disk\n>> raid. My offhand combinatorics analysis seems to indicate you would expect to\n>> need to n(n-1)/2 blocks on average before you've hit all the blocks. There's\n>> little penalty to prereading unless you use up too much kernel resources or\n>> you do unnecessary i/o which you never use, so I would expect doing n^2 capped\n>> at some reasonable number like 1,000 pages (enough to handle a 32-disk raid)\n>> would be reasonable.\n>> \n> It's better than that, actually. Let's assume a RAID 0 striped set of\n> twelve discs. If you spread out twelve requests to twelve discs, then the\n> expected number of requests to each disc is one. The probablility that any\n> single disc receives more than say three requests is rather small. As you\n> increase the number of requests, the longest reasonably-expected queue\n> length for a particular disc gets closer to the number of requests divided\n> by the number of discs, as the requests get spread more and more evenly\n> among the discs.\n>\n> The larger the set of requests, the closer the performance will scale to\n> the number of discs\n\nThis assumes that you can know which pages to fetch ahead of time - \nwhich you do not except for sequential read of a single table.\n\nI think it would be possible that your particular case could be up to 6X \nfaster, but that most other people will see little or no speed up. If it \ndoes read the wrong pages, it is wasting it's time.\n\nI am not trying to discourage you - only trying to ensure that you have \nreasonable expectations. 12X is far too optimistic.\n\nPlease show one of your query plans and how you as a person would design \nwhich pages to request reads for.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nMatthew wrote:\n\nOn Tue, 4 Dec 2007, Gregory Stark wrote:\n \n\nAlso, it's true, you need to preread more than 12 blocks to handle a 12-disk\nraid. My offhand combinatorics analysis seems to indicate you would expect to\nneed to n(n-1)/2 blocks on average before you've hit all the blocks. There's\nlittle penalty to prereading unless you use up too much kernel resources or\nyou do unnecessary i/o which you never use, so I would expect doing n^2 capped\nat some reasonable number like 1,000 pages (enough to handle a 32-disk raid)\nwould be reasonable.\n \n\nIt's better than that, actually. Let's assume a RAID 0 striped set of\ntwelve discs. If you spread out twelve requests to twelve discs, then the\nexpected number of requests to each disc is one. The probablility that any\nsingle disc receives more than say three requests is rather small. As you\nincrease the number of requests, the longest reasonably-expected queue\nlength for a particular disc gets closer to the number of requests divided\nby the number of discs, as the requests get spread more and more evenly\namong the discs.\n\nThe larger the set of requests, the closer the performance will scale to\nthe number of discs\n\n\nThis assumes that you can know which pages to fetch ahead of time -\nwhich you do not except for sequential read of a single table.\n\nI think it would be possible that your particular case could be up to\n6X faster, but that most other people will see little or no speed up.\nIf it does read the wrong pages, it is wasting it's time.\n\nI am not trying to discourage you - only trying to ensure that you have\nreasonable expectations. 
12X is far too optimistic.\n\nPlease show one of your query plans and how you as a person would\ndesign which pages to request reads for.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Tue, 04 Dec 2007 10:41:13 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "[email protected] wrote:\n> So, if you hand requests one by one to the disc, it will almost always be\n> faster to order them. On the other hand, if you hand a huge long list of\n> requests to a decent SCSI or SATA-NCQ disc in one go, it will reorder the\n> reads itself, and it will do it much better than you.\n>\n> \nSure - but this doesn't suggest threading so much as pushing all reads \ninto AIO as soon as they can\nbe identified - and hoping that your os has a decent AIO subsystem, \nwhich is sadly a tall order\nfor many *nix systems.\n\nI do think some thought should be given to threading but I would expect \nthe effect to be more\nnoticeable for updates where you update tables that have multiple \nindices. In the case of your\nscan then you need threading on CPU (rather than concurrent IO through \nAIO) if the disks\ncan feed you data faster than you can scan it. Which might be the case \nfor some scans\nusing user functions, but I wouldn't have thought it would be the case \nfor a sinple index scan.\n\nAt some point, hopefully, the engines will be:\n a) thread safe (if not thread hot) so it can exist with threaded user \nfunctions and embedded\n languages\n b) able to incorporate C++ add-in functionality\n\nThere may not be a pressing reason to support either of these, but \nhaving a capability to\nexperiment would surely be helpful and allow incremental enhancement - \nso baby steps\ncould be made to (for example) move stats and logging to a background \nthread, move\npush of results to clients out of the query evaluator thread, and so \non. Parallel threading\nqueries is a whle different ball game which needs thought in the optimiser.\n\nJames\n\n",
"msg_date": "Tue, 04 Dec 2007 15:50:12 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Mark Mielke wrote:\n> This assumes that you can know which pages to fetch ahead of time - \n> which you do not except for sequential read of a single table.\n>\nWhy doesn't it help to issue IO ahead-of-time requests when you are \nscanning an index? You can read-ahead\nin index pages, and submit requests for data pages as soon as it is \nclear you'll want them. Doing so can allow\nthe disks and OS to relax the order in which you receive them, which may \nallow you to process them while IO\ncontinues, and it may also optimise away some seeking and settle time. \nMaybe.\n\n",
"msg_date": "Tue, 04 Dec 2007 15:55:30 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 4 Dec 2007, Mark Mielke wrote:\n> > The larger the set of requests, the closer the performance will scale to\n> > the number of discs\n>\n> This assumes that you can know which pages to fetch ahead of time -\n> which you do not except for sequential read of a single table.\n\nThere are circumstances where it may be hard to locate all the pages ahead\nof time - that's probably when you're doing a nested loop join. However,\nif you're looking up in an index and get a thousand row hits in the index,\nthen there you go. Page locations to load.\n\n> Please show one of your query plans and how you as a person would design\n> which pages to request reads for.\n\nHow about the query that \"cluster <[email protected]>\" was trying to get\nto run faster a few days ago? Tom Lane wrote about it:\n\n| Wouldn't help, because the accesses to \"questions\" are not the problem.\n| The query's spending nearly all its time in the scan of \"posts\", and\n| I'm wondering why --- doesn't seem like it should take 6400msec to fetch\n| 646 rows, unless perhaps the data is just horribly misordered relative\n| to the index.\n\nWhich is exactly what's going on. The disc is having to seek 646 times\nfetching a single row each time, and that takes 6400ms. He obviously has a\nstandard 5,400 or 7,200 rpm drive with a seek time around 10ms.\n\nOr on a similar vein, fill a table with completely random values, say ten\nmillion rows with a column containing integer values ranging from zero to\nten thousand. Create an index on that column, analyse it. Then pick a\nnumber between zero and ten thousand, and\n\n\"SELECT * FROM table WHERE that_column = the_number_you_picked\"\n\nMatthew\n\n-- \nExperience is what allows you to recognise a mistake the second time you\nmake it.\n",
"msg_date": "Tue, 4 Dec 2007 16:00:37 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "James Mansion wrote:\n> Mark Mielke wrote:\n>> This assumes that you can know which pages to fetch ahead of time - \n>> which you do not except for sequential read of a single table.\n> Why doesn't it help to issue IO ahead-of-time requests when you are \n> scanning an index? You can read-ahead\n> in index pages, and submit requests for data pages as soon as it is \n> clear you'll want them. Doing so can allow\n> the disks and OS to relax the order in which you receive them, which \n> may allow you to process them while IO\n> continues, and it may also optimise away some seeking and settle \n> time. Maybe.\nSorry to be unclear. To achieve a massive speedup (12X for 12 disks with \nRAID 0) requires that you know what reads to perform in advance. The \nmoment you do not, you only have a starting point, your operations begin \nto serialize again. For example, you must scan the first index, to be \nable to know what table rows to read. At a minimum, this breaks your \nquery into: 1) Preload all the index pages you will need, 2) Scan the \nindex pages you needed, 3) Preload all the table page you will need, 4) \nScan the table pages you needed. But do you really need the whole index? \nWhat if you only need parts of the index, and this plan now reads the \nwhole index using async I/O \"just in case\" it is useful? Index is a \nB-Tree for a reason. In Matthew's case where he has an IN clause with \nthousands of possibles (I think?), perhaps a complete index scan is \nalways the best case - but that's only one use case, and in my opinion, \nan obscure one. As soon as additional table joins become involved, the \nchance that whole index scans are required would probably normally \nreduce, which turns the index scan into a regular B-Tree scan, which is \ndifficult to perform async I/O for, as you don't necessarily know which \npages to read next.\n\nIt seems like a valuable goal - but throwing imaginary numbers around \ndoes not appeal to me. I am more interested in Gregory's simulations. I \nwould like to understand his simulation better, and see his results. \nSpeculation about amazing potential is barely worth the words used to \nexpress it. The real work is in design and implementation. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Tue, 04 Dec 2007 11:06:42 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Matthew wrote:\n> On Tue, 4 Dec 2007, Mark Mielke wrote:\n> \n>>> The larger the set of requests, the closer the performance will scale to\n>>> the number of discs\n>>> \n>> This assumes that you can know which pages to fetch ahead of time -\n>> which you do not except for sequential read of a single table.\n>> \n> There are circumstances where it may be hard to locate all the pages ahead\n> of time - that's probably when you're doing a nested loop join. However,\n> if you're looking up in an index and get a thousand row hits in the index,\n> then there you go. Page locations to load.\n> \nSure.\n>> Please show one of your query plans and how you as a person would design\n>> which pages to request reads for.\n>> \n> How about the query that \"cluster <[email protected]>\" was trying to get\n> to run faster a few days ago? Tom Lane wrote about it:\n>\n> | Wouldn't help, because the accesses to \"questions\" are not the problem.\n> | The query's spending nearly all its time in the scan of \"posts\", and\n> | I'm wondering why --- doesn't seem like it should take 6400msec to fetch\n> | 646 rows, unless perhaps the data is just horribly misordered relative\n> | to the index.\n>\n> Which is exactly what's going on. The disc is having to seek 646 times\n> fetching a single row each time, and that takes 6400ms. He obviously has a\n> standard 5,400 or 7,200 rpm drive with a seek time around 10ms.\n> \nYour proposal would not necessarily improve his case unless he also \npurchased additional disks, at which point his execution time may be \ndifferent. More speculation. :-)\n\nIt seems reasonable - but still a guess.\n\n> Or on a similar vein, fill a table with completely random values, say ten\n> million rows with a column containing integer values ranging from zero to\n> ten thousand. Create an index on that column, analyse it. Then pick a\n> number between zero and ten thousand, and\n>\n> \"SELECT * FROM table WHERE that_column = the_number_you_picked\nThis isn't a real use case. Optimizing for the worst case scenario is \nnot always valuable.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nMatthew wrote:\n\nOn Tue, 4 Dec 2007, Mark Mielke wrote:\n \n\n\nThe larger the set of requests, the closer the performance will scale to\nthe number of discs\n \n\nThis assumes that you can know which pages to fetch ahead of time -\nwhich you do not except for sequential read of a single table.\n \n\nThere are circumstances where it may be hard to locate all the pages ahead\nof time - that's probably when you're doing a nested loop join. However,\nif you're looking up in an index and get a thousand row hits in the index,\nthen there you go. Page locations to load.\n \n\nSure.\n\n\n\nPlease show one of your query plans and how you as a person would design\nwhich pages to request reads for.\n \n\nHow about the query that \"cluster <[email protected]>\" was trying to get\nto run faster a few days ago? Tom Lane wrote about it:\n\n| Wouldn't help, because the accesses to \"questions\" are not the problem.\n| The query's spending nearly all its time in the scan of \"posts\", and\n| I'm wondering why --- doesn't seem like it should take 6400msec to fetch\n| 646 rows, unless perhaps the data is just horribly misordered relative\n| to the index.\n\nWhich is exactly what's going on. The disc is having to seek 646 times\nfetching a single row each time, and that takes 6400ms. 
He obviously has a\nstandard 5,400 or 7,200 rpm drive with a seek time around 10ms.\n \n\nYour proposal would not necessarily improve his case unless he also\npurchased additional disks, at which point his execution time may be\ndifferent. More speculation. :-)\n\nIt seems reasonable - but still a guess.\n\n\nOr on a similar vein, fill a table with completely random values, say ten\nmillion rows with a column containing integer values ranging from zero to\nten thousand. Create an index on that column, analyse it. Then pick a\nnumber between zero and ten thousand, and\n\n\"SELECT * FROM table WHERE that_column = the_number_you_picked\n\nThis isn't a real use case. Optimizing for the worst case scenario is\nnot always valuable.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Tue, 04 Dec 2007 11:11:28 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 4 Dec 2007, Gregory Stark wrote:\n> Fwiw, what made you bring up this topic now? You're the second person in about\n> two days to bring up precisely this issue and it was an issue I had been\n> planning to bring up on -hackers as it was.\n\nI only just joined the performance mailing list to talk about R-trees. I\nwould probably have brought it up earlier if I had been here earlier.\nHowever, we're thinking of buying this large machine, and that reminded\nme.\n\nI have been biting at the bit for my bosses to allow me time to write an\nindexing system for transient data - a lookup table backed by disc,\nlooking up from an integer to get an object, native in Java. Our system\noften needs to fetch a list of a thousand different objects by a key like\nthat, and Postgres just doesn't do that exact thing fast. I was going to\nimplement it with full asynchronous IO, to do that particular job very\nfast, so I have done a reasonable amount of research into the topic. In\nJava, that is. It would add a little bit more performance for our system.\nThat wouldn't cover us - we still need to do complex queries with the same\nproblem, and that'll have to stay in Postgres.\n\nMatthew\n\n-- \nThe early bird gets the worm. If you want something else for breakfast, get\nup later.\n",
"msg_date": "Tue, 4 Dec 2007 16:16:41 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Matthew wrote:\n> On Tue, 4 Dec 2007, Gregory Stark wrote:\n> \n>> Fwiw, what made you bring up this topic now? You're the second person in about\n>> two days to bring up precisely this issue and it was an issue I had been\n>> planning to bring up on -hackers as it was.\n>> \n> I only just joined the performance mailing list to talk about R-trees. I\n> would probably have brought it up earlier if I had been here earlier.\n> However, we're thinking of buying this large machine, and that reminded\n> me.\n>\n> I have been biting at the bit for my bosses to allow me time to write an\n> indexing system for transient data - a lookup table backed by disc,\n> looking up from an integer to get an object, native in Java. Our system\n> often needs to fetch a list of a thousand different objects by a key like\n> that, and Postgres just doesn't do that exact thing fast. I was going to\n> implement it with full asynchronous IO, to do that particular job very\n> fast, so I have done a reasonable amount of research into the topic. In\n> Java, that is. It would add a little bit more performance for our system.\n> That wouldn't cover us - we still need to do complex queries with the same\n> problem, and that'll have to stay in Postgres\nSo much excitement and zeal - refreshing to see. And yet, no numbers! :-)\n\nYou describe a new asynchronous I/O system to map integers to Java \nobjects above. Why would you write this? Have you tried BerkeleyDB or \nBerkeleyDB JE? BerkeleyDB with BDB as a backend or their full Java \nbackend gives you a Java persistence API that will allow you to map Java \nobjects (including integers) to other Java objects. They use generated \nJava run time instructions instead of reflection to store and lock your \nJava objects. If it came to a bet, I would bet that their research and \ntuning over several years, and many people, would beat your initial \nimplementation, asynchronous I/O or not.\n\nAsynchronous I/O is no more a magic bullet than threading. It requires a \nlot of work to get it right, and if one gets it wrong, it can be slower \nthan the regular I/O or single threaded scenarios. Both look sexy on \npaper, neither may be the solution to your problem. Or they may be. We \nwouldn't know without numbers.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nMatthew wrote:\n\nOn Tue, 4 Dec 2007, Gregory Stark wrote:\n \n\nFwiw, what made you bring up this topic now? You're the second person in about\ntwo days to bring up precisely this issue and it was an issue I had been\nplanning to bring up on -hackers as it was.\n \n\nI only just joined the performance mailing list to talk about R-trees. I\nwould probably have brought it up earlier if I had been here earlier.\nHowever, we're thinking of buying this large machine, and that reminded\nme.\n\nI have been biting at the bit for my bosses to allow me time to write an\nindexing system for transient data - a lookup table backed by disc,\nlooking up from an integer to get an object, native in Java. Our system\noften needs to fetch a list of a thousand different objects by a key like\nthat, and Postgres just doesn't do that exact thing fast. I was going to\nimplement it with full asynchronous IO, to do that particular job very\nfast, so I have done a reasonable amount of research into the topic. In\nJava, that is. 
It would add a little bit more performance for our system.\nThat wouldn't cover us - we still need to do complex queries with the same\nproblem, and that'll have to stay in Postgres\n\nSo much excitement and zeal - refreshing to see. And yet, no numbers!\n:-)\n\nYou describe a new asynchronous I/O system to map integers to Java\nobjects above. Why would you write this? Have you tried BerkeleyDB or\nBerkeleyDB JE? BerkeleyDB with BDB as a backend or their full Java\nbackend gives you a Java persistence API that will allow you to map\nJava objects (including integers) to other Java objects. They use\ngenerated Java run time instructions instead of reflection to store and\nlock your Java objects. If it came to a bet, I would bet that their\nresearch and tuning over\nseveral years, and many people, would beat your initial implementation,\nasynchronous I/O or not.\n\nAsynchronous I/O is no more a magic bullet than threading. It requires\na lot of work to get it right, and if one gets it wrong, it can be\nslower than the regular I/O or single threaded scenarios. Both look\nsexy on paper, neither may be the solution to your problem. Or they may\nbe. We wouldn't know without numbers.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Tue, 04 Dec 2007 11:32:14 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 4 Dec 2007, Mark Mielke wrote:\n> So much excitement and zeal - refreshing to see. And yet, no numbers! :-)\n\nWhat sort of numbers did you want to see?\n\n> You describe a new asynchronous I/O system to map integers to Java\n> objects above. Why would you write this? Have you tried BerkeleyDB or\n> BerkeleyDB JE? BerkeleyDB with BDB as a backend or their full Java\n> backend gives you a Java persistence API that will allow you to map Java\n> objects (including integers) to other Java objects.\n\nLooked at all those. Didn't like their performance characteristics, or\ntheir interfaces. It's simply the fact that they're not designed for the\nworkload that we want to put on them.\n\n> If it came to a bet, I would bet that their research and\n> tuning over several years, and many people, would beat your initial\n> implementation, asynchronous I/O or not.\n\nQuite possibly. However, there's the possibility that it wouldn't. And I\ncan improve it - initial implementations often suck.\n\n> Asynchronous I/O is no more a magic bullet than threading. It requires a\n> lot of work to get it right, and if one gets it wrong, it can be slower\n> than the regular I/O or single threaded scenarios. Both look sexy on\n> paper, neither may be the solution to your problem. Or they may be. We\n> wouldn't know without numbers.\n\nOkay, numbers. About eight years ago I wrote the shell of a filesystem\nimplementation, concentrating on performance and integrity. It absolutely\nwhooped ext2 for both read and write speed - especially metadata write\nspeed. Anything up to 60 times as fast. I wrote a load of metadata to\next2, which took 599.52 seconds, and on my system took 10.54 seconds.\nListing it back (presumably from cache) took 1.92 seconds on ext2 and 0.22\nseconds on my system. No silly caching tricks that sacrifice integrity.\n\nIt's a pity I got nowhere near finishing that system - just enough to\nprove my point and get a degree, but looking back on it there are massive\nways it's rubbish and should be improved. It was an initial\nimplementation. I didn't have reiserfs, jfs, or xfs available at that\ntime, but it would have been really interesting to compare. This is the\nsystem I would have based my indexing thing on.\n\nMatthew\n\n-- \nAnyone who goes to a psychiatrist ought to have his head examined.\n",
"msg_date": "Tue, 4 Dec 2007 17:03:13 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "\"Matthew\" <[email protected]> writes:\n\n> On Tue, 4 Dec 2007, Mark Mielke wrote:\n>> So much excitement and zeal - refreshing to see. And yet, no numbers! :-)\n>\n> What sort of numbers did you want to see?\n\nFWIW I posted some numbers from a synthetic case to pgsql-hackers\n\nhttp://archives.postgresql.org/pgsql-hackers/2007-12/msg00088.php\n\nVery promising\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n",
"msg_date": "Tue, 04 Dec 2007 19:28:42 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Mark Mielke wrote:\n> At a minimum, this breaks your query into: 1) Preload all the index \n> pages you will need\nIsn't this fairly predictable - the planner has chosen the index so it \nwill be operating\non a bounded subset.\n> , 2) Scan the index pages you needed\nYes, and AIO helps when you can scan them in arbitrary order, as they \nare returned.\n> , 3) Preload all the table page you will need\nNo - just request that they load. You can do work as soon as any page \nis returned.\n> , 4) Scan the table pages you needed.\nIn the order that is most naturally returned by the disks.\n\n> But do you really need the whole index? \nI don't think I suggested that.\n> What if you only need parts of the index, and this plan now reads the \n> whole index using async I/O \"just in case\" it is useful?\nWhere did you get that from?\n> index scan into a regular B-Tree scan, which is difficult to perform \n> async I/O for, as you don't necessarily know which pages to read next.\n>\nMost B-trees are not so deep. It would generally be a win to retain \ninterior nodes of indices in\nshared memory, even if leaf pages are not present. In such a case, it \nis quite quick to establish\nwhich leaf pages must be demand loaded.\n\nI'm not suggesting that Postgres indices are structured in a way that \nwould support this sort\nof thing now.\n\nJames\n\n",
"msg_date": "Tue, 04 Dec 2007 20:14:22 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
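A toy illustration of the point above, under the assumption that the interior level of an index is already cached as an in-memory array of (separator key, leaf block) pairs: a bounded scan can then pick out the candidate leaf pages without any I/O and hint them all at once. All structures and names here are hypothetical, not PostgreSQL's; the boundary test may hint one extra leaf, which is harmless for a prefetch.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <limits.h>
#include <unistd.h>

#define BLCKSZ 8192

typedef struct
{
    int  highkey;      /* highest key stored on this leaf */
    long leaf_blkno;   /* block number of the leaf page */
} CachedInteriorEntry;

/* Hint every leaf whose key range can overlap [lo_key, hi_key]. */
static void
prefetch_leaf_range(int fd, const CachedInteriorEntry *level, int nentries,
                    int lo_key, int hi_key)
{
    int prev_high = INT_MIN;

    for (int i = 0; i < nentries; i++)
    {
        if (level[i].highkey >= lo_key && prev_high <= hi_key)
            (void) posix_fadvise(fd, (off_t) level[i].leaf_blkno * BLCKSZ,
                                 BLCKSZ, POSIX_FADV_WILLNEED);
        prev_high = level[i].highkey;
    }
}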
{
"msg_contents": "James Mansion wrote:\n> Mark Mielke wrote:\n>> At a minimum, this breaks your query into: 1) Preload all the index \n>> pages you will need\n> Isn't this fairly predictable - the planner has chosen the index so it \n> will be operating\n> on a bounded subset.\nWhat is the bounded subset? It is bounded by the value. What value? You \nneed to access the first page before you know what the second page is. \nPostgreSQL or the kernel should already have the hottest pages in \nmemory, so the value of doing async I/O is very likely the cooler pages \nthat are unique to the query. We don't know what the cooler pages are \nuntil we follow three tree down.\n\n>> , 2) Scan the index pages you needed\n> Yes, and AIO helps when you can scan them in arbitrary order, as they \n> are returned.\n\nI don't think you are talking about searching a B-Tree, as the order is \nimportant when searching, and search performance would be reduced if one \nreads and scans more pages than necessary to map from the value to the \nrow. I presume you are talking about scanning the entire index. Where \n\"needed\" means \"all\". Again, this only benefits a subset of the queries.\n\n>> , 3) Preload all the table page you will need\n> No - just request that they load. You can do work as soon as any page \n> is returned.\n\nThe difference between preload and handling async I/O in terms of \nperformance is debatable. Greg reports that async I/O on Linux sucks, \nbut posix_fadvise*() has substantial benefits. posix_fadvise*() is \npreload not async I/O (he also reported that async I/O on Solaris seems \nto work well). Being able to do work as the first page is available is a \nmicro-optimization as far as I am concerned at this point (one that may \nnot yet work on Linux), as the real benefit comes from utilizing all 12 \ndisks in Matthew's case, not from guaranteeing that data is processed as \nsoon as possible.\n\n>> , 4) Scan the table pages you needed.\n> In the order that is most naturally returned by the disks.\n\nMicro-optimization.\n\n>> But do you really need the whole index? \n> I don't think I suggested that.\n>> What if you only need parts of the index, and this plan now reads the \n>> whole index using async I/O \"just in case\" it is useful?\n> Where did you get that from?\n\nI get it from your presumption that you can know which pages of the \nindex to load in advance. The only way you can know which pages must be \nloaded, is to know that you need to query them all. Unless you have some \nway of speculating with some degree of accuracy which colder pages in \nthe index you will need, without looking at the index?\n\n>> index scan into a regular B-Tree scan, which is difficult to perform \n>> async I/O for, as you don't necessarily know which pages to read next.\n> Most B-trees are not so deep. It would generally be a win to retain \n> interior nodes of indices in\n> shared memory, even if leaf pages are not present. In such a case, it \n> is quite quick to establish\n> which leaf pages must be demand loaded.\n\nThis is bogus. The less deep the B-Tree is, the less there should be any \nrequirement for async I/O. Hot index pages will be in cache.\n\n> I'm not suggesting that Postgres indices are structured in a way that \n> would support this sort\n> of thing now.\n\nIn your hand waving, you have assumed that PostgreSQL B-Tree index might \nneed to be replaced? :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Tue, 04 Dec 2007 17:14:55 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Mark Mielke wrote:\n> PostgreSQL or the kernel should already have the hottest pages in \n> memory, so the value of doing async I/O is very likely the cooler \n> pages that are unique to the query. We don't know what the cooler \n> pages are until we follow three tree down.\n>\nI'm assuming that at the time we start to search the index, we have some \nidea of value or values that\nwe are looking for. Or, as you say, we are applying a function to 'all \nof it'.\n\nThink of a 'between' query. The subset of the index that can be a match \ncan be bounded by the leaf\npages that contain the end point(s). Similarly if we have a merge with \na sorted intermediate set from\na prior step then we also have bounds on the values.\n\nI'm not convinced that your assertion that the index leaf pages must \nnecessarily be processed in-order\nis true - it depends what sort of operation is under way. I am assuming \nthat we try hard to keep\ninterior index nodes and data in meory and that having identified the \nsubset of these that we want, we\ncan immediately infer the set of leaves that are potentially of interest.\n> The difference between preload and handling async I/O in terms of \n> performance is debatable. Greg reports that async I/O on Linux sucks, \n> but posix_fadvise*() has substantial benefits. posix_fadvise*() is \n> preload not async I/O (he also reported that async I/O on Solaris \n> seems to work well). Being able to do work as the first page is \n> available is a micro-optimization as far as I am concerned at this \n> point (one that may not yet work on Linux), as the real benefit comes \n> from utilizing all 12 disks in Matthew's case, not from guaranteeing \n> that data is processed as soon as possible.\n>\nI see it as part of the same problem. You can partition the data across \nall the disks and run queries in parallel\nagainst the partitions, or you can lay out the data in the RAID array in \nwhich case the optimiser has very little idea\nhow the data will map to physical data layout - so its best bet is to \nlet the systems that DO know decide the\naccess strategy. And those systems can only do that if you give them a \nlot of requests that CAN be reordered,\nso they can choose a good ordering.\n\n> Micro-optimization.\n>\nWell, you like to assert this - but why? If a concern is the latency \n(and my experience suggests that latency is the\nbiggest issue in practice, not throughput per se) then overlapping \nprocessing while waiting for 'distant' data is\nimportant - and we don't have any information about the physical layout \nof the data that allows us to assert that\nforward access pre-read of data from a file is the right strategy for \naccessing it as fast as possible - we have to\nallow the OS (and to an increasing extent the disks) to manage the \nelevator IO to best effect. Its clear that the\nspeed of streaming read and write of modern disks is really high \ncompared to that of random access, so anything\nwe can do to help the disks run in that mode is pretty worthwhile even \nif the physical streaming doesn't match\nany obvious logical ordering of the OS files or logical data pages \nwithin them. If you have a function to apply to\na set of data elements and the elements are independant, then requiring \nthat the function is applied in an order\nrather than conceptually in parallel is going to put a lot of constraint \non how the hardware can optimise it.\n\nClearly a hint to preload is better than nothing. 
But it seems to me \nthat the worst case is that we wait for\nthe slowest page to load and then start processing hoping that the rest \nof the data stays in the buffer cache\nand is 'instant', while AIO and evaluate-when-ready means that process \nis still bound by the slowest\ndata to arrive, but at that point there is little processing still to \ndo, and the already-processed buffers can be\nreused earlier. In the case where there is significant pressure on the \nbuffer cache, this can be significant.\n\nOf course, a couple of decades bullying Sybase systems on Sun Enterprise \nboxes may have left me\nsomewhat jaundiced - but Sybase can at least parallelise things. \nSometimes. When it does, it's quite\na big win.\n\n> In your hand waving, you have assumed that PostgreSQL B-Tree index \n> might need to be replaced? :-)\n>\nSure, if the intent is to make the system thread-hot or AIO-hot, then \nthe change is potentially very\ninvasive. The strategy to evaluate queries based on parallel execution \nand async IO is not necessarily\nvery like a strategy where you delegate to the OS buffer cache.\n\nI'm not too bothered for the purpose of this discussion whether the way \nthat postgres currently\nnavigates indexes is amenable to this. This is bikeshed land, right?\n\nI think it is foolish to disregard strategies that will allow \noverlapping IO and processing - and we want to\nkeep disks reading and writing rather than seeking. To me that suggests \nAIO and disk-native queuing\nare quite a big deal. And parallel evaluation will be too, as the number \nof cores goes up and there is\nan expectation that this should reduce the latency of an individual query, not \njust allow throughput with lots\nof concurrent demand.\n\n",
"msg_date": "Tue, 04 Dec 2007 23:58:31 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "James Mansion wrote:\n> Mark Mielke wrote:\n>> PostgreSQL or the kernel should already have the hottest pages in \n>> memory, so the value of doing async I/O is very likely the cooler \n>> pages that are unique to the query. We don't know what the cooler \n>> pages are until we follow three tree down.\n>>\n> I'm assuming that at the time we start to search the index, we have \n> some idea of value or values that\n> we are looking for. Or, as you say, we are applying a function to \n> 'all of it'.\n>\n> Think of a 'between' query. The subset of the index that can be a \n> match can be bounded by the leaf\n> pages that contain the end point(s). Similarly if we have a merge \n> with a sorted intermediate set from\n> a prior step then we also have bounds on the values.\n\nHow do you find the bounding points for these pages? Does this not \nrequire descending through the tree in a more traditional way?\n\n> I'm not convinced that your assertion that the index leaf pages must \n> necessarily be processed in-order\n> is true - it depends what sort of operation is under way. I am \n> assuming that we try hard to keep\n> interior index nodes and data in meory and that having identified the \n> subset of these that we want, we\n> can immediately infer the set of leaves that are potentially of interest.\n\nIt is because you are missing my point. In order to find a reduced set \nof pages to load, one must descend through the B-Tree. Identifying the \nsubset requires sequential loading of pages. There is some theoretical \npotential for async I/O, but I doubt your own assertion that this is \nsignificant in any significant way. I ask you again - how do you know \nwhich lower level pages to load before you load the higher level pages? \nThe only time the B-Tree can be processed out of order in this regard is \nif you are doing a bitmap index scan or some similar operation that will \nscan the entire tree, and you do not care what order the pages arrive \nin. If you are looking for a specific key, one parent page leads to one \nchild page, and the operation is sequential.\n\n>> The difference between preload and handling async I/O in terms of \n>> performance is debatable. Greg reports that async I/O on Linux sucks, \n>> but posix_fadvise*() has substantial benefits. posix_fadvise*() is \n>> preload not async I/O (he also reported that async I/O on Solaris \n>> seems to work well). Being able to do work as the first page is \n>> available is a micro-optimization as far as I am concerned at this \n>> point (one that may not yet work on Linux), as the real benefit comes \n>> from utilizing all 12 disks in Matthew's case, not from guaranteeing \n>> that data is processed as soon as possible.\n> I see it as part of the same problem. You can partition the data \n> across all the disks and run queries in parallel\n> against the partitions, or you can lay out the data in the RAID array \n> in which case the optimiser has very little idea\n> how the data will map to physical data layout - so its best bet is to \n> let the systems that DO know decide the\n> access strategy. And those systems can only do that if you give them \n> a lot of requests that CAN be reordered,\n> so they can choose a good ordering.\n\nYou can see it however you like - what remains, is that the 12X speed up \nis going to come from using 12 disks, and loading the 12 disks in \nparallel. 
More theoretical improvements with regard to the ability for a \nparticular hard disk to schedule reads and return results out of order, \nhave not, in my reading, been shown to reliably improve performance to a \nsignificant degree. Unfortunately, Linux doesn't support my SATA NCQ \nyet, so I haven't been able to experiment myself. Gregory provided \nnumbers showing a 2X - 3X performance when using three disks. This has \nthe potential for significant improvement with real hardware, modest \ncost, and perhaps trivial changes to PostgreSQL. What you describe is a \nre-design of the I/O strategy that will be designed for asynchronous \nI/O, with some sort of state machine that will be able to deal \nefficiently with either index pages or table pages out of order. Do you \nhave numbers to show that such a significant change would result in a \nreasonable return on the investment?\n\n>> Micro-optimization.\n> Well, you like to assert this - but why?\nI'll quote from:\n\n http://en.wikipedia.org/wiki/Native_Command_Queuing\n\nMost reading I have done shows NCQ to have limited gains, with most \nbenchmarks being suspect. Also, latency is only for the first page. One \npresumes that asynch I/O will be mostly valuable in the case where many \npages can be scheduled for reading at the same time. In the case that \nmany pages are scheduled for reading, the first page will be eventually \nserved, and the overall bandwidth is still the same. In your theoretical \nmodel, you are presuming the CPU is a bottleneck, either for processing, \nor scheduling the next batch of reads. I think you are hand waving, and \ngiven that Linux doesn't yet support asynch I/O well, Gregory's model \nwill serve more PostgreSQL users than yours with less complexity.\n\n> Clearly a hint to preload is better than nothing. But it seems to me \n> that the worst case is that we wait for\n> the slowest page to load and then start processing hoping that the \n> rest of the data stays in the buffer cache\n> and is 'instant', while AIO and evaluate-when-ready means that process \n> is still bound by the slowest\n> data to arrive, but at that point there is little processing still to \n> do, and the already-processed buffers can be\n> reused earlier. In the case where there is significant presure on the \n> buffer cache, this can be significant.\n\nIt seems to me that you have yet to prove that there will be substantial \ngains in overall performance for preload. Leaping on to presuming that \nPostgreSQL should switch to a fully asynch I/O model seems a greater \nstretch. By the sounds of it, Gregory could have the first implemented \nvery soon. When will you have yours? :-)\n\n>> In your hand waving, you have assumed that PostgreSQL B-Tree index \n>> might need to be replaced? :-)\n> Sure, if the intent is to make the system thread-hot or AIO-hot, then \n> the change is potentially very\n> invasive. The strategy to evaluate queries based on parallel \n> execution and async IO is not necessarily\n> very like a strategy where you delegate to the OS buffer cache.\n>\n> I'm not too bothered for the urpose of this discussion whether the way \n> that postgres currently\n> navigates indexes is amenable to this. This is bikeshed land, right?\n\nI am only interested by juicy projects that have a hope of success. 
This \nsubject does interest me - I am hoping my devil's advocate participation \nencourages people to seek a practical implementation that will benefit me.\n\n> I think it is foolish to disregard strategies that will allow \n> overlapping IO and processing - and we want to\n> keep disks reading and writing rather than seeking. To me that \n> suggests AIO and disk-native queuing\n> are quite a big deal. And parallel evaluation will be too as the \n> number of cores goes up and there is\n> an expectation that this should reduce latency of individual query, \n> not just allow throughput with lots\n> of concurrent demand.\n\nI am more amenable to multi-threaded index processing for the same query \nthan async I/O to take advantage of NCQ. Guess we each come from a \ndifferent background. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Tue, 04 Dec 2007 19:55:03 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 4 Dec 2007, Mark Mielke wrote:\n\n>> This is bikeshed land, right?\n> I am only interested by juicy projects that have a hope of success. This \n> subject does interest me - I am hoping my devil's advocate participation \n> encourages people to seek a practical implementation that will benefit me.\n\nNah, you're both in bikeshed land. Ultimately this feature is going to \nget built out in a way that prioritizes as much portability as is possible \nwhile minimizing the impact on existing code. The stated PostgreSQL bias \nis that async-I/O is not worth the trouble until proven otherwise: \nhttp://www.postgresql.org/docs/faqs.FAQ_DEV.html#item1.14\n\nThat's why Greg Stark is busy with low-level tests of I/O options while \nyou're arguing much higher level details. Until you've got some \nbenchmarks in this specific area to go by, you can talk theory all day and \nnot impact what implementation will actually get built one bit.\n\nAs an example of an area you've already brought up where theory and \nbenchmarks clash violently, the actual situation with NCQ on SATA drives \nhas been heavily blurred because of how shoddy some manufacturer's NCQ \nimplementation are. Take a look at the interesting thread on this topic \nat http://lkml.org/lkml/2007/4/3/159 to see what I'm talking about. Even \nif you have an NCQ drive, and a version of Linux that supports it, and \nyou've setup everything up right, you can still end up with unexpected \nperformance regressions under some circumstances. It's still the wild \nwest with that technology.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 5 Dec 2007 01:47:04 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Mark Mielke wrote:\n> Asynchronous I/O is no more a magic bullet than threading. It requires a \n> lot of work to get it right, and if one gets it wrong, it can be slower \n> than the regular I/O or single threaded scenarios. Both look sexy on \n> paper, neither may be the solution to your problem. Or they may be. We \n> wouldn't know without numbers.\n\nAgreed. We currently don't use multiple CPUs or multiple disks\nefficiently for single-query loads. There is certainly more we could do\nin these areas, and it is on the TODO list.\n\nThe good news is that most work loads are multi-user and we use\nresources more evenly in those cases.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Sun, 16 Dec 2007 01:39:48 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "\"Matthew\" <[email protected]> writes:\n\n> On Tue, 4 Dec 2007, Gregory Stark wrote:\n>> FWIW I posted some numbers from a synthetic case to pgsql-hackers\n>>\n>> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00088.php\n>...\n> This was with 8192 random requests of size 8192 bytes from an 80GB test file.\n> Unsorted requests ranged from 1.8 MB/s with no prefetching to 28MB/s with lots\n> of prefetching. Sorted requests went from 2.4MB/s to 38MB/s. That's almost\n> exactly 16x improvement for both, and this is top of the line hardware. \n\nNeat. The curves look very similar to mine. I also like that with your\nhardware the benefit maxes out at pretty much exactly where I had\nmathematically predicted they would ((stripe size)^2 / 2).\n\n> So, this is FYI, and also an added encouragement to implement fadvise\n> prefetching in some form or another. How's that going by the way?\n\nI have a patch which implements it for the low hanging fruit of bitmap index\nscans. it does it using an extra trip through the buffer manager which is the\nleast invasive approach but not necessarily the best.\n\nHeikki was pushing me to look at changing the buffer manager to support doing\nasynchronous buffer reads. In that scheme you would issue a ReadBufferAsync\nwhich would give you back a pinned buffer which the scan would have to hold\nonto and call a new buffer manager call to complete the i/o when it actually\nneeded the buffer. The ReadBufferAsync call would put the buffer in some form\nof i/o in progress.\n\nOn systems with only posix_fadvise ReadBufferAsync would issue posix_fadvise\nand ReadBufferFinish would issue a regular read. On systems with an effective\nlibaio the ReadBufferAsync could issue the aio operation and ReadBufferFinish\ncould then do a aio_return.\n\nThe pros of such an approach would be less locking and cpu overhead in the\nbuffer manager since the second trip would have the buffer handle handy and\njust have to issue the read.\n\nThe con of such an approach would be the extra shared buffers occupied by\nbuffers which aren't really needed yet. Also the additional complexity in the\nbuffer manager with the new i/o in progress state. (I'm not sure but I don't\nthink we get to reuse the existing i/o in progress state.)\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n",
"msg_date": "Tue, 29 Jan 2008 14:55:39 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
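A toy, file-level model of the two-step API sketched above, just to make its shape concrete. ReadBufferAsync and ReadBufferFinish are the proposed names from the message; everything else (the PendingRead struct, working against a raw file descriptor rather than the shared buffer manager) is an illustrative stand-in. On a posix_fadvise-only platform the "async" step is merely a kernel hint and the "finish" step is an ordinary read that should then hit the OS cache; with a working libaio the two calls would map to aio_read/aio_return instead.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

#define BLCKSZ 8192

typedef struct
{
    int   fd;
    off_t offset;
    char  page[BLCKSZ];
} PendingRead;

/* Start the read: here just a posix_fadvise() hint against the file. */
static void
ReadBufferAsync(PendingRead *p, int fd, long blkno)
{
    p->fd = fd;
    p->offset = (off_t) blkno * BLCKSZ;
    (void) posix_fadvise(fd, p->offset, BLCKSZ, POSIX_FADV_WILLNEED);
}

/* Complete the read when the page is actually needed. */
static ssize_t
ReadBufferFinish(PendingRead *p)
{
    return pread(p->fd, p->page, BLCKSZ, p->offset);
}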
{
"msg_contents": "On Tue, 29 Jan 2008, Gregory Stark wrote:\n>> This was with 8192 random requests of size 8192 bytes from an 80GB test file.\n>> Unsorted requests ranged from 1.8 MB/s with no prefetching to 28MB/s with lots\n>> of prefetching. Sorted requests went from 2.4MB/s to 38MB/s. That's almost\n>> exactly 16x improvement for both, and this is top of the line hardware.\n>\n> Neat. The curves look very similar to mine. I also like that with your\n> hardware the benefit maxes out at pretty much exactly where I had\n> mathematically predicted they would ((stripe size)^2 / 2).\n\nWhy would that be the case? Does that mean that we can select a stripe \nsize of 100GB and get massive performance improvements? Doesn't seem \nlogical to me. To me, it maxes out at 16x speed because there are 16 \ndiscs.\n\nAmusingly, there appears to be a spam filter preventing my message (with \nits image) getting through to the performance mailing list.\n\nMatthew\n",
"msg_date": "Tue, 29 Jan 2008 15:09:03 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "\n\"Matthew\" <[email protected]> writes:\n\n> On Tue, 29 Jan 2008, Gregory Stark wrote:\n>>> This was with 8192 random requests of size 8192 bytes from an 80GB test file.\n>>> Unsorted requests ranged from 1.8 MB/s with no prefetching to 28MB/s with lots\n>>> of prefetching. Sorted requests went from 2.4MB/s to 38MB/s. That's almost\n>>> exactly 16x improvement for both, and this is top of the line hardware.\n>>\n>> Neat. The curves look very similar to mine. I also like that with your\n>> hardware the benefit maxes out at pretty much exactly where I had\n>> mathematically predicted they would ((stripe size)^2 / 2).\n>\n> Why would that be the case? Does that mean that we can select a stripe size of\n> 100GB and get massive performance improvements? Doesn't seem logical to me. To\n> me, it maxes out at 16x speed because there are 16 discs.\n\nSorry, I meant \"number of drives in the array\" not number of bytes. So with 16\ndrives you would need approximately 128 random pending i/o operations to\nexpect all drives to be busy at all times.\n\nI got this from a back-of-the-envelope calculation which now that I'm trying\nto reproduce it seems to be wrong. Previously I thought it was n(n+1)/2 or\nabout n^2/2. So at 16 I would have expected about 128 pending i/o requests\nbefore all the drives could be expected to be busy.\n\nNow that I'm working it out more carefully I'm getting that the expected\nnumber of pending i/o requests before all drives are busy is\n n + n/2 + n/3 + ... + n/n\n\nwhich is actually n * H(n) which is approximated closely by n * log(n).\n\nThat would predict that 16 drives would actually max out at 44.4 pending i/o\nrequests. It would predict that my three-drive array would max out well below\nthat at 7.7 pending i/o requests. Empirically neither result seems to match\nreality. Other factors must be dominating.\n\n> Amusingly, there appears to be a spam filter preventing my message (with its\n> image) getting through to the performance mailing list.\n\nThis has been plaguing us for a while. When we figure out who's misconfigured\nsystem is doing it I expect they'll be banned from the internet for life!\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n",
"msg_date": "Tue, 29 Jan 2008 15:52:20 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": ">>> On Tue, Jan 29, 2008 at 9:52 AM, in message\n<[email protected]>, Gregory Stark <[email protected]>\nwrote: \n \n> I got this from a back-of-the-envelope calculation which now that I'm trying\n> to reproduce it seems to be wrong. Previously I thought it was n(n+1)/2 or\n> about n^2/2. So at 16 I would have expected about 128 pending i/o requests\n> before all the drives could be expected to be busy.\n \nThat seems right to me, based on the probabilities of any new\nrequest hitting an already-busy drive.\n \n> Now that I'm working it out more carefully I'm getting that the expected\n> number of pending i/o requests before all drives are busy is\n> n + n/2 + n/3 + ... + n/n\n \nWhat's the basis for that?\n \n-Kevin\n \n\n\n",
"msg_date": "Tue, 29 Jan 2008 10:23:22 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n\n> >>> On Tue, Jan 29, 2008 at 9:52 AM, in message\n> <[email protected]>, Gregory Stark <[email protected]>\n> wrote: \n> \n> > I got this from a back-of-the-envelope calculation which now that I'm trying\n> > to reproduce it seems to be wrong. Previously I thought it was n(n+1)/2 or\n> > about n^2/2. So at 16 I would have expected about 128 pending i/o requests\n> > before all the drives could be expected to be busy.\n> \n> That seems right to me, based on the probabilities of any new\n> request hitting an already-busy drive.\n> \n> > Now that I'm working it out more carefully I'm getting that the expected\n> > number of pending i/o requests before all drives are busy is\n> > n + n/2 + n/3 + ... + n/n\n> \n> What's the basis for that?\n\nWell consider when you've reached n-1 drives; the expected number of requests\nbefore you hit the 1 idle drive remaining out of n would be n requests. When\nyou're at n-2 the expected number of requests before you hit either of the two\nidle drives would be n/2. And so on. The last term of n/n would be the first\ni/o when all the drives are idle and you obviously only need one i/o to hit an\nidle drive.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "29 Jan 2008 11:45:20 -0500",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
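A worked version of the expectation derived above, assuming (as in the messages) that each request lands on one of the n drives uniformly at random and independently of the others:

    E[\text{requests until all } n \text{ drives are busy}]
      \;=\; \sum_{k=1}^{n} \frac{n}{k}
      \;=\; n \, H_n
      \;\approx\; n \ln n

For n = 16 the exact harmonic sum gives 16 x H_16, roughly 16 x 3.381 = 54.1 pending requests, while the logarithmic approximation gives 16 ln 16 = 44.4, which appears to be where the 44.4 figure quoted upthread comes from. Either way this is an idealised estimate; controller reordering and uneven request distribution can move the real knee of the curve, which may be part of why neither array in the thread matches the prediction exactly.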
{
"msg_contents": ">>> On Tue, Jan 29, 2008 at 10:45 AM, in message\n<[email protected]>, Gregory Stark <[email protected]> wrote: \n \n> Well consider when you've reached n-1 drives; the expected number of requests\n> before you hit the 1 idle drive remaining out of n would be n requests. When\n> you're at n-2 the expected number of requests before you hit either of the \n> two\n> idle drives would be n/2. And so on. The last term of n/n would be the first\n> i/o when all the drives are idle and you obviously only need one i/o to hit \n> an\n> idle drive.\n \nYou're right. Perhaps the reason more requests continue to improve\nperformance is that a smart controller will move across the tracks\nand satisfy the pending requests in the most efficient order?\n \n-Kevin\n \n\n\n",
"msg_date": "Tue, 29 Jan 2008 11:13:54 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Tue, 29 Jan 2008, Gregory Stark wrote:\n>> So, this is FYI, and also an added encouragement to implement fadvise\n>> prefetching in some form or another. How's that going by the way?\n>\n> I have a patch which implements it for the low hanging fruit of bitmap index\n> scans. it does it using an extra trip through the buffer manager which is the\n> least invasive approach but not necessarily the best.\n\nGregory - what's the status of that patch at the moment? Will it be making \nit into a new version of Postgres, or are we waiting for it to be \nimplemented fully?\n\nIt's just that our system is doing a lot of bitmap index scans at the \nmoment, and it'd help to be able to spread them across the 16 discs in \nthe RAID array. It's the bottleneck in our system at the moment.\n\nMatthew\n\n-- \nThe email of the species is more deadly than the mail.\n",
"msg_date": "Thu, 18 Sep 2008 14:21:46 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Thu, 18 Sep 2008, Matthew Wakeling wrote:\n\n> On Tue, 29 Jan 2008, Gregory Stark wrote:\n\n>> I have a patch which implements it for the low hanging fruit of bitmap \n>> index scans. it does it using an extra trip through the buffer manager \n>> which is the least invasive approach but not necessarily the best.\n>\n> Gregory - what's the status of that patch at the moment? Will it be making it \n> into a new version of Postgres, or are we waiting for it to be implemented \n> fully?\n\nIt and a related fadvise patch have been floating around the project queue \nfor a while now. I just sorted through all the various patches and \nmesasges related to this area and updated the list at \nhttp://wiki.postgresql.org/wiki/CommitFestInProgress#Pending_patches \nrecently, I'm trying to kick back a broader reviewed version of this \nconcept right now.\n\n> It's just that our system is doing a lot of bitmap index scans at the moment, \n> and it'd help to be able to spread them across the 16 discs in the RAID \n> array. It's the bottleneck in our system at the moment.\n\nIf you have some specific bitmap index scan test case suggestions you can \npass along (either publicly or in private to me, I can probably help \nanonymize them), that's one of the things that has been holding this up. \nAlternately, if you'd like to join in on testing this all out more help \nwould certainly be welcome.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 18 Sep 2008 15:30:00 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Thu, Sep 18, 2008 at 1:30 PM, Greg Smith <[email protected]> wrote:\n> If you have some specific bitmap index scan test case suggestions you can\n> pass along (either publicly or in private to me, I can probably help\n> anonymize them), that's one of the things that has been holding this up.\n> Alternately, if you'd like to join in on testing this all out more help\n> would certainly be welcome.\n\nI posted in pgsql-perform about a problem that's using bitmap heap\nscans that's really slow compared to just using nested-loops. Don't\nknow if that is relevant or not.\n",
"msg_date": "Thu, 18 Sep 2008 13:33:55 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "On Thu, 18 Sep 2008, Greg Smith wrote:\n>> It's just that our system is doing a lot of bitmap index scans at the \n>> moment, and it'd help to be able to spread them across the 16 discs in the \n>> RAID array. It's the bottleneck in our system at the moment.\n>\n> If you have some specific bitmap index scan test case suggestions you can \n> pass along (either publicly or in private to me, I can probably help \n> anonymize them), that's one of the things that has been holding this up.\n\nOkay, here's a description of what we're doing. We are importing data from \na large file (so we don't have a choice on the order that the data comes \nin). For each entry in the source, we have to look up the corresponding \nrow in the database, and issue the correct \"UPDATE\" command according to \nwhat's in the database and what's in the data source. Long story - it \nisn't feasible to change the overall process.\n\nIn order to improve the performance, I made the system look ahead in the \nsource, in groups of a thousand entries, so instead of running:\n\nSELECT * FROM table WHERE field = 'something';\n\na thousand times, we now run:\n\nSELECT * FROM table WHERE field IN ('something', 'something else'...);\n\nwith a thousand things in the IN. Very simple query. It does run faster \nthan the individual queries, but it still takes quite a while. Here is an \nexample query:\n\nSELECT a1_.id AS a1_id, a1_.primaryIdentifier AS a2_ FROM Gene AS a1_ \nWHERE a1_.primaryIdentifier IN ('SPAC11D3.15', 'SPAC11D3.16c', \n'SPAC11D3.17', 'SPAC11D3.18c', 'SPAC15F9.01c', 'SPAC15F9.02', 'SPAC16.01', \n'SPAC18G6.01c', 'SPAC18G6.02c', 'SPAC18G6.04c', 'SPAC18G6.05c', \n'SPAC18G6.06', 'SPAC18G6.07c', 'SPAC18G6.09c', 'SPAC18G6.10', \n'SPAC18G6.11c', 'SPAC18G6.12c', 'SPAC18G6.13', 'SPAC18G6.14c', \n'SPAC18G6.15', 'SPAC1B9.01c', 'SPAC1D4.02c', 'SPAC1D4.03c', 'SPAC1D4.04', \n'SPAC1D4.05c', 'SPAC1D4.07c', 'SPAC1D4.08', 'SPAC1D4.09c', 'SPAC1D4.10', \n'SPAC1D4.11c', 'SPAC1F3.11', 'SPAC23A1.10', 'SPAC23E2.01', 'SPAC23E2.02', \n'SPAC23E2.03c', 'SPAC26A3.02', 'SPAC26A3.03c', 'SPAC26A3.05', \n'SPAC26A3.06', 'SPAC26A3.07c', 'SPAC26A3.08', 'SPAC26A3.09c', \n'SPAC26A3.10', 'SPAC26A3.11', 'SPAC26A3.14c', 'SPAC26A3.15c', \n'SPAC26A3.16', 'SPAC27F1.01c', 'SPAC27F1.03c', 'SPAC27F1.04c', \n'SPAC27F1.05c', 'SPAC27F1.06c', 'SPAC3H8.02', 'SPAC3H8.03', 'SPAC3H8.04', \n'SPAC3H8.05c', 'SPAC3H8.06', 'SPAC3H8.07c', 'SPAC3H8.08c', 'SPAC3H8.09c', \n'SPAC3H8.10', 'SPAC3H8.11', 'SPAC8E11.11', 'SPBC106.15', 'SPBC17G9.10', \n'SPBC24E9.15c', 'WBGene00000969', 'WBGene00003035', 'WBGene00004095', \n'WBGene00016011', 'WBGene00018672', 'WBGene00018674', 'WBGene00018675', \n'WBGene00018676', 'WBGene00018959', 'WBGene00018960', 'WBGene00018961', \n'WBGene00023407') ORDER BY a1_.id LIMIT 2000;\n\nAnd the corresponding EXPLAIN ANALYSE:\n\n Limit (cost=331.28..331.47 rows=77 width=17) (actual time=121.973..122.501 rows=78 loops=1)\n -> Sort (cost=331.28..331.47 rows=77 width=17) (actual time=121.968..122.152 rows=78 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 29kB\n -> Bitmap Heap Scan on gene a1_ (cost=174.24..328.87 rows=77 width=17) (actual time=114.311..121.705 rows=78 loops=1)\n Recheck Cond: (primaryidentifier = ANY ('{SPAC11D3.15...\n -> Bitmap Index Scan on gene__key_primaryidentifier (cost=0.00..174.22 rows=77 width=0) (actual time=44.434..44.434 rows=150 loops=1)\n Index Cond: (primaryidentifier = ANY ('{SPAC11D3.15,SPAC11D3.16c...\n Total runtime: 122.733 ms\n(9 rows)\n\nAlthough it's probably in the cache, as it took 1073 ms 
the first time. \nThe table has half a million rows, but tables all over the database are \nbeing accessed, so the cache is shared between several hundred million \nrows.\n\nPostgres executes this query in two stages. First it does a trawl of the \nindex (on field), and builds an bitmap. Then it fetches the pages \naccording to the bitmap. I can see the second stage being quite easy to \nadapt for fadvise, but the first stage would be a little more tricky. Both \nstages are equally important, as they take a comparable amount of time.\n\nWe are running this database on a 16-spindle RAID array, so the benefits \nto our process of fully utilising them would be quite large. I'm \nconsidering if I can parallelise things a little though.\n\n> Alternately, if you'd like to join in on testing this all out more help would \n> certainly be welcome.\n\nHow would you like me to help?\n\nMatthew\n\n-- \nWhat goes up must come down. Ask any system administrator.\n",
"msg_date": "Fri, 19 Sep 2008 15:59:12 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
},
{
"msg_contents": "Matthew Wakeling <[email protected]> writes:\n> In order to improve the performance, I made the system look ahead in the \n> source, in groups of a thousand entries, so instead of running:\n> SELECT * FROM table WHERE field = 'something';\n> a thousand times, we now run:\n> SELECT * FROM table WHERE field IN ('something', 'something else'...);\n> with a thousand things in the IN. Very simple query. It does run faster \n> than the individual queries, but it still takes quite a while. Here is an \n> example query:\n\nYour example shows the IN-list as being sorted, but I wonder whether you\nactually are sorting the items in practice? If not, you might try that\nto improve locality of access to the index.\n\nAlso, parsing/planning time could be part of your problem here with 1000\nthings to look at. Can you adjust your client code to use a prepared\nquery? I'd try\n\tSELECT * FROM table WHERE field = ANY($1::text[])\n(or whatever the field datatype actually is) and then push the list\nover as a single parameter value using array syntax. You might find\nthat it scales to much larger IN-lists that way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Sep 2008 11:09:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance "
},
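A minimal sketch of the prepared-statement form Tom suggests, written against the Gene table from the example upthread (the statement name is made up; the column names, identifiers, and LIMIT are taken from that example query):

    PREPARE gene_lookup(text[]) AS
        SELECT id, primaryIdentifier
        FROM Gene
        WHERE primaryIdentifier = ANY($1)
        ORDER BY id
        LIMIT 2000;

    EXECUTE gene_lookup(ARRAY['SPAC11D3.15', 'SPAC11D3.16c', 'SPAC1D4.02c']);

Because the whole list travels as a single array parameter, the statement is parsed and planned once no matter how many identifiers are in the batch, which is the scaling benefit being described.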
{
"msg_contents": "On Fri, 19 Sep 2008, Tom Lane wrote:\n> Your example shows the IN-list as being sorted, but I wonder whether you\n> actually are sorting the items in practice? If not, you might try that\n> to improve locality of access to the index.\n\nWell, like I said, we generally don't have the luxury of dictating the \norder of entries in the data source. However, the IN list itself is sorted \n- more to do with making the logs readable and the tests reproducable than \nfor performance.\n\nHowever, I have been looking at changing the order of the input data. This \nparticular data source is a 29GB xml file, and I wrote a quick program \nwhich sorts that by one key in 40 minutes, which will hopefully allow \nlater data sources (which are easier to sort) to take advantage of spacial \nlocality in the table. However, that key is not the same one as the one \nused in the query above, hence why I say we can't really dictate the order \nof the entries. There's another complication which I won't go into.\n\n> Also, parsing/planning time could be part of your problem here with 1000\n> things to look at. Can you adjust your client code to use a prepared\n> query? I'd try\n> \tSELECT * FROM table WHERE field = ANY($1::text[])\n> (or whatever the field datatype actually is) and then push the list\n> over as a single parameter value using array syntax. You might find\n> that it scales to much larger IN-lists that way.\n\nYes, that is a useful suggestion. However, I am fairly clear that the \nsystem is disk seek-bound at the moment, so it probably wouldn't make a \nmassive improvement. It would also unfortunately require changing a lot of \nour code. Worth doing at some point.\n\nMatthew\n\n-- \n\"Interwoven alignment preambles are not allowed.\"\nIf you have been so devious as to get this message, you will understand\nit, and you deserve no sympathy. -- Knuth, in the TeXbook\n",
"msg_date": "Fri, 19 Sep 2008 16:25:30 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance "
},
{
"msg_contents": "Matthew Wakeling <[email protected]> writes:\n>> In order to improve the performance, I made the system look ahead in the \n>> source, in groups of a thousand entries, so instead of running:\n>> SELECT * FROM table WHERE field = 'something';\n>> a thousand times, we now run:\n>> SELECT * FROM table WHERE field IN ('something', 'something else'...);\n>> with a thousand things in the IN. Very simple query. It does run faster \n>> than the individual queries, but it still takes quite a while. Here is an \n>> example query:\n>> \n\nHave you considered temporary tables? Use COPY to put everything you \nwant to query into a temporary table, then SELECT to join the results, \nand pull all of the results, doing additional processing (UPDATE) as you \npull?\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nMatthew Wakeling <[email protected]> writes:\n\n\nIn order to improve the performance, I made the system look ahead in the \nsource, in groups of a thousand entries, so instead of running:\nSELECT * FROM table WHERE field = 'something';\na thousand times, we now run:\nSELECT * FROM table WHERE field IN ('something', 'something else'...);\nwith a thousand things in the IN. Very simple query. It does run faster \nthan the individual queries, but it still takes quite a while. Here is an \nexample query:\n \n\n\n\n\nHave you considered temporary tables? Use COPY to put everything you\nwant to query into a temporary table, then SELECT to join the results,\nand pull all of the results, doing additional processing (UPDATE) as\nyou pull?\n\nCheers,\nmark\n-- \nMark Mielke <[email protected]>",
"msg_date": "Fri, 19 Sep 2008 12:22:51 -0400",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID arrays and performance"
}
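A minimal sketch of the temporary-table approach Mark describes, again using the Gene table and column from the example upthread (the batch-table name is made up, and the UPDATE-as-you-pull step is left to the client while it iterates over the result):

    BEGIN;

    CREATE TEMPORARY TABLE gene_batch (primaryIdentifier text) ON COMMIT DROP;

    COPY gene_batch (primaryIdentifier) FROM STDIN;
    SPAC11D3.15
    SPAC11D3.16c
    WBGene00023407
    \.

    SELECT g.id, g.primaryIdentifier
    FROM Gene g
    JOIN gene_batch b USING (primaryIdentifier)
    ORDER BY g.id;

    COMMIT;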
] |
[
{
"msg_contents": "Hi,\n\n Postgres 8.2.4 is not using the right plan for different values.\n\n From the below queries listing.addressvaluation table has 19million \nrecords , the other table listing.valuationchangeperiod is just lookup \ntable with 3 records.\n\n If you can see the explain plans for the statements the first one \nuses a bad plan for 737987 addressid search, does a index scan backward \non the primary key \"addressvaluationid\" takes more time to execute and \nthe same query for a different addressid (5851202) uses the correct \noptimal plan with index scan on \"addressid\" column which is way quicker.\n\n Autovacuums usually vacuums these tables regularly, in fact I checked \nthe pg_stat_user_tables the last vacuum/analyze on this table was last \nnight.\n I did another manual vacuum analyze on the listing.addrevaluation \ntable it uses the right plan for all the values now.\n\n Can anyone explain me this wierd behavior ?\n why does it have different plans for different values and after doing \nmanual vacuum analyze it works properly ?\n \n Are autovacuums not effective enough ?\n\n Here are my autovacuum settings\n \nautovacuum_naptime = 120min \nautovacuum_vacuum_threshold = 500 \nautovacuum_analyze_threshold = 250 \nautovacuum_vacuum_scale_factor = 0.001\nautovacuum_analyze_scale_factor = 0.001\nautovacuum_freeze_max_age = 200000000\nautovacuum_vacuum_cost_delay = -1 \nautovacuum_vacuum_cost_limit = -1\n \n Here are the table structures\n\n\n listing.addressvaluation\n Table \n\"listing.addressvaluation\"\n Column | Type \n| Modifiers\n----------------------------+-----------------------------+---------------------------------------------------------------------------\n addressvaluationid | integer | not null \ndefault nextval(('listing.addressvaluationseq'::text)::regclass)\n fkaddressid | integer | not null\n fkaddressvaluationsourceid | integer | not null\n sourcereference | text |\n createdate | timestamp without time zone | not null \ndefault ('now'::text)::timestamp(6) without time zone\n valuationdate | timestamp without time zone | not null\n valuationamount | numeric(14,2) |\n valuationhigh | numeric(14,2) |\n valuationlow | numeric(14,2) |\n valuationconfidence | integer |\n valuationchange | numeric(14,2) |\n fkvaluationchangeperiodid | integer |\n historycharturl | text |\n regionhistorycharturl | text |\nIndexes:\n \"pk_addressvaluation_addressvaluationid\" PRIMARY KEY, btree \n(addressvaluationid), tablespace \"indexdata\"\n \"idx_addressvaluation_createdate\" btree (createdate), tablespace \n\"indexdata\"\n \"idx_addressvaluation_fkaddressid\" btree (fkaddressid), tablespace \n\"indexdata\"\n \"idx_addressvaluation_fkaddressid2\" btree (fkaddressid), tablespace \n\"indexdata\"\nForeign-key constraints:\n \"fk_addressvaluation_address\" FOREIGN KEY (fkaddressid) REFERENCES \nlisting.address(addressid)\n \"fk_addressvaluation_addressvaluationsource\" FOREIGN KEY \n(fkaddressvaluationsourceid) REFERENCES \nlisting.addressvaluationsource(addressvaluationsourceid)\n \"fk_addressvaluation_valuationchangeperiod\" FOREIGN KEY \n(fkvaluationchangeperiodid) REFERENCES \nlisting.valuationchangeperiod(valuationchangeperiodid)\n\nlisting.valuationchangeperiod\n Table \"listing.valuationchangeperiod\"\n Column | Type | \nModifiers\n-------------------------+---------+--------------------------------------------------------------------------------\n valuationchangeperiodid | integer | not null default \nnextval(('listing.valuationchangeperiodseq'::text)::regclass)\n name 
| text | not null\nIndexes:\n \"pk_valuationchangeperiod_valuationchangeperiodid\" PRIMARY KEY, \nbtree (valuationchangeperiodid), tablespace \"indexdata\"\n \"uq_valuationchangeperiod_name\" UNIQUE, btree (name), tablespace \n\"indexdata\"\n\n\n\nFor Addressid 737987 after autovacuum before manual vacuum analyze\n-------------------------------------------------------------------------------------------\nexplain\nselect this_.addressvaluationid as addressv1_150_1_, \nthis_.sourcereference as sourcere2_150_1_,\n this_.createdate as createdate150_1_, this_.valuationdate as \nvaluatio4_150_1_,\n this_.valuationamount as valuatio5_150_1_, this_.valuationhigh \nas valuatio6_150_1_,\n this_.valuationlow as valuatio7_150_1_, \nthis_.valuationconfidence as valuatio8_150_1_,\n this_.valuationchange as valuatio9_150_1_, \nthis_.historycharturl as history10_150_1_,\n this_.regionhistorycharturl as regionh11_150_1_, \nthis_.fkaddressid as fkaddre12_150_1_,\n this_.fkaddressvaluationsourceid as fkaddre13_150_1_, \nthis_.fkvaluationchangeperiodid as fkvalua14_150_1_,\n valuationc2_.valuationchangeperiodid as valuatio1_197_0_, \nvaluationc2_.name as name197_0_\nfrom listing.addressvaluation this_ left outer join \nlisting.valuationchangeperiod valuationc2_\n on \nthis_.fkvaluationchangeperiodid=valuationc2_.valuationchangeperiodid\nwhere this_.fkaddressid=737987\norder by this_.addressvaluationid\ndesc limit 1;\n \nQUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..678.21 rows=1 width=494)\n -> Nested Loop Left Join (cost=0.00..883026.09 rows=1302 width=494)\n -> Index Scan Backward using \npk_addressvaluation_addressvaluationid on addressvaluation this_ \n(cost=0.00..882649.43 rows=1302 width=482)\n Filter: (fkaddressid = 737987)\n -> Index Scan using \npk_valuationchangeperiod_valuationchangeperiodid on \nvaluationchangeperiod valuationc2_ (cost=0.00..0.28 rows=1 width=12)\n Index Cond: (this_.fkvaluationchangeperiodid = \nvaluationc2_.valuationchangeperiodid)\n(6 rows)\n\n\nFor Addressid 5851202 after autovacuum before manual vacuum analyze\n--------------------------------------------------------------------------------------------\n\nselect this_.addressvaluationid as addressv1_150_1_, \nthis_.sourcereference as sourcere2_150_1_,\n this_.createdate as createdate150_1_, this_.valuationdate as \nvaluatio4_150_1_,\n this_.valuationamount as valuatio5_150_1_, this_.valuationhigh \nas valuatio6_150_1_,\n this_.valuationlow as valuatio7_150_1_, \nthis_.valuationconfidence as valuatio8_150_1_,\n this_.valuationchange as valuatio9_150_1_, \nthis_.historycharturl as history10_150_1_,\n this_.regionhistorycharturl as regionh11_150_1_, \nthis_.fkaddressid as fkaddre12_150_1_,\n this_.fkaddressvaluationsourceid as fkaddre13_150_1_, \nthis_.fkvaluationchangeperiodid as fkvalua14_150_1_,\n valuationc2_.valuationchangeperiodid as valuatio1_197_0_, \nvaluationc2_.name as name197_0_\nfrom listing.addressvaluation this_ left outer join \nlisting.valuationchangeperiod valuationc2_\non this_.fkvaluationchangeperiodid=valuationc2_.valuationchangeperiodid\nwhere this_.fkaddressid=5851202\norder by this_.addressvaluationid\ndesc limit 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=30.68..30.68 rows=1 width=494)\n -> Sort (cost=30.68..30.71 rows=11 width=494)\n Sort Key: 
this_.addressvaluationid\n -> Hash Left Join (cost=1.07..30.49 rows=11 width=494)\n Hash Cond: (this_.fkvaluationchangeperiodid = \nvaluationc2_.valuationchangeperiodid)\n -> Index Scan using idx_addressvaluation_fkaddressid2 on \naddressvaluation this_ (cost=0.00..29.27 rows=11 width=482)\n Index Cond: (fkaddressid = 5851202)\n -> Hash (cost=1.03..1.03 rows=3 width=12)\n -> Seq Scan on valuationchangeperiod valuationc2_ \n(cost=0.00..1.03 rows=3 width=12)\n(9 rows)\n\n\nAfter manual vacuum analyze for addressid 737987\n------------------------------------------------------------------\n\nexplain\nselect this_.addressvaluationid as addressv1_150_1_, \nthis_.sourcereference as sourcere2_150_1_,\n this_.createdate as createdate150_1_, this_.valuationdate as \nvaluatio4_150_1_,\n this_.valuationamount as valuatio5_150_1_, this_.valuationhigh \nas valuatio6_150_1_,\n this_.valuationlow as valuatio7_150_1_, \nthis_.valuationconfidence as valuatio8_150_1_,\n this_.valuationchange as valuatio9_150_1_, \nthis_.historycharturl as history10_150_1_,\n this_.regionhistorycharturl as regionh11_150_1_, \nthis_.fkaddressid as fkaddre12_150_1_,\n this_.fkaddressvaluationsourceid as fkaddre13_150_1_, \nthis_.fkvaluationchangeperiodid as fkvalua14_150_1_,\n valuationc2_.valuationchangeperiodid as valuatio1_197_0_, \nvaluationc2_.name as name197_0_\nfrom listing.addressvaluation this_ inner join \nlisting.valuationchangeperiod valuationc2_\non this_.fkvaluationchangeperiodid=valuationc2_.valuationchangeperiodid\nwhere this_.fkaddressid=737987\norder by this_.addressvaluationid\ndesc limit 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=31.24..31.24 rows=1 width=494)\n -> Sort (cost=31.24..31.27 rows=11 width=494)\n Sort Key: this_.addressvaluationid\n -> Hash Join (cost=1.07..31.05 rows=11 width=494)\n Hash Cond: (this_.fkvaluationchangeperiodid = \nvaluationc2_.valuationchangeperiodid)\n -> Index Scan using idx_addressvaluation_fkaddressid on \naddressvaluation this_ (cost=0.00..29.83 rows=11 width=482)\n Index Cond: (fkaddressid = 737987)\n -> Hash (cost=1.03..1.03 rows=3 width=12)\n -> Seq Scan on valuationchangeperiod valuationc2_ \n(cost=0.00..1.03 rows=3 width=12)\n(9 rows)\n\n\n\nThanks!\nPallav.\n\n\n",
"msg_date": "Tue, 04 Dec 2007 11:06:05 -0500",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer Not using the Right plan"
},
{
"msg_contents": "Pallav Kalva <[email protected]> writes:\n> why does it have different plans for different values\n\nBecause the values occur different numbers of times (or so it thinks\nanyway). If the rowcount estimates are far from reality, perhaps\nincreasing the statistics target would help. However, since you\nonly showed EXPLAIN and not EXPLAIN ANALYZE output, no one can\nreally tell whether the optimizer did anything wrong here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 04 Dec 2007 11:32:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer Not using the Right plan "
},
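A minimal sketch of the statistics-target adjustment Tom mentions, applied to the filter column of the query in question. The value 500 is only an illustrative step up from the reported default_statistics_target of 100, and the EXPLAIN uses a stripped-down version of the query (the join to valuationchangeperiod is dropped):

    ALTER TABLE listing.addressvaluation
        ALTER COLUMN fkaddressid SET STATISTICS 500;
    ANALYZE listing.addressvaluation;

    EXPLAIN ANALYZE
    SELECT *
    FROM listing.addressvaluation
    WHERE fkaddressid = 737987
    ORDER BY addressvaluationid DESC
    LIMIT 1;

The point of the EXPLAIN ANALYZE is to compare the estimated row count for fkaddressid = 737987 against the actual one; if they still diverge badly, the statistics are still the problem.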
{
"msg_contents": "Tom Lane wrote:\n> Pallav Kalva <[email protected]> writes:\n> \n>> why does it have different plans for different values\n>> \n>\n> Because the values occur different numbers of times (or so it thinks\n> anyway). If the rowcount estimates are far from reality, perhaps\n> increasing the statistics target would help. However, since you\n> only showed EXPLAIN and not EXPLAIN ANALYZE output, no one can\n> really tell whether the optimizer did anything wrong here.\n>\n> \t\t\tregards, tom lane\n> \n\nHi Tom,\n\n Thanks! for your reply, here is an another example of the same query \nwith different addressid now. This time I got the explain analyze on the \nquery,\n this query also uses the Index Scan Backwards, it says it took 28 \nseconds but I can say that after looking at the postgres logs it took \nmore than 2 min\n when the query first ran. I ran this one again to get the explain \nanalyze.\n\n The statistics set to \"default_statistics_target = 100\"\n\n I am sure if it uses index on addressid it would be quicker but for \nsome reason it using index backward scan on addressvaluationid and that \nis taking too long.\n\n Not only this one there are some other queries which use index scan \nbackwards scan and it takes too long. Index scan backwards most of the \ntime is not doing good for me is there any way to avoid it ?\n\n \n\nexplain analyze\nselect this_.addressvaluationid as addressv1_150_1_, \nthis_.sourcereference as sourcere2_150_1_,\n this_.createdate as createdate150_1_, this_.valuationdate as \nvaluatio4_150_1_, this_.valuationamount as valuatio5_150_1_,\n this_.valuationhigh as valuatio6_150_1_, this_.valuationlow as \nvaluatio7_150_1_,\n this_.valuationconfidence as valuatio8_150_1_, \nthis_.valuationchange as valuatio9_150_1_,\n this_.historycharturl as history10_150_1_, \nthis_.regionhistorycharturl as regionh11_150_1_,\n this_.fkaddressid as fkaddre12_150_1_, \nthis_.fkaddressvaluationsourceid as fkaddre13_150_1_,\n this_.fkvaluationchangeperiodid as fkvalua14_150_1_, \nvaluationc2_.valuationchangeperiodid as valuatio1_197_0_,\n valuationc2_.name as name197_0_\nfrom listing.addressvaluation this_ left outer join \nlisting.valuationchangeperiod valuationc2_\n on \nthis_.fkvaluationchangeperiodid=valuationc2_.valuationchangeperiodid\nwhere this_.fkaddressid= 6664161\norder by this_.addressvaluationid desc limit 1;\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..677.69 rows=1 width=494) (actual \ntime=28454.708..28454.712 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..883705.44 rows=1304 width=494) \n(actual time=28454.700..28454.700 rows=1 loops=1)\n -> Index Scan Backward using \npk_addressvaluation_addressvaluationid on addressvaluation this_ \n(cost=0.00..883328.22 rows=1304 width=482) (actual \ntime=28441.236..28441.236 rows=1 loops=1)\n Filter: (fkaddressid = 6664161)\n -> Index Scan using \npk_valuationchangeperiod_valuationchangeperiodid on \nvaluationchangeperiod valuationc2_ (cost=0.00..0.28 rows=1 width=12) \n(actual time=13.447..13.447 rows=1 loops=1)\n Index Cond: (this_.fkvaluationchangeperiodid = \nvaluationc2_.valuationchangeperiodid)\n Total runtime: 28454.789 ms\n(7 rows)\n\n\n",
"msg_date": "Tue, 04 Dec 2007 14:15:52 -0500",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer Not using the Right plan"
}
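The thread leaves the last question open. One common approach (not suggested in the thread itself, so treat it purely as an assumption) is to give the planner a single index that covers both the filter and the ORDER BY ... LIMIT 1, so it no longer has to choose between the fkaddressid index and a backward scan of the whole primary key:

    -- hypothetical index, not part of the original schema
    CREATE INDEX idx_addressvaluation_fkaddressid_avid
        ON listing.addressvaluation (fkaddressid, addressvaluationid);

With such an index the planner can scan backwards within the matching fkaddressid entries and stop after the first row, whichever way its row-count estimate leans.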
] |
[
{
"msg_contents": "Hi all,\n\nI have a large database with e-mail meta-data (no bodies) for over 100\nmillion messages. I am running PostgreSQL 8.2.4 on a server with 2GB of\nRAM (shared_buffers = 240MB, temp_buffers = 128MB, work_mem = 256MB,\nmaintenance_work_mem = 256MB). I have the data split in two separate\ntables, \"email\" and \"email_extras\":\n\n Table \"public.email\"\n Column | Type | Modifiers\n -------------------+-----------------------------+-----------\n id | bigint | not null\n load_id | integer | not null\n ts | timestamp without time zone | not null\n ip | inet | not null\n mfrom | text | not null\n helo | text |\n\n Table \"public.email_extras\"\n Column | Type | Modifiers\n -------------------+-----------------------------+-----------\n id | bigint | not null\n ts | timestamp without time zone | not null\n size | integer | not null\n hdr_from | text |\n\nEach of these tables has been partitioned equally based on the \"ts\"\n(timestamp) field into two dozen or so tables, each covering one week of\nmessages. For testing purposes, I have only one week's partition filled\nfor each of the \"email\" and \"email_extras\" tables (email_2007_week34\n{,extras}).\n\nNow if I perform the following simple join on the \"email\" and \"email_\nextras\" tables ...\n\n SELECT\n count(*)\n FROM\n email\n INNER JOIN email_extras USING (id, ts)\n WHERE\n mfrom <> hdr_from;\n\nthen I get the following horrendously inefficient plan:\n\n QUERY PLAN\n --------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=391396890.89..391396890.90 rows=1 width=0)\n -> Merge Join (cost=9338881.64..349156398.02 rows=16896197148 width=0)\n Merge Cond: ((public.email_extras.id = public.email.id) AND (public.email_extras.ts = public.email.ts))\n Join Filter: (public.email.mfrom <> public.email_extras.hdr_from)\n -> Sort (cost=4592966.95..4658121.33 rows=26061752 width=48)\n Sort Key: public.email_extras.id, public.email_extras.ts\n -> Append (cost=0.00..491969.52 rows=26061752 width=48)\n -> Seq Scan on email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week13_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week14_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week15_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week16_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week17_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week18_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week19_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week20_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week21_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week22_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week23_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week24_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week25_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week26_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week27_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on 
email_2007_week28_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week29_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week30_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week31_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week32_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week33_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week34_extras email_extras (cost=0.00..491597.12 rows=26052512 width=33)\n -> Seq Scan on email_2007_week35_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week36_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week37_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week38_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week39_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Seq Scan on email_2007_week40_extras email_extras (cost=0.00..13.30 rows=330 width=48)\n -> Sort (cost=4745914.69..4811071.87 rows=26062872 width=48)\n Sort Key: public.email.id, public.email.ts\n -> Append (cost=0.00..644732.72 rows=26062872 width=48)\n -> Seq Scan on email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week13 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week14 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week15 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week16 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week17 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week18 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week19 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week20 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week21 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week22 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week23 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week24 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week25 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week26 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week27 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week28 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week29 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week30 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week31 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week32 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week33 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week34 email (cost=0.00..644349.12 rows=26052512 width=33)\n -> Seq Scan on email_2007_week35 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week36 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week37 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week38 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on email_2007_week39 email (cost=0.00..13.70 rows=370 width=48)\n -> Seq Scan on 
email_2007_week40 email (cost=0.00..13.70 rows=370 width=48)\n (68 rows)\n\nHowever, if I restrict the query to just the partitions that actually do\nhave data in them ...\n\n SELECT\n count(*)\n FROM\n email_2007_week34\n INNER JOIN email_2007_week34_extras USING (id, ts)\n WHERE\n mfrom <> hdr_from;\n\nthen I get a much better plan that uses a hash join:\n\n QUERY PLAN \n ------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=4266338.94..4266338.95 rows=1 width=0)\n -> Hash Join (cost=1111362.80..4266336.07 rows=1145 width=0)\n Hash Cond: ((email_2007_week34.ts = email_2007_week34_extras.ts) AND (email_2007_week34.id = email_2007_week34_extras.id))\n Join Filter: (email_2007_week34.mfrom <> email_2007_week34_extras.hdr_from)\n -> Seq Scan on email_2007_week34 (cost=0.00..644349.12 rows=26052512 width=33)\n -> Hash (cost=491597.12..491597.12 rows=26052512 width=33)\n -> Seq Scan on email_2007_week34_extras (cost=0.00..491597.12 rows=26052512 width=33)\n (7 rows)\n\nYes, I have `ANALYZE`d the database before running the queries.\n\nHow come the query planner gets thrown off that far by the simple table\npartitioning? What can I do to put the query planner back on the right\ntrack?\n\nJulian.",
"msg_date": "Tue, 4 Dec 2007 19:44:08 +0000",
"msg_from": "Julian Mehnle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plans for queries on partitioned table"
},
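For reference, a minimal sketch of how one such weekly partition pair is typically declared under 8.2-style inheritance partitioning. The CHECK ranges are the week-34 constraints quoted later in the thread; the insert-routing rules or triggers, the indexes, and the other weeks are omitted:

    CREATE TABLE email_2007_week34 (
        CHECK (ts >= '2007-08-20 00:00:00' AND ts < '2007-08-27 00:00:00')
    ) INHERITS (email);

    CREATE TABLE email_2007_week34_extras (
        CHECK (ts >= '2007-08-20 00:00:00' AND ts < '2007-08-27 00:00:00')
    ) INHERITS (email_extras);

    -- the planner only prunes partitions when this is on:
    SET constraint_exclusion = on;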
{
"msg_contents": "Julian Mehnle wrote:\n> I have a large database with e-mail meta-data (no bodies) for over 100\n> million messages. I am running PostgreSQL 8.2.4 on a server with 2GB\n> of RAM (shared_buffers = 240MB, temp_buffers = 128MB, work_mem = 256MB,\n> maintenance_work_mem = 256MB). I have the data split in two separate\n> tables, \"email\" and \"email_extras\":\n> [...]\n>\n> Each of these tables has been partitioned equally based on the \"ts\"\n> (timestamp) field into two dozen or so tables, each covering one week\n> of messages. For testing purposes, I have only one week's partition\n> filled for each of the \"email\" and \"email_extras\" tables\n> (email_2007_week34 {,extras}).\n\nOh, just for the record: I do have \"constraint_exclusion\" enabled.\n\nJulian.",
"msg_date": "Tue, 4 Dec 2007 20:03:14 +0000",
"msg_from": "Julian Mehnle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
},
{
"msg_contents": "\"Julian Mehnle\" <[email protected]> writes:\n\n> However, if I restrict the query to just the partitions that actually do\n> have data in them ...\n\nThere are a few things going on here. \n\n1) The optimizer can't build a plan which ignores those partitions because the\nstatistics are just approximations. You could insert into one of them at any\ntime and the statistics won't update immediately. If you have a partition\nwhich is empty of some type of data you can put a constraint on it to promise\nthe optimizer that that condition will stay true.\n\n2) The optimizer is assuming that empty tables have a default 1,000 records in\nthem with no idea about their statistics. Otherwise you get terrible plans on\ntables which have just been created or never analyzed. In this case that's\ncausing it to think there will be tons of matches on what is apparently a very\nselective criterion.\n\n3) The optimizer is a bit dumb about partitioned tables. But I'm not sure if\nthat's actually the fault here.\n\nTry adding one record of data to each of those partitions or putting a\nconstraint on them which will allow constraint_exclusion (I assume you have\nthat enabled?) to kick in. You'll still be bitten by the parent table but\nhopefully that's not enough to cause a problem.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n",
"msg_date": "Tue, 04 Dec 2007 20:15:21 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
},
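A minimal sketch of the two workarounds Gregory suggests, shown for the week-33 partition. The date range is an assumption (taken to be the week before the week-34 range quoted elsewhere in the thread), and the constraint name and placeholder row values are made up:

    -- option 1: promise the planner this partition only ever holds week-33 rows
    ALTER TABLE email_2007_week33
        ADD CONSTRAINT email_2007_week33_ts_check
        CHECK (ts >= '2007-08-13 00:00:00' AND ts < '2007-08-20 00:00:00');

    -- option 2: give the otherwise-empty partition real (tiny) statistics
    INSERT INTO email_2007_week33 (id, load_id, ts, ip, mfrom)
        VALUES (-1, 0, '2007-08-13 00:00:00', '127.0.0.1', 'placeholder');
    ANALYZE email_2007_week33;

As the next message shows, the CHECK constraints were already in place here, so in this particular case only the statistics half of the advice applies.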
{
"msg_contents": "Gregory Stark wrote:\n> There are a few things going on here.\n>\n> 1) The optimizer can't build a plan which ignores those partitions\n> because the statistics are just approximations. You could insert into\n> one of them at any time and the statistics won't update immediately. If\n> you have a partition which is empty of some type of data you can put a\n> constraint on it to promise the optimizer that that condition will stay\n> true.\n\nI actually do have constraints on all the partitions, e.g. for week 34:\n\n Check constraints [for email_2007_week34]:\n \"email_2007_week34_ts_check\" CHECK (ts >= '2007-08-20 00:00:00'::timestamp without time zone AND ts < '2007-08-27 00:00:00'::timestamp without time zone)\n\n Check constraints [for email_2007_week34_extras]:\n \"email_2007_week34_extras_ts_check\" CHECK (ts >= '2007-08-20 00:00:00'::timestamp without time zone AND ts < '2007-08-27 00:00:00'::timestamp without time zone)\n\nShouldn't this be enough to give the query planner a clue that it only\nhas to join the \"email\" and \"email_extras\" tables' partitions pair-wise,\nas opposed to cross-joining them?\n\nFor the record, I also have indexes:\n\n Indexes [for email_2007_week34]:\n \"email_2007_week34_pkey\" PRIMARY KEY, btree (id)\n \"index_email_2007_week34_on_helo\" btree (helo)\n \"index_email_2007_week34_on_ip\" btree (ip)\n \"index_email_2007_week34_on_load_id\" btree (load_id)\n \"index_email_2007_week34_on_mfrom\" btree (mfrom)\n \"index_email_2007_week34_on_ts\" btree (ts)\n\n Indexes [for for email_2007_week34_extras]:\n \"email_2007_week34_extras_pkey\" PRIMARY KEY, btree (id)\n\n> 2) The optimizer is assuming that empty tables have a default 1,000\n> records in them with no idea about their statistics. Otherwise you get\n> terrible plans on tables which have just been created or never\n> analyzed. In this case that's causing it to think there will be tons of\n> matches on what is apparently a very selective criterion.\n\nI see. But this shouldn't matter under the assumption that constraint\nexclusion works correctly, right?\n\n> 3) The optimizer is a bit dumb about partitioned tables. But I'm not\n> sure if that's actually the fault here.\n>\n> Try adding one record of data to each of those partitions or putting a\n> constraint on them which will allow constraint_exclusion (I assume you\n> have that enabled?) to kick in. You'll still be bitten by the parent\n> table but hopefully that's not enough to cause a problem.\n\nThe parent table is empty. How will adding one record to each of the\npartitions make a difference given the above constraints?\n\nJulian.",
"msg_date": "Tue, 4 Dec 2007 20:27:01 +0000",
"msg_from": "Julian Mehnle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
},
{
"msg_contents": "\"Julian Mehnle\" <[email protected]> writes:\n\n> I actually do have constraints on all the partitions, e.g. for week 34:\n>\n> Check constraints [for email_2007_week34]:\n> \"email_2007_week34_ts_check\" CHECK (ts >= '2007-08-20 00:00:00'::timestamp without time zone AND ts < '2007-08-27 00:00:00'::timestamp without time zone)\n>\n> Check constraints [for email_2007_week34_extras]:\n> \"email_2007_week34_extras_ts_check\" CHECK (ts >= '2007-08-20 00:00:00'::timestamp without time zone AND ts < '2007-08-27 00:00:00'::timestamp without time zone)\n>\n> Shouldn't this be enough to give the query planner a clue that it only\n> has to join the \"email\" and \"email_extras\" tables' partitions pair-wise,\n> as opposed to cross-joining them?\n\nAh, well, this falls under \"The optimizer is a bit dumb about partitioned\ntables\". It only looks at the constraints to compare against your WHERE\nclause. It doesn't compare them against the constraints for other tables to\nsee if they're partitioned on the same key and therefore can be joined\ntable-by-table.\n\nI want 8.4 to be cleverer in this area but there's a ton of things it has to\nlearn.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n",
"msg_date": "Wed, 05 Dec 2007 00:39:45 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
},
{
"msg_contents": "Gregory Stark wrote:\n> \"Julian Mehnle\" <[email protected]> writes:\n> > I actually do have constraints on all the partitions, e.g. for week\n> > 34: [...]\n> >\n> > Shouldn't this be enough to give the query planner a clue that it\n> > only has to join the \"email\" and \"email_extras\" tables' partitions\n> > pair-wise, as opposed to cross-joining them?\n>\n> Ah, well, this falls under \"The optimizer is a bit dumb about\n> partitioned tables\". It only looks at the constraints to compare\n> against your WHERE clause. It doesn't compare them against the\n> constraints for other tables to see if they're partitioned on the same\n> key and therefore can be joined table-by-table.\n>\n> I want 8.4 to be cleverer in this area but there's a ton of things it\n> has to learn.\n\nThat would be great.\n\nSo there's nothing that can be done about it right now, apart from \nmanually combining separate SELECTs for each partition using UNION?\n\nJulian.",
"msg_date": "Wed, 5 Dec 2007 01:57:25 +0000",
"msg_from": "Julian Mehnle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
},
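A minimal sketch of the manual combination Julian asks about: join each week's pair of partitions explicitly and sum the per-week counts. Only two weeks are shown; in practice there would be one arm per populated week:

    SELECT sum(cnt) AS total
    FROM (
            SELECT count(*) AS cnt
            FROM email_2007_week34 e
            JOIN email_2007_week34_extras x USING (id, ts)
            WHERE e.mfrom <> x.hdr_from
        UNION ALL
            SELECT count(*)
            FROM email_2007_week35 e
            JOIN email_2007_week35_extras x USING (id, ts)
            WHERE e.mfrom <> x.hdr_from
            -- ... one arm per remaining week ...
    ) AS per_week;

Each arm can then use the per-partition indexes and its own join strategy, which is essentially the "Append of per-partition joins" plan shape described in the reply that follows.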
{
"msg_contents": "\"Julian Mehnle\" <[email protected]> writes:\n\n> Gregory Stark wrote:\n>> \"Julian Mehnle\" <[email protected]> writes:\n>> > I actually do have constraints on all the partitions, e.g. for week\n>> > 34: [...]\n>> >\n>> > Shouldn't this be enough to give the query planner a clue that it\n>> > only has to join the \"email\" and \"email_extras\" tables' partitions\n>> > pair-wise, as opposed to cross-joining them?\n>>\n>> Ah, well, this falls under \"The optimizer is a bit dumb about\n>> partitioned tables\". It only looks at the constraints to compare\n>> against your WHERE clause. It doesn't compare them against the\n>> constraints for other tables to see if they're partitioned on the same\n>> key and therefore can be joined table-by-table.\n>>\n>> I want 8.4 to be cleverer in this area but there's a ton of things it\n>> has to learn.\n>\n> That would be great.\n>\n> So there's nothing that can be done about it right now, apart from \n> manually combining separate SELECTs for each partition using UNION?\n\nWell the in the query you gave I think if the partitions weren't completely\nempty it would still be using the hash join, it would just be doin an append\nof all the nearly-empty partitions first. The reason it's getting confused is\nthat in the absence of stats on them it thinks they contain hundreds of tuples\nwhich will match your where clause and join clause. Look at the expected\nnumber of rows the for the merge jjoin compared to the expected number of rows\nfor the hash join.\n\nBut yeah, there will be cases where you really want:\n\nAppend\n Merge Join\n Part1 of table1\n Part2 of table2\n Merge Join\n Part2 of table1\n Part2 of table2\n ...\n\n\nBut the planner only knows how to do:\n\nMerge Join\n Append\n Part1 of table1\n Part2 of table1\n ...\n Append\n Part1 of table1\n Part2 of table2\n ...\n\nWhich requires two big sorts whereas the first plan could use indexes on\nindividual partitions. It also has a slower startup time and can't take\nadvantage of discovering that a partition of table1 is empty to avoid ever\nreading from the corresponding partition of table2 the way the first plan can.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n",
"msg_date": "Wed, 05 Dec 2007 08:39:25 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
},
{
"msg_contents": "Gregory, thanks for all the insight! It is much appreciated.\n\nJulian.",
"msg_date": "Wed, 5 Dec 2007 11:26:56 +0000",
"msg_from": "Julian Mehnle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plans for queries on partitioned table"
}
] |
[
{
"msg_contents": "Hi All,\n I've been tasked to evaluate PG as a possible replacement of our MS SQL 2000 solution. Our solution is 100% stored procedure/function centric. It's a report generation system whose sole task is to produce text files filled with processed data that is post-processed by a secondary system. Basically options are selected via a web interface and all these parameters are passed unto the stored procedure and then the stored procedure would run and in the process call other stored procedures until eventually a single formatted text file is produced. \n I decided on Fedora Core 7 and the 8.3 Beta release of Enterprise DB PostgreSQL. I decided to port 1 stored procedure plus it's several support stored procedures into pl/pgsql from T-SQL and compare the performance by measuring how long each system takes to produce the text file. For this test, the output to the text file was discarded and the stored procedure/function would end once the final temporary table is filled with the information that is eventually dumped into the text file. \n\nWindows 2000 Professional + MSDE (/MS SQL) Box vs. FC7 + EnterpriseDB PG Box\n\nNote that both boxes have EXACTLY the same hardware (not VMWARE or anything) \nAMD X2 3800\n2 G RAM DDR 400\n80 G Barracuda Sata\n\nThe data was copied to the Linux box and checked lightly for consistency versus the windows box (number of tables / columns and records) and they all match. After data transfer to the Linux Box, I ran REINDEX and ANALYZE. \n\nFor the actual run the following tables were used and I'm displaying the results of analyze.\n\nINFO: analyzing \"public.AreaDefn\"\nINFO: \"AreaDefn\": scanned 15 of 15 pages, containing 2293 live rows and 0 dead rows; 2293 rows in sample, 2293 estimated total rows\nINFO: analyzing \"public.AreaDefn2\"\nINFO: \"AreaDefn2\": scanned 30 of 30 pages, containing 3439 live rows and 0 dead rows; 3000 rows in sample, 3439 estimated total rows\nINFO: analyzing \"public.Areas\"\nINFO: \"Areas\": scanned 2 of 2 pages, containing 164 live rows and 0 dead rows; 164 rows in sample, 164 estimated total rows\nINFO: analyzing \"public.Brands\"\nINFO: \"Brands\": scanned 1 of 1 pages, containing 11 live rows and 0 dead rows; 11 rows in sample, 11 estimated total rows\nINFO: analyzing \"public.Categories\"\nINFO: \"Categories\": scanned 1 of 1 pages, containing 26 live rows and 0 dead rows; 26 rows in sample, 26 estimated total rows\nINFO: analyzing \"public.CategoryDefn\"\nINFO: \"CategoryDefn\": scanned 1 of 1 pages, containing 133 live rows and 0 dead rows; 133 rows in sample, 133 estimated total rows\nINFO: analyzing \"public.CategoryDefn2\"\nINFO: \"CategoryDefn2\": scanned 2 of 2 pages, containing 211 live rows and 0 dead rows; 211 rows in sample, 211 estimated total rows\nINFO: analyzing \"public.CategorySets\"\nINFO: \"CategorySets\": scanned 1 of 1 pages, containing 3 live rows and 0 dead rows; 3 rows in sample, 3 estimated total rows\nINFO: analyzing \"public.CATemplateGroup\"\nINFO: analyzing \"public.Channels\"\nINFO: \"Channels\": scanned 1 of 1 pages, containing 7 live rows and 0 dead rows; 7 rows in sample, 7 estimated total rows\nINFO: analyzing \"public.ClientCodes\"\nINFO: analyzing \"public.Clients\"\nINFO: \"Clients\": scanned 7 of 7 pages, containing 366 live rows and 0 dead rows; 366 rows in sample, 366 estimated total rows\nINFO: analyzing \"public.Customers\"\nINFO: \"Customers\": scanned 2 of 2 pages, containing 129 live rows and 0 dead rows; 129 rows in sample, 129 estimated total rows\nNFO: 
analyzing \"public.Databases\"\nINFO: \"Databases\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows\nINFO: analyzing \"public.DataSources\"\nINFO: \"DataSources\": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rows\nINFO: analyzing \"public.DateToWeekConversion\"\nINFO: \"DateToWeekConversion\": scanned 4 of 4 pages, containing 371 live rows and 0 dead rows; 371 rows in sample, 371 estimated total rows\nINFO: analyzing \"public.Divisions\"\nINFO: \"Divisions\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows\nINFO: analyzing \"public.MetricTable\"\nINFO: \"MetricTable\": scanned 1 of 1 pages, containing 48 live rows and 0 dead rows; 48 rows in sample, 48 estimated total rows\nINFO: analyzing \"public.Offtake\"\nINFO: \"Offtake\": scanned 3000 of 13824 pages, containing 141000 live rows and 0 dead rows; 3000 rows in sample, 649728 estimated total rows\nINFO: analyzing \"public.SKUs\"\nINFO: \"SKUs\": scanned 3 of 3 pages, containing 73 live rows and 0 dead rows; 73 rows in sample, 73 estimated total rows\nINFO: analyzing \"public.SMSDefaults\"\nINFO: \"SMSDefaults\": scanned 1 of 1 pages, containing 43 live rows and 0 dead rows; 43 rows in sample, 43 estimated total rows\nINFO: analyzing \"public.StandardPeriods\"\nINFO: \"StandardPeriods\": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rows\nINFO: analyzing \"public.StandardUnits\"\nINFO: \"StandardUnits\": scanned 1 of 1 pages, containing 9 live rows and 0 dead rows; 9 rows in sample, 9 estimated total rows\nINFO: analyzing \"public.SubDataSources\"\nINFO: \"SubDataSources\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\nINFO: analyzing \"public.VolumeUnitDefn\"\nINFO: \"VolumeUnitDefn\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\nINFO: analyzing \"public.VolumeUnits\"\nINFO: \"VolumeUnits\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n\nas you can see the biggest one only has 600k records. \n\nHere are the settings used for postgresql.conf ( will just list those that were modified) \n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n#shared_buffers = 32MB\nshared_buffers = 128MB # min 128kB or max_connections*16kB\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\ntemp_buffers = 32MB\n#max_prepared_transactions = 5 # can be 0 or more\n # (change requires restart)\n#max_prepared_transactions = 20 \n# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n#work_mem = 1MB # min 64kB\nwork_mem = 2MB\n#maintenance_work_mem = 16MB # min 1MB\nmaintenance_work_mem = 32MB\n#max_stack_depth = 2MB # min 100kB\n\n\nEverything else was kept as is (given by the installer). \n\n/etc/sysctl.conf settings below :\nkernel.shmmax = 1536000000\nkernel.msgmni = 1024\nfs.file-max = 8192\nkernel.sem = \"250 32000 32 1024\"\n\nThe main stored function has approximately 1200 lines of pl/pgsql code. While it is running it calls 7 other support stored functions plus a small view. 
\n\nThe test basically was run two ways :\n\na) for the linux box, we used PG Admin III to run the core stored function and in the windows box we used query analyzer. \n\nb) Created a small vb program that just calls the stored function for both boxes. \n\n Since I'm a total newbie in PG, I was expecting dismal results in the initial run since our port of the code would not be optimized for PG and sure enough I got them.\n\nWindows 2k Pro + MSDE - 4 seconds \nFC7 + postgresql-8.3-beta3 - 77 seconds\n\n thinking that maybe the GUI of FC7 is hampering the load, I decided to run it again using runlevel 3. The result was approximately a 1-2 % gain on the FC7 but that is insignificant compared to the discrepancy I'm measuring so I decided to just use the w/GUI results. We noticed that the CPU for the linux box was tapped out (at least one of the cores) nearly 100% for the duration of the process. There was plenty of ram available and there was little to no disk access during the entire run. \n\nI decided to further normalize the test and make the OS constant.\n\nWindows 2k Pro + MSDE - 4 seconds\nWindows 2k Pro + postgresql-8.3-beta3 - 54 seconds\n\nTurns out that for our code, running PG in windows is faster significantly. This was a surprise coming from all the research on the internet but it does jive with the results of this guy :\n\nhttp://channel9.msdn.com/ShowPost.aspx?PostID=179064\n\nNote that this guy used MS SQL 2005. Which we have proved to be 2-3 times slower than MS SQL 2000 and hence our research into other options. :)\n\nAnyways I'd like to break up this request/begging for help into two parts.\n\n1) First would be settings of postgresql.conf. Did I do it correctly? The sample data is so small....I based my settings on the recommendations researched for data centers.\n\n2) Code optimization which I plan to start in another email thread since the discussions there would be more detailed.\n\nWould it also make sense to optimize (as far as possible) everything (including the code) for windows first? The target here would be a linux OS but since the discrepancy is so big...the unified Windows OS might be a good place to start for now. \n\nMany Thanks in advance.\n\nRegards\n\n\n\n\nHi All, I've been tasked to evaluate PG as a possible replacement of our MS SQL 2000 solution. Our solution is 100% stored procedure/function centric. It's a report generation system whose sole task is to produce text files filled with processed data that is post-processed by a secondary system. Basically options are selected via a web interface and all these parameters are passed unto the stored procedure and then the stored procedure would run and in the process call other stored procedures until eventually a single formatted text file is produced. I decided on Fedora Core 7 and the 8.3 Beta release of Enterprise DB PostgreSQL. I decided to port 1 stored procedure plus it's several support stored procedures into pl/pgsql from T-SQL and compare the\n performance by measuring how long each system takes to produce the text file. For this test, the output to the text file was discarded and the stored procedure/function would end once the final temporary table is filled with the information that is eventually dumped into the text file. Windows 2000 Professional + MSDE (/MS SQL) Box vs. 
FC7 + EnterpriseDB PG BoxNote that both boxes have EXACTLY the same hardware (not VMWARE or anything) AMD X2 38002 G RAM DDR 40080 G Barracuda SataThe data was copied to the Linux box and checked lightly for consistency versus the windows box (number of tables / columns and records) and they all match. After data transfer to the Linux Box, I ran REINDEX and ANALYZE. For the actual run the following tables were used and I'm displaying the results of analyze.INFO: analyzing \"public.AreaDefn\"INFO: \"AreaDefn\": scanned 15 of\n 15 pages, containing 2293 live rows and 0 dead rows; 2293 rows in sample, 2293 estimated total rowsINFO: analyzing \"public.AreaDefn2\"INFO: \"AreaDefn2\": scanned 30 of 30 pages, containing 3439 live rows and 0 dead rows; 3000 rows in sample, 3439 estimated total rowsINFO: analyzing \"public.Areas\"INFO: \"Areas\": scanned 2 of 2 pages, containing 164 live rows and 0 dead rows; 164 rows in sample, 164 estimated total rowsINFO: analyzing \"public.Brands\"INFO: \"Brands\": scanned 1 of 1 pages, containing 11 live rows and 0 dead rows; 11 rows in sample, 11 estimated total rowsINFO: analyzing \"public.Categories\"INFO: \"Categories\": scanned 1 of 1 pages, containing 26 live rows and 0 dead rows; 26 rows in sample, 26 estimated total rowsINFO: analyzing \"public.CategoryDefn\"INFO: \"CategoryDefn\": scanned 1 of 1 pages, containing 133 live rows and 0 dead rows; 133 rows in\n sample, 133 estimated total rowsINFO: analyzing \"public.CategoryDefn2\"INFO: \"CategoryDefn2\": scanned 2 of 2 pages, containing 211 live rows and 0 dead rows; 211 rows in sample, 211 estimated total rowsINFO: analyzing \"public.CategorySets\"INFO: \"CategorySets\": scanned 1 of 1 pages, containing 3 live rows and 0 dead rows; 3 rows in sample, 3 estimated total rowsINFO: analyzing \"public.CATemplateGroup\"INFO: analyzing \"public.Channels\"INFO: \"Channels\": scanned 1 of 1 pages, containing 7 live rows and 0 dead rows; 7 rows in sample, 7 estimated total rowsINFO: analyzing \"public.ClientCodes\"INFO: analyzing \"public.Clients\"INFO: \"Clients\": scanned 7 of 7 pages, containing 366 live rows and 0 dead rows; 366 rows in sample, 366 estimated total rowsINFO: analyzing \"public.Customers\"INFO: \"Customers\": scanned 2 of 2 pages, containing 129 live rows\n and 0 dead rows; 129 rows in sample, 129 estimated total rowsNFO: analyzing \"public.Databases\"INFO: \"Databases\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rowsINFO: analyzing \"public.DataSources\"INFO: \"DataSources\": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rowsINFO: analyzing \"public.DateToWeekConversion\"INFO: \"DateToWeekConversion\": scanned 4 of 4 pages, containing 371 live rows and 0 dead rows; 371 rows in sample, 371 estimated total rowsINFO: analyzing \"public.Divisions\"INFO: \"Divisions\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rowsINFO: analyzing \"public.MetricTable\"INFO: \"MetricTable\": scanned 1 of 1 pages, containing 48 live rows and 0 dead rows; 48 rows in sample, 48 estimated\n total rowsINFO: analyzing \"public.Offtake\"INFO: \"Offtake\": scanned 3000 of 13824 pages, containing 141000 live rows and 0 dead rows; 3000 rows in sample, 649728 estimated total rowsINFO: analyzing \"public.SKUs\"INFO: \"SKUs\": scanned 3 of 3 pages, containing 73 live rows and 0 dead rows; 73 rows in sample, 73 estimated total rowsINFO: analyzing 
\"public.SMSDefaults\"INFO: \"SMSDefaults\": scanned 1 of 1 pages, containing 43 live rows and 0 dead rows; 43 rows in sample, 43 estimated total rowsINFO: analyzing \"public.StandardPeriods\"INFO: \"StandardPeriods\": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rowsINFO: analyzing \"public.StandardUnits\"INFO: \"StandardUnits\": scanned 1 of 1 pages, containing 9 live rows and 0 dead rows; 9 rows in sample, 9 estimated total rowsINFO: analyzing\n \"public.SubDataSources\"INFO: \"SubDataSources\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rowsINFO: analyzing \"public.VolumeUnitDefn\"INFO: \"VolumeUnitDefn\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rowsINFO: analyzing \"public.VolumeUnits\"INFO: \"VolumeUnits\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rowsas you can see the biggest one only has 600k records. Here are the settings used for postgresql.conf ( will just list those that were modified) #---------------------------------------------------------------------------# RESOURCE USAGE (except WAL)#---------------------------------------------------------------------------# - Memory -#shared_buffers = 32MBshared_buffers = 128MB \n # min 128kB or max_connections*16kB # (change requires restart)#temp_buffers = 8MB # min 800kBtemp_buffers = 32MB#max_prepared_transactions = 5 # can be 0 or more # (change requires restart)#max_prepared_transactions = 20 # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory# per transaction slot, plus lock space (see max_locks_per_transaction).#work_mem = 1MB # min 64kBwork_mem = 2MB#maintenance_work_mem = 16MB # min\n 1MBmaintenance_work_mem = 32MB#max_stack_depth = 2MB # min 100kBEverything else was kept as is (given by the installer). /etc/sysctl.conf settings below :kernel.shmmax = 1536000000kernel.msgmni = 1024fs.file-max = 8192kernel.sem = \"250 32000 32 1024\"The main stored function has approximately 1200 lines of pl/pgsql code. While it is running it calls 7 other support stored functions plus a small view. The test basically was run two ways :a) for the linux box, we used PG Admin III to run the core stored function and in the windows box we used query analyzer. b) Created a small vb program that just calls the stored function for both boxes. Since I'm a total newbie in PG, I was expecting dismal results in the initial run since our port of the code would not be optimized for PG and sure enough I\n got them.Windows 2k Pro + MSDE - 4 seconds FC7 + postgresql-8.3-beta3 - 77 seconds thinking that maybe the GUI of FC7 is hampering the load, I decided to run it again using runlevel 3. The result was approximately a 1-2 % gain on the FC7 but that is insignificant compared to the discrepancy I'm measuring so I decided to just use the w/GUI results. We noticed that the CPU for the linux box was tapped out (at least one of the cores) nearly 100% for the duration of the process. There was plenty of ram available and there was little to no disk access during the entire run. I decided to further normalize the test and make the OS constant.Windows 2k Pro + MSDE - 4 secondsWindows 2k Pro + postgresql-8.3-beta3 - 54 secondsTurns out that for our code, running PG in windows is faster significantly. 
This was a surprise coming from all the research on the internet but it does jive with the\n results of this guy :http://channel9.msdn.com/ShowPost.aspx?PostID=179064Note that this guy used MS SQL 2005. Which we have proved to be 2-3 times slower than MS SQL 2000 and hence our research into other options. :)Anyways I'd like to break up this request/begging for help into two parts.1) First would be settings of postgresql.conf. Did I do it correctly? The sample data is so small....I based my settings on the recommendations researched for data centers.2) Code optimization which I plan to start in another email thread since the discussions there would be more detailed.Would it also make sense to optimize (as far as possible) everything (including the code) for windows first? The target here would be a linux OS but since the discrepancy is so big...the unified Windows OS might be a good place to start for\n now. Many Thanks in advance.Regards",
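A note on reproducing the timing measurement described above from the client side: psql's \timing toggle reports elapsed time per statement, and EXPLAIN ANALYZE also reports total execution time when a stored function is called through SELECT. The function name and arguments below are placeholders for the actual report procedure, not names from the original system.

    -- in psql, toggle client-side timing of each statement
    \timing
    -- call the (hypothetical) report function; EXPLAIN ANALYZE prints the total runtime
    EXPLAIN ANALYZE SELECT generate_report_file('2007-11-01', '2007-11-30');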
"msg_date": "Wed, 5 Dec 2007 00:13:09 -0800 (PST)",
"msg_from": "Robert Bernabe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)"
},
{
"msg_contents": "On Dec 5, 2007 1:13 PM, Robert Bernabe <[email protected]> wrote:\n\n> Anyways I'd like to break up this request/begging for help into two parts.\n>\n> 1) First would be settings of postgresql.conf. Did I do it correctly? The\n> sample data is so small....I based my settings on the recommendations\n> researched for data centers.\n>\n\ni think this would mainly depend on what do your stored procedures do, are\nthey writing stuff to the tables, or reading most of the time or something\nelse? i would imagine with small dataset the postgresql.conf settings should\nbe ok, but more can be told after looking at the code.\n\n\n>\n> 2) Code optimization which I plan to start in another email thread since\n> the discussions there would be more detailed.\n>\n\ni think that might be a better starting point. What are you trying to do\nand how.\n\n\n>\n>\n> Would it also make sense to optimize (as far as possible) everything\n> (including the code) for windows first? The target here would be a linux OS\n> but since the discrepancy is so big...the unified Windows OS might be a good\n> place to start for now.\n>\n\nSure, but i am not able to comprehend how the pl/pgsql could contain code\nwhich can tuned OS wise, i would think that any optimization you would do\nthere in the stored code would apply to all platforms.\n\n\n\n\n>\n>\n> Many Thanks in advance.\n>\n> Regards\n>\n>\n>\n>\n\n\n-- \nUsama Munir Dar http://linkedin.com/in/usamadar\nConsultant Architect\nCell:+92 321 5020666\nSkype: usamadar\n\nOn Dec 5, 2007 1:13 PM, Robert Bernabe <[email protected]> wrote:\nAnyways I'd like to break up this request/begging for help into two parts.1) First would be settings of postgresql.conf. Did I do it correctly? The sample data is so small....I based my settings on the recommendations researched for data centers.\ni think this would mainly depend on what do your stored procedures do, are they writing stuff to the tables, or reading most of the time or something else? i would imagine with small dataset the \npostgresql.conf settings should be ok, but more can be told after looking at the code.\n2) Code optimization which I plan to start in another email thread since the discussions there would be more detailed.\ni think that might be a better starting point. What are you trying to do and how. \nWould it also make sense to optimize (as far as possible) everything (including the code) for windows first? The target here would be a linux OS but since the discrepancy is so big...the unified Windows OS might be a good place to start for\n now. Sure, but i am not able to comprehend how the pl/pgsql could contain code which can tuned OS wise, i would think that any optimization you would do there in the stored code would apply to all platforms.\n \nMany Thanks in advance.Regards-- Usama Munir Dar http://linkedin.com/in/usamadar\nConsultant ArchitectCell:+92 321 5020666Skype: usamadar",
"msg_date": "Wed, 5 Dec 2007 14:06:35 +0500",
"msg_from": "\"Usama Dar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)"
},
{
"msg_contents": "In response to Robert Bernabe <[email protected]>:\n\n> Hi All,\n> I've been tasked to evaluate PG as a possible replacement of our\n> MS SQL 2000 solution. Our solution is 100% stored procedure/function\n> centric.\n\nI've trimmed 99% of your email out, because it's not relevant to my\nanswer.\n\nFact is, it's pretty much impossible for anyone to give specific help\nbecause you've obviously got a large, complex operation going on here,\nand have not provided any real details. The reality is that we'd\nprobably have to see your code to give any specific help.\n\nHowever, I can help you with an approach to fixing it. Based on your\ndescription of the problem, I would guess that there are some differences\nin best practices between MSSQL and PG that are what's hurting your\napplication once it's ported to PG. Basically, you just need to isolate\nthem and adjust.\n\nI recommend enabling full query logging with timing on the PG server.\nIn the postgresql.conf file, set the following:\nlog_min_duration_statement = 0\n\nNote that this will result in a LOT of log information being written,\nwhich will invariably make the application run even slower on PG, but\nfor tuning purposes it's invaluable as it will log every SQL statement\nissued with the time it took to run.\n\n From there, look for the low-hanging fruit. I recommend running your\ntests a few times, then running the logs through pgFouine:\nhttp://pgfouine.projects.postgresql.org/\n\nOnce you've identified the queries that are taking the most time, start\nadjusting the queries and/or the DB schema to improve the timing. In\nmy experience, you'll usually find 2 or 3 queries that are slowing the\nthing down, and the performance will come up to spec once they're\nrewritten (or appropriate indexes added, or whatever) EXPLAIN can\nbe your friend once you've found problematic queries.\n\nAnother piece of broadly useful advice is to install the pgbuffercache\naddon and monitor shared_buffer usage to see if you've got enough. Also\nuseful is monitoring the various statistics in the pg_stat_database\ntable.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Wed, 5 Dec 2007 13:25:44 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not\n 2005)"
},
{
"msg_contents": "On Dec 5, 2007 3:13 AM, Robert Bernabe <[email protected]> wrote:\n> Would it also make sense to optimize (as far as possible) everything\n> (including the code) for windows first? The target here would be a linux OS\n> but since the discrepancy is so big...the unified Windows OS might be a good\n> place to start for now.\n\nspeaking in very general terms, postgresql should be competitive with\nms sql in this type of application. there are a few things here and\nthere you have to watch out for...for example select count(*) from\ntable is slower on pg. another common thing is certain query forms\nthat you have to watch out for...but these issues are often addressed\nwith small adjustments.\n\nthe key here is to isolate specific things in your procedure that are\nunderperforming and to determine the answer why. to get the most\nbenefit from this list, try and post some particulars along with some\n'explain analyze' results.\n\nmerlin\n",
"msg_date": "Wed, 5 Dec 2007 15:43:44 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)"
},
{
"msg_contents": "On Dec 5, 2007 2:13 AM, Robert Bernabe <[email protected]> wrote:\n>\n> Hi All,\n> I've been tasked to evaluate PG as a possible replacement of our MS SQL\n> 2000 solution. Our solution is 100% stored procedure/function centric. It's\n> a report generation system whose sole task is to produce text files filled\n> with processed data that is post-processed by a secondary system. Basically\n> options are selected via a web interface and all these parameters are passed\n> unto the stored procedure and then the stored procedure would run and in the\n> process call other stored procedures until eventually a single formatted\n> text file is produced.\n> I decided on Fedora Core 7 and the 8.3 Beta release of Enterprise DB\n> PostgreSQL. I decided to port 1 stored procedure plus it's several support\n> stored procedures into pl/pgsql from T-SQL and compare the performance by\n\nNoble, but if you're a postgresql beginner, you might want to take a\npass on running beta code. You might be hitting a corner case,\nperformance wise, and never know it.\n\nA few pointers.\n1: Up your shared_buffers to 512M or so.\n2: Up work_mem to 16M\n\nNow, use the poor man's debugging tool for your stored procs, raise notice\n\n\ncreate or replace function testfunc() returns int as $$\nDECLARE\n tm text;\n cnt int;\nBEGIN\n select timeofday() into tm;\n RAISE NOTICE 'Time is now %',tm;\n select count(*) into cnt from accounts;\n select timeofday() into tm;\n RAISE NOTICE 'Time is now %',tm;\n RETURN 0;\nEND;\n$$ language plpgsql;\n\nOnce you've found what's running slow, narrow it down to a specific part.\n",
"msg_date": "Wed, 5 Dec 2007 16:20:16 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)"
},
{
"msg_contents": "On Wed, 2007-12-05 at 00:13 -0800, Robert Bernabe wrote:\n> Hi All,\n> I've been tasked to evaluate PG as a possible replacement of our\n> MS SQL 2000 solution. Our solution is 100% stored procedure/function\n> centric. It's a report generation system whose sole task is to produce\n> text files filled with processed data that is post-processed by a\n> secondary system. Basically options are selected via a web interface\n> and all these parameters are passed unto the stored procedure and then\n> the stored procedure would run and in the process call other stored\n> procedures until eventually a single formatted text file is produced. \n> I decided on Fedora Core 7 and the 8.3 Beta release of Enterprise\n> DB PostgreSQL. I decided to port 1 stored procedure plus it's several\n> support stored procedures into pl/pgsql from T-SQL and compare the\n> performance by measuring how long each system takes to produce the\n> text file. For this test, the output to the text file was discarded\n> and the stored procedure/function would end once the final temporary\n> table is filled with the information that is eventually dumped into\n> the text file. \n> \n> Windows 2000 Professional + MSDE (/MS SQL) Box vs. FC7 +\n> EnterpriseDB PG Box\n> \n> Note that both boxes have EXACTLY the same hardware (not VMWARE or\n> anything) \n> AMD X2 3800\n> 2 G RAM DDR 400\n> 80 G Barracuda Sata\n> \n> The data was copied to the Linux box and checked lightly for\n> consistency versus the windows box (number of tables / columns and\n> records) and they all match. After data transfer to the Linux Box, I\n> ran REINDEX and ANALYZE. \n> \n> For the actual run the following tables were used and I'm displaying\n> the results of analyze.\n> \n> INFO: analyzing \"public.AreaDefn\"\n> INFO: \"AreaDefn\": scanned 15 of 15 pages, containing 2293 live rows\n> and 0 dead rows; 2293 rows in sample, 2293 estimated total rows\n> INFO: analyzing \"public.AreaDefn2\"\n> INFO: \"AreaDefn2\": scanned 30 of 30 pages, containing 3439 live rows\n> and 0 dead rows; 3000 rows in sample, 3439 estimated total rows\n> INFO: analyzing \"public.Areas\"\n> INFO: \"Areas\": scanned 2 of 2 pages, containing 164 live rows and 0\n> dead rows; 164 rows in sample, 164 estimated total rows\n> INFO: analyzing \"public.Brands\"\n> INFO: \"Brands\": scanned 1 of 1 pages, containing 11 live rows and 0\n> dead rows; 11 rows in sample, 11 estimated total rows\n> INFO: analyzing \"public.Categories\"\n> INFO: \"Categories\": scanned 1 of 1 pages, containing 26 live rows and\n> 0 dead rows; 26 rows in sample, 26 estimated total rows\n> INFO: analyzing \"public.CategoryDefn\"\n> INFO: \"CategoryDefn\": scanned 1 of 1 pages, containing 133 live rows\n> and 0 dead rows; 133 rows in sample, 133 estimated total rows\n> INFO: analyzing \"public.CategoryDefn2\"\n> INFO: \"CategoryDefn2\": scanned 2 of 2 pages, containing 211 live rows\n> and 0 dead rows; 211 rows in sample, 211 estimated total rows\n> INFO: analyzing \"public.CategorySets\"\n> INFO: \"CategorySets\": scanned 1 of 1 pages, containing 3 live rows\n> and 0 dead rows; 3 rows in sample, 3 estimated total rows\n> INFO: analyzing \"public.CATemplateGroup\"\n> INFO: analyzing \"public.Channels\"\n> INFO: \"Channels\": scanned 1 of 1 pages, containing 7 live rows and 0\n> dead rows; 7 rows in sample, 7 estimated total rows\n> INFO: analyzing \"public.ClientCodes\"\n> INFO: analyzing \"public.Clients\"\n> INFO: \"Clients\": scanned 7 of 7 pages, containing 366 live rows and 0\n> dead rows; 366 
rows in sample, 366 estimated total rows\n> INFO: analyzing \"public.Customers\"\n> INFO: \"Customers\": scanned 2 of 2 pages, containing 129 live rows and\n> 0 dead rows; 129 rows in sample, 129 estimated total rows\n> NFO: analyzing \"public.Databases\"\n> INFO: \"Databases\": scanned 1 of 1 pages, containing 1 live rows and 0\n> dead rows; 1 rows in sample, 1 estimated total rows\n> INFO: analyzing \"public.DataSources\"\n> INFO: \"DataSources\": scanned 1 of 1 pages, containing 8 live rows and\n> 0 dead rows; 8 rows in sample, 8 estimated total rows\n> INFO: analyzing \"public.DateToWeekConversion\"\n> INFO: \"DateToWeekConversion\": scanned 4 of 4 pages, containing 371\n> live rows and 0 dead rows; 371 rows in sample, 371 estimated total\n> rows\n> INFO: analyzing \"public.Divisions\"\n> INFO: \"Divisions\": scanned 1 of 1 pages, containing 1 live rows and 0\n> dead rows; 1 rows in sample, 1 estimated total rows\n> INFO: analyzing \"public.MetricTable\"\n> INFO: \"MetricTable\": scanned 1 of 1 pages, containing 48 live rows\n> and 0 dead rows; 48 rows in sample, 48 estimated total rows\n> INFO: analyzing \"public.Offtake\"\n> INFO: \"Offtake\": scanned 3000 of 13824 pages, containing 141000 live\n> rows and 0 dead rows; 3000 rows in sample, 649728 estimated total rows\n> INFO: analyzing \"public.SKUs\"\n> INFO: \"SKUs\": scanned 3 of 3 pages, containing 73 live rows and 0\n> dead rows; 73 rows in sample, 73 estimated total rows\n> INFO: analyzing \"public.SMSDefaults\"\n> INFO: \"SMSDefaults\": scanned 1 of 1 pages, containing 43 live rows\n> and 0 dead rows; 43 rows in sample, 43 estimated total rows\n> INFO: analyzing \"public.StandardPeriods\"\n> INFO: \"StandardPeriods\": scanned 1 of 1 pages, containing 8 live rows\n> and 0 dead rows; 8 rows in sample, 8 estimated total rows\n> INFO: analyzing \"public.StandardUnits\"\n> INFO: \"StandardUnits\": scanned 1 of 1 pages, containing 9 live rows\n> and 0 dead rows; 9 rows in sample, 9 estimated total rows\n> INFO: analyzing \"public.SubDataSources\"\n> INFO: \"SubDataSources\": scanned 0 of 0 pages, containing 0 live rows\n> and 0 dead rows; 0 rows in sample, 0 estimated total rows\n> INFO: analyzing \"public.VolumeUnitDefn\"\n> INFO: \"VolumeUnitDefn\": scanned 0 of 0 pages, containing 0 live rows\n> and 0 dead rows; 0 rows in sample, 0 estimated total rows\n> INFO: analyzing \"public.VolumeUnits\"\n> INFO: \"VolumeUnits\": scanned 0 of 0 pages, containing 0 live rows and\n> 0 dead rows; 0 rows in sample, 0 estimated total rows\n> \n> as you can see the biggest one only has 600k records. 
\n> \n> Here are the settings used for postgresql.conf ( will just list those\n> that were modified) \n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> \n> # - Memory -\n> #shared_buffers = 32MB\n> shared_buffers = 128MB # min 128kB or max_connections*16kB\n> # (change requires restart)\n> #temp_buffers = 8MB # min 800kB\n> temp_buffers = 32MB\n> #max_prepared_transactions = 5 # can be 0 or more\n> # (change requires restart)\n> #max_prepared_transactions = 20 \n> # Note: increasing max_prepared_transactions costs ~600 bytes of\n> shared memory\n> # per transaction slot, plus lock space (see\n> max_locks_per_transaction).\n> #work_mem = 1MB # min 64kB\n> work_mem = 2MB\n> #maintenance_work_mem = 16MB # min 1MB\n> maintenance_work_mem = 32MB\n> #max_stack_depth = 2MB # min 100kB\n> \n> \n> Everything else was kept as is (given by the installer). \n> \n> /etc/sysctl.conf settings below :\n> kernel.shmmax = 1536000000\n> kernel.msgmni = 1024\n> fs.file-max = 8192\n> kernel.sem = \"250 32000 32 1024\"\n> \n> The main stored function has approximately 1200 lines of pl/pgsql\n> code. While it is running it calls 7 other support stored functions\n> plus a small view. \n> \n> The test basically was run two ways :\n> \n> a) for the linux box, we used PG Admin III to run the core stored\n> function and in the windows box we used query analyzer. \n> \n> b) Created a small vb program that just calls the stored function for\n> both boxes. \n> \n> Since I'm a total newbie in PG, I was expecting dismal results in\n> the initial run since our port of the code would not be optimized for\n> PG and sure enough I got them.\n> \n> Windows 2k Pro + MSDE - 4 seconds \n> FC7 + postgresql-8.3-beta3 - 77 seconds\n> \n> thinking that maybe the GUI of FC7 is hampering the load, I\n> decided to run it again using runlevel 3. The result was approximately\n> a 1-2 % gain on the FC7 but that is insignificant compared to the\n> discrepancy I'm measuring so I decided to just use the w/GUI results.\n> We noticed that the CPU for the linux box was tapped out (at least one\n> of the cores) nearly 100% for the duration of the process. There was\n> plenty of ram available and there was little to no disk access during\n> the entire run. \n> \n> I decided to further normalize the test and make the OS constant.\n> \n> Windows 2k Pro + MSDE - 4 seconds\n> Windows 2k Pro + postgresql-8.3-beta3 - 54 seconds\n> \n> Turns out that for our code, running PG in windows is faster\n> significantly. This was a surprise coming from all the research on the\n> internet but it does jive with the results of this guy :\n> \n> http://channel9.msdn.com/ShowPost.aspx?PostID=179064\n> \n> Note that this guy used MS SQL 2005. Which we have proved to be 2-3\n> times slower than MS SQL 2000 and hence our research into other\n> options. :)\n> \n> Anyways I'd like to break up this request/begging for help into two\n> parts.\n> \n> 1) First would be settings of postgresql.conf. Did I do it correctly?\n> The sample data is so small....I based my settings on the\n> recommendations researched for data centers.\n> \n> 2) Code optimization which I plan to start in another email thread\n> since the discussions there would be more detailed.\n> \n> Would it also make sense to optimize (as far as possible) everything\n> (including the code) for windows first? 
The target here would be a\n> linux OS but since the discrepancy is so big...the unified Windows OS\n> might be a good place to start for now. \n> \n> Many Thanks in advance.\n> \n> Regards\n\n\nHi Robert,\n\nAssuming that you've transferred across all relevant indices, the\nbiggest gotcha I've found from porting stored procedures is forgetting\nto mark them STABLE or IMMUTABLE where relevant (see\nhttp://www.postgresql.org/docs/8.3/static/sql-createfunction.html for\nmore details). Without these function markers, PostgreSQL assumes that\nthe functions are VOLATILE which severely restricts their ability to be\noptimised by the planner.\n\nBTW you mention both EnterpriseDB PostgreSQL 8.3 beta and just\nPostgreSQL 8.3 beta in the text above. Both of these are different -\nwhich one are you actually using?\n\n\nKind regards,\n\nMark.\n\n-- \nILande - Open Source Consultancy\nhttp://www.ilande.co.uk\n\n\n",
"msg_date": "Thu, 06 Dec 2007 07:25:12 +0000",
"msg_from": "Mark Cave-Ayland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)"
},
{
"msg_contents": "Mark Cave-Ayland wrote:\n> BTW you mention both EnterpriseDB PostgreSQL 8.3 beta and just\n> PostgreSQL 8.3 beta in the text above. Both of these are different -\n> which one are you actually using?\n\nNo they're not. EnterpriseDB Postgres ships entirely standard binaries -\nin fact, the Windows build uses the exact same binaries I build for the\ncommunity installer.\n\nEnterpriseDB Postgres is essentially a packaging and bundling project in\nwhich the aim is to provide consistent and easy to use installers for\nWindows, Mac and Linux that allow users to get started with Postgres,\nSlony, PostGIS, pgAdmin, phpPgAdmin etc...\n\nEnterpriseDB Advanced Server is the entirely different product - thats\nthe one that includes the Oracle compatibility and replication/migration\ntools etc.\n\nRegards, Dave.\n",
"msg_date": "Thu, 06 Dec 2007 08:50:56 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not\n 2005)"
},
{
"msg_contents": "On Thu, 2007-12-06 at 08:50 +0000, Dave Page wrote:\n\n> EnterpriseDB Postgres is essentially a packaging and bundling project in\n> which the aim is to provide consistent and easy to use installers for\n> Windows, Mac and Linux that allow users to get started with Postgres,\n> Slony, PostGIS, pgAdmin, phpPgAdmin etc...\n> \n> EnterpriseDB Advanced Server is the entirely different product - thats\n> the one that includes the Oracle compatibility and replication/migration\n> tools etc.\n\nAh indeed - my mistake for not realising EnterpriseDB Postgres was a\ndifferent product from EnterpriseDB Advanced Server\n(postgres.enterprisedb.com turned out to be quite an enlightening read).\n\n\nATB,\n\nMark.\n\n-- \nILande - Open Source Consultancy\nhttp://www.ilande.co.uk\n\n\n",
"msg_date": "Thu, 06 Dec 2007 20:10:08 +0000",
"msg_from": "Mark Cave-Ayland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not\n\t2005)"
}
] |
[
{
"msg_contents": "\nI don't know if this is true in this case, but transaction level can be \ndifferent, in mssql it is normally something like\nTRANSACTION_READ_UNCOMMITTED\nin postgres\nTRANSACTION_READ_COMMITTED\nand that makes huge difference in performance.\n\nother thing can be the queries in procedures, if you use same queries, \nperformance can be very bad. databases handles queries differently, and \nchanging query can drop execution times to 1/100th easily.\n\nIsmo\n\nOn Wed, 5 Dec 2007, Robert Bernabe wrote:\n\n> Hi All,\n> I've been tasked to evaluate PG as a possible replacement of our MS SQL 2000 solution. Our solution is 100% stored procedure/function centric. It's a report generation system whose sole task is to produce text files filled with processed data that is post-processed by a secondary system. Basically options are selected via a web interface and all these parameters are passed unto the stored procedure and then the stored procedure would run and in the process call other stored procedures until eventually a single formatted text file is produced. \n> I decided on Fedora Core 7 and the 8.3 Beta release of Enterprise DB PostgreSQL. I decided to port 1 stored procedure plus it's several support stored procedures into pl/pgsql from T-SQL and compare the performance by measuring how long each system takes to produce the text file. For this test, the output to the text file was discarded and the stored procedure/function would end once the final temporary table is filled with the information that is eventually dumped into the text file. \n> \n> Windows 2000 Professional + MSDE (/MS SQL) Box vs. FC7 + EnterpriseDB PG Box\n> \n> Note that both boxes have EXACTLY the same hardware (not VMWARE or anything) \n> AMD X2 3800\n> 2 G RAM DDR 400\n> 80 G Barracuda Sata\n> \n> The data was copied to the Linux box and checked lightly for consistency versus the windows box (number of tables / columns and records) and they all match. After data transfer to the Linux Box, I ran REINDEX and ANALYZE. 
\n> \n> For the actual run the following tables were used and I'm displaying the results of analyze.\n> \n> INFO: analyzing \"public.AreaDefn\"\n> INFO: \"AreaDefn\": scanned 15 of 15 pages, containing 2293 live rows and 0 dead rows; 2293 rows in sample, 2293 estimated total rows\n> INFO: analyzing \"public.AreaDefn2\"\n> INFO: \"AreaDefn2\": scanned 30 of 30 pages, containing 3439 live rows and 0 dead rows; 3000 rows in sample, 3439 estimated total rows\n> INFO: analyzing \"public.Areas\"\n> INFO: \"Areas\": scanned 2 of 2 pages, containing 164 live rows and 0 dead rows; 164 rows in sample, 164 estimated total rows\n> INFO: analyzing \"public.Brands\"\n> INFO: \"Brands\": scanned 1 of 1 pages, containing 11 live rows and 0 dead rows; 11 rows in sample, 11 estimated total rows\n> INFO: analyzing \"public.Categories\"\n> INFO: \"Categories\": scanned 1 of 1 pages, containing 26 live rows and 0 dead rows; 26 rows in sample, 26 estimated total rows\n> INFO: analyzing \"public.CategoryDefn\"\n> INFO: \"CategoryDefn\": scanned 1 of 1 pages, containing 133 live rows and 0 dead rows; 133 rows in sample, 133 estimated total rows\n> INFO: analyzing \"public.CategoryDefn2\"\n> INFO: \"CategoryDefn2\": scanned 2 of 2 pages, containing 211 live rows and 0 dead rows; 211 rows in sample, 211 estimated total rows\n> INFO: analyzing \"public.CategorySets\"\n> INFO: \"CategorySets\": scanned 1 of 1 pages, containing 3 live rows and 0 dead rows; 3 rows in sample, 3 estimated total rows\n> INFO: analyzing \"public.CATemplateGroup\"\n> INFO: analyzing \"public.Channels\"\n> INFO: \"Channels\": scanned 1 of 1 pages, containing 7 live rows and 0 dead rows; 7 rows in sample, 7 estimated total rows\n> INFO: analyzing \"public.ClientCodes\"\n> INFO: analyzing \"public.Clients\"\n> INFO: \"Clients\": scanned 7 of 7 pages, containing 366 live rows and 0 dead rows; 366 rows in sample, 366 estimated total rows\n> INFO: analyzing \"public.Customers\"\n> INFO: \"Customers\": scanned 2 of 2 pages, containing 129 live rows and 0 dead rows; 129 rows in sample, 129 estimated total rows\n> NFO: analyzing \"public.Databases\"\n> INFO: \"Databases\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows\n> INFO: analyzing \"public.DataSources\"\n> INFO: \"DataSources\": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rows\n> INFO: analyzing \"public.DateToWeekConversion\"\n> INFO: \"DateToWeekConversion\": scanned 4 of 4 pages, containing 371 live rows and 0 dead rows; 371 rows in sample, 371 estimated total rows\n> INFO: analyzing \"public.Divisions\"\n> INFO: \"Divisions\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows\n> INFO: analyzing \"public.MetricTable\"\n> INFO: \"MetricTable\": scanned 1 of 1 pages, containing 48 live rows and 0 dead rows; 48 rows in sample, 48 estimated total rows\n> INFO: analyzing \"public.Offtake\"\n> INFO: \"Offtake\": scanned 3000 of 13824 pages, containing 141000 live rows and 0 dead rows; 3000 rows in sample, 649728 estimated total rows\n> INFO: analyzing \"public.SKUs\"\n> INFO: \"SKUs\": scanned 3 of 3 pages, containing 73 live rows and 0 dead rows; 73 rows in sample, 73 estimated total rows\n> INFO: analyzing \"public.SMSDefaults\"\n> INFO: \"SMSDefaults\": scanned 1 of 1 pages, containing 43 live rows and 0 dead rows; 43 rows in sample, 43 estimated total rows\n> INFO: analyzing \"public.StandardPeriods\"\n> INFO: \"StandardPeriods\": 
scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rows\n> INFO: analyzing \"public.StandardUnits\"\n> INFO: \"StandardUnits\": scanned 1 of 1 pages, containing 9 live rows and 0 dead rows; 9 rows in sample, 9 estimated total rows\n> INFO: analyzing \"public.SubDataSources\"\n> INFO: \"SubDataSources\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n> INFO: analyzing \"public.VolumeUnitDefn\"\n> INFO: \"VolumeUnitDefn\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n> INFO: analyzing \"public.VolumeUnits\"\n> INFO: \"VolumeUnits\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n> \n> as you can see the biggest one only has 600k records. \n> \n> Here are the settings used for postgresql.conf ( will just list those that were modified) \n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> \n> # - Memory -\n> #shared_buffers = 32MB\n> shared_buffers = 128MB # min 128kB or max_connections*16kB\n> # (change requires restart)\n> #temp_buffers = 8MB # min 800kB\n> temp_buffers = 32MB\n> #max_prepared_transactions = 5 # can be 0 or more\n> # (change requires restart)\n> #max_prepared_transactions = 20 \n> # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> #work_mem = 1MB # min 64kB\n> work_mem = 2MB\n> #maintenance_work_mem = 16MB # min 1MB\n> maintenance_work_mem = 32MB\n> #max_stack_depth = 2MB # min 100kB\n> \n> \n> Everything else was kept as is (given by the installer). \n> \n> /etc/sysctl.conf settings below :\n> kernel.shmmax = 1536000000\n> kernel.msgmni = 1024\n> fs.file-max = 8192\n> kernel.sem = \"250 32000 32 1024\"\n> \n> The main stored function has approximately 1200 lines of pl/pgsql code. While it is running it calls 7 other support stored functions plus a small view. \n> \n> The test basically was run two ways :\n> \n> a) for the linux box, we used PG Admin III to run the core stored function and in the windows box we used query analyzer. \n> \n> b) Created a small vb program that just calls the stored function for both boxes. \n> \n> Since I'm a total newbie in PG, I was expecting dismal results in the initial run since our port of the code would not be optimized for PG and sure enough I got them.\n> \n> Windows 2k Pro + MSDE - 4 seconds \n> FC7 + postgresql-8.3-beta3 - 77 seconds\n> \n> thinking that maybe the GUI of FC7 is hampering the load, I decided to run it again using runlevel 3. The result was approximately a 1-2 % gain on the FC7 but that is insignificant compared to the discrepancy I'm measuring so I decided to just use the w/GUI results. We noticed that the CPU for the linux box was tapped out (at least one of the cores) nearly 100% for the duration of the process. There was plenty of ram available and there was little to no disk access during the entire run. \n> \n> I decided to further normalize the test and make the OS constant.\n> \n> Windows 2k Pro + MSDE - 4 seconds\n> Windows 2k Pro + postgresql-8.3-beta3 - 54 seconds\n> \n> Turns out that for our code, running PG in windows is faster significantly. 
This was a surprise coming from all the research on the internet but it does jive with the results of this guy :\n> \n> http://channel9.msdn.com/ShowPost.aspx?PostID=179064\n> \n> Note that this guy used MS SQL 2005. Which we have proved to be 2-3 times slower than MS SQL 2000 and hence our research into other options. :)\n> \n> Anyways I'd like to break up this request/begging for help into two parts.\n> \n> 1) First would be settings of postgresql.conf. Did I do it correctly? The sample data is so small....I based my settings on the recommendations researched for data centers.\n> \n> 2) Code optimization which I plan to start in another email thread since the discussions there would be more detailed.\n> \n> Would it also make sense to optimize (as far as possible) everything (including the code) for windows first? The target here would be a linux OS but since the discrepancy is so big...the unified Windows OS might be a good place to start for now. \n> \n> Many Thanks in advance.\n> \n> Regards\n> \n> \n> \n> \n\n",
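For what it's worth, the effective isolation level is easy to check and to set per transaction; PostgreSQL accepts the READ UNCOMMITTED syntax for compatibility but actually runs such transactions as READ COMMITTED, so there is no dirty-read mode to fall back on.

    SHOW default_transaction_isolation;

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    -- ... statements from the report procedure ...
    COMMIT;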
"msg_date": "Wed, 5 Dec 2007 10:32:52 +0200 (EET)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not\n 2005)"
}
] |
[
{
"msg_contents": "I think you may increase the row number that you want to limit, like LIMIT\n50.\n\nLIMIT can change the cost of a plan dramatically. Looking in your SQL:\n\n where this_.fkaddressid= 6664161\n order by this_.addressvaluationid desc limit 1;\n\nPlanner may use either index1(this_.fkaddressid) or\nindex2(this_.addressvaluationid) to scan the table. Although it is obvious\nthat using index2 is very expensive, but because you are trying to limit one\nrow from 1304 row, so the cost of using index2 changes to\n\n 883328/1304=677.69\n\nThe cost of using index1 should be lager than 1304, so planner chooses\nindex2.\n\nPlanner tends to choose a plan which has small startup cost when you are\ntrying to LIMIT a small portion of data over a large data set. It seems that\nthe following issue also comes from the same root.\n\n http://archives.postgresql.org/pgsql-performance/2007-11/msg00395.php\nBest Regards\nGaly Lee\n\n\n> Tom Lane wrote:\n>\n> Pallav Kalva <pkalva ( at ) livedatagroup ( dot ) com> writes:\n>\n> why does it have different plans for different values\n>\n> Because the values occur different numbers of times (or so it thinks\n> anyway). If the rowcount estimates are far from reality, perhaps\n> increasing the statistics target would help. However, since you\n> only showed EXPLAIN and not EXPLAIN ANALYZE output, no one can\n> really tell whether the optimizer did anything wrong here.\n>\n> \t\t\tregards, tom lane\n>\n> Hi Tom,\n>\n>\n> Thanks! for your reply, here is an another example of the same query with\n> different addressid now. This time I got the explain analyze on the query,\n> this query also uses the Index Scan Backwards, it says it took 28 seconds\n> but I can say that after looking at the postgres logs it took more than 2\n> min when the query first ran. I ran this one again to get the explain analyze.\n>\n>\n> The statistics set to \"default_statistics_target = 100\"\n>\n>\n> I am sure if it uses index on addressid it would be quicker but for some\n> reason it using index backward scan on addressvaluationid and that is\n> taking too long.\n>\n> Not only this one there are some other queries which use index scan backwards\n> scan and it takes too long. 
Index scan backwards most of the time is not\n> doing good for me is there any way to avoid it ?\n>\n> explain analyze\n>\n> select this_.addressvaluationid as addressv1_150_1_, this_.sourcereference\n> as sourcere2_150_1_, this_.createdate as createdate150_1_,\n> this_.valuationdate as valuatio4_150_1_, this_.valuationamount as\n> valuatio5_150_1_, this_.valuationhigh as valuatio6_150_1_,\n> this_.valuationlow as valuatio7_150_1_, this_.valuationconfidence as\n> valuatio8_150_1_, this_.valuationchange as valuatio9_150_1_, this_.historycharturl\n> as history10_150_1_, this_.regionhistorycharturl as regionh11_150_1_, this_.fkaddressid\n> as fkaddre12_150_1_, this_.fkaddressvaluationsourceid as fkaddre13_150_1_,\n> this_.fkvaluationchangeperiodid as fkvalua14_150_1_, valuationc2_.valuationchangeperiodid\n> as valuatio1_197_0_,\n>\n> valuationc2_.name as name197_0_\n>\n> from listing.addressvaluation this_ left outer join\n> listing.valuationchangeperiod valuationc2_ on this_.fkvaluationchangeperiodid=valuationc2_.valuationchangeperiodid\n>\n>\n> where this_.fkaddressid= 6664161\n> order by this_.addressvaluationid desc limit 1;\n>\n> QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..677.69 rows=1 width=494) (actual time=\n> 28454.708..28454.712 rows=1 loops=1) -> Nested Loop Left Join (cost=\n> 0.00..883705.44 rows=1304 width=494) (actual time=28454.700..28454.700rows=1 loops=1) ->\n> Index Scan Backward using pk_addressvaluation_addressvaluationid on\n> addressvaluation this_ (cost=0.00..883328.22 rows=1304 width=482) (actual\n> time=28441.236..28441.236 rows=1 loops=1)\n>\n> Filter: (fkaddressid = 6664161)\n>\n> -> Index Scan using pk_valuationchangeperiod_valuationchangeperiodid on valuationchangeperiod\n> valuationc2_ (cost=0.00..0.28 rows=1 width=12) (actual time=13.447..13.447rows=1 loops=1) Index\n> Cond: (this_.fkvaluationchangeperiodid = valuationc2_.valuationchangeperiodid)\n>\n>\n> Total runtime: 28454.789 ms\n> (7 rows)\n>\n>\n>\n\n \nI think you may increase the row number that you want to limit, like LIMIT 50.\n \nLIMIT can change the cost of a plan dramatically. Looking in your SQL: where this_.fkaddressid= 6664161 order by this_.addressvaluationid desc limit 1;\n \nPlanner may use either index1(this_.fkaddressid) or index2(this_.addressvaluationid) to scan the table. Although it is obvious that using index2 is very expensive, but because you are trying to limit one row from 1304 row, so the cost of using index2 changes to \n\n 883328/1304=677.69\nThe cost of using index1 should be lager than 1304, so planner chooses index2. \nPlanner tends to choose a plan which has small startup cost when you are trying to LIMIT a small portion of data over a large data set. It seems that the following issue also comes from the same root.\n\n http://archives.postgresql.org/pgsql-performance/2007-11/msg00395.php\n\nBest Regards\nGaly Lee \n\nTom Lane wrote:\n\nPallav Kalva <pkalva ( at ) livedatagroup ( dot ) com> writes:\n\n why does it have different plans for different values\nBecause the values occur different numbers of times (or so it thinks\nanyway). If the rowcount estimates are far from reality, perhaps\nincreasing the statistics target would help. 
However, since you\nonly showed EXPLAIN and not EXPLAIN ANALYZE output, no one can\nreally tell whether the optimizer did anything wrong here.\n\n\t\t\tregards, tom lane\nHi Tom,\n\nThanks! for your reply, here is an another example of the same query with different addressid now. This time I got the explain analyze on the query, this query also uses the Index Scan Backwards, it says it took 28 \nseconds but I can say that after looking at the postgres logs it took more than 2 min when the query first ran. I ran this one again to get the explain analyze. \n The statistics set to \"default_statistics_target = 100\"\n\nI am sure if it uses index on addressid it would be quicker but for some reason it using index backward scan on addressvaluationid and that is taking too long. \nNot only this one there are some other queries which use index scan backwards scan and it takes too long. Index scan backwards most of the time is not doing good for me is there any way to avoid it ? \nexplain analyze\nselect this_.addressvaluationid as addressv1_150_1_, this_.sourcereference as sourcere2_150_1_, this_.createdate as createdate150_1_, this_.valuationdate as valuatio4_150_1_, this_.valuationamount as valuatio5_150_1_, \nthis_.valuationhigh as valuatio6_150_1_, this_.valuationlow as valuatio7_150_1_, this_.valuationconfidence as valuatio8_150_1_, this_.valuationchange as valuatio9_150_1_, this_.historycharturl as history10_150_1_, \nthis_.regionhistorycharturl as regionh11_150_1_, this_.fkaddressid as fkaddre12_150_1_, this_.fkaddressvaluationsourceid as fkaddre13_150_1_, this_.fkvaluationchangeperiodid as fkvalua14_150_1_, \nvaluationc2_.valuationchangeperiodid as valuatio1_197_0_, valuationc2_.name as name197_0_\nfrom listing.addressvaluation this_ left outer join listing.valuationchangeperiod valuationc2_ on this_.fkvaluationchangeperiodid=valuationc2_.valuationchangeperiodid \nwhere this_.fkaddressid= 6664161\norder by this_.addressvaluationid desc limit 1;\nQUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ \nLimit (cost=0.00..677.69 rows=1 width=494) (actual time=28454.708..28454.712 rows=1 loops=1) -> Nested Loop Left Join (cost=0.00..883705.44 rows=1304 width=494) (actual time=28454.700..28454.700\n rows=1 loops=1) -> Index Scan Backward using pk_addressvaluation_addressvaluationid on addressvaluation this_ (cost=0.00..883328.22 rows=1304 width=482) (actual time=28441.236..28441.236\n rows=1 loops=1) Filter: (fkaddressid = 6664161)\n-> Index Scan using pk_valuationchangeperiod_valuationchangeperiodid on valuationchangeperiod valuationc2_ (cost=0.00..0.28 rows=1 width=12) (actual time=13.447..13.447 rows=1 loops=1) \nIndex Cond: (this_.fkvaluationchangeperiodid = valuationc2_.valuationchangeperiodid) Total runtime: 28454.789 ms\n(7 rows)",
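To spell out where 677.69 comes from, using the numbers in the plan itself: it is the estimated total cost of the join divided by the expected row count, 883705.44 / 1304 ≈ 677.69. The planner assumes the matching rows are spread evenly through the index, so it expects to stop the backward scan after roughly 1/1304th of it. Besides raising the LIMIT, a two-column index such as the hypothetical one below usually lets this query fetch the newest row for a single fkaddressid directly instead of walking the primary-key index; the index name is made up, the table and columns are taken from the query above.

    CREATE INDEX addressvaluation_fkaddressid_valuationid_idx
        ON listing.addressvaluation (fkaddressid, addressvaluationid);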
"msg_date": "Wed, 5 Dec 2007 21:49:33 +0900",
"msg_from": "\"galy lee\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer Not using the Right plan"
}
] |
[
{
"msg_contents": "hi i need to know all the database overhead sizes and block header sizes etc \netc as I have a very complex database to build and it needs to be speed \ntuned beyond reckoning\n\n\n\nI have gathered some relevant information form the documentation such as all \nthe data type sizes and the RM block information but I don't have any \ninformation on INDEX blocks or other general overheads\n\n\n\nhttp://www.peg.com/techpapers/monographs/space/space.html\n\n\n\nhttp://www.postgresql.org/docs/8.1/static/datatype.html\n\n\n\nI am using postgres 8.1 if anyone can post links to pages containing over \nhead information and index block header information it would be most \nappreciated as I cannot seem to find anything\n\n\n\nRegards\n\nKelvan\n\n\n",
"msg_date": "Fri, 7 Dec 2007 12:45:26 +1200",
"msg_from": "\"kelvan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "database tuning"
},
{
"msg_contents": "kelvan wrote:\n> hi i need to know all the database overhead sizes and block header sizes etc \n> etc as I have a very complex database to build and it needs to be speed \n> tuned beyond reckoning\n\n[snip]\n\n> I am using postgres 8.1 if anyone can post links to pages containing over \n> head information and index block header information it would be most \n> appreciated as I cannot seem to find anything\n\nI'd look to the source if you care that strongly. Don't rely on any info \nfound on the internet unless it explicitly mentions 8.1 - these things \nchange. Have a look in \"backend/storage/\" and \"backend/access/\" I'd \nguess (not a hacker myself).\n\n\nSome thoughts though:\n1. If you care that strongly about performance, start building it with 8.3\n\n2. Does your testing show that index storage overheads are/will be a \nproblem? If not, I'd concentrate on the testing to make sure you've \nidentified the bottlenecks first.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Dec 2007 08:12:55 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "On Fri, 2007-12-07 at 12:45 +1200, kelvan wrote:\n\n> hi i need to know all the database overhead sizes and block header sizes etc \n> etc as I have a very complex database to build and it needs to be speed \n> tuned beyond reckoning\n\nIf your need-for-speed is so high, I would suggest using 8.3 or at least\nlooking at the 8.3 documentation.\n\nThis release is very nearly production and is much faster than 8.1 or\n8.2. You may not have realised that Postgres dot releases are actually\nmajor releases and have significant speed differences.\n\nThere's not much to be done about the overheads you mention, so best to\nconcentrate your efforts on index planning for your most frequently\nexecuted queries.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 07 Dec 2007 08:39:20 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "\n\"Simon Riggs\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Fri, 2007-12-07 at 12:45 +1200, kelvan wrote:\n>\n>> hi i need to know all the database overhead sizes and block header sizes \n>> etc\n>> etc as I have a very complex database to build and it needs to be speed\n>> tuned beyond reckoning\n>\n> If your need-for-speed is so high, I would suggest using 8.3 or at least\n> looking at the 8.3 documentation.\n>\n> This release is very nearly production and is much faster than 8.1 or\n> 8.2. You may not have realised that Postgres dot releases are actually\n> major releases and have significant speed differences.\n>\n> There's not much to be done about the overheads you mention, so best to\n> concentrate your efforts on index planning for your most frequently\n> executed queries.\n>\n> -- \n> Simon Riggs\n> 2ndQuadrant http://www.2ndQuadrant.com\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n\n\"Simon Riggs\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Fri, 2007-12-07 at 12:45 +1200, kelvan wrote:\n>\n>> hi i need to know all the database overhead sizes and block header sizes \n>> etc\n>> etc as I have a very complex database to build and it needs to be speed\n>> tuned beyond reckoning\n>\n> If your need-for-speed is so high, I would suggest using 8.3 or at least\n> looking at the 8.3 documentation.\n>\n> This release is very nearly production and is much faster than 8.1 or\n> 8.2. You may not have realised that Postgres dot releases are actually\n> major releases and have significant speed differences.\n>\n> There's not much to be done about the overheads you mention, so best to\n> concentrate your efforts on index planning for your most frequently\n> executed queries.\n>\n> -- \n> Simon Riggs\n> 2ndQuadrant http://www.2ndQuadrant.com\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\nok heres the thing i dont have a choice i just have to work with whats given \nwhether it is good or not why i need these overheads is for block \ncalculations and and tablespace calculations i have to keep everything in a \nvery very small area on the hdd for head reading speed as the server i am \nforced to use is a peice of crap so i need to do my calculations to resolve \nthis\n\nit is not that i dont know how to do my job i understand effective indexing \nmaterlized views and all other effects of database tuning is was my major \naspect in my study i just need to know the numbers to do what i have to do.\n\ni am new to postgres i have used many other database management systems i \nknow the over heads for all of them just not this one if someone could \nplease be of assisstance.\n\nlet me give a breef outlay of what i have without breaking my confidentality \nagreement\n\nmac server mac os 10.x\npostgres 8.2.5 (appologies i just got updated documentation with errors \nfixed in it)\n70gig hdd\n5 gig ram\n4 cpus (not that it matters as postgres is not multi threading)\n\nand i have to support approxmatally anywhere from 5000 - 30000 users all \nusing it concurentally\n\nas you can see this server wouldnt be my first choice (or my last choice) \nbut as i said i have not choice at this time.\nthe interface programmer and i have come up with ways to solve certian \nproblems in preformance that this server produces but i still 
need to tune \nthe database\n\nif you need any other information for someone to give me the overheads then \nplease ask but i may not be able to tell you\n\nregards\nkelvan \n\n\n",
"msg_date": "Sat, 8 Dec 2007 07:13:36 +1200",
"msg_from": "\"kelvan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "kelvan wrote:\n> ok heres the thing i dont have a choice i just have to work with whats given \n\nAh well, it happens to all of us.\n\n> whether it is good or not why i need these overheads is for block \n> calculations and and tablespace calculations i have to keep everything in a \n> very very small area on the hdd for head reading speed as the server i am \n> forced to use is a peice of crap so i need to do my calculations to resolve \n> this\n\nOut of curiosity, how are you planning to keep the relevant parts of \nPostgreSQL's files at a particular physical location on the disk? I \nwasn't aware of any facilities in Mac-OS X for this.\n\n> it is not that i dont know how to do my job i understand effective indexing \n> materlized views and all other effects of database tuning is was my major \n> aspect in my study i just need to know the numbers to do what i have to do.\n\nFair enough. See the source-code for full details - start with those \ndirectories I mentioned before.\n\n> i am new to postgres i have used many other database management systems i \n> know the over heads for all of them just not this one if someone could \n> please be of assisstance.\n> \n> let me give a breef outlay of what i have without breaking my confidentality \n> agreement\n> \n> mac server mac os 10.x\n> postgres 8.2.5 (appologies i just got updated documentation with errors \n> fixed in it)\n> 70gig hdd\n> 5 gig ram\n> 4 cpus (not that it matters as postgres is not multi threading)\n\nHmm - Not enough RAM or disks, too many cpus but you knew that anyway. \nOh, and PG *will* use all 4 CPUs, just one per backend - not all 4 for a \nsingle query. Not a problem in your case.\n\n> and i have to support approxmatally anywhere from 5000 - 30000 users all \n> using it concurentally\n\nHmm 30,000 concurrent users, 5GB RAM = 175kB per user. Not going to \nwork. You'll want more than that for each connection even if it's \nbasically idle.\n\nEven if you don't run out of RAM, I can't see how a single disk could \nkeep up with even a moderate rate of updates from that many users. \nPresumably largely read-only?\n\nMaybe you mean 30,000 web-users behind a connection-pool?\n\nHow many users have you reached in your testing?\n\n> as you can see this server wouldnt be my first choice (or my last choice) \n> but as i said i have not choice at this time.\n> the interface programmer and i have come up with ways to solve certian \n> problems in preformance that this server produces but i still need to tune \n> the database\n\nI don't think it's clear as to how you intend to tune the database with \nindex page-layout details, particularly since you say you are new to \nPostgreSQL.\n\nFor example, with your above requirements, I'd be particularly concerned \nabout four things:\n 1. shared_buffers\n 2. work_mem\n 3. Trading off 1+2 vs the risk of swap\n 4. WAL activity / checkpointing impacting on my single disk\n\nIt would be interesting to see what conclusions you reached on these, \ngiven that you're pushing the hardware to its limits. Can you share the \nresults of your testing on these?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 10 Dec 2007 09:31:02 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
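A quick way to see where the four settings Richard lists currently sit is to ask the server directly; this is a minimal sketch using stock 8.2 parameter names, with no claim about what the right values are for this workload:

SHOW shared_buffers;
SHOW work_mem;
SHOW max_connections;
SHOW checkpoint_segments;

-- Same information in one result set
SELECT name, setting
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'max_connections', 'checkpoint_segments');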
{
"msg_contents": "On Dec 7, 2007 1:13 PM, kelvan <[email protected]> wrote:\n\n> ok heres the thing i dont have a choice i just have to work with whats given\n> whether it is good or not why i need these overheads is for block\n> calculations and and tablespace calculations i have to keep everything in a\n> very very small area on the hdd for head reading speed as the server i am\n> forced to use is a peice of crap so i need to do my calculations to resolve\n> this\n>\n> it is not that i dont know how to do my job i understand effective indexing\n> materlized views and all other effects of database tuning is was my major\n> aspect in my study i just need to know the numbers to do what i have to do.\n>\n> i am new to postgres i have used many other database management systems i\n> know the over heads for all of them just not this one if someone could\n> please be of assisstance.\n>\n> let me give a breef outlay of what i have without breaking my confidentality\n> agreement\n>\n> mac server mac os 10.x\n> postgres 8.2.5 (appologies i just got updated documentation with errors\n> fixed in it)\n> 70gig hdd\n> 5 gig ram\n> 4 cpus (not that it matters as postgres is not multi threading)\n\nUh, yeah it matters, postgresql can use multiple backends just fine.\nBut this will be the least of your problems.\n\n> and i have to support approxmatally anywhere from 5000 - 30000 users all\n> using it concurentally\n\nYou are being set up to fail. No matter how you examine things like\nthe size of individual fields in a pg database, this hardware cannot\npossibly handle that kind of load. period. Not with Postgresql, nor\nwith oracle, nor with teradata, nor with any other db.\n\nIf you need to have 30k users actually connected directly to your\ndatabase you most likely have a design flaw somewhere. If you can use\nconnection pooling to get the number of connections to some fraction\nof that, then you might get it to work. However, being forced to use\na single 70G hard drive on an OSX machine with 5 Gigs ram is sub\noptimal.\n\n> as you can see this server wouldnt be my first choice (or my last choice)\n> but as i said i have not choice at this time.\n\nThen you need to quit. Now. And find a job where you are not being\nsetup to fail. Seriously.\n\n> the interface programmer and i have come up with ways to solve certian\n> problems in preformance that this server produces but i still need to tune\n> the database\n\nYou're being asked to take a school bus and tune it to compete at the indy 500.\n",
"msg_date": "Mon, 10 Dec 2007 10:58:38 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": ">>> On Mon, Dec 10, 2007 at 6:29 PM, in message <[email protected]>,\n\"kelvan\" <[email protected]> wrote: \n \n> i need a more powerful dbms one that can \n> handle multi threading.\n \nIf you're looking to handle a lot of concurrent users, PostgreSQL\nhas the power. The threading issues really only impact the ability\nto spread the work for a single large query over the processors.\nFor multiple users the work is spread over the processors just fine.\n \n> as i have said not my choice i know 5 gigs of ram wouldnt start a hot air \n> balloon let alone support the user base i will have\n \nWe've run a web site with two million hits per day, running 10\nmillion SELECT queries and 1 million DML database transactions\n(averaging over 10 million DML statements) per day on a machine\nwith 6 MB of RAM under PostgreSQL, so you might be surprised.\nYour biggest problem is the single disk drive. RAID not only\nis critical for data integrity, it helps performance when your\ndata is not fully cached.\n \n> we cannot configure postgres on a mac to \n> go over 200 connections for god knows what reason but we have found ways \n> around that using the mac\n \nWell, with four processors there's no point to going above about\n15 or 20 database connections. Use one of the many excellent\noptions for connection pooling for better results.\n \n> i am using alot of codes using small int and bit in my database and \n> de-normalising everying to keep the cnnections down and the data read \n> ammount down but that can only do so much.\n \nDenormalization almost always requires more disk space. That's\nexactly what you should be trying to avoid.\n \n> my problem is read time which is why i want to compact the postgres blocks \n> as much as possible keeping the data of the database in as small a location \n> as possible.\n \nA much bigger issue from that regard will probably be dead space\nfrom updated and deleted rows (plus from any rollbacks). Have\nyou figured out what your VACUUM strategy will be?\n \nWithout knowing more, it's hard to say for sure, but you might do\njust fine if you can get a few more drives hooked up through a\ndecent RAID controller, and funnel your connection through a\nconnection pool.\n \nI hope this helps.\n \n-Kevin\n \n\n",
"msg_date": "Mon, 10 Dec 2007 18:15:51 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": ">>> On Mon, Dec 10, 2007 at 6:15 PM, in message\n<[email protected]>, Kevin Grittner wrote: \n\n> with 6 MB of RAM\n\nObviously a typo -- that should read 6 GB of RAM.\n\n\n",
"msg_date": "Mon, 10 Dec 2007 18:19:20 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: Re: database tuning"
},
{
"msg_contents": "\n\"\"Scott Marlowe\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Dec 7, 2007 1:13 PM, kelvan <[email protected]> wrote:\n>\n>> ok heres the thing i dont have a choice i just have to work with whats \n>> given\n>> whether it is good or not why i need these overheads is for block\n>> calculations and and tablespace calculations i have to keep everything in \n>> a\n>> very very small area on the hdd for head reading speed as the server i am\n>> forced to use is a peice of crap so i need to do my calculations to \n>> resolve\n>> this\n>>\n>> it is not that i dont know how to do my job i understand effective \n>> indexing\n>> materlized views and all other effects of database tuning is was my major\n>> aspect in my study i just need to know the numbers to do what i have to \n>> do.\n>>\n>> i am new to postgres i have used many other database management systems i\n>> know the over heads for all of them just not this one if someone could\n>> please be of assisstance.\n>>\n>> let me give a breef outlay of what i have without breaking my \n>> confidentality\n>> agreement\n>>\n>> mac server mac os 10.x\n>> postgres 8.2.5 (appologies i just got updated documentation with errors\n>> fixed in it)\n>> 70gig hdd\n>> 5 gig ram\n>> 4 cpus (not that it matters as postgres is not multi threading)\n>\n> Uh, yeah it matters, postgresql can use multiple backends just fine.\n> But this will be the least of your problems.\n>\n>> and i have to support approxmatally anywhere from 5000 - 30000 users all\n>> using it concurentally\n>\n> You are being set up to fail. No matter how you examine things like\n> the size of individual fields in a pg database, this hardware cannot\n> possibly handle that kind of load. period. Not with Postgresql, nor\n> with oracle, nor with teradata, nor with any other db.\n>\n> If you need to have 30k users actually connected directly to your\n> database you most likely have a design flaw somewhere. If you can use\n> connection pooling to get the number of connections to some fraction\n> of that, then you might get it to work. However, being forced to use\n> a single 70G hard drive on an OSX machine with 5 Gigs ram is sub\n> optimal.\n>\n>> as you can see this server wouldnt be my first choice (or my last choice)\n>> but as i said i have not choice at this time.\n>\n> Then you need to quit. Now. And find a job where you are not being\n> setup to fail. 
Seriously.\n>\n>> the interface programmer and i have come up with ways to solve certian\n>> problems in preformance that this server produces but i still need to \n>> tune\n>> the database\n>\n> You're being asked to take a school bus and tune it to compete at the indy \n> 500.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\nlook i know this wont work hell i knew that from day one in all regards this \nis a temporary stand point after things start getting off i am going to blow \nup that mac and burn postgres as i need a more powerful dbms one that can \nhandle multi threading.\n\nas i have said not my choice i know 5 gigs of ram wouldnt start a hot air \nballoon let alone support the user base i will have this is for me not a \nperminate job but i take high regards in my work and want to do the best job \npossible that and the money is good as i am in between jobs as it stands\n\nfor now i only need to support a few thousand and they are going to be \nbehind a web interface as it stands we cannot configure postgres on a mac to \ngo over 200 connections for god knows what reason but we have found ways \naround that using the mac\n\ni have already calculated that the hdd is no where up to what we need and \nwill die after about 6 months but in that time the mac server is going to be \nkilled and we will then have a real server ill do some data migration and \nthen a different dbms but until then i have to make a buffer to keep things \nalive -_-\n\nthe 30000 is just the number of queries that the web interface will be \nsending at its high point when there are many users in the database by users \ni mean at the point of the web interface not the back end so treat them as \nqueries.\n\nso as you can see ill need as fast a read time for every query as possible. \ni am using alot of codes using small int and bit in my database and \nde-normalising everying to keep the cnnections down and the data read \nammount down but that can only do so much.we have no problem supporting that \nmany users form a web stand point\nmy problem is read time which is why i want to compact the postgres blocks \nas much as possible keeping the data of the database in as small a location \nas possible.\n\nregards\nkelvan \n\n\n",
"msg_date": "Tue, 11 Dec 2007 12:29:14 +1200",
"msg_from": "\"kelvan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "On Tue, 11 Dec 2007, kelvan wrote:\n\n> i am going to blow up that mac and burn postgres as i need a more \n> powerful dbms one that can handle multi threading.\n\nSomeone pointed this out already, but I'll repeat: PostgreSQL has a \nmulti-process architecture that's fully capable of taking advantage of \nmultiple CPUs. Whether a multi-process model is better or worse than a \nmulti-threaded one is a popular subject to debate, but it's certainly not \ntrue that switching to threads will always give a performance benefit, and \nyou shouldn't expect a large one--processes and threads are not that \ndifferent. As a simple example benchmarks usually show the multi-process \nPostgreSQL scales better to high client loads than the multi-threaded \nMySQL.\n\nThe only spot where PostgreSQL has a clear performance limitation is that \nno single query can be split among multiple processors usefully. Since \nyou seem to be working for many users doing small tasks rather than a \nsingle large one, I wouldn't expect the scalability of the core database \ncode to be your problem.\n\n> as it stands we cannot configure postgres on a mac to go over 200 \n> connections for god knows what reason but we have found ways around that \n> using the mac\n\nIn a web application environment, there is no good reason to have that \nmany individual database connections. You should consider the fact that \nyou had trouble getting more than 200 going a warning sign. The right way \nto deal with this is not to work around it, but to use some sort of \nconnection pooling software instead. You might use something that does \nPostgreSQL-level pooling like PgBouncer \nhttps://developer.skype.com/SkypeGarage/DbProjects/PgBouncer or you could \ndo higher level caching with something like memcached \nhttp://www.danga.com/memcached/\n\n> so as you can see ill need as fast a read time for every query as \n> possible. i am using alot of codes using small int and bit in my \n> database and de-normalising everying to keep the cnnections down and the \n> data read ammount down but that can only do so much.\n\nWhat you should be worried about here is how much of the database you can \ncram into memory at once. Have you looked into using large amounts of \nmemory for shared_buffers? In your situation you should consider putting \nmultiple GB worth of memory there to hold data. Particularly with a \nsingle disk, if you even get to the point where you need to read from disk \nregularly you're not going to get anywhere close to your performance \ngoals.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 10 Dec 2007 19:36:32 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
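Before working around the 200-connection ceiling, it is worth confirming how many backends are actually open and where they come from; a small sketch against the standard statistics view (nothing here is Mac-specific):

SELECT count(*) AS connected_backends FROM pg_stat_activity;
SHOW max_connections;

-- Break the open connections down by database and client address
SELECT datname, client_addr, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname, client_addr
ORDER BY count(*) DESC;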
{
"msg_contents": "kelvan wrote:\n\nI wonder where did all the punctuation symbols on your keyboard went.\nYour email is amazingly hard to read.\n\n> overhead information i would to know is row overheads column overheads and \n> header overheads for blocks and anything else i have missed\n\nAs for storage overhead, see here:\n\nhttp://www.postgresql.org/docs/8.3/static/storage-page-layout.html\n\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J\n\"Siempre hay que alimentar a los dioses, aunque la tierra est� seca\" (Orual)\n",
"msg_date": "Tue, 11 Dec 2007 19:27:48 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "http://www.postgresql.org/docs/8.1/static/storage.html\n\nOn Dec 11, 2007 5:18 PM, kelvan <[email protected]> wrote:\n> you know what you lot have left my original question this server is a\n> temporary piece of shit\n>\n> my original question is what are the overheads for postgres but obviously no\n> one knows or no one knows where a webpage containing this information is -_-\n\nSo, have you looked in the docs?\n\nI go here:\n\nhttp://www.postgresql.org/docs/8.1/static/index.html\nsee this down the page a bit:\nhttp://www.postgresql.org/docs/8.1/static/storage.html\nwhich takes me here:\nhttp://www.postgresql.org/docs/8.1/static/storage-page-layout.html\n\nAnd it seems to have that information in it.\n\nAgain. You can look at the source, or find out experimentally by\nbuilding tables and checking their size. Some of this is an inexact\nscience because different architechtures have different alignment\nrequirements.\n",
"msg_date": "Tue, 11 Dec 2007 16:29:59 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
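The "find out experimentally" route is straightforward with the built-in size functions available since 8.1; the table below is a throwaway created only for the illustration, and the per-row figure it reports already includes tuple headers, item pointers, page headers and alignment padding:

CREATE TABLE overhead_probe (id integer, flag smallint);
INSERT INTO overhead_probe
    SELECT i, (i % 2)::smallint
    FROM generate_series(1, 100000) AS g(i);

-- 6 bytes of declared column data per row; everything above that is overhead
SELECT pg_relation_size('overhead_probe')            AS table_bytes,
       pg_relation_size('overhead_probe') / 100000.0 AS bytes_per_row;

DROP TABLE overhead_probe;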
{
"msg_contents": "On Dec 11, 2007, at 5:18 PM, kelvan wrote:\n\n> you know what you lot have left my original question this server is a\n> temporary piece of shit\n>\n> my original question is what are the overheads for postgres but \n> obviously no\n> one knows or no one knows where a webpage containing this \n> information is -_-\n>\n> overhead information i would to know is row overheads column \n> overheads and\n> header overheads for blocks and anything else i have missed\n>\n> trust me postgres and a Mac don't like working together you have no \n> idea the\n> amount of problems we have incurred with php trying to talk to \n> postgres on a\n> Mac out biggest problem is Mac tecs are incompetent and we cannot \n> get any\n> support for the server I know postgres connects fine just we cannot \n> get it\n> working on the Mac so I love your guys ideas but they don't work \n> that's why\n> I have had to take another approach if we were not using a Mac we \n> would have\n> none of the problems we have with connection issues such as php \n> seems to\n> want to take up 20 db connections at a time but I think we fixed that\n> problem overall our problem is the Mac and we cannot get it support\n>\n> neither I nor the web app developer are Mac savvy hell as far as we \n> have\n> seen no Mac tec is Mac savvy either we cannot get parts of postgres \n> to run\n> on a Mac either such as pgagent which is necessary for us but we \n> cannot seem\n> to find a daemon that works on a Mac\n>\n> I have a list of problems a mile long and none of them are postgres \n> it is\n> the Mac\n>\n> so minus all that as the Mac is only a temporary solution can \n> anyone just\n> answer the original question for me if not and I mean no offence to \n> anyone\n> but I really don't care as I am going to re do it all later down \n> the track\n>\n> as I have said your ideas sound good just not Mac oriented nor are \n> they to\n> do with my original question I have never had trouble finding overhead\n> information on any other DBMS I have used this is the first time I \n> have had\n> to ask for it and since this DBMS is open source I have to ask a \n> community\n> rather than a company\n>\n> if anyone is wondering why I don't switch now money and time are \n> not on my\n> side\n>\n> and for those who wonder why don't I leave this job is big time \n> just starts\n> off small time but the potential of this job is very nice and as \n> they say if\n> you want something good you have to work hard for it I am not a fan of\n> taking the easy way out as it brings no benefits\n>\n> for those who want to know more I cannot tell you as I am under a\n> confidentiality agreement\n\nKelvan, proper capitalization and punctuation are virtues when \ntrying to communicate extensively via text mediums. I, for one, read \nthe first couple and last couple of lines of this message after \ngruelingly reading your last message and I wouldn't be surprised if \nothers with more experience and better answers at the ready simply \nignored both as that much text is extremely difficult to follow in \nthe absence those aforementioned virtues.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n",
"msg_date": "Tue, 11 Dec 2007 16:34:01 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "kelvan wrote:\n> you know what you lot have left my original question this server is a \n> temporary piece of shit\n> \n> my original question is what are the overheads for postgres but obviously no \n> one knows or no one knows where a webpage containing this information is -_-\n> \n> overhead information i would to know is row overheads column overheads and \n> header overheads for blocks and anything else i have missed\n\nYou said you had most of that in your original post:\n > I have gathered some relevant information form the documentation such \n > as all\n > the data type sizes and the RM block information but I don't have any\n > information on INDEX blocks or other general overheads\n\nThe index details are in the source, as I said in my first reply. It's \njust that nobody here thinks that'll help you much.\n\n> neither I nor the web app developer are Mac savvy hell as far as we have \n> seen no Mac tec is Mac savvy either\n\nSo what on earth are you going to do with the index overhead figures? \nWithout accurate information on usage patterns, fill-factor, vacuuming \nfrequency etc. they aren't going to tell you anything.\n\nEven if you could get an accurate figure for database size with less \neffort than just generating test data, what would your next step be?\n\n> as I have said your ideas sound good just not Mac oriented \n\nThe only idea I've seen mentioned is connection-pooling. I'm not sure \nwhy that wouldn't work on a Mac.\n\nOther comments were warning that 30,000 connections weren't do-able, \nthat de-normalising made poor use of your limited disk/memory and \npointing out solutions other people use.\n\nOh, and me asking for any info from your testing.\n\n > nor are they to\n> do with my original question I have never had trouble finding overhead \n> information on any other DBMS I have used this is the first time I have had \n> to ask for it and since this DBMS is open source I have to ask a community \n> rather than a company\n\nAgain, since you said you had all the stuff from the manuals, the rest \nis in the source. That's what the source is there for.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 11 Dec 2007 23:15:31 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "you know what you lot have left my original question this server is a \ntemporary piece of shit\n\nmy original question is what are the overheads for postgres but obviously no \none knows or no one knows where a webpage containing this information is -_-\n\noverhead information i would to know is row overheads column overheads and \nheader overheads for blocks and anything else i have missed\n\ntrust me postgres and a Mac don't like working together you have no idea the \namount of problems we have incurred with php trying to talk to postgres on a \nMac out biggest problem is Mac tecs are incompetent and we cannot get any \nsupport for the server I know postgres connects fine just we cannot get it \nworking on the Mac so I love your guys ideas but they don't work that's why \nI have had to take another approach if we were not using a Mac we would have \nnone of the problems we have with connection issues such as php seems to \nwant to take up 20 db connections at a time but I think we fixed that \nproblem overall our problem is the Mac and we cannot get it support\n\nneither I nor the web app developer are Mac savvy hell as far as we have \nseen no Mac tec is Mac savvy either we cannot get parts of postgres to run \non a Mac either such as pgagent which is necessary for us but we cannot seem \nto find a daemon that works on a Mac\n\nI have a list of problems a mile long and none of them are postgres it is \nthe Mac\n\nso minus all that as the Mac is only a temporary solution can anyone just \nanswer the original question for me if not and I mean no offence to anyone \nbut I really don't care as I am going to re do it all later down the track\n\nas I have said your ideas sound good just not Mac oriented nor are they to \ndo with my original question I have never had trouble finding overhead \ninformation on any other DBMS I have used this is the first time I have had \nto ask for it and since this DBMS is open source I have to ask a community \nrather than a company\n\nif anyone is wondering why I don't switch now money and time are not on my \nside\n\nand for those who wonder why don't I leave this job is big time just starts \noff small time but the potential of this job is very nice and as they say if \nyou want something good you have to work hard for it I am not a fan of \ntaking the easy way out as it brings no benefits\n\nfor those who want to know more I cannot tell you as I am under a \nconfidentiality agreement\n\nregards\nKelvan \n\n\n",
"msg_date": "Wed, 12 Dec 2007 11:18:42 +1200",
"msg_from": "\"kelvan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "On Wed, 12 Dec 2007, kelvan wrote:\n\n> my original question is what are the overheads for postgres but obviously no\n> one knows or no one knows where a webpage containing this information is -_-\n\nIn addition to the documentation links people have already suggested, I'd \nalso suggest \nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html \nwhich gives some helpful suggestions on measuring the actual size of data \nyou've got in the database already. It's possible to make a mistake when \ntrying to compute overhead yourself; loading a subset of the data and \nmeasuring the size is much less prone to error.\n\n> if we were not using a Mac we would have none of the problems we have \n> with connection issues such as php seems to want to take up 20 db \n> connections at a time\n\nI can't imagine why the connection pooling links I suggested before \nwouldn't work perfectly fine on a Mac. You're correct to first nail down \nwhy PHP is connecting more than you expect, but eventually I suspect \nyou'll need to wander toward pooling.\n\n> neither I nor the web app developer are Mac savvy hell as far as we have \n> seen no Mac tec is Mac savvy either\n\n:)\n\n> we cannot get parts of postgres to run on a Mac either such as pgagent \n> which is necessary for us but we cannot seem to find a daemon that works \n> on a Mac\n\nYou might want to give some specifics and ask about this on the pgAdmin \nmailing list: http://www.pgadmin.org/support/list.php\n\nOS X support is relatively recent for pgAdmin and I see some other recent \nfixes for specific issues on that platform.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 11 Dec 2007 19:07:38 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
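In the same spirit as the size-measurement link above, loading a representative sample and then asking the server what it occupies avoids hand-computing overhead at all; the relation name here is a placeholder:

SELECT pg_size_pretty(pg_relation_size('some_table'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('some_table')) AS heap_plus_indexes_and_toast,
       pg_size_pretty(pg_database_size(current_database())) AS whole_database;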
{
"msg_contents": "On Wed, Dec 12, 2007 at 12:27:39PM +1200, kelvan wrote:\n>I have also learnt and also Richard pointed out just not in so many words \n>the difference in support from a open source community compared to a non \n>open source company is that the people who give support in open source are \n>opinionated rather than concise meaning they will give you their opinion \n>rather than answering the question\n\nNo, the difference is that the paid support *has to* give *an* answer. \nIt doesn't have to be a useful answer, it just has to fulfil their \nobligation. They will give you whatever answer you ask for to get you \noff the phone as quickly as possible because it makes their on-phone \nnumbers better than arguing about it and trying to give a useful answer. \n\nFree support will tell you that what you're asking for is silly, because \nthey don't have to give you the answer you asked for in order to get you \noff the phone.\n\nYou seem to have already made up your mind about a whole number of \nthings, making this whole discussion more amusing than erudite.\n\nMike Stone\n",
"msg_date": "Tue, 11 Dec 2007 19:10:26 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "Michael Stone wrote:\n> On Wed, Dec 12, 2007 at 12:27:39PM +1200, kelvan wrote:\n>> I have also learnt and also Richard pointed out just not in so many \n>> words the difference in support from a open source community compared \n>> to a non open source company is that the people who give support in \n>> open source are opinionated rather than concise meaning they will give \n>> you their opinion rather than answering the question\n> \n> No, the difference is that the paid support *has to* give *an* answer. \n\nGood lord what a bizarre paragraph. Michael is right, paid support *has \nto* give *an* answer and I guarantee you that the answer will be \nopinionated.\n\nThere are very little right and wrong in the world of software. It is \nmostly one pedantic opinion versus another pedantic opinion.\n\nI get paid everyday to offer my opinion :)\n\nJoshua D. Drake\n\n",
"msg_date": "Tue, 11 Dec 2007 16:17:51 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database tuning"
},
{
"msg_contents": "Ok thx I have got it thx to David and Scott for the links I now know why I \ncouldn't find them as I was looking for blocks rather than page damn \nsynonyms\n\n\n\nand to Eric thx for the criticism but yea I failed English so I know my \npunctuation is bad unless I concentrate and I am to busy to do that so for \nyou Eric here is a full stop. (that was a joke not a shot at you I \nunderstand what you are saying but yeah)\n\n\n\nI have also learnt and also Richard pointed out just not in so many words \nthe difference in support from a open source community compared to a non \nopen source company is that the people who give support in open source are \nopinionated rather than concise meaning they will give you their opinion \nrather than answering the question\n\n\n\nRegards\n\nKelvan\n\n\n",
"msg_date": "Wed, 12 Dec 2007 12:27:39 +1200",
"msg_from": "\"kelvan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database tuning"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm currently trying to tune the Cost-Based Vacuum Delay in a\n8.2.5 server. The aim is to reduce as much as possible the\nperformance impact of vacuums on application queries, with the\nbackground idea of running autovacuum as much as possible[1].\n\nMy test involves vacuuming a large table, and measuring the\ncompletion time, as the vacuuming proceeds, of a rather long\nrunning application query (involving a table different from the\none being vacuumed) which cannot fit entirely in buffers (and the\ncompletion time of the vacuum, because it needs not be too slow,\nof course).\n\nI ran my tests with a few combinations of\nvacuum_cost_delay/vacuum_cost_limit, while keeping the other\nparameters set to the default from the 8.2.5 tarball:\n\nvacuum_cost_page_hit = 1\nvacuum_cost_page_miss = 10\nvacuum_cost_page_dirty = 20\n\nThe completion time of the query is about 16 seconds in\nisolation. With a vacuuming proceeding, here are the results:\n\n vacuum_cost_delay/vacuum_cost_limit (deactivated) 20/200 40/200 100/1000 150/1000 200/1000 300/1000\n\n VACUUM ANALYZE time 54 s 112 s 188 s 109 s 152 s 190 s 274 s\n SELECT time 50 s 28 s 26 s 24 s 22 s 20 s 19 s\n\nI have noticed that others (Alvaro, Joshua) suggest to set\nvacuum_cost_delay as low as 10 or 20 ms, however in my situation\nI'd like to lower the performance impact in application queries\nand will probably choose 150/1000 where \"only\" a +40% is seen -\nI'm curious if anyone else has followed the same path, or is\nthere any outstanding flaw I've missed here? I'm talking\noutstanding, as of course any local decision may be different in\nthe hope of favouring a different database/application behaviour.\n\n\nOther than that, it's the results obtained with the design\nprinciple of Cost-Base Vacuum Delay, which I find a little\nsurprising. 
Of course, I think it has been thought through a lot,\nand my observations are probably naive, but I'm going to throw my\nideas anyway, who knows.\n\nI'd think that it would be possible to lower yet again the impact\nof vacuuming on other queries, while keeping a vacuuming time\nwith little overhead, if dynamically changing the delays related\nto database activity, rather than using fixed costs and delays.\nFor example, before and after each vacuum sleep delay is\ncompleted, pg could:\n\n- check the amount of currently running queries\n (pg_stat_activity), and continue sleeping if it is above a\n configured threshold; by following this path, databases with\n peak activities could use a threshold of 1 and have zero\n ressource comsumption for vacuuming during peaks, still having\n nearly no time completion overhead for vacuuming out of peaks\n (since the check is performed also before the sleep delay,\n which would be deactivated if no queries are running); if we\n can afford a luxury implementation, we could always have a\n maximum sleep time configuration, which would allow vacuuming\n to proceed a little bit even when there's no timeframe with low\n enough database activity\n\n- alternatively, pg could make use of some longer term statistics\n (load average, IO statistics) to dynamically pause the\n vacuuming - this I guess is related to the host OS and probably\n more difficult to have working correctly with multiple disks\n and/or processes running - however, if you want high\n performance from PostgreSQL, you probably won't host other IO\n applications on the same disk(s)\n\n\nWhile I'm at it, a different Cost-Based Vacuum Delay issue:\nVACUUM FULL also follows the Cost-Based Vacuum Delay tunings.\nWhile it makes total sense when you want to perform a query on\nanother table, it becomes a problem when your query is waiting\nfor the exclusive lock on the vacuumed table. Potentially, you\nwill have the vacuuming proceeding \"slowly\" because of the\nCost-Based Vacuum Delay, and a blocked application because the\napplication queries are just waiting.\n\nI'm wondering if it would not be possible to dynamically ignore\n(or lower, if it makes more sense?) the Cost-Based Vacuum Delay\nduring vacuum full, if a configurable amount of queries are\nwaiting for the lock?\n\n(please save yourself from answering \"you should never run VACUUM\nFULL if you're vacuuming enough\" - as long as VACUUM FULL is\navailable in PostgreSQL, there's no reason to not make it as\npractically usable as possible, albeit with low dev priority)\n\n\nRef: \n[1] inspired by http://developer.postgresql.org/~wieck/vacuum_cost/\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland\n",
"msg_date": "Fri, 07 Dec 2007 11:50:19 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cost-Based Vacuum Delay tuning"
},
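For reference, the combinations tested above can be reproduced per session before a manual VACUUM, and autovacuum has its own pair of parameters that fall back to the plain ones when left at -1; the table name is a placeholder and the 150/1000 values simply mirror the setting the post leans towards:

SET vacuum_cost_delay = 150;    -- sleep length in milliseconds
SET vacuum_cost_limit = 1000;   -- accumulated cost that triggers a sleep
VACUUM ANALYZE some_large_table;

-- autovacuum reads these instead; -1 means "inherit the settings above"
SHOW autovacuum_vacuum_cost_delay;
SHOW autovacuum_vacuum_cost_limit;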
{
"msg_contents": "\nOn Dec 7, 2007, at 4:50 AM, Guillaume Cottenceau wrote:\n\n> Hi,\n>\n> I'm currently trying to tune the Cost-Based Vacuum Delay in a\n> 8.2.5 server. The aim is to reduce as much as possible the\n> performance impact of vacuums on application queries, with the\n> background idea of running autovacuum as much as possible[1].\n>\n> My test involves vacuuming a large table, and measuring the\n> completion time, as the vacuuming proceeds, of a rather long\n> running application query (involving a table different from the\n> one being vacuumed) which cannot fit entirely in buffers (and the\n> completion time of the vacuum, because it needs not be too slow,\n> of course).\n>\n> I ran my tests with a few combinations of\n> vacuum_cost_delay/vacuum_cost_limit, while keeping the other\n> parameters set to the default from the 8.2.5 tarball:\n>\n> vacuum_cost_page_hit = 1\n> vacuum_cost_page_miss = 10\n> vacuum_cost_page_dirty = 20\n>\n> The completion time of the query is about 16 seconds in\n> isolation. With a vacuuming proceeding, here are the results:\n>\n> vacuum_cost_delay/vacuum_cost_limit (deactivated) 20/200 \n> 40/200 100/1000 150/1000 200/1000 300/1000\n>\n> VACUUM ANALYZE time 54 s 112 s 188 \n> s 109 s 152 s 190 s 274 s\n> SELECT time 50 s 28 s 26 \n> s 24 s 22 s 20 s 19 s\n\nWhile you do mention that the table you're running your select on is \ntoo big to fit in the shared_buffers, the drop in time between the \nfirst run and the rest most likely still reflects the fact that when \nrunning those tests successively a good portion of the table will \nalready be in shared_buffers as well as being in the filesystem \ncache, i.e. very little of the runs after the first will have to hit \nthe disk much.\n\n> I have noticed that others (Alvaro, Joshua) suggest to set\n> vacuum_cost_delay as low as 10 or 20 ms, however in my situation\n> I'd like to lower the performance impact in application queries\n> and will probably choose 150/1000 where \"only\" a +40% is seen -\n> I'm curious if anyone else has followed the same path, or is\n> there any outstanding flaw I've missed here? I'm talking\n> outstanding, as of course any local decision may be different in\n> the hope of favouring a different database/application behaviour.\n>\n>\n> Other than that, it's the results obtained with the design\n> principle of Cost-Base Vacuum Delay, which I find a little\n> surprising. 
Of course, I think it has been thought through a lot,\n> and my observations are probably naive, but I'm going to throw my\n> ideas anyway, who knows.\n>\n> I'd think that it would be possible to lower yet again the impact\n> of vacuuming on other queries, while keeping a vacuuming time\n> with little overhead, if dynamically changing the delays related\n> to database activity, rather than using fixed costs and delays.\n> For example, before and after each vacuum sleep delay is\n> completed, pg could:\n>\n> - check the amount of currently running queries\n> (pg_stat_activity), and continue sleeping if it is above a\n> configured threshold; by following this path, databases with\n> peak activities could use a threshold of 1 and have zero\n> ressource comsumption for vacuuming during peaks, still having\n> nearly no time completion overhead for vacuuming out of peaks\n> (since the check is performed also before the sleep delay,\n> which would be deactivated if no queries are running); if we\n> can afford a luxury implementation, we could always have a\n> maximum sleep time configuration, which would allow vacuuming\n> to proceed a little bit even when there's no timeframe with low\n> enough database activity\n>\n> - alternatively, pg could make use of some longer term statistics\n> (load average, IO statistics) to dynamically pause the\n> vacuuming - this I guess is related to the host OS and probably\n> more difficult to have working correctly with multiple disks\n> and/or processes running - however, if you want high\n> performance from PostgreSQL, you probably won't host other IO\n> applications on the same disk(s)\n\nThese ideas have been discussed much. Look in the archives to the \nbeginning of this year. I think the general consensus was that it \nwould be good have multiple autovacuum workers that could be tuned \nfor different times or workloads. I know Alvarro was going to work \non something along those lines but I'm not sure what's made it into \n8.3 or what's still definitely planned for the future.\n\n> While I'm at it, a different Cost-Based Vacuum Delay issue:\n> VACUUM FULL also follows the Cost-Based Vacuum Delay tunings.\n> While it makes total sense when you want to perform a query on\n> another table, it becomes a problem when your query is waiting\n> for the exclusive lock on the vacuumed table. Potentially, you\n> will have the vacuuming proceeding \"slowly\" because of the\n> Cost-Based Vacuum Delay, and a blocked application because the\n> application queries are just waiting.\n>\n> I'm wondering if it would not be possible to dynamically ignore\n> (or lower, if it makes more sense?) the Cost-Based Vacuum Delay\n> during vacuum full, if a configurable amount of queries are\n> waiting for the lock?\n>\n> (please save yourself from answering \"you should never run VACUUM\n> FULL if you're vacuuming enough\" - as long as VACUUM FULL is\n> available in PostgreSQL, there's no reason to not make it as\n> practically usable as possible, albeit with low dev priority)\n\nOk, I won't say what you said not to say. But, I will say that I \ndon't agree with you're conjecture that VACUUM FULL should be made \nmore lightweight, it's like using dynamite to knock a whole in a wall \nfor a window.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n",
"msg_date": "Fri, 7 Dec 2007 09:42:52 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost-Based Vacuum Delay tuning"
},
{
"msg_contents": "Erik Jones <erik 'at' myemma.com> writes:\n\n>> vacuum_cost_delay/vacuum_cost_limit (deactivated) 20/200\n>> 40/200 100/1000 150/1000 200/1000 300/1000\n>>\n>> VACUUM ANALYZE time 54 s 112 s 188\n>> s 109 s 152 s 190 s 274 s\n>> SELECT time 50 s 28 s 26\n>> s 24 s 22 s 20 s 19 s\n>\n> While you do mention that the table you're running your select on is\n> too big to fit in the shared_buffers, the drop in time between the\n> first run and the rest most likely still reflects the fact that when\n\nThese figures don't show a difference between first run and\nsubsequent runs. For each parameter tuning, a couple of runs are\nfired after database restart, and once the value is approximately\nconstant, it's picked and put in this table. The \"deactivated\"\nshows the (stable, from subsequent runs) figure when vacuum delay\nis disabled (vacuum_cost_delay parameter quoted), not the first\nrun, if that's where the confusion came from.\n\n> running those tests successively a good portion of the table will\n> already be in shared_buffers as well as being in the filesystem\n> cache, i.e. very little of the runs after the first will have to hit\n\nA dd sized at the total RAM size is run between each test (not\nbetween each parameter tuning, between each *query test*), to\nremove the OS disk cache effect. Of course, the PostgreSQL\ncaching effect cannot be removed (maybe, it shouldn't, as after\nall this caching is here to improve performance), but the query\nis selected to generate a lot of disk activity even between each\nrun (that's why I said \"a query which cannot fit entirely in\nbuffers\").\n\n> the disk much.\n\nI have of course checked that the subsequent runs mean\nessentially disk activity, not CPU activity.\n\n>> - alternatively, pg could make use of some longer term statistics\n>> (load average, IO statistics) to dynamically pause the\n>> vacuuming - this I guess is related to the host OS and probably\n>> more difficult to have working correctly with multiple disks\n>> and/or processes running - however, if you want high\n>> performance from PostgreSQL, you probably won't host other IO\n>> applications on the same disk(s)\n>\n> These ideas have been discussed much. Look in the archives to the\n> beginning of this year. I think the general consensus was that it\n\nIs it on pgsql-hackers? I haven't found much stuff in\npgsql-performance while looking for \"vacuum_cost_delay tuning\".\n\n> would be good have multiple autovacuum workers that could be tuned\n> for different times or workloads. I know Alvarro was going to work\n\nSounds interesting.\n\n>> I'm wondering if it would not be possible to dynamically ignore\n>> (or lower, if it makes more sense?) the Cost-Based Vacuum Delay\n>> during vacuum full, if a configurable amount of queries are\n>> waiting for the lock?\n>>\n>> (please save yourself from answering \"you should never run VACUUM\n>> FULL if you're vacuuming enough\" - as long as VACUUM FULL is\n>> available in PostgreSQL, there's no reason to not make it as\n>> practically usable as possible, albeit with low dev priority)\n>\n> Ok, I won't say what you said not to say. 
But, I will say that I\n> don't agree with you're conjecture that VACUUM FULL should be made\n> more lightweight, it's like using dynamite to knock a whole in a wall\n> for a window.\n\nThanks for opening a new kind of trol^Hargument against VACUUM\nFULL, that one's more fresh (at least to me, who doesn't follow\nthe list too close anyway).\n\nJust for the record, I inherited a poorly (actually, \"not\" would\nbe more appropriate) tuned database, containing more than 90% of\ndead tuples on large tables, and I witnessed quite some\nperformance improvement while I could fix that.\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "Fri, 07 Dec 2007 17:44:18 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cost-Based Vacuum Delay tuning"
},
{
"msg_contents": "\nOn Dec 7, 2007, at 10:44 AM, Guillaume Cottenceau wrote:\n\n> Erik Jones <erik 'at' myemma.com> writes:\n>\n>>> vacuum_cost_delay/vacuum_cost_limit (deactivated) 20/200\n>>> 40/200 100/1000 150/1000 200/1000 300/1000\n>>>\n>>> VACUUM ANALYZE time 54 s 112 s 188\n>>> s 109 s 152 s 190 s 274 s\n>>> SELECT time 50 s 28 s 26\n>>> s 24 s 22 s 20 s 19 s\n>>\n>> While you do mention that the table you're running your select on is\n>> too big to fit in the shared_buffers, the drop in time between the\n>> first run and the rest most likely still reflects the fact that when\n>\n> These figures don't show a difference between first run and\n> subsequent runs. For each parameter tuning, a couple of runs are\n> fired after database restart, and once the value is approximately\n> constant, it's picked and put in this table. The \"deactivated\"\n> shows the (stable, from subsequent runs) figure when vacuum delay\n> is disabled (vacuum_cost_delay parameter quoted), not the first\n> run, if that's where the confusion came from.\n\nIt was.\n\n> Is it on pgsql-hackers? I haven't found much stuff in\n> pgsql-performance while looking for \"vacuum_cost_delay tuning\".\n>\n>> would be good have multiple autovacuum workers that could be tuned\n>> for different times or workloads. I know Alvarro was going to work\n>\n> Sounds interesting.\n\nRun the initial archive search against pgsql-general over the last \nyear for a thread called 'Autovacuum Improvements'\n\n>>> I'm wondering if it would not be possible to dynamically ignore\n>>> (or lower, if it makes more sense?) the Cost-Based Vacuum Delay\n>>> during vacuum full, if a configurable amount of queries are\n>>> waiting for the lock?\n>>>\n>>> (please save yourself from answering \"you should never run VACUUM\n>>> FULL if you're vacuuming enough\" - as long as VACUUM FULL is\n>>> available in PostgreSQL, there's no reason to not make it as\n>>> practically usable as possible, albeit with low dev priority)\n>>\n>> Ok, I won't say what you said not to say. But, I will say that I\n>> don't agree with you're conjecture that VACUUM FULL should be made\n>> more lightweight, it's like using dynamite to knock a whole in a wall\n>> for a window.\n>\n> Thanks for opening a new kind of trol^Hargument against VACUUM\n> FULL, that one's more fresh (at least to me, who doesn't follow\n> the list too close anyway).\n\n> Just for the record, I inherited a poorly (actually, \"not\" would\n> be more appropriate) tuned database, containing more than 90% of\n> dead tuples on large tables, and I witnessed quite some\n> performance improvement while I could fix that.\n\nIf you really want the VACUUM FULL effect without having to deal with \nvacuum_cost_delay, use CLUSTER. It also re-writes the table and, \nAFAIK, is not subject to any of the vacuum related configuration \nparameters. I'd argue that if you really need VACUUM FULL, you may \nas well use CLUSTER to get a good ordering of the re-written table.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n",
"msg_date": "Fri, 7 Dec 2007 11:33:16 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost-Based Vacuum Delay tuning"
},
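A sketch of the CLUSTER route Erik suggests, written in the pre-8.3 syntax an 8.2 server expects; the names are placeholders, the command takes an exclusive lock and rewrites both heap and indexes, and per Erik's caveat it is not throttled by the vacuum cost settings:

-- Pre-8.3 form: CLUSTER indexname ON tablename
CLUSTER some_table_pkey ON some_table;
ANALYZE some_table;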
{
"msg_contents": "Guillaume Cottenceau wrote:\n\n> I have noticed that others (Alvaro, Joshua) suggest to set\n> vacuum_cost_delay as low as 10 or 20 ms,\n\nMy suggestion is to set it as *high* as 10 or 20 ms. Compared to the\noriginal default of 0ms. This is just because I'm lazy enough not to\nhave done any measuring of the exact consequences of such a setting, and\nout of fear that a very high value could provoke some sort of disaster.\n\nI must admit that changing the vacuum_delay_limit isn't something that\nI'm used to recommending. Maybe it does make sense considering\nreadahead effects and the new \"ring buffer\" stuff.\n\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"La experiencia nos dice que el hombre pel� millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelar�an al hombre\" (Ijon Tichy)\n",
"msg_date": "Sat, 8 Dec 2007 10:21:23 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost-Based Vacuum Delay tuning"
}
] |
[
{
"msg_contents": "Hello,\n\nI've just hit problem, that is unusual for me.\n\nquaker=> \\d sites\n Table \"public.sites\"\n Column | Type | Modifiers \n\n-----------+-------------------+----------------------------------------------------\n id | integer | not null default \nnextval('sites_id_seq'::regclass)\n site_name | character varying | not null\n user_id | integer | not null\n extra | integer |\nIndexes:\n \"sites_pkey\" PRIMARY KEY, btree (id)\n \"sites_site_name_key_unique\" UNIQUE, btree (site_name text_pattern_ops)\n \"sites_user_id_key\" btree (user_id)\n\nquaker=> \\d users\n Table \"public.users\"\n Column | Type | Modifiers \n\n-----------+-------------------+----------------------------------------------------\n id | integer | not null default \nnextval('users_id_seq'::regclass)\n user_name | character varying | not null\n extra | integer |\nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id)\n\nBoth tables filled with 100k records of random data. In users id is in \nrange from 1..100k, same in sites. In sites user_id is random, range \nfrom 1..150k.\n\nI've created views:\n\nquaker=> \\d users_secure\n View \"public.users_secure\"\n Column | Type | Modifiers\n-----------+-------------------+-----------\n id | integer |\n user_name | character varying |\nView definition:\n SELECT users.id, users.user_name\n FROM users;\n\nquaker=> \\d users_secure_with_has_extra\n View \"public.users_secure_with_has_extra\"\n Column | Type | Modifiers\n-----------+-------------------+-----------\n id | integer |\n user_name | character varying |\n has_extra | boolean |\nView definition:\n SELECT users.id, users.user_name, users.extra IS NOT NULL AS has_extra\n FROM users;\n\nNow, when I do simple query to find all data for sites matching \nsite_name like 'H3bh%' (there are at least one record in sites matching \nthis condition).\n\nquaker=> explain analyze select s.site_name,u.user_name from \nsites_secure s left join users_secure_with_has_extra u on u.id = \ns.user_id where site_name like 'H3bh%' order by site_name limit 10;\n \nQUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3897.02..3897.03 rows=2 width=44) (actual \ntime=430.326..430.331 rows=1 loops=1)\n -> Sort (cost=3897.02..3897.03 rows=2 width=44) (actual \ntime=430.321..430.323 rows=1 loops=1)\n Sort Key: sites.site_name\n -> Nested Loop Left Join (cost=0.00..3897.01 rows=2 \nwidth=44) (actual time=290.103..430.301 rows=1 loops=1)\n Join Filter: (\"inner\".id = \"outer\".user_id)\n -> Index Scan using sites_site_name_key_unique on sites \n (cost=0.00..6.01 rows=1 width=16) (actual time=0.044..0.054 rows=1 \nloops=1)\n Index Cond: (((site_name)::text ~>=~ 'H3bh'::text) \nAND ((site_name)::text ~<~ 'H3bi'::text))\n Filter: ((site_name)::text ~~ 'H3bh%'::text)\n -> Seq Scan on users (cost=0.00..1641.00 rows=100000 \nwidth=20) (actual time=0.007..245.406 rows=100000 loops=1)\n Total runtime: 430.432 ms\n(10 rows)\n\nWhen I resign from LEFT JOIN users_secure_with_has_extra, and put JOIN \ninstead I've got:\n\nquaker=> explain analyze select s.site_name,u.user_name from \nsites_secure s join users_secure_with_has_extra u on u.id = s.user_id \nwhere site_name like 'H3bh%' order by site_name limit 10;\n \nQUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9.05..9.06 rows=1 width=24) 
(actual time=0.112..0.118 \nrows=1 loops=1)\n -> Sort (cost=9.05..9.06 rows=1 width=24) (actual \ntime=0.106..0.108 rows=1 loops=1)\n Sort Key: sites.site_name\n -> Nested Loop (cost=0.00..9.04 rows=1 width=24) (actual \ntime=0.073..0.088 rows=1 loops=1)\n -> Index Scan using sites_site_name_key_unique on sites \n (cost=0.00..6.01 rows=1 width=16) (actual time=0.044..0.050 rows=1 \nloops=1)\n Index Cond: (((site_name)::text ~>=~ 'H3bh'::text) \nAND ((site_name)::text ~<~ 'H3bi'::text))\n Filter: ((site_name)::text ~~ 'H3bh%'::text)\n -> Index Scan using users_pkey on users \n(cost=0.00..3.02 rows=1 width=16) (actual time=0.019..0.023 rows=1 loops=1)\n Index Cond: (users.id = \"outer\".user_id)\n Total runtime: 0.216 ms\n(10 rows)\n\nAs explain shows PostgreSQL is using index scan on users, instead of seq \nscan like in example above.\n\nNow. When I use view with no has_extra field (important: field is a \nsimple function on extra field) I get expectable results. Both using \nindexes.\n\nquaker=> explain analyze select s.site_name,u.user_name from \nsites_secure s left join users_secure u on u.id = s.user_id where \nsite_name like 'H3bh%' order by site_name limit 10;\n \nQUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9.05..9.06 rows=1 width=24) (actual time=0.111..0.117 \nrows=1 loops=1)\n -> Sort (cost=9.05..9.06 rows=1 width=24) (actual \ntime=0.105..0.107 rows=1 loops=1)\n Sort Key: sites.site_name\n -> Nested Loop Left Join (cost=0.00..9.04 rows=1 width=24) \n(actual time=0.072..0.087 rows=1 loops=1)\n -> Index Scan using sites_site_name_key_unique on sites \n (cost=0.00..6.01 rows=1 width=16) (actual time=0.043..0.049 rows=1 \nloops=1)\n Index Cond: (((site_name)::text ~>=~ 'H3bh'::text) \nAND ((site_name)::text ~<~ 'H3bi'::text))\n Filter: ((site_name)::text ~~ 'H3bh%'::text)\n -> Index Scan using users_pkey on users \n(cost=0.00..3.02 rows=1 width=16) (actual time=0.019..0.022 rows=1 loops=1)\n Index Cond: (users.id = \"outer\".user_id)\n Total runtime: 0.216 ms\n(10 rows)\n\nquaker=> explain analyze select s.site_name,u.user_name from \nsites_secure s join users_secure u on u.id = s.user_id where site_name \nlike 'H3bh%' order by site_name limit 10;\n \nQUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9.05..9.06 rows=1 width=24) (actual time=0.109..0.115 \nrows=1 loops=1)\n -> Sort (cost=9.05..9.06 rows=1 width=24) (actual \ntime=0.104..0.106 rows=1 loops=1)\n Sort Key: sites.site_name\n -> Nested Loop (cost=0.00..9.04 rows=1 width=24) (actual \ntime=0.071..0.086 rows=1 loops=1)\n -> Index Scan using sites_site_name_key_unique on sites \n (cost=0.00..6.01 rows=1 width=16) (actual time=0.042..0.048 rows=1 \nloops=1)\n Index Cond: (((site_name)::text ~>=~ 'H3bh'::text) \nAND ((site_name)::text ~<~ 'H3bi'::text))\n Filter: ((site_name)::text ~~ 'H3bh%'::text)\n -> Index Scan using users_pkey on users \n(cost=0.00..3.02 rows=1 width=16) (actual time=0.018..0.021 rows=1 loops=1)\n Index Cond: (users.id = \"outer\".user_id)\n Total runtime: 0.214 ms\n(10 rows)\n\nWhy?\n\nquaker=> select version();\n version \n\n-----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.10 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3 \n20070831 (prerelease) (Ubuntu 
4.1.2-16ubuntu1)\n(1 row)\n\n\n",
"msg_date": "Fri, 07 Dec 2007 11:55:08 +0100",
"msg_from": "=?ISO-8859-2?Q?Piotr_Gasid=B3o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trouble with LEFT JOIN using VIEWS."
},
{
"msg_contents": "=?ISO-8859-2?Q?Piotr_Gasid=B3o?= <[email protected]> writes:\n> I've just hit problem, that is unusual for me.\n\n> View definition:\n> SELECT users.id, users.user_name, users.extra IS NOT NULL AS has_extra\n> FROM users;\n\nWhat you've got here is a non-nullable target list, which creates an\noptimization fence when used in the nullable side of an outer join.\nThe problem is that has_extra should read as NULL in the query output\nfor a sites_secure row that has no match in users_secure_with_has_extra,\nbut making users.extra go to null will not make that happen, so there's\na constraint on where the expression can be evaluated. The current\nplanner has no way to deal with that except by restricting the plan\nstructure.\n\nWe have some ideas about how to fix this, but don't hold your breath\n... it's going to take major surgery on the planner, AFAICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Dec 2007 10:36:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with LEFT JOIN using VIEWS. "
}
] |
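Tom's explanation points at a practical workaround until the planner can handle such expressions: keep the raw column in the view and compute the flag above the outer join, so the non-nullable expression no longer sits under the nullable side. A minimal sketch reusing the table names from the thread (the view name users_secure_plain, and exposing the raw extra column to callers, are assumptions, not from the post):

CREATE OR REPLACE VIEW users_secure_plain AS
    SELECT id, user_name, extra
    FROM users;

SELECT s.site_name,
       u.user_name,
       (u.extra IS NOT NULL) AS has_extra  -- false, not NULL, when no user row matches
FROM sites_secure s
LEFT JOIN users_secure_plain u ON u.id = s.user_id
WHERE s.site_name LIKE 'H3bh%'
ORDER BY s.site_name
LIMIT 10;

With the flag computed outside the join, the subquery can be flattened and the index scan on users comes back; the trade-off is the has_extra value for unmatched rows noted in the comment.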
[
{
"msg_contents": "One of those things that comes up regularly on this list in particular are \npeople whose performance issues relate to \"bloated\" tables or indexes. \nWhat I've always found curious is that I've never seen a good way \nsuggested to actually measure said bloat in any useful numeric \nterms--until today.\n\nGreg Sabino Mullane just released a Nagios plug-in for PostgreSQL that you \ncan grab at http://bucardo.org/nagios_postgres/ , and while that is itself \nnice the thing I found most remarkable is the bloat check. The majority \nof that code is an impressive bit of SQL that anyone could use even if you \nhave no interest in Nagios, which is why I point it out for broader \nattention. Look in check_postgres.pl for the \"check_bloat\" routine and \nthe big statement starting at the aptly labled \"This was fun to write\" \nsection. If you pull that out of there and replace $MINPAGES and \n$MINIPAGES near the end with real values, you can pop that into a \nstandalone query and execute it directly. Results look something like \nthis (reformatting for e-mail):\n\nschemaname | tablename | reltuples | relpages | otta | tbloat | \npublic | accounts | 2500000 | 41667 | 40382 | 1.0 |\n\nwastedpages | wastedbytes | wastedsize | iname | ituples |\n 1285 | 10526720 | 10 MB | accounts_pkey | 2500000 |\n\nipages | iotta | ibloat | wastedipages | wastedibytes | wastedisize\n 5594 | 35488 | 0.2 | 0 | 0 | 0 bytes\n\nI'd be curious to hear from those of you who have struggled with this \nclass of problem in the past as to whether you feel this quantifies the \nissue usefully.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sat, 8 Dec 2007 02:06:46 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Measuring table and index bloat"
},
{
"msg_contents": "On Dec 8, 2007, at 1:06 AM, Greg Smith wrote:\n> One of those things that comes up regularly on this list in \n> particular are people whose performance issues relate to \"bloated\" \n> tables or indexes. What I've always found curious is that I've \n> never seen a good way suggested to actually measure said bloat in \n> any useful numeric terms--until today.\n>\n> Greg Sabino Mullane just released a Nagios plug-in for PostgreSQL \n> that you can grab at http://bucardo.org/nagios_postgres/ , and \n> while that is itself nice the thing I found most remarkable is the \n> bloat check. The majority of that code is an impressive bit of SQL \n> that anyone could use even if you have no interest in Nagios, which \n> is why I point it out for broader attention. Look in \n> check_postgres.pl for the \"check_bloat\" routine and the big \n> statement starting at the aptly labled \"This was fun to write\" \n> section. If you pull that out of there and replace $MINPAGES and \n> $MINIPAGES near the end with real values, you can pop that into a \n> standalone query and execute it directly. Results look something \n> like this (reformatting for e-mail):\n>\n> schemaname | tablename | reltuples | relpages | otta | tbloat | \n> public | accounts | 2500000 | 41667 | 40382 | 1.0 |\n>\n> wastedpages | wastedbytes | wastedsize | iname | ituples |\n> 1285 | 10526720 | 10 MB | accounts_pkey | 2500000 |\n>\n> ipages | iotta | ibloat | wastedipages | wastedibytes | wastedisize\n> 5594 | 35488 | 0.2 | 0 | 0 | 0 bytes\n>\n> I'd be curious to hear from those of you who have struggled with \n> this class of problem in the past as to whether you feel this \n> quantifies the issue usefully.\n\nI don't think he's handling alignment correctly...\n\n CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n\nAFAIK that should also be 8 on 64 bit CPUs.\n\nA somewhat more minor nit... the calculation of the null header \nshould be based on what columns in a table are nullable, not whether \na column actually is null. Oh, and otta should be oughta. :) Though \nI'd probably just call it ideal.\n\nHaving said all that, this looks highly useful!\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828",
"msg_date": "Wed, 19 Dec 2007 19:45:25 -0600",
"msg_from": "Decibel! <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Measuring table and index bloat"
}
] |
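For anyone who wants to sanity-check the estimate-based numbers, the contrib module pgstattuple (if it is installed) measures dead space by actually scanning the relation, and pg_relation_size() reports the on-disk size to compare against the wastedbytes column. A small sketch using the table from Greg's sample output:

-- exact, but walks the whole table
SELECT * FROM pgstattuple('public.accounts');

-- raw sizes for table and index
SELECT pg_size_pretty(pg_relation_size('public.accounts')) AS table_size,
       pg_size_pretty(pg_relation_size('accounts_pkey'))   AS pkey_size;

pgstattuple returns dead_tuple_percent and free_space directly, which is a handy yardstick for judging how close the approximation-based check gets.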
[
{
"msg_contents": "For some accpac tables, I do synchronization by looking at the audtdate \nand audttime fields. These fields are quite annoying as they are decimal \nencoded dates and times stored as an integer. I do not have the freedom \nto \"fix\" this.\n\nTo find records after a certain time, I must do one of:\n\n select * from icpric where audtdate > ? or (audtdate = ? and \naudttime > ?)\n\nOr:\n\n select * from icpric where audtdate >= ? and (audtdate = ? or \naudttime > ?)\n\nThe fields are as follows:\n\n audtdate | integer | not null\n audttime | integer | not null\n\nI have an index as follows:\n\n \"icpric_audtdate_key\" btree (audtdate, audttime)\n\nThe tables are properly analyzed and vacuumed. I am using PostgreSQL \n8.2.5. The table has ~27,000 rows.\n\nThe first query generates this plan:\n\nPCCYBER=# explain analyze select itemno, audtdate, audttime from icpric \nwhere audtdate > 20071207 or (audtdate = 20071207 and audttime > 23434145);\n QUERY \nPLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on icpric (cost=4.52..8.50 rows=2 width=28) (actual \ntime=0.047..0.052 rows=4 loops=1)\n Recheck Cond: ((audtdate > 20071207) OR ((audtdate = 20071207) AND \n(audttime > 23434145)))\n -> BitmapOr (cost=4.52..4.52 rows=2 width=0) (actual \ntime=0.037..0.037 rows=0 loops=1)\n -> Bitmap Index Scan on icpric_audtdate_key (cost=0.00..2.26 \nrows=1 width=0) (actual time=0.022..0.022 rows=3 loops=1)\n Index Cond: (audtdate > 20071207)\n -> Bitmap Index Scan on icpric_audtdate_key (cost=0.00..2.26 \nrows=1 width=0) (actual time=0.014..0.014 rows=1 loops=1)\n Index Cond: ((audtdate = 20071207) AND (audttime > 23434145))\n Total runtime: 0.096 ms\n(8 rows)\n\nTime: 0.786 ms\n\n\nThe second query generates this plan:\n\nPCCYBER=# explain analyze select itemno, audtdate, audttime from icpric \nwhere audtdate >= 20071207 and (audtdate > 20071207 or audttime > 23434145);\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using icpric_audtdate_key on icpric (cost=0.00..4.27 rows=1 \nwidth=28) (actual time=0.266..0.271 rows=4 loops=1)\n Index Cond: (audtdate >= 20071207)\n Filter: ((audtdate > 20071207) OR (audttime > 23434145))\n Total runtime: 0.299 ms\n(4 rows)\n\nTime: 0.880 ms\n\nSample execution times:\n\nPCCYBER=# select itemno, audtdate, audttime from icpric where audtdate > \n20071207 or (audtdate = 20071207 and audttime > 23434145);\n itemno | audtdate | audttime\n--------------------+----------+----------\n MB-AS-M2-CROSSHAIR | 20071207 | 23434154\n PRT-EP-PHOTO R2400 | 20071208 | 1010323\n PRT-EP-PHOTO R2400 | 20071208 | 1010339\n PRT-EP-PHOTO R2400 | 20071208 | 1010350\n(4 rows)\n\nTime: 0.584 ms\n\nPCCYBER=# select itemno, audtdate, audttime from icpric where audtdate \n >= 20071207 and (audtdate > 20071207 or audttime > 23434145);\n itemno | audtdate | audttime\n--------------------+----------+----------\n MB-AS-M2-CROSSHAIR | 20071207 | 23434154\n PRT-EP-PHOTO R2400 | 20071208 | 1010323\n PRT-EP-PHOTO R2400 | 20071208 | 1010339\n PRT-EP-PHOTO R2400 | 20071208 | 1010350\n(4 rows)\n\nTime: 0.831 ms\n\nI can understand that this is a non-optimal query. What I don't \nunderstand is why two bitmap scans, combined together, should be able to \nout-perform a single index scan, when selecting a very small portion of \nthe table. There are only four result rows. 
I am speculating that the \nindex scan is loading the heap rows to determine whether the Filter: \ncriteria matches, but I do not know if this makes sense? If it does make \nsense, would it be complicated to allow the filter to be processed from \nthe index if all of the fields in the expression are available in the index?\n\nBoth of the queries execute in a satisfactory amount of time - so I \nreally don't care which I use. I thought these results might be \ninteresting to somebody?\n\nThe good thing is that bitmap scan seems to be well optimized.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Sat, 08 Dec 2007 12:18:32 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Combining two bitmap scans out performs a single regular index scan?"
},
{
"msg_contents": "Mark Mielke <[email protected]> writes:\n> To find records after a certain time, I must do one of:\n> select * from icpric where audtdate > ? or (audtdate = ? and \n> audttime > ?)\n\nIn recent releases (at least 8.2, don't remember about 8.1), a row\ncomparison is what you want:\n\n\tWHERE (auddate, adttime) > (?, ?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 Dec 2007 15:46:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Combining two bitmap scans out performs a single regular index\n\tscan?"
},
{
"msg_contents": "Tom Lane wrote:\n> Mark Mielke <[email protected]> writes:\n> \n>> To find records after a certain time, I must do one of:\n>> select * from icpric where audtdate > ? or (audtdate = ? and \n>> audttime > ?)\n>> \n> In recent releases (at least 8.2, don't remember about 8.1), a row\n> comparison is what you want:\n>\n> \tWHERE (auddate, adttime) > (?, ?)\n> \nCool! That's the ticket. :-)\n\nI guess it would be unnecessary to translate the other two queries into \nthis one for the purpose of planning, eh? :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nTom Lane wrote:\n\nMark Mielke <[email protected]> writes:\n \n\nTo find records after a certain time, I must do one of:\n select * from icpric where audtdate > ? or (audtdate = ? and \naudttime > ?)\n \n\nIn recent releases (at least 8.2, don't remember about 8.1), a row\ncomparison is what you want:\n\n\tWHERE (auddate, adttime) > (?, ?)\n \n\nCool! That's the ticket. :-)\n\nI guess it would be unnecessary to translate the other two queries into\nthis one for the purpose of planning, eh? :-)\n\nCheers,\nmark\n-- \nMark Mielke <[email protected]>",
"msg_date": "Sat, 08 Dec 2007 17:07:25 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Combining two bitmap scans out performs a single regular\n\tindex scan?"
}
] |
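For reference, the rewrite Tom suggests looks like this with the column names from the original table; 8.2 implements (a, b) > (x, y) as a true row-wise comparison, so the two-column index icpric_audtdate_key can answer it with a single range scan:

SELECT itemno, audtdate, audttime
FROM icpric
WHERE (audtdate, audttime) > (20071207, 23434145);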
[
{
"msg_contents": "Hi,\n\nI have a small question ... right now on a 8.1.10 version of PostgreSQL I'm \ndoing a vacuum full verbose anaylze a table with 60 304 340 rows in 1155791 \npages and there were 16 835 144 unused item pointers inside and 5 index.\n\nAfter the first treatment of the index (appeared in the verbose) ... the \nvacuum is still working now since 15 hours ... There is none other activity \non this server ... it's a replication server ... so the vacuum is locking \nmost of the replication queries ...\nWhat can I do to get this doing quicker ...?\nIt's a 8Gb server ... with a RAID 10 ... my maintenance_work_mem = 25600.\n\nAny idea ?\nI'm sure that if I delete all the index an rebuild them by hand it'll be \nreally quicker ! (done on another replication server ... took 20 min)\n\nregards,\n-- \nHervé Piedvache\n",
"msg_date": "Sun, 9 Dec 2007 11:42:41 +0100",
"msg_from": "=?utf-8?q?Herv=C3=A9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum full since 15 hours"
},
{
"msg_contents": "Hervé Piedvache wrote:\n> I have a small question ... right now on a 8.1.10 version of PostgreSQL I'm \n> doing a vacuum full verbose anaylze a table with 60 304 340 rows in 1155791 \n> pages and there were 16 835 144 unused item pointers inside and 5 index.\n> \n> After the first treatment of the index (appeared in the verbose) ... the \n> vacuum is still working now since 15 hours ... There is none other activity \n> on this server ... it's a replication server ... so the vacuum is locking \n> most of the replication queries ...\n> What can I do to get this doing quicker ...?\n> It's a 8Gb server ... with a RAID 10 ... my maintenance_work_mem = 25600.\n> \n> Any idea ?\n> I'm sure that if I delete all the index an rebuild them by hand it'll be \n> really quicker ! (done on another replication server ... took 20 min)\n\nYes, it probably will be. IIRC we even suggest doing that in the manual. \nAnother alternative is to run CLUSTER instead of VACUUM FULL. Increasing \nmaintenance_work_mem will make the index builds go faster.\n\nConsider if you really need to run VACUUM FULL, or would plain VACUUM be \nenough.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sun, 09 Dec 2007 16:32:16 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum full since 15 hours"
}
] |
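A sketch of the alternatives Heikki mentions; the table name, index names and memory figure are placeholders rather than values from the original server, and the syntax is the 8.1 form (maintenance_work_mem in kilobytes, CLUSTER index ON table):

-- give the index rebuild phase more memory for this session
SET maintenance_work_mem = 524288;            -- KB, i.e. 512 MB

-- option 1: rewrite the table and all of its indexes in one pass
CLUSTER big_table_pkey ON big_table;
ANALYZE big_table;

-- option 2: drop the indexes first so VACUUM FULL has far less to move,
-- then rebuild them afterwards
DROP INDEX big_table_idx1;                    -- repeat for each of the five indexes
VACUUM FULL VERBOSE big_table;
CREATE INDEX big_table_idx1 ON big_table (some_column);
ANALYZE big_table;

Both paths take an exclusive lock, so the replication queries stay blocked while they run; as Heikki says, plain VACUUM may be enough if shrinking the files on disk is not strictly required.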
[
{
"msg_contents": "Hello,\n\nI've created table:\n\nquaker=> \\d users\n Table \"public.users\"\n Column | Type | Modifiers \n\n-----------+-------------------+----------------------------------------------------\n id | integer | not null default \nnextval('users_id_seq'::regclass)\n user_name | character varying | not null\n extra | integer |\nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id)\n \"users_user_name_unique_text_ops\" UNIQUE, btree (user_name text_ops)\n \"users_user_name_unique_text_pattern_ops\" btree (user_name \ntext_pattern_ops)\n\nFilled with random data (100k records).\n\nI do simple queries using above indexes (asking for existing record).\n\nexplain analyze select id from users where user_name = 'quaker';\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using users_user_name_unique_text_ops on users \n(cost=0.00..8.28 rows=1 width=4) (actual time=0.040..0.043 rows=1 loops=1)\n Index Cond: ((user_name)::text = 'quaker'::text)\n Total runtime: 0.084 ms\n(3 rows)\n\nexplain analyze select id from users where user_name like 'quaker';\n QUERY \nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using users_user_name_unique_text_pattern_ops on users \n(cost=0.00..8.28 rows=1 width=4) (actual time=0.022..0.024 rows=1 loops=1)\n Index Cond: ((user_name)::text ~=~ 'quaker'::text)\n Filter: ((user_name)::text ~~ 'quaker'::text)\n Total runtime: 0.050 ms\n(4 rows)\n\nEverything looks fine.\n\nNow, I've created PL/PGSQL function:\n\ncreate or replace function user_login(\n _v_user_name varchar\n) returns integer as $$\ndeclare\n _i_user_id integer;\nbegin\n select id into _i_user_id from users where user_name = _v_user_name \nlimit 1;\n if found then\n return _i_user_id;\n end if;\n return -1;\nend;\n$$ language plpgsql security definer;\n\nAs shown above, I use \"=\" operator, which should use \nusers_user_name_unique_text_ops index:\n\nexplain analyze select user_login('quaker');\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.320..0.322 \nrows=1 loops=1)\n Total runtime: 0.340 ms\n(2 rows)\n\n\nSome performance loss, but OK. Now I've changed \"=\" into \"LIKE\" to use \nusers_user_name_unique_text_pattern_ops index and rerun query:\n\nexplain analyze select user_login('quaker');\n\n QUERY PLAN \n\n--------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=41.606..41.608 \nrows=1 loops=1)\n Total runtime: 41.629 ms\n(2 rows)\n\nSecond run give 61.061 ms. So no improvements.\n\nWhy PL/PGSQL is unable to proper utilize \nusers_user_name_unique_text_pattern_ops?\n\nquaker=> select version();\n version \n\n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.5 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3 \n20070831 (prerelease) (Ubuntu 4.1.2-16ubuntu1)\n(1 row)\n",
"msg_date": "Mon, 10 Dec 2007 13:58:01 +0100",
"msg_from": "=?ISO-8859-2?Q?Piotr_Gasid=B3o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index on VARCHAR with text_pattern_ops inside PL/PGSQL procedure."
},
{
"msg_contents": "Piotr Gasidło wrote:\n> Some performance loss, but OK. Now I've changed \"=\" into \"LIKE\" to use \n> users_user_name_unique_text_pattern_ops index and rerun query:\n> \n> explain analyze select user_login('quaker');\n> \n> QUERY PLAN\n> -------------------------------------------------------------------------------------- \n> \n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=41.606..41.608 \n> rows=1 loops=1)\n> Total runtime: 41.629 ms\n> (2 rows)\n> \n> Second run give 61.061 ms. So no improvements.\n> \n> Why PL/PGSQL is unable to proper utilize \n> users_user_name_unique_text_pattern_ops?\n\nIt plans the query just once for the pl/pgsql function. That means it \ndoesn't know whether you are passing in a name '%foo' which can't use \nthe index. Since only one plan can be used it has to use a scan of the \ntable.\n\nYou can use EXECUTE to get plpgsql to plan the query each time it is \ncalled. That should let it recognise that it can use the index (if it \ncan, of course).\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 10 Dec 2007 13:04:07 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on VARCHAR with text_pattern_ops inside PL/PGSQL procedure."
},
{
"msg_contents": "Hello\n\nthis is known problem of prepared statements. Prepared statement has\nplan built without knowledge any values and should not be optimal.\n\ntry use dynamic query and statement EXECUTE INTO\n\nRegards\nPavel Stehule\n\n\n\nOn 10/12/2007, Piotr Gasidło <[email protected]> wrote:\n> Hello,\n>\n> I've created table:\n>\n> quaker=> \\d users\n> Table \"public.users\"\n> Column | Type | Modifiers\n>\n> -----------+-------------------+----------------------------------------------------\n> id | integer | not null default\n> nextval('users_id_seq'::regclass)\n> user_name | character varying | not null\n> extra | integer |\n> Indexes:\n> \"users_pkey\" PRIMARY KEY, btree (id)\n> \"users_user_name_unique_text_ops\" UNIQUE, btree (user_name text_ops)\n> \"users_user_name_unique_text_pattern_ops\" btree (user_name\n> text_pattern_ops)\n>\n> Filled with random data (100k records).\n>\n> I do simple queries using above indexes (asking for existing record).\n>\n> explain analyze select id from users where user_name = 'quaker';\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using users_user_name_unique_text_ops on users\n> (cost=0.00..8.28 rows=1 width=4) (actual time=0.040..0.043 rows=1 loops=1)\n> Index Cond: ((user_name)::text = 'quaker'::text)\n> Total runtime: 0.084 ms\n> (3 rows)\n>\n> explain analyze select id from users where user_name like 'quaker';\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using users_user_name_unique_text_pattern_ops on users\n> (cost=0.00..8.28 rows=1 width=4) (actual time=0.022..0.024 rows=1 loops=1)\n> Index Cond: ((user_name)::text ~=~ 'quaker'::text)\n> Filter: ((user_name)::text ~~ 'quaker'::text)\n> Total runtime: 0.050 ms\n> (4 rows)\n>\n> Everything looks fine.\n>\n> Now, I've created PL/PGSQL function:\n>\n> create or replace function user_login(\n> _v_user_name varchar\n> ) returns integer as $$\n> declare\n> _i_user_id integer;\n> begin\n> select id into _i_user_id from users where user_name = _v_user_name\n> limit 1;\n> if found then\n> return _i_user_id;\n> end if;\n> return -1;\n> end;\n> $$ language plpgsql security definer;\n>\n> As shown above, I use \"=\" operator, which should use\n> users_user_name_unique_text_ops index:\n>\n> explain analyze select user_login('quaker');\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.320..0.322\n> rows=1 loops=1)\n> Total runtime: 0.340 ms\n> (2 rows)\n>\n>\n> Some performance loss, but OK. Now I've changed \"=\" into \"LIKE\" to use\n> users_user_name_unique_text_pattern_ops index and rerun query:\n>\n> explain analyze select user_login('quaker');\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=41.606..41.608\n> rows=1 loops=1)\n> Total runtime: 41.629 ms\n> (2 rows)\n>\n> Second run give 61.061 ms. 
So no improvements.\n>\n> Why PL/PGSQL is unable to proper utilize\n> users_user_name_unique_text_pattern_ops?\n>\n> quaker=> select version();\n> version\n>\n> ----------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.2.5 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3\n> 20070831 (prerelease) (Ubuntu 4.1.2-16ubuntu1)\n> (1 row)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Mon, 10 Dec 2007 14:07:58 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on VARCHAR with text_pattern_ops inside PL/PGSQL procedure."
},
{
"msg_contents": "2007/12/10, Piotr Gasidło <[email protected]>:\n> Why PL/PGSQL is unable to proper utilize\n> users_user_name_unique_text_pattern_ops?\n\nI found solution, that satisfies me (EXECUTE is a bit ugly for me).\n\nI've replaced LIKE operator with ~=~ operator. Now PL/PGSQL function\nproperly uses index on SELECT.\n",
"msg_date": "Thu, 13 Dec 2007 12:41:40 +0100",
"msg_from": "\"=?ISO-8859-2?Q?Piotr_Gasid=B3o?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on VARCHAR with text_pattern_ops inside PL/PGSQL procedure."
}
] |
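For completeness, the EXECUTE variant Richard and Pavel describe would look roughly like this; 8.2 has no EXECUTE ... USING, so the value is spliced in with quote_literal(), and since EXECUTE does not set FOUND the result variable is tested instead (otherwise the body follows the original function):

create or replace function user_login(
    _v_user_name varchar
) returns integer as $$
declare
    _i_user_id integer;
begin
    execute 'select id from users where user_name like '
            || quote_literal(_v_user_name) || ' limit 1'
       into _i_user_id;
    -- EXECUTE does not set FOUND, so test the variable
    if _i_user_id is not null then
        return _i_user_id;
    end if;
    return -1;
end;
$$ language plpgsql security definer;

Because the query string is planned at call time with the actual literal, a pattern such as 'quaker%' can use the text_pattern_ops index, at the cost of re-planning on every call.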
[
{
"msg_contents": "\nHi to all.\n\nI'd like to benchmark PG. I'd like to compare sorting performances (time spent, #of disk accesses, # of run produced etc) of the present Replacement Selection (external sorting) algorithm and of a refinement I'm going to implement.\n\nI'm new on PG, I just had the idea of how to possibly get better that algorithm and choosed to test it on PG since it's an open-source DBMS.\n\nI've been searching the web for a benchmark. I suppose TPC-H and Wisconsin could fit, but had problems when trying to use them.\nAny suggestion on a \"good benchmark\"?\nAny tutorial on how to use them?\n\nThanks for your time.\n\nRegards.\nManolo.\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/",
"msg_date": "Tue, 11 Dec 2007 11:06:52 +0000",
"msg_from": "Manolo _ <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarking PG"
},
{
"msg_contents": "On Dec 11, 2007 4:06 AM, Manolo _ <[email protected]> wrote:\n>\n> Hi to all.\n>\n> I'd like to benchmark PG. I'd like to compare sorting performances (time spent, #of disk accesses, # of run produced etc) of the present Replacement Selection (external sorting) algorithm and of a refinement I'm going to implement.\n>\n> I'm new on PG, I just had the idea of how to possibly get better that algorithm and choosed to test it on PG since it's an open-source DBMS.\n>\n> I've been searching the web for a benchmark. I suppose TPC-H and Wisconsin could fit, but had problems when trying to use them.\n> Any suggestion on a \"good benchmark\"?\n> Any tutorial on how to use them?\n>\n> Thanks for your time.\n>\n> Regards.\n> Manolo.\n> _________________________________________________________________\n> Express yourself instantly with MSN Messenger! Download today it's FREE!\n> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nWell, there's pgbench. http://www.postgresql.org/docs/8.3/static/pgbench.html\n\n- Josh / eggyknap\n",
"msg_date": "Tue, 11 Dec 2007 04:16:02 -0700",
"msg_from": "\"Josh Tolley\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PG"
},
{
"msg_contents": "\nHi Josh!\n\nThanks for your reply.\nActually I forgot to mention PGBench, sorry. But I also forgot to mention I'm looking for an \"impartial\"... I mean \"outer\" tool to test PG.\n\nAny suggestion, please?\n\nRegards.\n\n\n----------------------------------------\n> Date: Tue, 11 Dec 2007 04:16:02 -0700\n> From: [email protected]\n> To: [email protected]\n> Subject: Re: [PERFORM] Benchmarking PG\n> CC: [email protected]\n> \n> On Dec 11, 2007 4:06 AM, Manolo _ wrote:\n>>\n>> Hi to all.\n>>\n>> I'd like to benchmark PG. I'd like to compare sorting performances (time spent, #of disk accesses, # of run produced etc) of the present Replacement Selection (external sorting) algorithm and of a refinement I'm going to implement.\n>>\n>> I'm new on PG, I just had the idea of how to possibly get better that algorithm and choosed to test it on PG since it's an open-source DBMS.\n>>\n>> I've been searching the web for a benchmark. I suppose TPC-H and Wisconsin could fit, but had problems when trying to use them.\n>> Any suggestion on a \"good benchmark\"?\n>> Any tutorial on how to use them?\n>>\n>> Thanks for your time.\n>>\n>> Regards.\n>> Manolo.\n>> _________________________________________________________________\n>> Express yourself instantly with MSN Messenger! Download today it's FREE!\n>> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n> \n> Well, there's pgbench. http://www.postgresql.org/docs/8.3/static/pgbench.html\n> \n> - Josh / eggyknap\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/",
"msg_date": "Tue, 11 Dec 2007 11:36:19 +0000",
"msg_from": "Manolo _ <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking PG"
},
{
"msg_contents": "Manolo _ wrote:\n> I'd like to benchmark PG. I'd like to compare sorting performances (time spent, #of disk accesses, # of run produced etc) of the present Replacement Selection (external sorting) algorithm and of a refinement I'm going to implement.\n> \n> I'm new on PG, I just had the idea of how to possibly get better that algorithm and choosed to test it on PG since it's an open-source DBMS.\n> \n> I've been searching the web for a benchmark. I suppose TPC-H and Wisconsin could fit, but had problems when trying to use them.\n> Any suggestion on a \"good benchmark\"?\n> Any tutorial on how to use them?\n\nIf you want to compare sorting performance in particular, running a big \nquery that needs a big sort will be much more useful than a generic DBMS \nbenchmark like TPC-H.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 11 Dec 2007 12:11:29 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PG"
}
] |
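Following Heikki's suggestion, a throwaway workload that forces a big external sort is easy to build with generate_series, and the trace_sort developer option (available when the server is built with TRACE_SORT, which is the default) logs the run and merge activity that the poster wants to compare between algorithms. Table name and sizes below are arbitrary:

CREATE TABLE sort_test AS
    SELECT (random() * 1000000)::int AS k,
           md5(random()::text)       AS payload
    FROM generate_series(1, 10000000);

SET work_mem = '16MB';   -- small enough to push the sort to disk
SET trace_sort = on;     -- run counts and merge passes appear in the server log
EXPLAIN ANALYZE SELECT * FROM sort_test ORDER BY k;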
[
{
"msg_contents": "Hi,\n\n This below query is taking more than 3 minutes to run, as you can see \nfrom the explain plan it is pretty much using all the indexes still it \nis slow, nested loops are taking too long. Is there anyway I can improve \nthis query performance ?\n\n I am using postgres8.2.4. Here are the number of records in each table.\n\nhelix_fdc=# select relname,relpages,reltuples from pg_class where \nrelname in \n('activity','listingactivity','activitytype','listing','address');\n relname | relpages | reltuples\n-----------------+----------+-------------\n listing | 122215 | 8.56868e+06\n listingactivity | 51225 | 8.67308e+06\n address | 244904 | 1.5182e+07\n activity | 733896 | 6.74342e+07\n activitytype | 2 | 120\n\n\n\n\nhelix_fdc=# explain analyze\nhelix_fdc-# select count(listingact0_.listingactivityid) as col_0_0_, \ndate_trunc('day', activity3_.createdate) as col_1_0_,\nhelix_fdc-# activityty1_.activitytypeid as col_2_0_, \nzipcode2_.zipcodeId as col_3_0_\nhelix_fdc-# from listing.listingactivity listingact0_, common.activity \nactivity3_, common.activitytype activityty1_,\nhelix_fdc-# postal.zipcode zipcode2_, common.activitytype \nactivityty5_, listing.listing listing7_,\nhelix_fdc-# listing.address listingadd8_\nhelix_fdc-# where listingact0_.fkactivityid=activity3_.activityId\nhelix_fdc-# and activity3_.fkactivitytypeid=activityty5_.activitytypeid\nhelix_fdc-# and listingact0_.fklistingid=listing7_.listingid\nhelix_fdc-# and listing7_.fkbestaddressid=listingadd8_.addressid\nhelix_fdc-# and (activityty5_.name in ( 'LISTING_ELEMENT_DETAIL', \n'VIRTUALCARD_DISPLAY'))\nhelix_fdc-# and activity3_.fkactivitytypeid=activityty1_.activitytypeid\nhelix_fdc-# and listingadd8_.fkzipcodeid=zipcode2_.zipcodeId\nhelix_fdc-# and (listingadd8_.fkzipcodeid is not null)\nhelix_fdc-# and activity3_.createdate>='2007-12-11 00:00:00'\nhelix_fdc-# group by date_trunc('day', activity3_.createdate) , \nactivityty1_.activitytypeid , zipcode2_.zipcodeId;\n \nQUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3587.71..3588.31 rows=40 width=20) (actual \ntime=214022.231..214025.829 rows=925 loops=1)\n -> Nested Loop (cost=3.52..3587.31 rows=40 width=20) (actual \ntime=464.743..213996.150 rows=3571 loops=1)\n -> Nested Loop (cost=3.52..3574.01 rows=40 width=24) (actual \ntime=461.514..213891.251 rows=3571 loops=1)\n -> Nested Loop (cost=3.52..3469.18 rows=41 width=24) \n(actual time=421.683..208158.769 rows=3571 loops=1)\n -> Nested Loop (cost=3.52..3299.05 rows=41 \nwidth=24) (actual time=321.155..91460.769 rows=3586 loops=1)\n -> Nested Loop (cost=3.52..3147.50 rows=41 \nwidth=24) (actual time=188.756..821.893 rows=3586 loops=1)\n -> Hash Join (cost=3.52..880.59 \nrows=321 width=20) (actual time=103.689..325.236 rows=4082 loops=1)\n Hash Cond: \n(activity3_.fkactivitytypeid = activityty5_.activitytypeid)\n -> Index Scan using \nidx_activity_createdate on activity activity3_ (cost=0.00..801.68 \nrows=19247 width=16) (actual time=103.495..244.987 rows=16918 loops=1)\n Index Cond: (createdate >= \n'2007-12-11 00:00:00'::timestamp without time zone)\n -> Hash (cost=3.50..3.50 rows=2 \nwidth=4) (actual time=0.148..0.148 rows=2 loops=1)\n -> Seq Scan on \nactivitytype activityty5_ (cost=0.00..3.50 rows=2 width=4) (actual \ntime=0.062..0.128 rows=2 loops=1)\n Filter: (name = ANY 
\n('{LISTING_ELEMENT_DETAIL,VIRTUALCARD_DISPLAY}'::text[]))\n -> Index Scan using \nidx_listingactivity_fkactivityid on listingactivity listingact0_ \n(cost=0.00..7.05 rows=1 width=12) (actual time=0.097..0.108 rows=1 \nloops=4082)\n Index Cond: \n(listingact0_.fkactivityid = activity3_.activityid)\n -> Index Scan using pk_listing_listingid on \nlisting listing7_ (cost=0.00..3.68 rows=1 width=8) (actual \ntime=25.216..25.260 rows=1 loops=3586)\n Index Cond: (listingact0_.fklistingid = \nlisting7_.listingid)\n -> Index Scan using pk_address_addressid on \naddress listingadd8_ (cost=0.00..4.14 rows=1 width=8) (actual \ntime=32.508..32.527 rows=1 loops=3586)\n Index Cond: (listing7_.fkbestaddressid = \nlistingadd8_.addressid)\n Filter: (fkzipcodeid IS NOT NULL)\n -> Index Scan using pk_zipcode_zipcodeid on zipcode \nzipcode2_ (cost=0.00..2.54 rows=1 width=4) (actual time=1.586..1.590 \nrows=1 loops=3571)\n Index Cond: (listingadd8_.fkzipcodeid = \nzipcode2_.zipcodeid)\n -> Index Scan using pk_activitytype_activitytypeid on \nactivitytype activityty1_ (cost=0.00..0.32 rows=1 width=4) (actual \ntime=0.007..0.011 rows=1 loops=3571)\n Index Cond: (activity3_.fkactivitytypeid = \nactivityty1_.activitytypeid)\n Total runtime: 214029.185 ms\n(25 rows)\n\n",
"msg_date": "Tue, 11 Dec 2007 10:02:26 -0500",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query"
}
] |
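The plan itself is telling: each probe into listing and address costs roughly 25-32 ms and is repeated about 3,600 times, which points at random I/O in the inner index scans. One standard check, not taken from the thread, is to disable nested loops for a single session and see whether hash joins fare better; the query below is a cut-down stand-in for the original seven-table statement:

BEGIN;
SET LOCAL enable_nestloop = off;
EXPLAIN ANALYZE
SELECT count(*)
FROM listing.listingactivity la
JOIN common.activity a ON la.fkactivityid = a.activityid
WHERE a.createdate >= '2007-12-11 00:00:00';
ROLLBACK;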
[
{
"msg_contents": "Hi,\n\nI am doing a performance benchmarking test by using benchmarkSQL tool on\npostgresql 8.2.4.I need to tune the parameters to achieve an optimal\nperformance of the postgresql database.\n\nI have installed postgresql 8.2.4 on RHEL AS4. It is a DELL Optiplex\nGX620 PC with 4GB RAM.\n\nPlease suggest me which parameters I need to tune and what would be the\npossible values for the parameters. \n\nAppreciate if I would get a quick response.\n\n \n\nThanks & Regards,\n\nSimanchala\n\n\n\n\n\n\n\n\n\n\nHi,\nI am doing a performance benchmarking test by using\nbenchmarkSQL tool on postgresql 8.2.4.I need to tune the parameters to achieve\nan optimal performance of the postgresql database.\nI have installed postgresql 8.2.4 on RHEL AS4. It is a DELL\nOptiplex GX620 PC with 4GB RAM.\nPlease suggest me which parameters I need to tune and what\nwould be the possible values for the parameters. \nAppreciate if I would get a quick response.\n \nThanks & Regards,\nSimanchala",
"msg_date": "Wed, 12 Dec 2007 09:44:49 +0530",
"msg_from": "\"Bebarta, Simanchala\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help on parameters and their values to tune the postgresql\n\tdatabase"
},
{
"msg_contents": "Bebarta, Simanchala wrote:\n> I am doing a performance benchmarking test by using benchmarkSQL tool on\n> postgresql 8.2.4.I need to tune the parameters to achieve an optimal\n> performance of the postgresql database.\n> \n> I have installed postgresql 8.2.4 on RHEL AS4. It is a DELL Optiplex\n> GX620 PC with 4GB RAM.\n> \n> Please suggest me which parameters I need to tune and what would be the\n> possible values for the parameters. \n> \n> Appreciate if I would get a quick response.\n\nWhy are you doing the benchmark? That's going to have a big impact on \nhow you should tune. For example, how much do you value data integrity, \nin case of a crash or power outage?\n\nHow many warehouses are you planning to use? The fact that you didn't \nmention anything about your I/O hardware suggests that you're planning \nto do a test that fits entirely in cache. If that's the case, you're \ngoing to be limited by the rate you can fsync the commit WAL records to \ndisk. If you don't care about data integrity, you can turn fsync=off, to \neliminate that bottleneck. In that case you'll also want to turn \nfull_page_writes=off; that'll reduce the CPU overhead somewhat.\n\nAnother important factor is how long you're going to run the test. If \nyou run it for more than say 10 minutes, you'll have to run vacuum to \nkeep the table sizes in check. If it's all in memory, as I presume, \nautovacuum might be enough for the task.\n\nPS. You might want to talk to these guys from Unisys:\nhttp://www.pgcon.org/2007/schedule/events/16.en.html\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 14 Dec 2007 11:03:56 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help on parameters and their values to tune the\n\tpostgresql database"
}
] |
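The trade-offs Heikki describes map onto a handful of postgresql.conf lines; the values below are only appropriate for a disposable benchmark database, because fsync = off means the cluster cannot be trusted after a crash or power loss:

fsync = off               # removes the commit-time WAL flush bottleneck
full_page_writes = off    # trims some WAL/CPU overhead
autovacuum = on           # off by default in 8.2; keeps table sizes in check on longer runs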
[
{
"msg_contents": "Hi,\n\nI have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running \npostgres 7.4.3. This has been recompiled on the server for 64 stored \nprocedure parameters, (I assume this makes postgres 64 bit but are not \nsure). When the server gets under load from database connections \nexecuting reads, lets say 20 - 40 concurrent reads, the CPU's seem to \nlimit at about 30-35% usage with no iowait reported. If I run a simple \nselect at this time it takes 5 seconds, the same query runs in 300 \nmillis when the server is not under load so it seems that the database \nis not performing well even though there is plenty of spare CPU. There \ndoes not appear to be large amounts of disk IO and my database is about \n5.5G so this should fit comfortably in RAM.\n\nchanges to postgresql.sql:\n\nmax_connections = 500\nshared_buffers = 96000\nsort_mem = 10240\neffective_cache_size = 1000000\n\nDoes anyone have any ideas what my bottle neck might be and what I can \ndo about it?\n\nThanks for any help.\n\nMatthew.\n",
"msg_date": "Wed, 12 Dec 2007 10:16:43 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Limited performance on multi core server"
},
{
"msg_contents": "On Wed, Dec 12, 2007 at 10:16:43AM +0000, Matthew Lunnon wrote:\n> Does anyone have any ideas what my bottle neck might be and what I can do \n> about it?\n\nYour bottleneck is that you are using a very old version of PostgreSQL. Try\n8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in this\nkind of situation.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 12 Dec 2007 11:39:26 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n> postgres 7.4.3. This has been recompiled on the server for 64 stored\n> procedure parameters, (I assume this makes postgres 64 bit but are not\n> sure). When the server gets under load from database connections\n> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n> limit at about 30-35% usage with no iowait reported. If I run a simple\n> select at this time it takes 5 seconds, the same query runs in 300\n> millis when the server is not under load so it seems that the database\n> is not performing well even though there is plenty of spare CPU. There\n> does not appear to be large amounts of disk IO and my database is about\n> 5.5G so this should fit comfortably in RAM.\n>\n> changes to postgresql.sql:\n>\n> max_connections = 500\n> shared_buffers = 96000\n> sort_mem = 10240\n> effective_cache_size = 1000000\n>\n> Does anyone have any ideas what my bottle neck might be and what I can\n> do about it?\n\nYou might want to lower shared_buffers to a lower value. Mine is set\nat 32768. Is your db performing complex sort? Remember that this value\nis per connection. Maby 1024. effective_cache_size should also be\nlowered to something like 32768. As far as I understand shared_buffers\nand effective_cache_size have to be altered \"in reverse\", ie. when\nlowering one the other can be raised.\n\nHTH.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n",
"msg_date": "Wed, 12 Dec 2007 11:45:24 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Hi Matthew,\n\nI know exactly what you experience.\nWe had a 4-way DC Opteron and Pg 7.4 too.\nYou should monitor context switches.\n\n\nFirst suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n\nYou need to limit the number of concurrent queries to less than 8 (8\ncores) if you need to stay with Pg 7.4.\n\nThe memory setting is looking good to me. I would increase sort_mem and\neffective_cache_size, but this would solve your problem.\n\nBest regards\nSven.\n\n\n\nMatthew Lunnon schrieb:\n> Hi,\n> \n> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n> postgres 7.4.3. This has been recompiled on the server for 64 stored\n> procedure parameters, (I assume this makes postgres 64 bit but are not\n> sure). When the server gets under load from database connections\n> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n> limit at about 30-35% usage with no iowait reported. If I run a simple\n> select at this time it takes 5 seconds, the same query runs in 300\n> millis when the server is not under load so it seems that the database\n> is not performing well even though there is plenty of spare CPU. There\n> does not appear to be large amounts of disk IO and my database is about\n> 5.5G so this should fit comfortably in RAM.\n> \n> changes to postgresql.sql:\n> \n> max_connections = 500\n> shared_buffers = 96000\n> sort_mem = 10240\n> effective_cache_size = 1000000\n> \n> Does anyone have any ideas what my bottle neck might be and what I can\n> do about it?\n> \n> Thanks for any help.\n> \n> Matthew.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany\n",
"msg_date": "Wed, 12 Dec 2007 11:48:31 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
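Monitoring context switches, as Sven suggests, is a one-liner on Linux while the 20-40 concurrent reads are running; the cs column is the number to watch:

vmstat 1    # the "cs" column reports context switches per second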
{
"msg_contents": "> > Does anyone have any ideas what my bottle neck might be and what I can do\n> > about it?\n>\n> Your bottleneck is that you are using a very old version of PostgreSQL. Try\n> 8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in this\n> kind of situation.\n\nYou won't know until you've seen what queries are performed. Changing\ndb-parameters is a short-term solution, upgrading to a newer version\ndoes require some planning.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n",
"msg_date": "Wed, 12 Dec 2007 11:48:31 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Limiting the queries was our initial thought but we then hit a problem \nwith connection pooling which didn't implement a fifo algorithm. Looks \nlike I'll have to look deeper into the connection pooling.\n\nSo you think the problem might be context switching on the server, I'll \ntake a closer look at the this\n\nThanks\n\nMatthew\n\nSven Geisler wrote:\n> Hi Matthew,\n>\n> I know exactly what you experience.\n> We had a 4-way DC Opteron and Pg 7.4 too.\n> You should monitor context switches.\n>\n>\n> First suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n>\n> You need to limit the number of concurrent queries to less than 8 (8\n> cores) if you need to stay with Pg 7.4.\n>\n> The memory setting is looking good to me. I would increase sort_mem and\n> effective_cache_size, but this would solve your problem.\n>\n> Best regards\n> Sven.\n>\n>\n>\n> Matthew Lunnon schrieb:\n> \n>> Hi,\n>>\n>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>> sure). When the server gets under load from database connections\n>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>> select at this time it takes 5 seconds, the same query runs in 300\n>> millis when the server is not under load so it seems that the database\n>> is not performing well even though there is plenty of spare CPU. There\n>> does not appear to be large amounts of disk IO and my database is about\n>> 5.5G so this should fit comfortably in RAM.\n>>\n>> changes to postgresql.sql:\n>>\n>> max_connections = 500\n>> shared_buffers = 96000\n>> sort_mem = 10240\n>> effective_cache_size = 1000000\n>>\n>> Does anyone have any ideas what my bottle neck might be and what I can\n>> do about it?\n>>\n>> Thanks for any help.\n>>\n>> Matthew.\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>> \n>\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\n\nLimiting the queries was our initial thought but we then hit a problem\nwith connection pooling which didn't implement a fifo algorithm. Looks\nlike I'll have to look deeper into the connection pooling.\n\nSo you think the problem might be context switching on the server, I'll\ntake a closer look at the this\n\nThanks\n\nMatthew\n\nSven Geisler wrote:\n\nHi Matthew,\n\nI know exactly what you experience.\nWe had a 4-way DC Opteron and Pg 7.4 too.\nYou should monitor context switches.\n\n\nFirst suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n\nYou need to limit the number of concurrent queries to less than 8 (8\ncores) if you need to stay with Pg 7.4.\n\nThe memory setting is looking good to me. I would increase sort_mem and\neffective_cache_size, but this would solve your problem.\n\nBest regards\nSven.\n\n\n\nMatthew Lunnon schrieb:\n \n\nHi,\n\nI have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\npostgres 7.4.3. This has been recompiled on the server for 64 stored\nprocedure parameters, (I assume this makes postgres 64 bit but are not\nsure). 
When the server gets under load from database connections\nexecuting reads, lets say 20 - 40 concurrent reads, the CPU's seem to\nlimit at about 30-35% usage with no iowait reported. If I run a simple\nselect at this time it takes 5 seconds, the same query runs in 300\nmillis when the server is not under load so it seems that the database\nis not performing well even though there is plenty of spare CPU. There\ndoes not appear to be large amounts of disk IO and my database is about\n5.5G so this should fit comfortably in RAM.\n\nchanges to postgresql.sql:\n\nmax_connections = 500\nshared_buffers = 96000\nsort_mem = 10240\neffective_cache_size = 1000000\n\nDoes anyone have any ideas what my bottle neck might be and what I can\ndo about it?\n\nThanks for any help.\n\nMatthew.\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n\n\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Wed, 12 Dec 2007 11:06:19 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Ah I was afraid of that. Maybe I'll have to come out of the dark ages.\n\nMatthew\n\nSteinar H. Gunderson wrote:\n> On Wed, Dec 12, 2007 at 10:16:43AM +0000, Matthew Lunnon wrote:\n> \n>> Does anyone have any ideas what my bottle neck might be and what I can do \n>> about it?\n>> \n>\n> Your bottleneck is that you are using a very old version of PostgreSQL. Try\n> 8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in this\n> kind of situation.\n>\n> /* Steinar */\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\n\nAh I was afraid of that. Maybe I'll have to come out of the dark ages.\n\nMatthew\n\nSteinar H. Gunderson wrote:\n\nOn Wed, Dec 12, 2007 at 10:16:43AM +0000, Matthew Lunnon wrote:\n \n\nDoes anyone have any ideas what my bottle neck might be and what I can do \nabout it?\n \n\n\nYour bottleneck is that you are using a very old version of PostgreSQL. Try\n8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in this\nkind of situation.\n\n/* Steinar */\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Wed, 12 Dec 2007 11:08:03 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Thanks for the information Claus, Why would reducing the effective \ncache size help the processor usage? It seems that there is plenty of \nresources on the box although I can see that 10MB of sort space could \nmount up if we had 500 connections but at the moment we do not have \nanything like that number.\n\nThanks\nMatthew.\n\nClaus Guttesen wrote:\n>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>> sure). When the server gets under load from database connections\n>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>> select at this time it takes 5 seconds, the same query runs in 300\n>> millis when the server is not under load so it seems that the database\n>> is not performing well even though there is plenty of spare CPU. There\n>> does not appear to be large amounts of disk IO and my database is about\n>> 5.5G so this should fit comfortably in RAM.\n>>\n>> changes to postgresql.sql:\n>>\n>> max_connections = 500\n>> shared_buffers = 96000\n>> sort_mem = 10240\n>> effective_cache_size = 1000000\n>>\n>> Does anyone have any ideas what my bottle neck might be and what I can\n>> do about it?\n>> \n>\n> You might want to lower shared_buffers to a lower value. Mine is set\n> at 32768. Is your db performing complex sort? Remember that this value\n> is per connection. Maby 1024. effective_cache_size should also be\n> lowered to something like 32768. As far as I understand shared_buffers\n> and effective_cache_size have to be altered \"in reverse\", ie. when\n> lowering one the other can be raised.\n>\n> HTH.\n>\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\nThanks for the information Claus, Why would reducing the effective\ncache size help the processor usage? It seems that there is plenty of\nresources on the box although I can see that 10MB of sort space could\nmount up if we had 500 connections but at the moment we do not have\nanything like that number.\n\nThanks\nMatthew.\n\nClaus Guttesen wrote:\n\n\nI have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\npostgres 7.4.3. This has been recompiled on the server for 64 stored\nprocedure parameters, (I assume this makes postgres 64 bit but are not\nsure). When the server gets under load from database connections\nexecuting reads, lets say 20 - 40 concurrent reads, the CPU's seem to\nlimit at about 30-35% usage with no iowait reported. If I run a simple\nselect at this time it takes 5 seconds, the same query runs in 300\nmillis when the server is not under load so it seems that the database\nis not performing well even though there is plenty of spare CPU. There\ndoes not appear to be large amounts of disk IO and my database is about\n5.5G so this should fit comfortably in RAM.\n\nchanges to postgresql.sql:\n\nmax_connections = 500\nshared_buffers = 96000\nsort_mem = 10240\neffective_cache_size = 1000000\n\nDoes anyone have any ideas what my bottle neck might be and what I can\ndo about it?\n \n\n\nYou might want to lower shared_buffers to a lower value. Mine is set\nat 32768. Is your db performing complex sort? Remember that this value\nis per connection. Maby 1024. effective_cache_size should also be\nlowered to something like 32768. 
As far as I understand shared_buffers\nand effective_cache_size have to be altered \"in reverse\", ie. when\nlowering one the other can be raised.\n\nHTH.\n\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Wed, 12 Dec 2007 11:12:42 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "> Thanks for the information Claus, Why would reducing the effective cache\n> size help the processor usage? It seems that there is plenty of resources\n> on the box although I can see that 10MB of sort space could mount up if we\n> had 500 connections but at the moment we do not have anything like that\n> number.\n\nThere is a discussion at\nhttp://archives.postgresql.org/pgsql-performance/2005-06/msg00477.php\nwhich can give a clearer picture. But in general rasing values can be\ncounterproductive.\n\nIf you know that you won't need more than 250 connections that would\nbe a reasonable value. You may wan't to read\nhttp://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html as\nwell. This has some rules of thumb on the settings for 7.4.x.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n",
"msg_date": "Wed, 12 Dec 2007 12:39:58 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "On Wed, 12 Dec 2007, Matthew Lunnon wrote:\n\n> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running \n> postgres 7.4.3.\n> shared_buffers = 96000\n\nAs you've already been told repeatedly 7.4 is a release from long before \noptimizations to work well on a multi-core server like this. I'll only \nadd that because of those problems, larger values of shared_buffers were \nsometimes counter-productive with these old versions. You should try \nreducing that to the 10,000-50000 range and see if things improve; that's \nthe general range that was effective with 7.4. Continue to set \neffective_cache_size to a large value so that the optimizer knows how much \nRAM is really available.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 12 Dec 2007 06:48:48 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
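A minimal postgresql.conf sketch of the 7.4-era advice above for a 16 GB box; the exact numbers are assumptions for illustration, not values tested in this thread (7.4 takes shared_buffers and effective_cache_size in 8 kB pages and sort_mem in kB):

    max_connections = 200          # bound the pool rather than allowing 500 backends
    shared_buffers = 30000         # ~235 MB, inside the 10000-50000 range that worked for 7.4
    sort_mem = 10240               # 10 MB per sort operation, per backend
    effective_cache_size = 1500000 # roughly 12 GB, so the planner still trusts the OS cache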
{
"msg_contents": "Hi Matthew,\n\nThe context switching isn't the issue. This is an indicator which is\nuseful to identify your problem.\n\nWhat kind of application do you running? Can you limit the database clients?\n\nWe have a web application based on apache running. We have a limit\nnumber of apache processes which are able to connect the database.\nWe use that to reduce the number of concurrent queries.\nThe apache does the rest for us - the apache does queue incoming http\nrequest if all workers are busy. The configuration helps us to solve the\n performance issue with to much concurrent queries.\n\nI assume that you already checked you application and each sql query is\nnecessary and tuned as best as you can.\n\nRegards\nSven.\n\nMatthew Lunnon schrieb:\n> Limiting the queries was our initial thought but we then hit a problem\n> with connection pooling which didn't implement a fifo algorithm. Looks\n> like I'll have to look deeper into the connection pooling.\n> \n> So you think the problem might be context switching on the server, I'll\n> take a closer look at the this\n> \n> Thanks\n> \n> Matthew\n> \n> Sven Geisler wrote:\n>> Hi Matthew,\n>>\n>> I know exactly what you experience.\n>> We had a 4-way DC Opteron and Pg 7.4 too.\n>> You should monitor context switches.\n>>\n>>\n>> First suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n>>\n>> You need to limit the number of concurrent queries to less than 8 (8\n>> cores) if you need to stay with Pg 7.4.\n>>\n>> The memory setting is looking good to me. I would increase sort_mem and\n>> effective_cache_size, but this would solve your problem.\n>>\n>> Best regards\n>> Sven.\n>>\n>>\n>>\n>> Matthew Lunnon schrieb:\n>> \n>>> Hi,\n>>>\n>>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>>> sure). When the server gets under load from database connections\n>>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>>> select at this time it takes 5 seconds, the same query runs in 300\n>>> millis when the server is not under load so it seems that the database\n>>> is not performing well even though there is plenty of spare CPU. There\n>>> does not appear to be large amounts of disk IO and my database is about\n>>> 5.5G so this should fit comfortably in RAM.\n>>>\n>>> changes to postgresql.sql:\n>>>\n>>> max_connections = 500\n>>> shared_buffers = 96000\n>>> sort_mem = 10240\n>>> effective_cache_size = 1000000\n>>>\n>>> Does anyone have any ideas what my bottle neck might be and what I can\n>>> do about it?\n>>>\n>>> Thanks for any help.\n>>>\n>>> Matthew.\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>> \n>>\n>> \n> \n> -- \n> Matthew Lunnon\n> Technical Consultant\n> RWA Ltd.\n> \n> [email protected]\n> Tel: +44 (0)29 2081 5056\n> www.rwa-net.co.uk\n> --\n> \n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany\n",
"msg_date": "Wed, 12 Dec 2007 12:50:30 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
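A sketch of the web-tier throttling described above, using Apache 2.2 prefork directives; the values are assumptions for illustration, not the thread participants' actual settings:

    # httpd.conf (prefork MPM)
    MaxClients     40       # at most 40 busy workers, hence at most ~40 concurrent DB queries
    ListenBacklog  1024     # surplus requests queue at the web server instead of the database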
{
"msg_contents": "Hi Matthew,\n\nI remember that I also an issue with AMD Opterons before Pg 8.1\nThere is a specific Opteron behaviour on shared memory locks which adds\na extra \"penalty\" during the execution time for Pg code before 8.1.\nI can you provide my patch for Pg 8.0 which should be adaptable for Pg\n7.4 if you can compile PostgreSQL.\n\nBut if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\nup for your concurrent queries will be great.\n\nSven.\n\nMatthew Lunnon schrieb:\n> Hi,\n> \n> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n> postgres 7.4.3. This has been recompiled on the server for 64 stored\n> procedure parameters, (I assume this makes postgres 64 bit but are not\n> sure). When the server gets under load from database connections\n> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n> limit at about 30-35% usage with no iowait reported. If I run a simple\n> select at this time it takes 5 seconds, the same query runs in 300\n> millis when the server is not under load so it seems that the database\n> is not performing well even though there is plenty of spare CPU. There\n> does not appear to be large amounts of disk IO and my database is about\n> 5.5G so this should fit comfortably in RAM.\n> \n> changes to postgresql.sql:\n> \n> max_connections = 500\n> shared_buffers = 96000\n> sort_mem = 10240\n> effective_cache_size = 1000000\n> \n> Does anyone have any ideas what my bottle neck might be and what I can\n> do about it?\n> \n> Thanks for any help.\n> \n> Matthew.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany\n",
"msg_date": "Wed, 12 Dec 2007 13:07:58 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Hi Sven,\n\nyes the patch would be great if you could send it to me, we have already \nhad to compile postgres to up the number of function parameters from 32 \nto 64.\n\nMeanwhile I will try and persuade my colleagues to consider the upgrade \noption.\n\nThanks\nMatthew\n\nSven Geisler wrote:\n> Hi Matthew,\n>\n> I remember that I also an issue with AMD Opterons before Pg 8.1\n> There is a specific Opteron behaviour on shared memory locks which adds\n> a extra \"penalty\" during the execution time for Pg code before 8.1.\n> I can you provide my patch for Pg 8.0 which should be adaptable for Pg\n> 7.4 if you can compile PostgreSQL.\n>\n> But if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\n> up for your concurrent queries will be great.\n>\n> Sven.\n>\n> Matthew Lunnon schrieb:\n> \n>> Hi,\n>>\n>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>> sure). When the server gets under load from database connections\n>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>> select at this time it takes 5 seconds, the same query runs in 300\n>> millis when the server is not under load so it seems that the database\n>> is not performing well even though there is plenty of spare CPU. There\n>> does not appear to be large amounts of disk IO and my database is about\n>> 5.5G so this should fit comfortably in RAM.\n>>\n>> changes to postgresql.sql:\n>>\n>> max_connections = 500\n>> shared_buffers = 96000\n>> sort_mem = 10240\n>> effective_cache_size = 1000000\n>>\n>> Does anyone have any ideas what my bottle neck might be and what I can\n>> do about it?\n>>\n>> Thanks for any help.\n>>\n>> Matthew.\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>> \n>\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\n\nHi Sven,\n\nyes the patch would be great if you could send it to me, we have\nalready had to compile postgres to up the number of function parameters\nfrom 32 to 64.\n\nMeanwhile I will try and persuade my colleagues to consider the upgrade\noption.\n\nThanks\nMatthew\n\nSven Geisler wrote:\n\nHi Matthew,\n\nI remember that I also an issue with AMD Opterons before Pg 8.1\nThere is a specific Opteron behaviour on shared memory locks which adds\na extra \"penalty\" during the execution time for Pg code before 8.1.\nI can you provide my patch for Pg 8.0 which should be adaptable for Pg\n7.4 if you can compile PostgreSQL.\n\nBut if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\nup for your concurrent queries will be great.\n\nSven.\n\nMatthew Lunnon schrieb:\n \n\nHi,\n\nI have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\npostgres 7.4.3. This has been recompiled on the server for 64 stored\nprocedure parameters, (I assume this makes postgres 64 bit but are not\nsure). When the server gets under load from database connections\nexecuting reads, lets say 20 - 40 concurrent reads, the CPU's seem to\nlimit at about 30-35% usage with no iowait reported. 
If I run a simple\nselect at this time it takes 5 seconds, the same query runs in 300\nmillis when the server is not under load so it seems that the database\nis not performing well even though there is plenty of spare CPU. There\ndoes not appear to be large amounts of disk IO and my database is about\n5.5G so this should fit comfortably in RAM.\n\nchanges to postgresql.sql:\n\nmax_connections = 500\nshared_buffers = 96000\nsort_mem = 10240\neffective_cache_size = 1000000\n\nDoes anyone have any ideas what my bottle neck might be and what I can\ndo about it?\n\nThanks for any help.\n\nMatthew.\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n\n\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Wed, 12 Dec 2007 12:27:24 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Hi Sven,\n\nYes I have done a reasonable amount of query tuning. The application is \na web service using an apache/resin combination at the front end, we \nhave thought about using resin threads to limit the number of \nconnections but are worried about backing up connections in apache and \ngetting some overflow here. But some kind of limiting of connections is \nprobably required.\n\nThanks\nMatthew\n\nSven Geisler wrote:\n> Hi Matthew,\n>\n> The context switching isn't the issue. This is an indicator which is\n> useful to identify your problem.\n>\n> What kind of application do you running? Can you limit the database clients?\n>\n> We have a web application based on apache running. We have a limit\n> number of apache processes which are able to connect the database.\n> We use that to reduce the number of concurrent queries.\n> The apache does the rest for us - the apache does queue incoming http\n> request if all workers are busy. The configuration helps us to solve the\n> performance issue with to much concurrent queries.\n>\n> I assume that you already checked you application and each sql query is\n> necessary and tuned as best as you can.\n>\n> Regards\n> Sven.\n>\n> Matthew Lunnon schrieb:\n> \n>> Limiting the queries was our initial thought but we then hit a problem\n>> with connection pooling which didn't implement a fifo algorithm. Looks\n>> like I'll have to look deeper into the connection pooling.\n>>\n>> So you think the problem might be context switching on the server, I'll\n>> take a closer look at the this\n>>\n>> Thanks\n>>\n>> Matthew\n>>\n>> Sven Geisler wrote:\n>> \n>>> Hi Matthew,\n>>>\n>>> I know exactly what you experience.\n>>> We had a 4-way DC Opteron and Pg 7.4 too.\n>>> You should monitor context switches.\n>>>\n>>>\n>>> First suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n>>>\n>>> You need to limit the number of concurrent queries to less than 8 (8\n>>> cores) if you need to stay with Pg 7.4.\n>>>\n>>> The memory setting is looking good to me. I would increase sort_mem and\n>>> effective_cache_size, but this would solve your problem.\n>>>\n>>> Best regards\n>>> Sven.\n>>>\n>>>\n>>>\n>>> Matthew Lunnon schrieb:\n>>> \n>>> \n>>>> Hi,\n>>>>\n>>>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>>>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>>>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>>>> sure). When the server gets under load from database connections\n>>>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>>>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>>>> select at this time it takes 5 seconds, the same query runs in 300\n>>>> millis when the server is not under load so it seems that the database\n>>>> is not performing well even though there is plenty of spare CPU. 
There\n>>>> does not appear to be large amounts of disk IO and my database is about\n>>>> 5.5G so this should fit comfortably in RAM.\n>>>>\n>>>> changes to postgresql.sql:\n>>>>\n>>>> max_connections = 500\n>>>> shared_buffers = 96000\n>>>> sort_mem = 10240\n>>>> effective_cache_size = 1000000\n>>>>\n>>>> Does anyone have any ideas what my bottle neck might be and what I can\n>>>> do about it?\n>>>>\n>>>> Thanks for any help.\n>>>>\n>>>> Matthew.\n>>>>\n>>>> ---------------------------(end of broadcast)---------------------------\n>>>> TIP 6: explain analyze is your friend\n>>>> \n>>>> \n>>> \n>>> \n>> -- \n>> Matthew Lunnon\n>> Technical Consultant\n>> RWA Ltd.\n>>\n>> [email protected]\n>> Tel: +44 (0)29 2081 5056\n>> www.rwa-net.co.uk\n>> --\n>>\n>> \n>\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\nHi Sven,\n\nYes I have done a reasonable amount of query tuning. The application\nis a web service using an apache/resin combination at the front end, we\nhave thought about using resin threads to limit the number of\nconnections but are worried about backing up connections in apache and\ngetting some overflow here. But some kind of limiting of connections\nis probably required.\n\nThanks\nMatthew\n\nSven Geisler wrote:\n\nHi Matthew,\n\nThe context switching isn't the issue. This is an indicator which is\nuseful to identify your problem.\n\nWhat kind of application do you running? Can you limit the database clients?\n\nWe have a web application based on apache running. We have a limit\nnumber of apache processes which are able to connect the database.\nWe use that to reduce the number of concurrent queries.\nThe apache does the rest for us - the apache does queue incoming http\nrequest if all workers are busy. The configuration helps us to solve the\n performance issue with to much concurrent queries.\n\nI assume that you already checked you application and each sql query is\nnecessary and tuned as best as you can.\n\nRegards\nSven.\n\nMatthew Lunnon schrieb:\n \n\nLimiting the queries was our initial thought but we then hit a problem\nwith connection pooling which didn't implement a fifo algorithm. Looks\nlike I'll have to look deeper into the connection pooling.\n\nSo you think the problem might be context switching on the server, I'll\ntake a closer look at the this\n\nThanks\n\nMatthew\n\nSven Geisler wrote:\n \n\nHi Matthew,\n\nI know exactly what you experience.\nWe had a 4-way DC Opteron and Pg 7.4 too.\nYou should monitor context switches.\n\n\nFirst suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n\nYou need to limit the number of concurrent queries to less than 8 (8\ncores) if you need to stay with Pg 7.4.\n\nThe memory setting is looking good to me. I would increase sort_mem and\neffective_cache_size, but this would solve your problem.\n\nBest regards\nSven.\n\n\n\nMatthew Lunnon schrieb:\n \n \n\nHi,\n\nI have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\npostgres 7.4.3. This has been recompiled on the server for 64 stored\nprocedure parameters, (I assume this makes postgres 64 bit but are not\nsure). When the server gets under load from database connections\nexecuting reads, lets say 20 - 40 concurrent reads, the CPU's seem to\nlimit at about 30-35% usage with no iowait reported. 
If I run a simple\nselect at this time it takes 5 seconds, the same query runs in 300\nmillis when the server is not under load so it seems that the database\nis not performing well even though there is plenty of spare CPU. There\ndoes not appear to be large amounts of disk IO and my database is about\n5.5G so this should fit comfortably in RAM.\n\nchanges to postgresql.sql:\n\nmax_connections = 500\nshared_buffers = 96000\nsort_mem = 10240\neffective_cache_size = 1000000\n\nDoes anyone have any ideas what my bottle neck might be and what I can\ndo about it?\n\nThanks for any help.\n\nMatthew.\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n \n\n \n \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n \n\n\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Wed, 12 Dec 2007 12:29:05 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Hi Matthew,\n\nPlease not that is no official patch, but it works with our Opteron\nserver without any problems. You should run the regression test after\nyou adapt it for Pg 7.4.\n\nSven.\n\nMatthew Lunnon schrieb:\n> Hi Sven,\n> \n> yes the patch would be great if you could send it to me, we have already\n> had to compile postgres to up the number of function parameters from 32\n> to 64.\n> \n> Meanwhile I will try and persuade my colleagues to consider the upgrade\n> option.\n> \n> Thanks\n> Matthew\n> \n> Sven Geisler wrote:\n>> Hi Matthew,\n>>\n>> I remember that I also an issue with AMD Opterons before Pg 8.1\n>> There is a specific Opteron behaviour on shared memory locks which adds\n>> a extra \"penalty\" during the execution time for Pg code before 8.1.\n>> I can you provide my patch for Pg 8.0 which should be adaptable for Pg\n>> 7.4 if you can compile PostgreSQL.\n>>\n>> But if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\n>> up for your concurrent queries will be great.\n>>\n>> Sven.\n>>\n>> Matthew Lunnon schrieb:\n>> \n>>> Hi,\n>>>\n>>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>>> sure). When the server gets under load from database connections\n>>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>>> select at this time it takes 5 seconds, the same query runs in 300\n>>> millis when the server is not under load so it seems that the database\n>>> is not performing well even though there is plenty of spare CPU. There\n>>> does not appear to be large amounts of disk IO and my database is about\n>>> 5.5G so this should fit comfortably in RAM.\n>>>\n>>> changes to postgresql.sql:\n>>>\n>>> max_connections = 500\n>>> shared_buffers = 96000\n>>> sort_mem = 10240\n>>> effective_cache_size = 1000000\n>>>\n>>> Does anyone have any ideas what my bottle neck might be and what I can\n>>> do about it?\n>>>\n>>> Thanks for any help.\n>>>\n>>> Matthew.\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>> \n>>\n>> \n> \n> -- \n> Matthew Lunnon\n> Technical Consultant\n> RWA Ltd.\n> \n> [email protected]\n> Tel: +44 (0)29 2081 5056\n> www.rwa-net.co.uk\n> --\n> \n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany",
"msg_date": "Wed, 12 Dec 2007 13:32:18 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Hi Matthew,\n\nThe apache is able to queue 1024 request. Reducing the number of\nmax_clients was the key to deal with the problem of to much concurrent\nqueries. We have monitored less concurrent http request after we\ndecrease max_clients.\n\nWe also have introduce a global statement timeout to stop long running\nqueries.\n\nBoth together protect our database server. The problem we had was only\nto find the values for our application.\n\nSven.\n\nMatthew Lunnon schrieb:\n> Hi Sven,\n> \n> Yes I have done a reasonable amount of query tuning. The application is\n> a web service using an apache/resin combination at the front end, we\n> have thought about using resin threads to limit the number of\n> connections but are worried about backing up connections in apache and\n> getting some overflow here. But some kind of limiting of connections is\n> probably required.\n> \n> Thanks\n> Matthew\n> \n> Sven Geisler wrote:\n>> Hi Matthew,\n>>\n>> The context switching isn't the issue. This is an indicator which is\n>> useful to identify your problem.\n>>\n>> What kind of application do you running? Can you limit the database clients?\n>>\n>> We have a web application based on apache running. We have a limit\n>> number of apache processes which are able to connect the database.\n>> We use that to reduce the number of concurrent queries.\n>> The apache does the rest for us - the apache does queue incoming http\n>> request if all workers are busy. The configuration helps us to solve the\n>> performance issue with to much concurrent queries.\n>>\n>> I assume that you already checked you application and each sql query is\n>> necessary and tuned as best as you can.\n>>\n>> Regards\n>> Sven.\n>>\n>> Matthew Lunnon schrieb:\n>> \n>>> Limiting the queries was our initial thought but we then hit a problem\n>>> with connection pooling which didn't implement a fifo algorithm. Looks\n>>> like I'll have to look deeper into the connection pooling.\n>>>\n>>> So you think the problem might be context switching on the server, I'll\n>>> take a closer look at the this\n>>>\n>>> Thanks\n>>>\n>>> Matthew\n>>>\n>>> Sven Geisler wrote:\n>>> \n>>>> Hi Matthew,\n>>>>\n>>>> I know exactly what you experience.\n>>>> We had a 4-way DC Opteron and Pg 7.4 too.\n>>>> You should monitor context switches.\n>>>>\n>>>>\n>>>> First suggest upgrade to 8.2.5 because the scale up is much better with 8.2.\n>>>>\n>>>> You need to limit the number of concurrent queries to less than 8 (8\n>>>> cores) if you need to stay with Pg 7.4.\n>>>>\n>>>> The memory setting is looking good to me. I would increase sort_mem and\n>>>> effective_cache_size, but this would solve your problem.\n>>>>\n>>>> Best regards\n>>>> Sven.\n>>>>\n>>>>\n>>>>\n>>>> Matthew Lunnon schrieb:\n>>>> \n>>>> \n>>>>> Hi,\n>>>>>\n>>>>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>>>>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>>>>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>>>>> sure). When the server gets under load from database connections\n>>>>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>>>>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>>>>> select at this time it takes 5 seconds, the same query runs in 300\n>>>>> millis when the server is not under load so it seems that the database\n>>>>> is not performing well even though there is plenty of spare CPU. 
There\n>>>>> does not appear to be large amounts of disk IO and my database is about\n>>>>> 5.5G so this should fit comfortably in RAM.\n>>>>>\n>>>>> changes to postgresql.sql:\n>>>>>\n>>>>> max_connections = 500\n>>>>> shared_buffers = 96000\n>>>>> sort_mem = 10240\n>>>>> effective_cache_size = 1000000\n>>>>>\n>>>>> Does anyone have any ideas what my bottle neck might be and what I can\n>>>>> do about it?\n>>>>>\n>>>>> Thanks for any help.\n>>>>>\n>>>>> Matthew.\n>>>>>\n>>>>> ---------------------------(end of broadcast)---------------------------\n>>>>> TIP 6: explain analyze is your friend\n>>>>> \n>>>>> \n>>>> \n>>>> \n>>> -- \n>>> Matthew Lunnon\n>>> Technical Consultant\n>>> RWA Ltd.\n>>>\n>>> [email protected]\n>>> Tel: +44 (0)29 2081 5056\n>>> www.rwa-net.co.uk\n>>> --\n>>>\n>>> \n>>\n>> \n> \n> -- \n> Matthew Lunnon\n> Technical Consultant\n> RWA Ltd.\n> \n> [email protected]\n> Tel: +44 (0)29 2081 5056\n> www.rwa-net.co.uk\n> --\n> \n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany\n",
"msg_date": "Wed, 12 Dec 2007 13:38:34 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
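The global statement timeout mentioned above can be expressed like this; the 30-second value is an assumption for illustration (statement_timeout exists in 7.4 and is given in milliseconds):

    # postgresql.conf: cancel any statement running longer than 30 seconds
    statement_timeout = 30000

    -- or per session, from SQL:
    SET statement_timeout = 30000;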
{
"msg_contents": "Hi Sven,\n\nDoes this mean that one option I have is to use a multi core Intel based \nserver instead of an AMD based server?\n\nMatthew\n\nSven Geisler wrote:\n> Hi Matthew,\n>\n> I remember that I also an issue with AMD Opterons before Pg 8.1\n> There is a specific Opteron behaviour on shared memory locks which adds\n> a extra \"penalty\" during the execution time for Pg code before 8.1.\n> I can you provide my patch for Pg 8.0 which should be adaptable for Pg\n> 7.4 if you can compile PostgreSQL.\n>\n> But if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\n> up for your concurrent queries will be great.\n>\n> Sven.\n>\n> Matthew Lunnon schrieb:\n> \n>> Hi,\n>>\n>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>> sure). When the server gets under load from database connections\n>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>> select at this time it takes 5 seconds, the same query runs in 300\n>> millis when the server is not under load so it seems that the database\n>> is not performing well even though there is plenty of spare CPU. There\n>> does not appear to be large amounts of disk IO and my database is about\n>> 5.5G so this should fit comfortably in RAM.\n>>\n>> changes to postgresql.sql:\n>>\n>> max_connections = 500\n>> shared_buffers = 96000\n>> sort_mem = 10240\n>> effective_cache_size = 1000000\n>>\n>> Does anyone have any ideas what my bottle neck might be and what I can\n>> do about it?\n>>\n>> Thanks for any help.\n>>\n>> Matthew.\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>> \n>\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\nHi Sven,\n\nDoes this mean that one option I have is to use a multi core Intel\nbased server instead of an AMD based server?\n\nMatthew\n\nSven Geisler wrote:\n\nHi Matthew,\n\nI remember that I also an issue with AMD Opterons before Pg 8.1\nThere is a specific Opteron behaviour on shared memory locks which adds\na extra \"penalty\" during the execution time for Pg code before 8.1.\nI can you provide my patch for Pg 8.0 which should be adaptable for Pg\n7.4 if you can compile PostgreSQL.\n\nBut if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\nup for your concurrent queries will be great.\n\nSven.\n\nMatthew Lunnon schrieb:\n \n\nHi,\n\nI have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\npostgres 7.4.3. This has been recompiled on the server for 64 stored\nprocedure parameters, (I assume this makes postgres 64 bit but are not\nsure). When the server gets under load from database connections\nexecuting reads, lets say 20 - 40 concurrent reads, the CPU's seem to\nlimit at about 30-35% usage with no iowait reported. If I run a simple\nselect at this time it takes 5 seconds, the same query runs in 300\nmillis when the server is not under load so it seems that the database\nis not performing well even though there is plenty of spare CPU. 
There\ndoes not appear to be large amounts of disk IO and my database is about\n5.5G so this should fit comfortably in RAM.\n\nchanges to postgresql.sql:\n\nmax_connections = 500\nshared_buffers = 96000\nsort_mem = 10240\neffective_cache_size = 1000000\n\nDoes anyone have any ideas what my bottle neck might be and what I can\ndo about it?\n\nThanks for any help.\n\nMatthew.\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n\n\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Wed, 12 Dec 2007 14:15:37 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Hi Matthew,\n\nYou should be upgrade to Pg 8.2.5.\nWe did test on different hardware. The bigest step is to use Pg 8.2.5.\nThe high number of context switches which we saw before simple disappeared.\n\n\nIn regards to you question about XEONs: You will have the similar issue\nwith a XEON box.\nWe tried different boxes (4-way DC, 8-way, 8-way DC and 4-way QC).\nThe improvement compared with a 4-way SC wan't really better under Pg\n8.0. I'm not saying that a 4-way SC is faster than the 4-way QC in terms\nof handling more concurrent queries. But there was never a huge step as\nyou would expect from the spend money.\n\nRegards\nSven.\n\nMatthew Lunnon schrieb:\n> Hi Sven,\n> \n> Does this mean that one option I have is to use a multi core Intel based\n> server instead of an AMD based server?\n> \n> Matthew\n> \n> Sven Geisler wrote:\n>> Hi Matthew,\n>>\n>> I remember that I also an issue with AMD Opterons before Pg 8.1\n>> There is a specific Opteron behaviour on shared memory locks which adds\n>> a extra \"penalty\" during the execution time for Pg code before 8.1.\n>> I can you provide my patch for Pg 8.0 which should be adaptable for Pg\n>> 7.4 if you can compile PostgreSQL.\n>>\n>> But if you can upgrade you should upgrade to Pg 8.2.5 64-bit. The scale\n>> up for your concurrent queries will be great.\n>>\n>> Sven.\n>>\n>> Matthew Lunnon schrieb:\n>> \n>>> Hi,\n>>>\n>>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>>> procedure parameters, (I assume this makes postgres 64 bit but are not\n>>> sure). When the server gets under load from database connections\n>>> executing reads, lets say 20 - 40 concurrent reads, the CPU's seem to\n>>> limit at about 30-35% usage with no iowait reported. If I run a simple\n>>> select at this time it takes 5 seconds, the same query runs in 300\n>>> millis when the server is not under load so it seems that the database\n>>> is not performing well even though there is plenty of spare CPU. There\n>>> does not appear to be large amounts of disk IO and my database is about\n>>> 5.5G so this should fit comfortably in RAM.\n>>>\n>>> changes to postgresql.sql:\n>>>\n>>> max_connections = 500\n>>> shared_buffers = 96000\n>>> sort_mem = 10240\n>>> effective_cache_size = 1000000\n>>>\n>>> Does anyone have any ideas what my bottle neck might be and what I can\n>>> do about it?\n>>>\n>>> Thanks for any help.\n>>>\n>>> Matthew.\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>> \n>>\n>> \n> \n> -- \n> Matthew Lunnon\n> Technical Consultant\n> RWA Ltd.\n> \n> [email protected]\n> Tel: +44 (0)29 2081 5056\n> www.rwa-net.co.uk\n> --\n> \n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany\n",
"msg_date": "Wed, 12 Dec 2007 15:28:32 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Matthew Lunnon wrote:\n> Ah I was afraid of that. Maybe I'll have to come out of the dark ages.\n\nYes :) but ignore the comment about the 8.3 Beta series. It is Beta for \na reason, that means testing only, no production.\n\nSincerely,\n\nJoshua D. Drake\n\n>> Your bottleneck is that you are using a very old version of PostgreSQL. Try\n>> 8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in this\n>> kind of situation.\n>>\n>> /* Steinar */\n\n",
"msg_date": "Wed, 12 Dec 2007 07:38:20 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Matthew Lunnon wrote:\n> Hi Sven,\n> \n> Does this mean that one option I have is to use a multi core Intel based \n> server instead of an AMD based server?\n\nWow... hold on.\n\n>> I remember that I also an issue with AMD Opterons before Pg 8.1\n>> There is a specific Opteron behaviour on shared memory locks which adds\n>> a extra \"penalty\" during the execution time for Pg code before 8.1.\n\nI would like to see some proof of this.\n\n\n>> I can you provide my patch for Pg 8.0 which should be adaptable for Pg\n>> 7.4 if you can compile PostgreSQL.\n>>\n\nThe real answer here is to upgrade to 8.1.10 or 8.2.5. 64bit if you can.\n\n>>> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running\n>>> postgres 7.4.3. This has been recompiled on the server for 64 stored\n>>> procedure parameters, (I assume this makes postgres 64 bit but are not\n\nThis type of machine, assuming you have decent IO available will \ngenerally be very fast with anything >= 8.1.\n\n>>> changes to postgresql.sql:\n>>>\n>>> max_connections = 500\n>>> shared_buffers = 96000\n\nWay to high for 7.4. Way to low for 8.1 or above (based on what you \nmention your work load is)\n\n>>> sort_mem = 10240\n\nBased on your specs this actually may be fine but I would suggest \nreviewing it after you upgrade.\n\n>>> effective_cache_size = 1000000\n>>>\n\nAgain, too low for 8.1 or above.\n\nSincerely,\n\nJoshua D. Drake\n\n--\nThe PostgreSQL Company: http://www.commandprompt.com/\n\n",
"msg_date": "Wed, 12 Dec 2007 07:47:00 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
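For comparison, a hedged sketch of where those settings might land after an upgrade to 8.2 on the same 16 GB machine; the figures are illustrative assumptions, not recommendations made in this thread (8.2 accepts unit suffixes):

    shared_buffers = 2GB          # far above the 7.4 comfort zone
    work_mem = 16MB               # replaces sort_mem from 8.0 onwards
    effective_cache_size = 12GB   # roughly the RAM left over for the OS cache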
{
"msg_contents": "On Wed, 2007-12-12 at 07:38 -0800, Joshua D. Drake wrote:\n> Matthew Lunnon wrote:\n> > Ah I was afraid of that. Maybe I'll have to come out of the dark ages.\n> \n> Yes :) but ignore the comment about the 8.3 Beta series. It is Beta for \n> a reason, that means testing only, no production.\n\nMatthew,\n\nSome benchmark comparisons would be really useful though, so please\ndon't be dissuaded from looking at 8.3. It's planned to be released\nafter Christmas, so we're probably around 2 business weeks away from\nrelease if you ignore the holiday period.\n\nEverything you've said suggests you've hit the scalability limit of 7.4,\nwhich had a buffer manager that got worse with larger settings, fixed in\n8.0. Most of the scalability stuff has been added since then and 8.3\nlooks to be really fast, but we would still like some more performance\nnumbers.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 13 Dec 2007 16:23:45 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
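If comparison numbers do get produced, even a plain pgbench run on both versions would help; a minimal invocation might look like this (scale factor and client count are illustrative assumptions):

    pgbench -i -s 100 testdb        # initialize a scale-100 test database
    pgbench -c 16 -t 10000 testdb   # 16 concurrent clients, 10000 transactions each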
{
"msg_contents": "Simon Riggs wrote:\n> On Wed, 2007-12-12 at 07:38 -0800, Joshua D. Drake wrote:\n>> Matthew Lunnon wrote:\n>>> Ah I was afraid of that. Maybe I'll have to come out of the dark ages.\n>> Yes :) but ignore the comment about the 8.3 Beta series. It is Beta for \n>> a reason, that means testing only, no production.\n> \n> Matthew,\n> \n> Some benchmark comparisons would be really useful though, so please\n> don't be dissuaded from looking at 8.3.\n\nRight, I did say \"testing\" which is always good.\n\nSincerely,\n\nJoshua D. Drake\n\n",
"msg_date": "Thu, 13 Dec 2007 09:10:46 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
},
{
"msg_contents": "Matthew Lunnon wrote:\n> Ah I was afraid of that. Maybe I'll have to come out of the dark ages.\n\nAt the very least, upgrade to latest 7.4 minor version. It probably \nwon't help with you're performance, but 7.4.3 is very old. There's been \na *lot* of bug fixes between 7.4.3 and 7.4.18, including fixes for \nsecurity vulnerabilities and data corruption bugs.\n\n> Steinar H. Gunderson wrote:\n>> On Wed, Dec 12, 2007 at 10:16:43AM +0000, Matthew Lunnon wrote:\n>> \n>>> Does anyone have any ideas what my bottle neck might be and what I \n>>> can do about it?\n>>> \n>>\n>> Your bottleneck is that you are using a very old version of \n>> PostgreSQL. Try\n>> 8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in \n>> this\n>> kind of situation.\n>>\n>> /* Steinar */\n>> \n> \n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 14 Dec 2007 11:11:58 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited performance on multi core server"
}
] |
[
{
"msg_contents": "reading postgres benchmarks for beginners advises to stop reading on the\nwords \"default (ie. unchanged postgresql.conf); but the real test is given\nright after:\n\nhttp://www.kaltenbrunner.cc/blog/index.php?/archives/21-guid.html\n\nThat confirmes my first impression (on different workload) of \"the speed has\ndoubled\".\n\nIf reality confirmes, that 8.2 to 8.3 will be a small step in versions, and\na great step in databases.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n\nreading postgres benchmarks for beginners advises to stop reading on the words \"default (ie. unchanged postgresql.conf); but the real test is given right after:\nhttp://www.kaltenbrunner.cc/blog/index.php?/archives/21-guid.htmlThat confirmes my first impression (on different workload) of \"the speed has doubled\".If reality confirmes, that 8.2 to 8.3 will be a small step in versions, and a great step in databases.\nHarald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 4970435 Stuttgart0173/9409607fx 01212-5-13695179 -EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!",
"msg_date": "Wed, 12 Dec 2007 13:58:50 +0100",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "URI to kind of a benchmark"
},
{
"msg_contents": "Harald Armin Massa wrote:\n> reading postgres benchmarks for beginners advises to stop reading on the \n> words \"default (ie. unchanged postgresql.conf); but the real test is \n> given right after:\n> \n> http://www.kaltenbrunner.cc/blog/index.php?/archives/21-guid.html\n> \n> That confirmes my first impression (on different workload) of \"the speed \n> has doubled\".\n> \n> If reality confirmes, that 8.2 to 8.3 will be a small step in versions, \n> and a great step in databases.\n\nyeah but keep in mind that this one only tested a very specific scenario \n(update heavy, workload fits easily in buffercache and benefits from \nHOT) - it is a fairly neat improvement though ...\n\n\nStefan\n",
"msg_date": "Wed, 12 Dec 2007 15:48:29 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URI to kind of a benchmark"
}
] |
[
{
"msg_contents": "Yesterday we moved a 300 GB table containing document images (mostly raster-scanned from paper), into a 215 GB PostgreSQL 8.2.5 database which contains the related case management data. (This separation was never \"right\", since there were links from one to the other, but was necessary under our previous database package for practical reasons.)\n \nThe data was inserted through a Java program using a prepared statement with no indexes on the table. The primary key was then added, and now I've started a vacuum. The new table wound up being the first big table vacuumed, and I noticed something odd. Even though there have been no rollbacks, updates, or deletes on this table, the vacuum is writing as much as it is reading while dealing with the TOAST data.\n \nHere's the current tail of the VACUUM ANALYZE VERBOSE output:\n \nINFO: analyzing \"pg_catalog.pg_auth_members\"\nINFO: \"pg_auth_members\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\nINFO: vacuuming \"public.DocImage\"\nINFO: index \"DocImage_pkey\" now contains 2744753 row versions in 10571 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.15s/0.01u sec elapsed 0.28 sec.\nINFO: \"DocImage\": found 0 removable, 2744753 nonremovable row versions in 22901 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n32 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.46s/0.10u sec elapsed 1.12 sec.\nINFO: vacuuming \"pg_toast.pg_toast_7729979\"\n \nAnd here's a snippet from vmstat 1 output:\n \nprocs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n 0 1 156 316500 556 63491808 0 0 58544 57910 1459 2417 2 4 85 9 0\n 0 1 156 317884 556 63490780 0 0 61240 62087 1425 2106 2 5 84 9 0\n 0 3 156 307500 556 63499004 0 0 56968 57882 1472 2091 2 5 84 10 0\n 2 0 156 309280 556 63497976 0 0 59920 58218 1600 4503 5 4 79 11 0\n 0 1 156 313592 556 63494892 0 0 57608 62371 1695 3425 3 5 84 8 0\n 2 1 156 305844 556 63502088 0 0 54568 58164 1644 2962 3 4 84 9 0\n 0 1 156 306560 556 63502088 0 0 61080 57949 1494 2808 3 5 83 9 0\n 1 0 156 303432 552 63505176 0 0 49784 53972 1481 2629 2 4 84 10 0\n 0 1 156 308232 552 63500036 0 0 57496 57903 1426 1954 1 4 85 9 0\n 1 0 156 309008 552 63499008 0 0 62000 61962 1442 2401 2 4 85 8 0\n \nIt's been like this for over half an hour. 
Not that I expect a vacuum of a 300 GB table to be blindingly fast, but if the data has just been inserted, why all those writes?\n \n-Kevin\n \n PostgreSQL 8.2.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070115 (prerelease) (SUSE Linux)\n \nccsa@SOCRATES:/var/pgsql/data/cc> free -m\n total used free shared buffers cached\nMem: 64446 64147 298 0 0 62018\n-/+ buffers/cache: 2128 62318\nSwap: 1027 0 1027\n \nlisten_addresses = '*'\nport = 5412\nmax_connections = 200\nshared_buffers = 160MB\ntemp_buffers = 50MB\nwork_mem = 32MB\nmaintenance_work_mem = 1GB\nmax_fsm_pages = 800000\nbgwriter_lru_percent = 20.0\nbgwriter_lru_maxpages = 200\nbgwriter_all_percent = 10.0\nbgwriter_all_maxpages = 600\nwal_buffers = 1MB\ncheckpoint_segments = 50\ncheckpoint_timeout = 30min\nseq_page_cost = 0.5\nrandom_page_cost = 0.8\neffective_cache_size = 63GB\ngeqo = off\nfrom_collapse_limit = 15\njoin_collapse_limit = 15\nredirect_stderr = on\nlog_line_prefix = '[%m] %p %q<%u %d %r> '\nstats_block_level = on\nstats_row_level = on\nautovacuum = on\nautovacuum_naptime = 10s\nautovacuum_vacuum_threshold = 1\nautovacuum_analyze_threshold = 1\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nescape_string_warning = off\nstandard_conforming_strings = on\nsql_inheritance = off\n \nBINDIR = /usr/local/pgsql-8.2.5/bin\nDOCDIR = /usr/local/pgsql-8.2.5/doc\nINCLUDEDIR = /usr/local/pgsql-8.2.5/include\nPKGINCLUDEDIR = /usr/local/pgsql-8.2.5/include\nINCLUDEDIR-SERVER = /usr/local/pgsql-8.2.5/include/server\nLIBDIR = /usr/local/pgsql-8.2.5/lib\nPKGLIBDIR = /usr/local/pgsql-8.2.5/lib\nLOCALEDIR =\nMANDIR = /usr/local/pgsql-8.2.5/man\nSHAREDIR = /usr/local/pgsql-8.2.5/share\nSYSCONFDIR = /usr/local/pgsql-8.2.5/etc\nPGXS = /usr/local/pgsql-8.2.5/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--prefix=/usr/local/pgsql-8.2.5' '--enable-integer-datetimes' '--enable-debug' '--disable-nls'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -g\nCFLAGS_SL = -fpic\nLDFLAGS = -Wl,-rpath,'/usr/local/pgsql-8.2.5/lib'\nLDFLAGS_SL =\nLIBS = -lpgport -lz -lreadline -lcrypt -ldl -lm\nVERSION = PostgreSQL 8.2.5\n \n Table \"public.DocImage\"\n Column | Type | Modifiers\n-----------+--------------+-----------\n countyNo | \"CountyNoT\" | not null\n docId | \"DocIdT\" | not null\n sectionNo | \"SectionNoT\" | not null\n docImage | \"ImageT\" | not null\nIndexes:\n \"DocImage_pkey\" PRIMARY KEY, btree (\"countyNo\", \"docId\", \"sectionNo\")\n \n Schema | Name | Type | Modifier | Check\n--------+------------+----------+----------+-------\n public | CountyNoT | smallint | |\n public | DocIdT | integer | |\n public | SectionNoT | integer | |\n public | ImageT | bytea | |\n \n\n",
"msg_date": "Thu, 13 Dec 2007 09:46:17 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Heavy write activity on first vacuum of fresh TOAST data"
},
{
"msg_contents": "On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:\n \n> The data was inserted through a Java program using a prepared\n> statement with no indexes on the table. The primary key was then\n> added, and now I've started a vacuum. The new table wound up being\n> the first big table vacuumed, and I noticed something odd. Even\n> though there have been no rollbacks, updates, or deletes on this\n> table, the vacuum is writing as much as it is reading while dealing\n> with the TOAST data.\n\nWriting hint bits. Annoying isn't it? :-(\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 13 Dec 2007 16:11:23 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n\tdata"
},
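A small way to observe the effect described above, sketched against a scratch table (names and sizes are made up for illustration); the first full read after a bulk load dirties every page even though it is nominally read-only:

    CREATE TABLE hint_demo AS
        SELECT g AS id, repeat('x', 100) AS pad
        FROM generate_series(1, 1000000) g;  -- bulk load, committed at the end
    CHECKPOINT;                              -- flush the writes from the load itself
    SELECT count(*) FROM hint_demo;          -- first read: sets hint bits, dirties pages
    SELECT count(*) FROM hint_demo;          -- second read: no further write activity

Watching vmstat while the two SELECTs run should show the write traffic accompanying only the first one.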
{
"msg_contents": ">>> On Thu, Dec 13, 2007 at 10:11 AM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:\n> \n>> The data was inserted through a Java program using a prepared\n>> statement with no indexes on the table. The primary key was then\n>> added, and now I've started a vacuum. The new table wound up being\n>> the first big table vacuumed, and I noticed something odd. Even\n>> though there have been no rollbacks, updates, or deletes on this\n>> table, the vacuum is writing as much as it is reading while dealing\n>> with the TOAST data.\n> \n> Writing hint bits. Annoying isn't it? :-(\n \nSurprising, anyway. If it allows subsequent operations to be\nfaster, I'll take it; although to a naive user it's not clear what\nis known at vacuum time that the INSERT into the empty table\ncouldn't have inferred. Bulk loads into empty tables are a pretty\ncommon use case, so if there was some way to set the hints on\ninsert, as long as the table started the database transaction\nempty, nobody else is modifying it, and only inserts have occurred,\nthat would be a good thing. I'm speaking from the perspective of a\nuser, of course; not someone who would actually try to wrangle the\ncode into working that way.\n \nThanks for the explanation.\n \n-Kevin\n \n\n\n",
"msg_date": "Thu, 13 Dec 2007 10:26:58 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> ... although to a naive user it's not clear what\n> is known at vacuum time that the INSERT into the empty table\n> couldn't have inferred.\n\nThe fact that the INSERT actually committed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Dec 2007 11:35:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST data "
},
{
"msg_contents": ">>> On Thu, Dec 13, 2007 at 10:35 AM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> ... although to a naive user it's not clear what\n>> is known at vacuum time that the INSERT into the empty table\n>> couldn't have inferred.\n> \n> The fact that the INSERT actually committed.\n \nFair enough. I suppose that the possibility that of access before\nthe commit would preclude any optimization that would assume the\ncommit is more likely than a rollback, and do the extra work only in\nthe unusual case?\n \n-Kevin\n \n\n\n",
"msg_date": "Thu, 13 Dec 2007 10:39:47 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "On Thu, 2007-12-13 at 10:39 -0600, Kevin Grittner wrote:\n> >>> On Thu, Dec 13, 2007 at 10:35 AM, in message <[email protected]>,\n> Tom Lane <[email protected]> wrote: \n> > \"Kevin Grittner\" <[email protected]> writes:\n> >> ... although to a naive user it's not clear what\n> >> is known at vacuum time that the INSERT into the empty table\n> >> couldn't have inferred.\n> > \n> > The fact that the INSERT actually committed.\n> \n> Fair enough. I suppose that the possibility that of access before\n> the commit would preclude any optimization that would assume the\n> commit is more likely than a rollback, and do the extra work only in\n> the unusual case?\n\nNo chance. There's an optimization of COPY I've not got around to as\nyet, but nothing straightforward we can do with the normal case.\n\nWe might be able to have bgwriter set hint bits on dirty blocks, but the\nsuccess of that would depend upon the transit time of blocks through the\ncache, i.e. it might be totally ineffective. So might be just overhead\nfor the bgwriter and worse, could divert bgwriter attention away from\nwhat its supposed to be doing. That's a lot of work to fiddle with the\nknobs to improve things and there's higher things on the list AFAICS.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 13 Dec 2007 17:00:42 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n\tdata"
},
{
"msg_contents": "Simon Riggs wrote:\n\n> We might be able to have bgwriter set hint bits on dirty blocks, but the\n> success of that would depend upon the transit time of blocks through the\n> cache, i.e. it might be totally ineffective. So might be just overhead\n> for the bgwriter and worse, could divert bgwriter attention away from\n> what its supposed to be doing. That's a lot of work to fiddle with the\n> knobs to improve things and there's higher things on the list AFAICS.\n\nI don't think that works, because the bgwriter has no access to the\ncatalogs, therefore it cannot examine the page contents. To bgwriter,\npages are opaque.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\n\"There is evil in the world. There are dark, awful things. Occasionally, we get\na glimpse of them. But there are dark corners; horrors almost impossible to\nimagine... even in our worst nightmares.\" (Van Helsing, Dracula A.D. 1972)\n",
"msg_date": "Thu, 13 Dec 2007 14:26:56 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n\tdata"
},
{
"msg_contents": ">>> On Thu, Dec 13, 2007 at 10:11 AM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:\n> \n>> The data was inserted through a Java program using a prepared\n>> statement with no indexes on the table. The primary key was then\n>> added, and now I've started a vacuum. The new table wound up being\n>> the first big table vacuumed, and I noticed something odd. Even\n>> though there have been no rollbacks, updates, or deletes on this\n>> table, the vacuum is writing as much as it is reading while dealing\n>> with the TOAST data.\n> \n> Writing hint bits. Annoying isn't it? :-(\n \nIs there anything in the documentation that mentions this pattern\nof activity? Since I started clearing the WAL file tails before\ncompression, it has surprised me how much WAL file activity there\nis from the nightly vacuum. I had assumed that some part of this\nwas freezing old tuples, but that didn't seem to exactly match the\npattern of activity. If the hint bit changes are written to the\nWAL, I think this explains it.\n \nMaybe this too arcane for the docs, but I'm not so sure.\nEffectively, it means that every new tuple which has much of a\nlifespan has to be written at least three times, if I'm\nunderstanding you: once during the database transaction which\ncreates it, once in the first subsequent vacuum of that table to\nflag that it was committed, and again when it reaches the freeze\nthreshold to prevent transaction number wraparound.\n \nThat last one could be sort of a surprise for someone at some\npoint after, say, restoring from pg_dump, couldn't it? Would it\nmake any kind of sense for a person to do the first vacuum after\na bulk load using the FREEZE keyword (or the more recent\nequivalent setting)?\n \n-Kevin\n \n\n",
"msg_date": "Thu, 13 Dec 2007 11:46:37 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:\n> >>> On Thu, Dec 13, 2007 at 10:11 AM, in message\n> <[email protected]>, Simon Riggs <[email protected]>\n> wrote: \n> > On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:\n> > \n> >> The data was inserted through a Java program using a prepared\n> >> statement with no indexes on the table. The primary key was then\n> >> added, and now I've started a vacuum. The new table wound up being\n> >> the first big table vacuumed, and I noticed something odd. Even\n> >> though there have been no rollbacks, updates, or deletes on this\n> >> table, the vacuum is writing as much as it is reading while dealing\n> >> with the TOAST data.\n> > \n> > Writing hint bits. Annoying isn't it? :-(\n> \n> Is there anything in the documentation that mentions this pattern\n> of activity? Since I started clearing the WAL file tails before\n> compression, it has surprised me how much WAL file activity there\n> is from the nightly vacuum. I had assumed that some part of this\n> was freezing old tuples, but that didn't seem to exactly match the\n> pattern of activity. If the hint bit changes are written to the\n> WAL, I think this explains it.\n\nThey're not.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 13 Dec 2007 18:12:44 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n\tdata"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Simon Riggs wrote:\n>> We might be able to have bgwriter set hint bits on dirty blocks,\n\n> I don't think that works, because the bgwriter has no access to the\n> catalogs, therefore it cannot examine the page contents. To bgwriter,\n> pages are opaque.\n\nAnother issue is that this'd require bgwriter to access the clog SLRU\narea. I seem to remember worrying that that could lead to low-level\ndeadlocks, though I cannot recall the exact case at the moment.\nEven without that, it would increase contention for SLRU, which we\nprobably don't want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Dec 2007 13:52:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST data "
},
{
"msg_contents": "On Thu, 2007-12-13 at 13:52 -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Simon Riggs wrote:\n> >> We might be able to have bgwriter set hint bits on dirty blocks,\n> \n> > I don't think that works, because the bgwriter has no access to the\n> > catalogs, therefore it cannot examine the page contents. To bgwriter,\n> > pages are opaque.\n> \n> Another issue is that this'd require bgwriter to access the clog SLRU\n> area. I seem to remember worrying that that could lead to low-level\n> deadlocks, though I cannot recall the exact case at the moment.\n> Even without that, it would increase contention for SLRU, which we\n> probably don't want.\n\nI was trying to highlight the problems, not advocate that as an\napproach, sorry if I wasn't clear enough. Even if you solved the\nproblems both of you have mentioned I don't think the dynamic behaviour\nwill be useful enough to merit the effort of trying. I'm definitely not\ngoing to be spending any time on this. Fish are frying elsewhere.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 13 Dec 2007 19:20:43 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n\tdata"
},
{
"msg_contents": ">>> On Thu, Dec 13, 2007 at 12:12 PM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:\n>> If the hint bit changes are written to the WAL ...\n> \n> They're not.\n \nSo one would expect a write-intensive initial vacuum after a\nPITR-style recovery?\n \nWhat impact would lack of the hint bits have until a vacuum?\n \n-Kevin\n \n\n\n",
"msg_date": "Thu, 13 Dec 2007 15:19:34 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:\n> >>> On Thu, Dec 13, 2007 at 12:12 PM, in message\n> <[email protected]>, Simon Riggs <[email protected]>\n> wrote: \n> > On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:\n> >> If the hint bit changes are written to the WAL ...\n> > \n> > They're not.\n> \n> So one would expect a write-intensive initial vacuum after a\n> PITR-style recovery?\n\nVery perceptive. I was just thinking about that myself. An interesting\nissue when running with full_page_writes off.\n \n> What impact would lack of the hint bits have until a vacuum?\n\nVacuum isn't important here. Its the first idiot to read the data that\ngets hit.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 13 Dec 2007 21:40:10 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n\tdata"
},
{
"msg_contents": ">>> On Thu, Dec 13, 2007 at 3:40 PM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:\n>> >>> On Thu, Dec 13, 2007 at 12:12 PM, in message\n>> <[email protected]>, Simon Riggs <[email protected]>\n>> wrote: \n>> > On Thu, 2007-12-13 at 11:46 -0600, Kevin Grittner wrote:\n>> >> If the hint bit changes are written to the WAL ...\n>> > \n>> > They're not.\n>> \n>> So one would expect a write-intensive initial vacuum after a\n>> PITR-style recovery?\n> \n> Very perceptive. I was just thinking about that myself. An interesting\n> issue when running with full_page_writes off.\n> \n>> What impact would lack of the hint bits have until a vacuum?\n> \n> Vacuum isn't important here. Its the first idiot to read the data that\n> gets hit.\n \nOK, I want to understand this well enough to recognize it when I\nsee it. (As always, I appreciate the helpful answers here.)\n \nAssuming no data is toasted, after a bulk INSERT or COPY into the\ndatabase, a subsequent SELECT COUNT(*) would figure out the correct\nhint bits and rewrite all rows during execution of the SELECT\nstatement?\n \nThe same is true following a PITR-style recovery?\n \nToasted data would not be rewritten unless accessed (whether that\nbe for selection criteria, sort order, results, or whatever)?\n \nA database VACUUM is going to run into every page not previously\naccessed and make all hint bits correct?\n \nWould a VACUUM FREEZE of a bulk-loaded table do one write for both\nthe hint bits and the transaction ID? (I know that hackers\ngenerally prefer that people leave the transaction IDs unfrozen\nfor a long time to aid in debugging problems, but that seems less\nuseful in a large table which has just been bulk-loaded, true?)\n \n-Kevin\n \n\n",
"msg_date": "Thu, 13 Dec 2007 17:43:27 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "Simon Riggs wrote:\n> On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:\n> \n>> What impact would lack of the hint bits have until a vacuum?\n>> \n>\n> Vacuum isn't important here. Its the first idiot to read the data that\n> gets hit.\n>\n> \nGiven vacuum must then touch every page, is there a win in only setting \nhint bits on pages where vacuum has to do some other work on the page? \nAs vacuum is causing significant IO load for data that may not be \naccessed for some time.\n\nThe question becomes what is the impact of not setting hint bits? Is it \nbetter or worse than the IO caused by vacuum?\n\nRegards\n\nRussell Smith\n",
"msg_date": "Fri, 14 Dec 2007 18:30:11 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST\n data"
},
{
"msg_contents": "Russell Smith <[email protected]> writes:\n> Given vacuum must then touch every page, is there a win in only setting \n> hint bits on pages where vacuum has to do some other work on the page? \n> As vacuum is causing significant IO load for data that may not be \n> accessed for some time.\n\nWell, if vacuum doesn't do it then some other poor sod will have to.\n\nMy feeling is that vacuum's purpose in life is to offload maintenance\ncycles from foreground queries, so we should be happy to have it setting\nall the hint bits. If Kevin doesn't like the resultant I/O load then he\nshould use the vacuum_cost_delay parameters to dial down vacuum speed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Dec 2007 02:42:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOAST data "
},
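A sketch of the cost-based delay Tom refers to, for a manual nightly vacuum (the values are illustrative, not recommendations; autovacuum has its own autovacuum_vacuum_cost_delay / autovacuum_vacuum_cost_limit variants):

    -- spread the vacuum's I/O out over time, for this session only
    SET vacuum_cost_delay = 20;    -- sleep 20 ms each time the cost budget is used up
    SET vacuum_cost_limit = 200;   -- the per-round cost budget
    VACUUM ANALYZE bigtable;       -- hypothetical table name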
{
"msg_contents": ">>> On Fri, Dec 14, 2007 at 1:42 AM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n \n> My feeling is that vacuum's purpose in life is to offload maintenance\n> cycles from foreground queries, so we should be happy to have it setting\n> all the hint bits.\n \nAbsolutely.\n \n> If Kevin doesn't like the resultant I/O load then he\n> should use the vacuum_cost_delay parameters to dial down vacuum speed.\n \nIt's not that I don't like it -- I'm often called upon to diagnose\nissues, and understanding the dynamics of things like this helps me\ninterpret what I'm seeing. No complaint here.\n \n-Kevin\n \n\n\n",
"msg_date": "Fri, 14 Dec 2007 09:19:11 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "On Thu, 13 Dec 2007, Simon Riggs wrote:\n> On Thu, 2007-12-13 at 09:46 -0600, Kevin Grittner wrote:\n> > Even though there have been no rollbacks, updates, or deletes on this\n> > table, the vacuum is writing as much as it is reading while dealing\n> > with the TOAST data.\n>\n> Writing hint bits. Annoying isn't it? :-(\n\nInteresting thread. Now, I know absolutely nothing about how the data is\nstored, but it strikes me as being non-optimal that every single block on\nthe disc needs to be written again just to update some hint bits. Could\nthose bits be taken out into a separate bitmap stored somewhere else? That\nwould mean the (relatively small) amount of data being written could be\nwritten in a small sequential write to the disc, rather than very sparsely\nover the whole table.\n\nMatthew\n\n-- \nIf you let your happiness depend upon how somebody else feels about you,\nnow you have to control how somebody else feels about you. -- Abraham Hicks\n",
"msg_date": "Fri, 14 Dec 2007 15:42:33 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOASTa"
},
{
"msg_contents": ">>> On Thu, Dec 13, 2007 at 3:40 PM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> On Thu, 2007-12-13 at 15:19 -0600, Kevin Grittner wrote:\n \n>> So one would expect a write-intensive initial vacuum after a\n>> PITR-style recovery?\n \n> An interesting issue when running with full_page_writes off.\n \nI'm curious. How does the full_page_writes setting affect this?\n \n-Kevin\n \n\n\n",
"msg_date": "Fri, 14 Dec 2007 10:07:41 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh\n\tTOAST data"
},
{
"msg_contents": "Matthew <[email protected]> writes:\n> Interesting thread. Now, I know absolutely nothing about how the data is\n> stored, but it strikes me as being non-optimal that every single block on\n> the disc needs to be written again just to update some hint bits. Could\n> those bits be taken out into a separate bitmap stored somewhere else?\n\nYou are trying to optimize the wrong thing. Making vacuum cheaper at\nthe cost of making every tuple lookup more expensive is not going to\nbe a win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Dec 2007 11:22:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOASTa "
},
{
"msg_contents": "On Fri, 14 Dec 2007, Tom Lane wrote:\n> Matthew <[email protected]> writes:\n> > Interesting thread. Now, I know absolutely nothing about how the data is\n> > stored, but it strikes me as being non-optimal that every single block on\n> > the disc needs to be written again just to update some hint bits. Could\n> > those bits be taken out into a separate bitmap stored somewhere else?\n>\n> You are trying to optimize the wrong thing. Making vacuum cheaper at\n> the cost of making every tuple lookup more expensive is not going to\n> be a win.\n\nTrue, although it depends how much data there actually is. If it's one bit\nper page, then you could possibly rely on that data staying in the cache\nenough for it to stop being a performance hit. If it's much more data than\nthat, then yes it's silly, and I should be embarrassed at having made the\nsuggestion. No point making each random tuple lookup into two disc\naccesses instead of one.\n\nMatthew\n\n-- \nIt is better to keep your mouth closed and let people think you are a fool\nthan to open it and remove all doubt. -- Mark Twain\n",
"msg_date": "Fri, 14 Dec 2007 16:52:29 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy write activity on first vacuum of fresh TOASTa"
}
] |
[
{
"msg_contents": "PostgreSQL: 8.2\n\nI am looking at the possibility of storing files in some of my database\ntables. My concern is obviously performance. I have configured\nPostgreSQL to take advantage of Linux file caching. So my PostgreSQL\ndoes not have a large setting for shared_buffers even though I have 24G\nof memory. The performance today is very good. \n\n \n\nSome questions I have:\n\n \n\nWhat data type should I use for fields that hold files?\n\nIs there anything that I should be aware of when putting files into a\nfield in a table?\n\nWhen PostgreSQL accesses a table that has fields that contain files does\nit put the fields that contain the files into the shared_buffers memory\narea?\n\n \n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2\nI am looking at the possibility of storing files in some of my\ndatabase tables. My concern is obviously performance. I have configured\nPostgreSQL to take advantage of Linux file caching. So my PostgreSQL does not\nhave a large setting for shared_buffers even though I have 24G of memory. The\nperformance today is very good. \n \nSome questions I have:\n \nWhat data type should I use for fields that hold files?\nIs there anything that I should be aware of when putting files\ninto a field in a table?\nWhen PostgreSQL accesses a table that has fields that\ncontain files does it put the fields that contain the files into the shared_buffers\nmemory area?\n \n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu",
"msg_date": "Thu, 13 Dec 2007 13:10:28 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Putting files into fields in a table"
},
{
"msg_contents": "On 12/13/07, Campbell, Lance <[email protected]> wrote:\n> I am looking at the possibility of storing files in some of my database\n> tables. My concern is obviously performance. I have configured PostgreSQL\n> to take advantage of Linux file caching. So my PostgreSQL does not have a\n> large setting for shared_buffers even though I have 24G of memory.\n\nThis used to be the recommended way before 8.0. In 8.0, it is\nadvantageous to give PostgreSQL more buffers. You should still make\nsome room for the kernel cache.\n\nBy \"storing files\", I assume you mean a lot of data imported from\nfiles. The procs and cons of storing large amounts of data as\nPostgreSQL tuples has been debated before. You might want to search\nthe archives.\n\nMy opinion is that PostgreSQL is fine up to a point (let's say 10,000\ntuples x 2KB), above which I would merely store references to\nfile-system objects. Managing these objects can be painful, especially\nin a cluster of more than one machine, but at least it's fast and\nlightweight.\n\n> What data type should I use for fields that hold files?\n\nPostgreSQL has two ways of storing \"large amounts of data\" in a single\ntuple: variable-length columns, and blobs.\n\nBlobs are divided into chunks and stored in separate tables, one tuple\nper chunk, indexed by offset, and PostgreSQL allows random access to\nthe data. The downside is that they take up more room, they're slower\nto create, slower to read from end to end, and I believe there are\nsome operations (such as function expressions) that don't work on\nthem. Some replication products, including (the last time I looked)\nSlony, does not support replicating blobs. Blobs are not deprecated, I\nthink, but they feel like they are.\n\nVariable-length columns such as bytea and text support a system called\nTOAST, which allow the first few kilobytes of the data to be stored\nin-place in the tuple, and the overflow to be stored elsewhere and\npotentially compressed. This system is much faster and tighter than\nblobs, but does not offer random I/O.\n\n> Is there anything that I should be aware of when putting files into a field\n> in a table?\n\nBackup dumps will increase in size in proportion to the size of your\ndata. PostgreSQL is no speed demon at loading/storing data, so this\nmight turn out to be the Achilles heel.\n\n> When PostgreSQL accesses a table that has fields that contain files does it\n> put the fields that contain the files into the shared_buffers memory area?\n\nI believe so.\n\nAlexander.\n",
"msg_date": "Thu, 13 Dec 2007 20:38:32 +0100",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Putting files into fields in a table"
},
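A minimal sketch of the bytea/TOAST route described above (all names are hypothetical). Values larger than roughly 2 kB are moved out to the table's TOAST relation and compressed automatically, with no extra bookkeeping in the application:

    CREATE TABLE uploaded_file (
        id         serial PRIMARY KEY,
        filename   text NOT NULL,
        mime_type  text,
        uploaded   timestamptz NOT NULL DEFAULT now(),
        contents   bytea NOT NULL   -- TOASTed and compressed when large
    );

From a client, the file body is best passed as a bound bytea parameter (for example PreparedStatement.setBytes() in JDBC) rather than spliced into the SQL text by hand.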
{
"msg_contents": "I did not see much info in the 8.2 documentation on BLOB. I did ready\nabout \"bytea\" or binary data type. It seems like it would work for\nstoring files. I guess I could stick with the OS for file storage but\nit is a pain. It would be easier to use the DB.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf\nOf Alexander Staubo\nSent: Thursday, December 13, 2007 1:39 PM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Putting files into fields in a table\n\nOn 12/13/07, Campbell, Lance <[email protected]> wrote:\n> I am looking at the possibility of storing files in some of my\ndatabase\n> tables. My concern is obviously performance. I have configured\nPostgreSQL\n> to take advantage of Linux file caching. So my PostgreSQL does not\nhave a\n> large setting for shared_buffers even though I have 24G of memory.\n\nThis used to be the recommended way before 8.0. In 8.0, it is\nadvantageous to give PostgreSQL more buffers. You should still make\nsome room for the kernel cache.\n\nBy \"storing files\", I assume you mean a lot of data imported from\nfiles. The procs and cons of storing large amounts of data as\nPostgreSQL tuples has been debated before. You might want to search\nthe archives.\n\nMy opinion is that PostgreSQL is fine up to a point (let's say 10,000\ntuples x 2KB), above which I would merely store references to\nfile-system objects. Managing these objects can be painful, especially\nin a cluster of more than one machine, but at least it's fast and\nlightweight.\n\n> What data type should I use for fields that hold files?\n\nPostgreSQL has two ways of storing \"large amounts of data\" in a single\ntuple: variable-length columns, and blobs.\n\nBlobs are divided into chunks and stored in separate tables, one tuple\nper chunk, indexed by offset, and PostgreSQL allows random access to\nthe data. The downside is that they take up more room, they're slower\nto create, slower to read from end to end, and I believe there are\nsome operations (such as function expressions) that don't work on\nthem. Some replication products, including (the last time I looked)\nSlony, does not support replicating blobs. Blobs are not deprecated, I\nthink, but they feel like they are.\n\nVariable-length columns such as bytea and text support a system called\nTOAST, which allow the first few kilobytes of the data to be stored\nin-place in the tuple, and the overflow to be stored elsewhere and\npotentially compressed. This system is much faster and tighter than\nblobs, but does not offer random I/O.\n\n> Is there anything that I should be aware of when putting files into a\nfield\n> in a table?\n\nBackup dumps will increase in size in proportion to the size of your\ndata. PostgreSQL is no speed demon at loading/storing data, so this\nmight turn out to be the Achilles heel.\n\n> When PostgreSQL accesses a table that has fields that contain files\ndoes it\n> put the fields that contain the files into the shared_buffers memory\narea?\n\nI believe so.\n\nAlexander.\n",
"msg_date": "Thu, 13 Dec 2007 14:09:06 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Putting files into fields in a table"
},
{
"msg_contents": "Campbell, Lance wrote:\n> I did not see much info in the 8.2 documentation on BLOB.\n\nThat's because we don't call them \"blobs\". Search for \"large objects\"\ninstead:\n\nhttp://www.postgresql.org/docs/current/static/largeobjects.html\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"Executive Executive Summary: The [Windows] Vista Content Protection\n specification could very well constitute the longest suicide note in history.\"\n Peter Guttman, http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt\n",
"msg_date": "Thu, 13 Dec 2007 17:16:18 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Putting files into fields in a table"
},
{
"msg_contents": "\nOn Dec 13, 2007, at 2:09 PM, Campbell, Lance wrote:\n\n> I did not see much info in the 8.2 documentation on BLOB. I did ready\n> about \"bytea\" or binary data type. It seems like it would work for\n> storing files. I guess I could stick with the OS for file storage but\n> it is a pain. It would be easier to use the DB.\n\nIn postgres they're simply called Large Objects (or LOBs) and there \nis a whole chapter devoted to them in Part IV of the manual. Note \nthat you only need to use this facility if you're going to be storing \ndata over 1G in size (at which point your limit becomes 2G). What \nkind of data are in these files? What gain do you foresee in storing \nthe files directly in the db (as opposed, say, to storing the paths \nto the files in the filesystem)?\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n",
"msg_date": "Thu, 13 Dec 2007 14:22:04 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Putting files into fields in a table"
},
{
"msg_contents": "Erik,\nThe advantage with storing things in the database verses the file system\nis the number of lines of code. I manage 18 software applications. I\nhave developed an application that reads in an XML file and will\ngenerate database java code for inserting, updating, selecting and\ndeleting data. So for me the database is a no brainer. But when I need\nto store files that are uploaded by users I have to hand code the\nprocess. It is not hard. It is just time consuming. I want to keep\nthe amount I can do per hour at a very high level. The less code the\nbetter.\n \nUsing a database correctly really saves on the number of lines of code.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: Erik Jones [mailto:[email protected]] \nSent: Thursday, December 13, 2007 2:22 PM\nTo: Campbell, Lance\nCc: [email protected] list\nSubject: Re: [PERFORM] Putting files into fields in a table\n\n\nOn Dec 13, 2007, at 2:09 PM, Campbell, Lance wrote:\n\n> I did not see much info in the 8.2 documentation on BLOB. I did ready\n> about \"bytea\" or binary data type. It seems like it would work for\n> storing files. I guess I could stick with the OS for file storage but\n> it is a pain. It would be easier to use the DB.\n\nIn postgres they're simply called Large Objects (or LOBs) and there \nis a whole chapter devoted to them in Part IV of the manual. Note \nthat you only need to use this facility if you're going to be storing \ndata over 1G in size (at which point your limit becomes 2G). What \nkind of data are in these files? What gain do you foresee in storing \nthe files directly in the db (as opposed, say, to storing the paths \nto the files in the filesystem)?\n\nErik Jones\n\nSoftware Developer | Emma(r)\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n",
"msg_date": "Thu, 13 Dec 2007 14:54:10 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Putting files into fields in a table"
}
] |
[
{
"msg_contents": "Is it possible yet in PostgreSQL to hide the source code of functions from\nusers based on role membership? I would like to avoid converting the code\nto C to secure the source code and I don't want it obfuscated either. \n\nIn an ideal world, if a user can't modify a function, he/she shouldn't be\nable to see the source code. If the user can execute the function, then the\nuser should be able to see the signature of the function but not the body.\n\nThanks!\n\n\nJon\n",
"msg_date": "Fri, 14 Dec 2007 09:01:09 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "viewing source code"
},
{
"msg_contents": "In response to \"Roberts, Jon\" <[email protected]>:\n\n> Is it possible yet in PostgreSQL to hide the source code of functions from\n> users based on role membership? I would like to avoid converting the code\n> to C to secure the source code and I don't want it obfuscated either. \n> \n> In an ideal world, if a user can't modify a function, he/she shouldn't be\n> able to see the source code. If the user can execute the function, then the\n> user should be able to see the signature of the function but not the body.\n\nI doubt that's going to happen. Mainly because I disagree completely\nwith your ideal world description (any user who can execute a function\nshould have the right to examine it to see what it actually does).\n\nI suspect that others would agree with me, the result being that there's\nno universally-agreed-on approach. As a result, what _really_ needs to\nbe done is an extra permission bit added to functions so administrators\ncan control who can view the function body.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Fri, 14 Dec 2007 10:25:26 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon <[email protected]> schrieb:\n\n> Is it possible yet in PostgreSQL to hide the source code of functions from\n> users based on role membership? I would like to avoid converting the code\n> to C to secure the source code and I don't want it obfuscated either. \n\nSome days ago i have seen a pl/pgsql- code - obfuscator, iirc somewhere\nunder http://www.pgsql.cz/index.php/PostgreSQL, but i don't know how it\nworks, and i can't find the correkt link now, i'm sorry...\n\n(maybe next week in the browser-history, my pc@work)\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Fri, 14 Dec 2007 22:24:21 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 14, 2007 4:24 PM, Andreas Kretschmer <[email protected]> wrote:\n> Some days ago i have seen a pl/pgsql- code - obfuscator, iirc somewhere\n> under http://www.pgsql.cz/index.php/PostgreSQL, but i don't know how it\n> works, and i can't find the correkt link now, i'm sorry...\n\nI started one awhile ago... but it may have been part of my mass purge\nfor disk space. I searched that site and can't find one... but it\nwould be a nice-to-have for a lot of users. Of course, I know it's\neasy to get around obfuscation, but it makes people *think* it's\nsecure, and as JD always says, it just makes it difficult for the\naverage user to understand what it's doing.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Fri, 14 Dec 2007 16:39:30 -0500",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "> > In an ideal world, if a user can't modify a function, he/she shouldn't\n> be\n> > able to see the source code. If the user can execute the function, then\n> the\n> > user should be able to see the signature of the function but not the\n> body.\n> \n> I doubt that's going to happen. Mainly because I disagree completely\n> with your ideal world description (any user who can execute a function\n> should have the right to examine it to see what it actually does).\n> \n\nThat is like saying anyone that has rights to call a web service should be\nable to see the source code for it. There should be the ability to create\nsome level of abstraction when appropriate.\n\nHowever, in the current configuration, all users with permission to log in\ncan see all source code. They don't have rights to execute the functions\nbut they can see the source code for them. Shouldn't I be able to revoke\nboth the ability to execute and the ability to see functions?\n\n\nJon\n",
"msg_date": "Fri, 14 Dec 2007 09:35:47 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "In response to \"Roberts, Jon\" <[email protected]>:\n\n> > > In an ideal world, if a user can't modify a function, he/she shouldn't\n> > be\n> > > able to see the source code. If the user can execute the function, then\n> > the\n> > > user should be able to see the signature of the function but not the\n> > body.\n> > \n> > I doubt that's going to happen. Mainly because I disagree completely\n> > with your ideal world description (any user who can execute a function\n> > should have the right to examine it to see what it actually does).\n> \n> That is like saying anyone that has rights to call a web service should be\n> able to see the source code for it.\n\nI think that's a good idea. If vendors were forced publish their code,\nwe'd have less boneheaded security breaches.\n\n> There should be the ability to create\n> some level of abstraction when appropriate.\n\nI agree. If vendors want to have boneheaded security breaches, they should\nbe allowed.\n\n> However, in the current configuration, all users with permission to log in\n> can see all source code. They don't have rights to execute the functions\n> but they can see the source code for them. Shouldn't I be able to revoke\n> both the ability to execute and the ability to see functions?\n\nUm ... why did you snip my second paragraph where I said exactly this?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Fri, 14 Dec 2007 11:18:49 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 14 Dec 2007 11:18:49 -0500\r\nBill Moran <[email protected]> wrote:\r\n\r\n> > That is like saying anyone that has rights to call a web service\r\n> > should be able to see the source code for it.\r\n> \r\n> I think that's a good idea. If vendors were forced publish their\r\n> code, we'd have less boneheaded security breaches.\r\n\r\nNot all closed source code is subject to boneheaded security breaches.\r\nI believe that this individuals request is a valid one from a business\r\nrequirements perspective.\r\n\r\n> \r\n> > There should be the ability to create\r\n> > some level of abstraction when appropriate.\r\n> \r\n> I agree. If vendors want to have boneheaded security breaches, they\r\n> should be allowed.\r\n\r\nIt is not up to your or me to make the determination of what people are\r\nable to do with their code.\r\n\r\n> \r\n> > However, in the current configuration, all users with permission to\r\n> > log in can see all source code. They don't have rights to execute\r\n> > the functions but they can see the source code for them. Shouldn't\r\n> > I be able to revoke both the ability to execute and the ability to\r\n> > see functions?\r\n\r\nYes and know. If your functions are interpreted then no, I don't see\r\nany reason for this feature, e.g; python,perl,plpgsql,sql,ruby. I can\r\nread them on disk anyway.\r\n\r\nIf you want to obfuscate your code I suggest you use a compilable form\r\nor a code obfuscation module for your functions (which can be had for\r\nat least python, I am sure others as well).\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHYrejATb/zqfZUUQRAjd7AJ9iCqsvsB/7FfvUeLkpCUZ4/14/+wCcCD+w\r\nZ4kjQ44yOgfR4ph0SKkUuUI=\r\n=v3Fz\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Fri, 14 Dec 2007 09:04:33 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Joshua D. Drake wrote:\n\n> > > However, in the current configuration, all users with permission to\n> > > log in can see all source code. They don't have rights to execute\n> > > the functions but they can see the source code for them. Shouldn't\n> > > I be able to revoke both the ability to execute and the ability to\n> > > see functions?\n> \n> Yes and know. If your functions are interpreted then no, I don't see\n> any reason for this feature, e.g; python,perl,plpgsql,sql,ruby. I can\n> read them on disk anyway.\n\nIf you have access to the files, which is not necessarily the case.\nRandom users, in particular, won't.\n\nMaybe this can be done by revoking privileges to pg_proc. I am sure it\ncan be made to work. It does work for pg_auth_id, and nobody says that\n\"they can read the passwords from disk anyway.\"\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org/\n\"We're here to devour each other alive\" (Hobbes)\n",
"msg_date": "Fri, 14 Dec 2007 14:11:27 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
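A sketch of the pg_proc idea, strictly as an experiment (it must be repeated in every database, and hiding the catalog this way can break clients such as psql's \df and admin GUIs, so test carefully before relying on it):

    -- as a superuser:
    REVOKE SELECT ON pg_catalog.pg_proc FROM PUBLIC;

    -- optionally expose everything except the function body through a view
    CREATE VIEW public.function_signatures AS
        SELECT p.oid, p.proname, p.pronamespace, p.pronargs, p.prorettype
        FROM pg_catalog.pg_proc p;
    GRANT SELECT ON public.function_signatures TO PUBLIC;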
{
"msg_contents": "In response to \"Joshua D. Drake\" <[email protected]>:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> On Fri, 14 Dec 2007 11:18:49 -0500\n> Bill Moran <[email protected]> wrote:\n> \n> > > That is like saying anyone that has rights to call a web service\n> > > should be able to see the source code for it.\n> > \n> > I think that's a good idea. If vendors were forced publish their\n> > code, we'd have less boneheaded security breaches.\n> \n> Not all closed source code is subject to boneheaded security breaches.\n> I believe that this individuals request is a valid one from a business\n> requirements perspective.\n\nI could go into all sorts of philosophical debates on this ... for example,\n\"not all drivers are stupid enough to ram their cars into other things,\nyet we still have seatbelt laws in the US.\"\n\n> > > There should be the ability to create\n> > > some level of abstraction when appropriate.\n> > \n> > I agree. If vendors want to have boneheaded security breaches, they\n> > should be allowed.\n> \n> It is not up to your or me to make the determination of what people are\n> able to do with their code.\n\nThat's what I said. Despite my cynical nature, I _do_ believe in\nallowing people to shoot their own foot. Sometimes it's funny to\nwatch.\n\nAny, yes, there are some folks who have very good QA and documentation\nteams and can avoid pitfalls of security breaches and poorly documented\nfunctions with unexpected side-effects. Even if they're not that\nbrilliant, they deserve the right to make their own choices.\n\n> > > However, in the current configuration, all users with permission to\n> > > log in can see all source code. They don't have rights to execute\n> > > the functions but they can see the source code for them. Shouldn't\n> > > I be able to revoke both the ability to execute and the ability to\n> > > see functions?\n> \n> Yes and know. If your functions are interpreted then no, I don't see\n> any reason for this feature, e.g; python,perl,plpgsql,sql,ruby. I can\n> read them on disk anyway.\n\nI disagree here. If they're connecting remotely to PG, they have no\ndirect access to the disk.\n\n> If you want to obfuscate your code I suggest you use a compilable form\n> or a code obfuscation module for your functions (which can be had for\n> at least python, I am sure others as well).\n\nAlthough this is an excellent suggestion as well.\n\nBut I still think the feature is potentially useful.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Fri, 14 Dec 2007 14:03:30 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 14, 2007 2:03 PM, Bill Moran <[email protected]> wrote:\n> I disagree here. If they're connecting remotely to PG, they have no\n> direct access to the disk.\n\npg_read_file?\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Fri, 14 Dec 2007 16:03:33 -0500",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "PostgreSQL: 8.2\n\n \n\nMy understanding is that when one creates a large object there is no way\nto link the large object to a field in a table so that cascading delete\ncan occur. Is this correct? My understanding is that you have to\nmanually delete the large object.\n\n \n\nI also read something about the OID ID being limited in size. What is\nthe size limit of this OID type? I am sure that it is bigger than the\nnumber of files that I would be uploaded into my db; but I just want to\nget an idea of the range.\n\n \n\nWhen putting a reference to a large object in a table, should the type\nof the reference object be OID?\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2\n \nMy understanding is that when one creates a large object\nthere is no way to link the large object to a field in a table so that\ncascading delete can occur. Is this correct? My understanding is\nthat you have to manually delete the large object.\n \nI also read something about the OID ID being limited in size. \nWhat is the size limit of this OID type? I am sure that it is bigger than\nthe number of files that I would be uploaded into my db; but I just want to get\nan idea of the range.\n \nWhen putting a reference to a large object in a table,\nshould the type of the reference object be OID?\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu",
"msg_date": "Fri, 14 Dec 2007 11:11:41 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large Objects and Toast"
},
{
"msg_contents": "On Dec 14, 2007 10:11 PM, Campbell, Lance <[email protected]> wrote:\n\n> PostgreSQL: 8.2\n>\n>\n>\n> My understanding is that when one creates a large object there is no way\n> to link the large object to a field in a table so that cascading delete can\n> occur. Is this correct? My understanding is that you have to manually\n> delete the large object.\n>\n\nYes, but you can setup a trigger to do that for you i think.\n\n>\n>\n> I also read something about the OID ID being limited in size. What is the\n> size limit of this OID type? I am sure that it is bigger than the number of\n> files that I would be uploaded into my db; but I just want to get an idea of\n> the range.\n>\n\nOid is an unsigned integer in postgres IIRC.\n\n\n\n-- \nUsama Munir Dar http://linkedin.com/in/usamadar\nConsultant Architect\nCell:+92 321 5020666\nSkype: usamadar\n\nOn Dec 14, 2007 10:11 PM, Campbell, Lance <[email protected]> wrote:\n\n\nPostgreSQL: 8.2\n \nMy understanding is that when one creates a large object\nthere is no way to link the large object to a field in a table so that\ncascading delete can occur. Is this correct? My understanding is\nthat you have to manually delete the large object.Yes, but you can setup a trigger to do that for you i think. \n\n \nI also read something about the OID ID being limited in size. \nWhat is the size limit of this OID type? I am sure that it is bigger than\nthe number of files that I would be uploaded into my db; but I just want to get\nan idea of the range.Oid is an unsigned integer in postgres IIRC. -- Usama Munir Dar \nhttp://linkedin.com/in/usamadarConsultant ArchitectCell:+92 321 5020666Skype: usamadar",
"msg_date": "Mon, 17 Dec 2007 01:33:58 +0500",
"msg_from": "\"Usama Dar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Objects and Toast"
}
] |
[
{
"msg_contents": "I'm grooming a new server to replace one that is soon to be retired.\nMost of the data was loaded very close together, in terms of\ndatabase transaction numbers, and probably 95% of it will never be\nupdated. To assess the potential impact of a \"freeze everything in\nthe database at once\" sort of night, I started a VACUUM FREEZE at\nthe database level, to see what the load looked like.\n \nThis seems to be a remarkably good way to cause extreme checkpoint\nspikes under 8.2.5, even with an aggressive background writer. The\nI/O pattern was surprising in other respects, too, so I'm looking\nto see if someone can help me understand it.\n \n 1 0 156 314760 1888 63457580 0 0 19704 39107 3070 11681 3 4 84 10 0\n 0 2 156 313004 1888 63459636 0 0 17176 34654 2807 10696 2 5 84 9 0\n 1 0 156 311020 1888 63461692 0 0 17152 34288 2662 10675 4 4 83 10 0\n 0 2 156 306404 1888 63465804 0 0 20056 40488 3085 12500 1 3 86 9 0\n 3 2 156 304780 1888 63468888 0 0 16936 33403 2798 11424 5 4 80 10 0\n 3 2 156 304236 1888 63468888 0 0 15768 37570 3066 10988 3 3 82 12 0\n 0 2 156 311384 1888 63462720 0 0 23800 48821 3866 14732 2 5 76 17 0\n 3 1 156 304244 1888 63468888 0 0 22440 46684 3609 13133 2 4 79 14 0\n 1 2 156 313672 1888 63459636 0 0 21528 43784 3433 12416 2 4 74 20 0\n 1 3 156 311856 1888 63461692 0 0 16968 101366 2876 8769 2 7 71 20 0\n 0 6 156 307892 1888 63464776 0 0 3824 71225 1178 2592 0 1 79 20 0\n 0 5 156 316172 1888 63456552 0 0 6904 99629 1883 5645 3 2 78 17 0\n 0 8 156 313232 1888 63459636 0 0 2880 82617 1259 3196 2 1 68 29 0\n 0 7 156 310892 1888 63461692 0 0 2384 81262 1118 4415 4 1 55 40 0\n 0 5 156 317728 1888 63453468 0 0 8616 104245 2080 8266 5 3 64 29 0\n 0 8 156 314368 1888 63457580 0 0 3352 82142 1280 4316 2 1 67 30 0\n 0 4 156 310160 1888 63460664 0 0 3928 96361 1466 3885 1 1 70 28 0\n 0 9 156 308240 1896 63462712 0 0 1880 77801 1092 2665 1 1 64 33 0\n 1 1 156 313044 1904 63460648 0 0 10568 61796 2423 8942 4 2 65 29 0\n 1 3 156 311952 1904 63461676 0 0 16112 84713 3038 9919 3 6 69 22 0\n 1 2 156 304212 1904 63469900 0 0 23200 78289 4094 14690 3 5 72 20 0\n 1 2 156 310516 1896 63463740 0 0 24384 52418 3995 14139 4 4 70 23 0\n 1 2 156 303192 1896 63470936 0 0 22608 46513 3689 10554 2 4 73 21 0\n 1 2 156 314660 1896 63459628 0 0 19464 40452 3362 9239 1 5 74 20 0\n 0 2 156 305652 1896 63467852 0 0 24080 49241 3803 10274 2 4 74 20 0\n 0 2 156 312012 1896 63461684 0 0 24360 49745 3995 11190 2 4 71 23 0\n 3 2 156 305596 1896 63466824 0 0 21896 45210 3670 12122 3 4 73 20 0\n \nNote that outside of the checkpoints (where writes shoot up and\nreads drop down), the writes track along at just over double the\nreads. This is on a database which has had relatively little\nactivity since the last database vacuum.\n \nWhy double writes per read, plus massive writes at checkpoint?\n \nIs there any harm in doing a VACUUM FREEZE after loading from\npg_dump output, before putting the machine into production?\nWhile the normal nightly vacuum, with scattered row freezes,\ndoesn't seem to cause any problems, a freeze on a mass scale\nsure seems to do so. 
I'd rather not slow down our regular\nnightly vacuum to acommodate the mass freeze case at some\nunpredicatable time.\n \n-Kevin\n \n \n PostgreSQL 8.2.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070115 (prerelease) (SUSE Linux)\n \nlisten_addresses = '*'\nport = 5412\nmax_connections = 200\nshared_buffers = 160MB\ntemp_buffers = 50MB\nwork_mem = 32MB\nmaintenance_work_mem = 1GB\nmax_fsm_pages = 800000\nbgwriter_lru_percent = 20.0\nbgwriter_lru_maxpages = 200\nbgwriter_all_percent = 10.0\nbgwriter_all_maxpages = 600\nwal_buffers = 1MB\ncheckpoint_segments = 50\ncheckpoint_timeout = 30min\nseq_page_cost = 0.5\nrandom_page_cost = 0.8\neffective_cache_size = 63GB\ngeqo = off\nfrom_collapse_limit = 15\njoin_collapse_limit = 15\nredirect_stderr = on\nlog_line_prefix = '[%m] %p %q<%u %d %r> '\nstats_block_level = on\nstats_row_level = on\nautovacuum = on\nautovacuum_naptime = 10s\nautovacuum_vacuum_threshold = 1\nautovacuum_analyze_threshold = 1\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nescape_string_warning = off\nstandard_conforming_strings = on\nsql_inheritance = off\n \nBINDIR = /usr/local/pgsql-8.2.5/bin\nDOCDIR = /usr/local/pgsql-8.2.5/doc\nINCLUDEDIR = /usr/local/pgsql-8.2.5/include\nPKGINCLUDEDIR = /usr/local/pgsql-8.2.5/include\nINCLUDEDIR-SERVER = /usr/local/pgsql-8.2.5/include/server\nLIBDIR = /usr/local/pgsql-8.2.5/lib\nPKGLIBDIR = /usr/local/pgsql-8.2.5/lib\nLOCALEDIR =\nMANDIR = /usr/local/pgsql-8.2.5/man\nSHAREDIR = /usr/local/pgsql-8.2.5/share\nSYSCONFDIR = /usr/local/pgsql-8.2.5/etc\nPGXS = /usr/local/pgsql-8.2.5/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--prefix=/usr/local/pgsql-8.2.5' '--enable-integer-datetimes' '--enable-debug' '--disable-nls'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -g\nCFLAGS_SL = -fpic\nLDFLAGS = -Wl,-rpath,'/usr/local/pgsql-8.2.5/lib'\nLDFLAGS_SL =\nLIBS = -lpgport -lz -lreadline -lcrypt -ldl -lm\nVERSION = PostgreSQL 8.2.5\n \nkgrittn@SOCRATES:~> cat /proc/version\nLinux version 2.6.16.53-0.8-smp (geeko@buildhost) (gcc version 4.1.2 20070115 (prerelease) (SUSE Linux)) #1 SMP Fri Aug 31 13:07:27 UTC 2007\nkgrittn@SOCRATES:~> cat /etc/SuSE-release\nSUSE Linux Enterprise Server 10 (x86_64)\nVERSION = 10\nPATCHLEVEL = 1\nkgrittn@SOCRATES:~> free -m\n total used free shared buffers cached\nMem: 64446 64145 300 0 1 61972\n-/+ buffers/cache: 2171 62274\nSwap: 1027 0 1027\n \n\n",
"msg_date": "Fri, 14 Dec 2007 11:12:52 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM FREEZE output more than double input"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Why double writes per read, plus massive writes at checkpoint?\n\nThe double writes aren't surprising: freezing has to be WAL-logged,\nand the odds are that each page hasn't been touched since the last\ncheckpoint, so the WAL log will include a complete page image.\nSo in the steady state where all shared buffers are dirty, the\nper-page cycle is:\n\t* write out a dirty buffer so it can be reclaimed\n\t* read in a page\n\t* modify it to mark tuples frozen\n\t* write an image of the page to WAL\n\t* leave the now-dirty page in shared buffers for later writing\n\nThe checkpoint spikes would come from trying to flush out all the\ndirty buffers at once.\n\nYou'd expect a bit of a valley after each peak, since the vacuum\ncould presumably recycle some buffers without having to flush 'em\nfirst; but I don't see one in your data. That may just be because\nthe numbers are too noisy, but I kinda suspect that the vacuum is\ndirtying buffers nearly as fast as the bgwriter can clean them,\nleaving not a lot of daylight for a valley.\n\n8.3 should pretty well eliminate the checkpoint spike in this scenario,\nbecause vacuum will work in a limited number of shared buffers instead\nof dirtying the whole cache. But you'll still see 2X writes over reads.\n\nIf this is data that you could re-generate at need, it might make sense\nto turn off full_page_writes during the initial data load and vacuum.\n\nI concur with trying to FREEZE all the data while you do this, else\nyou'll see the same work done whenever the data happens to slip past\nthe auto freeze threshold.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Dec 2007 18:59:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FREEZE output more than double input "
},
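A sketch of that sequence for data that could be re-generated at need (file and table names are hypothetical; full_page_writes is a postgresql.conf setting picked up on reload, so it cannot simply be SET per session):

    -- 1. in postgresql.conf:  full_page_writes = off   then:  pg_ctl reload
    -- 2. load and freeze the data:
    COPY bigtable FROM '/tmp/bigtable.csv' WITH CSV;
    VACUUM FREEZE ANALYZE bigtable;
    -- 3. set full_page_writes = on again and reload before normal traffic resumes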
{
"msg_contents": ">>> On Fri, Dec 14, 2007 at 5:59 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> Why double writes per read, plus massive writes at checkpoint?\n> \n> The double writes aren't surprising: freezing has to be WAL-logged,\n> and the odds are that each page hasn't been touched since the last\n> checkpoint, so the WAL log will include a complete page image.\n> So in the steady state where all shared buffers are dirty, the\n> per-page cycle is:\n> \t* write out a dirty buffer so it can be reclaimed\n> \t* read in a page\n> \t* modify it to mark tuples frozen\n> \t* write an image of the page to WAL\n> \t* leave the now-dirty page in shared buffers for later writing\n> \n> The checkpoint spikes would come from trying to flush out all the\n> dirty buffers at once.\n \nGot it. Thanks.\n \n> You'd expect a bit of a valley after each peak, since the vacuum\n> could presumably recycle some buffers without having to flush 'em\n> first; but I don't see one in your data. That may just be because\n> the numbers are too noisy, but I kinda suspect that the vacuum is\n> dirtying buffers nearly as fast as the bgwriter can clean them,\n> leaving not a lot of daylight for a valley.\n \nYeah, the pattern was pretty consistent and without valleys.\n \n> 8.3 should pretty well eliminate the checkpoint spike in this scenario,\n> because vacuum will work in a limited number of shared buffers instead\n> of dirtying the whole cache. But you'll still see 2X writes over reads.\n \nTesting 8.3beta4 so far has shown both smoother I/O and better\nperformance in all respects. The preliminary post I did where I\nthought I saw some regression on loading a pg_dump turned out to be\nwas an \"apples to oranges\" comparison; comparing the same load on\nthe same hardware and OS, 8.3 wins. (Kudos to all who worked on\nthese improvements!)\n \n> If this is data that you could re-generate at need, it might make sense\n> to turn off full_page_writes during the initial data load and vacuum.\n \nThanks for the suggestions; I'll try that.\n \n> I concur with trying to FREEZE all the data while you do this, else\n> you'll see the same work done whenever the data happens to slip past\n> the auto freeze threshold.\n \nThanks. I thought that made sense, but I'm still trying to get my\nhead around some of the dynamics of PostgreSQL and MVCC. I'll\nsuggest that as policy here.\n \n-Kevin\n \n\n",
"msg_date": "Mon, 17 Dec 2007 09:45:43 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM FREEZE output more than double input"
}
] |
[
{
"msg_contents": "\"bigtable\" has about 60M records, about 2M of which are dead at the time\nof VACUUM. Shared_buffers are about 1GB, and the machine has 4GB of\nmemory.\n\nIf I run a \"SELECT COUNT(*) FROM bigtable\", and I ktrace that (FreeBSD)\nfor 10 seconds, I see only a handful of lseek calls (33), which is no\nsurprise since I am asking for sequential I/O. I assume those lseeks are\njust to skip over pages that already happen to be in shared_buffers.\n\nHowever, If I have several indexes on that table, and I run a VACUUM, I\nobserve a lot of seeking. In a 10 second interval, I saw about 5000\nlseek calls in the ktrace to the same file descriptor (which is an\nindex). That's about one every 2ms, so I'm sure a large portion of the\nfile must have been in the OS buffer cache.\n\nI just don't quite understand what's causing the lseeks.\n\nMy understanding is that vacuum uses maintenance_work_mem to hold the\nlist of dead tuples. In my case that's 2M row versions, times about 6\nbytes per entry (in the list of dead tuples) equals about 12MB, which is\nmuch less than 128MB maintenance_work_mem. So it doesn't appear that\nmaintenance_work_mem is too small.\n\nEven if maintenance_work_mem was the limiting factor, wouldn't the\nVACUUM still be operating mostly sequentially, even if it takes multiple\npasses?\n\nThe only seeking that it seems like VACUUM would need to do in an index\nfile is when an index page completely empties out, but that wouldn't\naccount for 5000 lseeks in 10 seconds, would it? \n\nWhere am I going wrong? Are many of these lseeks no-ops or something?\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Fri, 14 Dec 2007 11:29:54 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "explanation for seeks in VACUUM"
},
{
"msg_contents": "On Fri, 2007-12-14 at 11:29 -0800, Jeff Davis wrote:\n> \"bigtable\" has about 60M records, about 2M of which are dead at the time\n> of VACUUM. Shared_buffers are about 1GB, and the machine has 4GB of\n> memory.\n\nForgot to mention: version 8.2.4\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 14 Dec 2007 11:40:37 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explanation for seeks in VACUUM (8.2.4)"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Where am I going wrong? Are many of these lseeks no-ops or something?\n\nThey're not supposed to be, but if you only tracked seeks and not\nreads or writes, it's hard to be sure what's going on.\n\n8.2's VACUUM should process a btree index (this is a btree index no?)\nin physical order, so I'd expect lseeks only when a page is already in\nbuffers --- at least on the read side. On the write side things might\nbe a great deal less predictable. You're cleaning out about one tuple\nin 30, so the odds are that nearly every index page is getting dirtied,\nand they're going to need to be written sometime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Dec 2007 19:04:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explanation for seeks in VACUUM "
},
{
"msg_contents": "On Fri, 2007-12-14 at 19:04 -0500, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > Where am I going wrong? Are many of these lseeks no-ops or something?\n> \n> They're not supposed to be, but if you only tracked seeks and not\n> reads or writes, it's hard to be sure what's going on.\n\nThe seeks were comparable to reads and writes, which is what surprised\nme:\n\n$ ktrace -p 42440; sleep 5; ktrace -C\n$ kdump -f ktrace.out | grep \"GIO fd 38 read\" | wc -l\n 11479\n$ kdump -f ktrace.out | grep \"GIO fd 38 wrote\" | wc -l\n 11480\n$ kdump -f ktrace.out | grep \"lseek.0x26\" | wc -l\n 22960\n\n> 8.2's VACUUM should process a btree index (this is a btree index no?)\n> in physical order, so I'd expect lseeks only when a page is already in\n> buffers --- at least on the read side. On the write side things might\n> be a great deal less predictable. You're cleaning out about one tuple\n> in 30, so the odds are that nearly every index page is getting dirtied,\n> and they're going to need to be written sometime.\n\nActually, the table I was VACUUMing had both btree and GIN indexes, and\nI'm not entirely sure which one was happening at the time. I will\ninvestigate more.\n\nMy intuition tells me that, if the pages are being read and dirtied\nsequentially, it would be able to write mostly sequentially (at least in\ntheory).\n\nIn the box that was causing the problem (which had more constrained\nmemory than my reproduced case), it seemed to be swamped by random I/O\n-- low CPU usage, and low disk usage (about 1-2 MB/s write), yet VACUUM\nwould take forever and the box would appear very sluggish.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 14 Dec 2007 19:53:14 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explanation for seeks in VACUUM"
}
] |
[
{
"msg_contents": "I'm not familiar at all with pg_read_file. Is it wide open so a user can\nread any file they want? Can you not lock it down like utl_file and\ndirectories in Oracle?\n\n\nJon\n> -----Original Message-----\n> From: Jonah H. Harris [mailto:[email protected]]\n> Sent: Friday, December 14, 2007 3:04 PM\n> To: Bill Moran\n> Cc: Joshua D. Drake; Roberts, Jon; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> On Dec 14, 2007 2:03 PM, Bill Moran <[email protected]>\n> wrote:\n> > I disagree here. If they're connecting remotely to PG, they have no\n> > direct access to the disk.\n> \n> pg_read_file?\n> \n> --\n> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 499 Thornall Street, 2nd Floor | [email protected]\n> Edison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Fri, 14 Dec 2007 15:35:19 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon escribi�:\n> I'm not familiar at all with pg_read_file. Is it wide open so a user can\n> read any file they want? Can you not lock it down like utl_file and\n> directories in Oracle?\n\nThat function is restricted to superusers.\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n",
"msg_date": "Fri, 14 Dec 2007 19:03:15 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "\nHello\n\ni have a python script to update 600000 rows to one table from a csv file in my\npostgres database and it takes me 5 hours to do the transaction...\n\nI'm on debian etch with 8.1 postgres server on a 64 bits quad bi opteron.\n\nI have desactived all index except the primary key who is not updated since it's\nthe reference column of the update too.\n\nWhen i run this script the server is not used by any other user.\n\nFirst when i run htop i see that the memory used is never more than 150 MB.\nI don't understand in this case why setting shmall and shmmax kernel's\nparameters to 16 GB of memory (the server has 32 GB) increase the rapidity of\nthe transaction a lot compared to a shmall and shmax in (only) 2 GB ?!\n\nThe script is run with only one transaction and pause by moment to let the time\nto postgres to write data to disk.\n\nIf the data were writed at the end of the transaction will be the perfomance\nbetter ? i wan't that in production data regulary writed to disk to prevent\nloosinf of data but it there any interest to write temporary data in disk in a\nmiddle of a transaction ???\n\nI'm completely noob to postgres and database configuration and help are\nwelcome.\n\nthanks\n\n\n",
"msg_date": "Sat, 15 Dec 2007 01:11:22 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "update 600000 rows"
},
{
"msg_contents": "[email protected] wrote:\n> Hello\n>\n> i have a python script to update 600000 rows to one table from a csv file in my\n> postgres database and it takes me 5 hours to do the transaction...\n>\n> \nLet's see if I guessed correctly.\n\nYour Python script is stepping through a 600,000 row file and updating \ninformation in a table (of unknown rows/columns) by making 600,000 \nindividual updates all wrapped in a big transaction. If correct, that \nmeans you are doing 600,000/(3,600 * 5) = 33 queries/second. If this is \ncorrect, I'd first investigate simply loading the csv data into a \ntemporary table, creating appropriate indexes, and running a single \nquery to update your other table.\n\n> First when i run htop i see that the memory used is never more than 150 MB.\n> I don't understand in this case why setting shmall and shmmax kernel's\n> parameters to 16 GB of memory (the server has 32 GB) increase the rapidity of\n> the transaction a lot compared to a shmall and shmax in (only) 2 GB ?!\n> \nAre you saying that you did this and the performance improved or you are \nwondering if it would?\n\n> The script is run with only one transaction and pause by moment to let the time\n> to postgres to write data to disk.\n> \nThis doesn't make sense. If the transaction completes successfully then \nPostgreSQL has committed the data to disk (unless you have done \nsomething non-standard and not recommended like turning off fsync). If \nyou are adding pauses between updates, don't do that - it will only slow \nyou down. If the full transaction doesn't complete, all updates will be \nthrown away anyway and if it does complete then they were committed.\n> If the data were writed at the end of the transaction will be the perfomance\n> better ? i wan't that in production data regulary writed to disk to prevent\n> loosinf of data but it there any interest to write temporary data in disk in a\n> middle of a transaction ???\n>\n> \nSee above. Actual disk IO is handled by the server. PostgreSQL is good \nat the \"D\" in ACID. If your transaction completes, the data has been \nwritten to disk. Guaranteed.\n\nCheers,\nSteve\n",
"msg_date": "Fri, 14 Dec 2007 17:16:56 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update 600000 rows"
},
{
"msg_contents": "Steve Crawford wrote:\n> [email protected] wrote:\n>> Hello\n>>\n>> i have a python script to update 600000 rows to one table from a csv \n>> file in my\n>> postgres database and it takes me 5 hours to do the transaction...\n>>\n>> \n> Let's see if I guessed correctly.\n>\n> Your Python script is stepping through a 600,000 row file and updating \n> information in a table (of unknown rows/columns) by making 600,000 \n> individual updates all wrapped in a big transaction. If correct, that \n> means you are doing 600,000/(3,600 * 5) = 33 queries/second. If this \n> is correct, I'd first investigate simply loading the csv data into a \n> temporary table, creating appropriate indexes, and running a single \n> query to update your other table.\n\ni can try this. The problem is that i have to make an insert if the \nupdate don't have affect a rows (the rows don't exist yet). The number \nof rows affected by insert is minor regards to the numbers of updated \nrows and was 0 when i test my script). I can do with a temporary table \n: update all the possible rows and then insert the rows that are in \ntemporary table and not in the production table with a 'not in' \nstatement. is this a correct way ?\n>\n>> First when i run htop i see that the memory used is never more than \n>> 150 MB.\n>> I don't understand in this case why setting shmall and shmmax kernel's\n>> parameters to 16 GB of memory (the server has 32 GB) increase the \n>> rapidity of\n>> the transaction a lot compared to a shmall and shmax in (only) 2 GB ?!\n>> \n> Are you saying that you did this and the performance improved or you \n> are wondering if it would?\n>\nYes i did this and the perfomance improved. Dont understand why. Sorry \nfor my poor english...\n\n\n>> The script is run with only one transaction and pause by moment to \n>> let the time\n>> to postgres to write data to disk.\n>> \n> This doesn't make sense. If the transaction completes successfully \n> then PostgreSQL has committed the data to disk (unless you have done \n> something non-standard and not recommended like turning off fsync). If \n> you are adding pauses between updates, don't do that - it will only \n> slow you down. If the full transaction doesn't complete, all updates \n> will be thrown away anyway and if it does complete then they were \n> committed.\n\nSorry, the pause is not caused by the python script but by postgres \nhimself. it does an average of +-3000 update and pause 2 min (htop say \nme that postgres is in writing process don't really know if it does io \nwriting). I say that : if he writes to disk some things during the \ntransaction i don't understand why ?!\n>> If the data were writed at the end of the transaction will be the \n>> perfomance\n>> better ? i wan't that in production data regulary writed to disk to \n>> prevent\n>> loosinf of data but it there any interest to write temporary data in \n>> disk in a\n>> middle of a transaction ???\n>>\n>> \n> See above. Actual disk IO is handled by the server. PostgreSQL is good \n> at the \"D\" in ACID. If your transaction completes, the data has been \n> written to disk. Guaranteed.\n>\n> Cheers,\n> Steve\n>\n>\ni try to say that in \"normal\" use (not when i run this maintenance \nscript) i want to be sure that by insert update request are write to \ndisk. They are small (1,2 or 3 rows affected) but they are a lot and \ndoing by many users. 
However, just for this maintenance script, perhaps I can \napply some other tweak to reduce the I/O stress during the transaction?\n\nCheers,\n\nLoic\n\n\n",
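The two-step variant being asked about could look roughly like this (same hypothetical names as above; NOT EXISTS usually behaves better than NOT IN on large sets):

UPDATE target
   SET val = s.val
  FROM staging s
 WHERE target.id = s.id;

INSERT INTO target (id, val)
SELECT s.id, s.val
  FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM target WHERE target.id = s.id);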
"msg_date": "Sat, 15 Dec 2007 12:43:10 +0100",
"msg_from": "=?ISO-8859-1?Q?Lo=EFc_Marteau?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update 600000 rows"
},
{
"msg_contents": "On Sat, 15 Dec 2007, [email protected] wrote:\n\n> First when i run htop i see that the memory used is never more than 150 MB.\n> I don't understand in this case why setting shmall and shmmax kernel's\n> parameters to 16 GB of memory (the server has 32 GB) increase the rapidity of\n> the transaction a lot compared to a shmall and shmax in (only) 2 GB ?!\n\nThe kernel parameters provide an upper limit for how much memory \nPostgreSQL can allocate, but by themselves they don't actually request \nmore memory. There is a configuration parameters called shared_buffers \nthat is the main thing to adjust. Since you say you're new to this, see \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm for the \nfirst set of things you should be adjusting.\n\nIf you're doing lots of updates, you'll need to increase \ncheckpoint_segments as well. Once you get the memory allocated and \ncheckpoint parameters in the right range, at that point you'll be prepared \nto look into transaction grouping and application issues in that area.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sat, 15 Dec 2007 10:47:02 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update 600000 rows"
},
{
"msg_contents": "\[email protected] wrote:\n> Hello\n>\n> i have a python script to update 600000 rows to one table from a csv file in my\n> postgres database and it takes me 5 hours to do the transaction...\n>\n> I'm on debian etch with 8.1 postgres server on a 64 bits quad bi opteron.\n>\n> I have desactived all index except the primary key who is not updated since it's\n> the reference column of the update too.\n>\n> When i run this script the server is not used by any other user.\n>\n> First when i run htop i see that the memory used is never more than 150 MB.\n> I don't understand in this case why setting shmall and shmmax kernel's\n> parameters to 16 GB of memory (the server has 32 GB) increase the rapidity of\n> the transaction a lot compared to a shmall and shmax in (only) 2 GB ?!\n>\n> The script is run with only one transaction and pause by moment to let the time\n> to postgres to write data to disk.\n>\n> If the data were writed at the end of the transaction will be the perfomance\n> better ? i wan't that in production data regulary writed to disk to prevent\n> loosinf of data but it there any interest to write temporary data in disk in a\n> middle of a transaction ???\n>\n> I'm completely noob to postgres and database configuration and help are\n> welcome.\n>\n> thank\n\nYou will get a huge improvement in time if you use batch updates instead \nof updating a row at a time. See:\n\n http://www.postgresql.org/docs/8.2/interactive/populate.html\n and\n http://www.postgresql.org/docs/8.2/interactive/sql-begin.html\n\nYou will also get a big improvement if you can turn fsync off during the \nupdate. See:\n http://www.postgresql.org/docs/8.2/interactive/runtime-config-wal.html\n\nYou also need to vacuum the table after doing that many updates since pg \ndoes a delete and insert on each update, there will be a lot of holes.\n\nCheers\nHH\n\n\n\n",
"msg_date": "Sun, 16 Dec 2007 06:42:09 -0500",
"msg_from": "\"H. Hall\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update 600000 rows"
}
] |
[
{
"msg_contents": "Lo�c Marteau <[email protected]> wrote ..\n> Steve Crawford wrote:\n> > If this\n> > is correct, I'd first investigate simply loading the csv data into a\n> > temporary table, creating appropriate indexes, and running a single \n> > query to update your other table.\n\nMy experience is that this is MUCH faster. My predecessor in my current position was doing an update from a csv file line by line with perl. That is one reason he is my predecessor. Performance did not justify continuing his contract.\n \n> i can try this. The problem is that i have to make an insert if the \n> update don't have affect a rows (the rows don't exist yet). The number\n> of rows affected by insert is minor regards to the numbers of updated \n> rows and was 0 when i test my script). I can do with a temporary table\n> : update all the possible rows and then insert the rows that are in \n> temporary table and not in the production table with a 'not in' \n> statement. is this a correct way ?\n\nThat's what I did at first, but later I found better performance with a TRIGGER on the permanent table that deletes the target of an UPDATE, if any, before the UPDATE. That's what PG does anyway, and now I can do the entire UPDATE in one command.\n",
"msg_date": "Sat, 15 Dec 2007 21:21:47 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: update 600000 rows"
},
{
"msg_contents": "On Dec 16, 2007 12:21 AM, <[email protected]> wrote:\n> Loïc Marteau <[email protected]> wrote ..\n> > Steve Crawford wrote:\n> > > If this\n> > > is correct, I'd first investigate simply loading the csv data into a\n> > > temporary table, creating appropriate indexes, and running a single\n> > > query to update your other table.\n>\n> My experience is that this is MUCH faster. My predecessor in my current position was doing an update from a csv file line by line with perl. That is one reason he is my predecessor. Performance did not justify continuing his contract.\n>\n> > i can try this. The problem is that i have to make an insert if the\n> > update don't have affect a rows (the rows don't exist yet). The number\n> > of rows affected by insert is minor regards to the numbers of updated\n> > rows and was 0 when i test my script). I can do with a temporary table\n> > : update all the possible rows and then insert the rows that are in\n> > temporary table and not in the production table with a 'not in'\n> > statement. is this a correct way ?\n>\n> That's what I did at first, but later I found better performance with a TRIGGER on the permanent table that deletes the target of an UPDATE, if any, before the UPDATE. That's what PG does anyway, and now I can do the entire UPDATE in one command.\n\nthat's very clever, and probably is the fastest/best way to do it.\nyou can even temporarily add the trigger a transaction...I am going to\ntry this out in a couple of things (I currently do these type of\nthings in two statements) and see how it turns out.\n\nmerlin\n",
"msg_date": "Sun, 16 Dec 2007 13:52:00 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update 600000 rows"
}
] |
[
{
"msg_contents": "Hi pgsql-performance,\n\nI've a problem with the select * on a small table.\n\nSee below:\n\n\nx7=# EXPLAIN ANALYZE select * from megjelenesek;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on megjelenesek (cost=0.00..15633.07 rows=207 width=52)\n(actual time=103.258..18802.530 rows=162 loops=1)\n Total runtime: 18815.362 ms\n(2 rows)\n\nx7=# \\d megjelenesek;\n Table \"public.megjelenesek\"\n Column | Type |\nModifiers\n-------------+-----------------------------+------------------------------------------------------------\n idn | integer | not null default\nnextval('megjelenesek_idn_seq'::regclass)\n tag_id | integer |\n tag_wlap_id | integer |\n get_date | timestamp without time zone | default now()\n megjelent | numeric | default 0\nIndexes:\n \"megjelenesek_pkey\" PRIMARY KEY, btree (idn)\n \"megjelenesek_tag_id\" hash (tag_id)\n \"megjelenesek_tag_wlap_id\" hash (tag_wlap_id)\n\nx7=# SELECT count(idn) from megjelenesek;\n count\n-------\n 162\n(1 row)\n\nWhy does it take cca 18-20 sec to get the results?\nToo many indexes?\n\n-- \nAdam PAPAI\nD i g i t a l Influence\nhttp://www.wooh.hu\nE-mail: [email protected]\nPhone: +36 30 33-55-735 (Hungary)\n\n",
"msg_date": "Sun, 16 Dec 2007 19:34:45 +0100",
"msg_from": "Adam PAPAI <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT * FROM table is too slow"
},
{
"msg_contents": "Adam PAPAI wrote:\n> Hi pgsql-performance,\n> \n> I've a problem with the select * on a small table.\n> \n> See below:\n> \n>\n> x7=# SELECT count(idn) from megjelenesek;\n> count\n> -------\n> 162\n> (1 row)\n> \n> Why does it take cca 18-20 sec to get the results?\n> Too many indexes?\n\nYou likely have a huge amount of dead rows. Try dumping and restoring \nthe table and remember to run vacuum (or autovacuum) often.\n\nSincerely,\n\n\nJoshua D. Drake\n\n\n",
"msg_date": "Sun, 16 Dec 2007 10:37:28 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT * FROM table is too slow"
},
{
"msg_contents": "On Sun, Dec 16, 2007 at 07:34:45PM +0100, Adam PAPAI wrote:\n> Why does it take cca 18-20 sec to get the results?\n> Too many indexes?\n\nYou cannot possibly have VACUUMed in a long time. Try a VACUUM FULL, and then\nschedule regular VACUUMs (or use autovacuum).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 16 Dec 2007 20:03:47 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT * FROM table is too slow"
},
{
"msg_contents": "\n> Adam PAPAI wrote:\n>> Hi pgsql-performance,\n>>\n>> I've a problem with the select * on a small table.\n>>\n>> See below:\n>>\n>>\n>> x7=# SELECT count(idn) from megjelenesek;\n>> count\n>> -------\n>> 162\n>> (1 row)\n>>\n>> Why does it take cca 18-20 sec to get the results?\n>> Too many indexes?\n>\n> You likely have a huge amount of dead rows. Try dumping and restoring \n> the table and remember to run vacuum (or autovacuum) often.\n>\n> Sincerely,\n>\n>\n\nJoshua D. Drake wrote:Hi,\n\nIf we run the commands \"vacumm full analyze\" and \"reindex table\", this \ncan be considered as equivalent to making a dump / restore in this case ?\n\n",
"msg_date": "Tue, 22 Jan 2008 17:44:33 -0200",
"msg_from": "\"Luiz K. Matsumura\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT * FROM table is too slow"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 22 Jan 2008 17:44:33 -0200\r\n\"Luiz K. Matsumura\" <[email protected]> wrote:\r\n\r\n> >\r\n> \r\n> Joshua D. Drake wrote:Hi,\r\n> \r\n> If we run the commands \"vacumm full analyze\" and \"reindex table\",\r\n> this can be considered as equivalent to making a dump / restore in\r\n> this case ?\r\n\r\nYes. \r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHlk/LATb/zqfZUUQRAua0AKCsZrWrkf0d+jKUa9KK3aTqzuZTZACbBiD5\r\nz3aIswcgRSwywxlhD+dgSHE=\r\n=vdeQ\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Tue, 22 Jan 2008 12:19:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT * FROM table is too slow"
},
{
"msg_contents": "\"Luiz K. Matsumura\" <luiz 'at' planit.com.br> writes:\n\n> If we run the commands \"vacumm full analyze\"\n\nIf you're using the cost based vacuum delay, don't forget that it\nwill probably take long; possibly, you may deactivate it locally\nbefore running VACUUM FULL, in case the locked table is mandatory\nfor your running application(s).\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA\n",
"msg_date": "Wed, 23 Jan 2008 09:47:42 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT * FROM table is too slow"
}
] |
[
{
"msg_contents": "I wrote\n> That's what I did at first, but later I found better performance with\n> a TRIGGER on the permanent table that deletes the target of an UPDATE,\n> if any, before the UPDATE. [INSERT!] That's what PG does anyway, and now I can do\n> the entire UPDATE [INSERT] in one command.\n\nIt's probably obvious, but what I should have said is that now I do the INSERT in one command; I have no need of an UPDATE. So I do no UPDATEs, only INSERTs.\n",
"msg_date": "Sun, 16 Dec 2007 12:45:02 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: update 600000 rows"
}
] |
[
{
"msg_contents": "Adam PAPAI <[email protected]> wrote ..\n> Hi pgsql-performance,\n> \n> I've a problem with the select * on a small table.\n> \n\nI can think of two possibilities for such incredibly slow performance.\n\nOne: your table has not been VACUUMed for a long time and is full of dead tuples. Try VACUUM FULL on it, or CLUSTER on the most frequently used index.\n\nTwo: did you accidentally put the database on your floppy drive?\n",
"msg_date": "Sun, 16 Dec 2007 14:49:55 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: SELECT * FROM table is too slow"
}
] |
[
{
"msg_contents": "Alvaro Herrera pointed out that pg_read_file requires superuser access which\nthese users won't have so revoking access to the function code should be\npossible.\n\nJoshua D. Drake suggested revoking pg_proc but that isn't the source code,\nit just has the definition of the functions. \n\nIf it isn't a feature today, what table has the source code in it? Maybe I\ncan revoke that.\n\n\nJon\n> -----Original Message-----\n> From: Jonah H. Harris [mailto:[email protected]]\n> Sent: Friday, December 14, 2007 3:04 PM\n> To: Bill Moran\n> Cc: Joshua D. Drake; Roberts, Jon; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> On Dec 14, 2007 2:03 PM, Bill Moran <[email protected]>\n> wrote:\n> > I disagree here. If they're connecting remotely to PG, they have no\n> > direct access to the disk.\n> \n> pg_read_file?\n> \n> --\n> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 499 Thornall Street, 2nd Floor | [email protected]\n> Edison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Mon, 17 Dec 2007 07:11:36 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 17, 2007 8:11 AM, Roberts, Jon <[email protected]> wrote:\n> Alvaro Herrera pointed out that pg_read_file requires superuser access which\n> these users won't have so revoking access to the function code should be\n> possible.\n>\n> Joshua D. Drake suggested revoking pg_proc but that isn't the source code,\n> it just has the definition of the functions.\n>\n> If it isn't a feature today, what table has the source code in it? Maybe I\n> can revoke that.\n\nthe table is pg_proc. you have to revoke select rights from public\nand the user of interest. be aware this will make it very difficult\nfor that user to do certain things in psql and (especially) pgadmin.\nit works.\n\na better solution to this problem is to make a language wrapper for\npl/pgsql that encrypts the source on disk. afaik, no one is working on\nth is. it would secure the code from remote users but not necessarily\nfrom people logged in to the server. the pg_proc hack works ok\nthough.\n\nmerlin\n",
"msg_date": "Mon, 17 Dec 2007 09:13:52 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon wrote:\n> Alvaro Herrera pointed out that pg_read_file requires superuser access which\n> these users won't have so revoking access to the function code should be\n> possible.\n> \n> Joshua D. Drake suggested revoking pg_proc but that isn't the source code,\n> it just has the definition of the functions. \n\nActually I suggested using a obfuscation module.\n\n> \n> If it isn't a feature today, what table has the source code in it? Maybe I\n> can revoke that.\n\nIf your pl is perl or plpgsql it will be in the prosrc (pro_src?) column \nin pg_proc.\n\nJoshua D. Drake\n\n> \n> \n> Jon\n>> -----Original Message-----\n>> From: Jonah H. Harris [mailto:[email protected]]\n>> Sent: Friday, December 14, 2007 3:04 PM\n>> To: Bill Moran\n>> Cc: Joshua D. Drake; Roberts, Jon; [email protected]\n>> Subject: Re: [PERFORM] viewing source code\n>>\n>> On Dec 14, 2007 2:03 PM, Bill Moran <[email protected]>\n>> wrote:\n>>> I disagree here. If they're connecting remotely to PG, they have no\n>>> direct access to the disk.\n>> pg_read_file?\n>>\n>> --\n>> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324\n>> EnterpriseDB Corporation | fax: 732.331.1301\n>> 499 Thornall Street, 2nd Floor | [email protected]\n>> Edison, NJ 08837 | http://www.enterprisedb.com/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Mon, 17 Dec 2007 08:13:43 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "\n\nOn Mon, 17 Dec 2007, Merlin Moncure wrote:\n\n> the table is pg_proc. you have to revoke select rights from public\n> and the user of interest. be aware this will make it very difficult\n> for that user to do certain things in psql and (especially) pgadmin.\n> it works.\n>\n> a better solution to this problem is to make a language wrapper for\n> pl/pgsql that encrypts the source on disk. afaik, no one is working on\n> th is. it would secure the code from remote users but not necessarily\n> from people logged in to the server. the pg_proc hack works ok\n> though.\n>\n\nAnother enhancement that would improve this situation would be to \nimplement per column permissions as the sql spec has, so that you could \nrevoke select on just the prosrc column and allow clients to retrieve the \nmetadata they need.\n\nKris Jurka\n",
"msg_date": "Mon, 17 Dec 2007 23:51:13 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "Hi All,\n Thanks for all the help here. Sorry for the late update but we've found our problem and fixed it already. Prior to looking at the translated code more intently, I wanted to make sure that our environmental settings were acceptable and the various emails from members have confirmed that...\n\nIn a nutshell it seems that MS SQL allows bad T-SQL code by optimizing and ignoring redundant/useless from and where clauses in an update statement whereas plpgsql will execute exactly what the code is asking it to do...\n\nWe had several update instances in the T-SQL code that looked like this :\n\nupdate \"_tbl_tmp2\"\nset \"LongBackPeriod\" = (select count (\"EPeriod\") from \"_tbl_tmp1\" where \"_tbl_tmp1\".\"Row\" = \"_tbl_tmp2\".\"Row\");\n--------------------------------------------------\nfrom \"_tbl_tmp2\" tmp2, \"_tbl_tmp1\" tmp1\nwhere tmp2.\"Row\" = tmp1.\"Row\";\n---------------------------------------------------\n\nIn T-SQL, the performance is the same whether the last two lines are there or not...\n\nIn plpgsql, this is not the case the from and where clauses are not necessary and probably creates an internal (rather useless and time consuming) inner join in plpgsql which accounts for the original performance issue.\n\nI'm happy (actually ecstatic) to report that Win2kPro + PG performance is slightly faster than Win2kPro + MSSQL/MSDE. \n\nLinux(FC7) + PG 8.x performance seems to be 3x faster than Win2KPro + MSSQL/MSDE for our stored functions. \n\nThanks for all the help! Am a believer now. :) \n\n\n\n\nHi All, Thanks for all the help here. Sorry for the late update but we've found our problem and fixed it already. Prior to looking at the translated code more intently, I wanted to make sure that our environmental settings were acceptable and the various emails from members have confirmed that...In a nutshell it seems that MS SQL allows bad T-SQL code by optimizing and ignoring redundant/useless from and where clauses in an update statement whereas plpgsql will execute exactly what the code is asking it to do...We had several update instances in the T-SQL code that looked like this :update \"_tbl_tmp2\"set \"LongBackPeriod\" = (select count (\"EPeriod\") from\n \"_tbl_tmp1\" where \"_tbl_tmp1\".\"Row\" = \"_tbl_tmp2\".\"Row\");--------------------------------------------------from \"_tbl_tmp2\" tmp2, \"_tbl_tmp1\" tmp1where tmp2.\"Row\" = tmp1.\"Row\";---------------------------------------------------In T-SQL, the performance is the same whether the last two lines are there or not...In plpgsql, this is not the case the from and where clauses are not necessary and probably creates an internal (rather useless and time consuming) inner join in plpgsql which accounts for the original performance issue.I'm happy (actually ecstatic) to report that Win2kPro + PG performance is slightly faster than Win2kPro + MSSQL/MSDE. Linux(FC7) + PG 8.x performance seems to be 3x faster than Win2KPro + MSSQL/MSDE for our stored functions. Thanks for all the help! Am a believer now. :)",
"msg_date": "Mon, 17 Dec 2007 20:31:44 -0800 (PST)",
"msg_from": "Robert Bernabe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)"
},
{
"msg_contents": "Robert Bernabe wrote:\n> I'm happy (actually ecstatic) to report that Win2kPro + PG performance\n> is slightly faster than Win2kPro + MSSQL/MSDE.\n> \n> Linux(FC7) + PG 8.x performance seems to be 3x faster than Win2KPro +\n> MSSQL/MSDE for our stored functions.\n> \n> Thanks for all the help! Am a believer now. :)\n\nThat's great news Robert - thanks for sharing!\n\nRegards, Dave.\n\n",
"msg_date": "Tue, 18 Dec 2007 09:05:26 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not\n 2005)"
},
{
"msg_contents": "Robert Bernabe wrote:\n> In a nutshell it seems that MS SQL allows bad T-SQL code by optimizing and ignoring redundant/useless from and where clauses in an update statement whereas plpgsql will execute exactly what the code is asking it to do...\n> \n> We had several update instances in the T-SQL code that looked like this :\n> \n> update \"_tbl_tmp2\"\n> set \"LongBackPeriod\" = (select count (\"EPeriod\") from \"_tbl_tmp1\" where \"_tbl_tmp1\".\"Row\" = \"_tbl_tmp2\".\"Row\");\n> --------------------------------------------------\n> from \"_tbl_tmp2\" tmp2, \"_tbl_tmp1\" tmp1\n> where tmp2.\"Row\" = tmp1.\"Row\";\n> ---------------------------------------------------\n\nThose lines are not totally useless from DB point of view. If there is \nno rows that match the join, the WHERE clause will be false, and no rows \nwill be updated. So I'm sure MS SQL doesn't ignore those lines, but does \nuse a more clever plan. Perhaps it stops processing the join as soon as \nis finds a match, while we perform the whole join, for example.\n\n> In T-SQL, the performance is the same whether the last two lines are there or not...\n> \n> In plpgsql, this is not the case the from and where clauses are not necessary and probably creates an internal (rather useless and time consuming) inner join in plpgsql which accounts for the original performance issue.\n\nYou can check the access plan with EXPLAIN.\n\n> I'm happy (actually ecstatic) to report that Win2kPro + PG performance is slightly faster than Win2kPro + MSSQL/MSDE. \n> \n> Linux(FC7) + PG 8.x performance seems to be 3x faster than Win2KPro + MSSQL/MSDE for our stored functions. \n> \n> Thanks for all the help! Am a believer now. :) \n\nNice to hear :).\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 18 Dec 2007 09:23:10 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not\n 2005)"
},
{
"msg_contents": ">>> On Tue, Dec 18, 2007 at 3:23 AM, in message\n<[email protected]>, Heikki Linnakangas\n<[email protected]> wrote: \n> Robert Bernabe wrote:\n>> In a nutshell it seems that MS SQL allows bad T-SQL code by optimizing and \n>> ignoring redundant/useless from and where clauses in an update statement \n>> whereas plpgsql will execute exactly what the code is asking it to do...\n>> \n>> We had several update instances in the T-SQL code that looked like this :\n>> \n>> update \"_tbl_tmp2\"\n>> set \"LongBackPeriod\" = (select count (\"EPeriod\") from \"_tbl_tmp1\" where \n> \"_tbl_tmp1\".\"Row\" = \"_tbl_tmp2\".\"Row\");\n>> --------------------------------------------------\n>> from \"_tbl_tmp2\" tmp2, \"_tbl_tmp1\" tmp1\n>> where tmp2.\"Row\" = tmp1.\"Row\";\n>> ---------------------------------------------------\n> \n> I'm sure MS SQL doesn't ignore those lines, but does \n> use a more clever plan.\n \nActually, this is what happens in the absence of a standard --\nallowing a FROM clause on an UPDATE statement is an extension to\nthe standard. MS SQL Server and PostgreSQL have both added such an\nextension with identical syntax and differing semantics. MS SQL\nServer allows you to declare the updated table in the FROM clause\nso that you can alias it; the first reference to the updated table\nin the FROM clause is not taken as a separate reference, so the\nabove is interpreted exactly the same as:\n \nupdate \"_tbl_tmp2\"\nset \"LongBackPeriod\" = (select count (\"EPeriod\") from \"_tbl_tmp1\" where \n_tbl_tmp1\".\"Row\" = \"_tbl_tmp2\".\"Row\")\nfrom \"_tbl_tmp1\" tmp1\nwhere \"_tbl_tmp2\".\"Row\" = tmp1.\"Row\"\n \nPostgreSQL sees tmp2 as a second, independent reference to the\nupdated table. This can be another big \"gotcha\" in migration.\n \n-Kevin\n \n\n",
"msg_date": "Fri, 28 Dec 2007 09:05:36 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Evaluation of PG performance vs MSDE/MSSQL 2000\n\t(not 2005)"
}
] |
[
{
"msg_contents": "If we are talking about enhancement requests, I would propose we create a\nrole that can be granted/revoked that enables a user to see dictionary\nobjects like source code. Secondly, users should be able to see their own\ncode they write but not others unless they have been granted this dictionary\nrole.\n\nRevoking pg_proc isn't good for users that shouldn't see other's code but\nstill need to be able to see their own code.\n\n\nJon\n> -----Original Message-----\n> From: Kris Jurka [mailto:[email protected]]\n> Sent: Monday, December 17, 2007 10:51 PM\n> To: Merlin Moncure\n> Cc: Roberts, Jon; Jonah H. Harris; Bill Moran; Joshua D. Drake; pgsql-\n> [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> \n> \n> On Mon, 17 Dec 2007, Merlin Moncure wrote:\n> \n> > the table is pg_proc. you have to revoke select rights from public\n> > and the user of interest. be aware this will make it very difficult\n> > for that user to do certain things in psql and (especially) pgadmin.\n> > it works.\n> >\n> > a better solution to this problem is to make a language wrapper for\n> > pl/pgsql that encrypts the source on disk. afaik, no one is working on\n> > th is. it would secure the code from remote users but not necessarily\n> > from people logged in to the server. the pg_proc hack works ok\n> > though.\n> >\n> \n> Another enhancement that would improve this situation would be to\n> implement per column permissions as the sql spec has, so that you could\n> revoke select on just the prosrc column and allow clients to retrieve the\n> metadata they need.\n> \n> Kris Jurka\n",
"msg_date": "Tue, 18 Dec 2007 10:05:46 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon escribi�:\n\n> Revoking pg_proc isn't good for users that shouldn't see other's code but\n> still need to be able to see their own code.\n\nSo create a view on top of pg_proc restricted by current role, and grant\nselect on that to users.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 18 Dec 2007 15:26:49 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 18 Dec 2007 10:05:46 -0600\r\n\"Roberts, Jon\" <[email protected]> wrote:\r\n\r\n> If we are talking about enhancement requests, I would propose we\r\n> create a role that can be granted/revoked that enables a user to see\r\n> dictionary objects like source code. Secondly, users should be able\r\n> to see their own code they write but not others unless they have been\r\n> granted this dictionary role.\r\n\r\nYou are likely not going to get any support on an obfuscation front.\r\nThis is an Open Source project :P\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHaBcBATb/zqfZUUQRAiHPAJ9qXeWnMEKRItO6HKZpqi/c4r5XdQCeMC4M\r\nIvdv24nAt63YkJz/5mr95aQ=\r\n=+3Wm\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Tue, 18 Dec 2007 10:52:49 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On 12/18/07, Joshua D. Drake <[email protected]> wrote:\n\n> On Tue, 18 Dec 2007 10:05:46 -0600\n> \"Roberts, Jon\" <[email protected]> wrote:\n\n> > If we are talking about enhancement requests, I would propose we\n> > create a role that can be granted/revoked that enables a user to see\n> > dictionary objects like source code. Secondly, users should be able\n> > to see their own code they write but not others unless they have been\n> > granted this dictionary role.\n\n> You are likely not going to get any support on an obfuscation front.\n> This is an Open Source project :P\n\nWait, what? This is a DBMS, with some existing security controls\nregarding the data users are able to access, and the proposal is about\nincreasing the granularity of that control. Arbitrary function bodies\nare just as much data as anything else in the system.\n\nObfuscation would be something like encrypting the function bodies so\nthat even the owner or administrator cannot view or modify the code\nwithout significant reverse engineering. I mean, some people do want\nthat sort of thing, but this proposal isn't even close.\n\nWhere on earth did \"obfuscation\" come from?\n",
"msg_date": "Wed, 19 Dec 2007 07:45:06 -0800",
"msg_from": "\"Trevor Talbot\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "So you are saying I need to create a view per user to achieve this? That\nisn't practical for an enterprise level database.\n\nI'm basically suggesting row level security that would be implemented for a\nsystem table and then RLS could be used for user defined tables too.\n\n\nJon\n\n> -----Original Message-----\n> From: Alvaro Herrera [mailto:[email protected]]\n> Sent: Tuesday, December 18, 2007 12:27 PM\n> To: Roberts, Jon\n> Cc: 'Kris Jurka'; Merlin Moncure; Jonah H. Harris; Bill Moran; Joshua D.\n> Drake; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> Roberts, Jon escribió:\n> \n> > Revoking pg_proc isn't good for users that shouldn't see other's code\n> but\n> > still need to be able to see their own code.\n> \n> So create a view on top of pg_proc restricted by current role, and grant\n> select on that to users.\n> \n> --\n> Alvaro Herrera\n> http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 18 Dec 2007 12:33:54 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon wrote:\n> So you are saying I need to create a view per user to achieve this? That\n> isn't practical for an enterprise level database.\n\nSurely you'd just have:\nCREATE VIEW ... AS SELECT * FROM pg_proc WHERE author=CURRENT_USER\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 18 Dec 2007 18:50:36 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon escribi�:\n> So you are saying I need to create a view per user to achieve this? That\n> isn't practical for an enterprise level database.\n\nNo -- that would be quite impractical indeed. I'm talking about\nsomething like\n\nrevoke all privileges on pg_proc from public;\ncreate view limited_pg_proc\nas select * from pg_proc\nwhere proowner = (select oid from pg_authid where rolname = current_user);\ngrant select on limited_pg_proc to public;\n\nOf course, it is only a rough sketch. It needs to be improved in a\nnumber of ways. But it shows that even with pure SQL the solution is\nnot far; with backend changes it is certainly doable (for example invent\na separate \"view source\" privilege for functions).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 18 Dec 2007 15:52:23 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "\n\nI have a query that looks like this:\n\n\tSELECT DISTINCT ON (EEFSCode)\n\t \neefsbase.company||eefsbase.department||eefsbase.branch||eefsbase.franchise||eefsbase.subledger||eefsbase.account \nAS EEFSCode,\n\t\teefsbase.company AS company_code,\n\t\teefsbase.branch AS branch_code,\n\t\teefsbase.department AS department_code,\n\t\teefsbase.franchise AS franchise_code,\n\t\tfincompany.full_name AS company_description ,\n\t\tfinbranch.full_name AS branch_description ,\n\t\tfindepartment.full_name AS department_description ,\n\t\tfinfranchise.full_name AS franchise_description,\n\t\teefsbase.sort_key1 AS acct_type_rpt_code,\n\t\t''::text AS acct_type_rpt_description,\n\t\teefsbase.sort_key2 AS exec_fs_rptcat2,\n\t\t''::text AS exec_fs_rptcat2_desc,\n\t\teefsbase.sort_key3 AS exec_fs_rptcat3,\n\t\t''::text AS exec_fs_rptcat3_desc,\n\t\t0 AS financial_report_category,\n\t\t''::text AS financial_report_cat_desc\n\tFROM\n\t\t(SELECT DISTINCT ON (finbalance.dealer_id, finbalance.year_id, \nfinbalance.subledger_id, finbalance.account_id)\n\t\t\tfinbalance.year_id AS year,\n\t\t\tfinbalance.dealer_id AS dealer_id,\n\t\t\tfinbalance.sort_key1 AS sort_key1,\n\t\t\tfinbalance.sort_key2 AS sort_key2,\n\t\t\tfinbalance.sort_key3 AS sort_key3,\n\t\t\tlpad(finbalance.subledger_id::text,4,'0') AS subledger,\n\t\t\tlpad(finbalance.account_id::text,4,'0') AS account,\n\t\t\tlpad(finsubledger.company::text,4,'0') AS company,\n\t\t\tlpad(finsubledger.department::text,4,'0') AS department,\n\t\t\tlpad(finsubledger.branch::text,4,'0') AS branch,\n\t\t\tlpad(finsubledger.franchise::text,4,'0') AS franchise\n\t\tFROM finbalance\n\t\tINNER JOIN finsubledger on \n((finbalance.dealer_id=finsubledger.dealer_id) AND\n\t\t\t\t\t (finbalance.year_id=finsubledger.year) AND\n\t (finbalance.subledger_id = \nfinsubledger.subledger_number))) eefsbase\n\tINNER JOIN fincompany ON \n(eefsbase.company::int=fincompany.report_number AND\n\t eefsbase.dealer_id=fincompany.dealer_id AND\n\t eefsbase.year=fincompany.year)\n\tINNER JOIN finbranch ON \n(eefsbase.branch::int=finbranch.report_number AND\n\t eefsbase.dealer_id=finbranch.dealer_id AND\n\t eefsbase.year=finbranch.year)\n\tINNER JOIN findepartment ON \n(eefsbase.department::int=findepartment.report_number AND\n\t eefsbase.dealer_id=findepartment.dealer_id AND\n\t eefsbase.year=findepartment.year)\n\tINNER JOIN finfranchise ON \n(eefsbase.franchise::int=finfranchise.report_number AND\n\t eefsbase.dealer_id=finfranchise.dealer_id AND\n\t eefsbase.year=finfranchise.year);\n\nWhere in one of my test systems the finbalance table has approximately \n220,000 records, around 17,500 of them distinct on the fields mentioned \nin the distinct clause, the finsubledger table has 97 rows and the other \ntables mentioned -fincompany,fnbranch,findepartment,finfranchise each \nhave between 1 and 50 records, i.e. 
relatively small.\n\nThe above query runs between ten and twelve seconds on this test system \nand I would like to try and get that down a bit if possible.\n\nThe explain analyze looks like thus:\n\n\"Unique (cost=19801.92..19801.93 rows=1 width=380) (actual \ntime=10838.666..10884.568 rows=17227 loops=1)\"\n\" -> Sort (cost=19801.92..19801.92 rows=1 width=380) (actual \ntime=10838.662..10863.909 rows=17227 loops=1)\"\n\" Sort Key: (((((((lpad((finsubledger.company)::text, 4, \n'0'::text)) || (lpad((finsubledger.department)::text, 4, '0'::text))) || \n(lpad((finsubledger.branch)::text, 4, '0'::text))) || \n(lpad((finsubledger.franchise)::text, 4, '0'::text))) || \n(lpad((finbalance.subledger_id)::text, 4, '0'::text))) || \n(lpad((finbalance.account_id)::text, 4, '0'::text))))\"\n\" Sort Method: external merge Disk: 2288kB\"\n\" -> Nested Loop (cost=19733.14..19801.91 rows=1 width=380) \n(actual time=9073.324..10386.626 rows=17227 loops=1)\"\n\" -> Nested Loop (cost=19733.14..19793.60 rows=1 \nwidth=393) (actual time=9073.287..10128.155 rows=17227 loops=1)\"\n\" -> Nested Loop (cost=19733.13..19785.30 rows=1 \nwidth=336) (actual time=9073.253..9911.426 rows=17227 loops=1)\"\n\" -> Nested Loop (cost=19733.13..19777.00 \nrows=1 width=279) (actual time=9073.222..9685.723 rows=17227 loops=1)\"\n\" -> Unique (cost=19733.12..19733.27 \nrows=12 width=48) (actual time=9073.143..9426.581 rows=17227 loops=1)\"\n\" -> Sort \n(cost=19733.12..19733.15 rows=12 width=48) (actual \ntime=9073.141..9226.161 rows=206748 loops=1)\"\n\" Sort Key: \nfinbalance.dealer_id, finbalance.year_id, finbalance.subledger_id, \nfinbalance.account_id\"\n\" Sort Method: external sort \n Disk: 14544kB\"\n\" -> Merge Join \n(cost=35.56..19732.91 rows=12 width=48) (actual time=0.841..2309.828 \nrows=206748 loops=1)\"\n\" Merge Cond: \n(((finbalance.dealer_id)::text = (finsubledger.dealer_id)::text) AND \n(finbalance.subledger_id = finsubledger.subledger_number))\"\n\" Join Filter: \n(finbalance.year_id = finsubledger.year)\"\n\" -> Index Scan using \npk_finbalances_pk on finbalance (cost=0.00..18596.78 rows=210130 \nwidth=32) (actual time=0.079..310.804 rows=206748 loops=1)\"\n\" -> Sort \n(cost=35.56..36.73 rows=470 width=34) (actual time=0.731..113.207 \nrows=205742 loops=1)\"\n\" Sort Key: \nfinsubledger.dealer_id, finsubledger.subledger_number\"\n\" Sort Method: \nquicksort Memory: 24kB\"\n\" -> Seq Scan on \nfinsubledger (cost=0.00..14.70 rows=470 width=34) (actual \ntime=0.011..0.101 rows=97 loops=1)\"\n\" -> Index Scan using \npk_finfranchise_dealer_company on finfranchise (cost=0.01..3.61 rows=1 \nwidth=61) (actual time=0.009..0.010 rows=1 loops=17227)\"\n\" Index Cond: \n(((finfranchise.dealer_id)::text = (finbalance.dealer_id)::text) AND \n(finfranchise.year = finbalance.year_id) AND (finfranchise.report_number \n= ((lpad((finsubledger.franchise)::text, 4, '0'::text)))::integer))\"\n\" -> Index Scan using \npk_findepartment_dealer_company on findepartment (cost=0.01..8.28 \nrows=1 width=61) (actual time=0.008..0.009 rows=1 loops=17227)\"\n\" Index Cond: \n(((findepartment.dealer_id)::text = (finbalance.dealer_id)::text) AND \n(findepartment.year = finbalance.year_id) AND \n(findepartment.report_number = ((lpad((finsubledger.department)::text, \n4, '0'::text)))::integer))\"\n\" -> Index Scan using pk_finbranch_dealer_company on \nfinbranch (cost=0.01..8.28 rows=1 width=61) (actual time=0.007..0.008 \nrows=1 loops=17227)\"\n\" Index Cond: (((finbranch.dealer_id)::text = \n(finbalance.dealer_id)::text) AND 
(finbranch.year = finbalance.year_id) \nAND (finbranch.report_number = ((lpad((finsubledger.branch)::text, 4, \n'0'::text)))::integer))\"\n\" -> Index Scan using pk_fincompany_dealer_company on \nfincompany (cost=0.01..8.28 rows=1 width=61) (actual time=0.007..0.009 \nrows=1 loops=17227)\"\n\" Index Cond: (((fincompany.dealer_id)::text = \n(finbalance.dealer_id)::text) AND (fincompany.year = finbalance.year_id) \nAND (fincompany.report_number = ((lpad((finsubledger.company)::text, 4, \n'0'::text)))::integer))\"\n\"Total runtime: 10896.235 ms\"\n\nCan anyone suggest some alterations to my SQL or perhaps something else \nI may be able to to in order to get this query to run a good bit faster?\n\nCheers,\nPaul.\n\n-- \nPaul Lambert\nDatabase Administrator\nAutoLedgers - A Reynolds & Reynolds Company\n\n",
"msg_date": "Wed, 19 Dec 2007 15:06:09 +0900",
"msg_from": "Paul Lambert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimising a query"
},
{
"msg_contents": "Paul Lambert wrote:\n> <snip> \n\n\nThis part of the query alone takes a significant part of the time:\n\nSELECT DISTINCT ON (finbalance.dealer_id, finbalance.year_id, \nfinbalance.subledger_id, finbalance.account_id)\n\t\t\tfinbalance.year_id AS year,\n\t\t\tfinbalance.dealer_id AS dealer_id,\n\t\t\tlpad(finbalance.subledger_id::text,4,'0') AS subledger,\n\t\t\tlpad(finbalance.account_id::text,4,'0') AS account\n\t\tFROM finbalance\n\nRuns with a query plan of :\n\n\"Unique (cost=30197.98..32782.33 rows=20675 width=16) (actual \ntime=5949.695..7197.475 rows=17227 loops=1)\"\n\" -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual \ntime=5949.691..7018.931 rows=206748 loops=1)\"\n\" Sort Key: dealer_id, year_id, subledger_id, account_id\"\n\" Sort Method: external merge Disk: 8880kB\"\n\" -> Seq Scan on finbalance (cost=0.00..8409.70 rows=206748 \nwidth=16) (actual time=0.042..617.949 rows=206748 loops=1)\"\n\"Total runtime: 7210.966 ms\"\n\n\nSo basically selecting from the finbalance table (approx. 206,000 \nrecords) takes 10 seconds, even longer without the distinct clause in \nthere - the distinct collapses the result-set down to around 17,000 rows.\n\nTaking out the two lpad's in there knocks off about 1500ms, so I can \ncome up with something else for them - but I'd like to get the query as \na whole down to under a second.\n\ndealer_id, year_id, subledger_id and account_id are all part of the \nprimary key on the finbalance table, so I don't think I can index them \ndown any further.\n\nAre there any config settings that would make it faster...\n\nI'm running on a Quad-core pentium Xeon 1.6GHZ server with 4GB RAM. I \nimagine shared_buffers (32MB) and work_mem (1MB) could be bumped up a \ngood bit more with 4GB of available RAM?\n\n\n-- \nPaul Lambert\nDatabase Administrator\nAutoLedgers - A Reynolds & Reynolds Company\n",
"msg_date": "Wed, 19 Dec 2007 15:46:35 +0900",
"msg_from": "Paul Lambert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimising a query"
},
{
"msg_contents": "Paul Lambert wrote:\n> Paul Lambert wrote:\n>> <snip> \n> \n> \n> This part of the query alone takes a significant part of the time:\n> \n> SELECT DISTINCT ON (finbalance.dealer_id, finbalance.year_id, \n> finbalance.subledger_id, finbalance.account_id)\n> finbalance.year_id AS year,\n> finbalance.dealer_id AS dealer_id,\n> lpad(finbalance.subledger_id::text,4,'0') AS subledger,\n> lpad(finbalance.account_id::text,4,'0') AS account\n> FROM finbalance\n> \n> Runs with a query plan of :\n> \n> \"Unique (cost=30197.98..32782.33 rows=20675 width=16) (actual \n> time=5949.695..7197.475 rows=17227 loops=1)\"\n> \" -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual \n> time=5949.691..7018.931 rows=206748 loops=1)\"\n> \" Sort Key: dealer_id, year_id, subledger_id, account_id\"\n> \" Sort Method: external merge Disk: 8880kB\"\n> \" -> Seq Scan on finbalance (cost=0.00..8409.70 rows=206748 \n> width=16) (actual time=0.042..617.949 rows=206748 loops=1)\"\n> \"Total runtime: 7210.966 ms\"\n> \n> \n> So basically selecting from the finbalance table (approx. 206,000 \n> records) takes 10 seconds, even longer without the distinct clause in \n> there - the distinct collapses the result-set down to around 17,000 rows.\n\nWell, if you need to summarise all the rows then that plan is as good as \nany.\n\nIf you run this query very frequently, you'll probably want to look into \nkeeping a summary table updated via triggers.\n\nBefore that though, try issuing a \"SET work_mem = '9MB'\" before running \nyour query. If that doesn't change the plan step up gradually. You \nshould be able to get the sort stage to happen in RAM rather than on \ndisk (see \"Sort Method\" above). Don't go overboard though, your big \nquery will probably use multiples of that value.\n\n> Taking out the two lpad's in there knocks off about 1500ms, so I can \n> come up with something else for them - but I'd like to get the query as \n> a whole down to under a second.\n\nStick the lpads in a query that wraps your DISTINCT query.\n\n> dealer_id, year_id, subledger_id and account_id are all part of the \n> primary key on the finbalance table, so I don't think I can index them \n> down any further.\n\nA CLUSTER <pkey-index> ON <table> might help, but it will degrade as you \nupdate the finbalance table.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 19 Dec 2007 08:48:00 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a query"
},
{
"msg_contents": "\"Richard Huxton\" <[email protected]> writes:\n\n> Paul Lambert wrote:\n>\n>> \" -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual >> time=5949.691..7018.931 rows=206748 loops=1)\"\n>> \" Sort Key: dealer_id, year_id, subledger_id, account_id\"\n>> \" Sort Method: external merge Disk: 8880kB\"\n\n> Before that though, try issuing a \"SET work_mem = '9MB'\" before running your\n> query. If that doesn't change the plan step up gradually. You should be able to\n> get the sort stage to happen in RAM rather than on disk (see \"Sort Method\"\n> above). \n\nFWIW you'll probably need more than that. Try something more like 20MB.\n\nAlso, note you can change this with SET for just this connection and even just\nthis query and then reset it to the normal value (or use SET LOCAL). You don't\nhave to change it in the config file and restart the whole server.\n\nAlso, try replacing the DISTINCT with GROUP BY. The code path for DISTINCT\nunfortunately needs a bit of cleaning up and isn't exactly equivalent to GROUP\nBY. In particular it doesn't support hash aggregates which, if your work_mem\nis large enough, might work for you here.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n",
"msg_date": "Wed, 19 Dec 2007 09:47:24 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a query"
},
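A sketch of the per-transaction variant mentioned above; SET LOCAL reverts automatically at COMMIT or ROLLBACK, so only this one query gets the larger sort memory and the server-wide setting is untouched.

    BEGIN;
    SET LOCAL work_mem = '20MB';
    SELECT DISTINCT ON (dealer_id, year_id, subledger_id, account_id)
           year_id, dealer_id, subledger_id, account_id
    FROM   finbalance;
    COMMIT;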
{
"msg_contents": "\n> Also, try replacing the DISTINCT with GROUP BY. The code path for DISTINCT\n> unfortunately needs a bit of cleaning up and isn't exactly equivalent to GROUP\n> BY. In particular it doesn't support hash aggregates which, if your work_mem\n> is large enough, might work for you here.\n\nSorry, strike that last suggestion. I was looking at the plan and forgot that\nthe query had DISTINCT ON. It is possible to replace DISTINCT ON with GROUP BY\nbut it's not going to be faster than the DISTINCT ON case since you'll need\nthe sort anyways.\n\nActually it's possible to do without the sort if you write some fancy\naggregate functions but for this large a query that's going to be awfully\ncomplex.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n",
"msg_date": "Wed, 19 Dec 2007 09:50:29 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a query"
},
{
"msg_contents": "Paul Lambert wrote:\n> \" -> Merge Join \n> (cost=35.56..19732.91 rows=12 width=48) (actual time=0.841..2309.828 \n> rows=206748 loops=1)\"\n\nI'm no expert, but in the interests of learning: why is the\nrows estimate so far out for this join?\n\nThanks,\n Jeremy\n\n",
"msg_date": "Wed, 19 Dec 2007 17:12:08 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising a query"
},
{
"msg_contents": "Gregory Stark wrote:\n> \"Richard Huxton\" <[email protected]> writes:\n> \n>> Paul Lambert wrote:\n>>\n>>> \" -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual >> time=5949.691..7018.931 rows=206748 loops=1)\"\n>>> \" Sort Key: dealer_id, year_id, subledger_id, account_id\"\n>>> \" Sort Method: external merge Disk: 8880kB\"\n> \n>> Before that though, try issuing a \"SET work_mem = '9MB'\" before running your\n>> query. If that doesn't change the plan step up gradually. You should be able to\n>> get the sort stage to happen in RAM rather than on disk (see \"Sort Method\"\n>> above). \n> \n> FWIW you'll probably need more than that. Try something more like 20MB.\n> \n> Also, note you can change this with SET for just this connection and even just\n> this query and then reset it to the normal value (or use SET LOCAL). You don't\n> have to change it in the config file and restart the whole server.\n> \n> Also, try replacing the DISTINCT with GROUP BY. The code path for DISTINCT\n> unfortunately needs a bit of cleaning up and isn't exactly equivalent to GROUP\n> BY. In particular it doesn't support hash aggregates which, if your work_mem\n> is large enough, might work for you here.\n> \n\n\nI changed work_mem to 20MB per suggestion and that knocks the query time \n down to just over 6 seconds... still a bit fast for my liking, but any \nhigher work_mem doesn't change the result - i.e. 30, 40, 50MB all give \njust over 6 seconds.\n\nThe explain analyze shows all the sorts taking place in memory now as \nquicksorts rather than on-disk merge in the previous query plan, so I'll \nmake a permanent change to the config to set work_mem to 20MB.\n\nI've also changed the inner-most select into a two level select with the \nlpad's on the outer so they are not being evaluated on every row, just \nthe collapsed rows - that accounted for about 1 second of the overall \ntime reduction.\n\nWould increasing the stats of anything on any of these tables speed \nthings up any more?\n\n-- \nPaul Lambert\nDatabase Administrator\nAutoLedgers - A Reynolds & Reynolds Company\n",
"msg_date": "Thu, 20 Dec 2007 06:55:58 +0900",
"msg_from": "Paul Lambert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimising a query"
}
] |
[
{
"msg_contents": "Hi.\n\nWe are looking at upgrading our primary servers. The final boxes will have\n128GB ram, fast disk arrays and 4 CPUs.\n\nWe currently have some eval units with 8GB ram and crappy disk to let us\nbenchmark CPU choice. One box has 4 3GHz dual core Opterons with 1MB cache,\nthe other box ha 4 3GHz quad core Xeons with 4MB cache.\n\nmodel name : Intel(R) Xeon(R) CPU X7350 @ 2.93GHz\ncache size : 4096 KB\nmodel name : Dual-Core AMD Opteron(tm) Processor 8222 SE\ncache size : 1024 KB\n\nI haven't had a chance to play with the hardware myself yet. The sysadmins\nhave been running some benchmarks themselves though.\n\nFor every non PG related benchmark they have run, the Xeon wins by around 20%.\n\nFor pgbench (PG 8.2 running Ubuntu), the Opteron is getting about 6x TPS\nover the Xeon (3000+ TPS on Opteron vs ~500 on Xeon). Things get a little\nbetter for Xeon with PG 8.3 (570-540 TPS).\n\nDoes this match what other people are seeing or expect, or have we screwed\nour benchmarks somehow?\n\nIs this a PG specific win for Opteron, or will we see similar results with\nother DBs?\n\nDo people see wins for non-PG databases on Xeon, and are they as dramatic as\nwe are seeing for PG on Opteron?\n\nWith PG 8.2 and 8.3, is it still pretty much limited to 8 cores making 2 of\nthe quad core Xeons redundant or detrimental?\n\nI expect we will be running this hardware for 8.2, 8.3 and 8.4. Anyone aware\nof anything that might change the landscape for 8.4?\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/",
"msg_date": "Wed, 19 Dec 2007 19:04:11 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "Stuart Bishop a écrit :\n> Hi.\n>\n> We are looking at upgrading our primary servers. The final boxes will have\n> 128GB ram, fast disk arrays and 4 CPUs.\n>\n> We currently have some eval units with 8GB ram and crappy disk to let us\n> benchmark CPU choice. One box has 4 3GHz dual core Opterons with 1MB cache,\n> the other box ha 4 3GHz quad core Xeons with 4MB cache.\n>\n> model name : Intel(R) Xeon(R) CPU X7350 @ 2.93GHz\n> cache size : 4096 KB\n> model name : Dual-Core AMD Opteron(tm) Processor 8222 SE\n> cache size : 1024 KB\n>\n> I haven't had a chance to play with the hardware myself yet. The sysadmins\n> have been running some benchmarks themselves though.\n>\n> For every non PG related benchmark they have run, the Xeon wins by around 20%.\n>\n> For pgbench (PG 8.2 running Ubuntu), the Opteron is getting about 6x TPS\n> over the Xeon (3000+ TPS on Opteron vs ~500 on Xeon). Things get a little\n> better for Xeon with PG 8.3 (570-540 TPS).\n>\n> Does this match what other people are seeing or expect, or have we screwed\n> our benchmarks somehow?\n> \nhttp://tweakers.net/reviews/661/7\n\nas an example\n\nYou can travel the website for other benchs... (there are about dual \nand quad core)\n\n> Is this a PG specific win for Opteron, or will we see similar results with\n> other DBs?\n>\n> Do people see wins for non-PG databases on Xeon, and are they as dramatic as\n> we are seeing for PG on Opteron?\n>\n> With PG 8.2 and 8.3, is it still pretty much limited to 8 cores making 2 of\n> the quad core Xeons redundant or detrimental?\n>\n> I expect we will be running this hardware for 8.2, 8.3 and 8.4. Anyone aware\n> of anything that might change the landscape for 8.4?\n>\n> \n\n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org",
"msg_date": "Wed, 19 Dec 2007 13:41:56 +0100",
"msg_from": "=?UTF-8?B?Q8OpZHJpYyBWaWxsZW1haW4=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "\"Stuart Bishop\" <[email protected]> writes:\n\n> For pgbench (PG 8.2 running Ubuntu), the Opteron is getting about 6x TPS\n> over the Xeon (3000+ TPS on Opteron vs ~500 on Xeon). Things get a little\n> better for Xeon with PG 8.3 (570-540 TPS).\n\nThere was a problem in the past which affected Xeons. But I thought it had\nbeen mostly addressed. Xeons are (or were? these things are always changing)\nmore sensitive to interprocess contention due to their memory architecture\nthough.\n\nWhat are you actually doing in these transactions? Are they read-only? If not\nis fsync=off (which you don't want if you care about your data but you do if\nyou're trying to benchmark the cpu).\n\nAre the crappy disks *identical* crappy disks? If they have different\ncontrollers or different drives (or different versions of the OS) then you\nmight be being deceived by write caching on one set and not the other. If\nthey're not read-only transactions and fsync=on then the TPS of 3000+ is not\ncredible and this is likely.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n",
"msg_date": "Wed, 19 Dec 2007 12:49:26 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "On Dec 19, 2007 6:04 AM, Stuart Bishop <[email protected]> wrote:\n> Hi.\n>\n> We are looking at upgrading our primary servers. The final boxes will have\n> 128GB ram, fast disk arrays and 4 CPUs.\n>\n> We currently have some eval units with 8GB ram and crappy disk to let us\n> benchmark CPU choice. One box has 4 3GHz dual core Opterons with 1MB cache,\n> the other box ha 4 3GHz quad core Xeons with 4MB cache.\n\nImagine two scenarios. In one you have an infinite number of hard\ndrives with an infinite amount of battery backed cache, and an\ninfinite I/O bandwidth. In the other you have one disk. Which one is\nlikely to be I/O bound?\n\nYep. So, it's not likely you'll be able to do a realistic benchmark\nof the CPUs with such a limited disk subsystem...\n\n> For pgbench (PG 8.2 running Ubuntu), the Opteron is getting about 6x TPS\n> over the Xeon (3000+ TPS on Opteron vs ~500 on Xeon). Things get a little\n> better for Xeon with PG 8.3 (570-540 TPS).\n\npgbench is a mostly I/O bound benchmark. What are your -c, -t and -s\nsettings btw?\n\nIt's would be much better if you could benchmark something like the\nreal load you'll be running in the future. Are you looking at\nreporting, transactions, content management, etc...?\n",
"msg_date": "Wed, 19 Dec 2007 09:26:36 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "On Wed, 19 Dec 2007, Stuart Bishop wrote:\n\n> For pgbench (PG 8.2 running Ubuntu), the Opteron is getting about 6x TPS\n> over the Xeon (3000+ TPS on Opteron vs ~500 on Xeon). Things get a little\n> better for Xeon with PG 8.3 (570-540 TPS).\n\nThe 3000+ TPS figure is the correct one for a controller that can cache \nwrites. Around 500TPS is normal for a setup without one. I suspect all \nyou're testing is the difference between the I/O subsystem in the two \nserves, and it's probaby the case that the Opteron disk subsystem caches \nwrites while the Xeon doesn't. You haven't drawn any useful conclusions \ncomparing Xeons and Opterons yet.\n\nWarning: the system with the write caching can easily be doing that \nincorrectly, in a way that can corrupt your database one day. See \nhttp://momjian.us/main/writings/pgsql/sgml/wal-reliability.html for an \nintro and \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm for way \nmore detail.\n\nIf you don't have a real disk setup, you can't use the default pgbench \ntest and expect the results to be useful. The main thing it does is write \nheavily in a way that makes the disk controller and associated I/O the \nbottleneck in most cases.\n\nThe only useful test you can do right now with pgbench is to pass it the \n-S parameter so that it does only reads instead. That will give you a \nmuch better idea how the CPUs compare. You still need to be careful about \nthe database scale relative to the amount of RAM; at some point even the \nread test will be limited by disk parameters instead of CPU. Take a look \nat http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm \nfor a tutorial on using pgbench to quantify read performance. Note that \nthe way I compute the sizes of things in there is a little difficult, one \nday I'm going to use some of the suggestions at \nhttp://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html \nto improve that and you should take a look there as well.\n\nYou'll also need to vary the number of clients a bit. You should see the \nlargest difference between the two servers with around 16 of them (where \nthe Xeon system has a dedicated core for each while the Opteron has 2 \nclients/core) while a useful spot to compare the maximum throughput of the \nservers will be around 64 clients.\n\n> With PG 8.2 and 8.3, is it still pretty much limited to 8 cores making 2 of\n> the quad core Xeons redundant or detrimental?\n\nWhere'd you get the idea 8 cores was a limit? As cores go up eventually \nyou run out of disk or memory bandwidth, but how that plays out is very \napplication dependant and there's no hard line anywhere.\n\n> I expect we will be running this hardware for 8.2, 8.3 and 8.4. Anyone aware\n> of anything that might change the landscape for 8.4?\n\n8.4 is only in the earliest of planning stages right now, nobody knows \nwhat that will bring yet.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 19 Dec 2007 13:50:29 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
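To make the read-only pgbench run described above concrete, a hedged example using only long-standing pgbench options (-i initialize, -s scale, -S select-only, -c clients, -t transactions per client); the database name and the particular numbers are illustrative, not recommendations.

    pgbench -i -s 100 benchdb             # build tables at scale 100 (roughly 1.5GB of data)
    pgbench -S -c 16 -t 10000 benchdb     # select-only, 16 clients: one per Xeon core
    pgbench -S -c 64 -t 10000 benchdb     # select-only, 64 clients: peak-throughput comparison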
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Wed, 19 Dec 2007 13:50:29 -0500 (EST)\r\nGreg Smith <[email protected]> wrote:\r\n\r\n> > With PG 8.2 and 8.3, is it still pretty much limited to 8 cores\r\n> > making 2 of the quad core Xeons redundant or detrimental?\r\n> \r\n> Where'd you get the idea 8 cores was a limit? As cores go up\r\n> eventually you run out of disk or memory bandwidth, but how that\r\n> plays out is very application dependant and there's no hard line\r\n> anywhere.\r\n\r\nActually this is not true. Although I have yet to test 8.3. It is\r\npretty much common knowledge that after 8 cores the acceleration of\r\nperformance drops with PostgreSQL...\r\n\r\nThis has gotten better every release. 8.1 for example handles 8 cores\r\nvery well, 8.0 didn't and 7.4 well.... :)\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHaWoJATb/zqfZUUQRAgMeAJ9RS7BLAowXpJTbXuufJhIATj9gaACgrH6x\r\nLRVDPbyIvn71ANra2yiXmgY=\r\n=8QVl\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Wed, 19 Dec 2007 10:59:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "On Dec 19, 2007 12:59 PM, Joshua D. Drake <[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Wed, 19 Dec 2007 13:50:29 -0500 (EST)\n> Greg Smith <[email protected]> wrote:\n>\n> > > With PG 8.2 and 8.3, is it still pretty much limited to 8 cores\n> > > making 2 of the quad core Xeons redundant or detrimental?\n> >\n> > Where'd you get the idea 8 cores was a limit? As cores go up\n> > eventually you run out of disk or memory bandwidth, but how that\n> > plays out is very application dependant and there's no hard line\n> > anywhere.\n>\n> Actually this is not true. Although I have yet to test 8.3. It is\n> pretty much common knowledge that after 8 cores the acceleration of\n> performance drops with PostgreSQL...\n\nI thought Tom had played with some simple hacks that got the scaling\npretty close to linear for up to 16 cores earlier this year...\n",
"msg_date": "Wed, 19 Dec 2007 13:03:32 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Wed, 19 Dec 2007 13:03:32 -0600\r\n\"Scott Marlowe\" <[email protected]> wrote:\r\n\r\n\r\n> > Actually this is not true. Although I have yet to test 8.3. It is\r\n> > pretty much common knowledge that after 8 cores the acceleration of\r\n> > performance drops with PostgreSQL...\r\n> \r\n> I thought Tom had played with some simple hacks that got the scaling\r\n> pretty close to linear for up to 16 cores earlier this year...\r\n> \r\n\r\nSee.. have not tested 8.3 above and 8.2 is better than 8.1 etc...\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHaWwCATb/zqfZUUQRApqZAJ92yx3LhMIF2nhI2LKrKAaxK2pqdgCffK9A\r\n22rLNPRHHOaZAcvQTtLmRdA=\r\n=GHVK\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Wed, 19 Dec 2007 11:07:44 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Actually this is not true. Although I have yet to test 8.3. It is\n> pretty much common knowledge that after 8 cores the acceleration of\n> performance drops with PostgreSQL...\n> \n> This has gotten better every release. 8.1 for example handles 8 cores\n> very well, 8.0 didn't and 7.4 well.... :)\n\nI agree with the spirit of what you say, but are you overstating\nthings a bit?\n\nBenchmarks I see[1] suggest that 8.1.2 scaled pretty reasonably to 16\ncores (from the chart on page 9 in the link below). But yeah, 8.0\nscaled to maybe 2 cores if you're lucky. :-)\n\nAgree with the rest of the things you say, tho. It's getting\nway better every recent release.\n\n\n\n[1] http://www.pgcon.org/2007/schedule/attachments/22-Scaling%20PostgreSQL%20on%20SMP%20Architectures%20--%20An%20Update\n\n\n",
"msg_date": "Wed, 19 Dec 2007 11:14:08 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "On Dec 19, 2007 1:07 PM, Joshua D. Drake <[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Wed, 19 Dec 2007 13:03:32 -0600\n> \"Scott Marlowe\" <[email protected]> wrote:\n>\n>\n> > > Actually this is not true. Although I have yet to test 8.3. It is\n> > > pretty much common knowledge that after 8 cores the acceleration of\n> > > performance drops with PostgreSQL...\n> >\n> > I thought Tom had played with some simple hacks that got the scaling\n> > pretty close to linear for up to 16 cores earlier this year...\n> >\n>\n> See.. have not tested 8.3 above and 8.2 is better than 8.1 etc...\n\nWell, I'm not even sure if those got applied or were just Tom hacking\nin the basement or, heck, my fevered imagination. :)\n",
"msg_date": "Wed, 19 Dec 2007 13:14:37 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Wed, 19 Dec 2007 11:14:08 -0800\r\nRon Mayer <[email protected]> wrote:\r\n\r\n> Joshua D. Drake wrote:\r\n> > Actually this is not true. Although I have yet to test 8.3. It is\r\n> > pretty much common knowledge that after 8 cores the acceleration of\r\n> > performance drops with PostgreSQL...\r\n> > \r\n> > This has gotten better every release. 8.1 for example handles 8\r\n> > cores very well, 8.0 didn't and 7.4 well.... :)\r\n> \r\n> I agree with the spirit of what you say, but are you overstating\r\n> things a bit?\r\n\r\nMy point was :)... which that PDF actually illustrates is the gain\r\nbetween say 2 cores and 8 cores is greater than 8 and 16 and even\r\nless when you go beyond 16.\r\n\r\n> \r\n> Benchmarks I see[1] suggest that 8.1.2 scaled pretty reasonably to 16\r\n> cores (from the chart on page 9 in the link below). But yeah, 8.0\r\n> scaled to maybe 2 cores if you're lucky. :-)\r\n\r\nI really need to check this test out more though because their numbers\r\ndon't reflect mine. I wonder if that is per connection.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD4DBQFHaXjxATb/zqfZUUQRAhlyAJijNIytenaBH2c5mEivFCT4qRmPAKCiW7Qn\r\n2CDwNUBNd463Kz7G6n68yA==\r\n=bnaL\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Wed, 19 Dec 2007 12:02:57 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "On Wed, 19 Dec 2007, Ron Mayer wrote:\n\n> Benchmarks I see[1] suggest that 8.1.2 scaled pretty reasonably to 16\n> cores (from the chart on page 9 in the link below). But yeah, 8.0\n> scaled to maybe 2 cores if you're lucky. :-)\n> [1] http://www.pgcon.org/2007/schedule/attachments/22-Scaling%20PostgreSQL%20on%20SMP%20Architectures%20--%20An%20Update\n\nThank you, I was looking for that one but couldn't find it again. Note \nthat those results are using a TPC-C variant, which is not the most CPU \nintensive of tests out there. It's certainly possible that an application \nthat has more processing to do per transaction (I'm thinking something \nmore in the scientific computing database realm) could scale even better.\n\nWhile I'd expect the bang per buck to go down quite a bit beyond 8 cores, \nI know I haven't seen any data on what new systems running 8.3 are capable \nof, and extrapolating performance rules of thumb based on old data is \nperilous. Bottlenecks shift around in unexpected ways. In that Unisys \nexample, they're running 32-bit single core Xeons circa 2004 with 4MB of \n*L3* cache and there's evidence that scales >16 processors. Current Xeons \nare considerably faster and you can get them with 4-8MB of *L2* cache.\n\nWhat does that do to scalability? Beats me. Maybe since the individual \nCPUs are faster, you bottleneck on something else way before you can use \n16 of them usefully. Maybe the much better CPU cache means there's less \nreliance on the memory bus and they scale better. It depends a lot on the \nCPU vs. memory vs. disk requirements of your app, which is what I was \nsuggesting before.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 19 Dec 2007 15:31:22 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n\n> It is pretty much common knowledge that\n\nI think we have too much \"common knowledge\".\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n",
"msg_date": "Wed, 19 Dec 2007 22:54:13 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "On Dec 19, 2007, at 4:54 PM, Gregory Stark wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n>\n>> It is pretty much common knowledge that\n>\n> I think we have too much \"common knowledge\".\n\n\nYeah. For a lot of folks it's still common knowledge that you should \nonly set shared_buffers to 10% of memory...\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828",
"msg_date": "Wed, 19 Dec 2007 19:51:13 -0600",
"msg_from": "Decibel! <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Wed, 19 Dec 2007 19:51:13 -0600\r\nDecibel! <[email protected]> wrote:\r\n\r\n> On Dec 19, 2007, at 4:54 PM, Gregory Stark wrote:\r\n> > \"Joshua D. Drake\" <[email protected]> writes:\r\n> >\r\n> >> It is pretty much common knowledge that\r\n> >\r\n> > I think we have too much \"common knowledge\".\r\n> \r\n> \r\n> Yeah. For a lot of folks it's still common knowledge that you should \r\n> only set shared_buffers to 10% of memory...\r\n\r\nSometimes that's true ;).\r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHae7UATb/zqfZUUQRAsKAAKCkDNtWarrHT4yDrVn7Bs3GGMRBNACfd2+B\r\n8HDzjIF2OO4aS3AZ7+7muAs=\r\n=STaP\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Wed, 19 Dec 2007 20:25:56 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
},
{
"msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n> Well, I'm not even sure if those got applied or were just Tom hacking\n> in the basement or, heck, my fevered imagination. :)\n\nFor the record, I hack in the attic ... or what I tell the IRS is my\nthird-floor office ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2007 02:07:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons? "
},
{
"msg_contents": "On 12/20/07, Tom Lane <[email protected]> wrote:\n> \"Scott Marlowe\" <[email protected]> writes:\n> > Well, I'm not even sure if those got applied or were just Tom hacking\n> > in the basement or, heck, my fevered imagination. :)\n>\n> For the record, I hack in the attic ... or what I tell the IRS is my\n> third-floor office ...\n>\n\nAwesome band name - 'Hacking in the Attic'\n\n:)\n\njan\n",
"msg_date": "Thu, 20 Dec 2007 08:49:44 -0500",
"msg_from": "\"Jan de Visser\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dual core Opterons beating quad core Xeons?"
}
] |
[
{
"msg_contents": "\n> -----Original Message-----\n> From: Trevor Talbot [mailto:[email protected]]\n> Sent: Wednesday, December 19, 2007 9:45 AM\n> To: Joshua D. Drake\n> Cc: Roberts, Jon; Kris Jurka; Merlin Moncure; Jonah H. Harris; Bill Moran;\n> [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> On 12/18/07, Joshua D. Drake <[email protected]> wrote:\n> \n> > On Tue, 18 Dec 2007 10:05:46 -0600\n> > \"Roberts, Jon\" <[email protected]> wrote:\n> \n> > > If we are talking about enhancement requests, I would propose we\n> > > create a role that can be granted/revoked that enables a user to see\n> > > dictionary objects like source code. Secondly, users should be able\n> > > to see their own code they write but not others unless they have been\n> > > granted this dictionary role.\n> \n> > You are likely not going to get any support on an obfuscation front.\n> > This is an Open Source project :P\n> \n> Wait, what? This is a DBMS, with some existing security controls\n> regarding the data users are able to access, and the proposal is about\n> increasing the granularity of that control. Arbitrary function bodies\n> are just as much data as anything else in the system.\n> \n> Obfuscation would be something like encrypting the function bodies so\n> that even the owner or administrator cannot view or modify the code\n> without significant reverse engineering. I mean, some people do want\n> that sort of thing, but this proposal isn't even close.\n\nTrevor, thank you for making the proposal clearer.\n\nThe more I thought about a counter proposal to put views on pg_proc, I\nrealized that isn't feasible either. It would break functionality of\npgAdmin because users couldn't view their source code with the tool.\n\n> \n> Where on earth did \"obfuscation\" come from?\n\nDon't know. :)\n\n\nThis really is a needed feature to make PostgreSQL more attractive to\nbusinesses. A more robust security model that better follows commercial\nproducts is needed for adoption.\n\n\nJon\n\n",
"msg_date": "Wed, 19 Dec 2007 09:52:31 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon escribi�:\n\n> The more I thought about a counter proposal to put views on pg_proc, I\n> realized that isn't feasible either. It would break functionality of\n> pgAdmin because users couldn't view their source code with the tool.\n\nWhat's wrong with patching pgAdmin?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 19 Dec 2007 12:55:31 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Roberts, Jon wrote:\n> \n> \n> This really is a needed feature to make PostgreSQL more attractive to\n> businesses. A more robust security model that better follows commercial\n> products is needed for adoption.\n> \n\nI would argue that commercial products need to get a clue and stop \nplaying bondage with their users to help stop their imminent and frankly \nobvious downfall from the Open Source competition.\n\nThis \"feature\" as it is called can be developed externally and has zero \nreason to exist within PostgreSQL. If the feature has the level of \ndemand that people think that it does, then the external project will be \nvery successful and that's cool.\n\nSincerely,\n\nJoshua D. Drake\n\n",
"msg_date": "Thu, 20 Dec 2007 08:39:59 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On 12/20/07, Joshua D. Drake <[email protected]> wrote:\n> Roberts, Jon wrote:\n\n> > This really is a needed feature to make PostgreSQL more attractive to\n> > businesses. A more robust security model that better follows commercial\n> > products is needed for adoption.\n\n> I would argue that commercial products need to get a clue and stop\n> playing bondage with their users to help stop their imminent and frankly\n> obvious downfall from the Open Source competition.\n\nI'm still not seeing where your comments are actually coming from, and\nI can't decipher your argument as a result. Exactly what is it about\nfine-grained security controls that is \"playing bondage with their\nusers\"?\n\n> This \"feature\" as it is called can be developed externally and has zero\n> reason to exist within PostgreSQL. If the feature has the level of\n> demand that people think that it does, then the external project will be\n> very successful and that's cool.\n\nI'm unsure of what you consider \"external\" here. Is SE-PostgreSQL the\ntype of thing you mean?\n",
"msg_date": "Thu, 20 Dec 2007 10:47:53 -0800",
"msg_from": "\"Trevor Talbot\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Thu, 20 Dec 2007 10:47:53 -0800\r\n\"Trevor Talbot\" <[email protected]> wrote:\r\n\r\n> \r\n> > This \"feature\" as it is called can be developed externally and has\r\n> > zero reason to exist within PostgreSQL. If the feature has the\r\n> > level of demand that people think that it does, then the external\r\n> > project will be very successful and that's cool.\r\n> \r\n> I'm unsure of what you consider \"external\" here. Is SE-PostgreSQL the\r\n> type of thing you mean?\r\n\r\nI don't know that it needs to be that extensive. I noted elsewhere in\r\nthe thread the idea of a plpgsql_s. I think that is an interesting\r\nidea. I just don't think it needs to be incorporated into\r\npostgresql-core. \r\n\r\nIf we were to remove viewing source from postgresql-core an interesting\r\npossibility would be to remove prosrc from pg_proc altogether. Instead\r\nprosrc becomes a lookup field to the prosrc table.\r\n\r\nThe prosrc table would only be accessible from a called function (thus\r\nyou can't grab source via select). Of course this wouldn't apply to\r\nsuperusers but any normal user would not be able to so much as look\r\nsideways at the prosrc table.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHasUbATb/zqfZUUQRAqINAJsFpvPkUJ6oL/gH7dX4YLsbldIC4gCfdujh\r\n/S2b/ZmQU+R54MlO5ATelns=\r\n=+2Ut\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Thu, 20 Dec 2007 11:40:11 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Joshua D. Drake escribi�:\n\n> I don't know that it needs to be that extensive. I noted elsewhere in\n> the thread the idea of a plpgsql_s. I think that is an interesting\n> idea. I just don't think it needs to be incorporated into\n> postgresql-core. \n\nI don't think that makes any kind of sense. Hiding prosrc should happen\non a entirely different level from the language on which the function is\nwritten. It's a completely orthogonal decision. Besides, you probably\ndon't want prosrc to be encrypted -- just not accesible to everyone, and\nit doesn't make sense to have a different _language_ to do that.\n\nAlso, having an encrypted source code means there must be a decryption\nkey somewhere, which is a pain on itself. And if you expose the crypted\nprosrc, you are exposing to brute force attacks (to which you are not if\nprosrc is hidden).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 20 Dec 2007 17:07:39 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 20, 2007 3:07 PM, Alvaro Herrera <[email protected]> wrote:\n> I don't think that makes any kind of sense. Hiding prosrc should happen\n> on a entirely different level from the language on which the function is\n> written. It's a completely orthogonal decision. Besides, you probably\n> don't want prosrc to be encrypted -- just not accesible to everyone, and\n> it doesn't make sense to have a different _language_ to do that.\n\nI kinda agree, kinda disagree on this point. You may recall the huge\ndebate a while back where AndrewSN and others were promoting a revised\nset of views to expose the system catalogs. I thought this was a good\nidea because the internal catalogs could be hidden from all but the su\nand the views could be much easier to manipulate in that fashion. The\nproposal was however shot down for other reasons.\n\nI don't really agree that wrapping pl/pgsql with encryptor/decryptor\nis a bad idea. It's fairly easy to do and very flexible (you don't\nhave to stop at encryption...for example you could run the code\nthrough a pre-processor for token substitution). We are not adding a\nlanguage in the semantic sense, wrapping an existing one. Could\nprobably be extended to multiple languages if desired without too much\neffort...I think it's only worthwhile bringing in core if you want to\nhide the internals inside the syntax (CREATE ENCRYPTED FUNCTION\nfoo...)\n\nKey management is an issue but easily solved. Uber simple solution is\nto create a designated table holding the key(s) and use classic\npermissions to guard it. So I don't agree with your negative comments\nin this direction but I'm not saying this is the only way to solve\nthis. It is, however the only realistic way to do it without changes\nto the project or breaking pgadmin.\n\n> Also, having an encrypted source code means there must be a decryption\n> key somewhere, which is a pain on itself. And if you expose the crypted\n> prosrc, you are exposing to brute force attacks (to which you are not if\n> prosrc is hidden).\n\ni don't buy the brute force argument at all...aes256 or blowfish are\nperfectly safe. The purpose of encryption is to move sensitive data\nthrough public channels...otherwise, why encrypt?\n\nmerlin\n",
"msg_date": "Thu, 20 Dec 2007 15:35:42 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
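To ground the "wrap the body with an encryptor/decryptor" idea discussed above, a deliberately simple sketch using contrib/pgcrypto's pgp_sym_encrypt/pgp_sym_decrypt; the table and function names are invented, and keeping the key in a permission-guarded table is exactly the simple-but-weak key management being debated in this thread.

    -- key lives in a table only a privileged role can read
    CREATE TABLE src_key (k text);
    REVOKE ALL ON src_key FROM PUBLIC;

    -- encrypted function bodies
    CREATE TABLE hidden_src (proname text PRIMARY KEY, body bytea);
    REVOKE ALL ON hidden_src FROM PUBLIC;

    INSERT INTO hidden_src
    SELECT 'secret_fn',
           pgp_sym_encrypt('BEGIN RETURN 42; END;', (SELECT k FROM src_key));

    -- a SECURITY DEFINER helper would decrypt and install/execute the body:
    SELECT pgp_sym_decrypt(body, (SELECT k FROM src_key))
    FROM   hidden_src WHERE proname = 'secret_fn';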
{
"msg_contents": "On Thu, Dec 20, 2007 at 03:35:42PM -0500, Merlin Moncure wrote:\n> \n> Key management is an issue but easily solved. Uber simple solution is\n> to create a designated table holding the key(s) and use classic\n> permissions to guard it. \n\nAny security expert worth the title would point and laugh at that\nsuggestion. If the idea is that the contents have to be encrypted to\nprotect them, then it is just not acceptable to have the encryption keys\nonline. That's the sort of \"security\" that inevitably causes programs to\nget a reputation for ill-thought-out protections.\n\nA\n\n",
"msg_date": "Thu, 20 Dec 2007 15:52:14 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 20, 2007 3:52 PM, Andrew Sullivan <[email protected]> wrote:\n> On Thu, Dec 20, 2007 at 03:35:42PM -0500, Merlin Moncure wrote:\n> >\n> > Key management is an issue but easily solved. Uber simple solution is\n> > to create a designated table holding the key(s) and use classic\n> > permissions to guard it.\n>\n> Any security expert worth the title would point and laugh at that\n> suggestion. If the idea is that the contents have to be encrypted to\n> protect them, then it is just not acceptable to have the encryption keys\n> online. That's the sort of \"security\" that inevitably causes programs to\n> get a reputation for ill-thought-out protections.\n\nright, right, thanks for the lecture. I am aware of various issues\nwith key management.\n\nI said 'simple' not 'good'. there are many stronger things, like\nforcing the key to be passed in for each invocation, hmac, etc. etc.\nI am not making a proposal here and you don't have to denigrate my\nbroad suggestion on a technical detail which is quite distracting from\nthe real issue at hand, btw. I was just suggesting something easy to\nstop casual browsing. If you want to talk specifics, we can talk\nspecifics...\n\nmerlin\n",
"msg_date": "Thu, 20 Dec 2007 17:04:33 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Merlin Moncure escribi�:\n\n> I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n> is a bad idea.\n\nRight. But do you agree that it is separate from having hidden prosrc?\nIf we can complete a design then let's shot that way, and aim at\nencryption sometime in the future :-)\n\nI have to note that I would probably not be the one to actually produce\na patch in this direction, or even to work on a working, detailed design\n:-) You just read Joshua's opinion on this issue and I don't think I\nneed to say more :-)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 20 Dec 2007 19:28:21 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Thu, Dec 20, 2007 at 05:04:33PM -0500, Merlin Moncure wrote:\n> right, right, thanks for the lecture. I am aware of various issues\n> with key management.\n\nSorry to come off that way. It wasn't my intention to lecture, but rather\nto try to stop dead a cure that, in my opinion, is rather worse than the\ndisease.\n\n> I said 'simple' not 'good'. \n\nI think this is where we disagree. It's simple only because it's no\nsecurity at all. It's not that it's \"not good for some purposes\". I'm\narguing that it's the sort of approach that shouldn't be used ever, period. \n\nWe have learned, over and over again, that simple answers that might have\nbeen good enough for a very narrow purpose inevitably get used for a\nslightly wider case than that for which they're appropriate. Anything that\ninvolves storing the keys in the same repository as the encrypted data is\njust begging to be misused that way.\n\n> I am not making a proposal here and you don't have to denigrate my\n> broad suggestion on a technical detail which is quite distracting from\n> the real issue at hand, btw. \n\nThis isn't a technical detail that I'm talking about: it's a very serious\nmistake in the entire approach to which you alluded, and goes to the heart\nof why I think any talk of somehow encrypting or otherwise obfuscating the\ncontents of pg_proc are a bad idea. Column controls based on user roles are\nanother matter, because they'd be part of the access control system in the\nDBMS.\n\nBest,\n\nA\n",
"msg_date": "Thu, 20 Dec 2007 17:35:47 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 20, 2007 5:28 PM, Alvaro Herrera <[email protected]> wrote:\n> > I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n> > is a bad idea.\n>\n> Right. But do you agree that it is separate from having hidden prosrc?\n> If we can complete a design then let's shot that way, and aim at\n> encryption sometime in the future :-)\n>\n> I have to note that I would probably not be the one to actually produce\n> a patch in this direction, or even to work on a working, detailed design\n> :-) You just read Joshua's opinion on this issue and I don't think I\n> need to say more :-)\n\nit is separate. doing it hiding prosrc way requires, as i see it a)\nrow/col security, or b) view switcheroo\nrow/col security is great but views (IMO) are a better approach to\nthis generally. archives is of course replete with numerous generally\nfruitless treatments of both topics.\n\nview switcheroo is more of a 'do the ends justify the means' debate.\nthis could turn into a big discussion about what else could be done\nwith the system catalogs.\n\nsince its not really all that difficult to disable access to pg_proc,\nand there are relatively few side effects outside of hosing pgadmin, i\ndon't think the ends do justify the means at least in terms of\ninternal server changes. If the necessary features get added in for\nother reasons, then perhaps...\n\nwrapping language handlers is interesting from other angles too. many\ntimes I've wanted to do preprocessing on functions without sacrificing\nability of pasting from psql.\n\nmerlin\n",
"msg_date": "Thu, 20 Dec 2007 17:54:37 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
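For what it's worth, the blunt "disable access to pg_proc" Merlin alludes to above is just catalog-level permissions; it needs superuser rights, is bypassed by superusers, and, as noted, breaks pgAdmin and psql's \df+ for ordinary users. The role name below is illustrative.

    REVOKE SELECT ON pg_catalog.pg_proc FROM PUBLIC;
    GRANT  SELECT ON pg_catalog.pg_proc TO trusted_dev;   -- optional re-grant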
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n> is a bad idea.\n\nIt's quite a good idea, because it has more than zero chance of\nsucceeding politically in the community.\n\nThe fundamental reason why preventing access to pg_proc.prosrc won't\nhappen is this: all the pain (and there will be plenty) will be\ninflicted on people who get none of the benefit (because they don't give\na damn about hiding their own functions' code). The folks who want\nfunction hiding can shout all they want, but as long as there is a very\nsizable fraction of the community who flat out *don't* want it, it's\nnot going to get applied.\n\nEncrypted function bodies avoid this problem because they inflict no\nperformance penalty, operational complexity, or client-code breakage\non people who don't use the feature. They are arguably also a better\nsolution because they can guard against more sorts of threats than\na column-hiding solution can.\n\nI don't deny that the key-management problem is interesting, but it\nseems soluble; moreover, the difficulties that people have pointed to\nare nothing but an attempt to move the goalposts, because they\ncorrespond to requirements that a column-hiding solution would never\nmeet at all.\n\nSo if you want something other than endless arguments to happen,\ncome up with a nice key-management design for encrypted function\nbodies.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Dec 2007 18:01:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code "
},
{
"msg_contents": ">\n> wrapping pl/pgsql with encryptor/decryptor\n>\n> It's quite a good idea, because it has more than zero chance of\n> succeeding politically in the community.\n>\n\nIt's additionally a good idea because the other big database is using the\nsame approach. Easier sell to phb.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n\nwrapping pl/pgsql with encryptor/decryptor\nIt's quite a good idea, because it has more than zero chance ofsucceeding politically in the community.It's additionally a good idea because the other big database is using the same approach. Easier sell to phb.\nHarald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 4970435 Stuttgart0173/9409607fx 01212-5-13695179 -EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!",
"msg_date": "Fri, 21 Dec 2007 00:23:46 +0100",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "\nIs this a TODO?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n> > is a bad idea.\n> \n> It's quite a good idea, because it has more than zero chance of\n> succeeding politically in the community.\n> \n> The fundamental reason why preventing access to pg_proc.prosrc won't\n> happen is this: all the pain (and there will be plenty) will be\n> inflicted on people who get none of the benefit (because they don't give\n> a damn about hiding their own functions' code). The folks who want\n> function hiding can shout all they want, but as long as there is a very\n> sizable fraction of the community who flat out *don't* want it, it's\n> not going to get applied.\n> \n> Encrypted function bodies avoid this problem because they inflict no\n> performance penalty, operational complexity, or client-code breakage\n> on people who don't use the feature. They are arguably also a better\n> solution because they can guard against more sorts of threats than\n> a column-hiding solution can.\n> \n> I don't deny that the key-management problem is interesting, but it\n> seems soluble; moreover, the difficulties that people have pointed to\n> are nothing but an attempt to move the goalposts, because they\n> correspond to requirements that a column-hiding solution would never\n> meet at all.\n> \n> So if you want something other than endless arguments to happen,\n> come up with a nice key-management design for encrypted function\n> bodies.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Fri, 21 Dec 2007 09:34:53 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 21, 2007 9:34 AM, Bruce Momjian <[email protected]> wrote:\n>\n> Is this a TODO?\n>\n\nI don't think so, at least not yet (it's not clear what if anything\nthere is to do).\n\nsee: http://archives.postgresql.org/pgsql-hackers/2007-12/msg00788.php\n\nmerlin\n",
"msg_date": "Fri, 21 Dec 2007 09:45:59 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Is this a TODO?\n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n>> \"Merlin Moncure\" <[email protected]> writes:\n>>> I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n>>> is a bad idea.\n>> It's quite a good idea, because it has more than zero chance of\n>> succeeding politically in the community.\n>>\n>> The fundamental reason why preventing access to pg_proc.prosrc won't\n>> happen is this: all the pain (and there will be plenty) will be\n>> inflicted on people who get none of the benefit (because they don't give\n>> a damn about hiding their own functions' code). The folks who want\n>> function hiding can shout all they want, but as long as there is a very\n>> sizable fraction of the community who flat out *don't* want it, it's\n>> not going to get applied.\n>>\n>> Encrypted function bodies avoid this problem because they inflict no\n>> performance penalty, operational complexity, or client-code breakage\n>> on people who don't use the feature. They are arguably also a better\n>> solution because they can guard against more sorts of threats than\n>> a column-hiding solution can.\n>>\n>> I don't deny that the key-management problem is interesting, but it\n>> seems soluble; moreover, the difficulties that people have pointed to\n>> are nothing but an attempt to move the goalposts, because they\n>> correspond to requirements that a column-hiding solution would never\n>> meet at all.\n>>\n>> So if you want something other than endless arguments to happen,\n>> come up with a nice key-management design for encrypted function\n>> bodies.\n\nI keep thinking the problem of keys is similar that of Apache servers \nwhich use certificates that require passphrases. When the server is \nstarted, the passphrase is entered on the command line.\n\n-- \nDan Langille - http://www.langille.org/\n",
"msg_date": "Fri, 21 Dec 2007 09:51:24 -0500",
"msg_from": "Dan Langille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "It seems like a lot of people only saw \"hide source code\" in the\noriginal message, and then went off on tangents that don't have\nanything to do with the request.\n\nAgain:\n\nOn 12/14/07, Roberts, Jon <[email protected]> wrote:\n> Is it possible yet in PostgreSQL to hide the source code of functions from\n> users based on role membership? I would like to avoid converting the code\n> to C to secure the source code and I don't want it obfuscated either.\n>\n> In an ideal world, if a user can't modify a function, he/she shouldn't be\n> able to see the source code. If the user can execute the function, then the\n> user should be able to see the signature of the function but not the body.\n\nAs a Role under PostgreSQL, I can create tables, views, functions,\netc. As the owner of those objects, I can control what other roles can\nview data through them, and what roles can modify them.\n\nHowever, unlike tables, I cannot control what roles can view the data\ncontained within my PL functions (body).\n\nThat's it. A very simple problem. One that has absolutely nothing\nwhatsoever to do with encrypted storage on disk or hiding things from\nDBAs or superusers.\n\nI'm surprised this group ended up so far off point. It's not as if\nobjecting to this requires a bunch of abstract hyperbole, just a\nsimple \"it's not worth the effort and it's considered a bad idea to\nput security-senstive data inside PL function bodies\".\n\n\nOn 12/20/07, Joshua D. Drake <[email protected]> wrote:\n> On Thu, 20 Dec 2007 10:47:53 -0800\n> \"Trevor Talbot\" <[email protected]> wrote:\n\n> > > This \"feature\" as it is called can be developed externally and has\n> > > zero reason to exist within PostgreSQL. If the feature has the\n> > > level of demand that people think that it does, then the external\n> > > project will be very successful and that's cool.\n\n> > I'm unsure of what you consider \"external\" here. Is SE-PostgreSQL the\n> > type of thing you mean?\n\n> I don't know that it needs to be that extensive. I noted elsewhere in\n> the thread the idea of a plpgsql_s. I think that is an interesting\n> idea. I just don't think it needs to be incorporated into\n> postgresql-core.\n\nI was trying to get a handle on whether you meant external as in\nmiddleware, or external as in third-party patches to PostgreSQL. The\nOP's request doesn't necessarily need something as extensive as\nSE-PostgreSQL, but it needs to be on the same level: something that\naffects the database surface clients see, not apps behind middleware.\n\n\nOn 12/20/07, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n\n> > I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n> > is a bad idea.\n\n> It's quite a good idea, because it has more than zero chance of\n> succeeding politically in the community.\n\nSomething that looks a lot like encryption of the entire database is\nmore likely to succeed politically than a simple addition to\nPostgreSQL's existing role-based security model? Really?\n\nIt's not like I can claim otherwise, I'm just wondering if I woke up\nin an alternate universe this morning...\n\n> The fundamental reason why preventing access to pg_proc.prosrc won't\n> happen is this: all the pain (and there will be plenty) will be\n> inflicted on people who get none of the benefit (because they don't give\n> a damn about hiding their own functions' code). 
The folks who want\n> function hiding can shout all they want, but as long as there is a very\n> sizable fraction of the community who flat out *don't* want it, it's\n> not going to get applied.\n\nI don't understand. Can you give an example of pain you see coming?\n",
"msg_date": "Fri, 21 Dec 2007 11:02:43 -0800",
"msg_from": "\"Trevor Talbot\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "I wrote:\n\n> That's it. A very simple problem.\n\nIt was hinted to me off-list that my mail was fanning the flames, so\nto clarify: when I say things like the above, I mean conceptually.\n\nI think there might be a shared pool of knowledge that says it's\nanything but simple in practical terms, but that hasn't been\ncommunicated clearly in this thread. That's what I was getting at.\n",
"msg_date": "Fri, 21 Dec 2007 13:06:18 -0800",
"msg_from": "\"Trevor Talbot\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "\"Trevor Talbot\" <[email protected]> writes:\n> Something that looks a lot like encryption of the entire database is\n> more likely to succeed politically than a simple addition to\n> PostgreSQL's existing role-based security model? Really?\n\nI guess that you have failed to understand any of the discussion.\n\nAdding a GRANT SEESOURCECODE ON FUNCTION type of privilege would\nperhaps be a \"simple addition to PostgreSQL's existing security model\",\nbut it would accomplish precisely zero, because anyone smart enough\nto be using Postgres in the first place would simply look directly into\npg_proc to see the function body. In order to make it into a meaningful\nrestriction, we would have to restrict direct SQL access to the system\ncatalogs --- at least that one --- which would break vast quantities of\nstuff. The fact that psql, pg_dump, and pgAdmin would all break is\ndaunting in itself, but those are likely just the tip of the iceberg.\nLooking at the system catalogs has always been part of the culture\naround here, and it's impossible to guess how many one-off client\nprograms do it. I'd bet on \"a lot\", though.\n\nAnother problem is that you're facing a cultural bias. You quote\n\n> On 12/14/07, Roberts, Jon <[email protected]> wrote:\n>> In an ideal world, if a user can't modify a function, he/she shouldn't be\n>> able to see the source code.\n\nbut what neither of you apparently grasp is that to most open source\nprogrammers, that's not an \"ideal world\", that's a pretty good\ndescription of hell on earth. There is no way that you will persuade\nthis project that hiding source code should be the default behavior,\nor even especially easy.\n\nWe're willing to think about ways to hide source code where there is a\nreally serious commercial imperative to do it --- but in cases like\nthat, schemes that are as easily broken into as a SQL-level GRANT are\nprobably not good enough anyhow. And thus we arrive at encrypted source\ntext and discussions of where to keep the key.\n\nOnce again: this discussion is 100% off-topic for pgsql-performance.\nIf you want to keep talking about it, please join the child thread on\npgsql-hackers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2007 17:07:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code "
}
] |
[
{
"msg_contents": "I've been fighting with the common workarounds for inadequate response \ntimes on select count(*) and min(),max() on tables with tens of \nmillions of rows for quite a while now and understand the reasons for \nthe table scans.\n\nI have applications that regularly poll a table ( ideally, the more \nfrequent, the better ) to learn the most recent data inside it as well \nas the # of rows in it ( among a few other things ). As the databases \nhave grown in size, these summarizations could no longer be done on \nthe fly, so I wrote a database wrapper API that tracks those values \ninternally.\n\nThis wrapper has grown very complex and is difficult to manage across \ndifferent systems. What I'd like to do instead is implement triggers \nfor insert, updates, and deletes to check and/or replace a value in a \n\"table_stats\", representing table count, min/max dates, and a few \nother costly operations.. that can then be queried in short order. I \nknow this is a fairly common thing to do.\n\nThe thing that concerns me is dead tuples on the table_stats table. I \nbelieve that every insert of new data in one of the monitored tables \nwill result in an UPDATE of the table_stats table. When thousands \n( or millions ) of rows are inserted, the select performance ( even \ntrying with an index ) on table_stats slows down in a hurry. If I \nwrap the inserts into large transactions, will it only call the update \non table_states when I commit?\n\nObviously I want to vacuum this table regularly to recover this. The \nproblem I'm running into is contention between VACUUM ( not full ) and \npg_dump ( version 8.0.12 ). My system backups takes 6 hours to run \npg_dump on a 400GB cluster directory. If the vacuum command fires \nduring the dump, it forces an exclusive lock and any queries will hang \nuntil pg_dump finishes.\n\nIf I have to wait until pg_dump is finished before issuing the VACUUM \ncommand, everything slows down significantly as the dead tuples in \ntable_stats pile up.\n\nWhat strategy could I employ to either:\n\n1. resolve the contention between pg_dump and vacuum, or\n2. reduce the dead tuple pile up between vacuums\n\nThanks for reading\n\n-Dan\n",
"msg_date": "Wed, 19 Dec 2007 16:38:25 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Minimizing dead tuples caused by update triggers"
},
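A minimal sketch of the trigger-maintained summary table described above, assuming a monitored table named events with a created_at timestamp column (both names are illustrative, not taken from the original post):

CREATE TABLE table_stats (
    table_name text PRIMARY KEY,
    row_count  bigint NOT NULL DEFAULT 0,
    min_date   timestamp,
    max_date   timestamp
);

CREATE OR REPLACE FUNCTION events_stats_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE table_stats
           SET row_count = row_count + 1,
               min_date  = CASE WHEN min_date IS NULL OR NEW.created_at < min_date
                                THEN NEW.created_at ELSE min_date END,
               max_date  = CASE WHEN max_date IS NULL OR NEW.created_at > max_date
                                THEN NEW.created_at ELSE max_date END
         WHERE table_name = 'events';
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE table_stats SET row_count = row_count - 1
         WHERE table_name = 'events';
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_stats AFTER INSERT OR DELETE ON events
    FOR EACH ROW EXECUTE PROCEDURE events_stats_trg();

Every row inserted into the monitored table thus leaves one dead row version behind in table_stats, which is why the summary table bloats so quickly between vacuums.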
{
"msg_contents": "Dan Harris <[email protected]> writes:\n> The thing that concerns me is dead tuples on the table_stats table. I \n> believe that every insert of new data in one of the monitored tables \n> will result in an UPDATE of the table_stats table. When thousands \n> ( or millions ) of rows are inserted, the select performance ( even \n> trying with an index ) on table_stats slows down in a hurry.\n\nYup. FWIW, 8.3's \"HOT\" tuple updates might help this quite a lot.\nNot a lot to be done about it in 8.0.x though :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Dec 2007 19:39:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Minimizing dead tuples caused by update triggers "
},
{
"msg_contents": "On Dec 19, 2007, at 6:39 PM, Tom Lane wrote:\n>> The thing that concerns me is dead tuples on the table_stats \n>> table. I\n>> believe that every insert of new data in one of the monitored tables\n>> will result in an UPDATE of the table_stats table. When thousands\n>> ( or millions ) of rows are inserted, the select performance ( even\n>> trying with an index ) on table_stats slows down in a hurry.\n>\n> Yup. FWIW, 8.3's \"HOT\" tuple updates might help this quite a lot.\n> Not a lot to be done about it in 8.0.x though :-(\n\n\nA work-around others have used is to have the trigger just insert \ninto a 'staging' table and then periodically take the records from \nthat table and summarize them somewhere else. You still have a vacuum \nconcern on the staging table, but the advantage is that you trigger \npath is a simple insert instead of an update, which is effectively a \ndelete and an insert.\n\nThis is a case where a cron'd vacuum that runs once a minute is \nprobably a wise idea.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828",
"msg_date": "Wed, 19 Dec 2007 19:54:12 -0600",
"msg_from": "Decibel! <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Minimizing dead tuples caused by update triggers "
},
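A sketch of the insert-only staging variant suggested here, again with purely illustrative names; the trigger path becomes a cheap INSERT, and a periodic (e.g. cron'd) roll-up folds the staged rows into table_stats and removes them:

CREATE TABLE table_stats_staging (
    id         serial PRIMARY KEY,
    event_date timestamp
);

CREATE OR REPLACE FUNCTION events_stats_stage() RETURNS trigger AS $$
BEGIN
    INSERT INTO table_stats_staging (event_date) VALUES (NEW.created_at);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_stats_stage AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_stats_stage();

CREATE OR REPLACE FUNCTION rollup_table_stats() RETURNS void AS $$
DECLARE
    cutoff   integer;
    new_rows bigint;
    new_max  timestamp;
BEGIN
    SELECT max(id) INTO cutoff FROM table_stats_staging;
    IF cutoff IS NULL THEN
        RETURN;                 -- nothing staged since the last roll-up
    END IF;
    SELECT count(*), max(event_date) INTO new_rows, new_max
      FROM table_stats_staging WHERE id <= cutoff;
    UPDATE table_stats
       SET row_count = row_count + new_rows,
           max_date  = CASE WHEN max_date IS NULL OR new_max > max_date
                            THEN new_max ELSE max_date END
     WHERE table_name = 'events';
    DELETE FROM table_stats_staging WHERE id <= cutoff;
END;
$$ LANGUAGE plpgsql;

table_stats itself is now touched only once per roll-up; the dead rows accumulate in the small staging table instead, which is cheap to vacuum every minute or so as suggested.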
{
"msg_contents": "Le jeudi 20 décembre 2007, Decibel! a écrit :\n> A work-around others have used is to have the trigger just insert\n> into a 'staging' table and then periodically take the records from\n> that table and summarize them somewhere else.\n\nAnd you can even use the PgQ skytools implementation to easily have this kind \nof 'staging'-table with a producer and one or many subscribers. See those \nreferences if you're interrested:\nhttp://kaiv.wordpress.com/2007/10/19/skytools-database-scripting-framework-pgq/\n http://skytools.projects.postgresql.org/doc/pgq-sql.html\n http://skytools.projects.postgresql.org/doc/pgq-admin.html\n http://skytools.projects.postgresql.org/doc/pgq-nodupes.html\n\nHope this helps, regards,\n-- \ndim",
"msg_date": "Thu, 20 Dec 2007 12:14:34 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Minimizing dead tuples caused by update triggers"
}
] |
[
{
"msg_contents": "Hi,\nSorry but I couldn't find the answer to this...\n\nI would like to empty all stats (pg_stat_all_tables probably mostly)\nso I can get an idea of what's going on now. Is this possible? I\ndidn't want to just go deleting without knowing what it would do...\nThanks\nAnton\n\n-- \necho '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc\nThis will help you for 99.9% of your problems ...\n",
"msg_date": "Thu, 20 Dec 2007 11:05:20 +0100",
"msg_from": "\"Anton Melser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reinitialising stats once only without restarting"
},
{
"msg_contents": "Anton Melser escribi�:\n> Hi,\n> Sorry but I couldn't find the answer to this...\n> \n> I would like to empty all stats (pg_stat_all_tables probably mostly)\n> so I can get an idea of what's going on now. Is this possible? I\n> didn't want to just go deleting without knowing what it would do...\n\nSure, use pg_stat_reset();\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 20 Dec 2007 10:35:59 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reinitialising stats once only without restarting"
},
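For reference, a minimal session illustrating the reset might look like this; pg_stat_reset() zeroes the statistics collector's counters for the current database (it is superuser-only and does not touch any table data):

-- optionally snapshot the current counters first
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
  FROM pg_stat_all_tables
 ORDER BY seq_scan + idx_scan DESC
 LIMIT 10;

-- zero out the collected statistics for this database
SELECT pg_stat_reset();

-- ...let the workload run for a while, then look again
SELECT relname, seq_scan, idx_scan
  FROM pg_stat_user_tables
 ORDER BY seq_scan DESC;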
{
"msg_contents": "On 20/12/2007, Alvaro Herrera <[email protected]> wrote:\n> Anton Melser escribió:\n> > Hi,\n> > Sorry but I couldn't find the answer to this...\n> >\n> > I would like to empty all stats (pg_stat_all_tables probably mostly)\n> > so I can get an idea of what's going on now. Is this possible? I\n> > didn't want to just go deleting without knowing what it would do...\n>\n> Sure, use pg_stat_reset();\n\nPura vida, gracias.\nA\n",
"msg_date": "Thu, 20 Dec 2007 17:23:36 +0100",
"msg_from": "\"Anton Melser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reinitialising stats once only without restarting"
}
] |
[
{
"msg_contents": "So your suggestion is first to come up with a query that dynamically checks\npermissions and create a view for it. Secondly, change pgAdmin to reference\nthis view in place of pg_proc. Actually, it should be extended to all\nobjects in the database, not just pg_proc. If you don't have this\ndictionary role, you shouldn't be able to look at any of the objects in the\ndatabase unless you own the object or have been granted rights to the\nobject.\n\nI don't know the information_schema that well so I don't know if this is\nsomething that should sit on top of PostgreSQL with views and then make\nsubsequent changes to pgAdmin or if the database itself needs to change to\nhandle this.\n\n\nJon\n\n> -----Original Message-----\n> From: Alvaro Herrera [mailto:[email protected]]\n> Sent: Wednesday, December 19, 2007 9:56 AM\n> To: Roberts, Jon\n> Cc: 'Trevor Talbot'; Joshua D. Drake; Kris Jurka; Merlin Moncure; Jonah H.\n> Harris; Bill Moran; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> Roberts, Jon escribió:\n> \n> > The more I thought about a counter proposal to put views on pg_proc, I\n> > realized that isn't feasible either. It would break functionality of\n> > pgAdmin because users couldn't view their source code with the tool.\n> \n> What's wrong with patching pgAdmin?\n> \n> --\n> Alvaro Herrera\n> http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 20 Dec 2007 08:07:56 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 20, 2007 9:07 AM, Roberts, Jon <[email protected]> wrote:\n> So your suggestion is first to come up with a query that dynamically checks\n> permissions and create a view for it. Secondly, change pgAdmin to reference\n> this view in place of pg_proc. Actually, it should be extended to all\n\nThis solution will not work. It requires cooperation from pgAdmin\nwhich is not going to happen and does nothing about psql or direct\nqueries from within pgadmin. Considered from a security/obfuscation\nperspective, its completely ineffective. As I've said many times,\nthere are only two solutions to this problem:\n\n1. disable permissions to pg_proc and deal with the side effects\n(mainly, pgadmin being broken).\n\n2. wrap procedure languages in encrypted handler (pl/pgsql_s) so that\nthe procedure code is encrypted in pg_proc. this is an ideal\nsolution, but the most work.\n\nmerlin\n",
"msg_date": "Thu, 20 Dec 2007 09:30:07 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
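Option 1 above is plain SQL; a rough sketch, run as a superuser, with the caveat already mentioned that anything reading the catalog directly (psql's \df+, pg_dump as a non-superuser, pgAdmin's source pane) will start failing for ordinary users:

-- hide function bodies from non-superusers by revoking catalog access
REVOKE SELECT ON pg_catalog.pg_proc FROM PUBLIC;

-- superusers bypass ACL checks and still see everything; to undo:
GRANT SELECT ON pg_catalog.pg_proc TO PUBLIC;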
{
"msg_contents": "Roberts, Jon escribió:\n> So your suggestion is first to come up with a query that dynamically checks\n> permissions and create a view for it. Secondly, change pgAdmin to reference\n> this view in place of pg_proc. Actually, it should be extended to all\n> objects in the database, not just pg_proc. If you don't have this\n> dictionary role, you shouldn't be able to look at any of the objects in the\n> database unless you own the object or have been granted rights to the\n> object.\n\nRight.\n\nAnother thing that just occured to me was to rename pg_proc to something\nelse, and create the restricted view using the pg_proc name. This\nsounds dangerous in terms of internals, but actually the system catalogs\nare invoked by OID not name, so maybe it will still work.\n\nYou do need to make sure that superusers continue to see all functions\nthough ... (the view test should really be \"does the current user have\naccess to this function\".)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 20 Dec 2007 15:29:52 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Merlin Moncure [mailto:[email protected]]\n> Sent: Thursday, December 20, 2007 8:30 AM\n> To: Roberts, Jon\n> Cc: Alvaro Herrera; Trevor Talbot; Joshua D. Drake; Kris Jurka; Jonah H.\n> Harris; Bill Moran; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> On Dec 20, 2007 9:07 AM, Roberts, Jon <[email protected]> wrote:\n> > So your suggestion is first to come up with a query that dynamically\n> checks\n> > permissions and create a view for it. Secondly, change pgAdmin to\n> reference\n> > this view in place of pg_proc. Actually, it should be extended to all\n> \n> This solution will not work. It requires cooperation from pgAdmin\n> which is not going to happen and does nothing about psql or direct\n> queries from within pgadmin. Considered from a security/obfuscation\n> perspective, its completely ineffective. As I've said many times,\n> there are only two solutions to this problem:\n> \n> 1. disable permissions to pg_proc and deal with the side effects\n> (mainly, pgadmin being broken).\n> \n> 2. wrap procedure languages in encrypted handler (pl/pgsql_s) so that\n> the procedure code is encrypted in pg_proc. this is an ideal\n> solution, but the most work.\n> \n\nI think there is an option 3. Enhance the db to have this feature built in\nwhich is more inline with commercial databases. This feature would drive\nadoption of PostgreSQL. It isn't feasible in most companies to allow\neveryone with access to the database to view all code written by anyone and\neveryone. \n\nFor instance, you could have a Finance group writing functions to calculate\nyour financial earnings. These calculations could be changing frequently\nand should only be visible to a small group of people. If the calculations\nwere visible by anyone with database access, they could figure out earnings\nprior to the release and thus have inside information on the stock.\n\n\nJon\n\n",
"msg_date": "Thu, 20 Dec 2007 10:30:43 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "\nOn Dec 20, 2007, at 11:30 AM, Roberts, Jon wrote:\n\n>\n>\n>> -----Original Message-----\n>> From: Merlin Moncure [mailto:[email protected]]\n>> Sent: Thursday, December 20, 2007 8:30 AM\n>> To: Roberts, Jon\n>> Cc: Alvaro Herrera; Trevor Talbot; Joshua D. Drake; Kris Jurka; \n>> Jonah H.\n>> Harris; Bill Moran; [email protected]\n>> Subject: Re: [PERFORM] viewing source code\n>>\n>> On Dec 20, 2007 9:07 AM, Roberts, Jon <[email protected]> \n>> wrote:\n>>> So your suggestion is first to come up with a query that dynamically\n>> checks\n>>> permissions and create a view for it. Secondly, change pgAdmin to\n>> reference\n>>> this view in place of pg_proc. Actually, it should be extended \n>>> to all\n>>\n>> This solution will not work. It requires cooperation from pgAdmin\n>> which is not going to happen and does nothing about psql or direct\n>> queries from within pgadmin. Considered from a security/obfuscation\n>> perspective, its completely ineffective. As I've said many times,\n>> there are only two solutions to this problem:\n>>\n>> 1. disable permissions to pg_proc and deal with the side effects\n>> (mainly, pgadmin being broken).\n>>\n>> 2. wrap procedure languages in encrypted handler (pl/pgsql_s) so that\n>> the procedure code is encrypted in pg_proc. this is an ideal\n>> solution, but the most work.\n>>\n>\n> I think there is an option 3. Enhance the db to have this feature \n> built in\n> which is more inline with commercial databases. This feature would \n> drive\n> adoption of PostgreSQL. It isn't feasible in most companies to allow\n> everyone with access to the database to view all code written by \n> anyone and\n> everyone.\n>\n> For instance, you could have a Finance group writing functions to \n> calculate\n> your financial earnings. These calculations could be changing \n> frequently\n> and should only be visible to a small group of people. If the \n> calculations\n> were visible by anyone with database access, they could figure out \n> earnings\n> prior to the release and thus have inside information on the stock.\n\nDoes everyone in your organization have login access to your \ndatabase? That seems like the main issue. Perhaps you should stick an \napplication server in between. The application server could also \nupload functions from the \"Finance group\" and ensure that no one can \nsee stored procedures.\n\nCheers,\nM\n",
"msg_date": "Thu, 20 Dec 2007 12:39:55 -0500",
"msg_from": "\"A.M.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 20, 2007 11:30 AM, Roberts, Jon <[email protected]> wrote:\n> > -----Original Message-----\n> > From: Merlin Moncure [mailto:[email protected]]\n> > Sent: Thursday, December 20, 2007 8:30 AM\n> > To: Roberts, Jon\n> > Cc: Alvaro Herrera; Trevor Talbot; Joshua D. Drake; Kris Jurka; Jonah H.\n> > Harris; Bill Moran; [email protected]\n> > Subject: Re: [PERFORM] viewing source code\n> >\n>\n> > On Dec 20, 2007 9:07 AM, Roberts, Jon <[email protected]> wrote:\n> > > So your suggestion is first to come up with a query that dynamically\n> > checks\n> > > permissions and create a view for it. Secondly, change pgAdmin to\n> > reference\n> > > this view in place of pg_proc. Actually, it should be extended to all\n> >\n> > This solution will not work. It requires cooperation from pgAdmin\n> > which is not going to happen and does nothing about psql or direct\n> > queries from within pgadmin. Considered from a security/obfuscation\n> > perspective, its completely ineffective. As I've said many times,\n> > there are only two solutions to this problem:\n> >\n> > 1. disable permissions to pg_proc and deal with the side effects\n> > (mainly, pgadmin being broken).\n> >\n> > 2. wrap procedure languages in encrypted handler (pl/pgsql_s) so that\n> > the procedure code is encrypted in pg_proc. this is an ideal\n> > solution, but the most work.\n> >\n>\n> I think there is an option 3. Enhance the db to have this feature built in\n> which is more inline with commercial databases. This feature would drive\n> adoption of PostgreSQL. It isn't feasible in most companies to allow\n> everyone with access to the database to view all code written by anyone and\n> everyone.\n\noption 3 is really option 2. having this option is all the flexibility\nyou need. i understand in certain cases you want to prevent code from\nbeing available to see from certain users, but i don't buy the\nadoption argument...most people dont actually become aware of\nimplications of pg_proc until after development has started. simply\nhaving a choice, either directly community supported or maintained\noutside in pgfoundry should be enough. in the majority of cases, who\ncan see the code doesn't matter.\n\ni do however strongly disagree that hiding the code is bad in\nprinciple... i was in the past in this exact situation for business\nreasons out of my control (this is why I know the pgadmin route wont\nwork, i've chased down that angle already), so i'm highly sympathetic\nto people who need to do this. i opted for revoke from pg_proc route,\nwhich, while crude was highly effective.\n\nmerlin\n",
"msg_date": "Thu, 20 Dec 2007 12:43:16 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Dec 20, 2007 12:39 PM, A.M. <[email protected]> wrote:\n> On Dec 20, 2007, at 11:30 AM, Roberts, Jon wrote:\n> >> On Dec 20, 2007 9:07 AM, Roberts, Jon <[email protected]>\n> >> wrote:\n> >>> So your suggestion is first to come up with a query that dynamically\n> >> checks\n> >>> permissions and create a view for it. Secondly, change pgAdmin to\n> >> reference\n> >>> this view in place of pg_proc. Actually, it should be extended\n> >>> to all\n> >>\n> >> This solution will not work. It requires cooperation from pgAdmin\n> >> which is not going to happen and does nothing about psql or direct\n> >> queries from within pgadmin. Considered from a security/obfuscation\n> >> perspective, its completely ineffective. As I've said many times,\n> >> there are only two solutions to this problem:\n> >>\n> >> 1. disable permissions to pg_proc and deal with the side effects\n> >> (mainly, pgadmin being broken).\n> >>\n> >> 2. wrap procedure languages in encrypted handler (pl/pgsql_s) so that\n> >> the procedure code is encrypted in pg_proc. this is an ideal\n> >> solution, but the most work.\n> >>\n> >\n> > I think there is an option 3. Enhance the db to have this feature\n> > built in\n> > which is more inline with commercial databases. This feature would\n> > drive\n> > adoption of PostgreSQL. It isn't feasible in most companies to allow\n> > everyone with access to the database to view all code written by\n> > anyone and\n> > everyone.\n> >\n> > For instance, you could have a Finance group writing functions to\n> > calculate\n> > your financial earnings. These calculations could be changing\n> > frequently\n> > and should only be visible to a small group of people. If the\n> > calculations\n> > were visible by anyone with database access, they could figure out\n> > earnings\n> > prior to the release and thus have inside information on the stock.\n>\n> Does everyone in your organization have login access to your\n> database? That seems like the main issue. Perhaps you should stick an\n> application server in between. The application server could also\n> upload functions from the \"Finance group\" and ensure that no one can\n> see stored procedures.\n\nforcing all database access through an app server is a (too) high\nprice to pay in many scenarios. while it works great for some things\n(web apps), in many companies the db is the 'brain' of the company\nthat must serve all kinds of different purposes across many\ninterfaces.\n\nfor example, ups provides software that communicates with databases\nover odbc for purposes to apply tracking #s to parts. think about all\nthe report engines, etc etc that run over those type of interfaces.\n\nmerlin\n",
"msg_date": "Thu, 20 Dec 2007 13:03:54 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]]\n> Sent: Thursday, December 20, 2007 10:40 AM\n> To: Roberts, Jon\n> Cc: 'Trevor Talbot'; Kris Jurka; Merlin Moncure; Jonah H. Harris; Bill\n> Moran; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> Roberts, Jon wrote:\n> >\n> >\n> > This really is a needed feature to make PostgreSQL more attractive to\n> > businesses. A more robust security model that better follows commercial\n> > products is needed for adoption.\n> >\n> \n> I would argue that commercial products need to get a clue and stop\n> playing bondage with their users to help stop their imminent and frankly\n> obvious downfall from the Open Source competition.\n> \n> This \"feature\" as it is called can be developed externally and has zero\n> reason to exist within PostgreSQL. If the feature has the level of\n> demand that people think that it does, then the external project will be\n> very successful and that's cool.\n> \n\nI am obviously hitting on the nerve of the open source community because it\ncontradicts the notion that all source code should be open. However, data\nneeds to be protected. I don't want to share with the world my social\nsecurity number. I also don't want to share with the world my code I use to\nmanipulate data. My code is an extension of the data and is useless without\ndata. \n\nBusinesses use databases like crazy. Non-technical people write their own\ncode to analyze data. The stuff they write many times is as valuable as the\ndata itself and should be protected like the data. They don't need or want\nmany times to go through a middle tier to analyze data or through the hassle\nto obfuscate the code. \n\nI think it is foolish to not make PostgreSQL as feature rich when it comes\nto security as the competition because you are idealistic when it comes to\nthe concept of source code. PostgreSQL is better in many ways to MS SQL\nServer and equal to many features of Oracle but when it comes to security,\nit is closer to MS Access.\n\n\nJon\n",
"msg_date": "Thu, 20 Dec 2007 13:45:08 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Thu, 20 Dec 2007 13:45:08 -0600\r\n\"Roberts, Jon\" <[email protected]> wrote:\r\n\r\n> I think it is foolish to not make PostgreSQL as feature rich when it\r\n> comes to security as the competition because you are idealistic when\r\n> it comes to the concept of source code. PostgreSQL is better in many\r\n> ways to MS SQL Server and equal to many features of Oracle but when\r\n> it comes to security, it is closer to MS Access.\r\n\r\nIf this were true, we would be in a lot more trouble than what you are\r\npresenting here. Let's think about what PostgreSQL supports....\r\n\r\nGSSAPI\r\nKerberos\r\nSSL\r\nPAM\r\nRole based security\r\nSecurity definer functions\r\nData based views (ability to assign restrictions to particular\r\nroles via views)\r\nExternal security providers\r\n\r\n...\r\n\r\nSounds like you have some reading to do before you make broad\r\nassumptions about PostgreSQL security. Everything you want to do is\r\npossible with Postgresql today. You may have write an executor function\r\nto hide your code but you can do it. You may not be able to do it with\r\nplpgsql but you certainly could with any of the other procedural\r\nlanguages.\r\n\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHashRATb/zqfZUUQRAmlRAJoDWr44yld8Ow2qdcvoUdtMiOs5AgCfQ/e7\r\n4OGIPE6ZAHPQPCQ/Mc/dusk=\r\n=73a1\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Thu, 20 Dec 2007 11:53:51 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
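Two items from that list in concrete form, as a purely hypothetical sketch (the invoices/ledger tables and reporting_role are made up): a data-based view that restricts rows per role, and a security definer function that runs with its owner's rights so the caller needs EXECUTE on the function but no privileges on the underlying table:

-- data-based view: callers see only their own rows
CREATE VIEW my_invoices AS
    SELECT * FROM invoices WHERE owner = current_user;
GRANT SELECT ON my_invoices TO reporting_role;

-- security definer function: executes with the owner's privileges
CREATE OR REPLACE FUNCTION quarterly_earnings(q integer) RETURNS numeric AS $$
    SELECT sum(amount) FROM ledger WHERE quarter = $1;
$$ LANGUAGE sql SECURITY DEFINER;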
{
"msg_contents": "[email protected] (\"Roberts, Jon\") writes:\n> I think it is foolish to not make PostgreSQL as feature rich when it\n> comes to security as the competition because you are idealistic when\n> it comes to the concept of source code. PostgreSQL is better in\n> many ways to MS SQL Server and equal to many features of Oracle but\n> when it comes to security, it is closer to MS Access.\n\nI don't think that's quite fair.\n\nThere most certainly *is* a rich set of security features in\nPostgreSQL, with some not-unreasonable defaults, to the point that it\nhas been pointed at as being 'more secure out of the box' than pretty\nwell any DBMS.\n\nWhen people try to put security measures into the database that are\nintended to secure it from, yea, verily, even the DBAs, it often\nappears that once the feature list gets long enough, the critical\nfaculties of peoples' brains seem to shut off. They seem to imagine\nthat since there's a named set of features, that:\n a) They are actually usable, and\n b) They actually accomplish what they claim to be intended for.\n\nFrequently, neither condition is true.\n\nWe've run into cases where attempts to manage fairly complex sets of\nrole-based security pretty much falls apart (e.g. - \"they are not\nusable\") because for it to work, it's necessary that too many people\nunderstand and follow the security design.\n\nWhen *reality* is that the developers build things in an ad-hoc\nfashion without regard to security, then you've got a ball of mud,\nfrom a security standpoint, that no amount of pounding will force into\nthe rigidly-defined \"security hole.\"\n\nNote that ad-hoc reporting and analysis will always tend to fall into\nthis \"ball of mud\" category. They don't know what data they need\nuntil they start exploring the problem they're given, and that tends\nto fit Really Badly with any attempt to strictly define security\naccess.\n\nUsability (item a) is troublesome :-(.\n\nWhen you write about trying to hide source code and the likes, we\nstart thinking of item b), the matter of whether it actually\naccomplishes what is claimed.\n\n------------------------------\n[Vizzini has just cut the rope The Dread Pirate Roberts is climbing up]\nVizzini: HE DIDN'T FALL? INCONCEIVABLE.\nInigo Montoya: You keep using that word. I do not think it means what\n you think it means.\n------------------------------\n\nPeople seem to think that adding passwords, encrypting things, whether\nvia private or public key encryption, or other obfuscation \"provides\nsecurity.\"\n\nRephrasing Inigo Montoy, I am not so sure that \"provides security\"\nmeans what you think it means.\n\nI worked one place where I heard a tale of \"Payroll of Years Past.\"\nThey used to manage executive payroll (for a Fortune 500 company,\nhence with some multi-million dollar paycheques!) via temporarily\nadding the data into the \"peons' system.\"\n\nThey had this clever idea:\n\n- We want to keep the execs' numbers secret from the peons who run the\n system.\n\n- Ergo, we'll load the data in, temporarily, run the cheques, whilst\n having someone watch that the peons aren't reading anything they\n shouldn't.\n\n- Then we'll reverse that data out, and the peons won't know what\n they shouldn't know.\n\nUnfortunately, the joker that thought this up didn't realize that the\ntransactional system would record those sets of changes multiple\ntimes. So anyone looking over the audit logs would see the Secret\nValues listed, not once, but twice. 
And they couldn't purge those\naudit logs without bringing down the wrath of the auditors; to do so\nwould be to invalidate internal controls that they spent more money\nthan those executive salaries on. Duh.\n\nThey quickly shifted Executive Payroll to be managed, by hand, by\ncertain members of the executives' administrative staff.\n\nThat's much the same kind of problem that pops up here. You may\n*imagine* that you're hiding the stored procedures, but if they're\nsufficiently there that they can be run, they obviously aren't hidden\nas far as the DBMS is concerned, and there can't be *too* much of a\nveil between DBA and DBMS, otherwise you have to accept that the\nsystem is not intended to be manageable.\n\nWe've done some thinking about how to try to hide this information;\nunfortunately, a whole lot of the mechanisms people think of simply\ndon't work. Vendors may *claim* that their products are \"secure,\" but\nthat may be because they know their customers neither know nor truly\ncare what the word means; they merely feel reassured because it's\n\"inconceivable\" (in roughly the _Princess Bride_ sense!) to break the\nsecurity of the product.\n-- \nlet name=\"cbbrowne\" and tld=\"linuxfinances.info\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/spreadsheets.html\nRules of the Evil Overlord #109. \"I will see to it that plucky young\nlads/lasses in strange clothes and with the accent of an outlander\nshall REGULARLY climb some monument in the main square of my capital\nand denounce me, claim to know the secret of my power, rally the\nmasses to rebellion, etc. That way, the citizens will be jaded in case\nthe real thing ever comes along.\" <http://www.eviloverlord.com/>\n",
"msg_date": "Thu, 20 Dec 2007 16:03:08 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Thu, Dec 20, 2007 at 01:45:08PM -0600, Roberts, Jon wrote:\n> Businesses use databases like crazy. Non-technical people write their own\n> code to analyze data. The stuff they write many times is as valuable as the\n> data itself and should be protected like the data. They don't need or want\n> many times to go through a middle tier to analyze data or through the hassle\n> to obfuscate the code. \n\nI'm not opposed to this goal, I should note. I just think that any proposal\nthat is going to go anywhere may need to be better than the one you seem to\nhave made.\n\nI think column-level permissions is probably something that is needed.\n\na\n\n",
"msg_date": "Thu, 20 Dec 2007 16:04:13 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]]\n> Sent: Thursday, December 20, 2007 1:54 PM\n> To: Roberts, Jon\n> Cc: 'Trevor Talbot'; Kris Jurka; Merlin Moncure; Jonah H. Harris; Bill\n> Moran; [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> On Thu, 20 Dec 2007 13:45:08 -0600\n> \"Roberts, Jon\" <[email protected]> wrote:\n> \n> > I think it is foolish to not make PostgreSQL as feature rich when it\n> > comes to security as the competition because you are idealistic when\n> > it comes to the concept of source code. PostgreSQL is better in many\n> > ways to MS SQL Server and equal to many features of Oracle but when\n> > it comes to security, it is closer to MS Access.\n> \n> If this were true, we would be in a lot more trouble than what you are\n> presenting here. Let's think about what PostgreSQL supports....\n> \n> GSSAPI\n> Kerberos\n> SSL\n> PAM\n> Role based security\n> Security definer functions\n> Data based views (ability to assign restrictions to particular\n> roles via views)\n> External security providers\n> \n> ...\n> \n> Sounds like you have some reading to do before you make broad\n> assumptions about PostgreSQL security. Everything you want to do is\n> possible with Postgresql today. You may have write an executor function\n> to hide your code but you can do it. You may not be able to do it with\n> plpgsql but you certainly could with any of the other procedural\n> languages.\n> \n> \n\nI'm tired of arguing. You win. I still say this I a needed feature if you\nwant adoption for enterprise level databases in larger companies. The\nsecurity out of the box is not enough and it is too much to ask everyone\nimplementing PostgreSQL to do it themselves. It will remain a small niche\ndatabase for small groups of people that have access to everything if they\ncan connect to the database at all. \n\n\nJon\n",
"msg_date": "Thu, 20 Dec 2007 14:02:57 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Thu, 20 Dec 2007 14:02:57 -0600\r\n\"Roberts, Jon\" <[email protected]> wrote:\r\n\r\n\r\n> I'm tired of arguing. You win. I still say this I a needed feature\r\n> if you want adoption for enterprise level databases in larger\r\n> companies. The security out of the box is not enough and it is too\r\n> much to ask everyone implementing PostgreSQL to do it themselves. It\r\n> will remain a small niche database for small groups of people that\r\n> have access to everything if they can connect to the database at\r\n> all. \r\n\r\nJon,\r\n\r\nWelcome to Open Source. We argue, we disagree, we try to prove one way\r\nor another that on or the other is right. That's life.\r\n\r\nI do not concur with your assessment in the least, especially the\r\namount of enterprise deployments that actually exist but you are\r\nwelcome to your opinion and you certainly don't have to accept mine.\r\n\r\nHave a great Christmas!\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHas3eATb/zqfZUUQRAjmUAKCn1djme0RcGjOgqidUPTCgqSatSgCgnJdV\r\nKpvo0TaYKTE6AQElq3eEKxM=\r\n=aPCx\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Thu, 20 Dec 2007 12:17:34 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Thu, 20 Dec 2007, Roberts, Jon wrote:\n\n> I still say this I a needed feature if you want adoption for enterprise \n> level databases in larger companies.\n\nIt is to some people, and Joshua's opinion is, like everybody else's, just \none person's view on what's important.\n\n> The security out of the box is not enough and it is too much to ask \n> everyone implementing PostgreSQL to do it themselves.\n\nThis is a fair statement coming from the perspective of someone who \nexpects source code protection. What's not a fair statement is to compare \nthe security to Access just because you don't don't understand all the \noptions or think they're too complicated. An inflammatory comment like \nthat is just going to make the very developers who could be helping you \nhere mad.\n\nThe larger distinction that you might not be aware of here is that \nPostgreSQL tries to keep things that can be implemented separately out of \nthe database engine itself. As far as the core database group is \nconcerned, if there is a good interface available to provide these \nfeatures, it would be better to have an external project worry about \nthings like how to make that interface more palatable to people. Look at \npgadmin--that's the reason it's a separate project.\n\nThe right question to ask here may not be \"why isn't PostgreSQL adding \nthese features?\", but instead \"is there a project that makes this \nlow-level capability that already exists easier to use?\". Unfortunately \nfor you, making that distinction right now means you're stuck with a \nlittle bit of study to see whether any of the existing mechanisms might \nmeet the need you've already got, which is why people have been suggesting \nthings you might look into.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 20 Dec 2007 17:51:33 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Thu, Dec 20, 2007 at 02:02:57PM -0600, Roberts, Jon wrote:\n>I'm tired of arguing. You win. I still say this I a needed feature if you\n>want adoption for enterprise level databases in larger companies. The\n>security out of the box is not enough \n\nWhat a classic \"I want this, and if it isn't implemented postgres sucks\" \nargument. \n\nMike Stone\n",
"msg_date": "Fri, 21 Dec 2007 17:19:43 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Andrew Sullivan [mailto:[email protected]]\n> Sent: Thursday, December 20, 2007 3:04 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] viewing source code\n> \n> On Thu, Dec 20, 2007 at 01:45:08PM -0600, Roberts, Jon wrote:\n> > Businesses use databases like crazy. Non-technical people write their\n> own\n> > code to analyze data. The stuff they write many times is as valuable as\n> the\n> > data itself and should be protected like the data. They don't need or\n> want\n> > many times to go through a middle tier to analyze data or through the\n> hassle\n> > to obfuscate the code.\n> \n> I'm not opposed to this goal, I should note. I just think that any\n> proposal\n> that is going to go anywhere may need to be better than the one you seem\n> to\n> have made.\n> \n> I think column-level permissions is probably something that is needed.\n> \n> a\n\n\nActually, PostgreSQL already has column level security for pg_stat_activity.\n\n\n\nselect * from pg_stat_activity\n\nThe current_query column shows \"<insufficient privilege>\" for all rows\nexcept for rows related to my account. \n\nIt seems that we would want to do exact same thing for pg_proc.\n\n\nJon\n",
"msg_date": "Thu, 20 Dec 2007 15:24:34 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: viewing source code"
},
{
"msg_contents": "On Thu, Dec 20, 2007 at 03:24:34PM -0600, Roberts, Jon wrote:\n> \n> Actually, PostgreSQL already has column level security for pg_stat_activity.\n\nNot exactly. pg_stat_activity is a view. \n\nBut I think someone suggested upthread experimenting with making pg_proc\ninto a view, and making the real table pg_proc_real or something. This\nmight work.\n\nA\n\n",
"msg_date": "Thu, 20 Dec 2007 16:29:04 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: viewing source code"
}
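A rough sketch of what such a restricted view could look like, borrowing the '<insufficient privilege>' convention from pg_stat_activity; this is entirely hypothetical, and the hard part (as discussed throughout the thread) is getting psql, pg_dump and pgAdmin to read it instead of pg_proc:

CREATE VIEW filtered_proc AS
SELECT p.oid, p.proname, p.pronamespace, p.proowner, p.prolang,
       CASE WHEN pg_has_role(p.proowner, 'USAGE')
            THEN p.prosrc
            ELSE '<insufficient privilege>'
       END AS prosrc
  FROM pg_catalog.pg_proc p;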
] |
[
{
"msg_contents": "The server is running 8.2.5 FreeBSD 6.1 with 3 GB of RAM.\nI have a table with over 100M rows. I have a unique index (primary key) on\ncolumn name called aid.\nThe select count(aid) .. does a Bitmap heap scan when the right side\ncondition is above 100,000,000 (if i take one zero off it does a pure index\nscan).\nMy question : why is the optimizer choosing an Bitmap Heap Scan when count\ncan be done with index.\n\nWhen i set the bitmap scan to off, it selects an seq scan, after which i\nturn the seq scan off , then it does index scan only but it takes 8 minutes\nlonger than doing a index scan+bitmap scan.\n\nAny insight is appreciated.\nthank you !\ng\n\n\n\n\n\nexplain select count(aid) from topcat.aid where aid >= 10000000 and aid <=\n100000000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Aggregate (cost=2786143.42..2786143.43 rows=1 width=4)\n -> Bitmap Heap Scan on aid (cost=142949.13..2742507.95 rows=17454188\nwidth=4)\n Recheck Cond: ((aid >= 10000000) AND (aid <= 100000000))\n -> Bitmap Index Scan on idx_aid_aid\n(cost=0.00..142949.13rows=17454188 width=0)\n Index Cond: ((aid >= 10000000) AND (aid <= 100000000))\n(5 rows)\n\n explain select count(aid) from topcat.aid where aid >= 100000 and aid <=\n100000000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Aggregate (cost=2786143.42..2786143.43 rows=1 width=4)\n -> Bitmap Heap Scan on aid (cost=142949.13..2742507.95 rows=17454188\nwidth=4)\n Recheck Cond: ((aid >= 100000) AND (aid <= 100000000))\n -> Bitmap Index Scan on idx_aid_aid\n(cost=0.00..142949.13rows=17454188 width=0)\n Index Cond: ((aid >= 100000) AND (aid <= 100000000))\n(5 rows)\n\n explain select count(aid) from topcat.aid where aid >= 1000000 and aid <=\n100000000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Aggregate (cost=2786143.42..2786143.43 rows=1 width=4)\n -> Bitmap Heap Scan on aid (cost=142949.13..2742507.95 rows=17454188\nwidth=4)\n Recheck Cond: ((aid >= 1000000) AND (aid <= 100000000))\n -> Bitmap Index Scan on idx_aid_aid\n(cost=0.00..142949.13rows=17454188 width=0)\n Index Cond: ((aid >= 1000000) AND (aid <= 100000000))\n(5 rows)\n\nexplain select count(aid) from topcat.aid where aid >= 1000000 and aid <=\n10000000;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Aggregate (cost=5.58..5.59 rows=1 width=4)\n -> Index Scan using idx_aid_aid on aid (cost=0.00..5.58 rows=1 width=4)\n Index Cond: ((aid >= 1000000) AND (aid <= 10000000))\n\nThe server is running 8.2.5 FreeBSD 6.1 with 3 GB of RAM.I have a table with over 100M rows. I have a unique index (primary key) on column name called aid.The select count(aid) .. 
does a Bitmap heap scan when the right side condition is above 100,000,000 (if i take one zero off it does a pure index scan).\nMy question : why is the optimizer choosing an Bitmap Heap Scan when count can be done with index.When i set the bitmap scan to off, it selects an seq scan, after which i turn the seq scan off , then it does index scan only but it takes 8 minutes longer than doing a index scan+bitmap scan.\nAny insight is appreciated.thank you !gexplain select count(aid) from topcat.aid where aid >= 10000000 and aid <= 100000000; QUERY PLAN\n-------------------------------------------------------------------------------------------- Aggregate (cost=2786143.42..2786143.43 rows=1 width=4) -> Bitmap Heap Scan on aid (cost=142949.13..2742507.95\n rows=17454188 width=4) Recheck Cond: ((aid >= 10000000) AND (aid <= 100000000)) -> Bitmap Index Scan on idx_aid_aid (cost=0.00..142949.13 rows=17454188 width=0) Index Cond: ((aid >= 10000000) AND (aid <= 100000000))\n(5 rows) explain select count(aid) from topcat.aid where aid >= 100000 and aid <= 100000000; QUERY PLAN--------------------------------------------------------------------------------------------\n Aggregate (cost=2786143.42..2786143.43 rows=1 width=4) -> Bitmap Heap Scan on aid (cost=142949.13..2742507.95 rows=17454188 width=4) Recheck Cond: ((aid >= 100000) AND (aid <= 100000000))\n -> Bitmap Index Scan on idx_aid_aid (cost=0.00..142949.13 rows=17454188 width=0) Index Cond: ((aid >= 100000) AND (aid <= 100000000))(5 rows) explain select count(aid) from \ntopcat.aid where aid >= 1000000 and aid <= 100000000; QUERY PLAN-------------------------------------------------------------------------------------------- Aggregate (cost=\n2786143.42..2786143.43 rows=1 width=4) -> Bitmap Heap Scan on aid (cost=142949.13..2742507.95 rows=17454188 width=4) Recheck Cond: ((aid >= 1000000) AND (aid <= 100000000)) -> Bitmap Index Scan on idx_aid_aid (cost=\n0.00..142949.13 rows=17454188 width=0) Index Cond: ((aid >= 1000000) AND (aid <= 100000000))(5 rows)explain select count(aid) from topcat.aid where aid >= 1000000 and aid <= 10000000;\n QUERY PLAN----------------------------------------------------------------------------- Aggregate (cost=5.58..5.59 rows=1 width=4) -> Index Scan using idx_aid_aid on aid (cost=\n0.00..5.58 rows=1 width=4) Index Cond: ((aid >= 1000000) AND (aid <= 10000000))",
"msg_date": "Thu, 20 Dec 2007 17:06:55 -0500",
"msg_from": "\"S Golly\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance index scan vs bitmap-seq scan."
},
{
"msg_contents": "On Dec 20, 2007 4:06 PM, S Golly <[email protected]> wrote:\n> The server is running 8.2.5 FreeBSD 6.1 with 3 GB of RAM.\n> I have a table with over 100M rows. I have a unique index (primary key) on\n> column name called aid.\n> The select count(aid) .. does a Bitmap heap scan when the right side\n> condition is above 100,000,000 (if i take one zero off it does a pure index\n> scan).\n> My question : why is the optimizer choosing an Bitmap Heap Scan when count\n> can be done with index.\n\nBecause count can't be done with the index alone.\n\nIn pgsql the visibility info is in the table itself, so even if it can\nhit the index, it has to go back and hit the table to be sure if the\ntuple is visible.\n",
"msg_date": "Thu, 20 Dec 2007 16:15:36 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance index scan vs bitmap-seq scan."
}
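As an aside, when an approximate figure is good enough the planner's own estimate avoids touching either the index or the heap; it is only as fresh as the last VACUUM/ANALYZE of the table:

SELECT reltuples::bigint AS approx_rows
  FROM pg_class
 WHERE oid = 'topcat.aid'::regclass;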
] |
[
{
"msg_contents": "On Dec 20, 2007 6:01 PM, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > I don't really agree that wrapping pl/pgsql with encryptor/decryptor\n> > is a bad idea.\n>\n> So if you want something other than endless arguments to happen,\n> come up with a nice key-management design for encrypted function\n> bodies.\n\nMaybe a key management solution isn't required. If, instead of\nstrictly wrapping a language with an encryption layer, we provide\nhooks (actors) that have the ability to operate on the function body\nwhen it arrives and leaves pg_proc, we may sidestep the key problem\n(leaving it to the user) and open up the doors to new functionality at\nthe same time.\n\nThe actor is basically a callback taking the function source code (as\ntext) and returning text for storage in pg_proc. Perhaps some other\nhouse keeping variables such as function name, etc. are passed to the\nactor as parameters as well. The actor operates on the function body\ngoing into pg_proc (input actors) and going out (output actors). In\neither case, the function 'body' is modified if necessary, and may\nraise an error.\n\nThe validator can be considered an actor that doesn't modify the body.\n Ideally, the actors can be written in any pl language. Naturally,\ndealing with actors is for the superuser. So, I'm suggesting to\nextend the validator concept, opening it up to the user, giving it\nmore power, and the ability to operate in both directions. The actor\nwill feel a lot like a trigger function.\n\nNow, everything is left to the user...by adding an 'encryption' actor\nto the language (trivial with pg_crypto), the user can broadly encrypt\nin a manner of their choosing. A clever user might write an actor to\nencrypt a subset of functions in a language, or register the same\nlanguage twice with different actors. Since the actor can call out to\nother functions, we don't limit to a particular key management\nstrategy.\n\nAnother nice thing is we may solve a problem that's been bothering me\nfor years, namely that 'CREATE FUNCTION' takes a string literal and\nnot a string returning expression. This is pretty limiting...there\nare a broad range of reasons why I might want to modify the code\nbefore it hits pg_proc. For example, with an actor I can now feed the\ndata into the C preprocessor without giving up the ability of pasting\nthe function body directly into psql.\n\nThis isn't a fully developed idea, and I'm glossing over several areas\n(for example, syntax to modify actors), and I'm not sure if it's a\ngood idea in principle...I might be missing an obvious reason why this\nwon't work. OTOH, it seems like a really neat way to introduce\nencryption.\n\ncomments? is it worth going down this road?\n\nmerlin\n",
"msg_date": "Fri, 21 Dec 2007 00:09:28 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "function body actors (was: viewing source code)"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> On Dec 20, 2007 6:01 PM, Tom Lane <[email protected]> wrote:\n>> So if you want something other than endless arguments to happen,\n>> come up with a nice key-management design for encrypted function\n>> bodies.\n\n> Maybe a key management solution isn't required. If, instead of\n> strictly wrapping a language with an encryption layer, we provide\n> hooks (actors) that have the ability to operate on the function body\n> when it arrives and leaves pg_proc, we may sidestep the key problem\n> (leaving it to the user) and open up the doors to new functionality at\n> the same time.\n\nI think you're focusing on mechanism and ignoring the question of\nwhether there is a useful policy for it to implement. Andrew Sullivan\nargued upthread that we cannot get anywhere with both keys and encrypted\nfunction bodies stored in the same database (I hope that's an adequate\nsummary of his point). I'm not convinced that he's right, but that has\nto be the first issue we think about. The whole thing is a dead end if\nthere's no way to do meaningful encryption --- punting an insoluble\nproblem to the user doesn't make it better.\n\n(This is not to say that you don't have a cute idea there, only that\nit's not a license to take our eyes off the ball.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2007 00:40:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: viewing source code) "
},
{
"msg_contents": "On Dec 21, 2007 12:40 AM, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > On Dec 20, 2007 6:01 PM, Tom Lane <[email protected]> wrote:\n> >> So if you want something other than endless arguments to happen,\n> >> come up with a nice key-management design for encrypted function\n> >> bodies.\n>\n> > Maybe a key management solution isn't required. If, instead of\n> > strictly wrapping a language with an encryption layer, we provide\n> > hooks (actors) that have the ability to operate on the function body\n> > when it arrives and leaves pg_proc, we may sidestep the key problem\n> > (leaving it to the user) and open up the doors to new functionality at\n> > the same time.\n>\n> I think you're focusing on mechanism and ignoring the question of\n> whether there is a useful policy for it to implement. Andrew Sullivan\n> argued upthread that we cannot get anywhere with both keys and encrypted\n> function bodies stored in the same database (I hope that's an adequate\n> summary of his point). I'm not convinced that he's right, but that has\n> to be the first issue we think about. The whole thing is a dead end if\n> there's no way to do meaningful encryption --- punting an insoluble\n> problem to the user doesn't make it better.\n\nWell, there is no 'one size fits all' policy. I'm still holding out\nthat we don't need any specific designs for this...simply offering the\nexample in the docs might get people started (just thinking out loud\nhere):\n\ncreate function encrypt_proc(proname text, prosrc_in text, prosrc_out\nout text) returns text as\n$$\n declare\n key bytea;\n begin\n -- could be a literal variable, field from a private table, temp\ntable, or 3rd party\n -- literal is dangerous, since its visible until 'create or\nreplaced' but thats maybe ok, depending\n key := get_key();\n select magic_string || encode(encrypt(prosrc_in, key, 'bf'),\n'hex'); -- magic string prevents attempting to unencrypt non-encrypted\nfunctions.\n end;\n$$ language plpgsql;\n\n-- ordering of actors is significant...need to think about that\nalter language plpgsql add actor 'encrypt_proc' on input;\nalter language plpgsql add actor 'decrypt_proc' on output;\n\nIf that's not enough, then you have build something more structured,\nthinking about who provides the key and how the database asks for it.\nThe user would have to seed the session somehow (maybe, stored in a\ntemp table?) with a secret value which would be translated into the\nkey directly on the database or by a 3rd party over a secure channel.\nThe structured approach doesn't appeal to me much though...\n\nThe temp table idea might not be so hot, since it's trivial for the\ndatabase admin to see data from other user's temp tables, and maybe we\ndon't want that in some cases. need to think about this some more...\n\nmerlin\n",
"msg_date": "Fri, 21 Dec 2007 02:06:12 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function body actors (was: viewing source code)"
},
{
"msg_contents": "I have similar patch and it works. There is two isues:\n\n* we missing column in pg_proc about state (not all procedures are\nobfuscated), I solved it for plpgsl with using probin.\n* decrypt is expensive on language handler level. Every session have\nto do it again and again, better decrypt in system cache or somewhere\nthere.\n\nRegards\nPavel Stehule\n",
"msg_date": "Fri, 21 Dec 2007 09:18:18 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: viewing source code)"
},
{
"msg_contents": "On Dec 21, 2007 3:18 AM, Pavel Stehule <[email protected]> wrote:\n> I have similar patch and it works. There is two isues:\n>\n> * we missing column in pg_proc about state (not all procedures are\n> obfuscated), I solved it for plpgsl with using probin.\n\nI was hoping to avoid making any catalog or other changes to support\nencryption specifically. Maybe your patch stands on its own\nmerits...I missed the original discussion. Do you think the code you\nwrote can be adapted to do other things besides encryption?\n\n> * decrypt is expensive on language handler level. Every session have\n> to do it again and again, better decrypt in system cache or somewhere\n> there.\n\nDoesn't bother me in the least...and caching unencrypted data is\nscary. Also, aes256 is pretty fast for what it gives you and function\nbodies are normally short. The real issue as I see it is where to\nkeep the key. How did you handle that?\n\nmerlin\n",
"msg_date": "Fri, 21 Dec 2007 09:13:32 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function body actors (was: viewing source code)"
},
{
"msg_contents": "On 21/12/2007, Merlin Moncure <[email protected]> wrote:\n> On Dec 21, 2007 3:18 AM, Pavel Stehule <[email protected]> wrote:\n> > I have similar patch and it works. There is two isues:\n> >\n> > * we missing column in pg_proc about state (not all procedures are\n> > obfuscated), I solved it for plpgsl with using probin.\n>\n> I was hoping to avoid making any catalog or other changes to support\n> encryption specifically. Maybe your patch stands on its own\n> merits...I missed the original discussion. Do you think the code you\n> wrote can be adapted to do other things besides encryption?\n>\n\nI don't know. It was fast hack that just works. It hat to do\nobfuscation, and it do it well.\n\n> > * decrypt is expensive on language handler level. Every session have\n> > to do it again and again, better decrypt in system cache or somewhere\n> > there.\n>\n> Doesn't bother me in the least...and caching unencrypted data is\n> scary. Also, aes256 is pretty fast for what it gives you and function\n> bodies are normally short. The real issue as I see it is where to\n> keep the key. How did you handle that?\n>\n> merlin\n>\n\nSimply. I use for password some random plpgsql message text and\ncompile it. I though about GUC, and about storing password in\npostgresql.conf. It's equal to protection level. We cannot protect\ncode on 100%. If you have admin or superuser account and if you know\nsome internal, you can simply get code.\n\nhttp://blog.pgsql.cz/index.php?/archives/10-Obfuscator-PLpgSQL-procedur.html#extended\n\nsorry for czech desc\n\nPavel\n",
"msg_date": "Fri, 21 Dec 2007 15:39:53 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: viewing source code)"
},
{
"msg_contents": "\"Pavel Stehule\" <[email protected]> writes:\n> On 21/12/2007, Merlin Moncure <[email protected]> wrote:\n>> ... The real issue as I see it is where to\n>> keep the key. How did you handle that?\n\n> Simply. I use for password some random plpgsql message text and\n> compile it. I though about GUC, and about storing password in\n> postgresql.conf. It's equal to protection level. We cannot protect\n> code on 100%. If you have admin or superuser account and if you know\n> some internal, you can simply get code.\n\nYeah. There is no defense against someone who is prepared to go in\nthere with a debugger and pull the post-decryption code out of memory.\nSo what we need to think about is what sorts of threats we *can* or\nshould defend against. A couple of goals that seem like they might\nbe reasonable are:\n\n* Even a superuser can't get the code at the SQL level, ie, it's\nsecure if you rule out debugger-level attacks. (For example, this\nmight prevent someone who had remotely breached the superuser account\nfrom getting the code.)\n\n* Code not available if you just look at what's on-disk, ie, you can't\nget it by stealing a backup tape.\n\nAny other threats we could consider defending against?\n\nBTW, this thread definitely doesn't belong on -performance anymore.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2007 11:18:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code) "
},
{
"msg_contents": "On Fri, Dec 21, 2007 at 12:09:28AM -0500, Merlin Moncure wrote:\n> Maybe a key management solution isn't required. If, instead of\n> strictly wrapping a language with an encryption layer, we provide\n> hooks (actors) that have the ability to operate on the function body\n> when it arrives and leaves pg_proc, we may sidestep the key problem\n> (leaving it to the user) and open up the doors to new functionality at\n> the same time.\n\nI like this idea much better, because the same basic mechanism can be used\nfor more than one thing, and it doesn't build in a system that is\nfundamentally weak. Of course, you _can_ build a weak system this way, but\nthere's an important difference between building a fundamentally weak system\nand making weak systems possible.\n\nA\n\n",
"msg_date": "Fri, 21 Dec 2007 11:24:41 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] function body actors (was: viewing source code)"
},
{
"msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Fri, Dec 21, 2007 at 12:09:28AM -0500, Merlin Moncure wrote:\n>> Maybe a key management solution isn't required.\n\n> I like this idea much better, because the same basic mechanism can be used\n> for more than one thing, and it doesn't build in a system that is\n> fundamentally weak. Of course, you _can_ build a weak system this way, but\n> there's an important difference between building a fundamentally weak system\n> and making weak systems possible.\n\nI find myself unconvinced by this argument. The main problem is: how\ndo we know that it's possible to build a strong system atop this\nmechanism? Just leaving it to non-security-savvy users seems to me\nto be a great way to guarantee a lot of weak systems in the field.\nISTM our minimum responsibility would be to design and document how\nto build a strong protection system using the feature ... and at that\npoint why not build it in?\n\nI've certainly got no objection to making a mechanism that can be used\nfor more than one purpose; but not offering a complete security solution\nis abdicating our responsibility.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2007 11:47:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code) "
},
{
"msg_contents": "On Fri, Dec 21, 2007 at 12:40:05AM -0500, Tom Lane wrote:\n\n> whether there is a useful policy for it to implement. Andrew Sullivan\n> argued upthread that we cannot get anywhere with both keys and encrypted\n> function bodies stored in the same database (I hope that's an adequate\n> summary of his point). \n\nIt is. I'm not a security expert, but I've been spending some time\nlistening to some of them lately. The fundamental problem with a system\nthat stores the keys online in the same repository is not just its potential\nfor compromise, but its brittle failure mode: once the key is recovered,\nyou're hosed. And there's no outside check of key validity, which means\nattackers have a nicely-contained target to hit.\n\n> I'm not convinced that he's right, but that has to be the first issue we\n> think about. The whole thing is a dead end if there's no way to do\n> meaningful encryption --- punting an insoluble problem to the user doesn't\n> make it better.\n\nWell, one thing you could do with the proposal is build a PKCS#11 actor,\nthat could talk to an HSM. Not everyone needs HSMs, of course, but they do\nmake online key storage much less risky (because correctly designed ones\nmake key recovery practically impossible). So the mechanism can be made\neffectively secure even for very strong cryptographic uses.\n\nWeaker cases might use a two-level key approach, with a \"data-signing key\"\nonline all the time to do the basic encryption and validation, but a\nkey-signing key that is always offline or otherwise unavailable from within\nthe system. The key signing key only authenticates (and doesn't encrypt)\nthe data signing key. You could use a different actor for this, to provide\nan interface to one-way functions or something. This gives you a way to\nrevoke a data-signing key. You couldn't protect already compromised data\nthis way, but at least you could prevent new disclosures. \n\nYes, I'm being hand-wavy now, but I can at least see how these different\napproaches are possible under the suggestion, so it seems like a possibly\nfruitful avenue to explore. The more I think about it, actually, the more I\nlike it.\n\nA\n",
"msg_date": "Fri, 21 Dec 2007 11:48:26 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: viewing source code)"
},
{
"msg_contents": "On Dec 21, 2007 11:48 AM, Andrew Sullivan <[email protected]> wrote:\n> On Fri, Dec 21, 2007 at 12:40:05AM -0500, Tom Lane wrote:\n>\n> > whether there is a useful policy for it to implement. Andrew Sullivan\n> > argued upthread that we cannot get anywhere with both keys and encrypted\n> > function bodies stored in the same database (I hope that's an adequate\n> > summary of his point).\n>\n> It is. I'm not a security expert, but I've been spending some time\n> listening to some of them lately. The fundamental problem with a system\n> that stores the keys online in the same repository is not just its potential\n> for compromise, but its brittle failure mode: once the key is recovered,\n> you're hosed. And there's no outside check of key validity, which means\n> attackers have a nicely-contained target to hit.\n>\n> > I'm not convinced that he's right, but that has to be the first issue we\n> > think about. The whole thing is a dead end if there's no way to do\n> > meaningful encryption --- punting an insoluble problem to the user doesn't\n> > make it better.\n>\n> Well, one thing you could do with the proposal is build a PKCS#11 actor,\n> that could talk to an HSM. Not everyone needs HSMs, of course, but they do\n> make online key storage much less risky (because correctly designed ones\n> make key recovery practically impossible). So the mechanism can be made\n> effectively secure even for very strong cryptographic uses.\n\nISTM the main issue is how exactly the authenticated user interacts\nwith the actor to give it the information it needs to get the real\nkey. This is significant because we don't want to be boxed into an\nactor implementation that doesn't allow that interaction. If simply\ncalling out via a function is enough (which, to be perfectly honest, I\ndon't know), then we can implement the actor system and let actor\nimplementations spring to life in contrib, pgfoundry, etc. as the\ncommunity presents them.\n\nmerlin\n",
"msg_date": "Fri, 21 Dec 2007 12:48:51 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code)"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> ISTM the main issue is how exactly the authenticated user interacts\n> with the actor to give it the information it needs to get the real\n> key. This is significant because we don't want to be boxed into an\n> actor implementation that doesn't allow that interaction.\n\nWe don't? What purpose would such a setup serve? I would think\nthat for the applications we have in mind, the *last* thing you\nwant is for the end user to hold the key. The whole point of this\nis to keep him from seeing the function source code, remember?\n\nAndrew's suggestion of an outside-the-database key server is\napropos, but I think it would end up being a situation where\nthe key server is under the control of whoever wrote the function\nand wants to guard it against the end user. The key server would\nwant some kind of authentication token but I think that could\nperfectly well be an ID for the database server, not the individual\nend user. There's no need for anything as awkward as an interactive\nsign-on, AFAICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2007 13:57:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code) "
},
{
"msg_contents": "On Fri, Dec 21, 2007 at 01:57:44PM -0500, Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > ISTM the main issue is how exactly the authenticated user interacts\n> > with the actor to give it the information it needs to get the real\n> > key. This is significant because we don't want to be boxed into an\n> > actor implementation that doesn't allow that interaction.\n> \n> We don't? What purpose would such a setup serve? I would think\n> that for the applications we have in mind, the *last* thing you\n> want is for the end user to hold the key. The whole point of this\n> is to keep him from seeing the function source code, remember?\n\nHmm; this may be exactly part of the problem, though. It seems there are\ntwo possible cases in play:\n\n1.\tProtect the content in the database (in this case, function bodies)\nfrom _all_ users on a given server. This is a case where you want to\nprotect (say) your function body from your users, because you have a\nclosed-source application. \n\n2.\tProtect the content of a field from _some_ users on a given system,\nbased on the permissions they hold. This is roughly analagous to others not\nbeing able to look in the table I created, because I haven't GRANTed them\npermission.\n\n(2) is really a case for column-level access controls, I guess. But if\nwe're trying to solve this problem too, then user passwords or something\nmake sense.\n\nA\n\n",
"msg_date": "Fri, 21 Dec 2007 15:56:19 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code)"
},
{
"msg_contents": "Andrew Sullivan <[email protected]> writes:\n> Hmm; this may be exactly part of the problem, though. It seems there are\n> two possible cases in play:\n\n> 1.\tProtect the content in the database (in this case, function bodies)\n> from _all_ users on a given server. This is a case where you want to\n> protect (say) your function body from your users, because you have a\n> closed-source application. \n\n> 2.\tProtect the content of a field from _some_ users on a given system,\n> based on the permissions they hold. This is roughly analagous to others not\n> being able to look in the table I created, because I haven't GRANTed them\n> permission.\n\nI would argue that (2) is reasonably well served today by setting up\nseparate databases for separate users. The people who are complaining\nseem to want to send out a set of functions into a hostile environment,\nwhich is surely case (1).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Dec 2007 16:19:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code) "
},
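As an aside, the closest thing to case (2) that exists today is simply locking down the catalog, at the cost of breaking any client (psql's \df, pgAdmin, and so on) that expects to read it. A sketch, not a recommendation; trusted_role is hypothetical:

-- run as superuser; hides prosrc (and the rest of pg_proc) from ordinary roles
revoke select on pg_catalog.pg_proc from public;
-- re-grant to the roles that legitimately need to read function definitions
grant select on pg_catalog.pg_proc to trusted_role;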
{
"msg_contents": "On Fri, Dec 21, 2007 at 04:19:51PM -0500, Tom Lane wrote:\n> > 2.\tProtect the content of a field from _some_ users on a given system,\n> \n> I would argue that (2) is reasonably well served today by setting up\n> separate databases for separate users. \n\nI thought actually this was one of the use-cases we were hearing. Different\npeople using the same database (because the same data), with rules about the\ndifferent staff being able to see this or that function body. I can easily\nimagine such a case, for instance, in a large organization with different\ndepartments and different responsibilities. It seems a shame that the only\nanswer we have there is, \"Give them different databases.\" \n\nI actually think organizations that think keeping function bodies secret\nlike this to be a good idea are organizations that will eventually make\nreally stupid mistakes. But that doesn't mean they're not under the legal\nrequirement to do this. For instance, my current employer has\n(externally-mandated) organizational conflict of interest rules that require\nall disclosure to be done exclusively as \"need to know\". Under the right\n(!) legal guidance, such a requirement could easily lead to rules about\nfunction-body disclosure. From my point of view, such a use case is way\nmore compelling than function-body encryption (although I understand that\none too).\n\nA\n\n",
"msg_date": "Fri, 21 Dec 2007 16:47:46 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source code)"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 21 Dec 2007 16:47:46 -0500\r\nAndrew Sullivan <[email protected]> wrote:\r\n\r\n> On Fri, Dec 21, 2007 at 04:19:51PM -0500, Tom Lane wrote:\r\n> > > 2.\tProtect the content of a field from _some_ users on a\r\n> > > given system,\r\n> > \r\n> > I would argue that (2) is reasonably well served today by setting up\r\n> > separate databases for separate users. \r\n> \r\n> I thought actually this was one of the use-cases we were hearing.\r\n> Different people using the same database (because the same data),\r\n> with rules about the different staff being able to see this or that\r\n> function body. I can easily imagine such a case, for instance, in a\r\n> large organization with different departments and different\r\n> responsibilities. It seems a shame that the only answer we have\r\n> there is, \"Give them different databases.\" \r\n\r\nI think there is a fundamental disconnect here. The \"Give them\r\ndifferent databases\" argument is essentially useless. Consider a\r\norganization that has Sales and HR.\r\n\r\nYou don't give both a separate database. They need access to \"some\" of\r\neach others information. Just not all.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHbDkOATb/zqfZUUQRAqzmAJ9VhNXYtr/N7px3/iUenUJN+7r9jQCfQtj+\r\nHyfo4fLGrBGUN4jJcSgZEh0=\r\n=xeLs\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Fri, 21 Dec 2007 14:07:08 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function body actors (was: [PERFORM] viewing source\n code)"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm currently benchmarking the new PostgreSQL server of one of our\ncustomers with PostgreSQL 8.3 beta4. I have more or less the same\nconfiguration Stefan tested in his blog [1]:\n- Dell 2900 with two brand new X5365 processors (quad core 3.0 GHz),\n16 GB of memory\n- a RAID1 array for pg_xlog and a 6 disks RAID10 array for data (I\nmoved pg_xlog to the RAID10 array for a few runs - same behaviour) -\nall 73 GB 15k drives\n- CentOS 5.1 - 64 bits\n\nI started working on pgbench tests. I made a \"not so stupid\"\nconfiguration to begin with and I was quite disappointed by my results\ncompared to Stefan's. I decided to test with a more default\nshared_buffers configuration to be able to compare my results with\nStefan's graph [2]. And the fact is that with a very low\nshared_buffers configuration, my results are quite similar to Stefan's\nresults but, as soon as I put higher values of shared_buffers,\nperformances begins degrading [3].\n\nI performed my tests with: pgbench -i -s 100 -U postgres bench and\npgbench -s 100 -c 100 -t 30000 -U postgres bench. Of course, I\ninitialize the database before each run. I made my tests in one\ndirection then in the other with similar results so it's not a\ndegradation due to consecutive runs.\n\nI lowered the number of concurrent clients to 50 because 100 is quite\nhigh and I obtain the same sort of results:\nshared_buffers=32MB: 1869 tps\nshared_buffers=64MB: 1844 tps\nshared_buffers=512MB: 1676 tps\nshared_buffers=1024MB: 1559 tps\n\nNon default parameters are:\nmax_connections = 200\nwork_mem = 32MB\nwal_buffers = 1024kB\ncheckpoint_segments = 192\neffective_cache_size = 5GB\n(I use more or less the configuration used by Stefan - I had the same\nbehaviour with default wal_buffers and checkpoint_segments)\n\nWhile monitoring the server with vmstat, I can't see any real reason\nwhy it's slower. When shared_buffers has a higher value, I/O are\nlower, context switches too and finally performances. The CPU usage is\nquite similar (~50-60%). I/O doesn't limit the performances AFAICS.\n\nI must admit I'm a bit puzzled. Does anyone have any pointer which\ncould explain this behaviour or any way to track the issue? I'll be\nglad to perform any test needed to understand the problem.\n\nThanks.\n\n[1] http://www.kaltenbrunner.cc/blog/index.php?/archives/21-8.3-vs.-8.2-a-simple-benchmark.html\n[2] http://www.kaltenbrunner.cc/blog/uploads/83b4shm.gif\n[3] http://people.openwide.fr/~gsmet/postgresql/tps_shared_buffers.png\n(X=shared_buffers in MB/Y=results with pgbench)\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 01:06:45 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "More shared buffers causes lower performances"
},
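A quick sanity check before comparing numbers like these is to confirm, from the backend actually being benchmarked, which settings are in effect; a minimal example, assuming nothing beyond a stock installation:

show shared_buffers;
select name, setting, unit, source
  from pg_settings
 where name in ('shared_buffers', 'wal_buffers', 'checkpoint_segments',
                'work_mem', 'effective_cache_size', 'max_connections');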
{
"msg_contents": "Guillaume Smet a �crit :\n> Hi all,\n>\n> I'm currently benchmarking the new PostgreSQL server of one of our\n> customers with PostgreSQL 8.3 beta4. I have more or less the same\n> configuration Stefan tested in his blog [1]:\n> - Dell 2900 with two brand new X5365 processors (quad core 3.0 GHz),\n> 16 GB of memory\n> - a RAID1 array for pg_xlog and a 6 disks RAID10 array for data (I\n> moved pg_xlog to the RAID10 array for a few runs - same behaviour) -\n> all 73 GB 15k drives\n> - CentOS 5.1 - 64 bits\n>\n> \n\nWhich kernel do you have ?\n\n\n> I started working on pgbench tests. I made a \"not so stupid\"\n> configuration to begin with and I was quite disappointed by my results\n> compared to Stefan's. I decided to test with a more default\n> shared_buffers configuration to be able to compare my results with\n> Stefan's graph [2]. And the fact is that with a very low\n> shared_buffers configuration, my results are quite similar to Stefan's\n> results but, as soon as I put higher values of shared_buffers,\n> performances begins degrading [3].\n>\n> I performed my tests with: pgbench -i -s 100 -U postgres bench and\n> pgbench -s 100 -c 100 -t 30000 -U postgres bench. Of course, I\n> initialize the database before each run. I made my tests in one\n> direction then in the other with similar results so it's not a\n> degradation due to consecutive runs.\n>\n> I lowered the number of concurrent clients to 50 because 100 is quite\n> high and I obtain the same sort of results:\n> shared_buffers=32MB: 1869 tps\n> shared_buffers=64MB: 1844 tps\n> shared_buffers=512MB: 1676 tps\n> shared_buffers=1024MB: 1559 tps\n>\n> Non default parameters are:\n> max_connections = 200\n> work_mem = 32MB\n> wal_buffers = 1024kB\n> checkpoint_segments = 192\n> effective_cache_size = 5GB\n> (I use more or less the configuration used by Stefan - I had the same\n> behaviour with default wal_buffers and checkpoint_segments)\n>\n> While monitoring the server with vmstat, I can't see any real reason\n> why it's slower. When shared_buffers has a higher value, I/O are\n> lower, context switches too and finally performances. The CPU usage is\n> quite similar (~50-60%). I/O doesn't limit the performances AFAICS.\n>\n> I must admit I'm a bit puzzled. Does anyone have any pointer which\n> could explain this behaviour or any way to track the issue? I'll be\n> glad to perform any test needed to understand the problem.\n>\n> Thanks.\n>\n> [1] http://www.kaltenbrunner.cc/blog/index.php?/archives/21-8.3-vs.-8.2-a-simple-benchmark.html\n> [2] http://www.kaltenbrunner.cc/blog/uploads/83b4shm.gif\n> [3] http://people.openwide.fr/~gsmet/postgresql/tps_shared_buffers.png\n> (X=shared_buffers in MB/Y=results with pgbench)\n>\n> --\n> Guillaume\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n-- \nC�dric Villemain\nAdministrateur de Base de Donn�es\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org",
"msg_date": "Wed, 26 Dec 2007 12:06:47 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Wed, 2007-12-26 at 01:06 +0100, Guillaume Smet wrote:\n\n> I lowered the number of concurrent clients to 50 because 100 is quite\n> high and I obtain the same sort of results:\n> shared_buffers=32MB: 1869 tps\n> shared_buffers=64MB: 1844 tps\n> shared_buffers=512MB: 1676 tps\n> shared_buffers=1024MB: 1559 tps\n\nCan you try with\n\nbgwriter_lru_maxpages = 0\n\nSo we can see if the bgwriter has any hand in this?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 26 Dec 2007 11:21:05 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 26, 2007 12:06 PM, Cédric Villemain <[email protected]> wrote:\n> Which kernel do you have ?\n\nKernel of the distro. So a RH flavoured 2.6.18.\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 13:47:01 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 26, 2007 12:21 PM, Simon Riggs <[email protected]> wrote:\n> Can you try with\n>\n> bgwriter_lru_maxpages = 0\n>\n> So we can see if the bgwriter has any hand in this?\n\nI will. I'm currently running tests with less concurrent clients (16)\nwith exactly the same results:\n64M 4213.314902\n256M 4012.782820\n512M 3676.840722\n768M 3377.791211\n1024M 2863.133965\n64M again 4274.531310\n\nI'm rerunning the tests using Greg Smith's pgbench-tools [1] to obtain\na graph of each run.\n\n[1] http://www.westnet.com/~gsmith/content/postgresql/pgbench-tools.htm\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 13:55:15 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 26, 2007 12:21 PM, Simon Riggs <[email protected]> wrote:\n> bgwriter_lru_maxpages = 0\n>\n> So we can see if the bgwriter has any hand in this?\n\nIt doesn't change the behaviour I have.\n\nIt's not checkpointing either as using pgbench-tools, I can see that\ntps and latency are quite stable during the entire run. Btw, thanks\nGreg for these nice tools.\n\nI thought it may be some sort of lock contention so I made a few tests\nwith -N but I have the same behaviour.\n\nThen I decided to perform read-only tests using -S option (pgbench -S\n-s 100 -c 16 -t 30000 -U postgres bench). And still the same\nbehaviour:\nshared_buffers=64MB : 20k tps\nshared_buffers=1024MB : 8k tps\n\nAny other idea?\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 16:41:24 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Hello\n\nI tested it and it is true. In my configuration 1GRam, Fedora 8, is\nPostgreSQL most fast with 32M shared buffers :(. Diff is about 5% to\n256M\n\nRegards\nPavel Stehule\n\nOn 26/12/2007, Guillaume Smet <[email protected]> wrote:\n> On Dec 26, 2007 12:21 PM, Simon Riggs <[email protected]> wrote:\n> > bgwriter_lru_maxpages = 0\n> >\n> > So we can see if the bgwriter has any hand in this?\n>\n> It doesn't change the behaviour I have.\n>\n> It's not checkpointing either as using pgbench-tools, I can see that\n> tps and latency are quite stable during the entire run. Btw, thanks\n> Greg for these nice tools.\n>\n> I thought it may be some sort of lock contention so I made a few tests\n> with -N but I have the same behaviour.\n>\n> Then I decided to perform read-only tests using -S option (pgbench -S\n> -s 100 -c 16 -t 30000 -U postgres bench). And still the same\n> behaviour:\n> shared_buffers=64MB : 20k tps\n> shared_buffers=1024MB : 8k tps\n>\n> Any other idea?\n>\n> --\n> Guillaume\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Wed, 26 Dec 2007 18:12:21 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 26, 2007 4:41 PM, Guillaume Smet <[email protected]> wrote:\n> Then I decided to perform read-only tests using -S option (pgbench -S\n> -s 100 -c 16 -t 30000 -U postgres bench). And still the same\n> behaviour:\n> shared_buffers=64MB : 20k tps\n> shared_buffers=1024MB : 8k tps\n\nSome more information. If I strace the backends during the test, the\ntest is faster with shared_buffers=1024MB and I have less system calls\n(less read and less lseek).\n\nA quick cut | uniq | sort gives me:\nWith 64MB:\n 12548 semop\n 160039 sendto\n 160056 recvfrom\n 294289 read\n 613338 lseek\n\nWith 1024MB:\n 11396 semop\n 129947 read\n 160039 sendto\n 160056 recvfrom\n 449584 lseek\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 18:20:15 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Guillaume Smet wrote:\n\n> It's not checkpointing either as using pgbench-tools, I can see that\n> tps and latency are quite stable during the entire run. Btw, thanks\n> Greg for these nice tools.\n\nI stole the graph idea from Mark Wong's DBT2 code and one of these days \nI'll credit him appropriately.\n\n> Then I decided to perform read-only tests using -S option (pgbench -S\n> -s 100 -c 16 -t 30000 -U postgres bench). And still the same\n> behaviour:\n> shared_buffers=64MB : 20k tps\n> shared_buffers=1024MB : 8k tps\n\nAh, now this is really interesting, as it rules out all the write \ncomponents and should be easy to replicate even on a smaller server. As \nyou've already dumped a bunch of time into this the only other thing I \nwould suggest checking is whether the same behavior also happens on 8.2 on \nyour server.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 26 Dec 2007 13:23:58 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 26, 2007 7:23 PM, Greg Smith <[email protected]> wrote:\n> Ah, now this is really interesting, as it rules out all the write\n> components and should be easy to replicate even on a smaller server. As\n> you've already dumped a bunch of time into this the only other thing I\n> would suggest checking is whether the same behavior also happens on 8.2 on\n> your server.\n\nLet's go with 8.2.5 on the same server (-s 100 / 16 clients / 50k\ntransactions per client / only read using -S option):\n64MB: 33814 tps\n512MB: 35833 tps\n1024MB: 36986 tps\nIt's more consistent with what I expected.\n\nI used PGDG RPMs compiled by Devrim for 8.2.5 and the ones I compiled\nmyself for 8.3b4 (based on the src.rpm of Devrim). I just asked Devrim\nto build a set of x86_64 RPMs for 8.3b4 to see if it's not a\ncompilation problem (they were compiled on a brand new box freshly\ninstalled so it would be a bit surprising but I want to be sure). He's\nkindly uploading them right now so I'll work on new tests using his\nRPMs.\n\nI'll keep you informed of the results.\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 22:52:04 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 26, 2007 10:52 PM, Guillaume Smet <[email protected]> wrote:\n> Let's go with 8.2.5 on the same server (-s 100 / 16 clients / 50k\n> transactions per client / only read using -S option):\n> 64MB: 33814 tps\n> 512MB: 35833 tps\n> 1024MB: 36986 tps\n> It's more consistent with what I expected.\n\nI had the same numbers with 8.3b4.x86_64 RPMs compiled by Devrim than\nwith the ones I compiled myself. While discussing with Devrim, I\nchecked the .spec with a little more attention and... I noticed that\nbeta RPMs are by default compiled with --enable-debug and\n--enable-cassert which doesn't help them to fly fast...\nI did all my previous benchmarks with binaries compiled directly from\nCVS so I didn't notice it before and this new server was far faster\nthan the box I tested 8.3devel before so I wasn't surprised by the\nother results..\n\nSo, the conclusion is: if you really want to test/benchmark 8.3beta4\nusing the RPM packages, you'd better compile your own set of RPMs\nusing --define \"beta 0\".\n\nReally sorry for the noise but anyway quite happy to have discovered\nthe pgbench-tools of Greg.\n\nI hope it will be useful to other people. I'll post new results\nyesterday with a clean beta4 install.\n\n--\nGuillaume\n",
"msg_date": "Wed, 26 Dec 2007 23:53:13 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Guillaume Smet wrote:\n\n> beta RPMs are by default compiled with --enable-debug and\n> --enable-cassert which doesn't help them to fly fast...\n\nGot that right. Last time I was going crazy after running pgbench with \nthose options and not having realized what I changed, I was getting a 50% \nslowdown on results that way compared to without the debugging stuff. \nDidn't realize it scaled with shared_buffers though.\n\n> Really sorry for the noise\n\nNothing to be sorry for, I know I wasn't aware the beta RPMs were compiled \nthat way. Probably need to put a disclaimer about that fact *somewhere*. \nIt's unfortunate for you, but I know I'm glad you run into it rather than \nsomeone who wouldn't have followed through to figure out the cause.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 26 Dec 2007 18:35:33 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Hi,\n\nOn Wed, 2007-12-26 at 18:35 -0500, Greg Smith wrote:\n> Probably need to put a disclaimer about that fact *somewhere*. \n\nWe mention about that in README.rpm-dist file, but I think we should\nmention about that at a more visible place.\n\nRegards,\n-- \nDevrim GÜNDÜZ , RHCE\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, ODBCng - http://www.commandprompt.com/",
"msg_date": "Wed, 26 Dec 2007 15:40:53 -0800",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> On Wed, 26 Dec 2007, Guillaume Smet wrote:\n>> beta RPMs are by default compiled with --enable-debug and\n>> --enable-cassert which doesn't help them to fly fast...\n\n> Got that right. Last time I was going crazy after running pgbench with \n> those options and not having realized what I changed, I was getting a 50% \n> slowdown on results that way compared to without the debugging stuff. \n> Didn't realize it scaled with shared_buffers though.\n\nSee AtEOXact_Buffers(). There are probably any number of other\ninteresting scaling behaviors --- in my tests, AllocSetCheck() is\nnormally a major cycle-eater if --enable-cassert is set, and that costs\ntime proportional to the number of memory chunks allocated by the query.\n\nCurrently the docs say that --enable-cassert\n\n Enables <firstterm>assertion</> checks in the server, which test for\n many <quote>cannot happen</> conditions. This is invaluable for\n code development purposes, but the tests slow things down a little.\n\nMaybe we ought to put that more strongly --- s/a little/significantly/,\nperhaps?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2007 01:10:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances "
},
{
"msg_contents": "On Dec 27, 2007 7:10 AM, Tom Lane <[email protected]> wrote:\n> Enables <firstterm>assertion</> checks in the server, which test for\n> many <quote>cannot happen</> conditions. This is invaluable for\n> code development purposes, but the tests slow things down a little.\n>\n> Maybe we ought to put that more strongly --- s/a little/significantly/,\n> perhaps?\n\n+1. It seems closer to the reality.\n\n--\nGuillaume\n",
"msg_date": "Thu, 27 Dec 2007 09:41:37 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Thu, Dec 27, 2007 at 01:10:29AM -0500, Tom Lane wrote:\n> Greg Smith <[email protected]> writes:\n> > On Wed, 26 Dec 2007, Guillaume Smet wrote:\n> >> beta RPMs are by default compiled with --enable-debug and\n> >> --enable-cassert which doesn't help them to fly fast...\n> \n> > Got that right. Last time I was going crazy after running pgbench with \n> > those options and not having realized what I changed, I was getting a 50% \n> > slowdown on results that way compared to without the debugging stuff. \n> > Didn't realize it scaled with shared_buffers though.\n> \n> See AtEOXact_Buffers(). There are probably any number of other\n> interesting scaling behaviors --- in my tests, AllocSetCheck() is\n> normally a major cycle-eater if --enable-cassert is set, and that costs\n> time proportional to the number of memory chunks allocated by the query.\n> \n> Currently the docs say that --enable-cassert\n> \n> Enables <firstterm>assertion</> checks in the server, which test for\n> many <quote>cannot happen</> conditions. This is invaluable for\n> code development purposes, but the tests slow things down a little.\n> \n> Maybe we ought to put that more strongly --- s/a little/significantly/,\n> perhaps?\n\nSounds like a good idea. We got bit by the same thing when doing some\nbenchmarks on the MSVC port (and with we I mean Dave did the work, and several\npeople couldn't understand why the numbers sucked)\n\n//Magnus\n",
"msg_date": "Thu, 27 Dec 2007 09:47:59 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Tom Lane escribi�:\n\n> Currently the docs say that --enable-cassert\n> \n> Enables <firstterm>assertion</> checks in the server, which test for\n> many <quote>cannot happen</> conditions. This is invaluable for\n> code development purposes, but the tests slow things down a little.\n> \n> Maybe we ought to put that more strongly --- s/a little/significantly/,\n> perhaps?\n\nI don't think it will make any difference, because people don't read\nconfigure documentation. They read configure --help.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 27 Dec 2007 10:20:03 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "\"Alvaro Herrera\" <[email protected]> writes:\n\n> Tom Lane escribió:\n>\n>> Currently the docs say that --enable-cassert\n>> \n>> Enables <firstterm>assertion</> checks in the server, which test for\n>> many <quote>cannot happen</> conditions. This is invaluable for\n>> code development purposes, but the tests slow things down a little.\n>> \n>> Maybe we ought to put that more strongly --- s/a little/significantly/,\n>> perhaps?\n>\n> I don't think it will make any difference, because people don't read\n> configure documentation. They read configure --help.\n\nFwiw I think you're all getting a bit caught up in this one context. While the\nslowdown is significant when you take out the stopwatch, under normal\ninteractive use you're not going to notice your queries being especially slow.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n",
"msg_date": "Thu, 27 Dec 2007 14:51:40 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Thu, 27 Dec 2007, Gregory Stark wrote:\n\n> Fwiw I think you're all getting a bit caught up in this one context.\n\nI lost a day once over this problem. Guillaume lost at least that much. \nSounds like Magnus and Dave got a good sized dose as well. Seems like \nsomething worth warning people about to me. The worst time people can run \ninto a performance regression is when they're running a popular \nbenchmarking tool. I didn't think this was a big problem because I \nthought it was limited to developers who shot their own foot, but if there \nare packagers turning this on to improve beta feedback it deserves some \nwider mention.\n\nAs for the suggestion that people don't read the documentation, take a \nlook at the above list of developers and tell me whether that group is \naware of what's in the docs or not. I had never seen anyone bring this up \nbefore I ran into it, and I dumped a strong warning into \nhttp://developer.postgresql.org/index.php/Working_with_CVS#Initial_setup \nso at least it was written down somewhere.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 27 Dec 2007 10:53:43 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> ... I didn't think this was a big problem because I \n> thought it was limited to developers who shot their own foot, but if there \n> are packagers turning this on to improve beta feedback it deserves some \n> wider mention.\n\nYeah, binary packages that are built with --enable-cassert perhaps need\nto be labeled as \"not intended for benchmarking\" or some such.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2007 11:04:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances "
},
{
"msg_contents": "\"Greg Smith\" <[email protected]> writes:\n\n> The worst time people can run into a performance\n> regression is when they're running a popular benchmarking tool. \n\nHm, perhaps pg_bench should do a \"show debug_assertions\" and print a warning\nif the answer isn't \"off\". We could encourage other benchmark software to do\nsomething similar.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n",
"msg_date": "Thu, 27 Dec 2007 16:42:44 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
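Until something like that is built into pgbench, the same check is easy to run by hand against the server being benchmarked; either form works on a stock build:

show debug_assertions;                        -- 'on' means the server was built with --enable-cassert
select current_setting('debug_assertions');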
{
"msg_contents": "Tom Lane escribi�:\n> Greg Smith <[email protected]> writes:\n> > ... I didn't think this was a big problem because I \n> > thought it was limited to developers who shot their own foot, but if there \n> > are packagers turning this on to improve beta feedback it deserves some \n> > wider mention.\n> \n> Yeah, binary packages that are built with --enable-cassert perhaps need\n> to be labeled as \"not intended for benchmarking\" or some such.\n\nPerhaps make them emit a WARNING at server start or something.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 27 Dec 2007 13:50:46 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Perhaps make them emit a WARNING at server start or something.\n\nI concur with Greg Stark's earlier comment that this is all\noverreaction. Let's just fix the misleading comment in the\ndocumentation and leave it at that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Dec 2007 13:54:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances "
},
{
"msg_contents": "On Dec 27, 2007 7:54 PM, Tom Lane <[email protected]> wrote:\n> I concur with Greg Stark's earlier comment that this is all\n> overreaction. Let's just fix the misleading comment in the\n> documentation and leave it at that.\n\nIMHO, we should also have a special tag for all the binaries\ndistributed with these options on the official website (RPM or not).\nIf the RPM packages' version has been tagged .debug or something like\nthat, it would have been the first thing I checked.\n\nI like Gregory's idea to add a warning in pgbench. I usually run a few\npgbench tests to check there is no obvious problem even if I use\nanother more complicated benchmark afterwards. I don't know if that's\nwhat other people do, though.\n\n--\nGuillaume\n",
"msg_date": "Sat, 29 Dec 2007 11:38:40 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "On Dec 25, 2007 7:06 PM, Guillaume Smet <[email protected]> wrote:\n> While monitoring the server with vmstat, I can't see any real reason\n> why it's slower. When shared_buffers has a higher value, I/O are\n> lower, context switches too and finally performances. The CPU usage is\n> quite similar (~50-60%). I/O doesn't limit the performances AFAICS.\n\nCan you confirm that i/o is lower according to iostat? One\npossibility is that you are on the cusp of where your server's memory\ncovers the database and the higher buffers results in lower memory\nefficiency.\n\nIf raising shared buffers is getting you more page faults to disk,\nthis would explain the lower figures regardless of the # of syscalls.\nIf your iowait is zero though the test is cpu bound and this\ndistinction is moot.\n\nmerlin\n",
"msg_date": "Sat, 29 Dec 2007 12:22:00 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
},
{
"msg_contents": "Tom Lane wrote:\n> Greg Smith <[email protected]> writes:\n> > On Wed, 26 Dec 2007, Guillaume Smet wrote:\n> >> beta RPMs are by default compiled with --enable-debug and\n> >> --enable-cassert which doesn't help them to fly fast...\n> \n> > Got that right. Last time I was going crazy after running pgbench with \n> > those options and not having realized what I changed, I was getting a 50% \n> > slowdown on results that way compared to without the debugging stuff. \n> > Didn't realize it scaled with shared_buffers though.\n> \n> See AtEOXact_Buffers(). There are probably any number of other\n> interesting scaling behaviors --- in my tests, AllocSetCheck() is\n> normally a major cycle-eater if --enable-cassert is set, and that costs\n> time proportional to the number of memory chunks allocated by the query.\n> \n> Currently the docs say that --enable-cassert\n> \n> Enables <firstterm>assertion</> checks in the server, which test for\n> many <quote>cannot happen</> conditions. This is invaluable for\n> code development purposes, but the tests slow things down a little.\n> \n> Maybe we ought to put that more strongly --- s/a little/significantly/,\n> perhaps?\n\nDocs updated with attached patch, backpatched to 8.3.X.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +",
"msg_date": "Thu, 6 Mar 2008 16:37:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More shared buffers causes lower performances"
}
] |
[
{
"msg_contents": "Hi list,\n\nI am building kind of a poor man's database server: \n\nPentium D 945 (2 x 3 Ghz cores)\n4 GB RAM\n4 x 160 GB SATA II 7200 rpm (Intel server motherboard has only 4 SATA ports)\n\nDatabase will be about 30 GB in size initially and growing 10 GB per year.\nData is inserted overnight in two big tables and during the day mostly\nread-only queries are run. Parallelism is rare.\n\nI have read about different raid levels with Postgres but the advice found\nseems to apply on 8+ disks systems. With only four disks and performance in\nmind should I build a RAID 10 or RAID 5 array? Raid 0 is overruled since\nredundancy is needed.\nI am going to use software Raid with Linux (Ubuntu Server 6.06). \n\nThanks for any hindsight.\nRegards,\nFernando.\n\n\n\n\n\nWith 4 disks should I go for RAID 5 or RAID 10\n\n\n\nHi list,\nI am building kind of a poor man’s database server: \nPentium D 945 (2 x 3 Ghz cores)\n4 GB RAM\n4 x 160 GB SATA II 7200 rpm (Intel server motherboard has only 4 SATA ports)\n\nDatabase will be about 30 GB in size initially and growing 10 GB per year. Data is inserted overnight in two big tables and during the day mostly read-only queries are run. Parallelism is rare.\n\nI have read about different raid levels with Postgres but the advice found seems to apply on 8+ disks systems. With only four disks and performance in mind should I build a RAID 10 or RAID 5 array? Raid 0 is overruled since redundancy is needed.\nI am going to use software Raid with Linux (Ubuntu Server 6.06). \n\nThanks for any hindsight.\nRegards,\nFernando.",
"msg_date": "Wed, 26 Dec 2007 12:15:03 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "\nRAID 10.\n\nI snipped the rest of your message because none of it matters. Never use\nRAID 5 on a database system. Ever. There is absolutely NO reason to\nevery put yourself through that much suffering. If you hate yourself\nthat much just commit suicide, it's less drastic.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Wed, 26 Dec 2007 10:21:05 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Fernando Hevia wrote:\n>\n> Database will be about 30 GB in size initially and growing 10 GB per \n> year. Data is inserted overnight in two big tables and during the day \n> mostly read-only queries are run. Parallelism is rare.\n>\n> I have read about different raid levels with Postgres but the advice \n> found seems to apply on 8+ disks systems. With only four disks and \n> performance in mind should I build a RAID 10 or RAID 5 array? Raid 0 \n> is overruled since redundancy is needed.\n>\n> I am going to use software Raid with Linux (Ubuntu Server 6.06).\n>\n\nIn my experience, software RAID 5 is horrible. Write performance can \ndecrease below the speed of one disk on its own, and read performance \nwill not be significantly more than RAID 1+0 as the number of stripes \nhas only increased from 2 to 3, and if reading while writing, you will \nnot get 3X as RAID 5 write requires at least two disks to be involved. I \nbelieve hardware RAID 5 is also horrible, but since the hardware hides \nit from the application, a hardware RAID 5 user might not care.\n\nSoftware RAID 1+0 works fine on Linux with 4 disks. This is the setup I \nuse for my personal server.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nFernando Hevia wrote:\n\n\n\nWith 4 disks should I go for RAID 5 or RAID 10\n\n\nDatabase will\nbe about 30 GB\nin size initially and growing 10 GB per year. Data\nis inserted overnight in\ntwo big tables and during the day mostly read-only queries are run. Parallelism is\nrare.\n\nI have read about different\nraid levels with Postgres but the advice found seems\nto apply on 8+ disks systems. With only four disks and performance in mind should I build a RAID 10 or RAID 5 array? Raid 0 is overruled since redundancy is needed.\nI am going to use software\nRaid with Linux (Ubuntu Server 6.06). \n\n\n\n\nIn my experience, software RAID 5 is horrible. Write performance can\ndecrease below the speed of one disk on its own, and read performance\nwill not be significantly more than RAID 1+0 as the number of stripes\nhas only increased from 2 to 3, and if reading while writing, you will\nnot get 3X as RAID 5 write requires at least two disks to be involved.\nI believe hardware RAID 5 is also horrible, but since the hardware\nhides it from the application, a hardware RAID 5 user might not care.\n\nSoftware RAID 1+0 works fine on Linux with 4 disks. This is the setup I\nuse for my personal server.\n\nCheers,\nmark\n\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Wed, 26 Dec 2007 11:23:28 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Mark Mielke wrote:\n\n> I believe hardware RAID 5 is also horrible, but since the hardware hides \n> it from the application, a hardware RAID 5 user might not care.\n\nTypically anything doing hardware RAID 5 also has a reasonable sized write \ncache on the controller, which softens the problem a bit. As soon as you \nexceed what it can buffer you're back to suffering again.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 26 Dec 2007 13:13:56 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "\n> Bill Moran wrote:\n> \n> RAID 10.\n> \n> I snipped the rest of your message because none of it matters. Never use\n> RAID 5 on a database system. Ever. There is absolutely NO reason to\n> every put yourself through that much suffering. If you hate yourself\n> that much just commit suicide, it's less drastic.\n> \n\nWell, that's a pretty strong argument. No suicide in my plans, gonna stick\nto RAID 10. :)\nThanks.\n\n",
"msg_date": "Wed, 26 Dec 2007 16:28:25 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Mark Mielke Wrote:\n\n>In my experience, software RAID 5 is horrible. Write performance can\n>decrease below the speed of one disk on its own, and read performance will\n>not be significantly more than RAID 1+0 as the number of stripes has only\n>increased from 2 to 3, and if reading while writing, you will not get 3X as\n>RAID 5 write requires at least two disks to be involved. I believe hardware\n>RAID 5 is also horrible, but since the hardware hides it from the\n>application, a hardware RAID 5 user might not care.\n\n>Software RAID 1+0 works fine on Linux with 4 disks. This is the setup I use\n>for my personal server.\n\nI will use software RAID so RAID 1+0 seems to be the obvious choice.\nThanks for the advice!\n\n\n",
"msg_date": "Wed, 26 Dec 2007 16:32:21 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "\n\n> David Lang Wrote:\n> \n> with only four drives the space difference between raid 1+0 and raid 5\n> isn't that much, but when you do a write you must write to two drives (the\n> drive holding the data you are changing, and the drive that holds the\n> parity data for that stripe, possibly needing to read the old parity data\n> first, resulting in stalling for seek/read/calculate/seek/write since\n> the drive moves on after the read), when you read you must read _all_\n> drives in the set to check the data integrity.\n\nThanks for the explanation David. It's good to know not only what but also\nwhy. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be\nread: the one with the data and the parity disk?\n\n> \n> for seek heavy workloads (which almost every database application is) the\n> extra seeks involved can be murder on your performance. if your workload\n> is large sequential reads/writes, and you can let the OS buffer things for\n> you, the performance of raid 5 is much better.\n\nWell, actually most of my application involves large sequential\nreads/writes. The memory available for buffering (4GB) isn't bad either, at\nleast for my scenario. On the other hand I have got such strong posts\nagainst RAID 5 that I doubt to even consider it.\n\n> \n> Linux software raid can do more then two disks in a mirror, so you may be\n> able to get the added protection with raid 1 sets (again, probably not\n> relavent to four drives), although there were bugs in this within the last\n> six months or so, so you need to be sure your kernel is new enough to have\n> the fix.\n> \n\nWell, here rises another doubt. Should I go for a single RAID 1+0 storing OS\n+ Data + WAL files or will I be better off with two RAID 1 separating data\nfrom OS + Wal files?\n\n> now, if you can afford solid-state drives which don't have noticable seek\n> times, things are completely different ;-)\n\nHa, sadly budget is very tight. :)\n\nRegards,\nFernando.\n\n",
"msg_date": "Wed, 26 Dec 2007 17:52:27 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "> seek/read/calculate/seek/write since the drive moves on after the\n> read), when you read you must read _all_ drives in the set to check\n> the data integrity.\n\nI don't know of any RAID implementation that performs consistency\nchecking on each read operation. 8-(\n",
"msg_date": "Wed, 26 Dec 2007 21:55:38 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "In response to \"Fernando Hevia\" <[email protected]>:\n> \n> > David Lang Wrote:\n> > \n> > with only four drives the space difference between raid 1+0 and raid 5\n> > isn't that much, but when you do a write you must write to two drives (the\n> > drive holding the data you are changing, and the drive that holds the\n> > parity data for that stripe, possibly needing to read the old parity data\n> > first, resulting in stalling for seek/read/calculate/seek/write since\n> > the drive moves on after the read), when you read you must read _all_\n> > drives in the set to check the data integrity.\n> \n> Thanks for the explanation David. It's good to know not only what but also\n> why. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be\n> read: the one with the data and the parity disk?\n\nIn order to recalculate the parity, it has to have data from all disks. Thus,\nif you have 4 disks, it has to read 2 (the unknown data blocks included in\nthe parity calculation) then write 2 (the new data block and the new\nparity data) Caching can help some, but if your data ends up being any\nsize at all, the cache misses become more frequent than the hits. Even\nwhen caching helps, you max speed is still only the speed of a single\ndisk.\n\n> > for seek heavy workloads (which almost every database application is) the\n> > extra seeks involved can be murder on your performance. if your workload\n> > is large sequential reads/writes, and you can let the OS buffer things for\n> > you, the performance of raid 5 is much better.\n> \n> Well, actually most of my application involves large sequential\n> reads/writes.\n\nWill it? Will you be deleting or updating data? If so, you'll generate\ndead tuples, which vacuum will have to clean up, which means seeks, and\nmeans you new data isn't liable to be sequentially written.\n\nThe chance that you actually have a workload that will result in\nconsistently sequential writes at the disk level is very slim, in my\nexperience. When vacuum is taking hours and hours, you'll understand\nthe pain.\n\n> The memory available for buffering (4GB) isn't bad either, at\n> least for my scenario. On the other hand I have got such strong posts\n> against RAID 5 that I doubt to even consider it.\n\nIf 4G is enough to buffer all your data, then why do you need the extra\nspace of RAID 5? If you need the extra space of the RAID 5, then 4G\nisn't enough to buffer all your data, and that buffer will be of limited\nusefulness.\n\nIn any event, even if you've got 300G of RAM to buffer data in, sooner or\nlater you've got to write it to disk, and no matter how much RAM you have,\nyour write speed will be limited by how fast your disks can commit.\n\nIf you had a database multiple petabytes in size, you could worry about\nneeding the extra space that RAID 5 gives you, but then you'd realize\nthat the speed problems associated with RAID 5 will make a petabyte sized\ndatabase completely unmanageable.\n\nThere's just no scenario where RAID 5 is a win for database work. Period.\nRationalize all you want. For those trying to defend RAID 5, I invite you\nto try it. When you're on the verge of suicide because you can't get\nany work done, don't say I didn't say so.\n\n> Well, here rises another doubt. 
Should I go for a single RAID 1+0 storing OS\n> + Data + WAL files or will I be better off with two RAID 1 separating data\n> from OS + Wal files?\n\nGenerally speaking, if you want the absolute best performance, it's\ngenerally recommended to keep the WAL logs on one partition/controller\nand the remaining database files on a second one. However, with only\n4 disks, you might get just as much out of a RAID 1+0.\n\n> > now, if you can afford solid-state drives which don't have noticable seek\n> > times, things are completely different ;-)\n> \n> Ha, sadly budget is very tight. :)\n\nBudget is always tight. That's why you don't want a RAID 5. Do a RAID 5\nnow thinking you'll save a few bucks, and you'll be spending twice that\nmuch later trying to fix your mistake. It's called tripping over a dime\nto pick up a nickel.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Wed, 26 Dec 2007 16:14:30 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
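The read-modify-write cycle described in the message above can be sketched in a few lines. This is a toy model only (one byte per block, made-up values, XOR parity); it is not how any particular controller or software RAID driver is implemented, but it shows why a single small write on RAID 5 costs two reads plus two writes.

```python
def xor_parity(*blocks):
    """XOR of all data blocks in a stripe."""
    p = 0
    for b in blocks:
        p ^= b
    return p

# Hypothetical 4-drive RAID 5 stripe: 3 one-byte data blocks plus 1 parity block.
data = [0x11, 0x22, 0x33]
parity = xor_parity(*data)          # lives on the fourth drive

new_block = 0x7F                    # overwrite data block 1

old_block = data[1]                 # READ 1: the block being replaced
old_parity = parity                 # READ 2: the old parity block
new_parity = old_parity ^ old_block ^ new_block

data[1] = new_block                 # WRITE 1: the new data block
parity = new_parity                 # WRITE 2: the new parity block

assert xor_parity(*data) == parity  # parity still covers the whole stripe
```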
{
"msg_contents": "On Wed, 26 Dec 2007, Fernando Hevia wrote:\n\n> Mark Mielke Wrote:\n>\n>> In my experience, software RAID 5 is horrible. Write performance can\n>> decrease below the speed of one disk on its own, and read performance will\n>> not be significantly more than RAID 1+0 as the number of stripes has only\n>> increased from 2 to 3, and if reading while writing, you will not get 3X as\n>> RAID 5 write requires at least two disks to be involved. I believe hardware\n>> RAID 5 is also horrible, but since the hardware hides it from the\n>> application, a hardware RAID 5 user might not care.\n>\n>> Software RAID 1+0 works fine on Linux with 4 disks. This is the setup I use\n>> for my personal server.\n>\n> I will use software RAID so RAID 1+0 seems to be the obvious choice.\n> Thanks for the advice!\n\nto clarify things a bit more.\n\nwith only four drives the space difference between raid 1+0 and raid 5 \nisn't that much, but when you do a write you must write to two drives (the \ndrive holding the data you are changing, and the drive that holds the \nparity data for that stripe, possibly needing to read the old parity data \nfirst, resulting in stalling for seek/read/calculate/seek/write since \nthe drive moves on after the read), when you read you must read _all_ \ndrives in the set to check the data integrity.\n\nfor seek heavy workloads (which almost every database application is) the \nextra seeks involved can be murder on your performance. if your workload \nis large sequential reads/writes, and you can let the OS buffer things for \nyou, the performance of raid 5 is much better.\n\non the other hand, doing raid 6 (instead of raid 5) gives you extra data \nprotection in exchange for the performance hit, but with only 4 drives \nthis probably isn't what you are looking for.\n\nLinux software raid can do more then two disks in a mirror, so you may be \nable to get the added protection with raid 1 sets (again, probably not \nrelavent to four drives), although there were bugs in this within the last \nsix months or so, so you need to be sure your kernel is new enough to have \nthe fix.\n\nnow, if you can afford solid-state drives which don't have noticable seek \ntimes, things are completely different ;-)\n\nDavid Lang\n",
"msg_date": "Wed, 26 Dec 2007 13:28:12 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Florian Weimer wrote:\n>> seek/read/calculate/seek/write since the drive moves on after the\n>> read), when you read you must read _all_ drives in the set to check\n>> the data integrity.\n>> \n> I don't know of any RAID implementation that performs consistency\n> checking on each read operation. 8-(\n> \n\nDave had too much egg nog... :-)\n\nYep - checking consistency on read would eliminate the performance \nbenefits of RAID under any redundant configuration.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nFlorian Weimer wrote:\n\n\nseek/read/calculate/seek/write since the drive moves on after the\nread), when you read you must read _all_ drives in the set to check\nthe data integrity.\n \n\nI don't know of any RAID implementation that performs consistency\nchecking on each read operation. 8-(\n \n\n\nDave had too much egg nog... :-)\n\nYep - checking consistency on read would eliminate the performance\nbenefits of RAID under any redundant configuration.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Wed, 26 Dec 2007 16:40:40 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "[email protected] wrote:\n>> Thanks for the explanation David. It's good to know not only what but \n>> also\n>> why. Still I wonder why reads do hit all drives. Shouldn't only 2 \n>> disks be\n>> read: the one with the data and the parity disk?\n> no, becouse the parity is of the sort (A+B+C+P) mod X = 0\n> so if X=10 (which means in practice that only the last decimal digit \n> of anything matters, very convienient for examples)\n> A=1, B=2, C=3, A+B+C=6, P=4, A+B+C+P=10=0\n> if you read B and get 3 and P and get 4 you don't know if this is \n> right or not unless you also read A and C (at which point you would \n> get A+B+C+P=11=1=error)\nI don't think this is correct. RAID 5 is parity which is XOR. The \nproperty of XOR is such that it doesn't matter what the other drives \nare. You can write any block given either: 1) The block you are \noverwriting and the parity, or 2) all other blocks except for the block \nwe are writing and the parity. Now, it might be possible that option 2) \nis taken more than option 1) for some complicated reasons, but it is NOT \nto check consistency. The array is assumed consistent until proven \notherwise.\n\n> in theory a system could get the same performance with a large \n> sequential read/write on raid5/6 as on a raid0 array of equivilent \n> size (i.e. same number of data disks, ignoring the parity disks) \n> becouse the OS could read the entire stripe in at once, do the \n> calculation once, and use all the data (or when writing, don't write \n> anything until you are ready to write the entire stripe, calculate the \n> parity and write everything once).\nFor the same number of drives, this cannot be possible. With 10 disks, \non raid5, 9 disks hold data, and 1 holds parity. The theoretical maximum \nperformance is only 9/10 of the 10/10 performance possible with RAID 0.\n\n> Unfortunantly in practice filesystems don't support this, they don't \n> do enough readahead to want to keep the entire stripe (so after they \n> read it all in they throw some of it away), they (mostly) don't know \n> where a stripe starts (and so intermingle different types of data on \n> one stripe and spread data across multiple stripes unessasarily), and \n> they tend to do writes in small, scattered chunks (rather then \n> flushing an entire stripes worth of data at once)\nIn my experience, this theoretical maximum is not attainable without \nsignificant write cache, and an intelligent controller, neither of which \nLinux software RAID seems to have by default. My situation was a bit \nworse in that I used applications that fsync() or journalled metadata \nthat is ordered, which forces the Linux software RAID to flush far more \nthan it should - but the same system works very well with RAID 1+0.\n\n>>> Linux software raid can do more then two disks in a mirror, so you \n>>> may be\n>>> able to get the added protection with raid 1 sets (again, probably not\n>>> relavent to four drives), although there were bugs in this within \n>>> the last\n>>> six months or so, so you need to be sure your kernel is new enough \n>>> to have\n>>> the fix.\n>>>\n>> Well, here rises another doubt. 
Should I go for a single RAID 1+0 \n>> storing OS\n>> + Data + WAL files or will I be better off with two RAID 1 separating \n>> data\n>> from OS + Wal files?\n> if you can afford the space, you are almost certinly better seperating \n> the WAL from the data (I think I've seen debates about which is better \n> OS+data/Wal or date/OS+Wal, but very little disagreement that either \n> is better than combining them all)\nI don't think there is a good answer for this question. If you can \nafford my drives, you could also afford to make your RAID 1+0 bigger. \nSplitting OS/DATA/WAL is only \"absolute best\" if can arrange your 3 \narrays such that there size is relative to their access patterns. For \nexample, in an overly simplified case, if you use OS 1/4 of DATA, and \nWAL 1/2 of DATA, then perhaps \"best\" is to have a two-disk RAID 1 for \nOS, a four-disk RAID 1+0 for WAL, and an eight-disk RAID 1+0 for DATA. \nThis gives a total of 14 disks. :-)\n\nIn practice, if you have four drives, and you try and it into two plus \ntwo, you're going to find that two of the drives are going to be more \nidle than the other two.\n\nI have a fun setup - I use RAID 1 across all four drives for the OS, \nRAID 1+0 for the database, wal, and other parts, and RAID 0 for a \n\"build\" partition. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Wed, 26 Dec 2007 16:54:00 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
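A small sketch of the XOR property described in the message above, with made-up one-byte blocks: the new parity can be computed either from the old block plus the old parity (option 1) or from all the other data blocks (option 2), and both give the same answer.

```python
from functools import reduce
from operator import xor

# Hypothetical stripe: four one-byte data blocks plus an XOR parity block.
data = [0x10, 0x20, 0x30, 0x40]
parity = reduce(xor, data)

idx, new_val = 2, 0x99              # overwrite block 2

# Option 1: read only the block being overwritten and the old parity.
parity_opt1 = parity ^ data[idx] ^ new_val

# Option 2: read all the other data blocks instead and recompute from scratch.
others = [b for i, b in enumerate(data) if i != idx]
parity_opt2 = reduce(xor, others) ^ new_val

assert parity_opt1 == parity_opt2   # both strategies produce the same new parity
```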
{
"msg_contents": "[email protected] wrote:\n> On Wed, 26 Dec 2007, Mark Mielke wrote:\n>\n>> Florian Weimer wrote:\n>>>> seek/read/calculate/seek/write since the drive moves on after the\n>>>> read), when you read you must read _all_ drives in the set to check\n>>>> the data integrity.\n>>> I don't know of any RAID implementation that performs consistency\n>>> checking on each read operation. 8-(\n>> Dave had too much egg nog... :-)\n>> Yep - checking consistency on read would eliminate the performance \n>> benefits of RAID under any redundant configuration.\n> except for raid0, raid is primarily a reliability benifit, any \n> performance benifit is incidental, not the primary purpose.\n> that said, I have heard of raid1 setups where it only reads off of one \n> of the drives, but I have not heard of higher raid levels doing so.\nWhat do you mean \"heard of\"? Which raid system do you know of that reads \nall drives for RAID 1?\n\nLinux dmraid reads off ONLY the first. Linux mdadm reads off the \"best\" \none. Neither read from both. Why should it need to read from both? What \nwill it do if the consistency check fails? It's not like it can tell \nwhich disk is the right one. It only knows that the whole array is \ninconsistent. Until it gets an actual hardware failure (read error, \nwrite error), it doesn't know which disk is wrong.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Wed, 26 Dec 2007 16:57:08 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "[email protected] wrote:\n> I could see a raid 1 array not doing consistancy checking (after all, \n> it has no way of knowing what's right if it finds an error), but since \n> raid 5/6 can repair the data I would expect them to do the checking \n> each time.\nYour messages are spread across the thread. :-)\n\nRAID 5 cannot repair the data. I don't know much about RAID 6, but I \nexpect it cannot necessarily repair the data either. It still doesn't \nknow which drive is wrong. In any case, there is no implementation I am \naware of that performs mandatory consistency checks on read. This would \nbe silliness.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Wed, 26 Dec 2007 16:59:45 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Bill Moran wrote:\n> In order to recalculate the parity, it has to have data from all disks. Thus,\n> if you have 4 disks, it has to read 2 (the unknown data blocks included in\n> the parity calculation) then write 2 (the new data block and the new\n> parity data) Caching can help some, but if your data ends up being any\n> size at all, the cache misses become more frequent than the hits. Even\n> when caching helps, you max speed is still only the speed of a single\n> disk.\n> \nIf you have 4 disks, it can do either:\n\n 1) Read the old block, read the parity block, XOR the old block with \nthe parity block and the new block resulting in the new parity block, \nwrite both the new parity block and the new block.\n 2) Read the two unknown blocks, XOR with the new block resulting in \nthe new parity block, write both the new parity block and the new block.\n\nYou are emphasizing 2 - but the scenario is also overly simplistic. \nImagine you had 10 drives on RAID 5. Would it make more sense to read 8 \nblocks and then write two (option 2, and the one you describe), or read \ntwo blocks and then write two (option 1). Obviously, if option 1 or \noption 2 can be satisfied from cache, it is better to not read at all.\n\nI note that you also disagree with Dave, in that you are not claiming it \nperforms consistency checks on read. No system does this as performance \nwould go to the crapper.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Wed, 26 Dec 2007 17:05:54 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
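For the 4-drive and 10-drive examples above, the choice between the two update strategies comes down to simple I/O counting. The sketch below assumes a single parity drive per stripe and an empty cache; real controllers also weigh cache contents and stripe alignment.

```python
def rmw_ios(n_drives):
    """Read-modify-write: read old data block + old parity, write new data + new parity."""
    return {"reads": 2, "writes": 2}

def reconstruct_ios(n_drives):
    """Reconstruct-write: read all the other data blocks, recompute parity, write data + parity."""
    return {"reads": n_drives - 2, "writes": 2}

for n in (4, 10):
    print(n, "drives:", "RMW", rmw_ios(n), "reconstruct", reconstruct_ios(n))
# With 4 drives both strategies read 2 blocks; with 10 drives reconstruct-write reads 8.
```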
{
"msg_contents": "In response to Mark Mielke <[email protected]>:\n\n> [email protected] wrote:\n> > On Wed, 26 Dec 2007, Mark Mielke wrote:\n> >\n> >> Florian Weimer wrote:\n> >>>> seek/read/calculate/seek/write since the drive moves on after the\n> >>>> read), when you read you must read _all_ drives in the set to check\n> >>>> the data integrity.\n> >>> I don't know of any RAID implementation that performs consistency\n> >>> checking on each read operation. 8-(\n> >> Dave had too much egg nog... :-)\n> >> Yep - checking consistency on read would eliminate the performance \n> >> benefits of RAID under any redundant configuration.\n> > except for raid0, raid is primarily a reliability benifit, any \n> > performance benifit is incidental, not the primary purpose.\n> > that said, I have heard of raid1 setups where it only reads off of one \n> > of the drives, but I have not heard of higher raid levels doing so.\n> What do you mean \"heard of\"? Which raid system do you know of that reads \n> all drives for RAID 1?\n\nI'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\nconsistency checking at that point.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Wed, 26 Dec 2007 17:11:05 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "In response to Mark Mielke <[email protected]>:\n\n> Bill Moran wrote:\n> > In order to recalculate the parity, it has to have data from all disks. Thus,\n> > if you have 4 disks, it has to read 2 (the unknown data blocks included in\n> > the parity calculation) then write 2 (the new data block and the new\n> > parity data) Caching can help some, but if your data ends up being any\n> > size at all, the cache misses become more frequent than the hits. Even\n> > when caching helps, you max speed is still only the speed of a single\n> > disk.\n> > \n> If you have 4 disks, it can do either:\n> \n> 1) Read the old block, read the parity block, XOR the old block with \n> the parity block and the new block resulting in the new parity block, \n> write both the new parity block and the new block.\n> 2) Read the two unknown blocks, XOR with the new block resulting in \n> the new parity block, write both the new parity block and the new block.\n> \n> You are emphasizing 2 - but the scenario is also overly simplistic. \n> Imagine you had 10 drives on RAID 5. Would it make more sense to read 8 \n> blocks and then write two (option 2, and the one you describe), or read \n> two blocks and then write two (option 1). Obviously, if option 1 or \n> option 2 can be satisfied from cache, it is better to not read at all.\n\nGood point that I wasn't aware of.\n\n> I note that you also disagree with Dave, in that you are not claiming it \n> performs consistency checks on read. No system does this as performance \n> would go to the crapper.\n\nI call straw man :)\n\nI don't disagree. I simply don't know. There's no reason why it _couldn't_\ndo consistency checking as it ran ... of course, performance would suck.\n\nGenerally what you expect out of RAID 5|6 is that it can rebuild a drive\nin the event of a failure, so I doubt if anyone does consistency checking\nby default, and I wouldn't be surprised if a lot of systems don't have\nthe option to do it at all.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Wed, 26 Dec 2007 17:16:08 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Bill Moran wrote:\n>\n>> What do you mean \"heard of\"? Which raid system do you know of that reads \n>> all drives for RAID 1?\n>> \n>\n> I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\n> consistency checking at that point.\n> \nAccording to this:\n\nhttp://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=8&manpath=FreeBSD+6-current&format=html\n\nThere is a -b (balance) option that seems pretty clear that it does not \nread from all drives if it does not have to:\n\n Create a mirror. \n The order of components is important,\n because a component's priority is based on its position\n (starting from 0). The component with the biggest priority\n is used by the prefer balance algorithm and is also used as a\n master component when resynchronization is needed, e.g. after\n a power failure when the device was open for writing.\n\n Additional options include:\n\n *-b* /balance/ Specifies balance algorithm to use, one of:\n\n *load* Read from the component with the\n lowest load.\n\n *prefer* Read from the component with the\n biggest priority.\n\n *round-robin* Use round-robin algorithm when\n choosing component to read.\n\n *split* Split read requests, which are big-\n ger than or equal to slice size on N\n pieces, where N is the number of\n active components. This is the\n default balance algorithm.\n\n\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nBill Moran wrote:\n\n\nWhat do you mean \"heard of\"? Which raid system do you know of that reads \nall drives for RAID 1?\n \n\n\nI'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\nconsistency checking at that point.\n \n\nAccording to this:\n\nhttp://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=8&manpath=FreeBSD+6-current&format=html\n\nThere is a -b (balance) option that seems pretty clear that it does not\nread from all drives if it does not have to:\n\n Create a mirror. \n The order of components is important,\n because a component's priority is based on its position\n (starting from 0). The component with the biggest priority\n is used by the prefer balance algorithm and is also used as a\n master component when resynchronization is needed, e.g. after\n a power failure when the device was open for writing.\n Additional options include:\n\n -b balance Specifies balance algorithm to use, one of:\n\n load Read from the component with the\n lowest load.\n\n prefer Read from the component with the\n biggest priority.\n\n round-robin Use round-robin algorithm when\n choosing component to read.\n\n split Split read requests, which are big-\n ger than or equal to slice size on N\n pieces, where N is the number of\n active components. This is the\n default balance algorithm.\n\n\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Wed, 26 Dec 2007 17:21:14 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
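The balance modes quoted from the gmirror man page are policies for choosing which mirror leg services a given read. Below is a toy model of two of them (round-robin and prefer) purely to illustrate the idea; it is not gmirror's code, and the device names are made up.

```python
import itertools

class ToyMirror:
    """Pick one leg of an N-way mirror to service each read request."""
    def __init__(self, legs):
        self.legs = legs                     # e.g. hypothetical ["ada0", "ada1"]
        self._rr = itertools.cycle(range(len(legs)))

    def read_round_robin(self):
        return self.legs[next(self._rr)]     # alternate between legs

    def read_prefer(self):
        return self.legs[0]                  # always the highest-priority leg in this toy

m = ToyMirror(["ada0", "ada1"])
print([m.read_round_robin() for _ in range(4)])  # ['ada0', 'ada1', 'ada0', 'ada1']
print([m.read_prefer() for _ in range(4)])       # ['ada0', 'ada0', 'ada0', 'ada0']
```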
{
"msg_contents": "[email protected] wrote:\n> however I was addressing the point that for reads you can't do any \n> checking until you have read in all the blocks.\n> if you never check the consistency, how will it ever be proven otherwise.\nA scheme often used is to mark the disk/slice as \"clean\" during clean \nsystem shutdown (or RAID device shutdown). When it comes back up, it is \nassumed clean. Why wouldn't it be clean?\n\nHowever, if it comes up \"unclean\", this does indeed require an EXPENSIVE \nresynchronization process. Note, however, that resynchronization usually \nreads or writes all disks, whether RAID 1, RAID 5, RAID 6, or RAID 1+0. \nMy RAID 1+0 does a full resynchronization if shut down uncleanly. There \nis nothing specific about RAID 5 here.\n\nNow, technically - none of these RAID levels requires a full \nresynchronization, even though it is almost always recommended and \nperformed by default. There is an option in Linux software RAID (mdadm) \nto \"skip\" the resynchronization process. The danger here is that you \ncould read one of the blocks this minute and get one block, and read the \nsame block a different minute, and get a different block. This would \noccur in RAID 1 if it did round-robin or disk with the nearest head to \nthe desired block, or whatever, and it made a different decision before \nand after the minute. What is the worst that can happen though? Any \nsystem that does careful journalling / synchronization should usually be \nfine. The \"risk\" is similar to write caching without battery backing, in \nthat if the drive tells the system \"write complete\", and the system goes \non to perform other work, but the write is not complete, then corruption \nbecomes a possibility.\n\nAnyways - point is again that RAID 5 is not special here.\n\n> but for your application, the fact that you are doing lots of fsyncs \n> is what's killing you, becouse the fsync forces a lot of data to be \n> written out, swamping the caches involved, and requiring that you wait \n> for seeks. nothing other then a battery backed disk cache of some sort \n> (either on the controller or a solid-state drive on a journaled \n> filesystem would work)\nYep. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Wed, 26 Dec 2007 17:33:07 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "On Wed, 26 Dec 2007, [email protected] wrote:\n\n> yes, the two linux software implementations only read from one disk, but I \n> have seen hardware implementations where it reads from both drives, and if \n> they disagree it returns a read error rather then possibly invalid data (it's \n> up to the admin to figure out which drive is bad at that point).\n\nRight, many of the old implementations did that; even the Wikipedia \narticle on this subject mentions it in the \"RAID 1 performance\" section: \nhttp://en.wikipedia.org/wiki/Standard_RAID_levels\n\nThe thing that changed is on modern drives, the internal error detection \nand correction is good enough that if you lose a sector, the drive will \nnormally figure that out at the firmware level and return a read error \nrather than bad data. That lowers of the odds of one drive becoming \ncorrupted and returning a bad sector as a result enough that the overhead \nof reading from both drives isn't considered as important. I'm not aware \nof a current card that does that but I wouldn't be surprised to discover \none existed.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 26 Dec 2007 17:40:06 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Fernando Hevia wrote:\n\n>> David Lang Wrote:\n>>\n>> with only four drives the space difference between raid 1+0 and raid 5\n>> isn't that much, but when you do a write you must write to two drives (the\n>> drive holding the data you are changing, and the drive that holds the\n>> parity data for that stripe, possibly needing to read the old parity data\n>> first, resulting in stalling for seek/read/calculate/seek/write since\n>> the drive moves on after the read), when you read you must read _all_\n>> drives in the set to check the data integrity.\n>\n> Thanks for the explanation David. It's good to know not only what but also\n> why. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be\n> read: the one with the data and the parity disk?\n\nno, becouse the parity is of the sort (A+B+C+P) mod X = 0\n\nso if X=10 (which means in practice that only the last decimal digit of \nanything matters, very convienient for examples)\n\nA=1, B=2, C=3, A+B+C=6, P=4, A+B+C+P=10=0\n\nif you read B and get 3 and P and get 4 you don't know if this is right or \nnot unless you also read A and C (at which point you would get \nA+B+C+P=11=1=error)\n\n>> for seek heavy workloads (which almost every database application is) the\n>> extra seeks involved can be murder on your performance. if your workload\n>> is large sequential reads/writes, and you can let the OS buffer things for\n>> you, the performance of raid 5 is much better.\n>\n> Well, actually most of my application involves large sequential\n> reads/writes. The memory available for buffering (4GB) isn't bad either, at\n> least for my scenario. On the other hand I have got such strong posts\n> against RAID 5 that I doubt to even consider it.\n\nin theory a system could get the same performance with a large sequential \nread/write on raid5/6 as on a raid0 array of equivilent size (i.e. same \nnumber of data disks, ignoring the parity disks) becouse the OS could read \nthe entire stripe in at once, do the calculation once, and use all the \ndata (or when writing, don't write anything until you are ready to write \nthe entire stripe, calculate the parity and write everything once).\n\nUnfortunantly in practice filesystems don't support this, they don't do \nenough readahead to want to keep the entire stripe (so after they read it \nall in they throw some of it away), they (mostly) don't know where a \nstripe starts (and so intermingle different types of data on one stripe \nand spread data across multiple stripes unessasarily), and they tend to do \nwrites in small, scattered chunks (rather then flushing an entire stripes \nworth of data at once)\n\nthose who have been around long enough to remember the days of MFM/RLL \n(when you could still find the real layout of the drives) may remember \noptmizing things to work a track at a time instead of a sector at a time. 
\nthis is the exact same logic, just needing to be applied to drive stripes \ninstead of sectors and tracks on a single drive.\n\nthe issue has been raised with the kernel developers, but there's a lot of \nwork to be done (especially in figuring out how to get all the layers the \ninfo they need in a reasonable way)\n\n>> Linux software raid can do more then two disks in a mirror, so you may be\n>> able to get the added protection with raid 1 sets (again, probably not\n>> relavent to four drives), although there were bugs in this within the last\n>> six months or so, so you need to be sure your kernel is new enough to have\n>> the fix.\n>>\n>\n> Well, here rises another doubt. Should I go for a single RAID 1+0 storing OS\n> + Data + WAL files or will I be better off with two RAID 1 separating data\n> from OS + Wal files?\n\nif you can afford the space, you are almost certinly better seperating the \nWAL from the data (I think I've seen debates about which is better \nOS+data/Wal or date/OS+Wal, but very little disagreement that either is \nbetter than combining them all)\n\nDavid Lang\n\n>> now, if you can afford solid-state drives which don't have noticable seek\n>> times, things are completely different ;-)\n>\n> Ha, sadly budget is very tight. :)\n>\n> Regards,\n> Fernando.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n",
"msg_date": "Wed, 26 Dec 2007 14:52:20 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
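David's decimal analogy can be run directly. Note that real RAID 5 parity is an XOR rather than a mod-10 sum, so this only restates his worked example of why a consistency check needs the whole stripe.

```python
X = 10   # work only in the last decimal digit, as in the example

def parity(blocks):
    """Choose P so that (A + B + C + P) mod X == 0."""
    return (-sum(blocks)) % X

A, B, C = 1, 2, 3
P = parity([A, B, C])                 # P == 4, and (1 + 2 + 3 + 4) mod 10 == 0

# Reading only B and P proves nothing; the whole stripe is needed for the check.
corrupted = [A, 3, C, P]              # B silently changed from 2 to 3
print(sum(corrupted) % X)             # 1 rather than 0, i.e. the error from the example
```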
{
"msg_contents": "On Wed, 26 Dec 2007, Florian Weimer wrote:\n\n>> seek/read/calculate/seek/write since the drive moves on after the\n>> read), when you read you must read _all_ drives in the set to check\n>> the data integrity.\n>\n> I don't know of any RAID implementation that performs consistency\n> checking on each read operation. 8-(\n\nI could see a raid 1 array not doing consistancy checking (after all, it \nhas no way of knowing what's right if it finds an error), but since raid \n5/6 can repair the data I would expect them to do the checking each time.\n\nDavid Lang\n",
"msg_date": "Wed, 26 Dec 2007 14:54:15 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Mark Mielke wrote:\n\n> Florian Weimer wrote:\n>>> seek/read/calculate/seek/write since the drive moves on after the\n>>> read), when you read you must read _all_ drives in the set to check\n>>> the data integrity.\n>>> \n>> I don't know of any RAID implementation that performs consistency\n>> checking on each read operation. 8-(\n>> \n>\n> Dave had too much egg nog... :-)\n>\n> Yep - checking consistency on read would eliminate the performance benefits \n> of RAID under any redundant configuration.\n\nexcept for raid0, raid is primarily a reliability benifit, any performance \nbenifit is incidental, not the primary purpose.\n\nthat said, I have heard of raid1 setups where it only reads off of one of \nthe drives, but I have not heard of higher raid levels doing so.\n\nDavid Lang\n",
"msg_date": "Wed, 26 Dec 2007 15:05:50 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Mark Mielke wrote:\n\n> [email protected] wrote:\n>>> Thanks for the explanation David. It's good to know not only what but also\n>>> why. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be\n>>> read: the one with the data and the parity disk?\n>> no, becouse the parity is of the sort (A+B+C+P) mod X = 0\n>> so if X=10 (which means in practice that only the last decimal digit of \n>> anything matters, very convienient for examples)\n>> A=1, B=2, C=3, A+B+C=6, P=4, A+B+C+P=10=0\n>> if you read B and get 3 and P and get 4 you don't know if this is right or \n>> not unless you also read A and C (at which point you would get \n>> A+B+C+P=11=1=error)\n> I don't think this is correct. RAID 5 is parity which is XOR. The property of \n> XOR is such that it doesn't matter what the other drives are. You can write \n> any block given either: 1) The block you are overwriting and the parity, or \n> 2) all other blocks except for the block we are writing and the parity. Now, \n> it might be possible that option 2) is taken more than option 1) for some \n> complicated reasons, but it is NOT to check consistency. The array is assumed \n> consistent until proven otherwise.\n\nI was being sloppy in explaining the reason, you are correct that for \nwrites you don't need to read all the data, you just need the current \nparity block, the old data you are going to replace, and the new data to \nbe able to calculate the new parity block (and note that even with my \nchecksum example this would be the case).\n\nhowever I was addressing the point that for reads you can't do any \nchecking until you have read in all the blocks.\n\nif you never check the consistency, how will it ever be proven otherwise.\n\n>> in theory a system could get the same performance with a large sequential \n>> read/write on raid5/6 as on a raid0 array of equivilent size (i.e. same \n>> number of data disks, ignoring the parity disks) becouse the OS could read \n>> the entire stripe in at once, do the calculation once, and use all the data \n>> (or when writing, don't write anything until you are ready to write the \n>> entire stripe, calculate the parity and write everything once).\n> For the same number of drives, this cannot be possible. With 10 disks, on \n> raid5, 9 disks hold data, and 1 holds parity. 
The theoretical maximum \n> performance is only 9/10 of the 10/10 performance possible with RAID 0.\n\nI was saying that a 10 drive raid0 could be the same performance as a 10+1 \ndrive raid 5 or a 10+2 drive raid 6 array.\n\nthis is why I said 'same number of data disks, ignoring the parity disks'.\n\nin practice you would probably not do quite this good anyway (you have the \nparity calculation to make and the extra drive or two's worth of data \npassing over your busses), but it could be a lot closer then any \nimplementation currently is.\n\n>> Unfortunantly in practice filesystems don't support this, they don't do \n>> enough readahead to want to keep the entire stripe (so after they read it \n>> all in they throw some of it away), they (mostly) don't know where a stripe \n>> starts (and so intermingle different types of data on one stripe and spread \n>> data across multiple stripes unessasarily), and they tend to do writes in \n>> small, scattered chunks (rather then flushing an entire stripes worth of \n>> data at once)\n> In my experience, this theoretical maximum is not attainable without \n> significant write cache, and an intelligent controller, neither of which \n> Linux software RAID seems to have by default. My situation was a bit worse in \n> that I used applications that fsync() or journalled metadata that is ordered, \n> which forces the Linux software RAID to flush far more than it should - but \n> the same system works very well with RAID 1+0.\n\nmy statements above apply to any type of raid implementation, hardware or \nsoftware.\n\nthe thing that saves the hardware implementation is that the data is \nwritten to a battery-backed cache and the controller lies to the system, \ntelling it that the write is complete, and then it does the write later.\n\non a journaling filesystem you could get very similar results if you put \nthe journal on a solid-state drive.\n\nbut for your application, the fact that you are doing lots of fsyncs is \nwhat's killing you, becouse the fsync forces a lot of data to be written \nout, swamping the caches involved, and requiring that you wait for seeks. \nnothing other then a battery backed disk cache of some sort (either on the \ncontroller or a solid-state drive on a journaled filesystem would work)\n\nDavid Lang\n\n",
"msg_date": "Wed, 26 Dec 2007 15:34:35 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
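The full-stripe point above can be put in rough numbers: if the OS or a battery-backed cache can buffer an entire stripe before flushing, the parity is computed once and no reads are needed at all. The counts below assume one parity drive and nothing useful already in cache.

```python
def per_block_writes(n_data_blocks):
    """Each block written on its own via read-modify-write."""
    reads = 2 * n_data_blocks        # old data block + old parity for every write
    writes = 2 * n_data_blocks       # new data block + new parity for every write
    return reads, writes

def full_stripe_write(n_data_blocks):
    """Whole stripe buffered first: compute parity once, write everything in one pass."""
    return 0, n_data_blocks + 1      # no reads; all data blocks plus one parity block

for n in (3, 9):                     # stripe widths for a 4-drive and a 10-drive RAID 5
    print(n, "data blocks:", per_block_writes(n), "vs", full_stripe_write(n))
```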
{
"msg_contents": "On Wed, 26 Dec 2007, Mark Mielke wrote:\n\n> [email protected] wrote:\n>> On Wed, 26 Dec 2007, Mark Mielke wrote:\n>> \n>>> Florian Weimer wrote:\n>>>>> seek/read/calculate/seek/write since the drive moves on after the\n>>>>> read), when you read you must read _all_ drives in the set to check\n>>>>> the data integrity.\n>>>> I don't know of any RAID implementation that performs consistency\n>>>> checking on each read operation. 8-(\n>>> Dave had too much egg nog... :-)\n>>> Yep - checking consistency on read would eliminate the performance \n>>> benefits of RAID under any redundant configuration.\n>> except for raid0, raid is primarily a reliability benifit, any performance \n>> benifit is incidental, not the primary purpose.\n>> that said, I have heard of raid1 setups where it only reads off of one of \n>> the drives, but I have not heard of higher raid levels doing so.\n> What do you mean \"heard of\"? Which raid system do you know of that reads all \n> drives for RAID 1?\n>\n> Linux dmraid reads off ONLY the first. Linux mdadm reads off the \"best\" one. \n> Neither read from both. Why should it need to read from both? What will it do \n> if the consistency check fails? It's not like it can tell which disk is the \n> right one. It only knows that the whole array is inconsistent. Until it gets \n> an actual hardware failure (read error, write error), it doesn't know which \n> disk is wrong.\n\nyes, the two linux software implementations only read from one disk, but I \nhave seen hardware implementations where it reads from both drives, and if \nthey disagree it returns a read error rather then possibly invalid data \n(it's up to the admin to figure out which drive is bad at that point).\n\nno, I don't remember which card this was. I've been playing around with \nthings in this space for quite a while.\n\nDavid Lang\n",
"msg_date": "Wed, 26 Dec 2007 15:36:38 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "On Wed, 26 Dec 2007, Mark Mielke wrote:\n\n> [email protected] wrote:\n>> I could see a raid 1 array not doing consistancy checking (after all, it \n>> has no way of knowing what's right if it finds an error), but since raid \n>> 5/6 can repair the data I would expect them to do the checking each time.\n> Your messages are spread across the thread. :-)\n>\n> RAID 5 cannot repair the data. I don't know much about RAID 6, but I expect \n> it cannot necessarily repair the data either. It still doesn't know which \n> drive is wrong. In any case, there is no implementation I am aware of that \n> performs mandatory consistency checks on read. This would be silliness.\n\nsorry, raid 5 can repair data if it knows which chunk is bad (the same way \nit can rebuild a drive). Raid 6 does something slightly different for it's \nparity, I know it can recover from two drives going bad, but I haven't \nlooked into the question of it detecting bad data.\n\nDavid Lang\n",
"msg_date": "Wed, 26 Dec 2007 15:38:34 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Fernando Hevia wrote:\n\nI'll start a little ways back first -\n\n> Well, here rises another doubt. Should I go for a single RAID 1+0 storing OS\n> + Data + WAL files or will I be better off with two RAID 1 separating data\n> from OS + Wal files?\n\nearlier you wrote -\n> Database will be about 30 GB in size initially and growing 10 GB per year.\n> Data is inserted overnight in two big tables and during the day mostly\n> read-only queries are run. Parallelism is rare.\n\nNow if the data is added overnight while no-one is using the server then \nreading is where you want performance, provided any degradation in \nwriting doesn't slow down the overnight data loading enough to make it \ntoo long to finish while no-one else is using it.\n\nSo in theory the only time you will have an advantage of having WAL on a \nseparate disk from data is at night when the data is loading itself (I \nam assuming this is an automated step)\nBut *some*? gains can be made from having the OS separate from the data.\n\n\n\n\n(This is for a theoretical discussion challenging the info/rumors that \nabounds about RAID setups) not to start a bitch fight or flame war.\n\n\nSo for the guys who know the intricacies of RAID implementation -\n\nI don't have any real world performance measures here.\n\nFor a setup that is only reading from disk (Santa sprinkles the data \ndown the air vent while we are all snug in our bed)\n\nIt has been mentioned that raid drivers/controllers can balance the \nworkload across the different disks - as Mark mentioned from the FreeBSD \n6 man pages - the balance option can be set to\nload|prefer|round-robin|split\n\nSo in theory a modern RAID 1 setup can be configured to get similar read \nspeeds as RAID 0 but would still drop to single disk speeds (or similar) \nwhen writing, but RAID 0 can get the faster write performance.\n\nSo in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could \ndeliver 1200MB/s of data to RAM, which is also assuming that all 4 \nchannels have their own data path to RAM and aren't sharing.\n(anyone know how segregated the on board controllers such as these are?)\n(do some pci controllers offer better throughput?)\n\nWe all know that doesn't happen in the real world ;-) Let's say we are \nrestricted to 80% - 1000MB/s - and some of that (10%) gets used by the \nsystem - so we end up with 900MB/s delivered off disk to postgres - that \nwould still be more than the perfect rate at which 2x 300MB/s drives can \ndeliver.\n\nSo in this situation - if configured correctly with a good controller \n(driver for software RAID etc) a single 4 disk RAID 1+0 could outperform \ntwo 2 disk RAID 1 setups with data/OS+WAL split between the two.\n\nIs the real world speeds so different that this theory is real fantasy \nor has hardware reached a point performance wise where this is close to \nfact??\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Thu, 27 Dec 2007 14:04:10 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
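Shane's back-of-the-envelope figures can be parameterised as below. Keep in mind that 300 MB/s is roughly the SATA interface rate rather than what a drive sustains (the follow-up message revisits that), so the inputs here are placeholders rather than expected real-world numbers.

```python
def usable_throughput(per_drive_mb_s, n_drives, bus_efficiency, system_share):
    """Very rough sequential bandwidth left over for the database."""
    raw = per_drive_mb_s * n_drives
    return raw * bus_efficiency * (1.0 - system_share)

# Shane's inputs: 4 x 300 MB/s, ~80% efficiency, ~10% used by the system.
print(usable_throughput(300, 4, 0.80, 0.10))   # 864 MB/s, in the ballpark of the ~900 figure
# With a more conservative ~60 MB/s sustained per drive the picture changes a lot:
print(usable_throughput(60, 4, 0.80, 0.10))    # about 173 MB/s
```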
{
"msg_contents": "On Thu, 27 Dec 2007, Shane Ambler wrote:\n\n> So in theory a modern RAID 1 setup can be configured to get similar read \n> speeds as RAID 0 but would still drop to single disk speeds (or similar) when \n> writing, but RAID 0 can get the faster write performance.\n\nThe trick is, you need a perfect controller that scatters individual reads \nevenly across the two disks as sequential reads move along the disk to \npull this off, bouncing between a RAID 1 pair to use all the bandwidth \navailable. There are caches inside the disk, read-ahead strategies as \nwell, and that all has to line up just right for a single client to get \nall the bandwidth. Real-world disks and controllers don't quite behave \nwell enough for that to predictably deliver what you might expect from \ntheory. With RAID 0, getting the full read speed of 2Xsingle drive is \nmuch more likely to actually happen than in RAID 1.\n\n> So in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could \n> deliver 1200MB/s of data to RAM, which is also assuming that all 4 \n> channels have their own data path to RAM and aren't sharing. (anyone \n> know how segregated the on board controllers such as these are?) (do \n> some pci controllers offer better throughput?)\n\nOK, first off, beyond the occasional trivial burst you'll be hard pressed \nto ever sustain over 60MB/s out of any single SATA drive. So the \ntheoretical max 4-channel speed is closer to 240MB/s.\n\nA regular PCI bus tops out at a theoretical 133MB/s, and you sure can \nsaturate one with 4 disks and a good controller. This is why server \nconfigurations have controller cards that use PCI-X (1024MB/s) or lately \nPCI-e aka PCI/Express (250MB/s for each channel with up to 16 being \ncommon). If your SATA cards are on a motherboard, that's probably using \nsome integrated controller via the Southbridge AKA the ICH. That's \nprobably got 250MB/s or more and in current products can easily outrun \nmost sets of disks you'll ever connect. Even on motherboards that support \n8 SATA channels it will be difficult for anything else on the system to go \nhigher than 250MB/s even if the drives could potentially do more, and once \nyou're dealing with real-world workloads.\n\nIf you have multiple SATA controllers each with their own set of disk, \nthen you're back to having to worry about the bus limits. So, yes, there \nare bus throughput considerations here, but unless you're building a giant \narray or using some older bus technology you're unlikely to hit them with \nspinning SATA disks.\n\n> We all know that doesn't happen in the real world ;-) Let's say we are \n> restricted to 80% - 1000MB/s\n\nYeah, as mentioned above it's actually closer to 20%.\n\nWhile your numbers are off by a bunch, the reality for database use means \nthese computations don't matter much anyway. The seek related behavior \ndrives a lot of this more than sequential throughput, and decisions like \nwhether to split out the OS or WAL or whatever need to factor all that, \nrather than just the theoretical I/O.\n\nFor example, one reason it's popular to split the WAL onto another disk is \nthat under normal operation the disk never does a seek. So if there's a \ndedicated disk for that, the disk just writes but never moves much. \nWhere if the WAL is shared, the disk has to jump between writing that data \nand whatever else is going on, and peak possible WAL throughput is waaaay \nslower because of those seeks. 
(Note that unless you have a bunch of \ndisks, your WAL is unlikely to be a limiter anyway so you still may not \nwant to make it separate).\n\n(This topic so badly needs a PostgreSQL specific FAQ)\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 26 Dec 2007 23:10:46 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
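Taking the rough figures from this message (about 60 MB/s sustained per SATA drive, plus the theoretical bus ceilings quoted), a quick division shows how few spinning disks it takes before the bus, not the drives, becomes the limit. These are the same theoretical numbers, not measurements.

```python
DRIVE_MB_S = 60                       # rough sustained rate of one SATA drive

buses_mb_s = {
    "PCI (133 MB/s)": 133,
    "PCI-X (1024 MB/s)": 1024,
    "PCIe x16 (16 x 250 MB/s)": 16 * 250,
}

for name, bandwidth in buses_mb_s.items():
    drives = bandwidth // DRIVE_MB_S  # whole drives before the bus saturates
    print(f"{name}: about {drives} drives")
# Prints roughly 2, 17 and 66, matching the per-bus disk counts quoted later in the thread.
```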
{
"msg_contents": "Shane Ambler wrote:\n> So in theory a modern RAID 1 setup can be configured to get similar \n> read speeds as RAID 0 but would still drop to single disk speeds (or \n> similar) when writing, but RAID 0 can get the faster write performance.\n\nUnfortunately, it's a bit more complicated than that. RAID 1 has a \nsequential read problem, as read-ahead is wasted, and you may as well \nread from one disk and ignore the others. RAID 1 does, however, allows \nfor much greater concurrency. 4 processes on a 4 disk RAID 1 system can, \ntheoretically, each do whatever they want, without impacting each other. \nDatabase loads involving a single active read user will see greater \nperformance with RAID 0. Database loads involving multiple concurrent \nactive read users will see greater performance with RAID 1. All of these \nassume writes are not being performed to any great significance. Even \nsingle writes cause all disks in a RAID 1 system to synchronize \ntemporarily eliminating the read benefit. RAID 0 allows some degree of \nconcurrent reads and writes occurring at the same time (assuming even \ndistribution of the data across the devices). Of course, RAID 0 systems \nhave an expected life that decreases as the number of disks in the \nsystem increase.\n\nSo, this is where we get to RAID 1+0. Redundancy, good read performance, \ngood write performance, relatively simple implementation. For a mere \ncost of double the number of disk storage,\nyou can get around the problems of RAID 1 and the problems of RAID 0. :-)\n\n> So in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could \n> deliver 1200MB/s of data to RAM, which is also assuming that all 4 \n> channels have their own data path to RAM and aren't sharing.\n> (anyone know how segregated the on board controllers such as these are?)\n> (do some pci controllers offer better throughput?)\n> We all know that doesn't happen in the real world ;-) Let's say we are \n> restricted to 80% - 1000MB/s - and some of that (10%) gets used by the \n> system - so we end up with 900MB/s delivered off disk to postgres - \n> that would still be more than the perfect rate at which 2x 300MB/s \n> drives can deliver.\n\nI expect you would have to have good hardware, and a well tuned system \nto see 80%+ theoretical for common work loads. But then, this isn't \nunique to RAID. Even in a single disk system, one has trouble achieving \n80%+ theoretical. :-)\n\nI achieve something closer to +20% - +60% over the theoretical \nperformance of a single disk with my four disk RAID 1+0 partitions. Lots \nof compromises in my system though that I won't get into. For me, I \nvalue the redundancy, allowing for a single disk to fail and giving me \ntime to easily recover, but for the cost of two more disks, I am able to \ncounter the performance cost of redundancy, and actually see a positive \nperformance effect instead.\n\n> So in this situation - if configured correctly with a good controller \n> (driver for software RAID etc) a single 4 disk RAID 1+0 could \n> outperform two 2 disk RAID 1 setups with data/OS+WAL split between the \n> two.\n> Is the real world speeds so different that this theory is real fantasy \n> or has hardware reached a point performance wise where this is close \n> to fact??\nI think it depends on the balance. If every second operation requires a \nWAL write, having separate might make sense. However, if the balance is \nless than even, one would end up with one of the 2 disk RAID 1 setups \nbeing more idle than the other. 
It's not an exact science when it comes \nto the various compromises being made. :-)\n\nIf you can only put 4 disks in to the system (either cost, or because of \nthe system size), I would suggest RAID 1+0 on all four as sensible \ncompromise. If you can put more in - start to consider breaking it up. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n",
"msg_date": "Wed, 26 Dec 2007 23:36:27 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Mark Mielke wrote:\n> Shane Ambler wrote:\n>> So in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could\n>> deliver 1200MB/s of data to RAM, which is also assuming that all 4\n>> channels have their own data path to RAM and aren't sharing. \n>> (anyone know how segregated the on board controllers such as these\n>> are?)\n >> (do some pci controllers offer better throughput?)\n >> We all know that doesn't happen in the real world ;-) Let's say we\n >> are restricted to 80% - 1000MB/s - and some of that (10%) gets used\n >> by the system - so we end up with 900MB/s delivered off disk to\n>> postgres - that would still be more than the perfect rate at which\n>> 2x 300MB/s drives can deliver.\n> \n> I achieve something closer to +20% - +60% over the theoretical \n> performance of a single disk with my four disk RAID 1+0 partitions.\n\nIf a good 4 disk SATA RAID 1+0 can achieve 60% more throughput than a \nsingle SATA disk, what sort of percentage can be achieved from a good \nSCSI controller with 4 disks in RAID 1+0?\n\nAre we still hitting the bus limits at this point or can a SCSI RAID \nstill outperform in raw data throughput?\n\nI would still think that SCSI would still provide the better reliability \nthat it always has, but performance wise is it still in front of SATA?\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Thu, 27 Dec 2007 17:46:38 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Greg Smith wrote:\n> On Thu, 27 Dec 2007, Shane Ambler wrote:\n> \n>> So in theory a modern RAID 1 setup can be configured to get similar \n>> read speeds as RAID 0 but would still drop to single disk speeds (or \n>> similar) when writing, but RAID 0 can get the faster write performance.\n> \n> The trick is, you need a perfect controller that scatters individual \n> reads evenly across the two disks as sequential reads move along the \n> disk to pull this off, bouncing between a RAID 1 pair to use all the \n> bandwidth available. There are caches inside the disk, read-ahead \n> strategies as well, and that all has to line up just right for a single \n> client to get all the bandwidth. Real-world disks and controllers don't \n> quite behave well enough for that to predictably deliver what you might \n> expect from theory. With RAID 0, getting the full read speed of \n> 2Xsingle drive is much more likely to actually happen than in RAID 1.\n\nKind of makes the point for using 1+0\n\n>> So in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could \n>> deliver 1200MB/s of data to RAM, which is also assuming that all 4 \n>> channels have their own data path to RAM and aren't sharing.\n\n> OK, first off, beyond the occasional trivial burst you'll be hard \n> pressed to ever sustain over 60MB/s out of any single SATA drive. So \n> the theoretical max 4-channel speed is closer to 240MB/s.\n> \n> A regular PCI bus tops out at a theoretical 133MB/s, and you sure can \n> saturate one with 4 disks and a good controller. This is why server \n> configurations have controller cards that use PCI-X (1024MB/s) or lately \n> PCI-e aka PCI/Express (250MB/s for each channel with up to 16 being \n> common). If your SATA cards are on a motherboard, that's probably using \n\nSo I guess as far as performance goes your motherboard will determine \nhow far you can take it.\n\n(talking from a db only server view on things)\n\nA PCI system will have little benefit from more than 2 disks but would \nneed 4 to get both reliability and performance.\n\nPCI-X can benefit from up to 17 disks\n\nPCI-e (with 16 channels) can benefit from 66 disks\n\nThe trick there will be dividing your db over a large number of disk \nsets to balance the load among them (I don't see 66 disks being setup in \none array), so this would be of limited use to anyone but the most \ndedicated DBA's.\n\nFor most servers these days the number of disks are added to reach a \nperformance level not a storage requirement.\n\n> While your numbers are off by a bunch, the reality for database use \n> means these computations don't matter much anyway. The seek related \n> behavior drives a lot of this more than sequential throughput, and \n> decisions like whether to split out the OS or WAL or whatever need to \n> factor all that, rather than just the theoretical I/O.\n> \n\nSo this is where solid state disks come in - lack of seek times \n(basically) means they can saturate your bus limits.\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Thu, 27 Dec 2007 18:09:51 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "In response to Mark Mielke <[email protected]>:\n\n> Bill Moran wrote:\n> >\n> >> What do you mean \"heard of\"? Which raid system do you know of that reads \n> >> all drives for RAID 1?\n> >> \n> >\n> > I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\n> > consistency checking at that point.\n> > \n> According to this:\n> \n> http://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=8&manpath=FreeBSD+6-current&format=html\n> \n> There is a -b (balance) option that seems pretty clear that it does not \n> read from all drives if it does not have to:\n\n From where did you draw that conclusion? Note that the \"split\" algorithm\n(which is the default) divides requests up among multiple drives. I'm\nunclear as to how you reached a conclusion opposite of what the man page\nsays -- did you test and find it not to work?\n\n> \n> Create a mirror. \n> The order of components is important,\n> because a component's priority is based on its position\n> (starting from 0). The component with the biggest priority\n> is used by the prefer balance algorithm and is also used as a\n> master component when resynchronization is needed, e.g. after\n> a power failure when the device was open for writing.\n> \n> Additional options include:\n> \n> *-b* /balance/ Specifies balance algorithm to use, one of:\n> \n> *load* Read from the component with the\n> lowest load.\n> \n> *prefer* Read from the component with the\n> biggest priority.\n> \n> *round-robin* Use round-robin algorithm when\n> choosing component to read.\n> \n> *split* Split read requests, which are big-\n> ger than or equal to slice size on N\n> pieces, where N is the number of\n> active components. This is the\n> default balance algorithm.\n> \n> \n> \n> Cheers,\n> mark\n> \n> -- \n> Mark Mielke <[email protected]>\n> \n> \n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n",
"msg_date": "Thu, 27 Dec 2007 08:50:09 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Bill Moran wrote:\n> In response to Mark Mielke <[email protected]>:\n>\n> \n>> Bill Moran wrote:\n>> \n>>> I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\n>>> consistency checking at that point.\n>>> \n>> According to this:\n>>\n>> http://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=8&manpath=FreeBSD+6-current&format=html\n>>\n>> There is a -b (balance) option that seems pretty clear that it does not \n>> read from all drives if it does not have to:\n>> \n>\n> >From where did you draw that conclusion? Note that the \"split\" algorithm\n> (which is the default) divides requests up among multiple drives. I'm\n> unclear as to how you reached a conclusion opposite of what the man page\n> says -- did you test and find it not to work?\n> \nPerhaps you and I are speaking slightly different languages? :-) When I \nsay \"does not read from all drives\", I mean \"it will happily read from \nany of the drives to satisfy the request, and allows some level of \nconfiguration as to which drive it will select. It does not need to read \nall of the drives to satisfy the request.\"\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nBill Moran wrote:\n\nIn response to Mark Mielke <[email protected]>:\n\n \n\nBill Moran wrote:\n \n\nI'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\nconsistency checking at that point.\n \n\nAccording to this:\n\nhttp://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=8&manpath=FreeBSD+6-current&format=html\n\nThere is a -b (balance) option that seems pretty clear that it does not \nread from all drives if it does not have to:\n \n\n\n>From where did you draw that conclusion? Note that the \"split\" algorithm\n(which is the default) divides requests up among multiple drives. I'm\nunclear as to how you reached a conclusion opposite of what the man page\nsays -- did you test and find it not to work?\n \n\nPerhaps you and I are speaking slightly different languages? :-) When I\nsay \"does not read from all drives\", I mean \"it will happily read from\nany of the drives to satisfy the request, and allows some level of\nconfiguration as to which drive it will select. It does not need to\nread all of the drives to satisfy the request.\"\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>",
"msg_date": "Thu, 27 Dec 2007 10:24:30 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "Shane Ambler wrote:\n\n>> I achieve something closer to +20% - +60% over the theoretical \n>> performance of a single disk with my four disk RAID 1+0 partitions.\n> \n> If a good 4 disk SATA RAID 1+0 can achieve 60% more throughput than a \n> single SATA disk, what sort of percentage can be achieved from a good \n> SCSI controller with 4 disks in RAID 1+0?\n> \n> Are we still hitting the bus limits at this point or can a SCSI RAID \n> still outperform in raw data throughput?\n> \n> I would still think that SCSI would still provide the better reliability \n> that it always has, but performance wise is it still in front of SATA?\n> \nI have a SuperMicro X5DP8-G2 motherboard with two hyperthreaded\nmicroprocessors on it. This motherboard has 5 PCI-X busses (not merely 5\nsockets: in fact it has 6 sockets, but also a dual Ultra/320 SCSI controller\nchip, a dual gigabit ethernet chip.\n\nSo I hook up my 4 10,000 rpm database hard drives on one SCSI controller and\nthe two other 10,000 rpm hard drives on the other. Nothing else is on the\nSCSI controller or its PCI-X bus that goes to the main memory except the\nother SCSI controller. These PCI-X busses are 133 MHz, and the memory as 266\nMHz but the FSB runs at 533MHz as the memory modules are run in parallel;\ni.e., there are 8 modules and they run two at a time.\n\nNothing else is on the other SCSI controller. Of the two hard drives on the\nsecond controller, one has the WAL on it, but when my database is running\nsomething (it is up all the time, but frequently idle) nothing else uses\nthat drive much.\n\nSo in theory, I should be able to get about 320 megabytes/second through\neach SCSI controller, though I have never seen that. I do get over 60\nmegabytes/second for brief (several second) periods though. I do not run RAID.\n\nI think it is probably very difficult to generalize how things go without a\ngood knowledge of how the motherboard is organized, the amounts and types of\ncaching that take place (both sortware and hardware), the speeds of the\nvarious devices and their controllers, the bandwidths of the various\ncommunication paths, and so on.\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 11:00:01 up 10 days, 11:30, 2 users, load average: 4.20, 4.20, 4.25\n",
"msg_date": "Thu, 27 Dec 2007 11:18:52 -0500",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "\nOn Dec 26, 2007, at 10:21 AM, Bill Moran wrote:\n\n> I snipped the rest of your message because none of it matters. \n> Never use\n> RAID 5 on a database system. Ever. There is absolutely NO reason to\n> every put yourself through that much suffering. If you hate yourself\n> that much just commit suicide, it's less drastic.\n\nOnce you hit 14 or more spindles, the difference between RAID10 and \nRAID5 (or preferably RAID6) is minimal.\n\nIn your 4 disk scenario, I'd vote RAID10.\n\n",
"msg_date": "Thu, 27 Dec 2007 11:24:30 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "\nOn Dec 26, 2007, at 4:28 PM, [email protected] wrote:\n\n> now, if you can afford solid-state drives which don't have noticable \n> seek times, things are completely different ;-)\n\nWho makes one with \"infinite\" lifetime? The only ones I know of are \nbuilt using RAM and have disk drive backup with internal monitoring \nare *really* expensive.\n\nI've pondered building a raid enclosure using these new SATA flash \ndrives, but that would be an expensive brick after a short period as \none of my DB servers...\n\n",
"msg_date": "Thu, 27 Dec 2007 11:26:58 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
},
{
"msg_contents": "In response to Mark Mielke <[email protected]>:\n\n> Bill Moran wrote:\n> > In response to Mark Mielke <[email protected]>:\n> >\n> > \n> >> Bill Moran wrote:\n> >> \n> >>> I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing\n> >>> consistency checking at that point.\n> >>> \n> >> According to this:\n> >>\n> >> http://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=8&manpath=FreeBSD+6-current&format=html\n> >>\n> >> There is a -b (balance) option that seems pretty clear that it does not \n> >> read from all drives if it does not have to:\n> >> \n> >\n> > >From where did you draw that conclusion? Note that the \"split\" algorithm\n> > (which is the default) divides requests up among multiple drives. I'm\n> > unclear as to how you reached a conclusion opposite of what the man page\n> > says -- did you test and find it not to work?\n> > \n> Perhaps you and I are speaking slightly different languages? :-) When I \n> say \"does not read from all drives\", I mean \"it will happily read from \n> any of the drives to satisfy the request, and allows some level of \n> configuration as to which drive it will select. It does not need to read \n> all of the drives to satisfy the request.\"\n\nAhh ... I did misunderstand you.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Thu, 27 Dec 2007 11:54:44 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: With 4 disks should I go for RAID 5 or RAID 10"
}
] |
[
{
"msg_contents": "Is anyone running their production PostgreSQL server on the RHEL Cluster\nsoftware? If so, how is it working for you? My linux admin is looking at\ntrying to increase the robustness of our environment and wanting to try and\neliminate as many single points of failure as possible.\n\nSo, I am looking for any hands on experience in running PostgreSQL in some\nsort of HA cluster.\n\nThanks,\n\nChris\n\nIs anyone running their production PostgreSQL server on the RHEL Cluster software? If so, how is it working for you? My linux admin is looking at trying to increase the robustness of our environment and wanting to try and eliminate as many single points of failure as possible.\nSo, I am looking for any hands on experience in running PostgreSQL in some sort of HA cluster.Thanks,Chris",
"msg_date": "Wed, 26 Dec 2007 12:36:15 -0500",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anyone running on RHEL Cluster?"
},
{
"msg_contents": "On Dec 26, 2007 7:36 PM, Chris Hoover <[email protected]> wrote:\n\n> Is anyone running their production PostgreSQL server on the RHEL Cluster\n> software? If so, how is it working for you? My linux admin is looking at\n> trying to increase the robustness of our environment and wanting to try and\n> eliminate as many single points of failure as possible.\n>\n> So, I am looking for any hands on experience in running PostgreSQL in some\n> sort of HA cluster.\n>\n>\nI was looking for the same information previously this year (\nhttp://archives.postgresql.org/pgsql-general/2007-08/msg00910.php). At that\ntime I concluded that using drbd for data replication was probably better\nchoice than using RHCS and shared storage, but afterwards the requirements\nfor the cluster changed so that drbd was no longer a suitable choice, and we\nended up with building a RHCS cluster with shared disk and gfs.\n\nThe cluster has been working as expected and not counting a few minor\nannoyances I haven't had any problems with it (this is with RHEL 5.1). My\ninitial fear was that somehow two postmasters could be accessing the same\nset of file but after testing I have not been able to produce this (ie. the\nfencing is working). So overall I see no reason not to go with RHCS if it\nsatisifies your business requirements.\n\nRegards\n\nM\n\nOn Dec 26, 2007 7:36 PM, Chris Hoover <[email protected]> wrote:\nIs anyone running their production PostgreSQL server on the RHEL Cluster software? If so, how is it working for you? My linux admin is looking at trying to increase the robustness of our environment and wanting to try and eliminate as many single points of failure as possible.\nSo, I am looking for any hands on experience in running PostgreSQL in some sort of HA cluster.I was looking for the same information previously this year (\nhttp://archives.postgresql.org/pgsql-general/2007-08/msg00910.php). At that time I concluded that using drbd for data replication was probably better choice than using RHCS and shared storage, but afterwards the requirements for the cluster changed so that drbd was no longer a suitable choice, and we ended up with building a RHCS cluster with shared disk and gfs.\nThe cluster has been working as expected and not counting a few minor annoyances I haven't had any problems with it (this is with RHEL 5.1). My initial fear was that somehow two postmasters could be accessing the same set of file but after testing I have not been able to produce this (ie. the fencing is working). So overall I see no reason not to go with RHCS if it satisifies your business requirements.\nRegardsM",
"msg_date": "Fri, 28 Dec 2007 10:12:17 +0200",
"msg_from": "\"Mikko Partio\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone running on RHEL Cluster?"
}
] |
[
{
"msg_contents": "\n\tI've been looking at the performance of pg_dump in\nthe past week off and on trying to see if I can get it to\nwork a bit faster and was looking for tips on this.\n\n\tdoing a pg_dump on my 34311239 row table (1h of data btw)\nresults in a wallclock time of 187.9 seconds or ~182k rows/sec.\n\n\tI've got my insert (COPY) performance around 100k/sec and\nwas hoping to get the reads to be much faster. The analysis I'm\ndoing is much faster doing a pg_dump than utilizing a few\nqueries for numerous reasons. (If you care, I can enumerate them\nto you privately but the result is pg_dump is the best way to handle\nthe multiple bits of analysis that are needed, please trust me).\n\n\tWhat i'm seeing:\n\n\tpg_dump is utilizing about 13% of the cpu and the\ncorresponding postgres backend is at 100% cpu time.\n(multi-core, multi-cpu, lotsa ram, super-fast disk).\n\n\tI'm not seeing myself being I/O bound so was interested\nif there was a way I could tweak the backend performance or\noffload some of the activity to another process.\n\n\tpg8.3(beta) with the following variances from default\n\ncheckpoint_segments = 300 # in logfile segments, min 1, 16MB each\neffective_cache_size = 512MB # typically 8KB each\nwal_buffers = 128MB # min 4, 8KB each\nshared_buffers = 128MB # min 16, at least max_connections*2, 8KB each\nwork_mem = 512MB # min 64, size in KB\n\n\n\tunrelated but associated data, the table has one index on it.\nnot relevant for pg_dump but i'm interested in getting better concurent index\ncreation (utilize those cpus better but not slow down my row/sec perf)\nbut that's another topic entirely..\n\n\tAny tips on getting pg_dump (actually the backend) to perform \nmuch closer to 500k/sec or more? This would also aide me when I upgrade \npg versions and need to dump/restore with minimal downtime (as the data \nnever stops coming.. whee).\n\n\tThanks!\n\n\t- Jared\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Wed, 26 Dec 2007 15:22:30 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump performance"
},
{
"msg_contents": "Jared Mauch wrote:\n> \tpg_dump is utilizing about 13% of the cpu and the\n> corresponding postgres backend is at 100% cpu time.\n> (multi-core, multi-cpu, lotsa ram, super-fast disk).\n> ...\n> \tAny tips on getting pg_dump (actually the backend) to perform \n> much closer to 500k/sec or more? This would also aide me when I upgrade \n> pg versions and need to dump/restore with minimal downtime (as the data \n> never stops coming.. whee).\n\nI would suggest running oprofile to see where the time is spent. There \nmight be some simple optimizations that you could do at the source level \nthat would help.\n\nWhere the time is spent depends a lot on the schema and data. For \nexample, I profiled a pg_dump run on a benchmark database a while ago, \nand found that most of the time was spent in sprintf, formatting \ntimestamp columns. If you have a lot of timestamp columns that might be \nthe bottleneck for you as well, or something else.\n\nOr if you can post the schema for the table you're dumping, maybe we can \n make a more educated guess.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 26 Dec 2007 22:52:08 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump performance"
},
{
"msg_contents": "On Wed, Dec 26, 2007 at 10:52:08PM +0200, Heikki Linnakangas wrote:\n> Jared Mauch wrote:\n>> \tpg_dump is utilizing about 13% of the cpu and the\n>> corresponding postgres backend is at 100% cpu time.\n>> (multi-core, multi-cpu, lotsa ram, super-fast disk).\n>> ...\n>> \tAny tips on getting pg_dump (actually the backend) to perform much closer \n>> to 500k/sec or more? This would also aide me when I upgrade pg versions \n>> and need to dump/restore with minimal downtime (as the data never stops \n>> coming.. whee).\n>\n> I would suggest running oprofile to see where the time is spent. There \n> might be some simple optimizations that you could do at the source level \n> that would help.\n>\n> Where the time is spent depends a lot on the schema and data. For example, \n> I profiled a pg_dump run on a benchmark database a while ago, and found \n> that most of the time was spent in sprintf, formatting timestamp columns. \n> If you have a lot of timestamp columns that might be the bottleneck for you \n> as well, or something else.\n>\n> Or if you can post the schema for the table you're dumping, maybe we can \n> make a more educated guess.\n\n\there's the template table that they're all copies\nof:\n\nCREATE TABLE template_flowdatas (\n routerip inet,\n starttime integer,\n srcip inet,\n dstip inet,\n srcifc smallint,\n dstifc smallint,\n srcasn integer,\n dstasn integer,\n proto smallint,\n srcport integer,\n dstport integer,\n flowlen integer,\n tcpflags smallint,\n tosbit smallint\n);\n\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Wed, 26 Dec 2007 15:58:44 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump performance"
},
{
"msg_contents": "Jared Mauch wrote:\n> On Wed, Dec 26, 2007 at 10:52:08PM +0200, Heikki Linnakangas wrote:\n>> Jared Mauch wrote:\n>>> \tpg_dump is utilizing about 13% of the cpu and the\n>>> corresponding postgres backend is at 100% cpu time.\n>>> (multi-core, multi-cpu, lotsa ram, super-fast disk).\n>>> ...\n>>> \tAny tips on getting pg_dump (actually the backend) to perform much closer \n>>> to 500k/sec or more? This would also aide me when I upgrade pg versions \n>>> and need to dump/restore with minimal downtime (as the data never stops \n>>> coming.. whee).\n>> I would suggest running oprofile to see where the time is spent. There \n>> might be some simple optimizations that you could do at the source level \n>> that would help.\n>>\n>> Where the time is spent depends a lot on the schema and data. For example, \n>> I profiled a pg_dump run on a benchmark database a while ago, and found \n>> that most of the time was spent in sprintf, formatting timestamp columns. \n>> If you have a lot of timestamp columns that might be the bottleneck for you \n>> as well, or something else.\n>>\n>> Or if you can post the schema for the table you're dumping, maybe we can \n>> make a more educated guess.\n> \n> \there's the template table that they're all copies\n> of:\n> \n> CREATE TABLE template_flowdatas (\n> routerip inet,\n> starttime integer,\n> srcip inet,\n> dstip inet,\n> srcifc smallint,\n> dstifc smallint,\n> srcasn integer,\n> dstasn integer,\n> proto smallint,\n> srcport integer,\n> dstport integer,\n> flowlen integer,\n> tcpflags smallint,\n> tosbit smallint\n> );\n\nI run a quick oprofile run on my laptop, with a table like that, filled \nwith dummy data. It looks like indeed ~30% of the CPU time is spent in \nsprintf, to convert the integers and inets to string format. I think you \ncould speed that up by replacing the sprintf calls in int2, int4 and \ninet output functions with faster, customized functions. We don't need \nall the bells and whistles of sprintf, which gives the opportunity to \noptimize.\n\n\nA binary mode dump should go a lot faster, because it doesn't need to do \nthose conversions, but binary dumps are not guaranteed to work across \nversions.\n\nBTW, the profiling I did earlier led me to think this should be \noptimized in the compiler. I started a thread about that on the gcc \nmailing list but got busy with other stuff and didn't follow through \nthat idea: http://gcc.gnu.org/ml/gcc/2007-10/msg00073.html\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 26 Dec 2007 23:35:59 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump performance"
},
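To make the suggestion above concrete, here is a minimal sketch (not the actual PostgreSQL code, nor any proposed patch) of the kind of special-purpose integer-to-text routine that could stand in for the sprintf("%d") call in the int2/int4 output path; it only handles signed 32-bit decimal output, and dropping sprintf's generality is exactly where the speedup would come from:

#include <stdint.h>

/*
 * Write the decimal text form of v into buf (which must have room for at
 * least 12 bytes) and return the number of characters written.  No locale,
 * padding or width handling -- that is the whole point.
 */
static int
int32_to_text(int32_t v, char *buf)
{
    char        tmp[12];
    char       *p = tmp;
    uint32_t    u = (v < 0) ? (uint32_t) 0 - (uint32_t) v : (uint32_t) v;
    int         len = 0;

    do
    {
        *p++ = '0' + (u % 10);      /* collect digits in reverse order */
        u /= 10;
    } while (u > 0);

    if (v < 0)
        buf[len++] = '-';
    while (p > tmp)
        buf[len++] = *--p;          /* copy them back out in the right order */
    buf[len] = '\0';
    return len;
}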
{
"msg_contents": "On Wed, Dec 26, 2007 at 11:35:59PM +0200, Heikki Linnakangas wrote:\n> I run a quick oprofile run on my laptop, with a table like that, filled \n> with dummy data. It looks like indeed ~30% of the CPU time is spent in \n> sprintf, to convert the integers and inets to string format. I think you \n> could speed that up by replacing the sprintf calls in int2, int4 and inet \n> output functions with faster, customized functions. We don't need all the \n> bells and whistles of sprintf, which gives the opportunity to optimize.\n\n\tHmm. Given the above+below perhaps there's something that can\nbe tackled in the source here.. will look at poking around in there ...\nour sysadmin folks don't like the idea of running patched stuff (aside from\nconf changes) as they're concerned about losing patches btw upgrades.\n\n\tI'm waiting on one of my hosts in Japan to come back online\nso perhaps I can hack the source and attempt some optimization\nafter that point. It's not the beefy host that I have this on\nthough and not even multi-{core,cpu} so my luck may be poor.\n\n> A binary mode dump should go a lot faster, because it doesn't need to do \n> those conversions, but binary dumps are not guaranteed to work across \n> versions.\n\n\tI'll look at this. Since this stuff is going into something else\nperhaps I can get it to be slightly faster to not convert from binary ->\nstring -> binary(memory) again. A number of the columns are unused in my\nprocessing and some are used only when certain criteria are met (some\nare always used).\n\n> BTW, the profiling I did earlier led me to think this should be optimized \n> in the compiler. I started a thread about that on the gcc mailing list but \n> got busy with other stuff and didn't follow through that idea: \n> http://gcc.gnu.org/ml/gcc/2007-10/msg00073.html\n\n(* drift=off mode=drifting-fast *)\n\tI'd have to say after a quick review of this, it does look\nlike they're right and it should go somewhat in the C lib. I'm on\nSolaris 10 with my host. There may be some optimizations that the compiler\ncould do when linking the C library but I currently think they're on\nsound footing.\n\n(* drift=off mode=end *)\n\n\t- Jared\n\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Wed, 26 Dec 2007 16:49:59 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump performance"
},
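For the "feed the data straight into something else" case discussed above, a hedged sketch of pulling a table out over the COPY protocol with libpq instead of parsing pg_dump output; the connection string is a placeholder, the table is the flow-data table shown earlier in the thread, and decoding the per-row binary layout (documented under COPY in the manual) is left to the caller:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=flows");   /* placeholder conninfo */
    PGresult   *res;
    char       *buf;
    int         len;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* 8.2 syntax; skips the text conversions that dominate pg_dump's CPU */
    res = PQexec(conn, "COPY template_flowdatas TO STDOUT WITH BINARY");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        /* buf holds one raw COPY data chunk of len bytes: decode or hand off */
        PQfreemem(buf);
    }

    /* len of -1 means the COPY finished (-2 would be an error);
     * collect the final command status before closing */
    res = PQgetResult(conn);
    PQclear(res);
    PQfinish(conn);
    return 0;
}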
{
"msg_contents": "\"Jared Mauch\" <[email protected]> writes:\n\n> \tpg_dump is utilizing about 13% of the cpu and the\n> corresponding postgres backend is at 100% cpu time.\n> (multi-core, multi-cpu, lotsa ram, super-fast disk).\n>...\n> \tpg8.3(beta) with the following variances from default\n>\n> checkpoint_segments = 300 # in logfile segments, min 1, 16MB each\n> effective_cache_size = 512MB # typically 8KB each\n> wal_buffers = 128MB # min 4, 8KB each\n> shared_buffers = 128MB # min 16, at least max_connections*2, 8KB each\n> work_mem = 512MB # min 64, size in KB\n\nFwiw those are pretty unusual numbers. Normally work_mem is much smaller than\nshared_buffers since you only need one block of memory for shared buffers and\nwork_mem is for every query (and every sort within those queries). If you have\nten queries running two sorts each this setting of work_mem could consume 5GB.\n\nRaising shared buffers could improve your pg_dump speed. If all the data is in\ncache it would reduce the time spend moving data between filesystem cache and\npostgres shared buffers.\n\nWhat made you raise wal_buffers so high? I don't think it hurts but that's a\nfew orders of magnitude higher than what I would expect to help.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n",
"msg_date": "Thu, 27 Dec 2007 13:14:25 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump performance"
},
{
"msg_contents": "On Thu, Dec 27, 2007 at 01:14:25PM +0000, Gregory Stark wrote:\n> \"Jared Mauch\" <[email protected]> writes:\n> \n> > \tpg_dump is utilizing about 13% of the cpu and the\n> > corresponding postgres backend is at 100% cpu time.\n> > (multi-core, multi-cpu, lotsa ram, super-fast disk).\n> >...\n> > \tpg8.3(beta) with the following variances from default\n> >\n> > checkpoint_segments = 300 # in logfile segments, min 1, 16MB each\n> > effective_cache_size = 512MB # typically 8KB each\n> > wal_buffers = 128MB # min 4, 8KB each\n> > shared_buffers = 128MB # min 16, at least max_connections*2, 8KB each\n> > work_mem = 512MB # min 64, size in KB\n> \n> Fwiw those are pretty unusual numbers. Normally work_mem is much smaller than\n> shared_buffers since you only need one block of memory for shared buffers and\n> work_mem is for every query (and every sort within those queries). If you have\n> ten queries running two sorts each this setting of work_mem could consume 5GB.\n\n\tI'd still have lots of ram left :)\n\n\tI'm dealing with normal query results that end up matching 5-10 million\nrows based on the index (starttime) not counting the filter afterwards. Each\nbackend rarely makes it over 256m.\n\n> Raising shared buffers could improve your pg_dump speed. If all the data is in\n> cache it would reduce the time spend moving data between filesystem cache and\n> postgres shared buffers.\n\n\tI doubt it's all in cache, but I can look at this. I did not do a\nlot of fine tuning of numbers, just enough to get past the defaults and have\nan acceptable amount of performance.\n\n> What made you raise wal_buffers so high? I don't think it hurts but that's a\n> few orders of magnitude higher than what I would expect to help.\n\n\tI'm adding chunks of ~1.2m rows every other minute. Once I increase\nmy data collection pool, this will go up to around [1]2-3m rows or so. I\nfound having higher wal and checkpoint helped. I didn't spend a lot of time\ntweaking these options. Is there some way you know to determine high \nwatermark numbers for what is being used?\n\n\t- Jared\n\n[1] - I am concerned that with my 'insert' speed being around 100k/sec\n and raw pg_dump speed being around 182k/sec i will start getting data\n faster than can be stored and postprocessed.\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Thu, 27 Dec 2007 09:58:43 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump performance"
}
] |
[
{
"msg_contents": "@Albe Wrote:\n> Does the dump file keep growing or not?\nNo, it stops growing.\n\n@sanjeev\nWe are using\n\"..\\pgAdmin III\\1.6\\pg_dump.exe\" -i -h IPHOST -p 5432 -U user -F c -b -v -f \"..\\back\\myfile.backup\" db\nWe are going to try the other one.\n\n@Harald, we've used pg_dump 8.3.0.7045 (From pg Admin III 1.6) and pg_dump 8.3.0.7277 from pg Admin III 1.8\nWe are going to try the 8.2.5.7260 from the PostgreSQL folder.\n\nThanks\nSebasti�n\n\n\nHarald Armin Massa <[email protected]> escribi�: Sebastian,\n\nare you sure that you are using the 8.2.5 pg_dump, and not by old\npaths a pg_dump of 8.0 ?\n\nIt is not good to use older pg_dumps on newer databases.\n\n\nHarald\n\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Stra�e 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n\n\n \n---------------------------------\n\nTarjeta de cr�dito Yahoo! de Banco Supervielle.Solicit� tu nueva Tarjeta de cr�dito. De tu PC directo a tu casa. \n Visit� www.tuprimeratarjeta.com.ar\n@Albe Wrote:> Does the dump file keep growing or not?No, it stops growing.@sanjeevWe are using\"..\\pgAdmin III\\1.6\\pg_dump.exe\" -i -h IPHOST -p 5432 -U user -F c -b -v -f \"..\\back\\myfile.backup\" dbWe are going to try the other one.@Harald, we've used pg_dump 8.3.0.7045 (From pg Admin III 1.6) and pg_dump 8.3.0.7277 from pg Admin III 1.8We are going to try the 8.2.5.7260 from the PostgreSQL folder.ThanksSebasti�nHarald Armin Massa <[email protected]> escribi�: Sebastian,are you sure that you are using the 8.2.5 pg_dump, and not by oldpaths a pg_dump of 8.0 ?It is not good to use older pg_dumps on newer databases.Harald-- GHUM Harald Massapersuadere et programmareHarald Armin\n MassaSpielberger Stra�e 4970435 Stuttgart0173/9409607fx 01212-5-13695179-EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\nTarjeta de cr�dito Yahoo! de Banco Supervielle.\nSolicit� tu nueva Tarjeta de cr�dito. De tu PC directo a tu casa. Visit� www.tuprimeratarjeta.com.ar",
"msg_date": "Wed, 2 Jan 2008 09:24:03 -0300 (ART)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Can't make backup"
}
] |
[
{
"msg_contents": "Hi all,\n\nwe have a PostgreSQL dedicated Linux server with 8 cores (2xX5355). We \ncame accross a strange issue: when running with all 8 cores enabled \napproximatly once a minute (period differs) the system is very busy for \na few seconds (~5-10s) and we don't know why - this issue don't show up \nwhen we tell Linux to use only 2 cores, with 4 cores the problem is here \nbut it is still better than with 8 cores - all on the same machine, same \nconfig, same workload. We don't see any apparent reason for these peaks. \nWe'd like to investigate it further but we don't know what to try next. \nAny suggenstions? Any tunning tips for Linux+PostgreSQL on 8-way system? \nCan this be connected with our heavy use of listen/notify and hundreds \nbackends in listen mode?\n\nMore details are below.\n\nThanks,\n\nKuba\n\nSystem: HP DL360 2x5355, 8G RAM, P600+MSA50 - internal 2x72GB RAID 10 \nfor OS, 10x72G disks RAID 10 for PostgreSQL data and wal\nOS: Linux 2.6 64bit (kernel 2.6.21, 22, 23 makes little difference)\nPostgreSQL: 8.2.4 (64bit), shared buffers 1G\n\nNothing else than PostgreSQL is running on the server. Cca 800 \nconcurrent backends. Majority of backends in LISTEN doing nothing. \nClient interface for most backends is ecpg+libpq.\n\nProblem description:\n\nThe system is usually running 80-95% idle. Approximatly once a minute \nfor cca 5-10s there is a peak in activity which looks like this:\n\nvmstat (and top or atop) reports 0% idle, 100% in user mode, very low \niowait, low IO activity, higher number of contex switches than usual but \nnot exceedingly high (2000-4000cs/s, usually 1500cs/s), few hundreds \nwaiting processes per second (usually 0-1/s). From looking at top and \nrunning processes we can't see any obvious reason for the peak. \nAccording to PostgreSQL log the long running commands from these moments \nare e.g. begin transaction lasting several seconds.\n\nWhen only 2 cores are enabled (kernel command line) then everything is \nrunning smoothly. 4 cores exibits slightly better behavior than 8 cores \nbut worse than 2 cores - the peaks are visible.\n\nWe've tried kernel versions 2.6.21-23 (latest revisions as of beginning \nDecember from kernel.org) the pattern slightly changed but it may also \nbe that the workload slightly changed.\n\npgbench or any other stress testing runs smoothly on the server.\n\nThe only strange thing about our usage pattern I can think of is heavy \nuse of LISTEN/NOTIFY especially hunderds backends in listen mode.\n\nWhen restarting our connected clients the peaks are not there from time \n0, they are visible after a while - seems something gets synchronized \nand causing troubles then.\n\nSince the server is PostgreSQL dedicated and no our client applications \nare running on it - and there is a difference when 2 and 8 cores are \nenabled - we think that the peaks are not caused by our client \napplications.\n\nHow can we diagnose what is happening during the peaks?\n",
"msg_date": "Thu, 03 Jan 2008 12:50:01 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux/PostgreSQL scalability issue - problem with 8 cores"
},
{
"msg_contents": "Hi Jakub,\n\nI do have a similar server (from DELL), which performance well with our\nPostgreSQL application. I guess the peak in context switches is the only\nthink you can see.\n\nAnyhow, I think it is you're LISTEN/NOTIFY approach which cause that\nbehaviour. I guess all backends do listen to the same notification.\nI don't know the exact implementation, but I can imagine that all\nbackends are access the same section in the shared memory which cause\nthe increase of context switches. More cores means more access at the\nsame time.\n\nCan you change your implementation?\n- split you problem - create multiple notification if possible\n- do an UNLISTEN if possible\n- use another signalisation technique\n\nRegards\nSven\n\n\nJakub Ouhrabka schrieb:\n> Hi all,\n> \n> we have a PostgreSQL dedicated Linux server with 8 cores (2xX5355). We\n> came accross a strange issue: when running with all 8 cores enabled\n> approximatly once a minute (period differs) the system is very busy for\n> a few seconds (~5-10s) and we don't know why - this issue don't show up\n> when we tell Linux to use only 2 cores, with 4 cores the problem is here\n> but it is still better than with 8 cores - all on the same machine, same\n> config, same workload. We don't see any apparent reason for these peaks.\n> We'd like to investigate it further but we don't know what to try next.\n> Any suggenstions? Any tunning tips for Linux+PostgreSQL on 8-way system?\n> Can this be connected with our heavy use of listen/notify and hundreds\n> backends in listen mode?\n> \n> More details are below.\n> \n> Thanks,\n> \n> Kuba\n> \n> System: HP DL360 2x5355, 8G RAM, P600+MSA50 - internal 2x72GB RAID 10\n> for OS, 10x72G disks RAID 10 for PostgreSQL data and wal\n> OS: Linux 2.6 64bit (kernel 2.6.21, 22, 23 makes little difference)\n> PostgreSQL: 8.2.4 (64bit), shared buffers 1G\n> \n> Nothing else than PostgreSQL is running on the server. Cca 800\n> concurrent backends. Majority of backends in LISTEN doing nothing.\n> Client interface for most backends is ecpg+libpq.\n> \n> Problem description:\n> \n> The system is usually running 80-95% idle. Approximatly once a minute\n> for cca 5-10s there is a peak in activity which looks like this:\n> \n> vmstat (and top or atop) reports 0% idle, 100% in user mode, very low\n> iowait, low IO activity, higher number of contex switches than usual but\n> not exceedingly high (2000-4000cs/s, usually 1500cs/s), few hundreds\n> waiting processes per second (usually 0-1/s). From looking at top and\n> running processes we can't see any obvious reason for the peak.\n> According to PostgreSQL log the long running commands from these moments\n> are e.g. begin transaction lasting several seconds.\n> \n> When only 2 cores are enabled (kernel command line) then everything is\n> running smoothly. 
4 cores exibits slightly better behavior than 8 cores\n> but worse than 2 cores - the peaks are visible.\n> \n> We've tried kernel versions 2.6.21-23 (latest revisions as of beginning\n> December from kernel.org) the pattern slightly changed but it may also\n> be that the workload slightly changed.\n> \n> pgbench or any other stress testing runs smoothly on the server.\n> \n> The only strange thing about our usage pattern I can think of is heavy\n> use of LISTEN/NOTIFY especially hunderds backends in listen mode.\n> \n> When restarting our connected clients the peaks are not there from time\n> 0, they are visible after a while - seems something gets synchronized\n> and causing troubles then.\n> \n> Since the server is PostgreSQL dedicated and no our client applications\n> are running on it - and there is a difference when 2 and 8 cores are\n> enabled - we think that the peaks are not caused by our client\n> applications.\n> \n> How can we diagnose what is happening during the peaks?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n>        subscribe-nomail command to [email protected] so that your\n>        message can get through to the mailing list cleanly\n\n-- \nSven Geisler <[email protected]>         Tel +49.30.921017.81  Fax .50\nSenior Developer,                        AEC/communications GmbH & Co. KG Berlin, Germany\n",
"msg_date": "Thu, 03 Jan 2008 17:27:44 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
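As a point of reference for the suggestions above, a minimal libpq sketch of the listening side (the connection string and channel name are made up for the example); using one specific notification name per concern and dropping it with UNLISTEN as soon as it is no longer interesting both happen at this level:

#include <stdio.h>
#include <libpq-fe.h>

/* Drain any notifications that have already arrived on the connection. */
static void
handle_notifies(PGconn *conn)
{
    PGnotify   *n;

    PQconsumeInput(conn);               /* read whatever the server sent */
    while ((n = PQnotifies(conn)) != NULL)
    {
        printf("notify \"%s\" from backend pid %d\n", n->relname, n->be_pid);
        PQfreemem(n);
    }
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=app");   /* placeholder conninfo */

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* one specific channel per concern, rather than one shared channel */
    PQclear(PQexec(conn, "LISTEN queue_42"));

    /* ... application loop: select() on PQsocket(conn), then ... */
    handle_notifies(conn);

    /* stop listening as soon as the channel is no longer needed */
    PQclear(PQexec(conn, "UNLISTEN queue_42"));

    PQfinish(conn);
    return 0;
}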
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n> we have a PostgreSQL dedicated Linux server with 8 cores (2xX5355). We \n> came accross a strange issue: when running with all 8 cores enabled \n> approximatly once a minute (period differs) the system is very busy for \n> a few seconds (~5-10s) and we don't know why - this issue don't show up \n> when we tell Linux to use only 2 cores, with 4 cores the problem is here \n> but it is still better than with 8 cores - all on the same machine, same \n> config, same workload. We don't see any apparent reason for these peaks. \n\nInteresting. Maybe you could use oprofile to try to see what's\nhappening? It sounds a bit like momentary contention for a spinlock,\nbut exactly what isn't clear.\n\n> Can this be connected with our heavy use of listen/notify and hundreds \n> backends in listen mode?\n\nPerhaps. Have you tried logging executions of NOTIFY to see if they are\ncorrelated with the spikes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2008 11:59:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
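To illustrate why momentary spinlock contention shows up as short bursts of 100% user-mode CPU that get worse as cores are added, here is a generic test-and-set spin loop with backoff -- written from scratch for illustration only, roughly the shape of PostgreSQL's s_lock() but not copied from it:

#include <sched.h>
#include <unistd.h>

typedef volatile int slock_t;

/*
 * Acquire a test-and-set spinlock.  While the lock is held elsewhere, every
 * waiter burns CPU in user space, which is what a profile dominated by the
 * lock-acquire function looks like.  More cores means more simultaneous
 * waiters hammering the same cache line, so contention gets worse, not better.
 */
static void
spin_lock(slock_t *lock)
{
    int         spins = 0;

    while (__sync_lock_test_and_set(lock, 1))   /* atomic TAS, GCC builtin */
    {
        if (++spins < 100)
            ;                       /* just spin: cheap if the holder is quick */
        else if (spins < 1000)
            sched_yield();          /* give the lock holder a chance to run */
        else
        {
            usleep(1000);           /* back off hard after spinning a while */
            spins = 0;
        }
    }
}

static void
spin_unlock(slock_t *lock)
{
    __sync_lock_release(lock);      /* store 0 with release semantics */
}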
{
"msg_contents": "Hi Tom,\n\n > Interesting. Maybe you could use oprofile to try to see what's\n > happening? It sounds a bit like momentary contention for a spinlock,\n > but exactly what isn't clear.\n\nok, we're going to try oprofile, will let you know...\n\n > Perhaps. Have you tried logging executions of NOTIFY to see if they\n > are correlated with the spikes?\n\nWe didn't log the notifies but I think it's not correlated. We'll have a \ndetailed look next time we try it (with oprofile).\n\nThanks for suggestions!\n\nKuba\n\n",
"msg_date": "Thu, 03 Jan 2008 18:37:29 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Hi Sven,\n\n > I guess all backends do listen to the same notification.\n\nUnfortunatelly no. The backends are listening to different notifications \nin different databases. Usually there are only few listens per database \nwith only one exception - there are many (hundreds) listens in one \ndatabase but all for different notifications.\n\n > Can you change your implementation?\n > - split you problem - create multiple notification if possible\n\nYes, it is like this.\n\n > - do an UNLISTEN if possible\n\nYes, we're issuing unlistens when appropriate.\n\n > - use another signalisation technique\n\nWe're planning to reduce the number of databases/backends/listens but \nanyway we'd like to run system on 8 cores if it is running without any \nproblems on 2 cores...\n\nThanks for the suggestions!\n\nKuba\n\nSven Geisler napsal(a):\n> Hi Jakub,\n> \n> I do have a similar server (from DELL), which performance well with our\n> PostgreSQL application. I guess the peak in context switches is the only\n> think you can see.\n> \n> Anyhow, I think it is you're LISTEN/NOTIFY approach which cause that\n> behaviour. I guess all backends do listen to the same notification.\n> I don't know the exact implementation, but I can imagine that all\n> backends are access the same section in the shared memory which cause\n> the increase of context switches. More cores means more access at the\n> same time.\n> \n> Can you change your implementation?\n> - split you problem - create multiple notification if possible\n> - do an UNLISTEN if possible\n> - use another signalisation technique\n> \n> Regards\n> Sven\n> \n> \n> Jakub Ouhrabka schrieb:\n>> Hi all,\n>>\n>> we have a PostgreSQL dedicated Linux server with 8 cores (2xX5355). We\n>> came accross a strange issue: when running with all 8 cores enabled\n>> approximatly once a minute (period differs) the system is very busy for\n>> a few seconds (~5-10s) and we don't know why - this issue don't show up\n>> when we tell Linux to use only 2 cores, with 4 cores the problem is here\n>> but it is still better than with 8 cores - all on the same machine, same\n>> config, same workload. We don't see any apparent reason for these peaks.\n>> We'd like to investigate it further but we don't know what to try next.\n>> Any suggenstions? Any tunning tips for Linux+PostgreSQL on 8-way system?\n>> Can this be connected with our heavy use of listen/notify and hundreds\n>> backends in listen mode?\n>>\n>> More details are below.\n>>\n>> Thanks,\n>>\n>> Kuba\n>>\n>> System: HP DL360 2x5355, 8G RAM, P600+MSA50 - internal 2x72GB RAID 10\n>> for OS, 10x72G disks RAID 10 for PostgreSQL data and wal\n>> OS: Linux 2.6 64bit (kernel 2.6.21, 22, 23 makes little difference)\n>> PostgreSQL: 8.2.4 (64bit), shared buffers 1G\n>>\n>> Nothing else than PostgreSQL is running on the server. Cca 800\n>> concurrent backends. Majority of backends in LISTEN doing nothing.\n>> Client interface for most backends is ecpg+libpq.\n>>\n>> Problem description:\n>>\n>> The system is usually running 80-95% idle. Approximatly once a minute\n>> for cca 5-10s there is a peak in activity which looks like this:\n>>\n>> vmstat (and top or atop) reports 0% idle, 100% in user mode, very low\n>> iowait, low IO activity, higher number of contex switches than usual but\n>> not exceedingly high (2000-4000cs/s, usually 1500cs/s), few hundreds\n>> waiting processes per second (usually 0-1/s). 
From looking at top and\n>> running processes we can't see any obvious reason for the peak.\n>> According to PostgreSQL log the long running commands from these moments\n>> are e.g. begin transaction lasting several seconds.\n>>\n>> When only 2 cores are enabled (kernel command line) then everything is\n>> running smoothly. 4 cores exibits slightly better behavior than 8 cores\n>> but worse than 2 cores - the peaks are visible.\n>>\n>> We've tried kernel versions 2.6.21-23 (latest revisions as of beginning\n>> December from kernel.org) the pattern slightly changed but it may also\n>> be that the workload slightly changed.\n>>\n>> pgbench or any other stress testing runs smoothly on the server.\n>>\n>> The only strange thing about our usage pattern I can think of is heavy\n>> use of LISTEN/NOTIFY especially hunderds backends in listen mode.\n>>\n>> When restarting our connected clients the peaks are not there from time\n>> 0, they are visible after a while - seems something gets synchronized\n>> and causing troubles then.\n>>\n>> Since the server is PostgreSQL dedicated and no our client applications\n>> are running on it - and there is a difference when 2 and 8 cores are\n>> enabled - we think that the peaks are not caused by our client\n>> applications.\n>>\n>> How can we diagnose what is happening during the peaks?\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n> \n",
"msg_date": "Thu, 03 Jan 2008 18:51:43 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores"
},
{
"msg_contents": "Jakub Ouhrabka wrote:\n\n> > - do an UNLISTEN if possible\n>\n> Yes, we're issuing unlistens when appropriate.\n\nYou are vacuuming pg_listener periodically, yes? Not that this seems to\nhave any relationship to your problem, but ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 3 Jan 2008 15:31:18 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n\tcores"
},
{
"msg_contents": "Alvaro,\n\n >>> - do an UNLISTEN if possible\n >> Yes, we're issuing unlistens when appropriate.\n >\n > You are vacuuming pg_listener periodically, yes? Not that this seems\n > to have any relationship to your problem, but ...\n\nyes, autovacuum should take care of this. But looking forward for \nmultiple-workers in 8.3 as it should help us during high load periods \n(some tables might wait too long for autovacuum now - but it's not that \nbig problem for us...).\n\nThanks for great work!\n\nKuba\n",
"msg_date": "Thu, 03 Jan 2008 20:26:30 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores"
},
{
"msg_contents": "Jakub Ouhrabka wrote:\n> How can we diagnose what is happening during the peaks?\nCan you try forcing a core from a bunch of the busy processes? (Hmm - \ndoes Linux have an equivalent to the useful Solaris pstacks?)\n\nJames\n",
"msg_date": "Fri, 04 Jan 2008 20:46:52 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "James Mansion wrote:\n> Jakub Ouhrabka wrote:\n>> How can we diagnose what is happening during the peaks?\n> Can you try forcing a core from a bunch of the busy processes? (Hmm - \n> does Linux have an equivalent to the useful Solaris pstacks?)\nThere's a 'pstack' for Linux, shipped at least in Red Hat distributions \n(and possibly others,\nI'm not sure). It's a shell script wrapper around gdb, so easily ported \nto any Linux.\n\n\n",
"msg_date": "Fri, 04 Jan 2008 13:55:08 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Hi Tom & all,\n\n >> It sounds a bit like momentary contention for a spinlock,\n >> but exactly what isn't clear.\n\n > ok, we're going to try oprofile, will let you know...\n\nyes, it seems like contention for spinlock if I'm intepreting oprofile \ncorrectly, around 60% of time during spikes is in s_lock. [for details \nsee below].\n\nWe've tried several times to get stacktrace from some of the running \nbackends during spikes, we got always this:\n\n0x00002b005d00a9a9 in semop () from /lib/libc.so.6\n#0 0x00002b005d00a9a9 in semop () from /lib/libc.so.6\n#1 0x000000000054fe53 in PGSemaphoreLock (sema=0x2b00a04e5090, \ninterruptOK=0 '\\0') at pg_sema.c:411\n#2 0x0000000000575d95 in LWLockAcquire (lockid=SInvalLock, \nmode=LW_EXCLUSIVE) at lwlock.c:455\n#3 0x000000000056fbfe in ReceiveSharedInvalidMessages \n(invalFunction=0x5e9a30 <LocalExecuteInvalidationMessage>,\n resetFunction=0x5e9df0 <InvalidateSystemCaches>) at sinval.c:159\n#4 0x0000000000463505 in StartTransactionCommand () at xact.c:1439\n#5 0x000000000056fa4b in ProcessCatchupEvent () at sinval.c:347\n#6 0x000000000056fb20 in CatchupInterruptHandler \n(postgres_signal_arg=<value optimized out>) at sinval.c:221\n#7 0x00002b005cf6f110 in killpg () from /lib/libc.so.6\n#8 0x0000000000000000 in ?? ()\n\nIs this enough info to guess what's happening? What should we try next?\n\nThanks,\n\nKuba\n\noprofile results:\n\n[I've shortened path from /usr/local/pg... to just pg for better \nreadilibity]\n\n# opreport --long-filenames\nCPU: Core 2, speed 2666.76 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a \nunit mask of 0x00 (Unhalted core cycles) count 100000\nCPU_CLK_UNHALT...|\n samples| %|\n------------------\n 125577 90.7584 pg-8.2.4/bin/postgres\n 3792 2.7406 /lib/libc-2.3.6.so\n 3220 2.3272 /usr/src/linux-2.6.22.15/vmlinux\n 2145 1.5503 /usr/bin/oprofiled\n 1540 1.1130 /xfs\n 521 0.3765 pg-8.2.4/lib/plpgsql.so\n 441 0.3187 /cciss\n 374 0.2703 /oprofile\n\n...\n\nCPU: Core 2, speed 2666.76 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a \nunit mask of 0x00 (Unhalted core cycles) count 100000\nsamples % app name symbol name\n85355 61.6887 pg-8.2.4/bin/postgres s_lock\n9803 7.0849 pg-8.2.4/bin/postgres LWLockRelease\n5535 4.0003 pg-8.2.4/bin/postgres LWLockAcquire\n3792 2.7406 /lib/libc-2.3.6.so (no symbols)\n3724 2.6915 pg-8.2.4/bin/postgres DropRelFileNodeBuffers\n2145 1.5503 /usr/bin/oprofiled (no symbols)\n2069 1.4953 pg-8.2.4/bin/postgres GetSnapshotData\n1540 1.1130 /xfs (no symbols)\n1246 0.9005 pg-8.2.4/bin/postgres hash_search_with_hash_value\n1052 0.7603 pg-8.2.4/bin/postgres AllocSetAlloc\n1015 0.7336 pg-8.2.4/bin/postgres heapgettup\n879 0.6353 pg-8.2.4/bin/postgres hash_any\n862 0.6230 /usr/src/linux-2.6.22.15/vmlinux mwait_idle\n740 0.5348 pg-8.2.4/bin/postgres hash_seq_search\n674 0.4871 pg-8.2.4/bin/postgres HeapTupleSatisfiesNow\n557 0.4026 pg-8.2.4/bin/postgres SIGetDataEntry\n552 0.3989 pg-8.2.4/bin/postgres equal\n469 0.3390 pg-8.2.4/bin/postgres SearchCatCache\n441 0.3187 /cciss (no symbols)\n433 0.3129 /usr/src/linux-2.6.22.15/vmlinux find_busiest_group\n413 0.2985 pg-8.2.4/bin/postgres PinBuffer\n393 0.2840 pg-8.2.4/bin/postgres MemoryContextAllocZeroAligned\n374 0.2703 /oprofile (no symbols)\n275 0.1988 pg-8.2.4/bin/postgres ExecInitExpr\n253 0.1829 pg-8.2.4/bin/postgres base_yyparse\n206 0.1489 pg-8.2.4/bin/postgres CatalogCacheFlushRelation\n201 0.1453 pg-8.2.4/bin/postgres MemoryContextAlloc\n194 0.1402 
pg-8.2.4/bin/postgres _bt_compare\n188 0.1359 /nf_conntrack (no symbols)\n158 0.1142 /bnx2 (no symbols)\n147 0.1062 pg-8.2.4/bin/postgres pgstat_initstats\n139 0.1005 pg-8.2.4/bin/postgres fmgr_info_cxt_security\n132 0.0954 /usr/src/linux-2.6.22.15/vmlinux task_rq_lock\n131 0.0947 /bin/bash (no symbols)\n129 0.0932 pg-8.2.4/bin/postgres AllocSetFree\n125 0.0903 pg-8.2.4/bin/postgres ReadBuffer\n124 0.0896 pg-8.2.4/bin/postgres MemoryContextCreate\n124 0.0896 pg-8.2.4/bin/postgres SyncOneBuffer\n124 0.0896 pg-8.2.4/bin/postgres XLogInsert\n123 0.0889 pg-8.2.4/bin/postgres _equalAggref\n122 0.0882 pg-8.2.4/bin/postgres HeapTupleSatisfiesSnapshot\n112 0.0809 pg-8.2.4/bin/postgres copyObject\n102 0.0737 pg-8.2.4/bin/postgres UnpinBuffer\n99 0.0716 pg-8.2.4/bin/postgres _SPI_execute_plan\n99 0.0716 pg-8.2.4/bin/postgres nocachegetattr\n98 0.0708 /usr/src/linux-2.6.22.15/vmlinux __wake_up_bit\n97 0.0701 pg-8.2.4/bin/postgres TransactionIdIsInProgress\n94 0.0679 pg-8.2.4/bin/postgres check_stack_depth\n93 0.0672 pg-8.2.4/bin/postgres base_yylex\n91 0.0658 pg-8.2.4/bin/postgres pfree\n89 0.0643 pg-8.2.4/lib/plpgsql.so exec_stmts\n86 0.0622 /usr/src/linux-2.6.22.15/vmlinux __switch_to\n85 0.0614 pg-8.2.4/bin/postgres LockAcquire\n83 0.0600 pg-8.2.4/bin/postgres FunctionCall2\n82 0.0593 pg-8.2.4/bin/postgres ExecInitAgg\n82 0.0593 /usr/src/linux-2.6.22.15/vmlinux system_call\n79 0.0571 pg-8.2.4/bin/postgres pgstat_write_statsfile\n77 0.0557 pg-8.2.4/bin/postgres heap_getsysattr\n73 0.0528 pg-8.2.4/bin/postgres .plt\n72 0.0520 /lib/ld-2.3.6.so (no symbols)\n71 0.0513 pg-8.2.4/bin/postgres SearchSysCache\n71 0.0513 pg-8.2.4/bin/postgres _bt_checkkeys\n68 0.0491 /usr/src/linux-2.6.22.15/vmlinux apic_timer_interrupt\n67 0.0484 pg-8.2.4/bin/postgres slot_deform_tuple\n64 0.0463 pg-8.2.4/bin/postgres TupleDescInitEntry\n64 0.0463 pg-8.2.4/bin/postgres newarc\n63 0.0455 /usr/src/linux-2.6.22.15/vmlinux __wake_up\n62 0.0448 pg-8.2.4/bin/postgres LocalExecuteInvalidationMessage\n61 0.0441 /usr/src/linux-2.6.22.15/vmlinux try_to_wake_up\n60 0.0434 pg-8.2.4/bin/postgres ReceiveSharedInvalidMessages\n59 0.0426 pg-8.2.4/bin/postgres ExecutorStart\n58 0.0419 pg-8.2.4/bin/postgres DirectFunctionCall1\n58 0.0419 pg-8.2.4/bin/postgres ScanKeywordLookup\n57 0.0412 pg-8.2.4/bin/postgres hash_search\n56 0.0405 pg-8.2.4/bin/postgres CatalogCacheComputeHashValue\n56 0.0405 pg-8.2.4/bin/postgres ExecProject\n55 0.0398 pg-8.2.4/bin/postgres _bt_first\n54 0.0390 pg-8.2.4/lib/plpgsql.so exec_eval_simple_expr\n53 0.0383 pg-8.2.4/bin/postgres AllocSetDelete\n53 0.0383 pg-8.2.4/bin/postgres CleanupTempFiles\n52 0.0376 pg-8.2.4/bin/postgres ExecCreateTupleTable\n52 0.0376 pg-8.2.4/lib/plpgsql.so copy_plpgsql_datum\n49 0.0354 pg-8.2.4/bin/postgres MemoryContextAllocZero\n49 0.0354 pg-8.2.4/bin/postgres SIDelExpiredDataEntries\n49 0.0354 pg-8.2.4/bin/postgres fix_opfuncids_walker\n47 0.0340 pg-8.2.4/bin/postgres expression_tree_walker\n46 0.0332 pg-8.2.4/bin/postgres LockBuffer\n45 0.0325 pg-8.2.4/bin/postgres lappend\n45 0.0325 /usr/src/linux-2.6.22.15/vmlinux do_IRQ\n44 0.0318 pg-8.2.4/bin/postgres LockReleaseAll\n43 0.0311 pg-8.2.4/bin/postgres ExecutorRun\n43 0.0311 pg-8.2.4/bin/postgres exprTypmod\n42 0.0304 pg-8.2.4/bin/postgres ExecClearTuple\n42 0.0304 pg-8.2.4/bin/postgres heap_fill_tuple\n41 0.0296 pg-8.2.4/bin/postgres ExecIndexBuildScanKeys\n40 0.0289 pg-8.2.4/bin/postgres _bt_readpage\n40 0.0289 pg-8.2.4/lib/plpgsql.so plpgsql_exec_function\n40 0.0289 /usr/src/linux-2.6.22.15/vmlinux lock_timer_base\n39 0.0282 
pg-8.2.4/bin/postgres heap_release_fetch\n38 0.0275 pg-8.2.4/bin/postgres ExecEvalVar\n38 0.0275 pg-8.2.4/bin/postgres ExecTypeFromTLInternal\n37 0.0267 pg-8.2.4/bin/postgres LockReassignCurrentOwner\n37 0.0267 /usr/src/linux-2.6.22.15/vmlinux scheduler_tick\n36 0.0260 pg-8.2.4/bin/postgres ReleaseAndReadBuffer\n36 0.0260 pg-8.2.4/bin/postgres _bt_checkpage\n36 0.0260 pg-8.2.4/bin/postgres freearc\n36 0.0260 pg-8.2.4/bin/postgres heapgetpage\n36 0.0260 /usr/src/linux-2.6.22.15/vmlinux resched_task\n35 0.0253 pg-8.2.4/bin/postgres PGSemaphoreLock\n35 0.0253 pg-8.2.4/bin/postgres optimize\n35 0.0253 pg-8.2.4/bin/postgres slot_getattr\n35 0.0253 /usr/src/linux-2.6.22.15/vmlinux effective_prio\n35 0.0253 /usr/src/linux-2.6.22.15/vmlinux hrtimer_run_queues\n34 0.0246 /ip_tables (no symbols)\n34 0.0246 pg-8.2.4/bin/postgres ExecCountSlotsNode\n34 0.0246 pg-8.2.4/bin/postgres ExecMakeFunctionResultNoSets\n34 0.0246 pg-8.2.4/bin/postgres ResourceOwnerForgetBuffer\n34 0.0246 pg-8.2.4/bin/postgres TransactionIdPrecedes\n34 0.0246 pg-8.2.4/bin/postgres _bt_getroot\n34 0.0246 pg-8.2.4/lib/plpgsql.so exec_stmt_block\n34 0.0246 /usr/src/linux-2.6.22.15/vmlinux do_page_fault\n34 0.0246 /usr/src/linux-2.6.22.15/vmlinux run_rebalance_domains\n33 0.0239 pg-8.2.4/bin/postgres AtEOXact_GUC\n33 0.0239 /usr/src/linux-2.6.22.15/vmlinux __exit_idle\n33 0.0239 /usr/src/linux-2.6.22.15/vmlinux sched_clock\n32 0.0231 pg-8.2.4/bin/postgres _bt_moveright\n32 0.0231 pg-8.2.4/bin/postgres compact\n32 0.0231 /usr/src/linux-2.6.22.15/vmlinux __mod_timer\n31 0.0224 pg-8.2.4/bin/postgres CreateExprContext\n31 0.0224 pg-8.2.4/bin/postgres ExecInitNode\n31 0.0224 pg-8.2.4/bin/postgres ExecMakeFunctionResult\n31 0.0224 pg-8.2.4/bin/postgres ReleaseCatCache\n31 0.0224 pg-8.2.4/bin/postgres exprType\n30 0.0217 pg-8.2.4/bin/postgres _bt_binsrch\n30 0.0217 pg-8.2.4/bin/postgres new_tail_cell\n30 0.0217 /usr/src/linux-2.6.22.15/vmlinux run_timer_softirq\n29 0.0210 pg-8.2.4/bin/postgres ExecAssignScanProjectionInfo\n29 0.0210 pg-8.2.4/bin/postgres heap_form_tuple\n28 0.0202 /usr/lib/gconv/ISO8859-1.so (no symbols)\n28 0.0202 pg-8.2.4/bin/postgres AllocSetContextCreate\n28 0.0202 pg-8.2.4/bin/postgres ResourceOwnerForgetCatCacheRef\n28 0.0202 pg-8.2.4/bin/postgres hashoid\n28 0.0202 pg-8.2.4/bin/postgres new_list\n27 0.0195 pg-8.2.4/bin/postgres CreateTemplateTupleDesc\n27 0.0195 pg-8.2.4/bin/postgres MemoryContextStrdup\n27 0.0195 pg-8.2.4/bin/postgres SPI_execute_plan\n27 0.0195 pg-8.2.4/bin/postgres btint4cmp\n27 0.0195 /usr/src/linux-2.6.22.15/vmlinux IRQ0xa9_interrupt\n26 0.0188 pg-8.2.4/bin/postgres ExecBuildProjectionInfo\n26 0.0188 pg-8.2.4/bin/postgres ExecDropTupleTable\n26 0.0188 pg-8.2.4/bin/postgres ExecEndNode\n26 0.0188 pg-8.2.4/bin/postgres ExecEvalParam\n26 0.0188 pg-8.2.4/bin/postgres FreeExecutorState\n26 0.0188 pg-8.2.4/bin/postgres MemoryContextReset\n26 0.0188 /usr/src/linux-2.6.22.15/vmlinux smp_apic_timer_interrupt\n25 0.0181 pg-8.2.4/bin/postgres ExecInitSubPlan\n25 0.0181 pg-8.2.4/bin/postgres FuncnameGetCandidates\n25 0.0181 pg-8.2.4/bin/postgres MemoryContextDelete\n25 0.0181 pg-8.2.4/bin/postgres SearchCatCacheList\n25 0.0181 /usr/src/linux-2.6.22.15/vmlinux handle_edge_irq\n25 0.0181 /usr/src/linux-2.6.22.15/vmlinux msecs_to_jiffies\n24 0.0173 /nf_conntrack_ipv4 (no symbols)\n24 0.0173 pg-8.2.4/bin/postgres ExecInitIndexScan\n24 0.0173 pg-8.2.4/bin/postgres ExecProcNode\n24 0.0173 pg-8.2.4/bin/postgres InternalCreateExecutorState\n24 0.0173 pg-8.2.4/bin/postgres PostgresMain\n24 0.0173 
pg-8.2.4/bin/postgres RelationIncrementReferenceCount\n24 0.0173 pg-8.2.4/bin/postgres heapgettup_pagemode\n24 0.0173 /usr/src/linux-2.6.22.15/vmlinux __wake_up_common\n24 0.0173 /usr/src/linux-2.6.22.15/vmlinux get_task_mm\n24 0.0173 /usr/src/linux-2.6.22.15/vmlinux page_waitqueue\n23 0.0166 pg-8.2.4/bin/postgres ExecCheckRTPerms\n23 0.0166 pg-8.2.4/bin/postgres TransactionIdFollowsOrEquals\n23 0.0166 pg-8.2.4/bin/postgres internal_putbytes\n23 0.0166 pg-8.2.4/bin/postgres scanner_init\n23 0.0166 pg-8.2.4/lib/plpgsql.so exec_stmt_execsql\n23 0.0166 /usr/src/linux-2.6.22.15/vmlinux IRQ0xc1_interrupt\n23 0.0166 /usr/src/linux-2.6.22.15/vmlinux __do_softirq\n22 0.0159 pg-8.2.4/bin/postgres AfterTriggerBeginXact\n22 0.0159 pg-8.2.4/bin/postgres GetTransactionSnapshot\n22 0.0159 pg-8.2.4/bin/postgres ResourceOwnerReleaseInternal\n22 0.0159 pg-8.2.4/bin/postgres lcons\n22 0.0159 pg-8.2.4/bin/postgres markcanreach\n22 0.0159 pg-8.2.4/lib/plpgsql.so exec_eval_datum\n22 0.0159 pg-8.2.4/lib/plpgsql.so exec_eval_expr\n22 0.0159 pg-8.2.4/lib/utf8_and_win.so win_to_utf8\n22 0.0159 /usr/src/linux-2.6.22.15/vmlinux enqueue_task\n22 0.0159 /usr/src/linux-2.6.22.15/vmlinux math_state_restore\n21 0.0152 pg-8.2.4/bin/postgres AtCommit_Notify\n21 0.0152 pg-8.2.4/bin/postgres CreateTupleDescCopy\n21 0.0152 pg-8.2.4/bin/postgres ExecScan\n21 0.0152 pg-8.2.4/bin/postgres ResourceOwnerEnlargeCatCacheRefs\n21 0.0152 pg-8.2.4/bin/postgres ResourceOwnerForgetRelationRef\n21 0.0152 pg-8.2.4/bin/postgres ResourceOwnerRememberCatCacheRef\n21 0.0152 pg-8.2.4/bin/postgres SHMQueueInsertBefore\n21 0.0152 pg-8.2.4/bin/postgres exec_simple_query\n21 0.0152 pg-8.2.4/bin/postgres get_hash_value\n21 0.0152 pg-8.2.4/bin/postgres index_getnext\n21 0.0152 pg-8.2.4/bin/postgres next_token\n21 0.0152 pg-8.2.4/bin/postgres pgstat_report_activity\n21 0.0152 pg-8.2.4/lib/plpgsql.so exec_run_select\n20 0.0145 pg-8.2.4/bin/postgres AllocSetReset\n20 0.0145 pg-8.2.4/bin/postgres AtProcExit_Buffers\n20 0.0145 pg-8.2.4/bin/postgres LocalToUtf\n20 0.0145 pg-8.2.4/bin/postgres index_getprocinfo\n20 0.0145 pg-8.2.4/bin/postgres spi_dest_startup\n20 0.0145 pg-8.2.4/bin/postgres tag_hash\n20 0.0145 /usr/src/linux-2.6.22.15/vmlinux sysret_check\n19 0.0137 pg-8.2.4/bin/postgres Async_Unlisten\n19 0.0137 pg-8.2.4/bin/postgres FileSeek\n19 0.0137 pg-8.2.4/bin/postgres PortalStart\n19 0.0137 pg-8.2.4/bin/postgres heap_open\n19 0.0137 pg-8.2.4/bin/postgres pg_mblen\n19 0.0137 /usr/src/linux-2.6.22.15/vmlinux prepare_to_wait\n18 0.0130 /iptable_filter (no symbols)\n18 0.0130 /iptable_nat (no symbols)\n18 0.0130 /lib/libm-2.3.6.so (no symbols)\n18 0.0130 pg-8.2.4/bin/postgres CopySnapshot\n18 0.0130 pg-8.2.4/bin/postgres ExecSetSlotDescriptor\n18 0.0130 pg-8.2.4/bin/postgres SPI_connect\n18 0.0130 pg-8.2.4/bin/postgres cleartraverse\n18 0.0130 pg-8.2.4/bin/postgres pg_mbcliplen\n18 0.0130 pg-8.2.4/bin/postgres strlcpy\n18 0.0130 pg-8.2.4/lib/plpgsql.so plpgsql_compile\n18 0.0130 /usr/src/linux-2.6.22.15/vmlinux account_system_time\n18 0.0130 /usr/src/linux-2.6.22.15/vmlinux bit_waitqueue\n17 0.0123 pg-8.2.4/bin/postgres AtEOXact_MultiXact\n17 0.0123 pg-8.2.4/bin/postgres CreateTupleDescCopyConstr\n17 0.0123 pg-8.2.4/bin/postgres ExecAgg\n17 0.0123 pg-8.2.4/bin/postgres ExecResult\n17 0.0123 pg-8.2.4/bin/postgres FlushBuffer\n17 0.0123 pg-8.2.4/bin/postgres RelationGetIndexScan\n17 0.0123 pg-8.2.4/bin/postgres SHMQueueDelete\n17 0.0123 pg-8.2.4/bin/postgres SendRowDescriptionMessage\n17 0.0123 pg-8.2.4/bin/postgres _bt_search\n17 0.0123 
pg-8.2.4/bin/postgres _copyTargetEntry\n17 0.0123 pg-8.2.4/bin/postgres heap_getnext\n17 0.0123 pg-8.2.4/bin/postgres markreachable\n17 0.0123 pg-8.2.4/bin/postgres miss\n17 0.0123 pg-8.2.4/bin/postgres scanRTEForColumn\n17 0.0123 pg-8.2.4/lib/plpgsql.so exec_assign_value\n17 0.0123 /usr/src/linux-2.6.22.15/vmlinux do_softirq\n16 0.0116 pg-8.2.4/bin/postgres ExecEvalConst\n16 0.0116 pg-8.2.4/bin/postgres ExecReScan\n16 0.0116 pg-8.2.4/bin/postgres ExecSetParamPlan\n16 0.0116 pg-8.2.4/bin/postgres downcase_truncate_identifier\n16 0.0116 pg-8.2.4/bin/postgres transformExpr\n16 0.0116 pg-8.2.4/bin/postgres varstr_cmp\n16 0.0116 pg-8.2.4/lib/plpgsql.so plpgsql_call_handler\n16 0.0116 pg-8.2.4/lib/plpgsql.so plpgsql_estate_setup\n16 0.0116 /usr/src/linux-2.6.22.15/vmlinux cpu_idle\n15 0.0108 pg-8.2.4/bin/postgres CommitTransactionCommand\n15 0.0108 pg-8.2.4/bin/postgres ExecEvalFuncArgs\n15 0.0108 pg-8.2.4/bin/postgres GrantLockLocal\n15 0.0108 pg-8.2.4/bin/postgres PrepareToInvalidateCacheTuple\n15 0.0108 pg-8.2.4/bin/postgres UtfToLocal\n15 0.0108 pg-8.2.4/bin/postgres cost_nestloop\n15 0.0108 pg-8.2.4/bin/postgres errstart\n15 0.0108 pg-8.2.4/bin/postgres index_open\n15 0.0108 pg-8.2.4/bin/postgres init_fcache\n15 0.0108 pg-8.2.4/bin/postgres list_delete_cell\n14 0.0101 pg-8.2.4/bin/postgres ExecStoreTuple\n14 0.0101 pg-8.2.4/bin/postgres ReadNewTransactionId\n14 0.0101 pg-8.2.4/bin/postgres ReleaseBuffer\n14 0.0101 pg-8.2.4/bin/postgres ResourceOwnerRememberBuffer\n14 0.0101 pg-8.2.4/bin/postgres SHMQueueNext\n14 0.0101 pg-8.2.4/bin/postgres btgettuple\n14 0.0101 pg-8.2.4/bin/postgres duptraverse\n14 0.0101 pg-8.2.4/bin/postgres heap_compute_data_size\n14 0.0101 pg-8.2.4/bin/postgres index_rescan\n14 0.0101 pg-8.2.4/bin/postgres list_copy\n14 0.0101 pg-8.2.4/bin/postgres smgrclosenode\n14 0.0101 pg-8.2.4/lib/plpgsql.so .plt\n14 0.0101 /usr/src/linux-2.6.22.15/vmlinux idle_cpu\n14 0.0101 /usr/src/linux-2.6.22.15/vmlinux run_workqueue\n13 0.0094 pg-8.2.4/bin/postgres Async_Listen\n13 0.0094 pg-8.2.4/bin/postgres DLMoveToFront\n13 0.0094 pg-8.2.4/bin/postgres FreeExprContext\n13 0.0094 pg-8.2.4/bin/postgres IndexNext\n13 0.0094 pg-8.2.4/bin/postgres LockRelease\n13 0.0094 pg-8.2.4/bin/postgres ResourceOwnerEnlargeBuffers\n13 0.0094 pg-8.2.4/bin/postgres ShutdownExprContext\n13 0.0094 pg-8.2.4/bin/postgres cleanup\n13 0.0094 pg-8.2.4/bin/postgres datumCopy\n13 0.0094 pg-8.2.4/bin/postgres grouping_planner\n13 0.0094 pg-8.2.4/bin/postgres newstate\n13 0.0094 pg-8.2.4/bin/postgres pgstat_start\n13 0.0094 pg-8.2.4/bin/postgres spi_printtup\n13 0.0094 pg-8.2.4/bin/postgres transformStmt\n12 0.0087 pg-8.2.4/bin/postgres ExecEvalOper\n12 0.0087 pg-8.2.4/bin/postgres ExecQual\n12 0.0087 pg-8.2.4/bin/postgres IsSharedRelation\n12 0.0087 pg-8.2.4/bin/postgres RelationIdGetRelation\n12 0.0087 pg-8.2.4/bin/postgres StartTransactionCommand\n12 0.0087 pg-8.2.4/bin/postgres _bt_insertonpg\n12 0.0087 pg-8.2.4/bin/postgres advance_aggregates\n12 0.0087 pg-8.2.4/bin/postgres getvacant\n12 0.0087 pg-8.2.4/bin/postgres list_delete_ptr\n12 0.0087 pg-8.2.4/bin/postgres namestrcpy\n12 0.0087 pg-8.2.4/bin/postgres oideq\n12 0.0087 /usr/src/linux-2.6.22.15/vmlinux local_bh_enable_ip\n12 0.0087 /usr/src/linux-2.6.22.15/vmlinux sys_rt_sigreturn\n11 0.0080 pg-8.2.4/bin/postgres CommitTransaction\n11 0.0080 pg-8.2.4/bin/postgres ExecEvalNullTest\n11 0.0080 pg-8.2.4/bin/postgres LockRelationOid\n11 0.0080 pg-8.2.4/bin/postgres ProcessUtility\n11 0.0080 pg-8.2.4/bin/postgres UnGrantLock\n11 0.0080 pg-8.2.4/bin/postgres 
XidInSnapshot\n11 0.0080 pg-8.2.4/bin/postgres _bt_doinsert\n11 0.0080 pg-8.2.4/bin/postgres _bt_steppage\n11 0.0080 pg-8.2.4/bin/postgres appendBinaryStringInfo\n11 0.0080 pg-8.2.4/bin/postgres okcolors\n11 0.0080 pg-8.2.4/bin/postgres pq_getmessage\n11 0.0080 pg-8.2.4/lib/plpgsql.so exec_move_row\n11 0.0080 /usr/src/linux-2.6.22.15/vmlinux clocksource_get_next\n11 0.0080 /usr/src/linux-2.6.22.15/vmlinux handle_IRQ_event\n11 0.0080 /usr/src/linux-2.6.22.15/vmlinux local_bh_enable\n11 0.0080 /usr/src/linux-2.6.22.15/vmlinux mod_timer\n10 0.0072 pg-8.2.4/bin/postgres CommandEndInvalidationMessages\n10 0.0072 pg-8.2.4/bin/postgres CopyTriggerDesc\n10 0.0072 pg-8.2.4/bin/postgres ExecAssignExprContext\n10 0.0072 pg-8.2.4/bin/postgres ExecEvalScalarArrayOp\n10 0.0072 pg-8.2.4/bin/postgres FunctionCall5\n10 0.0072 pg-8.2.4/bin/postgres GetCurrentSubTransactionId\n10 0.0072 pg-8.2.4/bin/postgres RecordTransactionCommit\n10 0.0072 pg-8.2.4/bin/postgres StrategyGetBuffer\n10 0.0072 pg-8.2.4/bin/postgres _bt_getbuf\n10 0.0072 pg-8.2.4/bin/postgres _bt_preprocess_keys\n10 0.0072 pg-8.2.4/bin/postgres eval_const_expressions_mutator\n10 0.0072 pg-8.2.4/bin/postgres index_beginscan_internal\n10 0.0072 pg-8.2.4/bin/postgres new_head_cell\n10 0.0072 pg-8.2.4/bin/postgres perform_default_encoding_conversion\n10 0.0072 pg-8.2.4/bin/postgres pg_regcomp\n10 0.0072 pg-8.2.4/lib/plpgsql.so exec_eval_boolean\n10 0.0072 pg-8.2.4/lib/plpgsql.so exec_stmt_if\n10 0.0072 /usr/src/linux-2.6.22.15/vmlinux __rcu_pending\n10 0.0072 /usr/src/linux-2.6.22.15/vmlinux __rcu_process_callbacks\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 07 Jan 2008 12:00:12 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Jakub Ouhrabka wrote:\n\n> We've tried several times to get stacktrace from some of the running \n> backends during spikes, we got always this:\n>\n> 0x00002b005d00a9a9 in semop () from /lib/libc.so.6\n> #0 0x00002b005d00a9a9 in semop () from /lib/libc.so.6\n> #1 0x000000000054fe53 in PGSemaphoreLock (sema=0x2b00a04e5090, \n> interruptOK=0 '\\0') at pg_sema.c:411\n> #2 0x0000000000575d95 in LWLockAcquire (lockid=SInvalLock, \n> mode=LW_EXCLUSIVE) at lwlock.c:455\n> #3 0x000000000056fbfe in ReceiveSharedInvalidMessages \n> (invalFunction=0x5e9a30 <LocalExecuteInvalidationMessage>,\n> resetFunction=0x5e9df0 <InvalidateSystemCaches>) at sinval.c:159\n> #4 0x0000000000463505 in StartTransactionCommand () at xact.c:1439\n> #5 0x000000000056fa4b in ProcessCatchupEvent () at sinval.c:347\n> #6 0x000000000056fb20 in CatchupInterruptHandler \n> (postgres_signal_arg=<value optimized out>) at sinval.c:221\n> #7 0x00002b005cf6f110 in killpg () from /lib/libc.so.6\n> #8 0x0000000000000000 in ?? ()\n\nPerhaps it would make sense to try to take the \"fast path\" in\nSIDelExpiredDataEntries with only a shared lock rather than exclusive.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 7 Jan 2008 12:12:34 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n\tcores"
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n> We've tried several times to get stacktrace from some of the running \n> backends during spikes, we got always this:\n\n> 0x00002b005d00a9a9 in semop () from /lib/libc.so.6\n> #0 0x00002b005d00a9a9 in semop () from /lib/libc.so.6\n> #1 0x000000000054fe53 in PGSemaphoreLock (sema=0x2b00a04e5090, \n> interruptOK=0 '\\0') at pg_sema.c:411\n> #2 0x0000000000575d95 in LWLockAcquire (lockid=SInvalLock, \n> mode=LW_EXCLUSIVE) at lwlock.c:455\n> #3 0x000000000056fbfe in ReceiveSharedInvalidMessages \n> (invalFunction=0x5e9a30 <LocalExecuteInvalidationMessage>,\n> resetFunction=0x5e9df0 <InvalidateSystemCaches>) at sinval.c:159\n> #4 0x0000000000463505 in StartTransactionCommand () at xact.c:1439\n> #5 0x000000000056fa4b in ProcessCatchupEvent () at sinval.c:347\n> #6 0x000000000056fb20 in CatchupInterruptHandler \n> (postgres_signal_arg=<value optimized out>) at sinval.c:221\n\nCatchupInterruptHandler, eh? That seems to let NOTIFY off the hook, and\ninstead points in the direction of sinval processing; which is to say,\npropagation of changes to system catalogs. Does your app create and\ndestroy a tremendous number of temp tables, or anything else in the way\nof frequent DDL commands?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jan 2008 17:31:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n>>> Does your app create and destroy a tremendous number of temp tables,\n>>> or anything else in the way of frequent DDL commands?\n\n> Hmm. I can't think of anything like this. Maybe there are few backends \n> which create temp tables but not tremendous number. I don't think our \n> applications issue DDL statements either. Can LOCK TABLE IN EXCLUSIVE \n> MODE cause this?\n\nNo.\n\nI did some experimenting to see exactly how large the sinval message\nbuffer is in today's terms, and what I find is that about 22 cycles\nof\n\ncreate temp table foo (f1 int, f2 text); drop table foo;\n\nis enough to force a CatchupInterrupt to a sleeping backend. This case\nis a bit more complex than it appears since the text column forces the\ntemp table to have a toast table; but even with only fixed-width\ncolumns, if you were creating one temp table a second that would be\nplenty to explain once-a-minute-or-so CatchupInterrupt processing.\nAnd if you've got a lot of backends that normally *don't* wake up\nthat often, then they'd be sitting and failing to absorb the sinval\ntraffic in any more timely fashion. So I'm betting that we've got\nthe source of the spike identified. You could check this theory\nout by strace'ing some of the idle backends and seeing if their\nactivity spikes are triggered by receipt of SIGUSR1 signals.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jan 2008 19:01:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Perhaps it would make sense to try to take the \"fast path\" in\n> SIDelExpiredDataEntries with only a shared lock rather than exclusive.\n\nI think the real problem here is that sinval catchup processing is well\ndesigned to create contention :-(. Once we've decided that the message\nqueue is getting too full, we SIGUSR1 all the backends at once (or as\nfast as the postmaster can do it anyway), then they all go off and try\nto touch the sinval queue. Backends that haven't awoken even once\nsince the last time will have to process the entire queue contents,\nand they're all trying to do that at the same time. What's worse, they\ntake and release the SInvalLock once for each message they take off the\nqueue. This isn't so horrid for one-core machines (since each process\nwill monopolize the CPU for probably less than one timeslice while it's\ncatching up) but it's pretty obvious where all the contention is coming\nfrom on an 8-core.\n\nSome ideas for improving matters:\n\n1. Try to avoid having all the backends hit the queue at once. Instead\nof SIGUSR1'ing everybody at the same time, maybe hit only the process\nwith the oldest message pointer, and have him hit the next oldest after\nhe's done reading the queue.\n\n2. Try to take more than one message off the queue per SInvalLock cycle.\n(There is a tuning tradeoff here, since it would mean holding the lock\nfor longer at a time.)\n\n3. Try to avoid having every backend run SIDelExpiredDataEntries every\ntime through ReceiveSharedInvalidMessages. It's not critical to delete\nentries until the queue starts getting full --- maybe we could rejigger\nthe logic so it only happens once when somebody notices the queue is\ngetting full, or so that only the guy(s) who had nextMsgNum == minMsgNum\ndo it, or something like that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jan 2008 19:54:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": " > You could check this theory\n > out by strace'ing some of the idle backends and seeing if their\n > activity spikes are triggered by receipt of SIGUSR1 signals.\n\nYes, I can confirm that it's triggered by SIGUSR1 signals.\n\nIf I understand it correctly we have following choices now:\n\n1) Use only 2 cores (out of 8 cores)\n\n2) Lower the number of idle backends - at least force backends to do \nsomething at different times to eliminate spikes - is \"select 1\" enough \nto force processing the queue?\n\n3) Is there any chance of this being fixed/improved in 8.3 or even 8.2? \nIt's a (performance) bug from our point of view. I realize we're first \nwho noticed it and it's not typical use case to have so many idle \nbackends. But large installation with connection pooling is something \nsimilar and that's not that uncommon - one active backend doing DDL can \nthen cause unexpected spikes during otherwise quiet hours...\n\n4) Sure we'll try to reduce the number of DDL statements (which in fact \nwe're not sure where exactly they are comming from) but I guess it would \nonly longer the time between spikes but not make them any smoother.\n\nAny other suggestions?\n\nThanks,\n\nKuba\n\n",
"msg_date": "Tue, 08 Jan 2008 13:19:24 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n> Yes, I can confirm that it's triggered by SIGUSR1 signals.\n\nOK, that confirms the theory that it's sinval-queue contention.\n\n> If I understand it correctly we have following choices now:\n\n> 1) Use only 2 cores (out of 8 cores)\n\n> 2) Lower the number of idle backends - at least force backends to do \n> something at different times to eliminate spikes - is \"select 1\" enough \n> to force processing the queue?\n\nYeah, if you could get your clients to issue trivial queries every few\nseconds (not all at the same time) the spikes should go away.\n\nIf you don't want to change your clients, one possible amelioration is\nto reduce the signaling threshold in SIInsertDataEntry --- instead of\n70% of MAXNUMMESSAGES, maybe signal at 10% or 20%. That would make the\nspikes more frequent but smaller, which might help ... or not.\n\n\n> 3) Is there any chance of this being fixed/improved in 8.3 or even 8.2? \n\nI doubt we'd risk destabilizing 8.3 at this point, for a problem that\naffects so few people; let alone back-patching into 8.2. There are some\nother known performance problems in the sinval signaling (for instance,\nthat a queue overflow results in cache resets for all backends, not only\nthe slowest), so I think addressing all of them at once would be the\nthing to do. That would be a large enough patch that it would certainly\nneed to go through beta testing before I'd want to inflict it on the\nworld...\n\nThis discussion has raised the priority of the problem in my mind,\nso I'm thinking it should be worked on in 8.4; but it's too late for\n8.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jan 2008 12:25:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": "Hi Tom,\n\n > I doubt we'd risk destabilizing 8.3 at this point, for a problem that\n > affects so few people; let alone back-patching into 8.2.\n\nunderstand.\n\n > OK, that confirms the theory that it's sinval-queue contention.\n\nWe'we tried hard to identify what's the cause of filling sinval-queue. \nWe went through query logs as well as function bodies stored in the \ndatabase. We were not able to find any DDL, temp table creations etc.\n\nWe did following experiment: stop one of our clients, so there started \nto be queue of events (aka rows in db) for it to process. Then we \nstarted the client again, it started processing the queue - that means \ncalling simple selects, updates and complex plpgsql function(s). And at \nthis moment, the spike started even it shouldn't start to meet usual \nperiodicity. It's consistent. We are pretty sure that this client is not \ndoing any DDL...\n\nWhat should we look for to find the cause?\n\nThanks for any hints,\n\nKuba\n\n",
"msg_date": "Fri, 11 Jan 2008 12:11:36 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n> We'we tried hard to identify what's the cause of filling sinval-queue. \n> We went through query logs as well as function bodies stored in the \n> database. We were not able to find any DDL, temp table creations etc.\n\nStrange. The best idea that comes to mind is to add some debugging\ncode to SendSharedInvalidMessage to log the content of each message\nthat's sent out. That would at least tell us *what* is going into\nthe queue, even if not directly *why*. Try something like (untested)\n\nvoid\nSendSharedInvalidMessage(SharedInvalidationMessage *msg)\n{\n\tbool\t\tinsertOK;\n\n+\telog(LOG, \"sending inval msg %d %u %u %u %u %u\",\n+\t\tmsg->cc.id,\n+\t\tmsg->cc.tuplePtr.ip_blkid.bi_hi,\n+\t\tmsg->cc.tuplePtr.ip_blkid.bi_lo,\n+\t\tmsg->cc.tuplePtr.ip_posid,\n+\t\tmsg->cc.dbId,\n+\t\tmsg->cc.hashValue);\n+\n\tLWLockAcquire(SInvalLock, LW_EXCLUSIVE);\n\tinsertOK = SIInsertDataEntry(shmInvalBuffer, msg);\n\tLWLockRelease(SInvalLock);\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2008 10:05:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": "Hi Tom,\n\n > Strange. The best idea that comes to mind is to add some debugging\n > code to SendSharedInvalidMessage to log the content of each message\n > that's sent out. That would at least tell us *what* is going into\n > the queue, even if not directly *why*.\n\nwe've patched postgresql and run one of our plpgsql complex procedures. \nThere are many of sinval messages - log output is below.\n\nWhat does it mean?\n\nThanks,\n\nKuba\n\nLOG: sending inval msg 30 0 26 13 30036 4294936593\nLOG: sending inval msg 29 0 26 13 30036 337030170\nLOG: sending inval msg 30 0 25 46 30036 4294936593\nLOG: sending inval msg 29 0 25 46 30036 337030170\nLOG: sending inval msg 30 0 26 13 30036 4294936593\nLOG: sending inval msg 29 0 26 13 30036 337030170\nLOG: sending inval msg 30 0 25 45 30036 4294936595\nLOG: sending inval msg 29 0 25 45 30036 2019111801\nLOG: sending inval msg 30 0 26 11 30036 4294936595\nLOG: sending inval msg 29 0 26 11 30036 2019111801\nLOG: sending inval msg 30 0 25 44 30036 4294936597\nLOG: sending inval msg 29 0 25 44 30036 3703878920\nLOG: sending inval msg 30 0 26 10 30036 4294936597\nLOG: sending inval msg 29 0 26 10 30036 3703878920\nLOG: sending inval msg 30 0 26 9 30036 4294936616\nLOG: sending inval msg 29 0 26 9 30036 3527122063\nLOG: sending inval msg 30 0 25 43 30036 4294936616\nLOG: sending inval msg 29 0 25 43 30036 3527122063\nLOG: sending inval msg 30 0 26 9 30036 4294936616\nLOG: sending inval msg 29 0 26 9 30036 3527122063\nLOG: sending inval msg 30 0 25 41 30036 4294936618\nLOG: sending inval msg 29 0 25 41 30036 2126866956\nLOG: sending inval msg 30 0 26 7 30036 4294936618\nLOG: sending inval msg 29 0 26 7 30036 2126866956\nLOG: sending inval msg 30 0 25 40 30036 4294936620\nLOG: sending inval msg 29 0 25 40 30036 1941919314\nLOG: sending inval msg 30 0 26 5 30036 4294936620\nLOG: sending inval msg 29 0 26 5 30036 1941919314\nLOG: sending inval msg 30 0 26 4 30036 4294936633\nLOG: sending inval msg 29 0 26 4 30036 544523647\nLOG: sending inval msg 30 0 25 39 30036 4294936633\nLOG: sending inval msg 29 0 25 39 30036 544523647\nLOG: sending inval msg 30 0 26 4 30036 4294936633\nLOG: sending inval msg 29 0 26 4 30036 544523647\nLOG: sending inval msg 30 0 25 38 30036 4294936635\nLOG: sending inval msg 29 0 25 38 30036 2557582018\nLOG: sending inval msg 30 0 26 3 30036 4294936635\nLOG: sending inval msg 29 0 26 3 30036 2557582018\nLOG: sending inval msg 30 0 25 37 30036 4294936637\nLOG: sending inval msg 29 0 25 37 30036 2207280630\nLOG: sending inval msg 30 0 26 2 30036 4294936637\nLOG: sending inval msg 29 0 26 2 30036 2207280630\nLOG: sending inval msg 30 0 26 1 30036 4294936669\nLOG: sending inval msg 29 0 26 1 30036 1310188568\nLOG: sending inval msg 30 0 25 36 30036 4294936669\nLOG: sending inval msg 29 0 25 36 30036 1310188568\nLOG: sending inval msg 30 0 26 1 30036 4294936669\nLOG: sending inval msg 29 0 26 1 30036 1310188568\nLOG: sending inval msg 30 0 25 35 30036 4294936671\nLOG: sending inval msg 29 0 25 35 30036 2633053415\nLOG: sending inval msg 30 0 25 48 30036 4294936671\nLOG: sending inval msg 29 0 25 48 30036 2633053415\nLOG: sending inval msg 30 0 25 33 30036 4294936673\nLOG: sending inval msg 29 0 25 33 30036 2049964857\nLOG: sending inval msg 30 0 25 47 30036 4294936673\nLOG: sending inval msg 29 0 25 47 30036 2049964857\nLOG: sending inval msg -1 0 30036 0 30700 3218341912\nLOG: sending inval msg -2 2084 1663 0 30036 50335\nLOG: sending inval msg -2 0 1663 0 30036 50336\nLOG: sending inval msg -1 2075 30036 0 30702 
30036\nLOG: sending inval msg -2 0 1663 0 30036 50324\nLOG: sending inval msg -1 0 30036 0 30702 30036\nLOG: sending inval msg -2 0 1663 0 30036 50336\nLOG: sending inval msg -2 0 1663 0 30036 50323\nLOG: sending inval msg -1 0 30036 0 30700 30036\nLOG: sending inval msg -2 0 1663 0 30036 50335\nLOG: sending inval msg -2 0 1663 0 30036 50322\nLOG: sending inval msg -1 0 30036 0 30698 30036\nLOG: sending inval msg -2 0 1663 0 30036 50334\nLOG: sending inval msg -1 0 30036 0 30677 3218341912\nLOG: sending inval msg -2 2084 1663 0 30036 50332\nLOG: sending inval msg -2 0 1663 0 30036 50333\nLOG: sending inval msg -1 2075 30036 0 30679 30036\nLOG: sending inval msg -2 0 1663 0 30036 50321\nLOG: sending inval msg -1 0 30036 0 30679 30036\nLOG: sending inval msg -2 0 1663 0 30036 50333\nLOG: sending inval msg -2 0 1663 0 30036 50320\nLOG: sending inval msg -1 0 30036 0 30677 30036\nLOG: sending inval msg -2 0 1663 0 30036 50332\nLOG: sending inval msg -2 0 1663 0 30036 50319\nLOG: sending inval msg -1 0 30036 0 30675 30036\nLOG: sending inval msg -2 0 1663 0 30036 50331\nLOG: sending inval msg -1 0 30036 0 30660 3218341912\nLOG: sending inval msg -2 2084 1663 0 30036 50329\nLOG: sending inval msg -2 0 1663 0 30036 50330\nLOG: sending inval msg -1 2075 30036 0 30662 30036\nLOG: sending inval msg -2 0 1663 0 30036 50318\nLOG: sending inval msg -1 0 30036 0 30662 30036\nLOG: sending inval msg -2 0 1663 0 30036 50330\nLOG: sending inval msg -2 0 1663 0 30036 50317\nLOG: sending inval msg -1 0 30036 0 30660 30036\nLOG: sending inval msg -2 0 1663 0 30036 50329\nLOG: sending inval msg -2 0 1663 0 30036 50316\nLOG: sending inval msg -1 0 30036 0 30658 30036\nLOG: sending inval msg -2 0 1663 0 30036 50328\nLOG: sending inval msg -1 0 30036 0 30624 3218341912\nLOG: sending inval msg -2 2084 1663 0 30036 50326\nLOG: sending inval msg -2 0 1663 0 30036 50327\nLOG: sending inval msg -1 2075 30036 0 30626 30036\nLOG: sending inval msg -2 0 1663 0 30036 50315\nLOG: sending inval msg -1 0 30036 0 30626 30036\nLOG: sending inval msg -2 0 1663 0 30036 50327\nLOG: sending inval msg -2 0 1663 0 30036 50314\nLOG: sending inval msg -1 0 30036 0 30624 30036\nLOG: sending inval msg -2 0 1663 0 30036 50326\nLOG: sending inval msg -2 0 1663 0 30036 50313\nLOG: sending inval msg -1 0 30036 0 30622 30036\nLOG: sending inval msg -2 0 1663 0 30036 50325\n\n\nTom Lane napsal(a):\n> Jakub Ouhrabka <[email protected]> writes:\n>> We'we tried hard to identify what's the cause of filling sinval-queue. \n>> We went through query logs as well as function bodies stored in the \n>> database. We were not able to find any DDL, temp table creations etc.\n> \n> Strange. The best idea that comes to mind is to add some debugging\n> code to SendSharedInvalidMessage to log the content of each message\n> that's sent out. That would at least tell us *what* is going into\n> the queue, even if not directly *why*. Try something like (untested)\n> \n> void\n> SendSharedInvalidMessage(SharedInvalidationMessage *msg)\n> {\n> \tbool\t\tinsertOK;\n> \n> +\telog(LOG, \"sending inval msg %d %u %u %u %u %u\",\n> +\t\tmsg->cc.id,\n> +\t\tmsg->cc.tuplePtr.ip_blkid.bi_hi,\n> +\t\tmsg->cc.tuplePtr.ip_blkid.bi_lo,\n> +\t\tmsg->cc.tuplePtr.ip_posid,\n> +\t\tmsg->cc.dbId,\n> +\t\tmsg->cc.hashValue);\n> +\n> \tLWLockAcquire(SInvalLock, LW_EXCLUSIVE);\n> \tinsertOK = SIInsertDataEntry(shmInvalBuffer, msg);\n> \tLWLockRelease(SInvalLock);\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Jan 2008 12:47:05 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n> What does it mean?\n\nLook at src/include/storage/sinval.h and src/include/utils/syscache.h.\nWhat you seem to have here is a bunch of tuple updates in pg_class\n(invalidating caches 29 and 30, which in 8.2 correspond to RELNAMENSP\nand RELOID), followed by a bunch of SharedInvalRelcacheMsg and\nSharedInvalSmgrMsg.\n\nWhat I find interesting is that the hits are coming against\nnearly-successive tuple CTIDs in pg_class, eg these are all\non pages 25 and 26 of pg_class:\n\n> LOG: sending inval msg 30 0 25 45 30036 4294936595\n> LOG: sending inval msg 29 0 25 45 30036 2019111801\n> LOG: sending inval msg 30 0 26 11 30036 4294936595\n> LOG: sending inval msg 29 0 26 11 30036 2019111801\n> LOG: sending inval msg 30 0 25 44 30036 4294936597\n> LOG: sending inval msg 29 0 25 44 30036 3703878920\n> LOG: sending inval msg 30 0 26 10 30036 4294936597\n> LOG: sending inval msg 29 0 26 10 30036 3703878920\n> LOG: sending inval msg 30 0 26 9 30036 4294936616\n> LOG: sending inval msg 29 0 26 9 30036 3527122063\n> LOG: sending inval msg 30 0 25 43 30036 4294936616\n> LOG: sending inval msg 29 0 25 43 30036 3527122063\n\nThe ordering is a little strange --- not sure what's producing that.\n\nI can think of three things that might be producing this:\n\n1. DDL operations ... but most sorts of DDL on a table would touch\nmore catalogs than just pg_class, so this is a bit hard to credit.\n\n2. VACUUM.\n\n3. Some sort of direct update of pg_class.\n\nThe fact that we have a bunch of catcache invals followed by\nrelcache/smgr invals says that this all happened in one transaction,\nelse they'd have been intermixed better. That lets VACUUM off the\nhook, because it processes each table in a separate transaction.\n\nI am wondering if maybe your app does one of those sneaky things like\nfooling with pg_class.reltriggers. If so, the problem might be soluble\nby just avoiding unnecessary updates.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Jan 2008 10:22:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": "Hi Tom,\n\n > I can think of three things that might be producing this:\n\nwe've found it: TRUNCATE\n\nWe'll try to eliminate use of TRUNCATE and the periodical spikes should \ngo off. There will still be possibility of spikes because of database \ncreation etc - we'll try to handle this by issuing trivial commands from \nidle backeds and in longer run to decrease the number of \nbackends/databases running. This is the way to go, right?\n\nOne more question: is it ok to do mass regexp update of pg_proc.prosrc \nchanging TRUNCATEs to DELETEs? Anything we should be aware of? Sure \nwe'll do our own testing but in case you see any obvious downsides...\n\nMany thanks for your guidance!\n\nKuba\n",
"msg_date": "Tue, 15 Jan 2008 13:09:18 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Hi\n\n> > I can think of three things that might be producing this:\n> \n> we've found it: TRUNCATE\n\nI haven't been following this thread. Can someone please explain to me \nwhy TRUNCATE causes these spikes?\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Tue, 15 Jan 2008 14:26:10 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n> we've found it: TRUNCATE\n\nHuh. One transaction truncating a dozen tables? That would match the\nsinval trace all right ...\n\n> One more question: is it ok to do mass regexp update of pg_proc.prosrc \n> changing TRUNCATEs to DELETEs?\n\nYou might be throwing the baby out with the bathwater, performance-wise.\nMass DELETEs will require cleanup by VACUUM, and that'll likely eat more\ncycles and I/O than you save. I'd think in terms of trying to spread\nout the TRUNCATEs or check to see if you really need one (maybe the\ntable's already empty), rather than this.\n\nI do plan to look at the sinval code once 8.3 is out the door, so\nanother possibility if you can wait a few weeks/months is to leave your\napp alone and see if the eventual patch looks sane to back-patch.\n(I don't think the community would consider doing so, but you could\nrun a locally modified Postgres with it.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jan 2008 10:20:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": " > Huh. One transaction truncating a dozen tables? That would match the\n > sinval trace all right ...\n\nIt should be 4 tables - the shown log looks like there were more truncates?\n\n > You might be throwing the baby out with the bathwater,\n > performance-wise.\n\nYes, performance was the initial reason to use truncate instead of \ndelete many years ago. But today the truncated tables usualy contain \nexactly one row - quick measurements now show that it's faster to issue \ndelete instead of truncate in this case.\n\nAgain, many thanks for your invaluable advice!\n\nKuba\n",
"msg_date": "Tue, 15 Jan 2008 16:45:57 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "Adrian Moisey <[email protected]> writes:\n>> we've found it: TRUNCATE\n\n> I haven't been following this thread. Can someone please explain to me \n> why TRUNCATE causes these spikes?\n\nIt's not so much the TRUNCATE as the overhead of broadcasting the\nresultant catalog changes to the many hundreds of (mostly idle)\nbackends he's got --- all of which respond by trying to lock the\nshared sinval message queue at about the same time.\n\nYou could see the whole thread as an object lesson in why connection\npooling is a good idea. But certainly it seems that sinval is the\nnext bottleneck in terms of being able to scale Postgres up to very\nlarge numbers of backends.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jan 2008 10:51:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
{
"msg_contents": "Jakub Ouhrabka <[email protected]> writes:\n>>> Huh. One transaction truncating a dozen tables? That would match the\n>>> sinval trace all right ...\n\n> It should be 4 tables - the shown log looks like there were more truncates?\n\nActually, counting up the entries, there are close to 2 dozen relations\napparently being truncated in the trace you showed. But that might be\nonly four tables at the user level, since each index on these tables\nwould appear separately, and you might have a toast table plus index\nfor each one too. If you want to dig down, the table OIDs are visible\nin the trace, in the messages with type -1:\n\n>> LOG: sending inval msg -1 0 30036 0 30700 3218341912\n ^^^^^ ^^^^^\n DBOID RELOID\n\nso you could look into pg_class to confirm what's what.\n\n> Yes, performance was the initial reason to use truncate instead of \n> delete many years ago. But today the truncated tables usualy contain \n> exactly one row - quick measurements now show that it's faster to issue \n> delete instead of truncate in this case.\n\nOkay, for a table of just a few entries I agree that DELETE is probably\nbetter. But don't forget you're going to need to have those tables\nvacuumed fairly regularly now, else they'll start to bloat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jan 2008 11:26:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8 cores "
},
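Following Tom's pointer that the type -1 (relcache) messages carry the database OID and relation OID, the OIDs from the trace could be mapped back to table names with a lookup along these lines (run in the database whose OID is 30036; the relation OIDs listed are simply the ones appearing in the trace, shown for illustration):

    SELECT c.oid, n.nspname, c.relname, c.relkind
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.oid IN (30700, 30702, 30677, 30679, 30660, 30662, 30624, 30626);
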
{
"msg_contents": " > Okay, for a table of just a few entries I agree that DELETE is\n > probably better. But don't forget you're going to need to have those\n > tables vacuumed fairly regularly now, else they'll start to bloat.\n\nI think we'll go with DELETE also for another reason:\n\nJust after we figured out the cause of the spikes we started to \ninvestigate a long-term issue we had with PostgreSQL: pg_dump of big \ndatabase was blocking some of our applications. And yes, we replaced \nTRUNCATE with DELETE and everything is running as expected.\n\nLooking at the docs now I see there is a new paragraph in 8.3 docs \nmentioning that TRUNCATE is not MVCC-safe and also the blocking issue. \nIt's a pity that the warning wasn't there in 7.1 times :-)\n\nThanks,\n\nKuba\n\nTom Lane napsal(a):\n> Jakub Ouhrabka <[email protected]> writes:\n>>>> Huh. One transaction truncating a dozen tables? That would match the\n>>>> sinval trace all right ...\n> \n>> It should be 4 tables - the shown log looks like there were more truncates?\n> \n> Actually, counting up the entries, there are close to 2 dozen relations\n> apparently being truncated in the trace you showed. But that might be\n> only four tables at the user level, since each index on these tables\n> would appear separately, and you might have a toast table plus index\n> for each one too. If you want to dig down, the table OIDs are visible\n> in the trace, in the messages with type -1:\n> \n>>> LOG: sending inval msg -1 0 30036 0 30700 3218341912\n> ^^^^^ ^^^^^\n> DBOID RELOID\n> \n> so you could look into pg_class to confirm what's what.\n> \n>> Yes, performance was the initial reason to use truncate instead of \n>> delete many years ago. But today the truncated tables usualy contain \n>> exactly one row - quick measurements now show that it's faster to issue \n>> delete instead of truncate in this case.\n> \n> Okay, for a table of just a few entries I agree that DELETE is probably\n> better. But don't forget you're going to need to have those tables\n> vacuumed fairly regularly now, else they'll start to bloat.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jan 2008 18:06:33 +0100",
"msg_from": "Jakub Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
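To make the switch concrete, a minimal sketch of the pattern Kuba describes - a table that usually holds exactly one row, refreshed with DELETE instead of TRUNCATE - might look like this (the table and column names are invented for illustration):

    -- hypothetical one-row status table
    CREATE TABLE import_status (last_run timestamptz);

    BEGIN;
    DELETE FROM import_status;               -- instead of TRUNCATE import_status
    INSERT INTO import_status VALUES (now());
    COMMIT;

    -- deleted rows are only reclaimed by vacuum, so run it regularly:
    VACUUM import_status;
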
{
"msg_contents": "On Mon, 2008-01-07 at 19:54 -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Perhaps it would make sense to try to take the \"fast path\" in\n> > SIDelExpiredDataEntries with only a shared lock rather than exclusive.\n> \n> I think the real problem here is that sinval catchup processing is well\n> designed to create contention :-(. \n\nThinking some more about handling TRUNCATEs...\n\n> Some ideas for improving matters:\n> \n> 1. Try to avoid having all the backends hit the queue at once. Instead\n> of SIGUSR1'ing everybody at the same time, maybe hit only the process\n> with the oldest message pointer, and have him hit the next oldest after\n> he's done reading the queue.\n> \n> 2. Try to take more than one message off the queue per SInvalLock cycle.\n> (There is a tuning tradeoff here, since it would mean holding the lock\n> for longer at a time.)\n> \n> 3. Try to avoid having every backend run SIDelExpiredDataEntries every\n> time through ReceiveSharedInvalidMessages. It's not critical to delete\n> entries until the queue starts getting full --- maybe we could rejigger\n> the logic so it only happens once when somebody notices the queue is\n> getting full, or so that only the guy(s) who had nextMsgNum == minMsgNum\n> do it, or something like that?\n\n(2) is unnecessary if we can reduce the number of Exclusive lockers so\nthat repeated access to the backend's messages is not contended.\n\n(1) would do this, but seems like it would be complex. We can reduce the\npossibility of multiple re-signals though.\n\n(3) seems like the easiest route, as long as we get a reasonable\nalgorithm for reducing the access rate to a reasonable level. \n\nI'm posting a patch for discussion to -patches now that will do this. It\nseems straightforward enough to include in 8.3, but that may rise a few\neyebrows, but read the patch first.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 25 Jan 2008 23:27:09 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n\tcores"
},
{
"msg_contents": "On Fri, 25 Jan 2008, Simon Riggs wrote:\n>> 1. Try to avoid having all the backends hit the queue at once. Instead\n>> of SIGUSR1'ing everybody at the same time, maybe hit only the process\n>> with the oldest message pointer, and have him hit the next oldest after\n>> he's done reading the queue.\n\nMy feeling was that an \"obvious\" way to deal with this is to implement \nsome sort of \"random early detect\". That is, randomly SIGUSR1 processes as \nentries are added to the queue. The signals should become more frequent as \nthe queue length increases, until it reaches the current cut-off of \nsignalling everyone when the queue really is full. The hope would be that \nthat would never happen.\n\nMatthew\n",
"msg_date": "Mon, 28 Jan 2008 14:27:34 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem with 8\n cores"
},
{
"msg_contents": "\nAdded to TODO:\n\n* Improve performance of shared invalidation queue for multiple CPUs\n\n http://archives.postgresql.org/pgsql-performance/2008-01/msg00023.php\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Perhaps it would make sense to try to take the \"fast path\" in\n> > SIDelExpiredDataEntries with only a shared lock rather than exclusive.\n> \n> I think the real problem here is that sinval catchup processing is well\n> designed to create contention :-(. Once we've decided that the message\n> queue is getting too full, we SIGUSR1 all the backends at once (or as\n> fast as the postmaster can do it anyway), then they all go off and try\n> to touch the sinval queue. Backends that haven't awoken even once\n> since the last time will have to process the entire queue contents,\n> and they're all trying to do that at the same time. What's worse, they\n> take and release the SInvalLock once for each message they take off the\n> queue. This isn't so horrid for one-core machines (since each process\n> will monopolize the CPU for probably less than one timeslice while it's\n> catching up) but it's pretty obvious where all the contention is coming\n> from on an 8-core.\n> \n> Some ideas for improving matters:\n> \n> 1. Try to avoid having all the backends hit the queue at once. Instead\n> of SIGUSR1'ing everybody at the same time, maybe hit only the process\n> with the oldest message pointer, and have him hit the next oldest after\n> he's done reading the queue.\n> \n> 2. Try to take more than one message off the queue per SInvalLock cycle.\n> (There is a tuning tradeoff here, since it would mean holding the lock\n> for longer at a time.)\n> \n> 3. Try to avoid having every backend run SIDelExpiredDataEntries every\n> time through ReceiveSharedInvalidMessages. It's not critical to delete\n> entries until the queue starts getting full --- maybe we could rejigger\n> the logic so it only happens once when somebody notices the queue is\n> getting full, or so that only the guy(s) who had nextMsgNum == minMsgNum\n> do it, or something like that?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Fri, 21 Mar 2008 21:44:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux/PostgreSQL scalability issue - problem\n with 8 cores"
}
] |
[
{
"msg_contents": "Using Postgresql 8.1.10 every so often I get a transaction that takes a\nwhile to commit.\n\nI log everything that takes over 500ms and quite reguallly it says things\nlike\n\n707.036 ms statement: COMMIT\n\nIs there anyway to speed this up?\n\nPeter Childs\n\nUsing Postgresql 8.1.10 every so often I get a transaction that takes a while to commit.I log everything that takes over 500ms and quite reguallly it says things like707.036 ms statement: COMMITIs there anyway to speed this up?\nPeter Childs",
"msg_date": "Thu, 3 Jan 2008 14:35:37 +0000",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commit takes a long time."
},
{
"msg_contents": "Hello\n\nOn 03/01/2008, Peter Childs <[email protected]> wrote:\n> Using Postgresql 8.1.10 every so often I get a transaction that takes a\n> while to commit.\n>\n> I log everything that takes over 500ms and quite reguallly it says things\n> like\n>\n> 707.036 ms statement: COMMIT\n>\n> Is there anyway to speed this up?\n>\n\nthere can be two issues:\na) some trigger activity for DEFERRED constraints\nb) slow write to WAL\n\nhttp://www.westnet.com/~gsmith/content/postgresql/\n\nin normal cases COMMIT is really fast operation.\n\nRegards\nPavel Stehule\n\n> Peter Childs\n>\n>\n>\n>\n",
"msg_date": "Thu, 3 Jan 2008 17:20:01 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit takes a long time."
},
{
"msg_contents": "\"Peter Childs\" <[email protected]> writes:\n> Using Postgresql 8.1.10 every so often I get a transaction that takes a\n> while to commit.\n\n> I log everything that takes over 500ms and quite reguallly it says things\n> like\n\n> 707.036 ms statement: COMMIT\n\nAFAIK there are only two likely explanations for that:\n\n1. You have a lot of deferred triggers that have to run at COMMIT time.\n\n2. The disk system gets so bottlenecked that fsync'ing the commit record\ntakes a long time.\n\nIf it's #2 you could probably correlate the problem with spikes in I/O\nactivity as seen in iostat or vmstat.\n\nIf it is a disk usage spike then I would make the further guess that\nwhat causes it might be a Postgres checkpoint. You might be able to\ndampen the spike a bit by playing with the checkpoint parameters, but\nthe only real fix will be 8.3's spread-out-checkpoints feature.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Jan 2008 11:35:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit takes a long time. "
},
{
"msg_contents": "On 03/01/2008, Tom Lane <[email protected]> wrote:\n>\n> \"Peter Childs\" <[email protected]> writes:\n> > Using Postgresql 8.1.10 every so often I get a transaction that takes a\n> > while to commit.\n>\n> > I log everything that takes over 500ms and quite reguallly it says\n> things\n> > like\n>\n> > 707.036 ms statement: COMMIT\n>\n> AFAIK there are only two likely explanations for that:\n>\n> 1. You have a lot of deferred triggers that have to run at COMMIT time.\n>\n> 2. The disk system gets so bottlenecked that fsync'ing the commit record\n> takes a long time.\n>\n> If it's #2 you could probably correlate the problem with spikes in I/O\n> activity as seen in iostat or vmstat.\n>\n> If it is a disk usage spike then I would make the further guess that\n> what causes it might be a Postgres checkpoint. You might be able to\n> dampen the spike a bit by playing with the checkpoint parameters, but\n> the only real fix will be 8.3's spread-out-checkpoints feature.\n>\n> regards, tom lane\n>\n\n\n2 Seams most likely as they seam to occur more often when other when large\nqueries (they are often followed by a record for a very very long query in a\ndeferent transaction) or at particularly busy period when quite a lots of\nother short queries are also taking place.\n\nI planning an upgrade to 8.3 once its out anyway so that might increase\nspeed anyway.\n\nPeter.\n\nOn 03/01/2008, Tom Lane <[email protected]> wrote:\n\"Peter Childs\" <[email protected]> writes:> Using Postgresql 8.1.10 every so often I get a transaction that takes a> while to commit.> I log everything that takes over 500ms and quite reguallly it says things\n> like> 707.036 ms statement: COMMITAFAIK there are only two likely explanations for that:1. You have a lot of deferred triggers that have to run at COMMIT time.2. The disk system gets so bottlenecked that fsync'ing the commit record\ntakes a long time.If it's #2 you could probably correlate the problem with spikes in I/Oactivity as seen in iostat or vmstat.If it is a disk usage spike then I would make the further guess that\nwhat causes it might be a Postgres checkpoint. You might be able todampen the spike a bit by playing with the checkpoint parameters, butthe only real fix will be 8.3's spread-out-checkpoints feature.\n regards, tom lane2 Seams most likely as they seam to occur more often when other when large queries (they are often followed by a record for a very very long query in a deferent transaction) or at particularly busy period when quite a lots of other short queries are also taking place. \nI planning an upgrade to 8.3 once its out anyway so that might increase speed anyway.Peter.",
"msg_date": "Fri, 4 Jan 2008 08:46:37 +0000",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commit takes a long time."
},
{
"msg_contents": "On Thu, 2008-01-03 at 11:35 -0500, Tom Lane wrote:\n> \"Peter Childs\" <[email protected]> writes:\n> > Using Postgresql 8.1.10 every so often I get a transaction that takes a\n> > while to commit.\n> \n> > I log everything that takes over 500ms and quite reguallly it says things\n> > like\n> \n> > 707.036 ms statement: COMMIT\n> \n> AFAIK there are only two likely explanations for that:\n> \n> 1. You have a lot of deferred triggers that have to run at COMMIT time.\n> \n> 2. The disk system gets so bottlenecked that fsync'ing the commit record\n> takes a long time.\n\nI've seen 3 other reasons for this in the field while tuning people's\nsystems. In 8.3 we've fixed one, reduced the other and the third is\namenable to tuning via wal_buffers even in 8.1\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 04 Jan 2008 10:17:17 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit takes a long time."
}
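For reference, the knobs touched on in this thread (checkpoint behaviour and, per Simon's hint, wal_buffers) live in postgresql.conf; on an 8.1-era server an experiment might look roughly like the excerpt below. The values are illustrative only, not recommendations, and need tuning against the actual workload:

    # postgresql.conf (illustrative values only)
    checkpoint_segments = 16      # default 3; more segments spread checkpoints out
    checkpoint_timeout = 300      # seconds between forced checkpoints
    wal_buffers = 64              # default 8; measured in 8kB pages
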
] |
[
{
"msg_contents": "Hi. Running postgres 8.2 on debian.\nI've noticed that concurrent inserts (archiving) of large batches data\ninto two completely unrelated tables are many times slower than the\nsame inserts done in sequence.\nIs there any way to speed them up apart from buying faster HDs/\nchanging RAID configuration?\n",
"msg_date": "Sat, 5 Jan 2008 19:00:00 -0800 (PST)",
"msg_from": "Sergei Shelukhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "concurrent inserts into two separate tables are very slow"
},
{
"msg_contents": "On Jan 5, 2008 9:00 PM, Sergei Shelukhin <[email protected]> wrote:\n> Hi. Running postgres 8.2 on debian.\n> I've noticed that concurrent inserts (archiving) of large batches data\n> into two completely unrelated tables are many times slower than the\n> same inserts done in sequence.\n> Is there any way to speed them up apart from buying faster HDs/\n> changing RAID configuration?\n\nWhat method are you using to load these data? Got a short example\nthat illustrates what you're doing?\n",
"msg_date": "Mon, 7 Jan 2008 16:17:19 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: concurrent inserts into two separate tables are very slow"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Jan 5, 2008 9:00 PM, Sergei Shelukhin <[email protected]> wrote:\n> \n>> Hi. Running postgres 8.2 on debian.\n>> I've noticed that concurrent inserts (archiving) of large batches data\n>> into two completely unrelated tables are many times slower than the\n>> same inserts done in sequence.\n>> Is there any way to speed them up apart from buying faster HDs/\n>> changing RAID configuration?\n>> \n>\n> What method are you using to load these data? Got a short example\n> that illustrates what you're doing?\n>\n> \nThe basic structure is as follows: there are several tables with \ntransaction data that is stored for one month only.\nThe data comes from several sources in different formats and is pushed \nin using a custom script.\nIt gets the source data and puts it into a table it creates (import \ntable) with the same schema as the main table; then it deletes the month \nold data from the main table; it also searches for duplicates in the \nmain table using some specific criteria and deletes them too (to make \nuse of indexes 2nd temp table is created with id int column and it's \npopulated with one insert ... select query with the transaction ids of \ndata duplicate in main and import tables, after that delete from pages \nwhere id in (select id from 2nd-temp-table) is called). Then it inserts \nthe remainder of the imports table into the main table.\nThere are several data load processes that function in the same manner \nwith different target tables.\nWhen they are running in sequence, they take about 20 minutes to \ncomplete on average. If, however, they are running in parallel, they can \ntake up to 3 hours... I was wondering if it's solely the HD bottleneck \ncase, given that there's plenty of CPU and RAM available and postgres is \nconfigured to use it.\n\n",
"msg_date": "Tue, 08 Jan 2008 01:49:42 +0300",
"msg_from": "Sergei Shelukhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: concurrent inserts into two separate tables are very\n slow"
},
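A rough sketch of the load pattern described above, with invented table names (main_tx for the main table, import_tx for the staging table) and a deliberately simplified one-column duplicate test, might be:

    CREATE TABLE import_tx (LIKE main_tx);        -- staging table with the same schema
    -- ... bulk-load the source data into import_tx here ...

    DELETE FROM main_tx WHERE tx_date < now() - interval '1 month';

    CREATE TEMP TABLE dup_ids AS
        SELECT m.id FROM main_tx m JOIN import_tx i ON i.id = m.id;
    DELETE FROM main_tx WHERE id IN (SELECT id FROM dup_ids);

    INSERT INTO main_tx SELECT * FROM import_tx;
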
{
"msg_contents": "On Jan 7, 2008 4:49 PM, Sergei Shelukhin <[email protected]> wrote:\n>\n> Scott Marlowe wrote:\n> > On Jan 5, 2008 9:00 PM, Sergei Shelukhin <[email protected]> wrote:\n> >\n> >> Hi. Running postgres 8.2 on debian.\n> >> I've noticed that concurrent inserts (archiving) of large batches data\n> >> into two completely unrelated tables are many times slower than the\n> >> same inserts done in sequence.\n> >> Is there any way to speed them up apart from buying faster HDs/\n> >> changing RAID configuration?\n> >>\n> >\n> > What method are you using to load these data? Got a short example\n> > that illustrates what you're doing?\n> >\n> >\n> The basic structure is as follows: there are several tables with\n> transaction data that is stored for one month only.\n> The data comes from several sources in different formats and is pushed\n> in using a custom script.\n> It gets the source data and puts it into a table it creates (import\n> table) with the same schema as the main table; then it deletes the month\n> old data from the main table; it also searches for duplicates in the\n> main table using some specific criteria and deletes them too (to make\n> use of indexes 2nd temp table is created with id int column and it's\n> populated with one insert ... select query with the transaction ids of\n> data duplicate in main and import tables, after that delete from pages\n> where id in (select id from 2nd-temp-table) is called). Then it inserts\n> the remainder of the imports table into the main table.\n> There are several data load processes that function in the same manner\n> with different target tables.\n> When they are running in sequence, they take about 20 minutes to\n> complete on average. If, however, they are running in parallel, they can\n> take up to 3 hours... I was wondering if it's solely the HD bottleneck\n> case, given that there's plenty of CPU and RAM available and postgres is\n> configured to use it.\n\nAhh, thanks for the more detailed explanation. Now I get what you're facing.\n\nThere are a few things you could do that would probably help. Doing\nmore than one might help.\n\n1: Buy a decent battery backed caching RAID controller. This will\nsmooth out writes a lot. If you can't afford that...\n2: Build a nice big RAID-10 array, say 8 to 14 discs.\n3: Put pg_xlog on a physically separate drive from the rest of the database.\n4: Put each table being inserted to on a separate physical hard drives.\n5: Stop writing to multiple tables at once.\n6: (Not recommended) run with fsync turned off.\n\nEach of these things can help on their own. My personal preference\nfor heavily written databases is a good RAID controller with battery\nbacked caching on and a lot of discs in RAID-10 or RAID-6 (depending\non read versus write ratio and the need for storage space.) RAID-10\nis normally better for performance, RAID-6 with large arrays is better\nfor maximizing your size while maintaining decent performance and\nreliability. RAID-5 is right out.\n",
"msg_date": "Mon, 7 Jan 2008 17:08:46 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: concurrent inserts into two separate tables are very slow"
}
] |
[
{
"msg_contents": "Hello,\nI have a problem with my install of postgresql. I have a program that\nrequests DB by opening persistent connexions. When the program is launched,\nthe disk IO are very high until postgresql cache is good enough (a few\nhours). The problem is that when I stop the program all the connexions are\nclosed and so are the postrgesql processes. And the cache is lost.\nI really want to use postgresql but today I use mssqlserver and it never\nlost its cache till I stop it (mssqlserver). So I don't have problem with\ndisk IO when the program is stopped an restarted.\n\nI assume that each postgresql process manage its own cache but that there is\nno global cache. Am I right ? If true is there any possibility to have a\nglobal cache for all processes and is it possible not to free this cache\nwhen connexions are closed ?\n\nRegards\n\n-- \n=========\nGuillaume Pungeot - mappy\n\nHello,I have a problem with my install of postgresql. I have a\nprogram that requests DB by opening persistent connexions. When the\nprogram is launched, the disk IO are very high until postgresql cache\nis good enough (a few hours). The problem is that when I stop the\nprogram all the connexions are closed and so are the postrgesql\nprocesses. And the cache is lost.\nI really want to use postgresql but today I use mssqlserver and it\nnever lost its cache till I stop it (mssqlserver). So I don't have\nproblem with disk IO when the program is stopped an restarted.\nI assume that each postgresql process manage its own cache but that\nthere is no global cache. Am I right ? If true is there any possibility\nto have a global cache for all processes and is it possible not to free\nthis cache when connexions are closed ?\nRegards-- =========Guillaume Pungeot - mappy",
"msg_date": "Tue, 8 Jan 2008 16:35:56 +0100",
"msg_from": "\"Guillaume Pungeot\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Loss of cache when persistent connexion is closed"
},
{
"msg_contents": "\"Guillaume Pungeot\" <[email protected]> writes:\n> I assume that each postgresql process manage its own cache but that there is\n> no global cache. Am I right ?\n\nNo, you aren't.\n\nAre you sure you're not shutting down the postmaster? Just exiting\nindividual sessions shouldn't result in anything getting discarded\nfrom shared buffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jan 2008 19:05:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loss of cache when persistent connexion is closed "
}
] |
[
{
"msg_contents": "My set-up:\n\nPostgres 8.2.5 on AMD x86_64 compiled with GCC 3.4.4 on Gentoo Linux 2.6.17\n4 GB of RAM,\nshared_buffers = 1000\nwork_mem = 1024\n\n\nThis is regarding performance of set-returning functions in queries. I\nuse generate_series() in the following as an example. The true\nmotivation is a need for a custom set-returning function that returns\nlarge result sets, on which I'd like to use Postgres to compute groups\nand aggregates.\n\nConsider the following query:\n\n postgres=# select count(*) from generate_series(1,100000000000000000);\n\nA vmstat while this query was running seems to suggest that the\ngenerate_series() was being materialized to disk first and then the\ncount computed (blocks were being written to disk even though there\nwas loads of memory still available).\n\nA small variation in the query (i.e. placing the generate_series in\nthe select list without a from clause) exhibited a different\nbehaviour:\n\n postgres=# select count(*) from (select\ngenerate_series(1,100000000000000000)) as A;\n\nThis time, Postgres seemed to be trying to build the entire\ngenerate_series() result set in memory and eventually started to swap\nout until the swap space limit after which it crashed.\n\n server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n The connection to the server was lost. Attempting reset: Failed.\n !>\n\nInterestingly though, when the range in the generate_series() was\nsmall enough to fit in 4 bytes of memory (e.g.\ngenerate_series(1,1000000000) ), the above query completed consuming\nonly negligible amount of memory. So, it looked like the aggregate\ncomputation was being pipelined with the tuples returned from\ngenerate_series().\n\n postgres=# select count(*) from (select\ngenerate_series(1,1000000000)) as A;\n count\n ------------\n 1000000000\n (1 row)\n\nIt seems the only difference between the queries is that in the former\ncase, the generate_series(bigint, bigint) overload is selected,\nwhereas in the latter case, the generate_series(int, int) overload is\nselected. A little investigation seemed to suggest that in the former\ncase (where the generate_series() was returning bigint), Postgres was\nusing palloc to store the numbers so that it could pass them by\nreference. By contrast, in the latter case, the numbers are small\nenough (4-byte int) to be passed by value.\n\nAssuming the aggregate computation is pipelined with the reading of\nthe tuples from the function (like in the 4-byte case), the aggregate\nwas probably immediately updated. But then, the memory so allocated\nwasnt freed and that's what was resulting in eating up memory. Is this\nright? If not, could you explain why this behaviour was being\nobserved?\n\nFirst question: why isnt the aggregate computation being pipelined\nwith the read of the tuples from the set-returning function in the\nfirst query's case (i.e., when generate_series appears in the from\nlist)?\n\nSecond question: is there a recommended way of using set returning\nfunctions in queries that will enable such a pipelining, i.e., without\nmaterializing and without consuming huge amounts of memory?\n\nThird question: more generally, is there a way I can ask Postgres to\nreport an Out-of-memory the moment it tries to consume greater than a\ncertain percentage of memory (instead of letting it hit the swap and\neventually die or thrash) ?\n\n\nThanks!\n- John\n",
"msg_date": "Tue, 8 Jan 2008 18:21:46 -0800",
"msg_from": "\"John Smith\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of aggregates over set-returning functions"
},
{
"msg_contents": "\"John Smith\" <[email protected]> writes:\n> Consider the following query:\n> postgres=# select count(*) from generate_series(1,100000000000000000);\n> A vmstat while this query was running seems to suggest that the\n> generate_series() was being materialized to disk first and then the\n> count computed\n\nYeah, that's what I'd expect given the way nodeFunctionscan.c works.\n\n> A small variation in the query (i.e. placing the generate_series in\n> the select list without a from clause) exhibited a different\n> behaviour:\n> postgres=# select count(*) from (select\n> generate_series(1,100000000000000000)) as A;\n> This time, Postgres seemed to be trying to build the entire\n> generate_series() result set in memory and eventually started to swap\n> out until the swap space limit after which it crashed.\n\nHmm, at worst you should get an \"out of memory\" error --- that's what\nI get. I think you're suffering from the dreaded \"OOM kill\" disease.\n\n> Interestingly though, when the range in the generate_series() was\n> small enough to fit in 4 bytes of memory (e.g.\n> generate_series(1,1000000000) ), the above query completed consuming\n> only negligible amount of memory. So, it looked like the aggregate\n> computation was being pipelined with the tuples returned from\n> generate_series().\n\nIt's pipelined either way. But int8 is a pass-by-reference data type,\nand it sounds like we have a memory leak for this case.\n\n> Third question: more generally, is there a way I can ask Postgres to\n> report an Out-of-memory the moment it tries to consume greater than a\n> certain percentage of memory (instead of letting it hit the swap and\n> eventually die or thrash) ?\n\nFirst, you want to turn off memory overcommit in your kernel;\nsecond, you might find that running the postmaster under conservative\nulimit settings would make the behavior nicer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jan 2008 22:07:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of aggregates over set-returning functions "
},
{
"msg_contents": "> > Interestingly though, when the range in the generate_series() was\n> > small enough to fit in 4 bytes of memory (e.g.\n> > generate_series(1,1000000000) ), the above query completed consuming\n> > only negligible amount of memory. So, it looked like the aggregate\n> > computation was being pipelined with the tuples returned from\n> > generate_series().\n>\n> It's pipelined either way. But int8 is a pass-by-reference data type,\n> and it sounds like we have a memory leak for this case.\n\nThanks for your reply. How easy is it to fix this? Which portion of\nthe code should we look to change?\n\n- John\n",
"msg_date": "Tue, 8 Jan 2008 19:33:33 -0800",
"msg_from": "\"John Smith\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of aggregates over set-returning functions"
},
{
"msg_contents": "\"John Smith\" <[email protected]> writes:\n>> It's pipelined either way. But int8 is a pass-by-reference data type,\n>> and it sounds like we have a memory leak for this case.\n\n> Thanks for your reply. How easy is it to fix this? Which portion of\n> the code should we look to change?\n\nI was just looking at that. The issue looks to me that nodeResult.c\n(and other plan node types that support SRFs in their targetlists)\ndo this:\n\n /*\n * Check to see if we're still projecting out tuples from a previous scan\n * tuple (because there is a function-returning-set in the projection\n * expressions). If so, try to project another one.\n */\n if (node->ps.ps_TupFromTlist)\n {\n resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);\n if (isDone == ExprMultipleResult)\n return resultSlot;\n /* Done with that source tuple... */\n node->ps.ps_TupFromTlist = false;\n }\n\n /*\n * Reset per-tuple memory context to free any expression evaluation\n * storage allocated in the previous tuple cycle. Note this can't happen\n * until we're done projecting out tuples from a scan tuple.\n */\n ResetExprContext(econtext);\n\nwhereas there would be no memory leak if these two chunks of code were\nin the other order. The question is whether resetting the context first\nwould throw away any data that we *do* still need for the repeated\nExecProject calls. That second comment block implies there's something\nwe do need.\n\nI'm not sure why it's like this. Some digging in the CVS history shows\nthat indeed the code used to be in the other order, and I switched it\n(and added the second comment block) in this old commit:\n\nhttp://archives.postgresql.org/pgsql-committers/2000-08/msg00218.php\n\nI suppose that the SQL-function support at the time required that its\ncalling memory context be persistent until it returned ExprEndResult,\nbut I sure don't recall any details. It's entirely possible that that\nrequirement no longer exists, or could easily be eliminated given all\nthe other changes that have happened since then. nodeFunctionscan.c\nseems to reset the current context for each call of a SRF, so I'd think\nthat anything that can't cope with that should have been flushed out\nby now.\n\nIf you feel like poking at this, I *strongly* recommend doing your\ntesting in an --enable-cassert build. You'll have no idea whether you\nfreed stuff too early if you don't have CLOBBER_FREED_MEMORY enabled.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jan 2008 22:51:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of aggregates over set-returning functions "
},
{
"msg_contents": "\nThis this a bug or TODO item?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"John Smith\" <[email protected]> writes:\n> >> It's pipelined either way. But int8 is a pass-by-reference data type,\n> >> and it sounds like we have a memory leak for this case.\n> \n> > Thanks for your reply. How easy is it to fix this? Which portion of\n> > the code should we look to change?\n> \n> I was just looking at that. The issue looks to me that nodeResult.c\n> (and other plan node types that support SRFs in their targetlists)\n> do this:\n> \n> /*\n> * Check to see if we're still projecting out tuples from a previous scan\n> * tuple (because there is a function-returning-set in the projection\n> * expressions). If so, try to project another one.\n> */\n> if (node->ps.ps_TupFromTlist)\n> {\n> resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);\n> if (isDone == ExprMultipleResult)\n> return resultSlot;\n> /* Done with that source tuple... */\n> node->ps.ps_TupFromTlist = false;\n> }\n> \n> /*\n> * Reset per-tuple memory context to free any expression evaluation\n> * storage allocated in the previous tuple cycle. Note this can't happen\n> * until we're done projecting out tuples from a scan tuple.\n> */\n> ResetExprContext(econtext);\n> \n> whereas there would be no memory leak if these two chunks of code were\n> in the other order. The question is whether resetting the context first\n> would throw away any data that we *do* still need for the repeated\n> ExecProject calls. That second comment block implies there's something\n> we do need.\n> \n> I'm not sure why it's like this. Some digging in the CVS history shows\n> that indeed the code used to be in the other order, and I switched it\n> (and added the second comment block) in this old commit:\n> \n> http://archives.postgresql.org/pgsql-committers/2000-08/msg00218.php\n> \n> I suppose that the SQL-function support at the time required that its\n> calling memory context be persistent until it returned ExprEndResult,\n> but I sure don't recall any details. It's entirely possible that that\n> requirement no longer exists, or could easily be eliminated given all\n> the other changes that have happened since then. nodeFunctionscan.c\n> seems to reset the current context for each call of a SRF, so I'd think\n> that anything that can't cope with that should have been flushed out\n> by now.\n> \n> If you feel like poking at this, I *strongly* recommend doing your\n> testing in an --enable-cassert build. You'll have no idea whether you\n> freed stuff too early if you don't have CLOBBER_FREED_MEMORY enabled.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 6 Mar 2008 12:23:21 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of aggregates over set-returning\n functions"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> This this a bug or TODO item?\n\nTODO, I think. I wouldn't want to risk pushing a change in this into\nback branches.\n\n\t\t\tregards, tom lane\n\n>> I'm not sure why it's like this. Some digging in the CVS history shows\n>> that indeed the code used to be in the other order, and I switched it\n>> (and added the second comment block) in this old commit:\n>> \n>> http://archives.postgresql.org/pgsql-committers/2000-08/msg00218.php\n>> \n>> I suppose that the SQL-function support at the time required that its\n>> calling memory context be persistent until it returned ExprEndResult,\n>> but I sure don't recall any details. It's entirely possible that that\n>> requirement no longer exists, or could easily be eliminated given all\n>> the other changes that have happened since then. nodeFunctionscan.c\n>> seems to reset the current context for each call of a SRF, so I'd think\n>> that anything that can't cope with that should have been flushed out\n>> by now.\n>> \n>> If you feel like poking at this, I *strongly* recommend doing your\n>> testing in an --enable-cassert build. You'll have no idea whether you\n>> freed stuff too early if you don't have CLOBBER_FREED_MEMORY enabled.\n",
"msg_date": "Thu, 06 Mar 2008 12:53:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of aggregates over set-returning functions "
},
{
"msg_contents": "\nOK, added to TODO:\n\n* Reduce memory usage of aggregates in set returning functions\n\n http://archives.postgresql.org/pgsql-performance/2008-01/msg00031.php\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > This this a bug or TODO item?\n> \n> TODO, I think. I wouldn't want to risk pushing a change in this into\n> back branches.\n> \n> \t\t\tregards, tom lane\n> \n> >> I'm not sure why it's like this. Some digging in the CVS history shows\n> >> that indeed the code used to be in the other order, and I switched it\n> >> (and added the second comment block) in this old commit:\n> >> \n> >> http://archives.postgresql.org/pgsql-committers/2000-08/msg00218.php\n> >> \n> >> I suppose that the SQL-function support at the time required that its\n> >> calling memory context be persistent until it returned ExprEndResult,\n> >> but I sure don't recall any details. It's entirely possible that that\n> >> requirement no longer exists, or could easily be eliminated given all\n> >> the other changes that have happened since then. nodeFunctionscan.c\n> >> seems to reset the current context for each call of a SRF, so I'd think\n> >> that anything that can't cope with that should have been flushed out\n> >> by now.\n> >> \n> >> If you feel like poking at this, I *strongly* recommend doing your\n> >> testing in an --enable-cassert build. You'll have no idea whether you\n> >> freed stuff too early if you don't have CLOBBER_FREED_MEMORY enabled.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 6 Mar 2008 13:01:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of aggregates over set-returning\n functions"
},
{
"msg_contents": "Sorry for the long delay in following up on this suggestion. The\nchange Tom suggested fixed the performance problems I was seeing, but\nI never ran the full regression suite on the modified code, as\neverything in my performance tests seemed to indicate the bug was\nfixed (i.e, no errors even with --cassert-enabled). When I recently\nran the regression suite on the \"fixed\" version of Postgres, the\n\"misc\" test suite fails with the following error message: \"ERROR:\ncache lookup failed for type 2139062143\". Is this a manifestation of\nthe problem where certain items are being freed too early? Any other\nideas as to what's going on here?\n\nThanks,\nJohn\n\nOn Tue, Jan 8, 2008 at 8:51 PM, Tom Lane <[email protected]> wrote:\n> \"John Smith\" <[email protected]> writes:\n>>> It's pipelined either way. But int8 is a pass-by-reference data type,\n>>> and it sounds like we have a memory leak for this case.\n>\n>> Thanks for your reply. How easy is it to fix this? Which portion of\n>> the code should we look to change?\n>\n> I was just looking at that. The issue looks to me that nodeResult.c\n> (and other plan node types that support SRFs in their targetlists)\n> do this:\n>\n> /*\n> * Check to see if we're still projecting out tuples from a previous scan\n> * tuple (because there is a function-returning-set in the projection\n> * expressions). If so, try to project another one.\n> */\n> if (node->ps.ps_TupFromTlist)\n> {\n> resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);\n> if (isDone == ExprMultipleResult)\n> return resultSlot;\n> /* Done with that source tuple... */\n> node->ps.ps_TupFromTlist = false;\n> }\n>\n> /*\n> * Reset per-tuple memory context to free any expression evaluation\n> * storage allocated in the previous tuple cycle. Note this can't happen\n> * until we're done projecting out tuples from a scan tuple.\n> */\n> ResetExprContext(econtext);\n>\n> whereas there would be no memory leak if these two chunks of code were\n> in the other order. The question is whether resetting the context first\n> would throw away any data that we *do* still need for the repeated\n> ExecProject calls. That second comment block implies there's something\n> we do need.\n>\n> I'm not sure why it's like this. Some digging in the CVS history shows\n> that indeed the code used to be in the other order, and I switched it\n> (and added the second comment block) in this old commit:\n>\n> http://archives.postgresql.org/pgsql-committers/2000-08/msg00218.php\n>\n> I suppose that the SQL-function support at the time required that its\n> calling memory context be persistent until it returned ExprEndResult,\n> but I sure don't recall any details. It's entirely possible that that\n> requirement no longer exists, or could easily be eliminated given all\n> the other changes that have happened since then. nodeFunctionscan.c\n> seems to reset the current context for each call of a SRF, so I'd think\n> that anything that can't cope with that should have been flushed out\n> by now.\n>\n> If you feel like poking at this, I *strongly* recommend doing your\n> testing in an --enable-cassert build. You'll have no idea whether you\n> freed stuff too early if you don't have CLOBBER_FREED_MEMORY enabled.\n>\n> regards, tom lane\n>\n",
"msg_date": "Mon, 9 Jun 2008 15:57:52 -0700",
"msg_from": "\"John Smith\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of aggregates over set-returning functions"
},
{
"msg_contents": "\"John Smith\" <[email protected]> writes:\n> Sorry for the long delay in following up on this suggestion. The\n> change Tom suggested fixed the performance problems I was seeing, but\n> I never ran the full regression suite on the modified code, as\n> everything in my performance tests seemed to indicate the bug was\n> fixed (i.e, no errors even with --cassert-enabled). When I recently\n> ran the regression suite on the \"fixed\" version of Postgres, the\n> \"misc\" test suite fails with the following error message: \"ERROR:\n> cache lookup failed for type 2139062143\". Is this a manifestation of\n> the problem where certain items are being freed too early?\n\nYup --- something's trying to use memory that's already been freed.\nThe particular case is evidently a field containing a type OID.\n\nYou might be able to get a clue where the problem is by getting a gdb\nstack trace back from errfinish(), but this sort of kills my optimism\nthat the 7.0-era problem is gone ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Jun 2008 20:17:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of aggregates over set-returning functions "
}
] |
[
{
"msg_contents": "Hi All,\n\n \n\nI have a procedure which contains only one Select statement. The\nprocedure take more time than expected to complete, the select statement\ninside the procedure was taking about 2 minutes to complete. \n\n \n\nAfter running a Vacuum Analyze on the underlying tables the select\nstatement could complete within seconds. But the procedure was still\ntaking long time. \n\n \n\nI tried after recreating the procedure, the procedure is taking only a\nfew seconds after recreation.\n\n \n\nWhy the procedure is not getting the performance advantage of Vacuum\nanalyse? \n\n \n\nRegards, \n\nAnoo\n\n \n\n \nVisit our Website at http://www.rmesi.co.in\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RMESI does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RMESI. If this email has come to you in error, please delete it, along with any attachments. Please note that RMESI may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\nThis email and any attachments may contain confidential information belonging to RMESI. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RMESI and the disclosure of which would be prejudicial to RMESI's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.\n\n\n\n\n\n\n\n\n\n\n\n\nHi All,\n \nI have a procedure which contains only one Select statement.\nThe procedure take more time than expected to complete, the select statement\ninside the procedure was taking about 2 minutes to complete. \n \nAfter running a Vacuum Analyze on the underlying tables the\nselect statement could complete within seconds. But the procedure was still\ntaking long time. \n \nI tried after recreating the procedure, the procedure is\ntaking only a few seconds after recreation.\n \nWhy the procedure is not getting the performance advantage\nof Vacuum analyse? \n \nRegards, \nAnoo\n \n \n\n\n\nVisit our Website at www.rmesi.co.in\n\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RMESI does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RMESI. If this email has come to you in error, please delete it, along with any attachments. Please note that RMESI may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\n\nThis email and any attachments may contain confidential information belonging to RMESI. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RMESI and the disclosure of which would be prejudicial to RMESI's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.",
"msg_date": "Wed, 9 Jan 2008 09:33:57 +0530",
"msg_from": "\"Anoo Sivadasan Pillai\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "After Vacuum Analyse - Procedure performance not improved - Innner\n\tselect is faster"
},
{
"msg_contents": "Anoo Sivadasan Pillai wrote:\n\n> Why the procedure is not getting the performance advantage of Vacuum\n> analyse? \n\nPlan caching by the function, probably. Try disconnecting the session\nand reconnecting to prove the hypothesis.\n\nIf it is a recurring problem for you, you could put the SELECT under\nEXECUTE in the function. But most likely this is just a one-time\nproblem.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 9 Jan 2008 02:06:44 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: After Vacuum Analyse - Procedure performance not\n\timproved - Innner select is faster"
},
{
"msg_contents": "\n>> Why the procedure is not getting the performance advantage of Vacuum\n>> analyse? \n\n>Plan caching by the function, probably. Try disconnecting the session\n>and reconnecting to prove the hypothesis.\n\n>If it is a recurring problem for you, you could put the SELECT under\n>EXECUTE in the function. But most likely this is just a one-time\n>problem.\n\nIs there any way to clear the cached plan manually other than\ndisconnecting (With the help of some commands/Configuration settings) ? \n\n\n\nVisit our Website at http://www.rmesi.co.in\n\nThis message is confidential. You should not copy it or disclose its contents to anyone. You may use and apply the information for the intended purpose only. Internet communications are not secure; therefore, RMESI does not accept legal responsibility for the contents of this message. Any views or opinions presented are those of the author only and not of RMESI. If this email has come to you in error, please delete it, along with any attachments. Please note that RMESI may intercept incoming and outgoing email communications. \n\nFreedom of Information Act 2000\nThis email and any attachments may contain confidential information belonging to RMESI. Where the email and any attachments do contain information of a confidential nature, including without limitation information relating to trade secrets, special terms or prices these shall be deemed for the purpose of the Freedom of Information Act 2000 as information provided in confidence by RMESI and the disclosure of which would be prejudicial to RMESI's commercial interests.\n\nThis email has been scanned for viruses by Trend ScanMail.\n\n\n",
"msg_date": "Wed, 9 Jan 2008 11:30:33 +0530",
"msg_from": "\"Anoo Sivadasan Pillai\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: After Vacuum Analyse - Procedure performance notimproved - Innner\n\tselect is faster"
},
{
"msg_contents": "\nOn Jan 9, 2008, at 12:00 AM, Anoo Sivadasan Pillai wrote:\n\n>\n>>> Why the procedure is not getting the performance advantage of Vacuum\n>>> analyse?\n>\n>> Plan caching by the function, probably. Try disconnecting the \n>> session\n>> and reconnecting to prove the hypothesis.\n>\n>> If it is a recurring problem for you, you could put the SELECT under\n>> EXECUTE in the function. But most likely this is just a one-time\n>> problem.\n>\n> Is there any way to clear the cached plan manually other than\n> disconnecting (With the help of some commands/Configuration \n> settings) ?\n\nOnly as of 8.3.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n",
"msg_date": "Wed, 9 Jan 2008 12:06:35 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: After Vacuum Analyse - Procedure performance notimproved - Innner\n\tselect is faster"
}
] |
[
{
"msg_contents": "Hi\n\nWe recently converted to postgres (from mssql) and we're having \nperformance issues. Not all the issues are related to postgres, but \nwe're trying to sort everything out.\n\nThe server is running ubuntu Gutsy with the database stored on a IBM \nSAN. It is a Dell box with dual quad core 2.00GHz Xeons and 8GB RAM.\n\n\nThe database is about 71GB in size.\n\nI've looked at the postgres config files and we've tweaked as much as \nour knowledge allows.\n\nCan someone shed some light on the settings I should use ?\n\n\nThanks in advance\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Wed, 09 Jan 2008 10:18:34 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "big database performance"
},
{
"msg_contents": "Adrian Moisey wrote:\n> Hi\n> \n> We recently converted to postgres (from mssql) and we're having \n> performance issues. Not all the issues are related to postgres, but \n> we're trying to sort everything out.\n> \n> The server is running ubuntu Gutsy with the database stored on a IBM \n> SAN. It is a Dell box with dual quad core 2.00GHz Xeons and 8GB RAM.\n> \n> \n> The database is about 71GB in size.\n> \n> I've looked at the postgres config files and we've tweaked as much as \n> our knowledge allows.\n> \n> Can someone shed some light on the settings I should use ?\n\nUmpf that isn't quite enough info :) but assuming you are running 8.2.x:\n\nStart with 1GB shared_buffers (you may be able to go hire), 4MB \nwork_mem, wal_sync_method = open_sync, checkpoint_segments = 30, \ndefault_statistics_target = 150, effective_cache_size = 6GB .\n\n\nRestart, VACUUM ANALYZE VERBOSE, post back last 4 lines of output.\n\nOther good items to know:\n\n64bit Gutsy?\nHow is the SAN connected?\nWhat does mpstat 5 (3 iterations) say?\nEven better what does sar -A say over a 24 hour period?\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n> \n> \n> Thanks in advance\n\n",
"msg_date": "Wed, 09 Jan 2008 00:27:33 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "Hi,\n\nwhat segment size do you use for the san partition? This could also be a \nbottle neck for db servers.\n\nFrank\n",
"msg_date": "Wed, 09 Jan 2008 09:42:10 +0100",
"msg_from": "Frank Habermann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "Hi\n\n>> We recently converted to postgres (from mssql) and we're having \n>> performance issues. Not all the issues are related to postgres, but \n>> we're trying to sort everything out.\n>>\n>> The server is running ubuntu Gutsy with the database stored on a IBM \n>> SAN. It is a Dell box with dual quad core 2.00GHz Xeons and 8GB RAM.\n>>\n>>\n>> The database is about 71GB in size.\n>>\n>> I've looked at the postgres config files and we've tweaked as much as \n>> our knowledge allows.\n>>\n>> Can someone shed some light on the settings I should use ?\n> \n> Umpf that isn't quite enough info :) but assuming you are running 8.2.x:\n\nSorry :/ Yes, we are running 8.2.x\n\n> Start with 1GB shared_buffers (you may be able to go hire), 4MB \n> work_mem, wal_sync_method = open_sync, checkpoint_segments = 30, \n> default_statistics_target = 150, effective_cache_size = 6GB .\n\nOur shared_buffers is 1GB.\nwork_mem is 32MB\nI changed wal_sync_method to open_sync (which helped a ton!)\n\nCan someone please explain effective_cache_size. what cache does it \nwant to know about? Linux cache?\n\nAlso, we're running the db on ext3 with noatime. Should I look at \nchanging or getting rid of journaling ?\n\n\n> 64bit Gutsy?\n\nYes\n\n> How is the SAN connected?\n\nfibre\n\n> What does mpstat 5 (3 iterations) say?\n> Even better what does sar -A say over a 24 hour period?\n\nI'll get these for you\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Wed, 09 Jan 2008 14:53:09 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "On Wednesday 09 January 2008, Adrian Moisey <[email protected]> \nwrote:\n>\n> Also, we're running the db on ext3 with noatime. Should I look at\n> changing or getting rid of journaling ?\n\nNo (unless you like really long fsck times). data=writeback is safe with \nPostgreSQL, though.\n\n",
"msg_date": "Wed, 9 Jan 2008 08:16:48 -0800",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Wed, 9 Jan 2008 08:16:48 -0800\r\nAlan Hodgson <[email protected]> wrote:\r\n\r\n> On Wednesday 09 January 2008, Adrian Moisey\r\n> <[email protected]> wrote:\r\n> >\r\n> > Also, we're running the db on ext3 with noatime. Should I look at\r\n> > changing or getting rid of journaling ?\r\n> \r\n> No (unless you like really long fsck times). data=writeback is safe\r\n> with PostgreSQL, though.\r\n> \r\n\r\nExcept :)... for pg_xlog. If you have pg_xlog on a different partition,\r\nfeel free to run ext2 for it.\r\n\r\nJoshua D. Drake\r\n\r\n> \r\n> ---------------------------(end of\r\n> broadcast)--------------------------- TIP 7: You can help support the\r\n> PostgreSQL project by donating at\r\n> \r\n> http://www.postgresql.org/about/donate\r\n> \r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHhPtyATb/zqfZUUQRAk32AKCTvJPBCvHtb4JWMu7+xwxQZdA/ZQCgn3K2\r\npCmcUXAiAibLkTgEwGVXPyQ=\r\n=H2bK\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Wed, 9 Jan 2008 08:50:58 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "Hi Joshua,\n\nOn Jan 9, 2008 9:27 AM, Joshua D. Drake <[email protected]> wrote:\n> wal_sync_method = open_sync\n\nDo you recommend it in every situation or just because data are on a\nSAN? Do you have any numbers/real cases explaining this choice.\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Wed, 9 Jan 2008 17:59:58 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "On Wed, Jan 09, 2008 at 12:27:33AM -0800, Joshua D. Drake wrote:\n> Adrian Moisey wrote:\n>> Hi\n>>\n>> We recently converted to postgres (from mssql) and we're having \n>> performance issues. Not all the issues are related to postgres, but we're \n>> trying to sort everything out.\n\n\tHi,\n\n\tI do large databases in Pg, like 300GB/day of new data. Need a lot\nmore data on what you're having issues with.\n\n\tIs your problem with performance database reads? \nwrites? (insert/copy?) How many indicies do you have?\n\n\t- jared\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Wed, 9 Jan 2008 12:42:43 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "On Wed, 9 Jan 2008, Guillaume Smet wrote:\n\n> On Jan 9, 2008 9:27 AM, Joshua D. Drake <[email protected]> wrote:\n>> wal_sync_method = open_sync\n>\n> Do you recommend it in every situation or just because data are on a\n> SAN? Do you have any numbers/real cases explaining this choice.\n\nSync writes are faster on Linux in every case I've ever tried, compared to \nthe default config that does a write followed by a sync. With regular \ndiscs they're just a little faster. On some SAN configurations, they're \nenormously faster, because the SANs are often optimized to handle \nsyncronous writes far more efficiently than write/sync ones. This is \nmainly because Oracle does its writes that way, so if you want good Oracle \nperformance you have to handle sync writes well.\n\nI have something on this topic I keep meaning to publish, but I got \nspooked about the potential to have silent problems or crashes when using \nopen_sync due to a Linux kernel issue reported here:\n\nhttp://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php\n\nCertainly with that report floating out there I'd only recommend open_sync \nto people who are putting plenty of time into testing their database is \nrobust under load with that configuration before deploying it; I sure \nwouldn't just make that changes on a production system just to see if it's \nfaster.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 9 Jan 2008 13:44:34 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "On Wed, 2008-01-09 at 10:18 +0200, Adrian Moisey wrote:\n\n> We recently converted to postgres (from mssql) and we're having \n> performance issues.\n\nI think you need to say more about what the performance issues actually\nare, otherwise everybody will just speculate you to death.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 09 Jan 2008 21:01:18 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "Hi\n\n>> Also, we're running the db on ext3 with noatime. Should I look at\n>> changing or getting rid of journaling ?\n> \n> No (unless you like really long fsck times). data=writeback is safe with \n> PostgreSQL, though.\n\nI tested that on a dev box, and I didn't notice a difference when using \npgbench\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Thu, 10 Jan 2008 10:45:17 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "Hi\n\n> \tI do large databases in Pg, like 300GB/day of new data. Need a lot\n> more data on what you're having issues with.\n\nThat is big!\n\nWhat sort of information do you need from me ?\n\n> \tIs your problem with performance database reads? \n> writes? (insert/copy?) How many indicies do you have?\n\nI think the problem is related to load. Everything is slow because \nthere are way too many connections. So everything is making everything \nelse slow. Not much detail, is it?\n\nWe have 345 indicies on the db.\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Thu, 10 Jan 2008 10:57:46 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "Jared Mauch wrote:\n> \tI do large databases in Pg, like 300GB/day of new data.\nThat's impressive. Would it be possible to have details on your \nhardware, schema and configuration and type of usage ?\n\nI'm sure there's something to learn in there for a lot of people (or at \nleast for me)\n\nCheers,\n\n-- stephane\n",
"msg_date": "Thu, 10 Jan 2008 12:08:39 +0100",
"msg_from": "Stephane Bailliez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "On Thu, Jan 10, 2008 at 10:57:46AM +0200, Adrian Moisey wrote:\n> What sort of information do you need from me ?\n\n\tRatio of read vs write operations (select vs insert/copy).\n\n\taverage number of indicies per table\n\n\taverage table size. (analyze verbose <tablename> if you want to get\ninto more details).\n\n\tWhat is the process doing (eg: in top, is it just on the CPU or\nis it blocking for I/O?).\n\n\tI/O information, from iostat -d (You may need to build an iostat\nbinary for Linux, the source is out there, i can give you a pointer if\nyou need it).\n\n>> \tIs your problem with performance database reads? writes? (insert/copy?)\n>> How many indicies do you have?\n>\n> I think the problem is related to load. Everything is slow because there \n> are way too many connections. So everything is making everything else \n> slow. Not much detail, is it?\n>\n> We have 345 indicies on the db.\n\n\tIf the tables are heavily indexed this could easily slow down\ninsert performance. Taking a large dataset and adding a second\nindex, postgres doesn't use threads to create the two indicies on\ndifferent cpus/cores in parallel. This could represent some of your\nperformance difference. If you're doing a lot of write operations\nand fewer read, perhaps the cost of an index isn't worth it in the\ncpu time spent creating it vs the amount of time for a seq scan.\n\n\t- Jared\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Thu, 10 Jan 2008 09:21:22 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
},
{
"msg_contents": "On Thu, Jan 10, 2008 at 12:08:39PM +0100, Stephane Bailliez wrote:\n> Jared Mauch wrote:\n>> \tI do large databases in Pg, like 300GB/day of new data.\n>\n> That's impressive. Would it be possible to have details on your hardware, \n> schema and configuration and type of usage ?\n>\n> I'm sure there's something to learn in there for a lot of people (or at \n> least for me)\n\nhttp://archives.postgresql.org/pgsql-performance/2007-12/msg00372.php\n\nhttp://archives.postgresql.org/pgsql-performance/2006-05/msg00444.php\n\n\tThe hardware specs are kinda boring since it's not\nI/O bound, so you could get the same disk performance out of\nsome EIDE 7200 rpm disks (which I have done for testing).\n\n\tThe current setup is a 4xOpteron 8218 (dual core) w/ 16G ram.\nI have roughly 12TB usable disk space on the sysem connected via some\nSATA <-> FC thing our systems folks got us. Problem I have is the linear\ncpu speed isn't enough and there would be challenges splitting the\nworkload across multiple cpus. All my major reporting is done via\npg_dump and I'm pondering what would happen if I just removed Pg\nfrom the equation for the major reporting tasks entirely. I may see\nmuch better performance without the database [in my way]. I've not\ndone that as some types of data access would need to be significantly\nredone and I don't want to spend the time on that...\n\n\t- Jared\n\n-- \nJared Mauch | pgp key available via finger from [email protected]\nclue++; | http://puck.nether.net/~jared/ My statements are only mine.\n",
"msg_date": "Thu, 10 Jan 2008 09:34:40 -0500",
"msg_from": "Jared Mauch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big database performance"
}
] |
[
{
"msg_contents": "Hello,\n\nI have an interesting generic search task, for which I have done \ndifferent performance tests and I would like to share and discuss my \nresults on this newsgroup.\n\nSo I begin to describe the search task:\n\n=========\nYou have a set of N unique IDs. Every ID is associated with an integer \nscoring value. Every ID is also associated with up to K different \nkeywords (there are totally K different keywords K1 ... Kn). Now find \nthe first Z best-scored IDs which are associated with a given set of \nkeywords in one of two ways:\n\n(C1) The ID must be associated with all keywords of the given set of \nkeywords.\n(C2) The ID must be associated with at least one keyword of the given \nset of keywords.\n=========\n\n\nMy tests showed that only a Multiple-Column-approach resulted in a \nacceptable query response time. I also tried out an int-array approach \nusing gist, a sub-string approach, a bit-column approach, and even a \nsub-string approach using Solr.\nActually, the int-array approach was 20% faster for Z=infinity, but it \nbecame linear for the test case [Z=1000 and *all* IDs matches the search \ncondition].\n(To be not misunderstood, \"acceptable time\" means: having a fixed Z, a \nfixed set of keywords K, a fixed query, and an increasing N, results in \nconstant up to logarithmic response time; linear or worser-than-linear \ntime is not accepted)\n\nIn the Multiple-Column-approach, there is one table. The table has a \nboolean column for each keyword. It has also a column for the ID and for \nthe scoring. Now, for each keyword column and for the scoring column a \nseparate index is created.\nC1 is implemented by an AND-query on the keyword columns, C2 by and OR \nquery, and the result is sorted for the scoring column, cutting of after \nthe first Z results.\n\nHowever our requirements for the search task have changed and I not yet \nmanaged to find a search approach with acceptable response time for \nfollowing variation:\nNamely that one uses C2 and do not sort for a scoring column but use as \nscoring value the number of matched keywords for a given ID.\nThe difficulty in this query type is that the scoring is dependent on \nthe query itself..\n\nSo has anyone an idea how to solve this query type with acceptable \nresponse time, or can anybody tell/prove, that this is theoretically not \npossible?\n\n\n",
"msg_date": "Wed, 09 Jan 2008 11:43:42 +0100",
"msg_from": "=?ISO-8859-1?Q?J=F6rg_Kiegeland?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Search for fixed set of keywords"
},
{
"msg_contents": "Did you try integer arrays with GIN (inverted index) ?\n\nOleg\nOn Wed, 9 Jan 2008, J?rg Kiegeland wrote:\n\n> Hello,\n>\n> I have an interesting generic search task, for which I have done different \n> performance tests and I would like to share and discuss my results on this \n> newsgroup.\n>\n> So I begin to describe the search task:\n>\n> =========\n> You have a set of N unique IDs. Every ID is associated with an integer \n> scoring value. Every ID is also associated with up to K different keywords \n> (there are totally K different keywords K1 ... Kn). Now find the first Z \n> best-scored IDs which are associated with a given set of keywords in one of \n> two ways:\n>\n> (C1) The ID must be associated with all keywords of the given set of \n> keywords.\n> (C2) The ID must be associated with at least one keyword of the given set of \n> keywords.\n> =========\n>\n>\n> My tests showed that only a Multiple-Column-approach resulted in a acceptable \n> query response time. I also tried out an int-array approach using gist, a \n> sub-string approach, a bit-column approach, and even a sub-string approach \n> using Solr.\n> Actually, the int-array approach was 20% faster for Z=infinity, but it became \n> linear for the test case [Z=1000 and *all* IDs matches the search condition].\n> (To be not misunderstood, \"acceptable time\" means: having a fixed Z, a fixed \n> set of keywords K, a fixed query, and an increasing N, results in constant up \n> to logarithmic response time; linear or worser-than-linear time is not \n> accepted)\n>\n> In the Multiple-Column-approach, there is one table. The table has a boolean \n> column for each keyword. It has also a column for the ID and for the scoring. \n> Now, for each keyword column and for the scoring column a separate index is \n> created.\n> C1 is implemented by an AND-query on the keyword columns, C2 by and OR query, \n> and the result is sorted for the scoring column, cutting of after the first Z \n> results.\n>\n> However our requirements for the search task have changed and I not yet \n> managed to find a search approach with acceptable response time for following \n> variation:\n> Namely that one uses C2 and do not sort for a scoring column but use as \n> scoring value the number of matched keywords for a given ID.\n> The difficulty in this query type is that the scoring is dependent on the \n> query itself..\n>\n> So has anyone an idea how to solve this query type with acceptable response \n> time, or can anybody tell/prove, that this is theoretically not possible?\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Wed, 9 Jan 2008 13:55:06 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Search for fixed set of keywords"
},
{
"msg_contents": "> Did you try integer arrays with GIN (inverted index) ?\nI now tried this, and GIN turned out to be linear time, compared with \nGIST which was acceptable time. However I tested this only for \nZ=infinity, for Z=1000, GIST/GIN are both not acceptable.\n\n",
"msg_date": "Thu, 10 Jan 2008 10:23:47 +0100",
"msg_from": "=?ISO-8859-1?Q?J=F6rg_Kiegeland?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Search for fixed set of keywords"
},
{
"msg_contents": "On Thu, 10 Jan 2008, J?rg Kiegeland wrote:\n\n>> Did you try integer arrays with GIN (inverted index) ?\n> I now tried this, and GIN turned out to be linear time, compared with GIST \n> which was acceptable time. However I tested this only for Z=infinity, for \n> Z=1000, GIST/GIN are both not acceptable.\n\nSorry, I didn't follow your problem, but GIN should be certainly\nlogarithmic on the number of unique words. Also, it'd be much clear\nif you show us your queries and explain analyze.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Thu, 10 Jan 2008 15:05:59 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Search for fixed set of keywords"
}
] |
[
{
"msg_contents": "I cannot get the \"not exists\" clause of ANSI SQL to execute correctly.\nselect t.col11, t.col1... from table1 t where not exists (select 1 from\ntable2 where col2 = t.col1);\ntable1 has 40M + rows. if that matters.\n\nOS is FreeBSD 6.2, postgresql version 8.2.6\n\nIs it not supported or a bug ?\nthank you for your support.\n\nI cannot get the \"not exists\" clause of ANSI SQL to execute correctly.\nselect t.col11, t.col1... from table1 t where not exists (select 1 from table2 where col2 = t.col1);\ntable1 has 40M + rows. if that matters.\n \nOS is FreeBSD 6.2, postgresql version 8.2.6\n \nIs it not supported or a bug ?\nthank you for your support.",
"msg_date": "Thu, 10 Jan 2008 15:15:53 -0500",
"msg_from": "\"S Golly\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "not exists clause"
},
{
"msg_contents": "Golly,\n\n> I cannot get the \"not exists\" clause of ANSI SQL to execute correctly.\n> select t.col11, t.col1... from table1 t where not exists (select 1 from\n> table2 where col2 = t.col1);\n> table1 has 40M + rows. if that matters.\n>\n> OS is FreeBSD 6.2, postgresql version 8.2.6\n\nYou'll have to post the actual query and error message. WHERE NOT EXISTS \nhas been supported since version 7.1.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Thu, 10 Jan 2008 13:50:30 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: not exists clause"
},
{
"msg_contents": "S Golly wrote:\n> I cannot get the \"not exists\" clause of ANSI SQL to execute correctly.\n> select t.col11, t.col1... from table1 t where not exists (select 1 from \n> table2 where col2 = t.col1);\n> table1 has 40M + rows. if that matters.\n> \n> OS is FreeBSD 6.2, postgresql version 8.2.6\n> \n> Is it not supported or a bug ?\n> thank you for your support.\n\nThis is really not a performance question, but a general SQL question.\n\nselect * from t1\n\nf1\n--\n1\n2\n3\n\nselect * from t2\n\nf1\n--\n1\n2\n\nselect * from t1\nwhere not exists\n(\nselect 1\nfrom t2\nwhere t2.f1 = t1.f1\n)\n\nf1\n--\n3\n\n-- \nGuy Rouillier\n",
"msg_date": "Thu, 10 Jan 2008 23:00:23 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: not exists clause"
}
] |
[
{
"msg_contents": "Hi List;\n\nWe'll be loading a table with begining & ending I.P.'s - the table will likely \nhave upwards of 30million rows. Any thoughts on how to get the best \nperformance out of queries that want to look for IP ranges or the use of \nbetween queries? Should these be modeled as integers? \n\nThanks in advance\n\n/Kevin\n",
"msg_date": "Thu, 10 Jan 2008 16:14:54 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best way to index IP data?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Thu, 10 Jan 2008 16:14:54 -0700\r\nKevin Kempter <[email protected]> wrote:\r\n\r\n> Hi List;\r\n> \r\n> We'll be loading a table with begining & ending I.P.'s - the table\r\n> will likely have upwards of 30million rows. Any thoughts on how to\r\n> get the best performance out of queries that want to look for IP\r\n> ranges or the use of between queries? Should these be modeled as\r\n> integers? \r\n> \r\n\r\nhttp://www.postgresql.org/docs/current/static/datatype-net-types.html\r\n\r\n> Thanks in advance\r\n> \r\n> /Kevin\r\n> \r\n> ---------------------------(end of\r\n> broadcast)--------------------------- TIP 1: if posting/reading\r\n> through Usenet, please send an appropriate subscribe-nomail command\r\n> to [email protected] so that your message can get through to\r\n> the mailing list cleanly\r\n> \r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHhqjHATb/zqfZUUQRAvMOAJ984Np5GMrFd1vixP/zECIl3qUWYgCff6U4\r\nbCBBz1VaxqIoZfCFfKEIZLU=\r\n=+9vD\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Thu, 10 Jan 2008 15:22:45 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "\nOn Jan 10, 2008, at 3:14 PM, Kevin Kempter wrote:\n\n> Hi List;\n>\n> We'll be loading a table with begining & ending I.P.'s - the table \n> will likely\n> have upwards of 30million rows. Any thoughts on how to get the best\n> performance out of queries that want to look for IP ranges or the \n> use of\n> between queries? Should these be modeled as integers?\n\nhttp://pgfoundry.org/projects/ip4r/\n\nThat has the advantage over using integers, or the built-in inet type,\nof being indexable for range and overlap queries.\n\nCheers,\n Steve\n\n",
"msg_date": "Thu, 10 Jan 2008 15:25:20 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Jan 10, 2008 6:25 PM, Steve Atkins <[email protected]> wrote:\n> http://pgfoundry.org/projects/ip4r/\n>\n> That has the advantage over using integers, or the built-in inet type,\n> of being indexable for range and overlap queries.\n\nAgreed. ip4r is da bomb.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n",
"msg_date": "Thu, 10 Jan 2008 18:41:56 -0500",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
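To make the ip4r suggestion concrete, here is a hedged sketch. The ip4 (single IPv4 address) and ip4r (arbitrary IPv4 range) types, the GiST opclass, and the >>= "contains" operator all come from the pgfoundry module linked above, so check the documentation of the version you install for the exact names:

CREATE TABLE ip_blocks (
    block ip4r NOT NULL    -- e.g. '10.1.2.0/24' or '10.1.2.5-10.1.2.20'
);
CREATE INDEX ip_blocks_block_idx ON ip_blocks USING gist (block);

-- "which stored ranges contain this address?", answered through the GiST index
SELECT * FROM ip_blocks WHERE block >>= '10.1.2.7'::ip4;

A begin/end column pair, as in the original question, can be folded into a single ip4r value when the rows are loaded, so the range itself is what gets indexed.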
{
"msg_contents": "On Thu, 10 Jan 2008, Jonah H. Harris wrote:\n\n> On Jan 10, 2008 6:25 PM, Steve Atkins <[email protected]> wrote:\n>> http://pgfoundry.org/projects/ip4r/\n>>\n>> That has the advantage over using integers, or the built-in inet type,\n>> of being indexable for range and overlap queries.\n>\n> Agreed. ip4r is da bomb.\n\nHello to all,\n\nI also have to store a lot of IP v4 addresses, and I think the internal \ninet type is somewhat overkill for that, since it always require 8 bytes, \neven if you don't need to store a netmask.\nWhen storing millions of IP add, this means MB of space used for nothing \nin that case.\n\nAs ip4r seems to work very well with postgresql, is there a possibility to \nsee it merged in postgresql, to have a native 4 bytes IPv4 address date \ntype ?\n\nNicolas\n\n\n\n",
"msg_date": "Fri, 11 Jan 2008 10:31:26 +0100 (CET)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "Pomarede Nicolas <[email protected]> writes:\n> As ip4r seems to work very well with postgresql, is there a possibility to \n> see it merged in postgresql, to have a native 4 bytes IPv4 address date \n> type ?\n\nGiven that the world is going to IPv6 in a few years whether you like it\nor not, that seems pretty darn short-sighted to me.\n\nWhat would make sense IMHO is to adapt the improved indexing support in\nip4r to work on the native inet/cidr types.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2008 10:19:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data? "
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 10:19:51AM -0500, Tom Lane wrote:\n> Given that the world is going to IPv6 in a few years whether you like it\n> or not, that seems pretty darn short-sighted to me.\n\nIndeed. Even ARIN has finally started to tell people that IPv4 is running\nout. There are currently significant deployments of IPv6 in the\nAsia-Pacific region. And it appears that Comcast is planning to move to\nIPv6 for its own network deployment, which may mean that many U.S. homes\nwill have native v6 in the near future (the upshot of their plans aren't\nactually clear to me yet, but if you're interested in some of what they're\ntelling people they're doing, look for Alain Durand's presentation to the\nv6ops working group at the last IETF meeting). \n\n> What would make sense IMHO is to adapt the improved indexing support in\n> ip4r to work on the native inet/cidr types.\n\nThis seems like a good idea to me.\n\n",
"msg_date": "Fri, 11 Jan 2008 10:37:34 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, 11 Jan 2008, Tom Lane wrote:\n\n> Pomarede Nicolas <[email protected]> writes:\n>> As ip4r seems to work very well with postgresql, is there a possibility to\n>> see it merged in postgresql, to have a native 4 bytes IPv4 address date\n>> type ?\n>\n> Given that the world is going to IPv6 in a few years whether you like it\n> or not, that seems pretty darn short-sighted to me.\n>\n> What would make sense IMHO is to adapt the improved indexing support in\n> ip4r to work on the native inet/cidr types.\n>\n\nI understand your point on IPv6, but still being able to store IPv4 \naddresses with as little overhead as possible is important.\n\nIPv6 will certainly grow in the year to come, but if you consider the case \nof a very large private lan, with ip in the form 10.x.y.z, the fact that \nthe outside world is now ipv6 doesn't necessarily imply you will rename \nall your internal equipments to be ipv6 if you don't need more addresses \n(you can do the translation when packets cross the ipv6/ipv4 gateway in \nyour network).\n\nTo be more concret, I'm working for a large french ISP, so far we have 2+ \nmillions \"boxes\" (triple play adsl equipments) at our customers' home.\n\nAll theses boxes have a private IPv4 address in 10.x.y.z as well as a \npublic IPV4 address, and although we recently activated a public IPv6 addr \non these boxes too (which certainly gives one of the biggest IPv6 network \nso far), we still need to store one ipv4 and one ipv6 addr for each box.\n\nSo, my point was not to be short-sighted, we will go IPv6 for sure, it's \njust that there're a lot of applications where storing ipv4 addr could be \nneeded (whether ipv6 is required for other applications or not), and in \nthis regard, I think that being able to store ipv4 addr with 4 bytes \ninstead of 8 could be appreciated.\n\nOr perhaps another solution would be to have built-in inet_aton / \ninet_ntoa functions in postgres, to store the result using an integer \n(unsigned) ?\n\n\nNicolas\n",
"msg_date": "Fri, 11 Jan 2008 16:55:30 +0100 (CET)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data? "
},
{
"msg_contents": "\nOn Jan 11, 2008, at 7:19 AM, Tom Lane wrote:\n\n> Pomarede Nicolas <[email protected]> writes:\n>> As ip4r seems to work very well with postgresql, is there a \n>> possibility to\n>> see it merged in postgresql, to have a native 4 bytes IPv4 address \n>> date\n>> type ?\n>\n> Given that the world is going to IPv6 in a few years whether you \n> like it\n> or not, that seems pretty darn short-sighted to me.\n>\n> What would make sense IMHO is to adapt the improved indexing support \n> in\n> ip4r to work on the native inet/cidr types.\n\nCan't be done. The native types are too limited to be effectively \nindexed\nin that way - they cannot represent arbitrary ranges. ip4r started \nwith me\ntrying to retrofit decent indexing onto the cidr type and failing \nmiserably.\n\nI'll likely be rolling out ip6r/ipr sometime in 2008, as users are \nbeginning to\nexpress an interest. But even then I don't expect it to replace the inet\nand cidr types in core, because it isn't compatible with them.\n\nI'd actually support removing inet/cidr from core completely in the \nlonger\nrun. Postgresql is extensible, so we really don't need types used only\nby niche users in core, once we have pgfoundry and something like\nmysqludf.org/CPAN. But that's a longer term thought.\n\nCheers,\n Steve\n\n",
"msg_date": "Fri, 11 Jan 2008 08:11:15 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data? "
},
{
"msg_contents": "Steve Atkins wrote:\n\n> I'd actually support removing inet/cidr from core completely in the longer\n> run. Postgresql is extensible, so we really don't need types used only\n> by niche users in core, once we have pgfoundry and something like\n> mysqludf.org/CPAN. But that's a longer term thought.\n\nI believe this is going to be solved by postgresqlpackages.org.\n\nJoshua D. Drake\n\n> \n> Cheers,\n> Steve\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n",
"msg_date": "Fri, 11 Jan 2008 08:18:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 10:19:51AM -0500, Tom Lane wrote:\n>Given that the world is going to IPv6 in a few years whether you like it\n>or not, that seems pretty darn short-sighted to me.\n\nWell, a native IPv6 type would also be nice; inet is ridiculously \nbloated for both IPv4 *and* IPv6. \n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 11:27:49 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "Michael Stone <[email protected]> writes:\n> Well, a native IPv6 type would also be nice; inet is ridiculously \n> bloated for both IPv4 *and* IPv6. \n\nNonsense. 3 bytes overhead on a 16-byte address is not \"ridiculously\nbloated\", especially if you want a netmask with it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2008 15:07:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data? "
},
{
"msg_contents": "On Fri, 11 Jan 2008 15:07:38 -0500\nTom Lane <[email protected]> wrote:\n> Michael Stone <[email protected]> writes:\n> > Well, a native IPv6 type would also be nice; inet is ridiculously \n> > bloated for both IPv4 *and* IPv6. \n> \n> Nonsense. 3 bytes overhead on a 16-byte address is not \"ridiculously\n> bloated\", especially if you want a netmask with it.\n\nBesides, there are many cases where you want to track both ipv4 and\nipv6 for the same purpose and requiring two different fields would be\nless than ideal.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 11 Jan 2008 15:19:35 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 03:07:38PM -0500, Tom Lane wrote:\n>Michael Stone <[email protected]> writes:\n>> Well, a native IPv6 type would also be nice; inet is ridiculously \n>> bloated for both IPv4 *and* IPv6. \n>\n>Nonsense. 3 bytes overhead on a 16-byte address is not \"ridiculously\n>bloated\", especially if you want a netmask with it.\n\nBig if, no? There's a very large set of users that *don't* want/need a \nnetmask, which is why the topic keeps coming back. (Also, according to \nthe docs, inet requires 24 bytes, which is 50% more than needed; is that \nnot correct?)\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 16:27:05 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 03:19:35PM -0500, D'Arcy J.M. Cain wrote:\n>Besides, there are many cases where you want to track both ipv4 and\n>ipv6 for the same purpose and requiring two different fields would be\n>less than ideal.\n\nAnd, there are many cases where you don't. I've got two kinds of db's \nthat have IPs in them. In some, the IP is a small part of a table which \nis focused on something else. For those I use inet, which provides a \nnice bit of future-proofing. In other db's the IPs are the primary \nfocus. There are lots and lots of IPs, and the space used by IPs may be \nthe largest chunk of a particular table. For those tables, I don't use \ninet because the overhead really is a significant fraction of the space.\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 16:32:05 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "Michael Stone wrote:\n> On Fri, Jan 11, 2008 at 03:07:38PM -0500, Tom Lane wrote:\n>> Michael Stone <[email protected]> writes:\n>>> Well, a native IPv6 type would also be nice; inet is ridiculously bloated \n>>> for both IPv4 *and* IPv6. \n>>\n>> Nonsense. 3 bytes overhead on a 16-byte address is not \"ridiculously\n>> bloated\", especially if you want a netmask with it.\n>\n> Big if, no? There's a very large set of users that *don't* want/need a \n> netmask, which is why the topic keeps coming back. (Also, according to the \n> docs, inet requires 24 bytes, which is 50% more than needed; is that not \n> correct?)\n\nSo what this means is that our type oughta be optimized. How about\nhaving a separate bit to indicate whether there is a netmask or not, and\nchop the storage earlier. (I dunno if this already done)\n\nAlso, with packed varlenas the overhead is reduced AFAIK.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 11 Jan 2008 18:37:10 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 04:32:05PM -0500, Michael Stone wrote:\n>On Fri, Jan 11, 2008 at 03:19:35PM -0500, D'Arcy J.M. Cain wrote:\n>>Besides, there are many cases where you want to track both ipv4 and\n>>ipv6 for the same purpose and requiring two different fields would be\n>>less than ideal.\n>\n>And, there are many cases where you don't. I've got two kinds of db's \n>that have IPs in them. In some, the IP is a small part of a table which \n>is focused on something else. For those I use inet, which provides a \n>nice bit of future-proofing. In other db's the IPs are the primary \n>focus. There are lots and lots of IPs, and the space used by IPs may be \n>the largest chunk of a particular table. For those tables, I don't use \n>inet because the overhead really is a significant fraction of the space.\n\nOh, yeah, the latter type also has seperate IPv4 and IPv6 tables, \nbecause there's no point in bloating 99% of the data for the 1% that's \nIPv6. Is that a niche requirement? Maybe--but I think that storing \nnetmasks is even *more* of a niche...\n\nI'm not arguing for the removal of inet, but I do think there's room for \nmore than one type--and I certainly think its nuts to pretend that inet \ncan meet every requirement well.\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 16:40:48 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 06:37:10PM -0300, Alvaro Herrera wrote:\n>So what this means is that our type oughta be optimized. How about\n>having a separate bit to indicate whether there is a netmask or not, and\n>chop the storage earlier. (I dunno if this already done)\n\nWhy not just have a type that indicates whether there is a netmask or \nnot? We currently have this (8.3 docs, which I see reflects the 3 byte \noverhead--down to 20% rather than 50% for IPv6):\n\ncidr\t7 or 19 bytes\tIPv4 and IPv6 networks\ninet\t7 or 19 bytes\tIPv4 and IPv6 hosts and networks\n\nNote that there's a type for (networks), and there's a type for (hosts and \nnetworks), but there's a conspicuous lack of a type for (hosts). I \nsuppose if you really are sure that you want to store hosts and not \nnetworks you should use inet and then set a constraint like\n if (family() == 4 && masklen() == 32)\n elsif (family() == 6 && masklen() == 128)\n\n(For people whose databases don't resolve around network data, this \nprobably seems like not a big deal. OTOH, I can only imagine the outcry \nif the only available arithmetic type was an intfloat, which can be \neither an integer or a real number, has very low overhead to keep track \nof whether there's a decimal point, and can easily be made to behave \nlike an integer if you set a constraint forbidding fractional parts.\nBecause, hey, you *never know* when you might need a real number, and \nwouldn't want to paint yourself into a corner by stupidly specifying an \ninteger-only type.)\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 17:02:36 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
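The constraint sketched just above can be written directly, since family() and masklen() are built-in functions on the inet type; a minimal example of a host-only column (the table and constraint names are made up for the example, only the two functions and the inet type are taken as given):

CREATE TABLE host_addrs (
    addr inet NOT NULL,
    CONSTRAINT addr_is_host CHECK (
           (family(addr) = 4 AND masklen(addr) = 32)
        OR (family(addr) = 6 AND masklen(addr) = 128)
    )
);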
{
"msg_contents": "On Fri, Jan 11, 2008 at 05:02:36PM -0500, Michael Stone wrote:\n\n> networks), but there's a conspicuous lack of a type for (hosts). I \n> suppose if you really are sure that you want to store hosts and not \n> networks \n\nWell, part of the trouble is that in the CIDR world, an IP without a netmask\ncan be dangerously ambiguous. I can see why the design is as it is for that\nreason. (But I understand the problem.)\n\nA\n\n",
"msg_date": "Fri, 11 Jan 2008 17:24:03 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "\nOn Jan 11, 2008, at 2:24 PM, Andrew Sullivan wrote:\n\n> On Fri, Jan 11, 2008 at 05:02:36PM -0500, Michael Stone wrote:\n>\n>> networks), but there's a conspicuous lack of a type for (hosts). I\n>> suppose if you really are sure that you want to store hosts and not\n>> networks\n>\n> Well, part of the trouble is that in the CIDR world, an IP without a \n> netmask\n> can be dangerously ambiguous. I can see why the design is as it is \n> for that\n> reason. (But I understand the problem.)\n\nI don't think there's ambiguity about what an dotted-quad without a \nnetmask\nmeans, and hasn't been for a long time. Am I missing something?\n\nThere is ambiguity when you feed non dotted-quads into the\nexisting cidr I/O functions[1], but that's both a dead horse,\nand not something likely to actually affect users negatively.\n\nCheers,\n Steve\n\n[1] Because postgresql copied obsolete pre-CIDR code from libbind.\n",
"msg_date": "Fri, 11 Jan 2008 14:38:27 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, 11 Jan 2008, Andrew Sullivan wrote:\n\n> On Fri, Jan 11, 2008 at 05:02:36PM -0500, Michael Stone wrote:\n>\n>> networks), but there's a conspicuous lack of a type for (hosts). I\n>> suppose if you really are sure that you want to store hosts and not\n>> networks\n>\n> Well, part of the trouble is that in the CIDR world, an IP without a netmask\n> can be dangerously ambiguous. I can see why the design is as it is for that\n> reason. (But I understand the problem.)\n>\n> A\n>\n\nYes, in fact it all depends on the meaning you give to an IP.\n\nIf you want to store subnets, then you need an IP and a netmask, but if \nyou just want to store the IP of a particular equipment (that is, the IP \nthat will be refered to in the TCP/IP header), then there's no ambiguity, \nyou just need 4 bytes to describe this IP.\n\nAnd it's true for IPv6 too, storing an IP that refer to an end point and \nnot a subnet is requiring twice as much data as needed, because the \nnetmask would always be ff:ff:ff:..:ff\n\nSo, for people dealing with large database of IPs, it would be nice to be \nable to save 50% of the corresponding disk/cache/ram space for these IPs.\n\n\nNicolas\n",
"msg_date": "Fri, 11 Jan 2008 23:43:58 +0100 (CET)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 02:38:27PM -0800, Steve Atkins wrote:\n> I don't think there's ambiguity about what an dotted-quad without a \n> netmask\n> means, and hasn't been for a long time. Am I missing something?\n\nWell, maybe. The problem is actually that, without a netmask under CIDR,\nthe address alone isn't really enough. You have to have a netmask to get\nthe packets to the destination. As it happens, we have some nice\nconventions, defined in the RFCs, for how to interpret hosts with no\nnetmask; note though that some of those are only for humans. Or, to put it\nanother way, without context, a dotted-quad is insufficient on its own. \nWhat you're really arguing is that the context ought to be storable\nsomewhere else (maybe in a human's brain). I'm not suggesting that's wrong,\nbut I can see the \"correctness\" argument that someone might have made to get\nto the datatype as it exists. I think calling it \"needless bloat\" is just\nholding it to the wrong criteria.\n\nIf you look at the binary wire data, that netmask is always represented in\nsome sense. It can sometimes be more compact than the general-purpose data\ntype, though, no question. This is why somewhere in this thread someone\ntalked about optimisation: there certainly are ways to make these things\nmore compact.\n\nA\n",
"msg_date": "Fri, 11 Jan 2008 18:00:55 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "\nOn Jan 11, 2008, at 3:00 PM, Andrew Sullivan wrote:\n\n> On Fri, Jan 11, 2008 at 02:38:27PM -0800, Steve Atkins wrote:\n>> I don't think there's ambiguity about what an dotted-quad without a\n>> netmask\n>> means, and hasn't been for a long time. Am I missing something?\n>\n> Well, maybe. The problem is actually that, without a netmask under \n> CIDR,\n> the address alone isn't really enough. You have to have a netmask \n> to get\n> the packets to the destination.\n\nNot really. You may well need netmasks to configure your interface, but\nthere's absolutely no need for them to identify an IP endpoint, which \nis all\nyou need to identify the destination the packet is going to, and that \nis the\nmost common use of IP addresses.\n\n> As it happens, we have some nice\n> conventions, defined in the RFCs, for how to interpret hosts with no\n> netmask; note though that some of those are only for humans. Or, to \n> put it\n> another way, without context, a dotted-quad is insufficient on its \n> own.\n> What you're really arguing is that the context ought to be storable\n> somewhere else (maybe in a human's brain).\n\nA dotted quad without any additional information is an IPv4 address,\nthe same as it would be if followed by \"/32\".\n\nNetmasks are rarely needed at any level above routing\nor (some forms of) address assignment, and that's where an awful lot \nof use of IP\naddresses happens. When you do need netmasks then the cidr type is\ngreat, but that's not the common case.\n\n(And the inet-with-a-netmask is an even odder duck, as it's packing \ntwo mostly\nunrelated bits of information into a single type).\n\n> I'm not suggesting that's wrong,\n> but I can see the \"correctness\" argument that someone might have \n> made to get\n> to the datatype as it exists. I think calling it \"needless bloat\" \n> is just\n> holding it to the wrong criteria.\n>\n>\n> If you look at the binary wire data, that netmask is always \n> represented in\n> some sense. It can sometimes be more compact than the general- \n> purpose data\n> type, though, no question. This is why somewhere in this thread \n> someone\n> talked about optimisation: there certainly are ways to make these \n> things\n> more compact.\n\nI think we're drifting a long way away from a -performance topic here, \nas\nwe're agreed that inet or cidr are likely not the best types for the \noriginal\nposter to use.\n\nCheers,\n Steve\n\n",
"msg_date": "Fri, 11 Jan 2008 15:37:37 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 11 Jan 2008 15:37:37 -0800\r\nSteve Atkins <[email protected]> wrote:\r\n\r\n> > Well, maybe. The problem is actually that, without a netmask\r\n> > under CIDR,\r\n> > the address alone isn't really enough. You have to have a netmask \r\n> > to get\r\n> > the packets to the destination.\r\n> \r\n> Not really. You may well need netmasks to configure your interface,\r\n> but there's absolutely no need for them to identify an IP endpoint,\r\n> which is all\r\n> you need to identify the destination the packet is going to, and\r\n> that is the\r\n> most common use of IP addresses.\r\n\r\nSteve I think you are speaking of practicality and implementation\r\nversus RFC compliance. I believe per the RFC Andrew is correct.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHiAEtATb/zqfZUUQRAqVjAJ9cXrRmDyCYy1vwP6nYI2kbOlYxKgCgga9q\r\njIMuXCy8LKquevyPdehaQaA=\r\n=FNIf\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Fri, 11 Jan 2008 15:52:11 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "\nOn Jan 11, 2008, at 3:52 PM, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Fri, 11 Jan 2008 15:37:37 -0800\n> Steve Atkins <[email protected]> wrote:\n>\n>>> Well, maybe. The problem is actually that, without a netmask\n>>> under CIDR,\n>>> the address alone isn't really enough. You have to have a netmask\n>>> to get\n>>> the packets to the destination.\n>>\n>> Not really. You may well need netmasks to configure your interface,\n>> but there's absolutely no need for them to identify an IP endpoint,\n>> which is all\n>> you need to identify the destination the packet is going to, and\n>> that is the\n>> most common use of IP addresses.\n>\n> Steve I think you are speaking of practicality and implementation\n> versus RFC compliance. I believe per the RFC Andrew is correct.\n\nI don't believe that's the case, but really we're at \"how many angels\ndance on the head of a pin\" level quibbling by this point. :)\n\nCheers,\n Steve\n\n",
"msg_date": "Fri, 11 Jan 2008 15:55:46 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 06:00:55PM -0500, Andrew Sullivan wrote:\n>another way, without context, a dotted-quad is insufficient on its own. \n>What you're really arguing is that the context ought to be storable\n>somewhere else (maybe in a human's brain)\n\nOr, say, in a database schema, where you say \"this column contains host \nIPs\"? Are you suggesting that the IP associated with the host can \nsomehow be ambiguous? IMO, the only ambiguity is that the type is \ndefined as \"network or host\".\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 19:11:37 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "Michael Stone <[email protected]> writes:\n> On Fri, Jan 11, 2008 at 03:07:38PM -0500, Tom Lane wrote:\n>> Nonsense. 3 bytes overhead on a 16-byte address is not \"ridiculously\n>> bloated\", especially if you want a netmask with it.\n\n> Big if, no? There's a very large set of users that *don't* want/need a \n> netmask, which is why the topic keeps coming back. (Also, according to \n> the docs, inet requires 24 bytes, which is 50% more than needed; is that \n> not correct?)\n\nIt was correct, but not as of 8.3. Considering you could save a whole\none byte by not storing the netmask (well, maybe more depending on\nalignment considerations), the complaint level is unjustified.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2008 19:19:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data? "
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 03:52:11PM -0800, Joshua D. Drake wrote:\n>Steve I think you are speaking of practicality and implementation\n>versus RFC compliance. I believe per the RFC Andrew is correct.\n\nThere's an RFC for storing IPs in a database?\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 19:21:29 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 07:19:10PM -0500, Tom Lane wrote:\n>It was correct, but not as of 8.3. Considering you could save a whole\n>one byte by not storing the netmask\n\nHmm. One for the netmask, plus the other two mystery bytes. :-) A byte \nhere and a byte there is fine. 20% of a few billion IPs does start to\nadd up.\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 19:36:02 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 11 Jan 2008 19:21:29 -0500\r\nMichael Stone <[email protected]> wrote:\r\n\r\n> On Fri, Jan 11, 2008 at 03:52:11PM -0800, Joshua D. Drake wrote:\r\n> >Steve I think you are speaking of practicality and implementation\r\n> >versus RFC compliance. I believe per the RFC Andrew is correct.\r\n> \r\n> There's an RFC for storing IPs in a database?\r\n\r\nSigh. No. But there is an RFC that declare how IPs are denoted and used.\r\n\r\nJoshua D. Drake\r\n\r\n> \r\n> Mike Stone\r\n> \r\n> ---------------------------(end of\r\n> broadcast)--------------------------- TIP 1: if posting/reading\r\n> through Usenet, please send an appropriate subscribe-nomail command\r\n> to [email protected] so that your message can get through to\r\n> the mailing list cleanly\r\n> \r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHiA0oATb/zqfZUUQRAgWNAJ9F+9Kxxazh0QK0z9zcskkG1UhFnQCgpqA1\r\nhGiNsL5wjqxM7bzW6qNJfrE=\r\n=/FxX\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Fri, 11 Jan 2008 16:43:18 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "Pomarede Nicolas <[email protected]> writes:\n> And it's true for IPv6 too, storing an IP that refer to an end point and \n> not a subnet is requiring twice as much data as needed, because the \n> netmask would always be ff:ff:ff:..:ff\n> So, for people dealing with large database of IPs, it would be nice to be \n> able to save 50% of the corresponding disk/cache/ram space for these IPs.\n\nThere seem to be a number of people in this thread laboring under the\nillusion that we store a netmask as a mask. It's a bit count (think\n/32 or /128) and occupies a whole one byte on disk. Killer overhead,\nfor sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2008 19:51:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data? "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 11 Jan 2008 19:36:02 -0500\r\nMichael Stone <[email protected]> wrote:\r\n\r\n> On Fri, Jan 11, 2008 at 07:19:10PM -0500, Tom Lane wrote:\r\n> >It was correct, but not as of 8.3. Considering you could save a\r\n> >whole one byte by not storing the netmask\r\n> \r\n> Hmm. One for the netmask, plus the other two mystery bytes. :-) A\r\n> byte here and a byte there is fine. 20% of a few billion IPs does\r\n> start to add up.\r\n\r\nAbout a 65.00 hard disk:\r\n\r\nselect pg_size_pretty((500000000000 * 0.2)::bigint);\r\n pg_size_pretty \r\n- ----------------\r\n 93 GB\r\n\r\nBut wumpf... that is a lot of ip addresses. Question is.. are you going\r\nto have a few billion IPs in your database? Doubtful.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n> \r\n> Mike Stone\r\n> \r\n> ---------------------------(end of\r\n> broadcast)--------------------------- TIP 9: In versions below 8.0,\r\n> the planner will ignore your desire to choose an index scan if your\r\n> joining column's datatypes do not match\r\n> \r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHiBCMATb/zqfZUUQRAotjAJ4r6kjuO8pOZzD316Va1AE8VNt6TgCggQcT\r\nlI/kT2DF59Zuu7cbipdBpPI=\r\n=/xl5\r\n-----END PGP SIGNATURE-----\r\n",
"msg_date": "Fri, 11 Jan 2008 16:57:46 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 04:43:18PM -0800, Joshua D. Drake wrote:\n>Sigh. No. But there is an RFC that declare how IPs are denoted and used.\n\nSigh, I'm honestly curious about what RFC says that endpoint IPs must \nhave netmasks associated with them.\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 20:04:25 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, Jan 11, 2008 at 04:57:46PM -0800, Joshua D. Drake wrote:\n>Question is.. are you going\n>to have a few billion IPs in your database? Doubtful.\n\nReally depends on what you're using the database for, doesn't it?\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 20:07:40 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, 11 Jan 2008, Steve Atkins wrote:\n\n> You may well need netmasks to configure your interface, but there's \n> absolutely no need for them to identify an IP endpoint, which is all you \n> need to identify the destination the packet is going to, and that is the \n> most common use of IP addresses.\n\nTechnically you can't ever send a packet unless you know both the endpoint \nand your local netmask. As the sender, you're obligated to determine if \nthe destination is on your local LAN (in which case you send it there) or \nif it goes to the gateway. That's similar to a routing decision, but it's \nnot quite--if you don't have to look in a routing table, it's not actually \npart of routing.\n\nI believe this sort of detail is why subnet masks are considered required \nfor some things even though it doesn't seem like they are needed. \nRegardless, the details of how the packets move aren't important to some \napplications, and arguing over what the RFCs do and don't require doesn't \nchange that.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 11 Jan 2008 21:56:38 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, 11 Jan 2008 21:56:38 -0500 (EST)\nGreg Smith <[email protected]> wrote:\n> On Fri, 11 Jan 2008, Steve Atkins wrote:\n> \n> > You may well need netmasks to configure your interface, but there's \n> > absolutely no need for them to identify an IP endpoint, which is all you \n> > need to identify the destination the packet is going to, and that is the \n> > most common use of IP addresses.\n> \n> Technically you can't ever send a packet unless you know both the endpoint \n> and your local netmask. As the sender, you're obligated to determine if \n> the destination is on your local LAN (in which case you send it there) or \n> if it goes to the gateway. That's similar to a routing decision, but it's \n> not quite--if you don't have to look in a routing table, it's not actually \n> part of routing.\n\nNot sure what your point is here. Sure, you need the netmask but not\nof every IP address you send to, only for the IP/network that you are\non. That's a grand total of one netmask per interface that you need to\nknow. And you don't store it in your database.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 12 Jan 2008 01:00:48 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Fri, 11 Jan 2008, Greg Smith wrote:\n\n>> You may well need netmasks to configure your interface, but there's \n>> absolutely no need for them to identify an IP endpoint, which is all you \n>> need to identify the destination the packet is going to, and that is the \n>> most common use of IP addresses.\n>\n> Technically you can't ever send a packet unless you know both the endpoint \n> and your local netmask. As the sender, you're obligated to determine if the \n> destination is on your local LAN (in which case you send it there) or if it \n> goes to the gateway. That's similar to a routing decision, but it's not \n> quite--if you don't have to look in a routing table, it's not actually part \n> of routing.\n\nyou also need to know your local IP address, but there is no reason to \nneed the netmask of the other end\n\nmy IP address is 64.81.33.126 why do you need to know my netmaask? how \nwould you find out what it is?\n\nDNS doesn't report the netmask,and it's arguably the biggest database of \nIP addresses around ;-)\n\none of the biggest reasons for storing IP addresses in a SQL database is \nas part of log analysis.\n\nDavid Lang\n\n> I believe this sort of detail is why subnet masks are considered required for \n> some things even though it doesn't seem like they are needed. Regardless, the \n> details of how the packets move aren't important to some applications, and \n> arguing over what the RFCs do and don't require doesn't change that.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n",
"msg_date": "Sat, 12 Jan 2008 11:17:31 +0000 (GMT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "* Steve Atkins:\n\n> I don't think there's ambiguity about what an dotted-quad without a\n> netmask means, and hasn't been for a long time. Am I missing\n> something?\n\nClassful addressing is still part of many user interfaces, for\ninstance Cisco's IOS. It's not just that the CLI accepts it, it's\nalso the standard[*] output format (that is, \"192.0.2.0\" instead of\n\"192.0.2.0/24\"; if no prefix length is given, the one based on the\nclass is used).\n\n[*] I don't think you can switch it off. Obviously, for backwards\n compatibility reasons, the default has to stay anyway.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 14 Jan 2008 09:27:52 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Mon, Jan 14, 2008 at 09:27:52AM +0100, Florian Weimer wrote:\n>* Steve Atkins:\n>> I don't think there's ambiguity about what an dotted-quad without a\n>> netmask means, and hasn't been for a long time. Am I missing\n>> something?\n>\n>Classful addressing is still part of many user interfaces, for\n>instance Cisco's IOS. It's not just that the CLI accepts it, it's\n>also the standard[*] output format (that is, \"192.0.2.0\" instead of\n>\"192.0.2.0/24\"; if no prefix length is given, the one based on the\n>class is used).\n\nAgain, is this at all ambiguous *in the context of a host IP*? (IOW, if \nI say that I recieved an IP packet from 10.0.0.1, are you really going \nto ask me to clarify the netmask?)\n\nMike Stone\n",
"msg_date": "Mon, 14 Jan 2008 06:19:24 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "* Michael Stone:\n\n>>Classful addressing is still part of many user interfaces, for\n>>instance Cisco's IOS. It's not just that the CLI accepts it, it's\n>>also the standard[*] output format (that is, \"192.0.2.0\" instead of\n>>\"192.0.2.0/24\"; if no prefix length is given, the one based on the\n>>class is used).\n>\n> Again, is this at all ambiguous *in the context of a host IP*? (IOW,\n> if I say that I recieved an IP packet from 10.0.0.1, are you really\n> going to ask me to clarify the netmask?)\n\nHmm. It's an argument for a separate CIDR type, not against a host\ntype.\n\nI agree that a host type would be helpful. I currently use check\nconstraints to ensure no one accidentally stores data of the wrong\ntype, which is not ideal (space is not a main concern at this point).\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 14 Jan 2008 15:05:27 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
},
{
"msg_contents": "On Mon, Jan 14, 2008 at 03:05:27PM +0100, Florian Weimer wrote:\n>Hmm. It's an argument for a separate CIDR type, not against a host\n>type.\n\nI don't think anyone argued against the CIDR type. :-)\n\nMike Stone\n",
"msg_date": "Mon, 14 Jan 2008 09:12:01 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to index IP data?"
}
] |
[
{
"msg_contents": "I have the following problem:\n\nIn my table T, there are a fixed number of boolean columns, C1, .., Cn.\n\nNow, a common query is to search in this table for tuples fullfilling an \narbitrary boolean condition (however only using AND and OR), e.g.\n\nSELECT * FROM T WHERE (C1 OR C2) AND (C3 OR C4 AND C5) OR C6\n\nFor every boolean column, I created an index, e.g.\n\nCREATE INDEX columnIndexForC2 ON DIM (C2);\n\nNow I did performance tests and got response times for such a query \nwhich are worser than linear response time..\n\nHowever I also realized that with boolean queries which use only OR or \nuse only AND, the response time is very fast, i.e. it seems to be \nconstant, regardless of how many tuples I add to the table.\n\nSo my question is: How can I get an acceptable response time for \narbitrary boolean terms?\n\nThanks\n",
"msg_date": "Fri, 11 Jan 2008 18:06:03 +0100",
"msg_from": "=?ISO-8859-1?Q?J=F6rg_Kiegeland?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Evaluating boolean formula: slow performance "
},
{
"msg_contents": "I have to correct my statement, a query seems to be only linear when \nusing OR.\n\nThis however is strange since some weeks ago, I could get good response \ntimes when using exclusively AND.\n\nI will investigate on this..\n\n",
"msg_date": "Fri, 11 Jan 2008 18:42:13 +0100",
"msg_from": "=?ISO-8859-1?Q?J=F6rg_Kiegeland?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Evaluating boolean formula: slow performance"
}
] |
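For the boolean-column question above, two things are worth checking. First, whether the planner is combining the per-column indexes at all (on 8.1 and later it can, via BitmapAnd/BitmapOr nodes), which EXPLAIN ANALYZE shows directly; second, whether a partial index fits the data better when one truth value is rare. A sketch, reusing the names from the post (the T vs. DIM table name is left as in the original):

EXPLAIN ANALYZE
SELECT * FROM T WHERE (C1 OR C2) AND (C3 OR C4 AND C5) OR C6;

-- One possible shape when rows with C6 = true are rare:
CREATE INDEX t_c6_true ON T (C6) WHERE C6;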
[
{
"msg_contents": "Hi,\n\nI am having a really hard time trying to figure out why my simple\ncount(*) query is taking so long. I have a table with 1,296,070 rows in\nit. There are 2 different types of information that each row has that I\ncare about:\n\n \n\nstatus : character(1)\n\nsource_id : bigint\n\n \n\nThen, I have the following index on the table:\n\n \n\n\"this_index\" (status, source_id, <another_column>)\n\n \n\nNow when I do the following select, it takes a really long time:\n\n \n\nstingray_4_4_d=# explain analyze select count(*) from listings where\ninsert_status = '1' and data_source_id = 52;\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---\n\n Aggregate (cost=15222.83..15222.84 rows=1 width=0) (actual\ntime=87050.129..87050.130 rows=1 loops=1)\n\n -> Index Scan using listing_status_idx on listings\n(cost=0.00..15211.20 rows=4649 width=0) (actual time=31.118..87031.776\nrows=17209 loops=1)\n\n Index Cond: ((insert_status = '1'::bpchar) AND (data_source_id\n= 52))\n\n Total runtime: 87050.213 ms\n\n(4 rows)\n\n \n\nI actually have the same exact data over on a Mysql box, with the same\nexact index, and that runs in 0.10 seconds.\n\n \n\nClearly something is wrong. Here are a couple of the parameters I have\nset on my Postgres box:\n\n \n\nstingray_4_4_d=# show shared_buffers;\n\n shared_buffers\n\n----------------\n\n 1900MB\n\n(1 row)\n\n \n\nstingray_4_4_d=# show max_fsm_pages;\n\n max_fsm_pages\n\n---------------\n\n 5000000\n\n(1 row)\n\n \n\nAny help would be much appreciated. This is really frustrating. Thanks.\n\n\n\n\n\n\n\n\n\n\n\nHi,\nI am having a really hard time trying to figure out why my\nsimple count(*) query is taking so long. I have a table with 1,296,070 rows in\nit. There are 2 different types of information that each row has that I care\nabout:\n \nstatus : character(1)\nsource_id : bigint\n \nThen, I have the following index on the table:\n \n“this_index”\n(status, source_id, <another_column>)\n \nNow when I do the following select, it takes a really long\ntime:\n \nstingray_4_4_d=#\nexplain analyze select count(*) from listings where insert_status = '1' and\ndata_source_id = 52;\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate \n(cost=15222.83..15222.84 rows=1 width=0) (actual time=87050.129..87050.130\nrows=1 loops=1)\n \n-> Index Scan using listing_status_idx on listings \n(cost=0.00..15211.20 rows=4649 width=0) (actual time=31.118..87031.776\nrows=17209 loops=1)\n \nIndex Cond: ((insert_status = '1'::bpchar) AND (data_source_id = 52))\n Total runtime:\n87050.213 ms\n(4 rows)\n \nI actually have the same exact data over on a Mysql box,\nwith the same exact index, and that runs in 0.10 seconds.\n \nClearly something is wrong. Here are a couple of the\nparameters I have set on my Postgres box:\n \nstingray_4_4_d=#\nshow shared_buffers;\n shared_buffers\n----------------\n 1900MB\n(1 row)\n \nstingray_4_4_d=#\nshow max_fsm_pages;\n max_fsm_pages\n---------------\n 5000000\n(1 row)\n \nAny help would be much appreciated. This is really frustrating.\nThanks.",
"msg_date": "Fri, 11 Jan 2008 17:33:54 -0800",
"msg_from": "\"James DeMichele\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple select, but takes long time"
},
{
"msg_contents": "\"James DeMichele\" <[email protected]> writes:\n> I am having a really hard time trying to figure out why my simple\n> count(*) query is taking so long. I have a table with 1,296,070 rows in\n> it. There are 2 different types of information that each row has that I\n> care about:\n\nHmm, the EXPLAIN output works out to about 5 msec per row, which is not\ntoo out of line for a lot of random-access disk fetches. I'm surprised\nthe planner bothered with an indexscan for this --- I'd bet a seqscan\nmight be faster, seeing you're having to read about 1% of the rows which\nwill likely touch most pages of the table anyway. Or a bitmap indexscan\nmight be even better. What do you get if you try the EXPLAIN ANALYZE\nwith enable_indexscan = off?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jan 2008 21:11:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple select, but takes long time "
}
] |
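The experiment Tom suggests at the end of the thread can be run in a single session without touching the server configuration; table and column names are taken from the original post:

SET enable_indexscan = off;   -- session-local planner toggle, for testing only
EXPLAIN ANALYZE
SELECT count(*) FROM listings
 WHERE insert_status = '1' AND data_source_id = 52;
RESET enable_indexscan;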
[
{
"msg_contents": "Tom wrote:\n>There seem to be a number of people in this thread laboring under the \n>illusion that we store a netmask as a mask. It's a bit count (think \n>/32 or /128) and occupies a whole one byte on disk. Killer overhead, \n>for sure.\n\nThere's no need to be quite so snarky. The netmask isn't the only part \nof the overhead, but you did invite discussion of the netmask in \nparticular when you said \"3 bytes overhead on a 16-byte address is not \n'ridiculously bloated', *especially if you want a netmask with it*.\" You \nmight be hearing less about netmasks if you hadn't used them to justify \nthe size of the inet type. :-) There's also a number of issues being \nconflated, as tends to happen when the pent up displeasure about inet \nerupts on its semi-annual schedule. For myself, I mentioned two distinct \nissues:\n\n1) overhead (over absolute minimum required): 20% for IPv6 and *75%* for \nIPv4. (In fairness, I actually am testing using inet for ipv6 tables, on \nthe assumption that I'll get another order of magnitude out of the \nhardware before I really need high-volume ipv6 storage, and then it \nreally won't matter. But today, for the kind of addresses seen in the \nreal world, it really does matter. Also, recall that while you live in \nthe development version, those of us in the release world are dealing \nwith overheads of 50% for IPv6 and *200%* for IPv4. It'll take us a \nwhile to recalibrate. :-)\n\n2) ambiguity/completeness of data types (is it a host? is it a network? \nwhat data type do I use if I really want to ensure that people don't \nstick routing information into my host column?)\n\nnetmasks are kinda part of both, but they aren't the main point of \neither. \n\nAs far as the hostility surrounding discussions of inet, I understand \nand appreciate that there are reasons inet & cidr work the way they do, \nand I find inet to be very useful in certain cases. But there are also \ncases where they suck, and responses along the lines of \"you should be \nusing ipv6 anyway\" don't ease the pain any. :-) Responses like \"it would \njust be too much work to support seperate ipv4 & ipv6 data types\" would \nstill suck :-) but at least they wouldn't be telling people that they're \nimagining the problems that inet has had meeting their particular \nrequirements. And regardless of whether postgres gets a seperate ipv4 \ndata type, I'd still like to have an \"inethost\" data type as a \ncomplement to the \"cidr\" data type. :-)\n\nMike Stone\n",
"msg_date": "Fri, 11 Jan 2008 20:58:47 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best way to index IP data?"
}
] |
[
{
"msg_contents": "\"James DeMichele\" <[email protected]> wrote ..\n\n> Then, I have the following index on the table:\n \n> \"this_index\" (status, source_id, <another_column>)\n\nIf you have many queries of this type, do\n\nCLUSTER this_index ON tablename;\n\nand retry the SELECT.\n\nMySQL is using some sort of auto-clustering ISAM on the other box mayhaps?\n\nIn Postgres you will have to re-CLUSTER periodically if there are INSERTs and UPDATEs.\n",
"msg_date": "Fri, 11 Jan 2008 21:37:29 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Simple select, but takes long time"
},
{
"msg_contents": "[email protected] writes:\n> MySQL is using some sort of auto-clustering ISAM on the other box mayhaps?\n\nI think they probably count the index entries without ever visiting the\ntable proper. Works great for mainly-static data ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 12 Jan 2008 00:44:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple select, but takes long time "
}
] |
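Written out, the suggestion above looks like this for the table from the earlier count(*) thread (this_index and listings stand in for the poster's real index and table names; the index-name-first CLUSTER syntax is the one used up through 8.3):

-- CLUSTER rewrites the table in index order and takes an exclusive lock,
-- so on a large table it is a maintenance-window operation.
CLUSTER this_index ON listings;
ANALYZE listings;   -- refresh planner statistics after the physical reorder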
[
{
"msg_contents": "Useing 8.1.9 I'm running into some performance issues with inheritance. \nI've abstracted the situation below because otherwise there's lots of \nspurious stuff, but I'll gladly provide the real EXPLAIN ANALYZE output if \nnecessary.\n\nI have a superclass and a dozen subclasses, of which I've picked one as an \nexample here. The 'point' type in the class is not, as far as I can see, \nrelevant to the issue -- it could be any type.\n\ncreate table superclass (\n id integer PRIMARY KEY,\n node point,\n ...\n)\n\ncreate table subclass (\n...\n) INHERITS (superclass);\nCREATE UNIQUE INDEX subclass_id ON subclass USING btree (id);\n\ncreate table some_table (\n node point,\n ...\n)\n\nI perform a query on some_table using an expensive function with two of its \nthree point parameters looked up in the subclass (as an id -> node lookup \ntable). The first two point parameters to expensive_function are effectively \nconstants looked up once. I can structure it using scalar subqueries q1, or \nas a join q2. Both are quick, the join is a little quicker.\n\n-- q1 scalar subqueries\nselect *\n from some_table\n where\n and expensive_function(\n (select node from subclass where id = 101),\n (select node from subclass where id = 102),\n some_table.node);\n\n-- q2 join\nselect *\n from some_table, subclass g1, subclass g2\n where expensive_function(g1.node, g2.node, some_table.node)\nand g1.id = 101\nand g2.id = 102;\n\nNow what if I use the superclass? The scalar subquery strategy q3 is fine. \nThe result of the subquery is unique because it's a scalar subquery, and the \nplanner knows that:\n\n-- q3 scalar subqueries using superclass\nselect *\n from some_table\n where\n and expensive_function(\n (select node from superclass where id = 101),\n (select node from superclass where id = 102),\n some_table.node);\n\nBut the join q4 is a disaster.\n\n-- q4 join join using superclass\nselect *\n from some_table, superclass g1, superclass g2\n where expensive_function(g1.node, g2.node, some_table.node)\nand g1.id = 101\nand g2.id = 102;\n\nAnd I *think* I can see why -- I hope I'm not trying to be too smart here ;) \n: superclass.id is not guaranteed to be unique, and the planner must cater \nfor multiple rows where g1.id = 101, and multiple rows where g2.id = 102 \nacross the dozen tables comprising superclass. So it picks a different \nstrategy involving sequential scans of all the superclass tables (even \nthough they have been ANALYZED) which is 100 times slower.\n\nSo the scalar-subqueries method is the only one I can use for the \nsuperclass. That's all very well as a workaround, but what I really want to \ndo is a further join. 
Here are the queries using the subclass.\n\ncreate table other_table (\n route integer,\n leg_no integer,\n start_id integer,\n end_id integer\n)\n\n\n-- q5 scalar subqueries\nselect some_table.*\n from some_table, other_table\n where\n and expensive_function(\n (select node from subclass where id = start_id),\n (select node from subclass where id = end_id),\n some_table.node)\n and other_table.route = 1;\n\n-- q6 join\nselect some_table.*\n from some_table, other_table, subclass g1, subclass g2\n where expensive_function(g1.node, g2.node, some_table.node)\n and other_table.route = 1\n and other_table.start_id = g1.id\n and other_table.end_id = g2.id;\n\nWhen I test this on the subclass, as the \"route\" acquires more and more \n\"legs\", the join q6 outperforms q5 by more and more.\n\n-- q7 join\nselect some_table.*\n from some_table, other_table, superclass g1, superclass g2\n where expensive_function(g1.node, g2.node, some_table.node)\n and other_table.route = 1\n and other_table.start_id = g1.id\n and other_table.end_id = g2.id;\n\nSo is there some way I can hint to the planner in q7 that superclass.id is \nunique and that all it has to do is use superclass as an id -> node lookup \ntable?\n\nThanks\n\nJulian \n\n",
"msg_date": "Sat, 12 Jan 2008 19:34:23 -0000",
"msg_from": "\"Julian Scarfe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inheritance, unique keys and performance"
},
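For reference, the scalar-subquery workaround the post says is the only usable form against the superclass, written out in the q5 style with the names used there; it avoids joining against the whole inheritance tree, at the cost of two correlated subplans per other_table row:

SELECT some_table.*
  FROM some_table, other_table
 WHERE expensive_function(
           (SELECT node FROM superclass WHERE id = other_table.start_id),
           (SELECT node FROM superclass WHERE id = other_table.end_id),
           some_table.node)
   AND other_table.route = 1;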
{
"msg_contents": "\"Julian Scarfe\" <[email protected]> writes:\n> Useing 8.1.9 I'm running into some performance issues with inheritance. \n> I've abstracted the situation below because otherwise there's lots of \n> spurious stuff, but I'll gladly provide the real EXPLAIN ANALYZE output if \n> necessary.\n\nWithout the EXPLAIN ANALYZE output, nobody can say whether you have\ninterpreted your performance problem correctly or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 12 Jan 2008 15:35:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance, unique keys and performance "
},
{
"msg_contents": "> Without the EXPLAIN ANALYZE output, nobody can say whether you have\n> interpreted your performance problem correctly or not.\n\nFair enough, Tom.\n\n superclass = \"geonode\", subclass = \"airport\", expensive_function = \n\"gc_offroute\".\n\nFor this test, some_table is also \"airport\".\n\nThere's also a coarse filter applied (using bounding boxes and an rtree \nindex) as well as the expensive_function.\n\n(And before anyone suggests it, I know this looks geospatial, but I don't \nthink PostGIS does what I need.)\n\nThanks\n\nJulian\n\n----------------------------------------------\n\ncreate temp table route_leg_tmp (\n route integer,\n seq_no integer,\n start_id integer,\n end_id integer\n);\nCREATE TABLE\ninsert into route_leg_tmp values (2,1,347428,347140);\nINSERT 0 1\ninsert into route_leg_tmp values (2,2,347140,347540);\nINSERT 0 1\ninsert into route_leg_tmp values (2,3,347540,347164);\nINSERT 0 1\ninsert into route_leg_tmp values (2,4,347428,347140);\nINSERT 0 1\ninsert into route_leg_tmp values (2,5,347140,347540);\nINSERT 0 1\ninsert into route_leg_tmp values (2,6,347540,347164);\nINSERT 0 1\nanalyze route_leg_tmp;\nANALYZE\n\n\ntest 1 subclass, scalar subquery\nexplain analyze\nselect airport.airport_id, airport.name, seq_no\n from airport, route_leg_tmp\n where box(airport.node,airport.node) && bounding_box(\n (select node from airport where geonode_id = start_id),\n (select node from airport where geonode_id = end_id),\n 30.0)\n and gc_offroute(\n (select node from airport where geonode_id = start_id),\n (select node from airport where geonode_id = end_id),\n airport.node) < 30.0\nand route = 2;\n QUERY \nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=9.25..8687.51 rows=165 width=24) (actual \ntime=41.585..57.670 rows=126 loops=1)\n Join Filter: (gc_offroute((subplan), (subplan), \"inner\".node) < \n30::double precision)\n -> Seq Scan on route_leg_tmp (cost=0.00..1.07 rows=6 width=12) (actual \ntime=0.013..0.030 rows=6 loops=1)\n Filter: (route = 2)\n -> Bitmap Heap Scan on airport (cost=9.25..290.98 rows=83 width=36) \n(actual time=0.122..0.285 rows=69 loops=6)\n Recheck Cond: (box(airport.node, airport.node) && \nbounding_box((subplan), (subplan), 30::double precision))\n -> Bitmap Index Scan on geonode_node (cost=0.00..9.25 rows=83 \nwidth=0) (actual time=0.110..0.110 rows=69 loops=6)\n Index Cond: (box(airport.node, airport.node) && \nbounding_box((subplan), (subplan), 30::double precision))\n SubPlan\n -> Index Scan using airport_geonode_id on airport \n(cost=0.00..3.48 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_geonode_id on airport \n(cost=0.00..3.48 rows=1 width=16) (actual time=0.020..0.022 rows=1 loops=6)\n Index Cond: (geonode_id = $0)\n SubPlan\n -> Index Scan using airport_geonode_id on airport \n(cost=0.00..3.48 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_geonode_id on airport \n(cost=0.00..3.48 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n SubPlan\n -> Index Scan using airport_geonode_id on airport (cost=0.00..3.48 \nrows=1 width=16) (actual time=0.006..0.007 rows=1 loops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_geonode_id on airport (cost=0.00..3.48 \nrows=1 width=16) (actual time=0.006..0.007 rows=1 loops=412)\n Index Cond: (geonode_id = 
$0)\n Total runtime: 58.227 ms\n(24 rows)\n\ntest 2 subclass, join\nexplain analyze\nselect airport.airport_id, airport.name, seq_no\n from airport, route_leg_tmp, airport g1, airport g2\n where box(airport.node,airport.node) && bounding_box(g1.node, g2.node, \n30.0)\n and gc_offroute(g1.node, g2.node, airport.node) < 30.0\nand route = 2\nand start_id = g1.geonode_id\nand end_id = g2.geonode_id;\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.29..1758.30 rows=165 width=24) (actual \ntime=0.690..7.597 rows=126 loops=1)\n Join Filter: (gc_offroute(\"outer\".node, \"outer\".node, \"inner\".node) < \n30::double precision)\n -> Nested Loop (cost=0.00..42.97 rows=6 width=36) (actual \ntime=0.035..0.178 rows=6 loops=1)\n -> Nested Loop (cost=0.00..22.02 rows=6 width=24) (actual \ntime=0.024..0.106 rows=6 loops=1)\n -> Seq Scan on route_leg_tmp (cost=0.00..1.07 rows=6 \nwidth=12) (actual time=0.008..0.020 rows=6 loops=1)\n Filter: (route = 2)\n -> Index Scan using airport_geonode_id on airport g2 \n(cost=0.00..3.48 rows=1 width=20) (actual time=0.009..0.009 rows=1 loops=6)\n Index Cond: (\"outer\".end_id = g2.geonode_id)\n -> Index Scan using airport_geonode_id on airport g1 \n(cost=0.00..3.48 rows=1 width=20) (actual time=0.006..0.008 rows=1 loops=6)\n Index Cond: (\"outer\".start_id = g1.geonode_id)\n -> Bitmap Heap Scan on airport (cost=2.29..284.02 rows=83 width=36) \n(actual time=0.087..0.171 rows=69 loops=6)\n Recheck Cond: (box(airport.node, airport.node) && \nbounding_box(\"outer\".node, \"outer\".node, 30::double precision))\n -> Bitmap Index Scan on geonode_node (cost=0.00..2.29 rows=83 \nwidth=0) (actual time=0.078..0.078 rows=69 loops=6)\n Index Cond: (box(airport.node, airport.node) && \nbounding_box(\"outer\".node, \"outer\".node, 30::double precision))\n Total runtime: 7.856 ms\n(15 rows)\n\ntest 3 superclass, scalar subquery\nexplain analyze\nselect airport.airport_id, airport.name, seq_no\n from airport, route_leg_tmp\n where box(airport.node,airport.node) && bounding_box(\n (select node from geonode where geonode_id = start_id),\n (select node from geonode where geonode_id = end_id),\n 30.0)\n and gc_offroute(\n (select node from geonode where geonode_id = start_id),\n (select node from geonode where geonode_id = end_id),\n airport.node) < 30.0\nand route = 2;\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=61.46..60998.04 rows=165 width=24) (actual \ntime=1.455..59.031 rows=126 loops=1)\n Join Filter: (gc_offroute((subplan), (subplan), \"inner\".node) < \n30::double precision)\n -> Seq Scan on route_leg_tmp (cost=0.00..1.07 rows=6 width=12) (actual \ntime=0.014..0.031 rows=6 loops=1)\n Filter: (route = 2)\n -> Bitmap Heap Scan on airport (cost=61.46..343.19 rows=83 width=36) \n(actual time=0.089..0.220 rows=69 loops=6)\n Recheck Cond: (box(airport.node, airport.node) && \nbounding_box((subplan), (subplan), 30::double precision))\n -> Bitmap Index Scan on geonode_node (cost=0.00..61.46 rows=83 \nwidth=0) (actual time=0.079..0.079 rows=69 loops=6)\n Index Cond: (box(airport.node, airport.node) && \nbounding_box((subplan), (subplan), 30::double precision))\n SubPlan\n -> Result (cost=0.00..29.58 rows=9 width=16) (actual \ntime=0.016..0.063 rows=1 loops=6)\n -> 
Append (cost=0.00..29.58 rows=9 width=16) \n(actual time=0.012..0.057 rows=1 loops=6)\n -> Index Scan using geonode_pkey on geonode \n(cost=0.00..4.82 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_geonode_id on \nairport geonode (cost=0.00..3.48 rows=1 width=16) (actual time=0.006..0.008 \nrows=1 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using \nairport_communications_geonode_id on airport_communications geonode \n(cost=0.00..3.01 rows=1 width=16) (actual time=0.005..0.005 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using \nairport_waypoint_geonode_id on airport_waypoint geonode (cost=0.00..3.20 \nrows=1 width=16) (actual time=0.004..0.004 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using \nenroute_waypoint_geonode_id on enroute_waypoint geonode (cost=0.00..3.01 \nrows=1 width=16) (actual time=0.004..0.004 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using ils_navaid_geonode_id on \nils_navaid geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using ndb_navaid_geonode_id on \nndb_navaid geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.006..0.006 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using runway_geonode_id on \nrunway geonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.005..0.005 \nrows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Index Scan using vhf_navaid_geonode_id on \nvhf_navaid geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=6)\n Index Cond: (geonode_id = $1)\n -> Result (cost=0.00..29.58 rows=9 width=16) (actual \ntime=0.028..0.136 rows=1 loops=6)\n -> Append (cost=0.00..29.58 rows=9 width=16) \n(actual time=0.024..0.130 rows=1 loops=6)\n -> Index Scan using geonode_pkey on geonode \n(cost=0.00..4.82 rows=1 width=16) (actual time=0.012..0.012 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using airport_geonode_id on \nairport geonode (cost=0.00..3.48 rows=1 width=16) (actual time=0.009..0.010 \nrows=1 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using \nairport_communications_geonode_id on airport_communications geonode \n(cost=0.00..3.01 rows=1 width=16) (actual time=0.014..0.014 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using \nairport_waypoint_geonode_id on airport_waypoint geonode (cost=0.00..3.20 \nrows=1 width=16) (actual time=0.014..0.014 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using \nenroute_waypoint_geonode_id on enroute_waypoint geonode (cost=0.00..3.01 \nrows=1 width=16) (actual time=0.013..0.013 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using ils_navaid_geonode_id on \nils_navaid geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.012..0.012 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using ndb_navaid_geonode_id on \nndb_navaid geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.013..0.013 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using runway_geonode_id on \nrunway geonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.013..0.013 \nrows=0 loops=6)\n Index Cond: (geonode_id = $0)\n -> Index Scan using vhf_navaid_geonode_id on \nvhf_navaid geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.012..0.012 rows=0 loops=6)\n Index Cond: (geonode_id = $0)\n SubPlan\n -> Result (cost=0.00..29.58 rows=9 width=16) 
(never executed)\n -> Append (cost=0.00..29.58 rows=9 width=16) (never \nexecuted)\n -> Index Scan using geonode_pkey on geonode \n(cost=0.00..4.82 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_geonode_id on airport \ngeonode (cost=0.00..3.48 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using \nairport_communications_geonode_id on airport_communications geonode \n(cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_waypoint_geonode_id on \nairport_waypoint geonode (cost=0.00..3.20 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using enroute_waypoint_geonode_id on \nenroute_waypoint geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using ils_navaid_geonode_id on \nils_navaid geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using ndb_navaid_geonode_id on \nndb_navaid geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using runway_geonode_id on runway \ngeonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Index Scan using vhf_navaid_geonode_id on \nvhf_navaid geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $1)\n -> Result (cost=0.00..29.58 rows=9 width=16) (never executed)\n -> Append (cost=0.00..29.58 rows=9 width=16) (never \nexecuted)\n -> Index Scan using geonode_pkey on geonode \n(cost=0.00..4.82 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using airport_geonode_id on airport \ngeonode (cost=0.00..3.48 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using \nairport_communications_geonode_id on airport_communications geonode \n(cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using airport_waypoint_geonode_id on \nairport_waypoint geonode (cost=0.00..3.20 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using enroute_waypoint_geonode_id on \nenroute_waypoint geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using ils_navaid_geonode_id on \nils_navaid geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using ndb_navaid_geonode_id on \nndb_navaid geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using runway_geonode_id on runway \ngeonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n -> Index Scan using vhf_navaid_geonode_id on \nvhf_navaid geonode (cost=0.00..3.01 rows=1 width=16) (never executed)\n Index Cond: (geonode_id = $0)\n SubPlan\n -> Result (cost=0.00..29.58 rows=9 width=16) (actual \ntime=0.014..0.056 rows=1 loops=412)\n -> Append (cost=0.00..29.58 rows=9 width=16) (actual \ntime=0.011..0.052 rows=1 loops=412)\n -> Index Scan using geonode_pkey on geonode \n(cost=0.00..4.82 rows=1 width=16) (actual time=0.002..0.002 rows=0 \nloops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_geonode_id on airport geonode \n(cost=0.00..3.48 rows=1 width=16) (actual time=0.006..0.007 rows=1 \nloops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_communications_geonode_id on 
\nairport_communications geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.005..0.005 rows=0 loops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using airport_waypoint_geonode_id on \nairport_waypoint geonode (cost=0.00..3.20 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using enroute_waypoint_geonode_id on \nenroute_waypoint geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using ils_navaid_geonode_id on ils_navaid \ngeonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using ndb_navaid_geonode_id on ndb_navaid \ngeonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using runway_geonode_id on runway geonode \n(cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $1)\n -> Index Scan using vhf_navaid_geonode_id on vhf_navaid \ngeonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $1)\n -> Result (cost=0.00..29.58 rows=9 width=16) (actual \ntime=0.015..0.058 rows=1 loops=412)\n -> Append (cost=0.00..29.58 rows=9 width=16) (actual \ntime=0.012..0.053 rows=1 loops=412)\n -> Index Scan using geonode_pkey on geonode \n(cost=0.00..4.82 rows=1 width=16) (actual time=0.003..0.003 rows=0 \nloops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using airport_geonode_id on airport geonode \n(cost=0.00..3.48 rows=1 width=16) (actual time=0.006..0.007 rows=1 \nloops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using airport_communications_geonode_id on \nairport_communications geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.005..0.005 rows=0 loops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using airport_waypoint_geonode_id on \nairport_waypoint geonode (cost=0.00..3.20 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using enroute_waypoint_geonode_id on \nenroute_waypoint geonode (cost=0.00..3.01 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using ils_navaid_geonode_id on ils_navaid \ngeonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using ndb_navaid_geonode_id on ndb_navaid \ngeonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using runway_geonode_id on runway geonode \n(cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $0)\n -> Index Scan using vhf_navaid_geonode_id on vhf_navaid \ngeonode (cost=0.00..3.01 rows=1 width=16) (actual time=0.004..0.004 rows=0 \nloops=412)\n Index Cond: (geonode_id = $0)\n Total runtime: 60.119 ms\n(132 rows)\n\ntest 4 superclass, join\nexplain analyze\nselect airport.airport_id, airport.name, seq_no\n from airport, route_leg_tmp, geonode g1, geonode g2\n where box(airport.node,airport.node) && bounding_box(g1.node, g2.node, \n30.0)\n and gc_offroute(g1.node, g2.node, airport.node) < 30.0\nand route = 2\nand start_id = g1.geonode_id\nand end_id = g2.geonode_id;\n QUERY 
\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=16218.72..1319268499.15 rows=126889605 width=24) (actual \ntime=699.803..1367.254 rows=126 loops=1)\n Join Filter: (gc_offroute(\"outer\".node, \"outer\".node, \"inner\".node) < \n30::double precision)\n -> Hash Join (cost=16216.43..846068.41 rows=4611652 width=36) (actual \ntime=699.023..1359.460 rows=6 loops=1)\n Hash Cond: (\"outer\".geonode_id = \"inner\".start_id)\n -> Append (cost=0.00..14834.48 rows=175348 width=20) (actual \ntime=1.262..546.943 rows=174503 loops=1)\n -> Seq Scan on geonode g1 (cost=0.00..16.50 rows=650 \nwidth=20) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on airport g1 (cost=0.00..2536.09 rows=16509 \nwidth=20) (actual time=1.257..42.097 rows=16509 loops=1)\n -> Seq Scan on airport_communications g1 (cost=0.00..742.55 \nrows=11855 width=20) (actual time=0.025..19.324 rows=11855 loops=1)\n -> Seq Scan on airport_waypoint g1 (cost=0.00..2048.75 \nrows=28975 width=20) (actual time=0.016..48.997 rows=28975 loops=1)\n -> Seq Scan on enroute_waypoint g1 (cost=0.00..6162.84 \nrows=73384 width=20) (actual time=26.347..137.920 rows=73189 loops=1)\n -> Seq Scan on ils_navaid g1 (cost=0.00..421.72 rows=3472 \nwidth=20) (actual time=0.023..7.718 rows=3472 loops=1)\n -> Seq Scan on ndb_navaid g1 (cost=0.00..857.86 rows=8086 \nwidth=20) (actual time=0.025..16.332 rows=8086 loops=1)\n -> Seq Scan on runway g1 (cost=0.00..1241.88 rows=26388 \nwidth=20) (actual time=0.024..38.679 rows=26388 loops=1)\n -> Seq Scan on vhf_navaid g1 (cost=0.00..806.29 rows=6029 \nwidth=20) (actual time=0.026..14.019 rows=6029 loops=1)\n -> Hash (cost=16203.28..16203.28 rows=5260 width=24) (actual \ntime=683.843..683.843 rows=6 loops=1)\n -> Hash Join (cost=1.09..16203.28 rows=5260 width=24) \n(actual time=15.878..683.828 rows=6 loops=1)\n Hash Cond: (\"outer\".geonode_id = \"inner\".end_id)\n -> Append (cost=0.00..14834.48 rows=175348 width=20) \n(actual time=0.087..553.947 rows=174503 loops=1)\n -> Seq Scan on geonode g2 (cost=0.00..16.50 \nrows=650 width=20) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on airport g2 (cost=0.00..2536.09 \nrows=16509 width=20) (actual time=0.083..45.540 rows=16509 loops=1)\n -> Seq Scan on airport_communications g2 \n(cost=0.00..742.55 rows=11855 width=20) (actual time=0.021..19.947 \nrows=11855 loops=1)\n -> Seq Scan on airport_waypoint g2 \n(cost=0.00..2048.75 rows=28975 width=20) (actual time=0.025..49.609 \nrows=28975 loops=1)\n -> Seq Scan on enroute_waypoint g2 \n(cost=0.00..6162.84 rows=73384 width=20) (actual time=26.250..134.931 \nrows=73189 loops=1)\n -> Seq Scan on ils_navaid g2 (cost=0.00..421.72 \nrows=3472 width=20) (actual time=0.028..7.621 rows=3472 loops=1)\n -> Seq Scan on ndb_navaid g2 (cost=0.00..857.86 \nrows=8086 width=20) (actual time=0.040..16.490 rows=8086 loops=1)\n -> Seq Scan on runway g2 (cost=0.00..1241.88 \nrows=26388 width=20) (actual time=0.027..38.942 rows=26388 loops=1)\n -> Seq Scan on vhf_navaid g2 (cost=0.00..806.29 \nrows=6029 width=20) (actual time=0.029..13.836 rows=6029 loops=1)\n -> Hash (cost=1.07..1.07 rows=6 width=12) (actual \ntime=0.035..0.035 rows=6 loops=1)\n -> Seq Scan on route_leg_tmp (cost=0.00..1.07 \nrows=6 width=12) (actual time=0.014..0.023 rows=6 loops=1)\n Filter: (route = 2)\n -> Bitmap Heap Scan on airport (cost=2.29..284.02 rows=83 width=36) \n(actual time=0.107..0.226 rows=69 loops=6)\n Recheck 
Cond: (box(airport.node, airport.node) && \nbounding_box(\"outer\".node, \"outer\".node, 30::double precision))\n -> Bitmap Index Scan on geonode_node (cost=0.00..2.29 rows=83 \nwidth=0) (actual time=0.097..0.097 rows=69 loops=6)\n Index Cond: (box(airport.node, airport.node) && \nbounding_box(\"outer\".node, \"outer\".node, 30::double precision))\n Total runtime: 1367.578 ms\n(35 rows)\n\n",
"msg_date": "Sun, 13 Jan 2008 09:13:21 -0000",
"msg_from": "\"Julian Scarfe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inheritance, unique keys and performance "
},
{
"msg_contents": "\"Julian Scarfe\" <[email protected]> writes:\n>> Without the EXPLAIN ANALYZE output, nobody can say whether you have\n>> interpreted your performance problem correctly or not.\n\n> Fair enough, Tom.\n\nOkay, so the \"expensive function\" isn't as expensive as all that ---\nit seems to be adding only a few msec to the total runtime. The\nproblem you've got is that you need a nestloop-with-inner-indexscan\nplan type, where the inner side is an Append group (that is, an\ninheritance tree) --- and 8.1 doesn't know how to create such a plan.\nIf you can update to 8.2 or later it should get better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Jan 2008 13:46:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance, unique keys and performance "
},
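A hedged way to verify the suggestion after upgrading: re-run the abstracted q7 under EXPLAIN ANALYZE on 8.2 or later and check that the inner side of the nested loop has become an Append of index scans on the child tables, rather than the hash join over sequential scans seen in test 4 above.

-- sketch only; names as in the abstracted schema from the first post
explain analyze
select some_table.*
  from some_table, other_table, superclass g1, superclass g2
 where expensive_function(g1.node, g2.node, some_table.node)
   and other_table.route = 1
   and other_table.start_id = g1.id
   and other_table.end_id = g2.id;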
{
"msg_contents": "From: \"Tom Lane\" <[email protected]>\n> The\n> problem you've got is that you need a nestloop-with-inner-indexscan\n> plan type, where the inner side is an Append group (that is, an\n> inheritance tree) --- and 8.1 doesn't know how to create such a plan.\n> If you can update to 8.2 or later it should get better.\n\nIsn't test 3 (scalar subqueries) also a nestloop-with-inner-indexscan with \nan inner Append?\n\nJulian \n\n",
"msg_date": "Sun, 13 Jan 2008 19:28:18 -0000",
"msg_from": "\"Julian Scarfe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inheritance, unique keys and performance "
},
{
"msg_contents": "\"Julian Scarfe\" <[email protected]> writes:\n> From: \"Tom Lane\" <[email protected]>\n>> The\n>> problem you've got is that you need a nestloop-with-inner-indexscan\n>> plan type, where the inner side is an Append group (that is, an\n>> inheritance tree) --- and 8.1 doesn't know how to create such a plan.\n>> If you can update to 8.2 or later it should get better.\n\n> Isn't test 3 (scalar subqueries) also a nestloop-with-inner-indexscan with \n> an inner Append?\n\nNo. The subquery is planned separately and sees the upper query's\nvariable as a pseudo-constant Param. In neither query does the\nplanner think that a join is happening.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Jan 2008 15:47:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance, unique keys and performance "
}
] |
[
{
"msg_contents": "\n\tGreetings,\n\n\tI was trying to get informations on #portgresql about a query plan I\nthink is quite strange, and I was said to post on this list. I hope my\nmail will be clear enough. I have included the query, the query plan,\nand the table definitions. I just don't understand the \"Seq Scan\" on\nfileds that are indexed.\n\n\tThanks in advance\n\n\tYannick\n\n\tHere we go :\n\n\n--------------------------------\n\nQUERY \n\nSELECT \n\t_article.*, \n\t( \n\t\tSELECT COUNT (id) \n\t\tFROM _comment \n\t\tWHERE parent_id = _article.id \n\t) AS nb_comments, \n\t_blog.id as blog_id, \n\t_blog.name as blog_name,\n\txpath_string(_blog.data,'/data/title') as blog_title, \n\t_blog.reference as blog_ref, \n\t_blog.main_host as blog_main_host, \n\t_user.id as user_id, \n\t_user.reference as user_ref, \n\t_user.nickname as user_nickname\n\t\nFROM _article \n\nINNER JOIN _blog \n\tON _article.path <@ _blog.path \nINNER JOIN _entity_has_element \n\tON _entity_has_element.element_id = _blog.id \nINNER JOIN _user \n\tON _user.id = _entity_has_element.entity_id\n\tAND _entity_has_element.role_id = 5 \n\t\nWHERE _article.id IN \n\t( \n\t\tSELECT _relation.destination_id AS id \n\t\tFROM _relation \n\t\tWHERE _relation.parent_id = 1008109112\n\t) \nAND _article.date_publishing < now () \n\nORDER BY nb_comments DESC \nOFFSET 0 \nLIMIT 5\n\n--------------------------------\n QUERY PLAN\nLimit (cost=378253213.46..378253213.47 rows=5 width=1185)\n-> Sort (cost=378253213.46..378260027.29 rows=2725530 width=1185)\nSort Key: (subplan)\n-> Hash Join (cost=4907270.58..375534454.12 rows=2725530 width=1185)\nHash Cond: (_entity_has_element.element_id = _blog.id)\n-> Hash Join (cost=220747.87..260246.60 rows=543801 width=32)\nHash Cond: (_entity_has_element.entity_id = _user.id)\n-> Seq Scan on _entity_has_element (cost=0.00..19696.96 rows=543801 width=16)\nFilter: (role_id = 5)\n-> Hash (cost=205537.72..205537.72 rows=806972 width=24)\n-> Seq Scan on _user (cost=0.00..205537.72 rows=806972 width=24)\n-> Hash (cost=4309146.55..4309146.55 rows=2388333 width=1161)\n-> Nested Loop (cost=8782.56..4309146.55 rows=2388333 width=1161)\n-> Nested Loop (cost=8689.45..43352.96 rows=5012 width=1073)\n-> HashAggregate (cost=8689.45..8740.05 rows=5060 width=8)\n-> Bitmap Heap Scan on _relation (cost=124.98..8674.40 rows=6021 width=8)\nRecheck Cond: (parent_id = 1008109112)\n-> Bitmap Index Scan on idx_relation_parent_id (cost=0.00..123.47 rows=6021 width=0)\nIndex Cond: (parent_id = 1008109112)\n-> Index Scan using _article_pkey on _article (cost=0.00..6.83 rows=1 width=1073)\nIndex Cond: (_article.id = _relation.destination_id)\nFilter: (date_publishing < now())\n-> Bitmap Heap Scan on _blog (cost=93.11..845.15 rows=477 width=114)\nRecheck Cond: (_article.path <@ _blog.path)\n-> Bitmap Index Scan on gist_idx_blog_path (cost=0.00..92.99 rows=477 width=0)\nIndex Cond: (_article.path <@ _blog.path)\nSubPlan\n-> Aggregate (cost=135.81..135.82 rows=1 width=8)\n-> Index Scan using idx_comment_parent_id on _comment (cost=0.00..135.61 rows=79 width=8)\nIndex Cond: (parent_id = $0)\n\n--------------------\n\nNow on the table definition :\n\n--------------------------------------------\n\nCREATE TABLE \"_article\" (\n\t\"id\" bigint NOT NULL DEFAULT nextval('element_id_sequence'::regclass),\n \"parent_id\" bigint,\n \"path\" ltree,\n \"data\" text,\n \"date_creation\" timestamp without time zone NOT NULL DEFAULT now(),\n \"date_publishing\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"date_modification\" 
timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"counters\" hstore,\n \"reference\" integer NOT NULL DEFAULT\nnextval('_article_reference_seq'::regclass),\n \"title\" character varying NOT NULL,\n \"text\" text,\n CONSTRAINT \"_article_pkey\" PRIMARY KEY (id)\n) WITHOUT OIDS;\n\n-- Indexes\n\nCREATE UNIQUE INDEX _article_pkey ON _article USING btree (id);\nCREATE INDEX gist_idx_article_path ON _article USING gist (path);\nCREATE INDEX idx_article_date_creation ON _article USING btree\n(date_creation);\nCREATE INDEX idx_article_date_modification ON _article USING btree\n(date_modification);\nCREATE INDEX idx_article_date_publishing ON _article USING btree\n(date_publishing);\nCREATE INDEX idx_article_parent_id ON _article USING btree (parent_id);\nCREATE INDEX idx_article_reference ON _article USING btree (reference);\n\n--------------------------------------------\n\nCREATE TABLE \"_blog\" (\n \"id\" bigint NOT NULL DEFAULT\nnextval('element_id_sequence'::regclass),\n \"parent_id\" bigint,\n \"path\" ltree,\n \"data\" text,\n \"date_creation\" timestamp without time zone NOT NULL DEFAULT now(),\n \"date_publishing\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"date_modification\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"counters\" hstore,\n \"reference\" integer NOT NULL DEFAULT\nnextval('_blog_reference_seq'::regclass),\n \"name\" character varying NOT NULL,\n \"main_host\" character varying NOT NULL,\n \"base_host\" character varying NOT NULL,\n \"description\" text,\n \"rating\" integer DEFAULT 0,\n CONSTRAINT \"_blog_pkey\" PRIMARY KEY (id)\n) WITHOUT OIDS;\n\n-- Indexes\n\nCREATE UNIQUE INDEX _blog_pkey ON _blog USING btree (id);\nCREATE INDEX gist_idx_blog_path ON _blog USING gist (path);\nCREATE INDEX idx_blog_base_host ON _blog USING btree (base_host);\nCREATE INDEX idx_blog_date_creation ON _blog USING btree\n(date_creation);\nCREATE INDEX idx_blog_date_modification ON _blog USING btree\n(date_modification);\nCREATE INDEX idx_blog_date_publishing ON _blog USING btree\n(date_publishing);\nCREATE INDEX idx_blog_main_host ON _blog USING btree (main_host);\nCREATE INDEX idx_blog_name ON _blog USING btree (name);\nCREATE INDEX idx_blog_parent_id ON _blog USING btree (parent_id);\nCREATE INDEX idx_blog_rating ON _blog USING btree (rating);\nCREATE INDEX idx_blog_reference ON _blog USING btree (reference);\n\n--------------------------------------------\n\nCREATE TABLE \"_comment\" (\n \"id\" bigint NOT NULL DEFAULT\nnextval('element_id_sequence'::regclass),\n \"parent_id\" bigint,\n \"path\" ltree,\n \"data\" text,\n \"date_creation\" timestamp without time zone NOT NULL DEFAULT now(),\n \"date_publishing\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"date_modification\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"counters\" hstore,\n \"reference\" integer NOT NULL DEFAULT\nnextval('_comment_reference_seq'::regclass),\n \"text\" text,\n CONSTRAINT \"_comment_pkey\" PRIMARY KEY (id)\n) WITHOUT OIDS;\n\n-- Indexes\n\nCREATE UNIQUE INDEX _comment_pkey ON _comment USING btree (id);\nCREATE INDEX gist_idx_comment_path ON _comment USING gist (path);\nCREATE INDEX idx_comment_date_creation ON _comment USING btree\n(date_creation);\nCREATE INDEX idx_comment_date_modification ON _comment USING btree\n(date_modification);\nCREATE INDEX idx_comment_date_publishing ON _comment USING btree\n(date_publishing);\nCREATE INDEX idx_comment_parent_id ON _comment USING btree (parent_id);\nCREATE INDEX idx_comment_reference ON _comment USING btree 
(reference);\n\n--------------------------------------------\n\nCREATE TABLE \"_relation\" (\n \"id\" bigint NOT NULL DEFAULT\nnextval('element_id_sequence'::regclass),\n \"parent_id\" bigint,\n \"path\" ltree,\n \"data\" text,\n \"date_creation\" timestamp without time zone NOT NULL DEFAULT now(),\n \"date_publishing\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"date_modification\" timestamp without time zone NOT NULL DEFAULT\nnow(),\n \"counters\" hstore,\n \"destination_id\" bigint NOT NULL,\n CONSTRAINT \"_relation_pkey\" PRIMARY KEY (id)\n) WITHOUT OIDS;\n\n-- Indexes\n\nCREATE UNIQUE INDEX _relation_pkey ON _relation USING btree (id);\nCREATE INDEX gist_idx_relation_path ON _relation USING gist (path);\nCREATE INDEX idx_relation_date_creation ON _relation USING btree\n(date_creation);\nCREATE INDEX idx_relation_date_modification ON _relation USING btree\n(date_modification);\nCREATE INDEX idx_relation_date_publishing ON _relation USING btree\n(date_publishing);\nCREATE INDEX idx_relation_destination_id ON _relation USING btree\n(destination_id);\nCREATE INDEX idx_relation_parent_id ON _relation USING btree\n(parent_id);\n\n--------------------------------------------\n\nCREATE TABLE \"_entity_has_element\" (\n \"element_id\" bigint NOT NULL,\n \"entity_id\" bigint NOT NULL,\n \"role_id\" bigint NOT NULL,\n CONSTRAINT \"_entity_has_element_pkey\" PRIMARY KEY (element_id,\nentity_id, role_id)\n) WITHOUT OIDS;\n\n-- Indexes\n\nCREATE UNIQUE INDEX _entity_has_element_pkey ON _entity_has_element\nUSING btree (element_id, entity_id, role_id);\nCREATE INDEX idx_element_id ON _entity_has_element USING btree\n(element_id);\nCREATE INDEX idx_entity_id ON _entity_has_element USING btree\n(entity_id);\n\n--------------------------------------------\n\nCREATE TABLE \"_user\" (\n \"id\" bigint NOT NULL DEFAULT\nnextval('entity_id_sequence'::regclass),\n \"is_group\" boolean,\n \"data\" text,\n \"site_id\" bigint,\n \"date_inscription\" date NOT NULL DEFAULT now(),\n \"reference\" integer NOT NULL DEFAULT\nnextval('_user_reference_seq'::regclass),\n \"login\" character varying,\n \"passwd\" character varying NOT NULL,\n \"nickname\" character varying,\n CONSTRAINT \"_user_pkey\" PRIMARY KEY (id)\n) WITHOUT OIDS;\n\n-- Indexes\n\nCREATE UNIQUE INDEX _user_pkey ON _user USING btree (id);\nCREATE INDEX idx_user_login ON _user USING btree (\"login\");\nCREATE INDEX idx_user_nickname ON _user USING btree (nickname);\nCREATE INDEX idx_user_reference ON _user USING btree (reference);\n\n\n\n",
"msg_date": "Mon, 14 Jan 2008 15:18:51 +0100",
"msg_from": "Yannick Le =?ISO-8859-1?Q?Gu=E9dart?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seq scans on indexed columns."
},
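A generic diagnostic, offered here as a hedged sketch rather than something the replies below prescribe: temporarily forbid sequential scans in one session and compare the EXPLAIN ANALYZE output. If the forced index plan is no faster, the sequential scans were the cheaper choice after all.

SET enable_seqscan = off;            -- session-level experiment only, never in production
EXPLAIN ANALYZE <the query above>;   -- placeholder for the full query from the post
RESET enable_seqscan;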
{
"msg_contents": "Yannick Le Gu�dart <yannick 'at' over-blog.com> writes:\n\n> \tGreetings,\n>\n> \tI was trying to get informations on #portgresql about a query plan I\n> think is quite strange, and I was said to post on this list. I hope my\n> mail will be clear enough. I have included the query, the query plan,\n> and the table definitions. I just don't understand the \"Seq Scan\" on\n> fileds that are indexed.\n\nThis is a FAQ entry (curious that the people there didn't direct\nyou to the FAQ first):\n\nhttp://www.postgresql.org/docs/faqs.FAQ.html#item4.6\n\nIt's possible that more than 800,000 rows is not a \"small\npercentage of the rows in the table\", and also there's the fact\nthat \"indexes are normally not used (..) to perform joins\".\n\nAlso, your database may not be analyzed, or not properly tuned\n(for example, it seems that the default random_page_cost of 4 is\ntoo large for current disks - at least for us, we've found that 2\nis more correct)\n\n[...]\n\n> -> Seq Scan on _user (cost=0.00..205537.72 rows=806972 width=24)\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "Mon, 14 Jan 2008 15:34:36 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scans on indexed columns."
},
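A sketch of the random_page_cost experiment mentioned above; 2 is simply the value the reply reports working well on its own hardware, so treat it as a starting point for testing, not a recommendation:

ANALYZE;                                        -- make sure statistics are current first
SET random_page_cost = 2;                       -- per-session experiment
EXPLAIN ANALYZE <the query from the original post>;
-- only move the setting into postgresql.conf once the effect is confirmed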
{
"msg_contents": "Yannick Le Gu�dart wrote:\n> \tGreetings,\n> \n> \tI was trying to get informations on #portgresql about a query plan I\n> think is quite strange, and I was said to post on this list. I hope my\n> mail will be clear enough. I have included the query, the query plan,\n> and the table definitions. I just don't understand the \"Seq Scan\" on\n> fileds that are indexed.\n> \n> \tThanks in advance\n> \n> \tYannick\n\nFirst off what sort of response times are you getting with the query? or \nare you just after an understanding of why it plans that way?\n\npg version?\n\nWhat sort of row counts do you have in the tables?\nSome of the steps show it expecting to get over 2M rows being returned.\n\nHow many rows are returned without the limit 5? Is it close to what you \nwould expect?\n\n\n\n From what I see the first seq scan (role_id=5) can't use an index as \nrole_id is only the third element of an index, it would need to be \nmatching all three on the condition to use it. But I don't expect that \nto make much difference (time wise) if any.\n\nThe first nested loop has the biggest cost estimate difference and I \nwould expect that to be closer to what you are looking for.\n\n\nMy guess is that the joins are not producing the result you expect - \nstart without the joins (and the joined columns) and add one at a time \nto see which is causing the problem. Then find a better way to join the \ndata.\n\n\nhttp://www.postgresql.org/docs/8.2/interactive/explicit-joins.html\nmay give you some ideas...\n\n\n> \tHere we go :\n> \n> \n> --------------------------------\n> \n> QUERY \n> \n> SELECT \n> \t_article.*, \n> \t( \n> \t\tSELECT COUNT (id) \n> \t\tFROM _comment \n> \t\tWHERE parent_id = _article.id \n> \t) AS nb_comments, \n> \t_blog.id as blog_id, \n> \t_blog.name as blog_name,\n> \txpath_string(_blog.data,'/data/title') as blog_title, \n> \t_blog.reference as blog_ref, \n> \t_blog.main_host as blog_main_host, \n> \t_user.id as user_id, \n> \t_user.reference as user_ref, \n> \t_user.nickname as user_nickname\n> \t\n> FROM _article \n> \n> INNER JOIN _blog \n> \tON _article.path <@ _blog.path \n> INNER JOIN _entity_has_element \n> \tON _entity_has_element.element_id = _blog.id \n> INNER JOIN _user \n> \tON _user.id = _entity_has_element.entity_id\n> \tAND _entity_has_element.role_id = 5 \n> \t\n> WHERE _article.id IN \n> \t( \n> \t\tSELECT _relation.destination_id AS id \n> \t\tFROM _relation \n> \t\tWHERE _relation.parent_id = 1008109112\n> \t) \n> AND _article.date_publishing < now () \n> \n> ORDER BY nb_comments DESC \n> OFFSET 0 \n> LIMIT 5\n> \n> --------------------------------\n> QUERY PLAN\n> Limit (cost=378253213.46..378253213.47 rows=5 width=1185)\n> -> Sort (cost=378253213.46..378260027.29 rows=2725530 width=1185)\n> Sort Key: (subplan)\n> -> Hash Join (cost=4907270.58..375534454.12 rows=2725530 width=1185)\n> Hash Cond: (_entity_has_element.element_id = _blog.id)\n> -> Hash Join (cost=220747.87..260246.60 rows=543801 width=32)\n> Hash Cond: (_entity_has_element.entity_id = _user.id)\n> -> Seq Scan on _entity_has_element (cost=0.00..19696.96 rows=543801 width=16)\n> Filter: (role_id = 5)\n> -> Hash (cost=205537.72..205537.72 rows=806972 width=24)\n> -> Seq Scan on _user (cost=0.00..205537.72 rows=806972 width=24)\n> -> Hash (cost=4309146.55..4309146.55 rows=2388333 width=1161)\n> -> Nested Loop (cost=8782.56..4309146.55 rows=2388333 width=1161)\n> -> Nested Loop (cost=8689.45..43352.96 rows=5012 width=1073)\n> -> HashAggregate (cost=8689.45..8740.05 rows=5060 
width=8)\n> -> Bitmap Heap Scan on _relation (cost=124.98..8674.40 rows=6021 width=8)\n> Recheck Cond: (parent_id = 1008109112)\n> -> Bitmap Index Scan on idx_relation_parent_id (cost=0.00..123.47 rows=6021 width=0)\n> Index Cond: (parent_id = 1008109112)\n> -> Index Scan using _article_pkey on _article (cost=0.00..6.83 rows=1 width=1073)\n> Index Cond: (_article.id = _relation.destination_id)\n> Filter: (date_publishing < now())\n> -> Bitmap Heap Scan on _blog (cost=93.11..845.15 rows=477 width=114)\n> Recheck Cond: (_article.path <@ _blog.path)\n> -> Bitmap Index Scan on gist_idx_blog_path (cost=0.00..92.99 rows=477 width=0)\n> Index Cond: (_article.path <@ _blog.path)\n> SubPlan\n> -> Aggregate (cost=135.81..135.82 rows=1 width=8)\n> -> Index Scan using idx_comment_parent_id on _comment (cost=0.00..135.61 rows=79 width=8)\n> Index Cond: (parent_id = $0)\n> \n> --------------------\n> \n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Tue, 15 Jan 2008 05:36:42 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scans on indexed columns."
}
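Following the role_id observation above: the composite primary key (element_id, entity_id, role_id) cannot serve a filter on role_id alone, so a dedicated index is the obvious thing to try. A hedged sketch, noting that with roughly 540,000 rows estimated for role_id = 5 the planner may quite reasonably keep the sequential scan anyway:

CREATE INDEX idx_entity_has_element_role_id
    ON _entity_has_element USING btree (role_id);
ANALYZE _entity_has_element;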
] |
[
{
"msg_contents": "Hi,\n There will be some flames i suppose.\n Well I've a normalized database.. \n For instance:\n\n create table Y ( pk, data... );\n create table Z ( pk , data... );\n\n create table X ( char, references Y, references Z);\n\n SELECT * from X;\n\n Now I want to make a listing of the result set from X.\n If there are references to Z or Y (not null refs), I want to display \nthat data too.\n\n Normally I would SELECT, to get that data, not in my case.\n Nearly all queries will be SELECTs, no UPDATEs or INSERTs, so need to \noptimize that case.\n\n The dirty little denormalization would look like this:\n\n create table X ( char, ref. to Y, ref. to Z, StoreY Y , StoreZ Z);\n\n On insert or update of Z or Y, I would update these two (StoreY, \nStoreZ) columns by RULE or TRIGGER..\n I know this is not nice etc.. Codd would sue for this, but in my case \nperformance over beauty is ok.\n I'm looking for something like UPDATE X set StoreY=(SELECT * FROM Y \nWHERE pk=4) WHERE foreignID2Y = 4;\n\n Is there a away to accomplish this straightforward in a single \nstatement without doing loops and stuff in a serverside procedure?\n\n Thanks in advance,\n Patric\n\n \n \n",
"msg_date": "Mon, 14 Jan 2008 19:19:36 +0100",
"msg_from": "Patric <[email protected]>",
"msg_from_op": true,
"msg_subject": "Saving result set of SELECT to table column"
},
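A sketch of the single-statement forms the question is reaching for, assuming StoreY/StoreZ are declared with the row types of Y and Z as in the pseudo-schema above; the lowercase identifiers and the foreign-key column names are illustrative, not from a real schema:

-- backfill every matching row using a whole-row reference to y
UPDATE x SET storey = y FROM y WHERE y.pk = x.foreignid2y;

-- or, for a single key, select the whole row rather than SELECT *
UPDATE x SET storey = (SELECT y FROM y WHERE pk = 4) WHERE foreignid2y = 4;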
{
"msg_contents": "On 2008-01-14 Patric wrote:\n> Well I've a normalized database.. \n> For instance:\n> \n> create table Y ( pk, data... );\n> create table Z ( pk , data... );\n> \n> create table X ( char, references Y, references Z);\n> \n> SELECT * from X;\n> \n> Now I want to make a listing of the result set from X.\n> If there are references to Z or Y (not null refs), I want to display \n> that data too.\n> \n> Normally I would SELECT, to get that data, not in my case.\n> Nearly all queries will be SELECTs, no UPDATEs or INSERTs, so need to \n> optimize that case.\n> \n> The dirty little denormalization would look like this:\n> \n> create table X ( char, ref. to Y, ref. to Z, StoreY Y , StoreZ Z);\n> \n> On insert or update of Z or Y, I would update these two (StoreY, \n> StoreZ) columns by RULE or TRIGGER..\n> I know this is not nice etc.. Codd would sue for this, but in my case \n> performance over beauty is ok.\n> I'm looking for something like UPDATE X set StoreY=(SELECT * FROM Y \n> WHERE pk=4) WHERE foreignID2Y = 4;\n> \n> Is there a away to accomplish this straightforward in a single \n> statement without doing loops and stuff in a serverside procedure?\n\nLooks to me like you want to (LEFT|RIGHT) OUTER JOIN the tables.\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n",
"msg_date": "Mon, 14 Jan 2008 19:38:38 +0100",
"msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Saving result set of SELECT to table column"
},
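Spelling the outer-join suggestion out against the pseudo-schema from the original post; the y_ref/z_ref column names are assumptions, since the post never names the referencing columns:

SELECT x.*, y.data AS y_data, z.data AS z_data
  FROM x
  LEFT JOIN y ON y.pk = x.y_ref
  LEFT JOIN z ON z.pk = x.z_ref;

With indexes on y.pk and z.pk this read-mostly lookup is usually cheap enough that the denormalized copy columns may not be needed at all.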
{
"msg_contents": "Hi Patric,\n\nThis doesn't seem to be a question pertaining to the PERFORM queue.\n\nIf I understand you correctly, this should solve your problems, without the\nneed for any RULES / TRIGGERS.\n\nCREATE TABLE y\n(\n y1 int4 NOT NULL,\n y2 varchar,\n CONSTRAINT a PRIMARY KEY (y1)\n)\n\n\nCREATE TABLE z\n(\n z1 int4 NOT NULL,\n z2 varchar,\n CONSTRAINT zz PRIMARY KEY (z1)\n)\n\n\n\nCREATE TABLE x\n(\n x1 int4 NOT NULL,\n xy1 int4 NOT NULL,\n xz1 int4 NOT NULL,\n xy2 varchar,\n xz2 varchar,\n CONSTRAINT xa PRIMARY KEY (x1),\n CONSTRAINT xy1 FOREIGN KEY (xy1)\n REFERENCES y (y1) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT xz1 FOREIGN KEY (xz1)\n REFERENCES z (z1) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\n\nINSERT INTO x (x1, xy1, xz1, xy2, xz2)\nSELECT 1, y1, z1, y2, z2\nFROM y, z\nWHERE y1 = 1\n AND z1 = 1;\n\n\n*Robins*\n\nOn Jan 14, 2008 11:49 PM, Patric <[email protected]> wrote:\n\n> Hi,\n> There will be some flames i suppose.\n> Well I've a normalized database..\n> For instance:\n>\n> create table Y ( pk, data... );\n> create table Z ( pk , data... );\n>\n> create table X ( char, references Y, references Z);\n>\n> SELECT * from X;\n>\n> Now I want to make a listing of the result set from X.\n> If there are references to Z or Y (not null refs), I want to display\n> that data too.\n>\n> Normally I would SELECT, to get that data, not in my case.\n> Nearly all queries will be SELECTs, no UPDATEs or INSERTs, so need to\n> optimize that case.\n>\n> The dirty little denormalization would look like this:\n>\n> create table X ( char, ref. to Y, ref. to Z, StoreY Y , StoreZ Z);\n>\n> On insert or update of Z or Y, I would update these two (StoreY,\n> StoreZ) columns by RULE or TRIGGER..\n> I know this is not nice etc.. Codd would sue for this, but in my case\n> performance over beauty is ok.\n> I'm looking for something like UPDATE X set StoreY=(SELECT * FROM Y\n> WHERE pk=4) WHERE foreignID2Y = 4;\n>\n> Is there a away to accomplish this straightforward in a single\n> statement without doing loops and stuff in a serverside procedure?\n>\n> Thanks in advance,\n> Patric\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nHi Patric,\n\nThis doesn't seem to be a question pertaining to the PERFORM queue.\nIf I understand you correctly, this should solve your problems, without the need for any RULES / TRIGGERS.\n\nCREATE TABLE y\n(\n y1 int4 NOT NULL,\n y2 varchar,\n CONSTRAINT a PRIMARY KEY (y1)\n) \n\n\nCREATE TABLE z\n(\n z1 int4 NOT NULL,\n z2 varchar,\n CONSTRAINT zz PRIMARY KEY (z1)\n) \n\n\n\nCREATE TABLE x\n(\n x1 int4 NOT NULL,\n xy1 int4 NOT NULL,\n xz1 int4 NOT NULL,\n xy2 varchar,\n xz2 varchar,\n CONSTRAINT xa PRIMARY KEY (x1),\n CONSTRAINT xy1 FOREIGN KEY (xy1)\n REFERENCES y (y1) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT xz1 FOREIGN KEY (xz1)\n REFERENCES z (z1) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n) \n\n\nINSERT INTO x (x1, xy1, xz1, xy2, xz2)\nSELECT 1, y1, z1, y2, z2\nFROM y, z\nWHERE y1 = 1\n AND z1 = 1;\n \n\nRobinsOn Jan 14, 2008 11:49 PM, Patric <[email protected]> wrote:\nHi, There will be some flames i suppose. Well I've a normalized database.. For instance: create table Y ( pk, data... ); create table Z ( pk , data... ); create table X ( char, references Y, references Z);\n SELECT * from X; Now I want to make a listing of the result set from X. 
If there are references to Z or Y (not null refs), I want to displaythat data too. Normally I would SELECT, to get that data, not in my case.\n Nearly all queries will be SELECTs, no UPDATEs or INSERTs, so need tooptimize that case. The dirty little denormalization would look like this: create table X ( char, ref. to Y, ref. to Z, StoreY Y , StoreZ Z);\n On insert or update of Z or Y, I would update these two (StoreY,StoreZ) columns by RULE or TRIGGER.. I know this is not nice etc.. Codd would sue for this, but in my caseperformance over beauty is ok.\n I'm looking for something like UPDATE X set StoreY=(SELECT * FROM YWHERE pk=4) WHERE foreignID2Y = 4; Is there a away to accomplish this straightforward in a singlestatement without doing loops and stuff in a serverside procedure?\n Thanks in advance, Patric---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster",
"msg_date": "Tue, 15 Jan 2008 09:00:27 +0530",
"msg_from": "\"Robins Tharakan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Saving result set of SELECT to table column"
}
] |
[
{
"msg_contents": "All,\nI have been asked to move this thread to the performance list. Below is\nthe full discussion to this point.\n\nDoug Knight\nWSI Corp\nAndover, MA\n\n-------- Forwarded Message --------\nFrom: Doug Knight <[email protected]>\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [HACKERS] Tuning Postgresql on Windows XP Pro 32 bit\nDate: Tue, 15 Jan 2008 09:09:16 -0500\n\nWe tried reducing the memory footprint of the postgres processes, via\nshared_buffers (from 30000 on Linux to 3000 on Windows), max_fsm_pages\n(from 2000250 on Linux to 100000 on Windows), max_fsm_relations (from\n20000 on Linux to 5000 on Windows), and max_connections (from 222 on\nLinux to 100 on Windows). Another variable we played with was\neffective_cache_size (174000 on Linux, 43700 on Windows). None of these\nreduced memory usage, or improved performance, significantly. We still\nsee the high page fault rate too. Other things we tried were reducing\nthe number of WAL buffers, and changing the wal_sync_method to\nopendata_sync, all with minimal effect. I've attached the latest version\nof our Windows postgresql.conf file.\n\nDoug\n\n\n\nOn Mon, 2008-01-07 at 19:49 +0500, Usama Dar wrote: \n\n> Doug Knight wrote:\n> > We are running the binary distribution, version 8.2.5-1, installed on \n> > Windows XP Pro 32 bit with SP2. We typically run postgres on linux, \n> > but have a need to run it under windows as well. Our typical admin \n> > tuning for postgresql.conf doesn't seem to be as applicable for windows.\n> \n> \n> So what have you tuned so far? what are your current postgresql settings \n> that you have modified? What are your system specs for Hardware, RAM , \n> CPU etc?\n> \n> \n> On Sun, 2008-01-06 at 18:23 +0500, Usama Dar wrote:\n> \n> \n> \n> On Jan 3, 2008 8:57 PM, Doug Knight <[email protected]> wrote:\n> \n> All,\n> Is there a place where I can find information about tuning\n> postgresql running on a Windows XP Pro 32 bit system? I\n> installed using the binary installer. I am seeing a high page\n> fault delta and total page faults for one of the postgresql\n> processes. Any help would be great. \n> \n> \n> \n> \n> Which version of postgres? the process you are seeing this for is a\n> user process? \n> \n> \n> \n> -- \n> Usama Munir Dar http://www.linkedin.com/in/usamadar\n> Consultant Architect\n> Cell:+92 321 5020666\n> Skype: usamadar \n\n\n\n\n\n\n\nAll,\nI have been asked to move this thread to the performance list. Below is the full discussion to this point.\n\nDoug Knight\nWSI Corp\nAndover, MA\n\n-------- Forwarded Message --------\nFrom: Doug Knight <[email protected]>\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [HACKERS] Tuning Postgresql on Windows XP Pro 32 bit\nDate: Tue, 15 Jan 2008 09:09:16 -0500\n\nWe tried reducing the memory footprint of the postgres processes, via shared_buffers (from 30000 on Linux to 3000 on Windows), max_fsm_pages (from 2000250 on Linux to 100000 on Windows), max_fsm_relations (from 20000 on Linux to 5000 on Windows), and max_connections (from 222 on Linux to 100 on Windows). Another variable we played with was effective_cache_size (174000 on Linux, 43700 on Windows). None of these reduced memory usage, or improved performance, significantly. We still see the high page fault rate too. Other things we tried were reducing the number of WAL buffers, and changing the wal_sync_method to opendata_sync, all with minimal effect. 
I've attached the latest version of our Windows postgresql.conf file.\n\nDoug\n\n\n\nOn Mon, 2008-01-07 at 19:49 +0500, Usama Dar wrote: \n\n\nDoug Knight wrote:\n> We are running the binary distribution, version 8.2.5-1, installed on \n> Windows XP Pro 32 bit with SP2. We typically run postgres on linux, \n> but have a need to run it under windows as well. Our typical admin \n> tuning for postgresql.conf doesn't seem to be as applicable for windows.\n\n\nSo what have you tuned so far? what are your current postgresql settings \nthat you have modified? What are your system specs for Hardware, RAM , \nCPU etc?\n\n\nOn Sun, 2008-01-06 at 18:23 +0500, Usama Dar wrote:\n\n\n\nOn Jan 3, 2008 8:57 PM, Doug Knight <[email protected]> wrote:\n\nAll,\nIs there a place where I can find information about tuning postgresql running on a Windows XP Pro 32 bit system? I installed using the binary installer. I am seeing a high page fault delta and total page faults for one of the postgresql processes. Any help would be great. \n\n\n\n\nWhich version of postgres? the process you are seeing this for is a user process? \n\n\n\n-- \nUsama Munir Dar http://www.linkedin.com/in/usamadar\nConsultant Architect\nCell:+92 321 5020666\nSkype: usamadar",
"msg_date": "Tue, 15 Jan 2008 12:05:20 -0500",
"msg_from": "Doug Knight <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning Postgresql on Windows XP Pro 32 bit [was on HACKERS list]"
},
{
"msg_contents": ">>> On Tue, Jan 15, 2008 at 11:05 AM, in message\n<[email protected]>, Doug Knight\n<[email protected]> wrote: \n \n> We tried reducing the memory footprint of the postgres processes, via\n> shared_buffers (from 30000 on Linux to 3000 on Windows),\n \nI would never go below 10000. 20000 to 30000 is a good start.\n \n> max_fsm_pages (from 2000250 on Linux to 100000 on Windows)\n> max_fsm_relations (from 20000 on Linux to 5000 on Windows)\n \nFigure out what you need and use that. Low values can cause bloat.\nCheck the output of VACUUM ANALYZE VERBOSE (for the database)\nfor a line like this:\n \nINFO: free space map contains 717364 pages in 596 relations\n \n> and max_connections (from 222 on Linux to 100 on Windows).\n \nIf you don't need more than 100, good.\n \n> Another variable we played with was effective_cache_size\n> (174000 on Linux, 43700 on Windows).\n \nFigure out how much space is available to cache data,\ncounting both the shared buffers and the Windows cache.\nThis setting has no affect on memory usage, just the planner.\n \n> On Mon, 2008-01-07 at 19:49 +0500, Usama Dar wrote: \n> \n>> Doug Knight wrote:\n>> > We are running the binary distribution, version 8.2.5-1, installed on \n>> > Windows XP Pro 32 bit with SP2. We typically run postgres on linux, \n>> > but have a need to run it under windows as well. Our typical admin \n>> > tuning for postgresql.conf doesn't seem to be as applicable for windows.\n>> \n>> So what have you tuned so far? what are your current postgresql settings \n>> that you have modified? What are your system specs for Hardware, RAM , \n>> CPU etc?\n \nI would add that you should post what problems you're seeing.\n \n>> Is there a place where I can find information about tuning\n>> postgresql running on a Windows XP Pro 32 bit system? I\n>> installed using the binary installer. I am seeing a high page\n>> fault delta and total page faults for one of the postgresql\n>> processes.\n \nAre those \"hard\" or \"soft\" page faults. There's a big difference.\nReview Windows docs for descriptions and how to check.\nYou may not be able to do much about soft page faults,\nand they may not have much impact on your performance.\n \nWe abandoned Windows for database servers after testing\nidentical processing on identical hardware -- Linux was twice\nas fast. If you really need to run under Windows, you may need\nto adjust your performance expectations.\n \n-Kevin\n \n\n",
"msg_date": "Fri, 18 Jan 2008 16:50:34 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgresql on Windows XP Pro 32 bit [was on HACKERS list]"
}
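A sketch of the free-space-map check described above; the INFO line is the one 8.1/8.2 print at the end of a database-wide verbose vacuum, and max_fsm_pages should comfortably exceed the reported figure (the numbers shown in comments are illustrative only):

VACUUM ANALYZE VERBOSE;
-- e.g.  INFO:  free space map contains 717364 pages in 596 relations
-- then, in postgresql.conf:
-- max_fsm_pages = 1000000
-- max_fsm_relations = 1000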
] |
[
{
"msg_contents": "Hi\n\nI've noticed a strange pattern in our logs.\n\nWe only log anything that takes longer then 5 seconds.\n\nIf I tail the logs, I see a steady stream of functions that took longer \nthen 5 seconds. But every now and then I see nothing for 3 minutes and \nafter that a whole bunch (about 20 - 30) queries all finishing at the \nsame second.\n\nDuring this 3 minutes of nothing being logged, the db is still active \nand I can access our website\n\nLooks to me like \"something\" is \"locking\" and causing requests to be \ndelayed. Once this \"lock\" unlocks, everything goes through.\n\nSorry for the bad description, can anybody help? Where do I start looking?\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Wed, 16 Jan 2008 12:42:11 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange pauses"
},
{
"msg_contents": "Adrian Moisey wrote:\n> \n> If I tail the logs, I see a steady stream of functions that took longer \n> then 5 seconds. But every now and then I see nothing for 3 minutes and \n> after that a whole bunch (about 20 - 30) queries all finishing at the \n> same second.\n\nSearch this list for references to \"checkpoints\". If you run \nvmstat/iostat for a bit you should see bursts of disk activity at those \ntimes.\n\nCounter-intuitively you probably want *more* checkpoints (with less data \nto write out each time), but see the list archives for discussion. Also \ncheck the manuals for some background info.\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 16 Jan 2008 10:50:23 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> Adrian Moisey wrote:\n>> If I tail the logs, I see a steady stream of functions that took longer \n>> then 5 seconds. But every now and then I see nothing for 3 minutes and \n>> after that a whole bunch (about 20 - 30) queries all finishing at the \n>> same second.\n\n> Search this list for references to \"checkpoints\". If you run \n> vmstat/iostat for a bit you should see bursts of disk activity at those \n> times.\n\nA different line of thought is that something is taking out an exclusive\nlock on a table that all these queries need. If vmstat doesn't show any\nburst of activity during these pauses, look into pg_locks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Jan 2008 10:23:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses "
},
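If the pauses turn out not to line up with disk activity, the lock theory is easy to check from SQL while a pause is happening. A small sketch using the 8.2-era view columns (pg_stat_activity.procpid and current_query; newer releases rename these to pid and query):

    SELECT l.locktype, l.relation::regclass, l.mode, l.granted,
           a.procpid, a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;

Any rows returned are sessions waiting on an ungranted lock, which is usually enough to spot a long-lived exclusive lock on a shared table.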
{
"msg_contents": "On Jan 16, 2008 4:42 AM, Adrian Moisey <[email protected]> wrote:\n> Hi\n>\n> I've noticed a strange pattern in our logs.\n>\n> We only log anything that takes longer then 5 seconds.\n>\n> If I tail the logs, I see a steady stream of functions that took longer\n> then 5 seconds. But every now and then I see nothing for 3 minutes and\n> after that a whole bunch (about 20 - 30) queries all finishing at the\n> same second.\n>\n> During this 3 minutes of nothing being logged, the db is still active\n> and I can access our website\n>\n> Looks to me like \"something\" is \"locking\" and causing requests to be\n> delayed. Once this \"lock\" unlocks, everything goes through.\n\nHmmm. Are these queries actually all finishing at the same time or\njust showing up in the logs at the same time. Just thinking there\nmight be some kind of issue going on with logging. Are you using\nsyslog or postgresql stderr logging?\n",
"msg_date": "Wed, 16 Jan 2008 09:30:48 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "On Wed, 16 Jan 2008, Richard Huxton wrote:\n\n> Search this list for references to \"checkpoints\". If you run vmstat/iostat \n> for a bit you should see bursts of disk activity at those times.\n\nThe most straightforward way to prove or disprove that the slow queries \nline up with checkpoints is to set to checkpoint_warning to a high value \n(3600 should work), which should log every checkpoint, and then see if \nthey show up at the same time in the logs.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 16 Jan 2008 10:37:58 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "Hi\n\n>> Search this list for references to \"checkpoints\". If you run \n>> vmstat/iostat for a bit you should see bursts of disk activity at \n>> those times.\n> \n> The most straightforward way to prove or disprove that the slow queries \n> line up with checkpoints is to set to checkpoint_warning to a high value \n> (3600 should work), which should log every checkpoint, and then see if \n> they show up at the same time in the logs.\n\nYou guys were spot on. During these pauses the IO goes up high.\n\nI've got the following set:\ncheckpoint_timeout = 5min\ncheckpoint_warning = 3600s\n\nlog_min_messages = info\n\nBut I see nothing in the logs about checkpoints\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Thu, 17 Jan 2008 08:50:24 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange pauses"
},
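One likely reason nothing shows up in the logs: checkpoint_warning only writes a message when checkpoints triggered by filling the xlog segments arrive closer together than the given interval, so checkpoints driven purely by checkpoint_timeout stay silent. The direct switch for logging every checkpoint only exists from 8.3 on:

    log_checkpoints = on    # 8.3 and later; logs when each checkpoint starts and completes

On earlier releases the practical alternative is to keep watching vmstat/iostat and see whether the I/O bursts recur on the checkpoint_timeout boundary (every 5 minutes or so with the settings above).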
{
"msg_contents": "On Jan 17, 2008 12:50 AM, Adrian Moisey <[email protected]> wrote:\n> Hi\n>\n> >> Search this list for references to \"checkpoints\". If you run\n> >> vmstat/iostat for a bit you should see bursts of disk activity at\n> >> those times.\n> >\n> > The most straightforward way to prove or disprove that the slow queries\n> > line up with checkpoints is to set to checkpoint_warning to a high value\n> > (3600 should work), which should log every checkpoint, and then see if\n> > they show up at the same time in the logs.\n>\n> You guys were spot on. During these pauses the IO goes up high.\n\nYou need to tune your background writer process. See this link:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n",
"msg_date": "Thu, 17 Jan 2008 07:44:28 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
},
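For reference, the knobs this kind of tuning revolves around are roughly the following. The values shown are only a hypothetical starting point to illustrate the settings, not recommendations; the bgwriter_all_* parameters exist on 8.2 and were removed in 8.3, where checkpoint spreading is handled by checkpoint_completion_target instead:

    bgwriter_delay = 200ms            # how often the background writer wakes up
    bgwriter_all_percent = 1.0        # scan a larger share of the buffer pool each round (8.2)
    bgwriter_all_maxpages = 200       # allow more dirty buffers to be written per round (8.2)
    checkpoint_segments = 16          # space xlog-driven checkpoints further apart
    #checkpoint_completion_target = 0.7   # 8.3 and later: spread checkpoint writes over the interval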
{
"msg_contents": "Hi\n\n>> If I tail the logs, I see a steady stream of functions that took \n>> longer then 5 seconds. But every now and then I see nothing for 3 \n>> minutes and after that a whole bunch (about 20 - 30) queries all \n>> finishing at the same second.\n> \n> Search this list for references to \"checkpoints\". If you run \n> vmstat/iostat for a bit you should see bursts of disk activity at those \n> times.\n> \n> Counter-intuitively you probably want *more* checkpoints (with less data \n> to write out each time), but see the list archives for discussion. Also \n> check the manuals for some background info.\n\nWe use a lot of checkpoints in our database, we don't care about these \nmuch. We think this is causing the checkpoint to write a lot of data at \ncheckpoint time. Can we stop this behavior ?\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Fri, 18 Jan 2008 12:58:06 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "Hi\n\n>>> If I tail the logs, I see a steady stream of functions that took \n>>> longer then 5 seconds. But every now and then I see nothing for 3 \n>>> minutes and after that a whole bunch (about 20 - 30) queries all \n>>> finishing at the same second.\n>>\n>> Search this list for references to \"checkpoints\". If you run \n>> vmstat/iostat for a bit you should see bursts of disk activity at \n>> those times.\n>>\n>> Counter-intuitively you probably want *more* checkpoints (with less \n>> data to write out each time), but see the list archives for \n>> discussion. Also check the manuals for some background info.\n> \n> We use a lot of checkpoints in our database, we don't care about these \n\nWe use a lot of temporary tables in our database, sorry.\n\n> much. We think this is causing the checkpoint to write a lot of data at \n> checkpoint time. Can we stop this behavior ?\n> \n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Fri, 18 Jan 2008 14:03:03 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "Adrian Moisey wrote:\n> Hi\n>\n>>>> If I tail the logs, I see a steady stream of functions that took longer \n>>>> then 5 seconds. But every now and then I see nothing for 3 minutes and \n>>>> after that a whole bunch (about 20 - 30) queries all finishing at the \n>>>> same second.\n>>>\n>>> Search this list for references to \"checkpoints\". If you run \n>>> vmstat/iostat for a bit you should see bursts of disk activity at those \n>>> times.\n>>>\n>>> Counter-intuitively you probably want *more* checkpoints (with less data \n>>> to write out each time), but see the list archives for discussion. Also \n>>> check the manuals for some background info.\n>>\n>> We use a lot of checkpoints in our database, we don't care about these \n>\n> We use a lot of temporary tables in our database, sorry.\n>\n>> much. We think this is causing the checkpoint to write a lot of data at \n>> checkpoint time. Can we stop this behavior ?\n\nNo, data written in temp tables do not cause extra I/O on checkpoint.\nCatalog changes due to temp table creation could, but I doubt that's the\nproblem.\n\nPerhaps, if you want to avoid I/O caused by temp tables (but it's not at\ncheckpoint time, so perhaps this has nothing to do with your problem),\nyou could try raising temp_buffers.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 18 Jan 2008 09:41:21 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
},
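temp_buffers can be raised per session without touching the server config, as long as it is set before the session touches its first temporary table. A small hypothetical example (scratch_example is just an illustrative name):

    SET temp_buffers = 4096;   -- 4096 x 8kB = 32MB of session-local buffers for temp tables
    CREATE TEMP TABLE scratch_example AS
        SELECT g AS id FROM generate_series(1, 100000) AS g;

As Alvaro says, this only keeps temp-table pages in session-local memory and is unrelated to what the checkpoint writes out.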
{
"msg_contents": "On Jan 18, 2008 4:58 AM, Adrian Moisey <[email protected]> wrote:\n>\n> We use a lot of checkpoints in our database, we don't care about these\n> much. We think this is causing the checkpoint to write a lot of data at\n> checkpoint time. Can we stop this behavior ?\n\nYou can smooth out the checkpoint behaviour by using that link I\nposted earlier to tune your bgwriter to write things out ahead of when\nthe checkpoint will occurr.\n",
"msg_date": "Fri, 18 Jan 2008 09:36:08 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "Hi\n\n> Perhaps, if you want to avoid I/O caused by temp tables (but it's not at\n> checkpoint time, so perhaps this has nothing to do with your problem),\n> you could try raising temp_buffers.\n\nHow can I find out if temp_buffers is being exceeded ?\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Mon, 21 Jan 2008 09:58:30 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange pauses"
},
{
"msg_contents": "On Jan 21, 2008, at 1:58 AM, Adrian Moisey wrote:\n>> Perhaps, if you want to avoid I/O caused by temp tables (but it's \n>> not at\n>> checkpoint time, so perhaps this has nothing to do with your \n>> problem),\n>> you could try raising temp_buffers.\n>\n> How can I find out if temp_buffers is being exceeded ?\n\nYou could monitor the pgsql_tmp directory under the appropriate \ndatabase directory ($PGDATA/base/oid_number_of_the_database). You \ncould also see how many pages temporary objects in that connection \nare using; you'd have to find the temp schema that your session is \nusing (\\dn pg_temp* from psql), and then\n\nSELECT sum(relpages) FROM pg_class c JOIN pg_namespace n ON \n(c.relnamespace=n.oid) AND n.nspname='pg_temp_blah';\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828",
"msg_date": "Mon, 21 Jan 2008 13:29:52 -0600",
"msg_from": "Decibel! <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange pauses"
}
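A variant of that query which resolves the session's own temp schema automatically, rather than looking it up with \dn, is sketched below. It relies on pg_my_temp_schema(), which may not exist on older releases such as the 8.2-era servers discussed in this thread, so check before relying on it; the manual lookup above always works:

    -- relpages is only refreshed by VACUUM/ANALYZE, so treat this as an estimate
    SELECT coalesce(sum(relpages), 0) * 8 AS temp_table_kb
      FROM pg_class
     WHERE relnamespace = pg_my_temp_schema();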
] |
[
{
"msg_contents": "We had the same situation, and did two things\n\n1. Reduce checkpoint timeout\n2. Reduce quantity of data going into database (nice if it's possible!)\n\n1 alone wasn't enough to eliminate the delays, but it did make each delay small enough that the user interface was only minimally affected. Previously, the delays were causing timeouts in the user interface.\n\nOur symptoms were that the queries finishing \"at the same time\" were appearing in clusters every 5 minutes + some seconds, which happens to be the checkpoint timeout. Seems a new checkpoint timeout is started only after the checkpoint is complete, hence 5 minute plus, rather than exactly 5 minutes.\n\nBrian\n \n\n----- Original Message ----\nFrom: Adrian Moisey <[email protected]>\n\nHi\n\n>> Search this list for references to \"checkpoints\". If you run \n>> vmstat/iostat for a bit you should see bursts of disk activity at \n>> those times.\n> \n> The most straightforward way to prove or disprove that the slow\n queries \n> line up with checkpoints is to set to checkpoint_warning to a high\n value \n> (3600 should work), which should log every checkpoint, and then see\n if \n> they show up at the same time in the logs.\n\nYou guys were spot on. During these pauses the IO goes up high.\n\nI've got the following set:\ncheckpoint_timeout = 5min\ncheckpoint_warning = 3600s\n\nlog_min_messages = info\n\nBut I see nothing in the logs about checkpoints\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n\n---------------------------(end of\n broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that\n your\n message can get through to the mailing list cleanly\n\n\n\n",
"msg_date": "Thu, 17 Jan 2008 01:16:40 -0800 (PST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange pauses"
}
] |
[
{
"msg_contents": "We have a machine that serves as a fileserver and a database server. Our\nserver hosts a raid array of 40 disk drives, attached to two3-ware cards,\none 9640SE-24 and one 9640SE-16. We have noticed that activity on one\ncontroller blocks access on the second controller, not only for disk-IO but\nalso the command line tools which become unresponsive for the inactive\ncontroller. The controllers are sitting in adjacent PCI-express slots on a\nmachine with dual-dual AMD and 16GB of RAM. Has anyone else noticed issues\nlike this? Throughput for either controller is a pretty respectable\n150-200MB/s writing and somewhat faster for reading, but the \"blocking\" is\nproblematic, as the machine is serving multiple purposes.\n\nI know this is off-topic, but I know lots of folks here deal with very large\ndisk arrays; it is hard to get real-world input on machines such as these.\n\n\nThanks,\nSean\n\nWe have a machine that serves as a fileserver and a database server. Our server hosts a raid array of 40 disk drives, attached to two3-ware cards, one 9640SE-24 and one 9640SE-16. We have noticed that activity on one controller blocks access on the second controller, not only for disk-IO but also the command line tools which become unresponsive for the inactive controller. The controllers are sitting in adjacent PCI-express slots on a machine with dual-dual AMD and 16GB of RAM. Has anyone else noticed issues like this? Throughput for either controller is a pretty respectable 150-200MB/s writing and somewhat faster for reading, but the \"blocking\" is problematic, as the machine is serving multiple purposes. \nI know this is off-topic, but I know lots of folks here deal with very large disk arrays; it is hard to get real-world input on machines such as these. \nThanks,Sean",
"msg_date": "Thu, 17 Jan 2008 15:17:08 -0500",
"msg_from": "\"Sean Davis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Jan 17, 2008 2:17 PM, Sean Davis <[email protected]> wrote:\n> We have a machine that serves as a fileserver and a database server. Our\n> server hosts a raid array of 40 disk drives, attached to two3-ware cards,\n> one 9640SE-24 and one 9640SE-16. We have noticed that activity on one\n> controller blocks access on the second controller, not only for disk-IO but\n> also the command line tools which become unresponsive for the inactive\n> controller. The controllers are sitting in adjacent PCI-express slots on a\n> machine with dual-dual AMD and 16GB of RAM. Has anyone else noticed issues\n> like this? Throughput for either controller is a pretty respectable\n> 150-200MB/s writing and somewhat faster for reading, but the \"blocking\" is\n> problematic, as the machine is serving multiple purposes.\n>\n> I know this is off-topic, but I know lots of folks here deal with very large\n> disk arrays; it is hard to get real-world input on machines such as these.\n\nSounds like they're sharing something they shouldn't be. I'm not real\nfamiliar with PCI-express. Aren't those the ones that use up to 16\nchannels for I/O? Can you divide it to 8 and 8 for each PCI-express\nslot in the BIOS maybe, or something like that?\n\nJust a SWAG.\n",
"msg_date": "Thu, 17 Jan 2008 15:07:02 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Thu, 17 Jan 2008, Scott Marlowe wrote:\n\n> On Jan 17, 2008 2:17 PM, Sean Davis <[email protected]> wrote:\n>> two3-ware cards, one 9640SE-24 and one 9640SE-16\n> Sounds like they're sharing something they shouldn't be. I'm not real\n> familiar with PCI-express. Aren't those the ones that use up to 16\n> channels for I/O? Can you divide it to 8 and 8 for each PCI-express\n> slot in the BIOS maybe, or something like that?\n\nI can't find the 9640SE-24/16 anywhere, but presuming these are similar to \n(or are actually) the 9650SE cards then each of them is using 8 lanes of \nthe 16 available. I'd need to know the exact motherboard or system to \neven have a clue what the options are for adjusting the BIOS and whether \nthey are shared or independant.\n\nBut I haven't seen one where there's any real ability to adjust how the \nI/O is partitioned beyond adjusting what slot you plug things into so \nthat's probably a dead end anyway. Given the original symptoms, one thing \nI would be suspicious of though is whether there's some sort of IRQ \nconflict going on. Sadly we still haven't left that kind of junk behind \neven on current PC motherboards.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 17 Jan 2008 18:23:47 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Thu, Jan 17, 2008 at 03:07:02PM -0600, Scott Marlowe wrote:\n> Sounds like they're sharing something they shouldn't be. I'm not real\n> familiar with PCI-express. Aren't those the ones that use up to 16\n> channels for I/O? Can you divide it to 8 and 8 for each PCI-express\n> slot in the BIOS maybe, or something like that?\n\nPCI-E is a point-to-point-system.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 Jan 2008 00:30:55 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Jan 17, 2008 6:23 PM, Greg Smith <[email protected]> wrote:\n\n> On Thu, 17 Jan 2008, Scott Marlowe wrote:\n>\n> > On Jan 17, 2008 2:17 PM, Sean Davis <[email protected]> wrote:\n> >> two3-ware cards, one 9640SE-24 and one 9640SE-16\n> > Sounds like they're sharing something they shouldn't be. I'm not real\n> > familiar with PCI-express. Aren't those the ones that use up to 16\n> > channels for I/O? Can you divide it to 8 and 8 for each PCI-express\n> > slot in the BIOS maybe, or something like that?\n>\n> I can't find the 9640SE-24/16 anywhere, but presuming these are similar to\n> (or are actually) the 9650SE cards then each of them is using 8 lanes of\n> the 16 available. I'd need to know the exact motherboard or system to\n> even have a clue what the options are for adjusting the BIOS and whether\n> they are shared or independant.\n>\n> But I haven't seen one where there's any real ability to adjust how the\n> I/O is partitioned beyond adjusting what slot you plug things into so\n> that's probably a dead end anyway. Given the original symptoms, one thing\n> I would be suspicious of though is whether there's some sort of IRQ\n> conflict going on. Sadly we still haven't left that kind of junk behind\n> even on current PC motherboards.\n\n\nThanks, Greg. After a little digging, 3-ware suggested moving one of the\ncards, also. We will probably give that a try. I'll also look into the\nbios, but since the machine is running as a fileserver, there is precious\nlittle time for downtime tinkering. FYI, here are the specs on the server.\n\nhttp://www.thinkmate.com/System/8U_Dual_Xeon_i2SS40-8U_Storage_Server\n\nSean\n\nOn Jan 17, 2008 6:23 PM, Greg Smith <[email protected]> wrote:\nOn Thu, 17 Jan 2008, Scott Marlowe wrote:> On Jan 17, 2008 2:17 PM, Sean Davis <[email protected]> wrote:>> two3-ware cards, one 9640SE-24 and one 9640SE-16\n> Sounds like they're sharing something they shouldn't be. I'm not real> familiar with PCI-express. Aren't those the ones that use up to 16> channels for I/O? Can you divide it to 8 and 8 for each PCI-express\n> slot in the BIOS maybe, or something like that?I can't find the 9640SE-24/16 anywhere, but presuming these are similar to(or are actually) the 9650SE cards then each of them is using 8 lanes of\nthe 16 available. I'd need to know the exact motherboard or system toeven have a clue what the options are for adjusting the BIOS and whetherthey are shared or independant.But I haven't seen one where there's any real ability to adjust how the\nI/O is partitioned beyond adjusting what slot you plug things into sothat's probably a dead end anyway. Given the original symptoms, one thingI would be suspicious of though is whether there's some sort of IRQ\nconflict going on. Sadly we still haven't left that kind of junk behindeven on current PC motherboards.Thanks, Greg. After a little digging, 3-ware suggested moving one of the cards, also. We will probably give that a try. I'll also look into the bios, but since the machine is running as a fileserver, there is precious little time for downtime tinkering. FYI, here are the specs on the server.\nhttp://www.thinkmate.com/System/8U_Dual_Xeon_i2SS40-8U_Storage_ServerSean",
"msg_date": "Fri, 18 Jan 2008 08:03:42 -0500",
"msg_from": "\"Sean Davis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Fri, 18 Jan 2008, Sean Davis wrote:\n\n> FYI, here are the specs on the server.\n> http://www.thinkmate.com/System/8U_Dual_Xeon_i2SS40-8U_Storage_Server\n\nNow we're getting somewhere. I'll dump this on-list as it's a good \nexample of how to fight this class of performance problems.\n\nThe usual troubleshooting procedure is to figure out how the motherboard \nis mapping all the I/O internally and then try to move things out of the \nsame path. That tells us that you have an Intel S5000PSL motherboard, and \nthe tech specs are at \nhttp://support.intel.com/support/motherboards/server/s5000psl/sb/CS-022619.htm\n\nWhat you want to stare at is the block diagram that's Figure 10, page 27 \n(usually this isn't in the motherboard documentation, and instead you have \nto drill down into the chipset documentation to find it). Slots 5 and 6 \nthat have PCI Express x16 connectors (but run at x8 speed) both go \nstraight into the memory hub. Slots 3 and 4, which are x8 but run at x4 \nspeed, go through the I/O controller first. Those should be slower, so if \nyou put a card into there it will have a degraded top-end performance \ncompared to slots 5/6.\n\nLine that up with the layout in Figure 2, page 17, and you should be able \nto get an idea what the possibilities are for moving the cards around and \nwhat the trade-offs involved are. Ideally you'd want both 3Ware cards to \nbe in slots 5+6, but if that's your current configuration you could try \nmoving the less important of the two (maybe the one with less drives) to \neither slot 3/4 and see if the contention you're seeing drops.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 18 Jan 2008 14:14:31 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Jan 18, 2008 2:14 PM, Greg Smith <[email protected]> wrote:\n\n> On Fri, 18 Jan 2008, Sean Davis wrote:\n>\n> > FYI, here are the specs on the server.\n> > http://www.thinkmate.com/System/8U_Dual_Xeon_i2SS40-8U_Storage_Server\n>\n> Now we're getting somewhere. I'll dump this on-list as it's a good\n> example of how to fight this class of performance problems.\n>\n> The usual troubleshooting procedure is to figure out how the motherboard\n> is mapping all the I/O internally and then try to move things out of the\n> same path. That tells us that you have an Intel S5000PSL motherboard, and\n> the tech specs are at\n>\n> http://support.intel.com/support/motherboards/server/s5000psl/sb/CS-022619.htm\n>\n> What you want to stare at is the block diagram that's Figure 10, page 27\n> (usually this isn't in the motherboard documentation, and instead you have\n> to drill down into the chipset documentation to find it). Slots 5 and 6\n> that have PCI Express x16 connectors (but run at x8 speed) both go\n> straight into the memory hub. Slots 3 and 4, which are x8 but run at x4\n> speed, go through the I/O controller first. Those should be slower, so if\n> you put a card into there it will have a degraded top-end performance\n> compared to slots 5/6.\n>\n> Line that up with the layout in Figure 2, page 17, and you should be able\n> to get an idea what the possibilities are for moving the cards around and\n> what the trade-offs involved are. Ideally you'd want both 3Ware cards to\n> be in slots 5+6, but if that's your current configuration you could try\n> moving the less important of the two (maybe the one with less drives) to\n> either slot 3/4 and see if the contention you're seeing drops.\n>\n\nGreg, this is GREAT information and I'm glad you stepped through the\nprocess. It is really interesting to see to what extent these slots that\nhave similar names (or the same names) should be expected to behave so\ndifferently. We'll try some of the things you suggest (although it will\nprobably take a while to do) and if we come to any conclusions will let\neveryone know our conclusions.\n\nSean\n\nOn Jan 18, 2008 2:14 PM, Greg Smith <[email protected]> wrote:\nOn Fri, 18 Jan 2008, Sean Davis wrote:> FYI, here are the specs on the server.> http://www.thinkmate.com/System/8U_Dual_Xeon_i2SS40-8U_Storage_Server\nNow we're getting somewhere. I'll dump this on-list as it's a goodexample of how to fight this class of performance problems.The usual troubleshooting procedure is to figure out how the motherboard\nis mapping all the I/O internally and then try to move things out of thesame path. That tells us that you have an Intel S5000PSL motherboard, andthe tech specs are at\nhttp://support.intel.com/support/motherboards/server/s5000psl/sb/CS-022619.htmWhat you want to stare at is the block diagram that's Figure 10, page 27(usually this isn't in the motherboard documentation, and instead you have\nto drill down into the chipset documentation to find it). Slots 5 and 6that have PCI Express x16 connectors (but run at x8 speed) both gostraight into the memory hub. Slots 3 and 4, which are x8 but run at x4\nspeed, go through the I/O controller first. Those should be slower, so ifyou put a card into there it will have a degraded top-end performancecompared to slots 5/6.Line that up with the layout in Figure 2, page 17, and you should be able\nto get an idea what the possibilities are for moving the cards around andwhat the trade-offs involved are. 
Ideally you'd want both 3Ware cards tobe in slots 5+6, but if that's your current configuration you could try\nmoving the less important of the two (maybe the one with less drives) toeither slot 3/4 and see if the contention you're seeing drops.Greg, this is GREAT information and I'm glad you stepped through the process. It is really interesting to see to what extent these slots that have similar names (or the same names) should be expected to behave so differently. We'll try some of the things you suggest (although it will probably take a while to do) and if we come to any conclusions will let everyone know our conclusions. \nSean",
"msg_date": "Fri, 18 Jan 2008 14:35:15 -0500",
"msg_from": "\"Sean Davis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
},
{
"msg_contents": "On Thu, 17 Jan 2008, Sean Davis wrote:\n\n> We have a machine that serves as a fileserver and a database server. Our\n> server hosts a raid array of 40 disk drives, attached to two3-ware cards,\n> one 9640SE-24 and one 9640SE-16. We have noticed that activity on one\n> controller blocks access on the second controller, not only for disk-IO but\n> also the command line tools which become unresponsive for the inactive\n> controller. The controllers are sitting in adjacent PCI-express slots on a\n> machine with dual-dual AMD and 16GB of RAM. Has anyone else noticed issues\n> like this? Throughput for either controller is a pretty respectable\n> 150-200MB/s writing and somewhat faster for reading, but the \"blocking\" is\n> problematic, as the machine is serving multiple purposes.\n>\n> I know this is off-topic, but I know lots of folks here deal with very large\n> disk arrays; it is hard to get real-world input on machines such as these.\n\nthere have been a lot of discussions on the linux-kernel mailing list over \nthe last several months on the topic of IO to one set of drives \ninterfearing with IO to another set of drives. The soon-to-be-released \n2.6.24 kernel includes a substantial amount of work in this area that (at \nleast on initial reports) is showing significant improvements.\n\nI haven't had the time to test this out yet, so I can't add personal \nexperiance, but it's definantly something to look at on a test system.\n\nDavid Lang\n",
"msg_date": "Sat, 19 Jan 2008 09:31:39 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [OT] RAID controllers blocking one another?"
}
] |
[
{
"msg_contents": "\nHi,\n\nI have been having difficulty with some functions which return sets of\nrows. The functions seem to run very slowly, even though the queries\nthey run execute very quicky if I run them directly from psgl.\nTypically these queries are only returning a few hundred rows with my\nreal data.\n\nI have had difficulty coming up with a simple test case, but the code\nbelow usually shows the same problem. Sometimes I have to run the\nsetup code a few times before it happens - not sure why (I would\nexpect this to be deterministic), but perhaps there is some randomness\nintroduced by the sampling done by the analyse.\n\nThe function foo() which has a hard-coded LIMIT always executes\nquickly (comparable to running the query directly).\n\nHowever, the function foo(int) which is passed the same LIMIT as a\nparameter executes around 30 times slower. The only difference is that\nthe LIMIT is a parameter to the function, although the LIMIT isn't\nreached anyway in this case. Sometimes running this same script\ngenerates data for which this function executes as fast as the other\none (which is always fast).\n\nI am finding this all very puzzling. Am I just doing something silly?\n\nIs there any way that I can see what execution plan is being used\ninternally by the functions?\n\nThanks,\n\nDean.\n\n\n\nDROP FUNCTION IF EXISTS setup();\nCREATE FUNCTION setup()\nRETURNS void AS\n$$\nDECLARE\n i int;\n c1 int;\n c2 int;\n c3 int;\nBEGIN\n DROP TABLE IF EXISTS foo CASCADE;\n CREATE TABLE foo (id int PRIMARY KEY, name text);\n\n i := 1;\n c1 := ascii('A');\n WHILE c1 <= ascii('Z') LOOP\n c2 := ascii('a');\n WHILE c2 <= ascii('z') LOOP\n c3 := ascii('a');\n WHILE c3 <= ascii('z') LOOP\n INSERT INTO foo VALUES(i, chr(c1)||chr(c2)||'o');\n i := i+1;\n c3 := c3+1;\n END LOOP;\n c2 := c2+1;\n END LOOP;\n c1 := c1+1;\n END LOOP;\n\n CREATE INDEX foo_name_idx ON foo(lower(name) text_pattern_ops);\nEND;\n$$ LANGUAGE plpgsql;\n\nSELECT setup();\nANALYZE foo;\n\nDROP FUNCTION IF EXISTS foo();\nCREATE FUNCTION foo() RETURNS SETOF int AS\n$$\n SELECT id FROM foo WHERE lower(name) LIKE 'foo' ORDER BY id OFFSET 0 LIMIT 100;\n$$ LANGUAGE SQL STABLE;\n\nDROP FUNCTION IF EXISTS foo(int);\nCREATE FUNCTION foo(int) RETURNS SETOF int AS\n$$\n SELECT id FROM foo WHERE lower(name) LIKE 'foo' ORDER BY id OFFSET 0 LIMIT $1;\n$$ LANGUAGE SQL STABLE;\n\nEXPLAIN ANALYZE SELECT id FROM foo WHERE name ILIKE 'foo' ORDER BY id OFFSET 0 LIMIT 100;\nEXPLAIN ANALYZE SELECT id FROM foo WHERE lower(name) LIKE 'foo' ORDER BY id OFFSET 0 LIMIT 100;\nEXPLAIN ANALYZE SELECT * FROM foo();\nEXPLAIN ANALYZE SELECT * FROM foo(100);\n\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Limit (cost=336.52..336.77 rows=100 width=4) (actual time=13.377..13.507 rows=26 loops=1)\n -> Sort (cost=336.52..336.86 rows=136 width=4) (actual time=13.369..13.415 rows=26 loops=1)\n Sort Key: id\n -> Seq Scan on foo (cost=0.00..331.70 rows=136 width=4) (actual time=2.462..13.267 rows=26 loops=1)\n Filter: (name ~~* 'foo'::text)\n Total runtime: 13.627 ms\n(6 rows)\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=68.13..68.19 rows=26 width=4) (actual time=0.345..0.475 rows=26 loops=1)\n -> Sort (cost=68.13..68.19 rows=26 width=4) (actual time=0.338..0.381 rows=26 loops=1)\n Sort Key: id\n -> Bitmap Heap Scan on foo (cost=4.46..67.52 rows=26 width=4) 
(actual time=0.164..0.257 rows=26 loops=1)\n Filter: (lower(name) ~~ 'foo'::text)\n -> Bitmap Index Scan on foo_name_idx (cost=0.00..4.45 rows=26 width=0) (actual time=0.109..0.109 rows=26 loops=1)\n Index Cond: (lower(name) ~=~ 'foo'::text)\n Total runtime: 0.597 ms\n(8 rows)\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Function Scan on foo (cost=0.00..12.50 rows=1000 width=4) (actual time=1.524..1.570 rows=26 loops=1)\n Total runtime: 1.647 ms\n(2 rows)\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Function Scan on foo (cost=0.00..12.50 rows=1000 width=4) (actual time=21.496..21.545 rows=26 loops=1)\n Total runtime: 21.636 ms\n(2 rows)\n\n\n[PostgreSQL 8.2.5, tested on SUSE Linux 10.3, 64-bit and Windows 2000]\n\n_________________________________________________________________\nWho's friends with who and co-starred in what?\nhttp://www.searchgamesbox.com/celebrityseparation.shtml",
"msg_date": "Sun, 20 Jan 2008 11:40:19 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow set-returning functions"
},
{
"msg_contents": "Dean Rasheed wrote:\n> I have been having difficulty with some functions which return sets of\n> rows. The functions seem to run very slowly, even though the queries\n> they run execute very quicky if I run them directly from psgl.\n> Typically these queries are only returning a few hundred rows with my\n> real data.\n> \n> I have had difficulty coming up with a simple test case, but the code\n> below usually shows the same problem. Sometimes I have to run the\n> setup code a few times before it happens - not sure why (I would\n> expect this to be deterministic), but perhaps there is some randomness\n> introduced by the sampling done by the analyse.\n> \n> The function foo() which has a hard-coded LIMIT always executes\n> quickly (comparable to running the query directly).\n> \n> However, the function foo(int) which is passed the same LIMIT as a\n> parameter executes around 30 times slower. The only difference is that\n> the LIMIT is a parameter to the function, although the LIMIT isn't\n> reached anyway in this case. Sometimes running this same script\n> generates data for which this function executes as fast as the other\n> one (which is always fast).\n\nThis is clearly because the planner doesn't know what the value for the \nparameter will be at run time, so it chooses a plan that's not optimal \nfor LIMIT 100.\n\n> Is there any way that I can see what execution plan is being used\n> internally by the functions?\n\nNot directly, but you can do this:\n\npostgres=# PREPARE p (int4) AS SELECT id FROM foo WHERE lower(name) LIKE \n'foo' ORDER BY id OFFSET 0 LIMIT $1;\nPREPARE\npostgres=# EXPLAIN EXECUTE p(100); QUERY \nPLAN\n-----------------------------------------------------------------------------\n Limit (cost=0.00..49.18 rows=2 width=4)\n -> Index Scan using foo_pkey on foo (cost=0.00..614.77 rows=25 \nwidth=4)\n Filter: (lower(name) ~~ 'foo'::text)\n(3 rows)\n\nYou could work around that by using EXECUTE in the plpgsql function, to \nforce the query to be planned on every execution with the actual value \nof the LIMIT.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sun, 20 Jan 2008 14:34:52 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow set-returning functions"
},
{
"msg_contents": "On Jan 20, 2008 9:34 AM, Heikki Linnakangas <[email protected]> wrote:\n> Dean Rasheed wrote:\n> > I have been having difficulty with some functions which return sets of\n> > rows. The functions seem to run very slowly, even though the queries\n> > they run execute very quicky if I run them directly from psgl.\n> > Typically these queries are only returning a few hundred rows with my\n> > real data.\n> >\n> > I have had difficulty coming up with a simple test case, but the code\n> > below usually shows the same problem. Sometimes I have to run the\n> > setup code a few times before it happens - not sure why (I would\n> > expect this to be deterministic), but perhaps there is some randomness\n> > introduced by the sampling done by the analyse.\n> >\n> > The function foo() which has a hard-coded LIMIT always executes\n> > quickly (comparable to running the query directly).\n> >\n> > However, the function foo(int) which is passed the same LIMIT as a\n> > parameter executes around 30 times slower. The only difference is that\n> > the LIMIT is a parameter to the function, although the LIMIT isn't\n> > reached anyway in this case. Sometimes running this same script\n> > generates data for which this function executes as fast as the other\n> > one (which is always fast).\n>\n> This is clearly because the planner doesn't know what the value for the\n> parameter will be at run time, so it chooses a plan that's not optimal\n> for LIMIT 100.\n>\n> > Is there any way that I can see what execution plan is being used\n> > internally by the functions?\n\nprepared statements have the same problem. IIRC the planner assumes\n10%, which will often drop to a seqscan or a bitmap index scan. Some\nyears back I argued (unsuccessfully) to have the planner guess 100\nrows or something like that. Ideally, I think it would generate the\nplan from the value passed into the first invocation of the function.\n\nmerlin\n",
"msg_date": "Sun, 20 Jan 2008 10:25:48 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow set-returning functions"
},
{
"msg_contents": "\nThanks for the replies.\n\nConverting the functions to plpgsql and using EXECUTE works a treat.\nOn the real data, one of my functions is now over 50x faster :-)\n\nDean\n\n\n> Date: Sun, 20 Jan 2008 10:25:48 -0500\n> From: [email protected]\n> To: [email protected]\n> Subject: Re: [PERFORM] Slow set-returning functions\n> CC: [email protected]; [email protected]\n>\n> On Jan 20, 2008 9:34 AM, Heikki Linnakangas wrote:\n>> Dean Rasheed wrote:\n>>> I have been having difficulty with some functions which return sets of\n>>> rows. The functions seem to run very slowly, even though the queries\n>>> they run execute very quicky if I run them directly from psgl.\n>>> Typically these queries are only returning a few hundred rows with my\n>>> real data.\n>>>\n>>> I have had difficulty coming up with a simple test case, but the code\n>>> below usually shows the same problem. Sometimes I have to run the\n>>> setup code a few times before it happens - not sure why (I would\n>>> expect this to be deterministic), but perhaps there is some randomness\n>>> introduced by the sampling done by the analyse.\n>>>\n>>> The function foo() which has a hard-coded LIMIT always executes\n>>> quickly (comparable to running the query directly).\n>>>\n>>> However, the function foo(int) which is passed the same LIMIT as a\n>>> parameter executes around 30 times slower. The only difference is that\n>>> the LIMIT is a parameter to the function, although the LIMIT isn't\n>>> reached anyway in this case. Sometimes running this same script\n>>> generates data for which this function executes as fast as the other\n>>> one (which is always fast).\n>>\n>> This is clearly because the planner doesn't know what the value for the\n>> parameter will be at run time, so it chooses a plan that's not optimal\n>> for LIMIT 100.\n>>\n>>> Is there any way that I can see what execution plan is being used\n>>> internally by the functions?\n>\n> prepared statements have the same problem. IIRC the planner assumes\n> 10%, which will often drop to a seqscan or a bitmap index scan. Some\n> years back I argued (unsuccessfully) to have the planner guess 100\n> rows or something like that. Ideally, I think it would generate the\n> plan from the value passed into the first invocation of the function.\n>\n> merlin\n\n_________________________________________________________________\nGet Hotmail on your mobile, text MSN to 63463!\nhttp://mobile.uk.msn.com/pc/mail.aspx",
"msg_date": "Sun, 20 Jan 2008 17:32:29 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow set-returning functions"
},
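For readers wondering what the plpgsql-plus-EXECUTE rewrite looks like, here is a minimal sketch based on the foo(int) test case from the start of the thread (foo_dyn is a hypothetical name, not Dean's actual code). Building the statement as a string means it is planned on each call with the real LIMIT value:

    CREATE FUNCTION foo_dyn(lim int) RETURNS SETOF int AS
    $$
    DECLARE
        r record;
    BEGIN
        -- assembled at run time, so the planner sees the actual LIMIT
        FOR r IN EXECUTE
            'SELECT id FROM foo WHERE lower(name) LIKE ''foo'' '
            || 'ORDER BY id OFFSET 0 LIMIT ' || lim::text
        LOOP
            RETURN NEXT r.id;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql STABLE;

    EXPLAIN ANALYZE SELECT * FROM foo_dyn(100);

The trade-off is a re-plan on every call, which is usually cheap next to running a badly planned query.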
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> prepared statements have the same problem. IIRC the planner assumes\n> 10%, which will often drop to a seqscan or a bitmap index scan. Some\n> years back I argued (unsuccessfully) to have the planner guess 100\n> rows or something like that. Ideally, I think it would generate the\n> plan from the value passed into the first invocation of the function.\n\nI believe it's the case that that will happen now in the same contexts\nwhere the planner uses the first value of any other parameter (ie,\nunnamed statements in the extended-query protocol).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jan 2008 13:30:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow set-returning functions "
},
{
"msg_contents": "\n>> Is there any way that I can see what execution plan is being used\n>> internally by the functions?\n>> \n> \n> Not directly, but you can do this:\n> \n> \n> postgres=# PREPARE p (int4) AS SELECT id FROM foo WHERE lower(name) LIKE\n> 'foo' ORDER BY id OFFSET 0 LIMIT $1; \n> PREPARE\n> \n> postgres=# EXPLAIN EXECUTE p(100); QUERY PLAN \n> -----------------------------------------------------------------------------\n> Limit (cost=0.00..49.18 rows=2 width=4)\n> \n> -> Index Scan using foo_pkey on foo (cost=0.00..614.77 rows=25 width=4) \n> Filter: (lower(name) ~~ 'foo'::text)\n> (3 rows)\n\n\nI think that having the ability to see the execution plans being used\nby queries inside functions would be quite useful.\n\nMore generally, I would like to be able to log the execution plans of\nall queries issued by an application (in my case I am working on a web\napplication, where some of the queries are auto-generated by\nHibernate). I've tried setting debug_print_plan, but the results are a\nlittle hard to interpret.\n\nAs an experiment, I have tried hacking around a little with the code.\nThis is my first foray into the source code, so I might well be\nmissing something, but basically I added a new configuration parameter\ndebug_explain_plan which causes all queries to be instrumented and\nExecutorRun() to call explain_outNode() at the end, logging the\nresults at level DEBUG1.\n\nIt seems to work quite well as a development aid for my web\napplication. It is also useful from a psql session (similar to\nAUTOTRACE in Oracle):\n\ntest=# create table foo(a int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"foo_pkey\" for table \"foo\"\nCREATE TABLE\ntest=# insert into foo values (1), (2), (3), (4), (5);\nINSERT 0 5\ntest=# set debug_explain_plan=true;\nSET\ntest=# set client_min_messages=debug1;\nSET\ntest=# select count(*) from foo where a>3;\nDEBUG: ------------------- query plan -------------------\nDETAIL: Aggregate (cost=32.45..32.46 rows=1 width=0) (actual time=0.066..0.068 rows=1 loops=1)\n -> Bitmap Heap Scan on foo (cost=10.45..30.45 rows=800 width=0) (actual time=0.039..0.043 rows=2 loops=1)\n Recheck Cond: (a> 3)\n -> Bitmap Index Scan on foo_pkey (cost=0.00..10.25 rows=800 width=0) (actual time=0.025..0.025 rows=2 loops=1)\n Index Cond: (a> 3)\nQuery runtime: 0.089 ms\n count\n-------\n 2\n(1 row)\n\ntest=# create function foo() returns int as 'select max(a) from foo;' language sql stable;\nCREATE FUNCTION\ntest=# select * from foo where a=foo();\nDEBUG: ------------------- query plan -------------------\nDETAIL: Result (cost=0.04..0.05 rows=1 width=0) (actual time=0.044..0.044 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4) (actual time=0.032..0.034 rows=1 loops=1)\n -> Index Scan Backward using foo_pkey on foo (cost=0.00..84.25 rows=2400 width=4) (actual time=0.025..0.025 rows=1 loops=1)\n Filter: (a IS NOT NULL)\nQuery runtime: 0.050 ms\nCONTEXT: SQL function \"foo\" statement 1\nDEBUG: ------------------- query plan -------------------\nDETAIL: Result (cost=0.04..0.05 rows=1 width=0) (actual time=0.037..0.037 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4) (actual time=0.027..0.029 rows=1 loops=1)\n -> Index Scan Backward using foo_pkey on foo (cost=0.00..84.25 rows=2400 width=4) (actual time=0.021..0.021 rows=1 loops=1)\n Filter: (a IS NOT NULL)\nQuery runtime: 0.044 ms\nCONTEXT: SQL function \"foo\" statement 1\nDEBUG: ------------------- query plan -------------------\nDETAIL: Index 
Scan using foo_pkey on foo (cost=0.25..8.52 rows=1 width=4) (actual time=1.638..1.642 rows=1 loops=1)\n Index Cond: (a = foo())\nQuery runtime: 1.686 ms\n a\n---\n 5\n(1 row)\n\n\n(Curious that foo() is being executed twice in this case).\n\nThe CONTEXT is very useful, particularly when functions call other\nfunctions, since it gives the call stack (presumably only for SQL and\nPL/pgSQL functions). For top-level queries I would ideally like the\nCONTEXT to log the SQL being executed, but I couldn't figure out how\nto access that information.\n\nAnyway, I'd be interested to know if anyone has thought about doing\nanything like this before and if anyone else might find this useful.\n\nDean\n\n_________________________________________________________________\nFree games, great prizes - get gaming at Gamesbox. \nhttp://www.searchgamesbox.com",
"msg_date": "Sun, 27 Jan 2008 17:29:34 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow set-returning functions"
},
{
"msg_contents": "\nDnia 27-01-2008, N o godzinie 17:29 +0000, Dean Rasheed pisze:\n> The CONTEXT is very useful, particularly when functions call other\n> functions, since it gives the call stack (presumably only for SQL and\n> PL/pgSQL functions). For top-level queries I would ideally like the\n> CONTEXT to log the SQL being executed, but I couldn't figure out how\n> to access that information.\n> \n> Anyway, I'd be interested to know if anyone has thought about doing\n> anything like this before and if anyone else might find this useful.\n> \n> Dean\n\nI'd love to see that. The mentioned PREPARE workaround doesn't seem to\nwork when executed function calls for example three another (or I don't\nknow how to use it in such situation) - and is generally painful to\nuse. \n\nI'm afraid I can't help you much (besides testing) but I'd be more than\ninterested in such improvement.\n\nRegards,\nMarcin\n\n",
"msg_date": "Sun, 27 Jan 2008 20:16:20 +0100",
"msg_from": "Marcin =?UTF-8?Q?St=C4=99pnicki?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow set-returning functions"
},
{
"msg_contents": "On Jan 27, 2008 12:29 PM, Dean Rasheed <[email protected]> wrote:\n> >> Is there any way that I can see what execution plan is being used\n> >> internally by the functions?\n> >>\n> >\n> > Not directly, but you can do this:\n> >\n> >\n> > postgres=# PREPARE p (int4) AS SELECT id FROM foo WHERE lower(name) LIKE\n> > 'foo' ORDER BY id OFFSET 0 LIMIT $1;\n> > PREPARE\n> >\n> > postgres=# EXPLAIN EXECUTE p(100); QUERY PLAN\n> > -----------------------------------------------------------------------------\n> > Limit (cost=0.00..49.18 rows=2 width=4)\n> >\n> > -> Index Scan using foo_pkey on foo (cost=0.00..614.77 rows=25 width=4)\n> > Filter: (lower(name) ~~ 'foo'::text)\n> > (3 rows)\n>\n>\n> I think that having the ability to see the execution plans being used\n> by queries inside functions would be quite useful.\n>\n> More generally, I would like to be able to log the execution plans of\n> all queries issued by an application (in my case I am working on a web\n> application, where some of the queries are auto-generated by\n> Hibernate). I've tried setting debug_print_plan, but the results are a\n> little hard to interpret.\n>\n> As an experiment, I have tried hacking around a little with the code.\n> This is my first foray into the source code, so I might well be\n> missing something, but basically I added a new configuration parameter\n> debug_explain_plan which causes all queries to be instrumented and\n> ExecutorRun() to call explain_outNode() at the end, logging the\n> results at level DEBUG1.\n\nI read your email, blinked twice, and thought: where have you been all\nmy life! :-)\n\n(IOW, +1)\n\nmerlin\n",
"msg_date": "Sun, 27 Jan 2008 18:01:08 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow set-returning functions"
},
{
"msg_contents": "\nI've posted the patch here:\n\nhttp://archives.postgresql.org/pgsql-patches/2008-01/msg00123.php\n\nDean.\n\n_________________________________________________________________\nGet Hotmail on your mobile, text MSN to 63463!\nhttp://mobile.uk.msn.com/pc/mail.aspx",
"msg_date": "Mon, 28 Jan 2008 12:38:57 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow set-returning functions"
}
] |
[
{
"msg_contents": "Hi,\n \nSince I moved from PostgreSQL 7.3 to 8.2 I have a query which suddenly runs very slow. In 7.3 it was really fast. It seems that the query analyser makes other choices, which I don't understand.\n \nI have the query:\n \nSELECT * FROM fpuArticle \n LEFT OUTER JOIN fpuArticleText ON a_No=at_a_No AND coalesce(at_Type,1)=1 AND coalesce(at_Language,0)=0 \n WHERE strpos(lower(coalesce(a_Code,'') || ' ' || coalesce(at_Text,'')), 'string')>0\n \nwhen I use a normal join, this query is very fast, but with this left outer join it is slow. \n \nThis is the query analysis:\n \nNested Loop Left Join (cost=1796.69..3327.98 rows=5587 width=516)\n Join Filter: (fpuarticle.a_no = fpuarticletext.at_a_no)\n Filter: (strpos(lower((((COALESCE(fpuarticle.a_code, ''::character varying))::text || ' '::text) || (COALESCE(fpuarticletext.at_text, ''::character varying))::text)), 'string'::text) > 0)\n -> Seq Scan on fpuarticle (cost=0.00..944.62 rows=16762 width=386)\n -> Materialize (cost=1796.69..1796.70 rows=1 width=130)\n -> Seq Scan on fpuarticletext (cost=0.00..1796.69 rows=1 width=130)\n Filter: ((COALESCE((at_type)::integer, 1) = 1) AND (COALESCE(at_language, 0::numeric) = 0::numeric))\n \nIt seems that the filter on at_type and at_Language is used at the and, while it is much faster to use it at the beginning. Why is this, and how can I influence this?\n \nWith kind regards\n \nMarten Verhoeven\nVan Beek B.V.\n\n\n\n\n\nHi,\n \nSince I moved from \nPostgreSQL 7.3 to 8.2 I have a query which suddenly runs very slow. In \n7.3 it was really fast. It seems that the query analyser makes other choices, \nwhich I don't understand.\n \nI have the \nquery:\n \nSELECT * FROM \nfpuArticle \n \nLEFT OUTER JOIN fpuArticleText ON a_No=at_a_No AND coalesce(at_Type,1)=1 AND \ncoalesce(at_Language,0)=0 \n WHERE \nstrpos(lower(coalesce(a_Code,'') || ' ' || coalesce(at_Text,'')), \n'string')>0\n \nwhen I use a normal \njoin, this query is very fast, but with this left outer join it is slow. \n\n \nThis is the query \nanalysis:\n \nNested Loop Left \nJoin (cost=1796.69..3327.98 rows=5587 width=516) Join Filter: \n(fpuarticle.a_no = fpuarticletext.at_a_no) Filter: \n(strpos(lower((((COALESCE(fpuarticle.a_code, ''::character varying))::text || ' \n'::text) || (COALESCE(fpuarticletext.at_text, ''::character varying))::text)), \n'string'::text) > 0) -> Seq Scan on fpuarticle \n(cost=0.00..944.62 rows=16762 width=386) -> Materialize \n(cost=1796.69..1796.70 rows=1 \nwidth=130) -> Seq Scan on \nfpuarticletext (cost=0.00..1796.69 rows=1 \nwidth=130) \nFilter: ((COALESCE((at_type)::integer, 1) = 1) AND (COALESCE(at_language, \n0::numeric) = 0::numeric))\n \nIt seems that the \nfilter on at_type and at_Language is used at the and, while it is much faster to \nuse it at the beginning. Why is this, and how can I influence \nthis?\n \nWith kind \nregards\n \nMarten \nVerhoeven\nVan Beek \nB.V.",
"msg_date": "Mon, 21 Jan 2008 16:00:02 +0100",
"msg_from": "\"Marten Verhoeven\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow performance with left outer join"
},
{
"msg_contents": "Hello\n\nplease, send output EXPLAIN ANALYZE statement\n\nRegards\nPavel Stehule\n\nOn 21/01/2008, Marten Verhoeven <[email protected]> wrote:\n>\n>\n> Hi,\n>\n> Since I moved from PostgreSQL 7.3 to 8.2 I have a query which suddenly runs\n> very slow. In 7.3 it was really fast. It seems that the query analyser makes\n> other choices, which I don't understand.\n>\n> I have the query:\n>\n> SELECT * FROM fpuArticle\n> LEFT OUTER JOIN fpuArticleText ON a_No=at_a_No AND coalesce(at_Type,1)=1\n> AND coalesce(at_Language,0)=0\n> WHERE strpos(lower(coalesce(a_Code,'') || ' ' ||\n> coalesce(at_Text,'')), 'string')>0\n>\n> when I use a normal join, this query is very fast, but with this left outer\n> join it is slow.\n>\n> This is the query analysis:\n>\n> Nested Loop Left Join (cost=1796.69..3327.98 rows=5587 width=516)\n> Join Filter: (fpuarticle.a_no = fpuarticletext.at_a_no)\n> Filter: (strpos(lower((((COALESCE(fpuarticle.a_code, ''::character\n> varying))::text || ' '::text) || (COALESCE(fpuarticletext.at_text,\n> ''::character varying))::text)), 'string'::text) > 0)\n> -> Seq Scan on fpuarticle (cost=0.00..944.62 rows=16762 width=386)\n> -> Materialize (cost=1796.69..1796.70 rows=1 width=130)\n> -> Seq Scan on fpuarticletext (cost=0.00..1796.69 rows=1\n> width=130)\n> Filter: ((COALESCE((at_type)::integer, 1) = 1) AND\n> (COALESCE(at_language, 0::numeric) = 0::numeric))\n>\n> It seems that the filter on at_type and at_Language is used at the and,\n> while it is much faster to use it at the beginning. Why is this, and how can\n> I influence this?\n>\n> With kind regards\n>\n> Marten Verhoeven\n> Van Beek B.V.\n",
"msg_date": "Mon, 21 Jan 2008 16:17:16 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance with left outer join"
},
{
"msg_contents": "\"Marten Verhoeven\" <[email protected]> writes:\n> This is the query analysis:\n \n> Nested Loop Left Join (cost=1796.69..3327.98 rows=5587 width=516)\n> Join Filter: (fpuarticle.a_no = fpuarticletext.at_a_no)\n> Filter: (strpos(lower((((COALESCE(fpuarticle.a_code, ''::character varying))::text || ' '::text) || (COALESCE(fpuarticletext.at_text, ''::character varying))::text)), 'string'::text) > 0)\n> -> Seq Scan on fpuarticle (cost=0.00..944.62 rows=16762 width=386)\n> -> Materialize (cost=1796.69..1796.70 rows=1 width=130)\n> -> Seq Scan on fpuarticletext (cost=0.00..1796.69 rows=1 width=130)\n> Filter: ((COALESCE((at_type)::integer, 1) = 1) AND (COALESCE(at_language, 0::numeric) = 0::numeric))\n\nIf this is slow, it must be that the scan of fpuarticletext actually\nreturns many more rows than the single row the planner is expecting.\nThe reason the estimate is off is probably that the planner cannot make\nany useful estimate about those COALESCE expressions. Try rewriting\nthem in the simpler forms\n\n\t(at_type = 1 or at_type is null) AND\n\t(at_language = 0 or at_language is null)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jan 2008 11:55:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance with left outer join "
},
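As a concrete illustration of the rewrite Tom suggests above, the original query might end up looking roughly like this (a sketch only; it assumes the intended semantics really are "1 or NULL" / "0 or NULL", and the WHERE clause is left untouched):

SELECT * FROM fpuArticle
  LEFT OUTER JOIN fpuArticleText
    ON a_No = at_a_No
   AND (at_Type = 1 OR at_Type IS NULL)          -- was coalesce(at_Type,1)=1
   AND (at_Language = 0 OR at_Language IS NULL)  -- was coalesce(at_Language,0)=0
 WHERE strpos(lower(coalesce(a_Code,'') || ' ' || coalesce(at_Text,'')), 'string') > 0;

Written this way the planner can use the column statistics on at_Type and at_Language directly, so the rows=1 estimate for the fpuarticletext scan should become much more realistic.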
{
"msg_contents": "Hello\n\n> > Filter: ((COALESCE((at_type)::integer, 1) = 1) AND (COALESCE(at_language, 0::numeric) = 0::numeric))\n>\n> If this is slow, it must be that the scan of fpuarticletext actually\n> returns many more rows than the single row the planner is expecting.\n> The reason the estimate is off is probably that the planner cannot make\n> any useful estimate about those COALESCE expressions. Try rewriting\n> them in the simpler forms\n>\n> (at_type = 1 or at_type is null) AND\n> (at_language = 0 or at_language is null)\n>\n\nwhat about put this topic into FAQ.\n\nRegards\nPavel Stehule\n",
"msg_date": "Mon, 21 Jan 2008 19:04:33 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance with left outer join"
}
] |
[
{
"msg_contents": "I might completely misunderstand this feature. Shouldn't \n\"synchronous_commit = off\" improve performance?\n\nWhatever I do, I find \"synchronous_commit = off\" to degrade performance. \n\nEspecially it doesn't like the CFQ I/O scheduler, it's not so bad with \ndeadline. Synthetic load like\n\npgbench -i -s 10 -U pgsql -d bench && pgbench -t 1000 -c 100 -U pgsql -d \nbench\n\nor the same with scale 100.\n\nMaybe it's just my test box.. single SATA-II drive, XFS on top of LVM.\n\nI'll retry without LVM once I have another drive.. I've seen LVM mess \nwith other things in the past.\n\n\n-- \nBest regards,\nHannes Dorbath\n",
"msg_date": "Mon, 21 Jan 2008 22:55:16 +0100",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.3 synchronous_commit"
},
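For anyone reproducing this, the setting can also be flipped per session, which makes it easy to compare both modes against the same server without editing postgresql.conf each time (a minimal sketch; 8.3 only, since the GUC does not exist in earlier releases):

SHOW synchronous_commit;          -- check the current value
SET synchronous_commit TO off;    -- asynchronous commit for this session only
-- ... run the workload to compare ...
RESET synchronous_commit;         -- back to the configured default

Note that a per-session SET only affects connections that issue it, so for a pgbench comparison the value still has to be set in postgresql.conf (followed by a reload) if all client connections should use it.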
{
"msg_contents": "Hello\n\nsynchronous_commit = off is well for specific load,\n\ntry only one connect pgbench ~ it is analogy for database import or\nsome administrator's work.\n\nRegards\nPavel Stehule\n\n\nOn 21/01/2008, Hannes Dorbath <[email protected]> wrote:\n> I might completely misunderstand this feature. Shouldn't\n> \"synchronous_commit = off\" improve performance?\n>\n> Whatever I do, I find \"synchronous_commit = off\" to degrade performance.\n>\n> Especially it doesn't like the CFQ I/O scheduler, it's not so bad with\n> deadline. Synthetic load like\n>\n> pgbench -i -s 10 -U pgsql -d bench && pgbench -t 1000 -c 100 -U pgsql -d\n> bench\n>\n> or the same with scale 100.\n>\n> Maybe it's just my test box.. single SATA-II drive, XFS on top of LVM.\n>\n> I'll retry without LVM once I have another drive.. I've seen LVM mess\n> with other things in the past.\n>\n>\n> --\n> Best regards,\n> Hannes Dorbath\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n",
"msg_date": "Mon, 21 Jan 2008 23:11:07 +0100",
"msg_from": "\"Pavel Stehule\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "On Mon, 2008-01-21 at 22:55 +0100, Hannes Dorbath wrote:\n> I might completely misunderstand this feature. Shouldn't \n> \"synchronous_commit = off\" improve performance?\n> \n> Whatever I do, I find \"synchronous_commit = off\" to degrade performance. \n> \n> Especially it doesn't like the CFQ I/O scheduler, it's not so bad with \n> deadline. Synthetic load like\n> \n\nThe CFQ scheduler is bad for performance in the tests that I have run.\nWhen I have a chance I'll put together some tests to try to demonstrate\nthat.\n\nThe reason it may be bad in your case is if you have many backends\ncommit many transactions asynchronously, and then the WAL writer tries\nto make those transactions durable, CFQ might think that the WAL writer\nis \"unfairly\" using a lot of I/O. This is just speculation though.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 21 Jan 2008 14:25:30 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "On Mon, 21 Jan 2008, Hannes Dorbath wrote:\n\n> pgbench -i -s 10 -U pgsql -d bench && pgbench -t 1000 -c 100 -U pgsql -d \n> bench\n\npgbench doesn't handle 100 clients at once very well on the same box as \nthe server, unless you have a pretty serious system. The pgbench program \nitself has a single process model that doesn't handle the CFQ round-robin \nvery well at all. On top of that, the database scale should be bigger \nthan the number of clients or everybody just fights for the branches \ntable. You said you tried with a larger scale as well, but that also \nvastly increases the size of the database which shifts to a completely \ndifferent set of bottlenecks. See \nhttp://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm for \nmore on this.\n\nTry something more in the range of 4 clients/CPU and set the scale to \ncloser to twice that (so with a dual-core system you might do 8 clients \nand a scale of 16). If you really want to simulate a large number of \nclients, do that on another system and connect to the server remotely.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 21 Jan 2008 18:31:16 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "On Mon, 2008-01-21 at 18:31 -0500, Greg Smith wrote:\n> pgbench doesn't handle 100 clients at once very well on the same box as \n> the server, unless you have a pretty serious system. The pgbench program \n> itself has a single process model that doesn't handle the CFQ round-robin \n> very well at all. On top of that, the database scale should be bigger \n\nHe was referring to the CFQ I/O scheduler. I don't think that will\naffect pgbench itself, because it doesn't read/write to disk, right?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 21 Jan 2008 15:45:54 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "On Mon, 21 Jan 2008, Jeff Davis wrote:\n\n> He was referring to the CFQ I/O scheduler. I don't think that will\n> affect pgbench itself, because it doesn't read/write to disk, right?\n\nIt does if you are writing latency log files but it shouldn't be in the \ncases given. But there's something weird about the interplay between the \nserver disk I/O under CFQ and the time slices the pgbench client gets when \nthey're all on the same system. pgbench is certainly a badly scaling (in \nregards to how it handles simulating many clients) single process utility \nand anything that starves its execution sometimes is deadly. I never dug \ndeep into the exact scheduler issues, but regardless in this case it's \nkind of unrealistic for Hannes to expect to simulate 100 clients on a \nsmall system and still get a true view of how the server on that system \nperforms.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 21 Jan 2008 19:26:14 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "Em Mon, 21 Jan 2008 15:45:54 -0800\nJeff Davis <[email protected]> escreveu:\n\n> On Mon, 2008-01-21 at 18:31 -0500, Greg Smith wrote:\n> > pgbench doesn't handle 100 clients at once very well on the same\n> > box as the server, unless you have a pretty serious system. The\n> > pgbench program itself has a single process model that doesn't\n> > handle the CFQ round-robin very well at all. On top of that, the\n> > database scale should be bigger \n> \n> He was referring to the CFQ I/O scheduler. I don't think that will\n> affect pgbench itself, because it doesn't read/write to disk, right?\n> \n \n No. But pgbench local running, it will be concurrency with\nPostgreSQL. I'm realized some test and really confirm disable\n*synchronous_commit* performance degrade with CFQ and Deadline.\n\n\n\nKind Regards,\n-- \nFernando Ike\nhttp://www.midstorm.org/~fike/weblog\n",
"msg_date": "Mon, 21 Jan 2008 22:48:21 -0200",
"msg_from": "Fernando Ike <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "* Hannes Dorbath:\n\n> I might completely misunderstand this feature. Shouldn't\n> \"synchronous_commit = off\" improve performance?\n\nIndeed.\n\nWe've seen something similar in one test, but couldn't reproduce it in\na clean environment so far.\n\n> Maybe it's just my test box.. single SATA-II drive, XFS on top of LVM.\n\nOurs was ext3, no LVM or RAID.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 22 Jan 2008 09:32:25 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "On Jan 22, 2008 9:32 AM, Florian Weimer <[email protected]> wrote:\n> > Maybe it's just my test box.. single SATA-II drive, XFS on top of LVM.\n>\n> Ours was ext3, no LVM or RAID.\n\nAlso with SATA? If your SATA disk is lying about effectively SYNCing\nthe data, I'm not that surprised you don't see any improvement. Being\nslower is a bit surprising though.\n\n--\nGuillaume\n",
"msg_date": "Tue, 22 Jan 2008 09:49:02 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "* Guillaume Smet:\n\n> On Jan 22, 2008 9:32 AM, Florian Weimer <[email protected]> wrote:\n>> > Maybe it's just my test box.. single SATA-II drive, XFS on top of LVM.\n>>\n>> Ours was ext3, no LVM or RAID.\n>\n> Also with SATA?\n\nYes, desktop-class SATA.\n\n> If your SATA disk is lying about effectively SYNCing the data, I'm\n> not that surprised you don't see any improvement.\n\nIt still should be faster since the commit doesn't even have to leave\nthe kernel write cache. For us, this is the common case because most\nnon-desktop systems have battery-backed write cache anyway.\n\n> Being slower is a bit surprising though.\n\nExactly.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 22 Jan 2008 09:56:51 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "Guillaume Smet wrote:\n> Also with SATA? If your SATA disk is lying about effectively SYNCing\n> the data, I'm not that surprised you don't see any improvement. Being\n> slower is a bit surprising though.\n\nThe disc is not lying, but LVM does not support write barriers, so the \nresult is the same. Indeed nothing is flushing the disc's write cache on \nfsync. However I could disable the disc's write cache entirely. One \nreason why I'm a ZFS fan boy lately. It just get's all of this right by \n*default*.\n\nAnyway, with some further testing my benchmark results vary so much that \n further discussion seems pointless. I'll repost when I really have \nreproducible values and followed Greg Smith's advice.\n\n\n-- \nBest regards,\nHannes Dorbath\n",
"msg_date": "Tue, 22 Jan 2008 12:56:59 +0100",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.3 synchronous_commit"
},
{
"msg_contents": "Greg Smith wrote:\n> Try something more in the range of 4 clients/CPU and set the scale to \n> closer to twice that (so with a dual-core system you might do 8 clients \n> and a scale of 16). If you really want to simulate a large number of \n> clients, do that on another system and connect to the server remotely.\n\nWith 4 clients and scale 10 I get 246 TPS for synchronous_commit \ndisabled and 634 TPS for synchronous_commit enabled. So the effect just \ngot even stronger. That was for CFQ.\n\nFor deadline they are now pretty close, but synchronous_commit disabled \nis still slower. 690 to 727.\n\nValues are AVG from 3 runs each. DROP/CREATE DATABASE and CHECKPOINT; \nbefore each run.\n\n\n-- \nBest regards,\nHannes Dorbath\n",
"msg_date": "Tue, 22 Jan 2008 13:13:57 +0100",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.3 synchronous_commit"
}
] |
[
{
"msg_contents": "Hi\n\nWhich scheduler is recommended for a box that is dedicated to running \npostgres?\n\nI've asked google and found no answers.\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Tue, 22 Jan 2008 09:00:53 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "scheduler"
},
{
"msg_contents": "> Which scheduler is recommended for a box that is dedicated to running\n> postgres?\n>\n> I've asked google and found no answers.\n\nIs it the OS itself?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n",
"msg_date": "Tue, 22 Jan 2008 09:10:07 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scheduler"
},
{
"msg_contents": "Hi\n\n>> Which scheduler is recommended for a box that is dedicated to running\n>> postgres?\n>>\n>> I've asked google and found no answers.\n> \n> Is it the OS itself?\n\nYes, in linux. I've found that cfq or deadline is best, but I haven't \nseen anyone try a benchmark\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Tue, 22 Jan 2008 10:27:21 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scheduler"
},
{
"msg_contents": "On Tue, 22 Jan 2008, Adrian Moisey wrote:\n\n> Hi\n>\n>>> Which scheduler is recommended for a box that is dedicated to running\n>>> postgres?\n>>> \n>>> I've asked google and found no answers.\n>> \n>> Is it the OS itself?\n>\n> Yes, in linux. I've found that cfq or deadline is best, but I haven't seen \n> anyone try a benchmark\n\nit also depends on your hardware. if you have a good battery-backed cache \non your I/O system you may be best off with 'none' (or simple elevator, I \ndon't remember the exact name)\n\nthe more complexity in your I/O that the kernek oesn't know about, the \nless likely it is to make the right decision.\n\nDavid Lang\n",
"msg_date": "Tue, 22 Jan 2008 08:37:21 +0000 (GMT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: scheduler"
},
{
"msg_contents": "Deadline works best for us. The new AS is getting better, but last we\ntried there were issues with it.\n\n- Luke \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Adrian Moisey\n> Sent: Monday, January 21, 2008 11:01 PM\n> To: [email protected]\n> Subject: [PERFORM] scheduler\n> \n> Hi\n> \n> Which scheduler is recommended for a box that is dedicated to \n> running postgres?\n> \n> I've asked google and found no answers.\n> \n> --\n> Adrian Moisey\n> System Administrator | CareerJunction | Your Future Starts Here.\n> Web: www.careerjunction.co.za | Email: [email protected]\n> Phone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 \n> 21 686 6842\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n",
"msg_date": "Tue, 22 Jan 2008 03:45:45 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scheduler"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm developing some routines that will use temporary tables and need \nadvice on how to not lose performance.\n\nI will insert data in a temporary table and use this data to generate \nnew sets that will update or add to the same temporary table.\n\nI have some questions that I'm concerned about:\n\n- I can create indexes on this temp table to optimize queries ?\n\nIf yes...\n- This indexes will be destroyed together (automatic?) the temp table \nwhen the connection is closed ?\n- When I insert a lot of new rows, need to run analyze on this temporary \ntable ?\n\nThanks in Advance.\n\n-- \nLuiz K. Matsumura\nPlan IT Tecnologia Inform�tica Ltda.\n\n",
"msg_date": "Tue, 22 Jan 2008 17:32:17 -0200",
"msg_from": "\"Luiz K. Matsumura\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing queries that use temporary tables"
},
{
"msg_contents": "On Jan 22, 2008 1:32 PM, Luiz K. Matsumura <[email protected]> wrote:\n> Hello,\n>\n> I'm developing some routines that will use temporary tables and need\n> advice on how to not lose performance.\n>\n> I will insert data in a temporary table and use this data to generate\n> new sets that will update or add to the same temporary table.\n>\n> I have some questions that I'm concerned about:\n>\n> - I can create indexes on this temp table to optimize queries ?\n\nYes\n\n> If yes...\n> - This indexes will be destroyed together (automatic?) the temp table\n> when the connection is closed ?\n\nYes\n\n> - When I insert a lot of new rows, need to run analyze on this temporary\n> table ?\n\nYes\n",
"msg_date": "Tue, 22 Jan 2008 14:34:15 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing queries that use temporary tables"
}
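Putting those three answers together, the basic pattern looks something like this (a sketch; the table, column and index names are made up for illustration):

-- the temp table, its index and its contents all vanish automatically
-- when the session ends (or at commit, if ON COMMIT DROP is used)
CREATE TEMPORARY TABLE work_set (id integer, total numeric);
CREATE INDEX work_set_id_idx ON work_set (id);

-- ... bulk INSERT / UPDATE rounds against work_set ...

-- autovacuum cannot see other sessions' temp tables, so after loading a lot
-- of rows the session has to refresh the statistics itself
ANALYZE work_set;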
] |
[
{
"msg_contents": "Hi -performance,\n\nWhile testing 8.3, I found this query which is equally slow on 8.1 and\n8.3 and seems to be really slow for a not so complex query. The stats\nare as good as possible and the behaviour of PostgreSQL seems to be\nlogical considering the stats but I'm looking for a workaround to\nspeed up this query.\n\nSo here is the original query:\ncityvox_prod=# EXPLAIN ANALYZE SELECT vq.codequar, vq.liblong, vq.libcourt\n\tFROM lieu l, vilquartier vq, rubtylieu rtl, genrelieu gl, lieugelieu lgl\n\tWHERE l.codequar = vq.codequar AND l.dfinvalidlieu is null AND\nvq.codevil = 'MUC' AND lgl.numlieu = l.numlieu AND lgl.codegelieu =\ngl.codegelieu\n\tAND gl.codetylieu = rtl.codetylieu AND rtl.codeth = 'RES' -- the\ninteresting part is here\n\tGROUP BY vq.codequar, vq.liblong, vq.libcourt, vq.flagintramuros\nORDER BY vq.flagintramuros, vq.liblong;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2773.18..2773.25 rows=26 width=43) (actual\ntime=602.822..602.829 rows=13 loops=1)\n Sort Key: vq.flagintramuros, vq.liblong\n Sort Method: quicksort Memory: 26kB\n -> HashAggregate (cost=2772.31..2772.57 rows=26 width=43) (actual\ntime=602.769..602.778 rows=13 loops=1)\n -> Hash Join (cost=2737.48..2769.83 rows=248 width=43)\n(actual time=601.999..602.580 rows=167 loops=1)\n Hash Cond: ((vq.codequar)::text = (l.codequar)::text)\n -> Bitmap Heap Scan on vilquartier vq\n(cost=1.91..27.50 rows=47 width=43) (actual time=0.032..0.067 rows=47\nloops=1)\n Recheck Cond: ((codevil)::text = 'MUC'::text)\n -> Bitmap Index Scan on idx_vilquartier_codevil\n(cost=0.00..1.90 rows=47 width=0) (actual time=0.023..0.023 rows=47\nloops=1)\n Index Cond: ((codevil)::text = 'MUC'::text)\n -> Hash (cost=2646.25..2646.25 rows=7145 width=5)\n(actual time=601.955..601.955 rows=62526 loops=1)\n -> Nested Loop (cost=0.00..2646.25 rows=*7145*\nwidth=5) (actual time=0.058..548.618 rows=*62526* loops=1)\n -> Nested Loop (cost=0.00..349.71\nrows=7232 width=4) (actual time=0.049..147.221 rows=66292 loops=1)\n -> Nested Loop (cost=0.00..8.59\nrows=13 width=4) (actual time=0.027..0.254 rows=88 loops=1)\n -> Seq Scan on rubtylieu rtl\n(cost=0.00..2.74 rows=1 width=4) (actual time=0.013..0.026 rows=1\nloops=1)\n Filter: ((codeth)::text =\n'RES'::text)\n -> Index Scan using\nind_genrelieu2 on genrelieu gl (cost=0.00..5.68 rows=*14* width=8)\n(actual time=0.013..0.119 rows=*88* loops=1)\n Index Cond:\n((gl.codetylieu)::text = (rtl.codetylieu)::text)\n -> Index Scan using\nidx_lieugelieu_codegelieu on lieugelieu lgl (cost=0.00..18.33\nrows=633 width=8) (actual time=0.014..0.802 rows=753 loops=88)\n Index Cond:\n((lgl.codegelieu)::text = (gl.codegelieu)::text)\n -> Index Scan using pk_lieu on lieu l\n(cost=0.00..0.31 rows=1 width=9) (actual time=0.003..0.004 rows=1\nloops=66292)\n Index Cond: (l.numlieu = lgl.numlieu)\n Filter: (l.dfinvalidlieu IS NULL)\n Total runtime: 602.930 ms\n\nThe query is looking for parts of the city where we can find\nrestaurants. The problem of this query is that we have several\ncategories (here genrelieu) for a given type (here rubtylieu) and we\nhave several types for a given theme (here the theme is codeth =\n'RES'). When the value of the theme is RES (restaurants), we have only\n1 type (it's also RES). 
The fact is that there are a lot of rows for\nthe value RES in genrelieu (all the types of food and so on) compared\nto other values. Considering that PostgreSQL doesn't have the value of\nRES when building the stats for genrelieu, 14 rows is an expected\nvalue considering the distribution of the values in genrelieu so\nthere's nothing really wrong here. But it's really really slow.\n\nIf I remove the join on rubtylieu and I inject directly the value\nobtained in the query, the stats are exact and I get a far better\nplan:\n\ncityvox_prod=# EXPLAIN ANALYZE SELECT vq.codequar, vq.liblong, vq.libcourt\n\tFROM lieu l, vilquartier vq, genrelieu gl, lieugelieu lgl\n\tWHERE l.codequar = vq.codequar AND l.dfinvalidlieu is null AND\nvq.codevil = 'MUC' AND lgl.numlieu = l.numlieu AND lgl.codegelieu =\ngl.codegelieu\n\tAND gl.codetylieu = 'RES'\n\tGROUP BY vq.codequar, vq.liblong, vq.libcourt, vq.flagintramuros\nORDER BY vq.flagintramuros, vq.liblong;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7070.53..7070.59 rows=26 width=43) (actual\ntime=8.502..8.511 rows=13 loops=1)\n Sort Key: vq.flagintramuros, vq.liblong\n Sort Method: quicksort Memory: 26kB\n -> HashAggregate (cost=7069.65..7069.91 rows=26 width=43) (actual\ntime=8.451..8.458 rows=13 loops=1)\n -> Hash Join (cost=10.06..7053.22 rows=1643 width=43)\n(actual time=0.318..8.267 rows=167 loops=1)\n Hash Cond: ((lgl.codegelieu)::text = (gl.codegelieu)::text)\n -> Nested Loop (cost=2.63..7001.77 rows=7358\nwidth=47) (actual time=0.098..7.361 rows=973 loops=1)\n -> Nested Loop (cost=2.63..5527.36 rows=4319\nwidth=47) (actual time=0.084..2.574 rows=630 loops=1)\n -> Index Scan using\nidx_vilquartier_codevil on vilquartier vq (cost=0.00..34.06 rows=47\nwidth=43) (actual time=0.023..0.062 rows=47 loops=1)\n Index Cond: ((codevil)::text = 'MUC'::text)\n -> Bitmap Heap Scan on lieu l\n(cost=2.63..115.05 rows=146 width=9) (actual time=0.014..0.035 rows=13\nloops=47)\n Recheck Cond: ((l.codequar)::text =\n(vq.codequar)::text)\n Filter: (l.dfinvalidlieu IS NULL)\n -> Bitmap Index Scan on\nlieu_i_codequar (cost=0.00..2.59 rows=146 width=0) (actual\ntime=0.010..0.010 rows=13 loops=47)\n Index Cond: ((l.codequar)::text\n= (vq.codequar)::text)\n -> Index Scan using\nidx_lieugelieu_numlieu_principal on lieugelieu lgl (cost=0.00..0.32\nrows=2 width=8) (actual time=0.003..0.005 rows=2 loops=630)\n Index Cond: (lgl.numlieu = l.numlieu)\n -> Hash (cost=6.33..6.33 rows=88 width=4) (actual\ntime=0.153..0.153 rows=88 loops=1)\n -> Bitmap Heap Scan on genrelieu gl\n(cost=2.23..6.33 rows=*88* width=4) (actual time=0.028..0.088\nrows=*88* loops=1)\n Recheck Cond: ((codetylieu)::text = 'RES'::text)\n -> Bitmap Index Scan on ind_genrelieu2\n(cost=0.00..2.21 rows=88 width=0) (actual time=0.021..0.021 rows=88\nloops=1)\n Index Cond: ((codetylieu)::text = 'RES'::text)\n Total runtime: 8.589 ms\n\nSo the question is: is there any way to improve the results of the\noriginal query, other than doing a first query in the application to\nget the list of types and inject them in a second query (the one just\nabove)?\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Wed, 23 Jan 2008 01:57:07 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Workaround for cross column stats dependency"
},
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> So the question is: is there any way to improve the results of the\n> original query, other than doing a first query in the application to\n> get the list of types and inject them in a second query (the one just\n> above)?\n\nWell, if you're willing to cheat like mad, you can use a phony immutable\nfunction to perform that injection. Here's a really silly example in\nthe regression database:\n\nregression=# create or replace function getu2(int) returns int[] as $$\nselect array(select unique2 from tenk1 where thousand = $1);\n$$ language sql immutable;\nCREATE FUNCTION\nregression=# explain select * from tenk1 where unique1 = any(getu2(42));\n QUERY PLAN \n------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tenk1 (cost=38.59..73.80 rows=10 width=244)\n Recheck Cond: (unique1 = ANY ('{381,932,2369,4476,5530,6342,6842,6961,7817,7973}'::integer[]))\n -> Bitmap Index Scan on tenk1_unique1 (cost=0.00..38.59 rows=10 width=0)\n Index Cond: (unique1 = ANY ('{381,932,2369,4476,5530,6342,6842,6961,7817,7973}'::integer[]))\n(4 rows)\n\nSince the function is marked immutable, it'll be pre-evaluated during\nplanning and then the constant array result is exposed for statistics\npurposes.\n\nNow this method *only* works for interactive queries, or EXECUTE'd\nqueries in plpgsql, because you don't want the plan containing the\nfolded constants to get cached. At least not if you're worried about\nresponding promptly to changes in the table you're fetching from.\nBut if that table is essentially constant anyway in your application,\nthere's little downside to this trick.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Jan 2008 20:43:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround for cross column stats dependency "
},
{
"msg_contents": "On Jan 23, 2008 2:43 AM, Tom Lane <[email protected]> wrote:\n> regression=# create or replace function getu2(int) returns int[] as $$\n> select array(select unique2 from tenk1 where thousand = $1);\n> $$ language sql immutable;\n> CREATE FUNCTION\n> regression=# explain select * from tenk1 where unique1 = any(getu2(42));\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on tenk1 (cost=38.59..73.80 rows=10 width=244)\n> Recheck Cond: (unique1 = ANY ('{381,932,2369,4476,5530,6342,6842,6961,7817,7973}'::integer[]))\n> -> Bitmap Index Scan on tenk1_unique1 (cost=0.00..38.59 rows=10 width=0)\n> Index Cond: (unique1 = ANY ('{381,932,2369,4476,5530,6342,6842,6961,7817,7973}'::integer[]))\n> (4 rows)\n\nI'll give it a try tomorrow.\n\n> Now this method *only* works for interactive queries, or EXECUTE'd\n> queries in plpgsql, because you don't want the plan containing the\n> folded constants to get cached. At least not if you're worried about\n> responding promptly to changes in the table you're fetching from.\n> But if that table is essentially constant anyway in your application,\n> there's little downside to this trick.\n\nYeah, that sounds like a good idea in our case. We don't use prepared\nstatements for these queries.\n\nI'll post my results tomorrow morning.\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Wed, 23 Jan 2008 03:02:50 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Workaround for cross column stats dependency"
},
{
"msg_contents": "On Jan 23, 2008 3:02 AM, Guillaume Smet <[email protected]> wrote:\n> I'll post my results tomorrow morning.\n\nIt works perfectly well:\ncityvox_prod=# CREATE OR REPLACE FUNCTION\ngetTypesLieuFromTheme(codeTheme text) returns text[] AS\n$f$\nSELECT ARRAY(SELECT codetylieu::text FROM rubtylieu WHERE codeth = $1);\n$f$ LANGUAGE SQL IMMUTABLE;\nCREATE FUNCTION\n\ncityvox_prod=# EXPLAIN ANALYZE SELECT vq.codequar, vq.liblong, vq.libcourt\nFROM lieu l, vilquartier vq, genrelieu gl, lieugelieu lgl\nWHERE l.codequar = vq.codequar AND l.dfinvalidlieu is null AND\nvq.codevil = 'MUC' AND lgl.numlieu = l.numlieu AND lgl.codegelieu =\ngl.codegelieu\nAND gl.codetylieu = ANY(getTypesLieuFromTheme('RES'))\nGROUP BY vq.codequar, vq.liblong, vq.libcourt, vq.flagintramuros\nORDER BY vq.flagintramuros, vq.liblong;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=5960.02..5960.08 rows=26 width=43) (actual\ntime=7.467..7.475 rows=13 loops=1)\n Sort Key: vq.flagintramuros, vq.liblong\n Sort Method: quicksort Memory: 26kB\n -> HashAggregate (cost=5959.15..5959.41 rows=26 width=43) (actual\ntime=7.421..7.428 rows=13 loops=1)\n -> Hash Join (cost=7.32..5944.52 rows=1463 width=43)\n(actual time=0.241..7.212 rows=167 loops=1)\n Hash Cond: ((lgl.codegelieu)::text = (gl.codegelieu)::text)\n -> Nested Loop (cost=0.00..5898.00 rows=6552\nwidth=47) (actual time=0.038..6.354 rows=973 loops=1)\n -> Nested Loop (cost=0.00..4585.64 rows=3845\nwidth=47) (actual time=0.031..1.959 rows=630 loops=1)\n -> Index Scan using\nidx_vilquartier_codevil on vilquartier vq (cost=0.00..34.06 rows=47\nwidth=43) (actual time=0.015..0.047 rows=47 loops=1)\n Index Cond: ((codevil)::text = 'MUC'::text)\n -> Index Scan using idx_test on lieu l\n(cost=0.00..95.53 rows=105 width=9) (actual time=0.008..0.024 rows=13\nloops=47)\n Index Cond: ((l.codequar)::text =\n(vq.codequar)::text)\n -> Index Scan using\nidx_lieugelieu_numlieu_principal on lieugelieu lgl (cost=0.00..0.32\nrows=2 width=8) (actual time=0.003..0.004 rows=2 loops=630)\n Index Cond: (lgl.numlieu = l.numlieu)\n -> Hash (cost=6.22..6.22 rows=88 width=4) (actual\ntime=0.146..0.146 rows=88 loops=1)\n -> Bitmap Heap Scan on genrelieu gl\n(cost=2.23..6.22 rows=88 width=4) (actual time=0.022..0.075 rows=88\nloops=1)\n Recheck Cond: ((codetylieu)::text = ANY\n('{RES}'::text[]))\n -> Bitmap Index Scan on ind_genrelieu2\n(cost=0.00..2.21 rows=88 width=0) (actual time=0.016..0.016 rows=88\nloops=1)\n Index Cond: ((codetylieu)::text = ANY\n('{RES}'::text[]))\n Total runtime: 7.558 ms\n\nIt seems like a good tip to keep in mind.\n\nThanks for your help.\n\n--\nGuillaume\n",
"msg_date": "Wed, 23 Jan 2008 10:05:53 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Workaround for cross column stats dependency"
}
] |
[
{
"msg_contents": "\nHey folks --\n\nFor starters, I am fairly new to database tuning and I'm still learning \nthe ropes. I understand the concepts but I'm still learning the real \nworld impact of some of the configuration options for postgres.\n\nWe have an application that has been having some issues with performance \nwithin postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a \nnice little performance increase just off the improved query \noptimization, but we are still having other performance issues.\n\nThe database itself is not that large -- a db_dump of the sql file as \ntext is only about 110MB. I haven't checked the exact size of the actual \ndata base, but the entire data directory is smaller than the available \nmemory at about 385MB including logs and config files. This is a single \ndatabase with a relatively small number of client connections (50 or so) \nmaking a fair number of smaller queries. This is not a massive data \neffort by any means at this time, but it will be growing.\n\nWe have available currently ~4GB (8GB total) for Postgres. We will be \nmoving to a server that will have about 24GB (32GB total) available for \nthe database, with the current server becoming a hot backup, probably \nwith slony or something similar to keep the databases in sync.\n\nI've been monitoring the memory usage of postgres on the current system \nand it seems like none of the threads ever allocate more than about \n400MB total and about 80-90MB shared memory. It seems to me that since \nwe have a very large chunk of memory relative to the database size we \nshould be loading the entire database into memory. How can we be sure \nwe're getting the most out of the memory we're allocating to postgres? \nWhat can we do to improve the memory usage, looking for performance \nfirst and foremost, on both the larger and smaller systems?\n\nHere's the salient config items for the 8GB system:\n\nmax_connections = 200 # realistically we expect 50-150 open\nshared_buffers = 38000\nsort_mem = 1048576\nwork_mem = 32000\nmaintenance_work_mem = 32000\nmax_fsm_pages = 480001 # probably too large for the max_fsm_* \nmax_fsm_relations = 20000 # items; one Db with ~400 tables.\neffective_cache_size = 212016 # ~2GB, could probably double this\n\n\nThanks,\nJ\n-- \nJoshua J. Fielek\nSr. Software Engineer\nConcursive Corporation\n223 East City Hall Ave., Suite 212\nNorfolk, VA 23510\nPhone : (757) 627-3002x6656\nMobile : (757) 754-4462\nFax : (757) 627-8773\nEmail : [email protected]\nhttp://www.concursive.com\n",
"msg_date": "Tue, 22 Jan 2008 23:11:48 -0500",
"msg_from": "Joshua Fielek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Making the most of memory?"
},
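Since the exact on-disk size hasn't been checked yet, it can be read straight out of the running cluster (a sketch; these functions exist in 8.1 and later):

-- total on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));

-- the ten largest tables, heap only (pg_total_relation_size would include indexes)
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY pg_relation_size(oid) DESC
 LIMIT 10;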
{
"msg_contents": "On Jan 22, 2008 10:11 PM, Joshua Fielek <[email protected]> wrote:\n>\n> Hey folks --\n>\n> For starters, I am fairly new to database tuning and I'm still learning\n> the ropes. I understand the concepts but I'm still learning the real\n> world impact of some of the configuration options for postgres.\n>\n> We have an application that has been having some issues with performance\n> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a\n> nice little performance increase just off the improved query\n> optimization, but we are still having other performance issues.\n>\n> The database itself is not that large -- a db_dump of the sql file as\n> text is only about 110MB. I haven't checked the exact size of the actual\n> data base, but the entire data directory is smaller than the available\n> memory at about 385MB including logs and config files. This is a single\n> database with a relatively small number of client connections (50 or so)\n> making a fair number of smaller queries. This is not a massive data\n> effort by any means at this time, but it will be growing.\n>\n> We have available currently ~4GB (8GB total) for Postgres.\n\nHow are you \"allocating\" this memory to postgresql? VM, ulimit? Or\nare you just saying that you want to tune pgsql to use about 4Gig of\nram?\n\n> We will be\n> moving to a server that will have about 24GB (32GB total) available for\n> the database, with the current server becoming a hot backup, probably\n> with slony or something similar to keep the databases in sync.\n>\n> I've been monitoring the memory usage of postgres on the current system\n> and it seems like none of the threads ever allocate more than about\n> 400MB total and about 80-90MB shared memory. It seems to me that since\n> we have a very large chunk of memory relative to the database size we\n> should be loading the entire database into memory.\n\nYou'd think so. But you might be wrong. The OS itself will naturally\ncache all of the data in memory anyway. Having PostgreSQL cache it\nmight as well might make things faster, might make them slower,\ndepending on your usage patterns.\n\nHowever, it's far more important that PostgreSQL be able to allocate\nmemory for individual backends for things like sorts and maintenance\nthan to use it all to hold mostly static data that may or may not be\naccessed all that often.\n\n> How can we be sure\n> we're getting the most out of the memory we're allocating to postgres?\n\nI'd suggest not worrying too much about it. Using 100% of your memory\nis much more dangerous than not. Since when you run out the machine\nwill start swapping and slow to a crawl.\n\n> What can we do to improve the memory usage, looking for performance\n> first and foremost, on both the larger and smaller systems?\n>\n> Here's the salient config items for the 8GB system:\n>\n> max_connections = 200 # realistically we expect 50-150 open\n> shared_buffers = 38000\n\nThat's a good number for the size database you're currently running.\nHaving shared_buffers be larger than your data set doesn't really\nhelp. Depending on your workload, having it be smaller can help (i.e.\nlots of small transactions).\n\n> sort_mem = 1048576\n\nThis setting doesn't exist in 8.1 and 8.2 anymore, it was replaced\nwith this one:\n\n> work_mem = 32000\n\nWhich, by the way, is a pretty reasonable number, except if you're\ncommonly handling 200 actual connections in which case you could be\nallocating 32M*200 = 6.4Gig max if each connection is running a sort\nat the same time. 
If most won't be using that much, you might be\nsafe.\n\n> maintenance_work_mem = 32000\n> max_fsm_pages = 480001 # probably too large for the max_fsm_*\n\nThat's ok. it's better to allocate a few hundred thousand extra fsm\npages than not. Since you have to restart to change it, it's better\nto be prepared.\n\n> max_fsm_relations = 20000 # items; one Db with ~400 tables.\n> effective_cache_size = 212016 # ~2GB, could probably double this\n\nSince effective cache size doesn't allocate anything, but rather acts\nas a big round knob telling pgsql about how much memory the OS is\ncaching postgresql stuff in, you can approximate it.\n\nI'd worry more about what kind of drive subsystem you have in this\nsystem. In a database server the I/O subsystem is often the most\nimportant part of planning for good performance.\n",
"msg_date": "Tue, 22 Jan 2008 23:20:43 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
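To put numbers on that worst case for a running server, the two settings involved can simply be read back (a sketch; the arithmetic in the comments assumes the values posted above):

SHOW work_mem;         -- 32000 in the posted config, i.e. 32000 kB, roughly 31 MB per sort/hash
SHOW max_connections;  -- 200

-- rough ceiling if every connection ran one big sort at once:
-- ~31 MB * 200 is a bit over 6 GB, and a single complex query can use
-- work_mem once per sort or hash step, so the real worst case can be higher still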
{
"msg_contents": "In response to Joshua Fielek <[email protected]>:\n> \n> Hey folks --\n> \n> For starters, I am fairly new to database tuning and I'm still learning \n> the ropes. I understand the concepts but I'm still learning the real \n> world impact of some of the configuration options for postgres.\n> \n> We have an application that has been having some issues with performance \n> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a \n> nice little performance increase just off the improved query \n> optimization, but we are still having other performance issues.\n> \n> The database itself is not that large -- a db_dump of the sql file as \n> text is only about 110MB. I haven't checked the exact size of the actual \n> data base, but the entire data directory is smaller than the available \n> memory at about 385MB including logs and config files. This is a single \n> database with a relatively small number of client connections (50 or so) \n> making a fair number of smaller queries. This is not a massive data \n> effort by any means at this time, but it will be growing.\n> \n> We have available currently ~4GB (8GB total) for Postgres. We will be \n> moving to a server that will have about 24GB (32GB total) available for \n> the database, with the current server becoming a hot backup, probably \n> with slony or something similar to keep the databases in sync.\n> \n> I've been monitoring the memory usage of postgres on the current system \n> and it seems like none of the threads ever allocate more than about \n> 400MB total and about 80-90MB shared memory. It seems to me that since \n> we have a very large chunk of memory relative to the database size we \n> should be loading the entire database into memory. How can we be sure \n> we're getting the most out of the memory we're allocating to postgres? \n> What can we do to improve the memory usage, looking for performance \n> first and foremost, on both the larger and smaller systems?\n\nEvery system is a little different. I recommend you do some profiling.\n\nFirst off, Install the pg_buffercache add-on. This gives you an easy\nview to see how much of your shared_buffers are being used, with a\nquery like:\nselect count(*) from pg_buffercache where reldatabase is not null;\n\nThere is also a lot of interesting information in the pg_stat_database\ntable, i.e.:\nselect sum(blks_hit) from pg_stat_database;\nWhich gives you the # of reads that were satisfied from shared_buffers,\nor\nselect sum(blks_read) from pg_stat_database;\nwhich gives you the # of reads that had to go to disk.\n\nThere are lots of other stats you can graph, but those are some that I\nfind particularly telling as to how things are being used.\n\n From there, I recommend that you graph those #s and any others that you\nfind interesting. We use MRTG, but there are certainly other options.\nAdd that to stats collecting that you should be doing on machine data,\nsuch as overall IO and CPU usage, and you start to get a pretty clear\nview of what your machine is doing.\n\nNote that you have to flip some stats collecting switches on in your\npostgresql.conf file, and overall this can put some additional load on\nyour machine. My opinion is that you're _FAR_ better off sizing your\nhardware up a bit so that you can gather this data on a continual basis\nthan if you don't know what's going on.\n\nAnother thing to do is turn on statement timing. This will create huge\nlog files and increase your IO traffic considerably, but the data involved\nis priceless. 
Run it through pgFouine periodically (possibly on a schedule\nvia a cron job) to isolate problematic queries and address them\nindividually.\n\nNote that it can be tempting to configure Postgres to \"only log queries\nthat take longer than 500ms\" in an attempt to \"only catch the slow and\nproblematic queries without creating unmanageable amounts of IO\" The\ndanger in this is that you may have some relatively fast queries that\nare used so often that they constitute a serious performance problem.\nOptimizing a query from 25ms to 22ms doesn't seem like it's worth the\neffort, but if it's run 1x10^25 times a day it is. If the IO load of\nlogging all queries presents too much of a slowdown, I recommend selecting\ndata collection periods and do it for perhaps an hour, then turn it\nback off. Maybe once a week or so.\n\nHope this helps.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Wed, 23 Jan 2008 09:59:08 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
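Building on the blks_hit/blks_read counters mentioned above, one derived number that is handy to graph is the overall buffer cache hit ratio (a sketch; on 8.2 this needs stats_block_level = on, and the counters have to have accumulated for a while to mean anything):

SELECT sum(blks_hit)::float
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS buffer_hit_ratio
  FROM pg_stat_database;

Keep in mind that a "read" here only means the block was not in shared_buffers; it may still have been served from the OS page cache rather than from disk.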
{
"msg_contents": "On Jan 23, 2008 8:01 AM, mike long <[email protected]> wrote:\n> Scott,\n>\n> What are your thoughts on using one of those big RAM appliances for\n> storing a Postgres database?\n\nI assume you're talking about solid state drives? They have their\nuses, but for most use cases, having plenty of RAM in your server will\nbe a better way to spend your money. For certain high throughput,\nrelatively small databases (i.e. transactional work) the SSD can be\nquite useful. For large reporting databases running into the terabyte\nrange they're prohibitively expensive.\n",
"msg_date": "Wed, 23 Jan 2008 12:44:38 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "Josh what about the rest of your system? What operating system? Your\nhardware setup. Drives? Raids? What indices do you have setup for\nthese queries? There are other reasons that could cause bad queries\nperformance.\n\nOn Jan 22, 2008 11:11 PM, Joshua Fielek <[email protected]> wrote:\n>\n> Hey folks --\n>\n> For starters, I am fairly new to database tuning and I'm still learning\n> the ropes. I understand the concepts but I'm still learning the real\n> world impact of some of the configuration options for postgres.\n>\n> We have an application that has been having some issues with performance\n> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a\n> nice little performance increase just off the improved query\n> optimization, but we are still having other performance issues.\n>\n> The database itself is not that large -- a db_dump of the sql file as\n> text is only about 110MB. I haven't checked the exact size of the actual\n> data base, but the entire data directory is smaller than the available\n> memory at about 385MB including logs and config files. This is a single\n> database with a relatively small number of client connections (50 or so)\n> making a fair number of smaller queries. This is not a massive data\n> effort by any means at this time, but it will be growing.\n>\n> We have available currently ~4GB (8GB total) for Postgres. We will be\n> moving to a server that will have about 24GB (32GB total) available for\n> the database, with the current server becoming a hot backup, probably\n> with slony or something similar to keep the databases in sync.\n>\n> I've been monitoring the memory usage of postgres on the current system\n> and it seems like none of the threads ever allocate more than about\n> 400MB total and about 80-90MB shared memory. It seems to me that since\n> we have a very large chunk of memory relative to the database size we\n> should be loading the entire database into memory. How can we be sure\n> we're getting the most out of the memory we're allocating to postgres?\n> What can we do to improve the memory usage, looking for performance\n> first and foremost, on both the larger and smaller systems?\n>\n> Here's the salient config items for the 8GB system:\n>\n> max_connections = 200 # realistically we expect 50-150 open\n> shared_buffers = 38000\n> sort_mem = 1048576\n> work_mem = 32000\n> maintenance_work_mem = 32000\n> max_fsm_pages = 480001 # probably too large for the max_fsm_*\n> max_fsm_relations = 20000 # items; one Db with ~400 tables.\n> effective_cache_size = 212016 # ~2GB, could probably double this\n>\n>\n> Thanks,\n> J\n> --\n> Joshua J. Fielek\n> Sr. Software Engineer\n> Concursive Corporation\n> 223 East City Hall Ave., Suite 212\n> Norfolk, VA 23510\n> Phone : (757) 627-3002x6656\n> Mobile : (757) 754-4462\n> Fax : (757) 627-8773\n> Email : [email protected]\n> http://www.concursive.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n",
"msg_date": "Wed, 23 Jan 2008 14:06:02 -0500",
"msg_from": "Rich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "Scott Marlowe wrote:\n> I assume you're talking about solid state drives? They have their\n> uses, but for most use cases, having plenty of RAM in your server will\n> be a better way to spend your money. For certain high throughput,\n> relatively small databases (i.e. transactional work) the SSD can be\n> quite useful.\n\nUnless somebody has changes some physics recently, I'm not understanding \nthe recent discussions of SSD in the general press. Flash has a limited \nnumber of writes before it becomes unreliable. On good quality consumer \ngrade, that's about 300,000 writes, while on industrial grade it's about \n10 times that. That's fine for mp3 players and cameras; even \nprofessional photographers probably won't rewrite the same spot on a \nflash card that many times in a lifetime. But for database \napplications, 300,000 writes is trivial. 3 million will go a lot longer, \nbut in non-archival applications, I imagine even that mark won't take \nbut a year or two to surpass.\n\n-- \nGuy Rouillier\n",
"msg_date": "Wed, 23 Jan 2008 14:57:09 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "\nOn Jan 23, 2008, at 2:57 PM, Guy Rouillier wrote:\n\n> Scott Marlowe wrote:\n>> I assume you're talking about solid state drives? They have their\n>> uses, but for most use cases, having plenty of RAM in your server \n>> will\n>> be a better way to spend your money. For certain high throughput,\n>> relatively small databases (i.e. transactional work) the SSD can be\n>> quite useful.\n>\n> Unless somebody has changes some physics recently, I'm not \n> understanding the recent discussions of SSD in the general press. \n> Flash has a limited number of writes before it becomes unreliable. \n> On good quality consumer grade, that's about 300,000 writes, while \n> on industrial grade it's about 10 times that. That's fine for mp3 \n> players and cameras; even professional photographers probably won't \n> rewrite the same spot on a flash card that many times in a \n> lifetime. But for database applications, 300,000 writes is \n> trivial. 3 million will go a lot longer, but in non-archival \n> applications, I imagine even that mark won't take but a year or two \n> to surpass.\n\nPlease let outdated numbers rest in peace.\nhttp://www.storagesearch.com/ssdmyths-endurance.html\n\nConclusion:\n\"With current technologies write endurance is not a factor you should \nbe worrying about when deploying flash SSDs for server acceleration \napplications - even in a university or other analytics intensive \nenvironment. \"\n\nThat said, postgresql is likely making assumptions about non-volatile \nstorage that will need to be shattered once SSDs become more widely \ndeployed. Perhaps SSDs will replace RAID BBUs and then the HDs \nthemselves?\n\nCheers,\nM\n",
"msg_date": "Wed, 23 Jan 2008 15:40:28 -0500",
"msg_from": "\"A.M.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "Guy Rouillier wrote:\n\n> Scott Marlowe wrote:\n>\n>> I assume you're talking about solid state drives? They have their\n>> uses, but for most use cases, having plenty of RAM in your server will\n>> be a better way to spend your money. For certain high throughput,\n>> relatively small databases (i.e. transactional work) the SSD can be\n>> quite useful.\n>\n>\n> Unless somebody has changes some physics recently, I'm not \n> understanding the recent discussions of SSD in the general press. \n> Flash has a limited number of writes before it becomes unreliable. On \n> good quality consumer grade, that's about 300,000 writes, while on \n> industrial grade it's about 10 times that. That's fine for mp3 \n> players and cameras; even professional photographers probably won't \n> rewrite the same spot on a flash card that many times in a lifetime. \n> But for database applications, 300,000 writes is trivial. 3 million \n> will go a lot longer, but in non-archival applications, I imagine even \n> that mark won't take but a year or two to surpass.\n>\nI think the original poster was talking about drives like these:\nhttp://www.texmemsys.com/\n\nBasically, they're not using Flash, they're just big ol' hunks of \nbattery-backed RAM. Not unlike a 10GB battery backed buffer for your \nraid, except there is no raid.\n\nBrian\n\n",
"msg_date": "Wed, 23 Jan 2008 15:50:59 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "Guy Rouillier wrote:\n> Scott Marlowe wrote:\n>> I assume you're talking about solid state drives? They have their\n>> uses, but for most use cases, having plenty of RAM in your server will\n>> be a better way to spend your money. For certain high throughput,\n>> relatively small databases (i.e. transactional work) the SSD can be\n>> quite useful.\n> \n> Unless somebody has changes some physics recently, I'm not understanding \n> the recent discussions of SSD in the general press. Flash has a limited \n> number of writes before it becomes unreliable. On good quality consumer \n> grade, that's about 300,000 writes, while on industrial grade it's about \n> 10 times that. That's fine for mp3 players and cameras; even \n> professional photographers probably won't rewrite the same spot on a \n> flash card that many times in a lifetime. But for database \n> applications, 300,000 writes is trivial. 3 million will go a lot longer, \n> but in non-archival applications, I imagine even that mark won't take \n> but a year or two to surpass.\n\nOne trick they use is to remap the physical Flash RAM to different logical addresses. Typical apps update a small percentage of the data frequently, and the rest of the data rarely or never. By shuffling the physical Flash RAM around, the media lasts a lot longer than a simple analysis might indicate.\n\nCraig\n",
"msg_date": "Wed, 23 Jan 2008 13:06:22 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "On Wed, 23 Jan 2008, Guy Rouillier wrote:\n\n> Flash has a limited number of writes before it becomes unreliable. On \n> good quality consumer grade, that's about 300,000 writes, while on \n> industrial grade it's about 10 times that.\n\nThe main advance that's made SSD practical given the write cycle \nlimitation is increasing sophisticated wear leveling: \nhttp://en.wikipedia.org/wiki/Wear_levelling\n\nThe best devices now use static wear levelling; overviews at \nhttp://en.wikipedia.org/wiki/Static_Wear_Leveling and \nhttp://www.storagesearch.com/siliconsys-art1.html\n\nThe basic idea is that the number of writes to each block is tracked, and \nas it approaches the limit that block gets swapped with one that has been \nmore read-only. So essentially the number of writes before failure \napproaches something closer to 1M x number of blocks. This means that as \nthe size of the device goes up, so does its longevity. If you believe the \nhype, the combination in the increase in size of designs with these more \nsophisticated wear-levelling approaches has now crossed the line where \nit's more likely a standard moving-parts hard drive will fail first if you \ncompare it to a similarly sized SDD doing the same job (a standard \nmechanical drive under heavy write load also wears out faster than one \ndoing less work).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 23 Jan 2008 19:54:24 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "On Jan 23, 2008 1:57 PM, Guy Rouillier <[email protected]> wrote:\n> Scott Marlowe wrote:\n> > I assume you're talking about solid state drives? They have their\n> > uses, but for most use cases, having plenty of RAM in your server will\n> > be a better way to spend your money. For certain high throughput,\n> > relatively small databases (i.e. transactional work) the SSD can be\n> > quite useful.\n>\n> Unless somebody has changes some physics recently, I'm not understanding\n> the recent discussions of SSD in the general press. Flash has a limited\n> number of writes before it becomes unreliable. On good quality consumer\n\nActually, I was referring to all SSD systems, some of which are based\non flash memory, some on DRAM, sometimes backed by hard drives.\n\nThere's always a use case for a given piece of tech.\n",
"msg_date": "Wed, 23 Jan 2008 19:45:07 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "On Wed, Jan 23, 2008 at 07:54:24PM -0500, Greg Smith wrote:\n> (a standard mechanical drive under heavy write load also wears out faster\n> than one doing less work).\n\nWasn't this one of the myths that was dispelled in the Google disk paper a\nwhile ago?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 24 Jan 2008 09:05:23 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "Joshua Fielek wrote:\n> We have an application that has been having some issues with performance \n> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a \n> nice little performance increase just off the improved query \n> optimization, but we are still having other performance issues.\n\nWhat kind of performance issues are you having? A slow query?\n\nWhat kind of transactions are you running? Read-only? A lot of updates? \nHow many transactions per minute?\n\n> We have available currently ~4GB (8GB total) for Postgres. We will be \n> moving to a server that will have about 24GB (32GB total) available for \n> the database, with the current server becoming a hot backup, probably \n> with slony or something similar to keep the databases in sync.\n> \n> I've been monitoring the memory usage of postgres on the current system \n> and it seems like none of the threads ever allocate more than about \n> 400MB total and about 80-90MB shared memory. It seems to me that since \n> we have a very large chunk of memory relative to the database size we \n> should be loading the entire database into memory. How can we be sure \n> we're getting the most out of the memory we're allocating to postgres? \n> What can we do to improve the memory usage, looking for performance \n> first and foremost, on both the larger and smaller systems?\n\nHow are you measuring the amount of memory used? Which operating system \nare you using?\n\nThose numbers don't seem unreasonable to me, though I would've expected \na bit over ~300 MB of shared memory to be used given your shared_buffers \nsetting.\n\nOn a database of ~400MB in size , I doubt you'll ever find use for more \nthan 1-2 gigs of RAM.\n\nOthers have asked about your I/O system, but if the database stays in \nmemory all the time, that shouldn't matter much. Except for one thing: \nfsyncs. Perhaps you're bottlenecked by the fact that each commit needs \nto flush the WAL to disk? A RAID array won't help with that, but a RAID \ncontroller with a battery-backed up cache will. You could try turning \nfsync=off to test that theory, but you don't want to do that in production.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 24 Jan 2008 09:19:58 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
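A quick way to test the fsync theory raised above is pgbench from contrib; the scale factor, client count and database name here are arbitrary placeholders, and fsync should only be switched off on a throwaway test cluster, never in production:

    pgbench -i -s 10 pgbench_test      # build a small disposable test database
    pgbench -c 4 -t 1000 pgbench_test  # note the reported tps with fsync = on
    # set fsync = off in postgresql.conf, reload, and re-run the same command;
    # a large jump in tps points at WAL flushing at commit time, which a
    # battery-backed write cache would largely hide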
{
"msg_contents": "On Jan 23, 2008 2:57 PM, Guy Rouillier <[email protected]> wrote:\n> Unless somebody has changes some physics recently, I'm not understanding\n> the recent discussions of SSD in the general press. Flash has a limited\n> number of writes before it becomes unreliable. On good quality consumer\n> grade, that's about 300,000 writes, while on industrial grade it's about\n> 10 times that. That's fine for mp3 players and cameras; even\n\nwrong. at 1 million writes (which better flash drives can do) wear\nleveling, this can be disproved with a simple back of napkin\ncalculation...\n\nthe major problem with flash drives in the server space is actually\nrandom write performance...if random write performed as well as random\nread for flash ssd, you would be able to replace a stack of 15k sas\ndrives with a single flash ssd in terms of iops.\n\nmerlin\n",
"msg_date": "Thu, 24 Jan 2008 09:01:17 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
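The back-of-napkin calculation alluded to above can be run straight from psql; the drive size, erase-cycle rating and write rate below are illustrative assumptions rather than figures for any particular product:

    -- assume a 32 GB drive, 1,000,000 erase cycles per cell, ideal wear
    -- leveling, and a constant 10 MB/s of writes around the clock
    SELECT (32.0 * 1024^3) * 1000000   -- bytes writable over the drive's life
           / (10 * 1024^2)             -- divided by bytes written per second
           / (60 * 60 * 24 * 365)      -- divided by seconds per year
           AS years_to_wear_out;       -- roughly a century under these assumptions

Even with far less perfect wear leveling the answer stays in decades, which is the point being made above: random write latency, not endurance, is the practical limit.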
{
"msg_contents": "On Wed, 23 Jan 2008, Brian Hurt wrote:\n> I think the original poster was talking about drives like these:\n> http://www.texmemsys.com/\n>\n> Basically, they're not using Flash, they're just big ol' hunks of \n> battery-backed RAM. Not unlike a 10GB battery backed buffer for your raid, \n> except there is no raid.\n\nSo, that web site seems to list products starting at about 32GB in a \nseparate rack-mounted box with redundant everything. I'd be more \ninterested in just putting the WAL on an SSD device, so 500MB or 1GB would \nbe quite sufficient. Can anyone point me towards such a device?\n\nIt'd preferably have a form-factor of a normal 3.5\" hard drive, with a \nnormal SATA or SAS connection, a gigabyte of RAM, a battery, and a \ngigabyte of flash. When the power cuts, the device would copy the RAM over \nto the flash and then power down. The device would therefore get the write \nperformance of normal RAM, without wasting space, and could \n(theoretically) be pretty cheap, and would improve the transaction speed \nof Postgres significantly.\n\nIf someone doesn't already make one, they should!\n\nMatthew\n",
"msg_date": "Thu, 24 Jan 2008 14:15:19 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "> So, that web site seems to list products starting at about 32GB in a\n> separate rack-mounted box with redundant everything. I'd be more\n> interested in just putting the WAL on an SSD device, so 500MB or 1GB\n> would be quite sufficient. Can anyone point me towards such a device?\n\nA dedicated RAID controller with battery-backed cache of ssuficient\nsize and two mirrored disks should not perform that bad, and has the\nadvantage of easy availability.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Thu, 24 Jan 2008 15:26:46 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "[email protected] (\"Scott Marlowe\") writes:\n> On Jan 23, 2008 1:57 PM, Guy Rouillier <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>> > I assume you're talking about solid state drives? They have their\n>> > uses, but for most use cases, having plenty of RAM in your server will\n>> > be a better way to spend your money. For certain high throughput,\n>> > relatively small databases (i.e. transactional work) the SSD can be\n>> > quite useful.\n>>\n>> Unless somebody has changes some physics recently, I'm not understanding\n>> the recent discussions of SSD in the general press. Flash has a limited\n>> number of writes before it becomes unreliable. On good quality consumer\n>\n> Actually, I was referring to all SSD systems, some of which are based\n> on flash memory, some on DRAM, sometimes backed by hard drives.\n>\n> There's always a use case for a given piece of tech.\n\nYeah, I could see an SSD making use of a mixture of technologies...\n - Obviously, it needs a pile of RAM.\n - Then, have a battery that can keep the RAM backed up for [a while].\n - If power goes out, then contents of RAM get copied out to the \"flash\"\n memory.\n\nIn this context, \"flash\" has several merits over disk drives.\nNotably, the absence of moving mechanical parts means:\n - Hopefully lower power consumption than a disk drive\n - Less fragility than a disk drive\n - Quite likely the \"flash\" will be smaller than a disk drive\n\nThe fact that the number of writes may be limited should only be an\nimportant factor if power goes out *INCREDIBLY* frequently, as data\nonly gets written upon power loss.\n\nThe combination of RAM + battery + flash looks like a real winner,\nwhen they are combined using a protocol that takes advantage of their\nstrengths, and which doesn't rest on their weaknesses.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/advocacy.html\nRoses are red\nViolets are blue\nSome poems rhyme\nBut this one doesn't. \n",
"msg_date": "Thu, 24 Jan 2008 11:09:08 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "[email protected] (Florian Weimer) writes:\n>> So, that web site seems to list products starting at about 32GB in a\n>> separate rack-mounted box with redundant everything. I'd be more\n>> interested in just putting the WAL on an SSD device, so 500MB or 1GB\n>> would be quite sufficient. Can anyone point me towards such a device?\n>\n> A dedicated RAID controller with battery-backed cache of ssuficient\n> size and two mirrored disks should not perform that bad, and has the\n> advantage of easy availability.\n\nThat won't provide as \"souped up\" performance as \"WAL on SSD,\" and\nit's from technical people wishing for things that some of those\nthings actually emerge...\n\nIt appears that the SSD market place is going pretty \"nuts\" right now\nas vendors are hawking flash-based SSD devices that are specifically\ntargeted at replacing disk drives for laptops.\n\nI agree that there would be a considerable value for DBMS applications\nin having availability of a device that combines the strengths of both\nFlash (persistence) and DRAM (sheer volume of IOPs) to provide\nsomething better than they offered alone. I expect that the standard\nsize for this is more likely to be 32GB than 1GB, what with modern\nshrinkage of physical sizing...\n-- \nlet name=\"cbbrowne\" and tld=\"linuxfinances.info\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/spiritual.html\n\"When we write programs that \"learn\", it turns out that we do and they\ndon't.\" -- Alan J. Perlis\n\n",
"msg_date": "Thu, 24 Jan 2008 11:23:02 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "On Jan 22, 2008 11:11 PM, Joshua Fielek <[email protected]> wrote:\n> The database itself is not that large -- a db_dump of the sql file as\n> text is only about 110MB. I haven't checked the exact size of the actual\n> data base, but the entire data directory is smaller than the available\n> memory at about 385MB including logs and config files. This is a single\n> database with a relatively small number of client connections (50 or so)\n> making a fair number of smaller queries. This is not a massive data\n> effort by any means at this time, but it will be growing.\n>\n> We have available currently ~4GB (8GB total) for Postgres. We will be\n> moving to a server that will have about 24GB (32GB total) available for\n> the database, with the current server becoming a hot backup, probably\n> with slony or something similar to keep the databases in sync.\n\nThe database is cached in RAM. As soon as the database files are read\nfor the first time, they will stay cached in the o/s basically forever\n(in either o/s file cache or postgresql buffer cache) as long as there\nare no other demands on memory...not likely in your case. This also\nmeans extra ram is not likely to help performance much if at all.\n\nI'll give you a little hint about postgresql.conf...tuning shared\nbuffers rarely has a huge impact on performance...the o/s will\n\npossible issues you might be having:\n*) sync issues: asking drives to sync more often they can handle.\npossible solutions...faster/more drives or ask database to sync less\n(fsync off, or better transaction management)\n*) cpu bound issues: poorly designed queries, or poorly designed\ntables, bad/no indexes, etc\n*) unrealistic expectations of database performance\n*) not maintaining database properly, vacuum, etc\n*) mvcc issues\n\nmaybe post your transaction load, and/or some slow queries you are dealing with.\n\nmerlin\n",
"msg_date": "Thu, 24 Jan 2008 13:01:33 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
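To capture the slow queries and transaction load being asked for, a couple of postgresql.conf entries are usually enough; the 200 ms threshold is only a starting point to adjust for the workload:

    log_min_duration_statement = 200   # log any statement taking longer than 200 ms
    log_line_prefix = '%t [%p] '       # prefix each line with a timestamp and backend pid

Both settings take effect on a reload (pg_ctl reload), no restart needed.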
{
"msg_contents": "On Jan 24, 2008 1:01 PM, Merlin Moncure <[email protected]> > I'll\ngive you a little hint about postgresql.conf...tuning shared\n> buffers rarely has a huge impact on performance...the o/s will\n\noops. i meant to say the o/s will cache the files just fine...the\nsetting that _does_ affect query performance is work_mem, but this\napplies to queries on a case by case basis.\n\nmerlin\n",
"msg_date": "Thu, 24 Jan 2008 13:21:21 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
},
{
"msg_contents": "* Chris Browne:\n\n>> A dedicated RAID controller with battery-backed cache of ssuficient\n>> size and two mirrored disks should not perform that bad, and has the\n>> advantage of easy availability.\n>\n> That won't provide as \"souped up\" performance as \"WAL on SSD,\" and\n> it's from technical people wishing for things that some of those\n> things actually emerge...\n\nFor WAL (I/O which is mostly sequential), the proposed approach isn't\nthat bad. You can easily get more than 15,000 write transactions per\nsecond, with a single thread. Good luck finding a SSD NAS with a\n60 usec round-trip time. 8->\n\nSomething which directly speaks SATA or PCI might offer comparable\nperformance, but SSD alone isn't sufficient.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 25 Jan 2008 09:18:41 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the most of memory?"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nOn May 9, 2007 6:40 PM, Tom Lane <[email protected]> wrote:\n> To return to your original comment: if you're trying to model a\n> situation with a fully cached database, I think it's sensible\n> to set random_page_cost = seq_page_cost = 0.1 or so.\n\nIs it still valid for 8.3 or is there any reason to change this\nrecommendation, considering the work you did on the planner during the\n8.3 cycle?\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Wed, 23 Jan 2008 13:36:44 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "*_cost recommendation with 8.3 and a fully cached db"
}
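Spelled out, the recommendation being asked about comes down to something like this, either per-session for testing or in postgresql.conf; 0.1 is simply the value from the quoted advice, not a number re-verified here for 8.3:

    -- per-session experiment for a fully cached database
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;

The same two parameter names (available since 8.2) can be set in postgresql.conf to make the change permanent.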
] |
[
{
"msg_contents": "I've got two huge tables with one-to-many relationship with complex\nkey. There's also a view, which JOINs the tables, and planner chooses\nunoptimal plan on SELECTs from this view.\n\nThe db schema is declared as: (from on now, I skip the unsignificant\ncolumns for the sake of simplicity)\n\nCREATE TABLE t1 (\n id integer NOT NULL,\n m1 integer NOT NULL DEFAULT 0,\n m2 bigint NOT NULL DEFAULT 0,\n m3 bigint NOT NULL DEFAULT 0,\n time_stamp timestamp without time zone DEFAULT now() NOT NULL,\n[...skipped...]\n);\n\nCREATE TABLE t2 (\n id integer NOT NULL,\n m1 integer NOT NULL DEFAULT 0,\n m2 bigint NOT NULL DEFAULT 0,\n m3 bigint NOT NULL DEFAULT 0,\n time_stamp timestamp without time zone DEFAULT now() NOT NULL,\n[...skipped...]\n);\n\nCREATE VIEW t1t2_view AS SELECT ..., t1.m1, t1.m2, t1.m3,\nt1.time_stamp FROM t1 JOIN t2 on ( (t1.m1=t2.m1) AND (t1.m2=t2.m2)\nAND (t1.m3=t2.m3));\n\nCREATE UNIQUE INDEX i_t1_ms ON t1(m1,m2,m3);\nCREATE INDEX i_t1_ts ON t1(time_stamp);\nCREATE INDEX i_t2_ms ON t2(m1,m2,m3);\n\nTable t1 contains ~20M rows, t2 contains ~30M rows. The complex key\nthat ties one table to another is implied, i.e. (m1,m2,m3) isn't\ndeclared as foreign key. There's a reason for that: an app needs to\npush lots of INSERTs to these tables pretty quickly, and additional\nforeign key constraint check will kill the performance.\n\nSo, here's the query in question:\n\nSELECT * FROM t1t2_view ORDER BY time_stamp ASC LIMIT 100;\n\nEXPLAIN ANALYZE SELECT * FROM t1t2_view ORDER BY time_stamp ASC LIMIT 100:\n\nLimit (cost=13403340.40..13403340.40 rows=1 width=152)\n -> Sort (cost=13403340.40..13403340.40 rows=1 width=152)\n Sort Key: t1.time_stamp\n -> Merge Join (cost=6663466.28..13403340.39 rows=1 width=152)\n Merge Cond: ((t1.m1 = t2.m1) AND (t1.m2 = t2.m2) AND\n(t1.m3 = t2.m3))\n -> Index Scan using i_t1_ms on t1\n(cost=0.00..6272009.52 rows=21639880 width=121)\n -> Sort (cost=6663466.28..6739884.33 rows=30567222 width=51)\n Sort Key: t2.m1, t2.m2, t2.m3\n -> Seq Scan on t2 (cost=0.00..922814.22\nrows=30567222 width=51)\n\nWhen I set enable_sort and enable_mergejoin to off, the planner\nchooses better plan:\n\nEXPLAIN ANALYZE SELECT * FROM t1t2_view ORDER BY time_stamp ASC LIMIT 100\n\n Limit (cost=0.00..175299576.86 rows=1 width=152)\n -> Nested Loop (cost=0.00..175299576.86 rows=1 width=152)\n -> Index Scan using i_t1_ts on t1 (cost=0.00..1106505.70\nrows=21642342 width=121)\n -> Index Scan using i_t2_ms on t2 (cost=0.00..8.03 rows=1 width=51)\n Index Cond: ((t1.m1 = t2.m1) AND (t1.m2 = t2.m2) AND\n(t1.m3 = t2.m3))\n\nThe problem here is, as far as I understand, is the wrong estimate of\nrow count in join result table.\n\nPostgresql version is 8.2.5. The tables are ANALYZEd, Changing\ndefault_statistics_target from 10 to 100, and even 300 doesn't affect\nplanner's behaviour.\n\nIs there any possibility to make the planner to choose an optimal plan\nwithout turning off enable_sort and enable_mergejoin?\n\nThanks in advance.\n\n\n-- \nRegards,\n Dmitry\n",
"msg_date": "Wed, 23 Jan 2008 16:48:16 +0300",
"msg_from": "\"Dmitry Potapov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner chooses unoptimal plan on joins with complex key"
},
{
"msg_contents": "Dmitry,\n\nOn Jan 23, 2008 2:48 PM, Dmitry Potapov <[email protected]> wrote:\n> EXPLAIN ANALYZE SELECT * FROM t1t2_view ORDER BY time_stamp ASC LIMIT 100:\n>\n> Limit (cost=13403340.40..13403340.40 rows=1 width=152)\n\nIt doesn't look like an EXPLAIN ANALYZE output. Can you provide a real\none (you should have a second set of numbers with EXPLAIN ANALYZE)?\n\nThanks,\n\n--\nGuillaume\n",
"msg_date": "Wed, 23 Jan 2008 14:58:52 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses unoptimal plan on joins with complex key"
},
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> It doesn't look like an EXPLAIN ANALYZE output. Can you provide a real\n> one (you should have a second set of numbers with EXPLAIN ANALYZE)?\n\nAlso, could we see the pg_stats rows for the columns being joined?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jan 2008 12:17:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses unoptimal plan on joins with complex key "
},
{
"msg_contents": "\"Dmitry Potapov\" <[email protected]> writes:\n> Sorry, it was just EXPLAIN. I can't run EXPLAIN ANALYZE on that\n> (production) server, so I uploaded 3days old backup to a spare box and\n> here's what I've got:\n\n> -> Merge Join (cost=0.00..4955790.28 rows=1 width=59)\n> (actual time=0.048..4575782.472 rows=30805113 loops=1)\n> Merge Cond: ((t1.m1 = t2.m1) AND (t1.m2 = t2.m2) AND\n> (t1.m3 = t2.m3))\n\nWell, there's our problem: an estimate of 1 row for a join that's\nactually 30805113 rows is uncool :-(.\n\nIt's hard to tell whether the planner is just being overoptimistic\nabout the results of ANDing the three join conditions, or if one or\nmore of the basic condition selectivities were misestimated. Could\nyou try\n\n\texplain analyze select 1 from t1, t2 where t1.m1 = t2.m1;\n\texplain analyze select 1 from t1, t2 where t1.m2 = t2.m2;\n\texplain analyze select 1 from t1, t2 where t1.m3 = t2.m3;\n\nand show the results? This will probably be slow too, but we don't\ncare --- we just need to see the estimated and actual rowcounts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jan 2008 21:27:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses unoptimal plan on joins with complex key "
},
{
"msg_contents": " Hello,\n\n(Tom, sorry if you receive this letter twice. The first copy was\nunintentionally sent with 'reply to sender only', I resend it to the\nlist, reply this one to keep the thread, please.)\n\n2008/1/25, Tom Lane <[email protected]>:\n> Well, there's our problem: an estimate of 1 row for a join that's\n> actually 30805113 rows is uncool :-(.\n>\n> It's hard to tell whether the planner is just being overoptimistic\n> about the results of ANDing the three join conditions, or if one or\n> more of the basic condition selectivities were misestimated. Could\n> you try\n>\n> explain analyze select 1 from t1, t2 where t1.m1 = t2.m1;\n> explain analyze select 1 from t1, t2 where t1.m2 = t2.m2;\n> explain analyze select 1 from t1, t2 where t1.m3 = t2.m3;\n>\n> and show the results? This will probably be slow too, but we don't\n> care --- we just need to see the estimated and actual rowcounts.\n\nI've indexed m1, m2, m3 in each table individually, to speed things up.\n\nThe first query is too slow. In fact, it's still running, for 4 days now:\n=# select procpid, current_query, now()-query_start from pg_stat_activity;\n procpid | current_query\n | ?column?\n---------+-------------------------------------------------------------------------+-----------------------\n 11403 | explain analyze select 1 from t1, t2 where t1.m1 = t2.m1;\n | 4 days 06:11:52.18082\n\nI wonder if it will ever finish the work :( So, for now, the only\nthing I can show is:\n\n=# explain select 1 from t1, t2 where t1.m1=t2.m1 ;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Nested Loop (cost=0.00..162742993234.25 rows=57462242912003 width=0)\n -> Seq Scan on t2 (cost=0.00..690784.54 rows=30820054 width=4)\n -> Index Scan using i_t1_m1 on t1 (cost=0.00..3080.74 rows=175973 width=4)\n Index Cond: (t1.m1 = t2.m1)\n(4 rows)\n\nI'll post explain analyze result as soon as it'll finish up. 
Second\nand third queries are less work:\n\nResult for m2 join:\n\n=# explain analyze select 1 from t1, t2 where t1.m2=t2.m2 ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..2390772.31 rows=32274433 width=0) (actual\ntime=44.460..376892.633 rows=32466668 loops=1)\n Merge Cond: (t1.m2 = t2.m2)\n -> Index Scan using i_t1_m2 on t1 (cost=0.00..861938.04\nrows=21292688 width=8) (actual time=22.178..54862.030 rows=21292689\nloops=1)\n -> Index Scan using i_t2_m2 on t2 (cost=0.00..1023944.35\nrows=30820054 width=8) (actual time=22.263..245649.669 rows=32481348\nloops=1)\n Total runtime: 389871.753 ms\n(5 rows)\n\nResults for m3 join:\n\n=# explain analyze select 1 from t1, t2 where t1.m3=t2.m3 ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=3460701.40..6971662.01 rows=148454021 width=0)\n(actual time=292433.263..1269127.008 rows=170051982 loops=1)\n Merge Cond: (t2.m3 = t1.m3)\n -> Index Scan using i_t2_m3 on t2 (cost=0.00..1207227.84\nrows=30820054 width=8) (actual time=28.996..622876.390 rows=30820054\nloops=1)\n -> Sort (cost=3460701.40..3513933.12 rows=21292688 width=8)\n(actual time=292404.240..426620.070 rows=170051989 loops=1)\n Sort Key: t1.m3\n -> Seq Scan on t1 (cost=0.00..635040.88 rows=21292688\nwidth=8) (actual time=0.031..65919.482 rows=21292689 loops=1)\n Total runtime: 1333669.966 ms\n(7 rows)\n\n\n-- \nRegards,\n Dmitry\n",
"msg_date": "Tue, 29 Jan 2008 18:25:55 +0300",
"msg_from": "\"Dmitry Potapov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner chooses unoptimal plan on joins with complex key"
},
{
"msg_contents": "\"Dmitry Potapov\" <[email protected]> writes:\n> 2008/1/25, Tom Lane <[email protected]>:\n>> It's hard to tell whether the planner is just being overoptimistic\n>> about the results of ANDing the three join conditions, or if one or\n>> more of the basic condition selectivities were misestimated.\n\n> I wonder if it will ever finish the work :( So, for now, the only\n> thing I can show is:\n\n> =# explain select 1 from t1, t2 where t1.m1=t2.m1 ;\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..162742993234.25 rows=57462242912003 width=0)\n> -> Seq Scan on t2 (cost=0.00..690784.54 rows=30820054 width=4)\n> -> Index Scan using i_t1_m1 on t1 (cost=0.00..3080.74 rows=175973 width=4)\n> Index Cond: (t1.m1 = t2.m1)\n> (4 rows)\n\nSo this is estimating the join selectivity for m1 alone as\n\nregression=# select 57462242912003 / (30820054.0 * 21292688);\n ?column? \n------------------------\n 0.08756260792962293676\n(1 row)\n\nwhich would suggest that there are only 11 or so distinct values for m1;\nis that about right?\n\nThere's something a bit funny here --- looking at the plan components,\nthe output row estimate should be about 30820054 * 175973, but it's ten\ntimes that. I'm not sure if this is just a presentation artifact or\nif it points to a real bug.\n\nCould you show us the pg_stats rows for t1.m1 and t2.m1?\n\nThe m2 estimate seems nearly dead on, and the m3 estimate is within 10%\nor so which is certainly close enough. Barring the possibility that\nthere's something wrong underneath that m1 estimate, the products of\nthe selectivities do indeed work out to give us about 1 row out of the\njoin with all three conditions. So my conclusion is that these three\nvalues are not independent, but are highly correlated (in fact, it seems\nthat m2 alone provides most of the join selectivity). Do you actually\nneed to compare all three to make the join, or is one field perhaps\ndependent on the other two? Can you refactor the data to make it so?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jan 2008 16:59:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses unoptimal plan on joins with complex key "
},
{
"msg_contents": "2008/1/30, Tom Lane <[email protected]>:\n> \"Dmitry Potapov\" <[email protected]> writes:\n> > 2008/1/25, Tom Lane <[email protected]>:\n> >> It's hard to tell whether the planner is just being overoptimistic\n> >> about the results of ANDing the three join conditions, or if one or\n> >> more of the basic condition selectivities were misestimated.\n>\n> > I wonder if it will ever finish the work :( So, for now, the only\n> > thing I can show is:\n>\n> > =# explain select 1 from t1, t2 where t1.m1=t2.m1 ;\n> > QUERY PLAN\n> > --------------------------------------------------------------------------------\n> > Nested Loop (cost=0.00..162742993234.25 rows=57462242912003 width=0)\n> > -> Seq Scan on t2 (cost=0.00..690784.54 rows=30820054 width=4)\n> > -> Index Scan using i_t1_m1 on t1 (cost=0.00..3080.74 rows=175973 width=4)\n> > Index Cond: (t1.m1 = t2.m1)\n> > (4 rows)\n>\n> So this is estimating the join selectivity for m1 alone as\n>\n> regression=# select 57462242912003 / (30820054.0 * 21292688);\n> ?column?\n> ------------------------\n> 0.08756260792962293676\n> (1 row)\n>\n> which would suggest that there are only 11 or so distinct values for m1;\n> is that about right?\nNo, that's wrong. There's 155 distinct values in t1, and 157 in t2. By\nthe way, due to the nature of the data stored in m1 (see below), the\nselectivity for m1, imho, should be bigger than 0.08756, something\nabout 0,80 or so. Unfortunally I can't be sure about that, the query\nin question is still running, so the real rowcount is still not\navailable :(\n\n> There's something a bit funny here --- looking at the plan components,\n> the output row estimate should be about 30820054 * 175973, but it's ten\n> times that. I'm not sure if this is just a presentation artifact or\n> if it points to a real bug.\nThat looks strange, indeed.\n\n> Could you show us the pg_stats rows for t1.m1 and t2.m1?\nI've attached them as text files to one of my previous letters. I'm\nsure that email formatting will kill their readability. I attached\nthem once again to this letter, pgstats_t1_m,txt is for t1,\npgstats_t2_m.txt for t2.\n\nTo save your time, some fields:\nt1.m1:\nn_distinct=121\ncorrelation=0.0910695\n\nt2.m2:\nn_distinct=111\ncorrelation=0.148101\n\n> The m2 estimate seems nearly dead on, and the m3 estimate is within 10%\n> or so which is certainly close enough. Barring the possibility that\n> there's something wrong underneath that m1 estimate, the products of\n> the selectivities do indeed work out to give us about 1 row out of the\n> join with all three conditions. So my conclusion is that these three\n> values are not independent, but are highly correlated (in fact, it seems\n> that m2 alone provides most of the join selectivity). Do you actually\nSo, about the nature of the data in m1,m2,m3. m1 is something like\nsubsystem ID to which the row belongs, m3 is supposed to be unique\nrecord id within that subsystem, but it starts with 0 when the system\nis restarted. m2 is a timestamp in java format, it is there to\nguarantee the uniqueness of (m1,m2,m3) triplet.\nSo, about corelation, m2 and m3 may seem to be correlated: if a system\ninserts records at nearly constant rate, m2=a+b*m3..\n> need to compare all three to make the join, or is one field perhaps\n> dependent on the other two? Can you refactor the data to make it so?\nyes, I absolutely need to compare all three. 
about refactoring, the\nonly thing I can think out is to put m1,m2,m3 to a separate table,\nwith bigint primary key, and then to tie t1 and t2 with that key. but\nthat will require changes in the application and some amount of time\nto convert the data :(\n\n\n-- \nRegards,\n Dmitry\n",
"msg_date": "Thu, 31 Jan 2008 14:40:11 +0300",
"msg_from": "\"Dmitry Potapov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner chooses unoptimal plan on joins with complex key"
}
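The refactoring floated in the last message could look roughly like the sketch below; every name in it is invented for illustration, and the backfill plus application changes are exactly the cost being weighed:

    -- one surrogate key per distinct (m1,m2,m3) triplet seen in either table
    CREATE TABLE record_key (
        id  bigserial PRIMARY KEY,
        m1  integer NOT NULL,
        m2  bigint  NOT NULL,
        m3  bigint  NOT NULL,
        UNIQUE (m1, m2, m3)
    );

    INSERT INTO record_key (m1, m2, m3)
        SELECT m1, m2, m3 FROM t1
        UNION
        SELECT m1, m2, m3 FROM t2;

    ALTER TABLE t1 ADD COLUMN key_id bigint;
    ALTER TABLE t2 ADD COLUMN key_id bigint;

    UPDATE t1 SET key_id = rk.id
      FROM record_key rk
     WHERE rk.m1 = t1.m1 AND rk.m2 = t1.m2 AND rk.m3 = t1.m3;
    -- same UPDATE again for t2

    CREATE INDEX i_t1_key ON t1 (key_id);
    CREATE INDEX i_t2_key ON t2 (key_id);

The view's join condition then collapses to t1.key_id = t2.key_id, a single column whose selectivity the planner can estimate directly instead of multiplying three correlated estimates together.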
] |
[
{
"msg_contents": "I'm not sure what is going on but looking for some advice, knowledge.\n\nI'm running multiple postgres servers in a slon relationship. I have\nhundreds of thousands of updates, inserts a day. But what I'm seeing\nis my server appears to \"deallocate\" memory (for the lack of a better\nterm) and performance goes to heck, slow response, a sub second query\ntakes anywhere from 6-40 seconds to complete when this happens.\n\nMy guess is that postgres does not have enough memory to handle the\nquery and starts to swap (although I can't see any swapping happening\n(vmstat)\n\nSome examples: Also when the memory deallocates, I start stacking\nconnections (again, performance goes to heck).\n\nFC6, postgres 8.2 on a dual quad core intel box, 8 gigs of physical RAM\n\nBetween 8:17:36 and 8:17:57, my system goes from almost all memory\nconsumed to almost all memory free.. Previous to this big swing, Cache\ngoes from 7 gigs down to what you see here, connections start stacking\nand IOWAIT goes thru the roof.\n\n1200932256 - Mon Jan 21 08:17:36 2008 - qdb02.gc.sv.admission.net - 0\nsec. elapsed.\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n 0 2 9072 297460 4904 1543316 0 0 52 175 2 2 21\n3 75 2 0\n\n1200932277 - Mon Jan 21 08:17:57 2008 - qdb02.gc.sv.admission.net - 2\nsec. elapsed.\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n18 6 9072 6354560 6128 1574324 0 0 52 175 2 2 21\n 3 75 2 0\n\nAt this point I have connections stacking (simple queries), Until\nwhat I believe is that the kernel starts to allocate memory for the dB\n(or the db starts to be moved from disk to memory?) (queries, tables\netc).. However performance is still dismal and I do see high IOWAIT\nwhen I don't have sufficient memory for the DB (again my feeling). If\nI do a reindex at this time, the system alocates most of my RAM and my\nqueries are subsecond agiain and I work fine until the next time, the\nDB appears to be flushed from RAM.\n\nI'm having a hard time understanding what I'm seeing. I do grab\nnetstat/ps/free/vmstat/netstat every few seconds so I have insight as\nto the systems health when these things happen.\n\nI suspect I've outgrown our initial postgres config, or there are more\nsysctl or other kernel tweaks that need to happen.\n\nI really am just trying to understand the memory allocation etc. And\nwhy my memory goes from fully utilized to 90% free and DB performance\ngoes to heck.\n\nI'll provide more info as needed, and my apologies for this being a\nbit scattered, but I'm really confused. I'm either running out of a\nresource or other (but no errors in any logs, postgres or otherwise)..\n\nThanks\nTory\n",
"msg_date": "Wed, 23 Jan 2008 10:23:51 -0800",
"msg_from": "\"Tory M Blue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 8.2 memory weirdness"
},
{
"msg_contents": "On Wed, 23 Jan 2008, Tory M Blue wrote:\n\n> I have hundreds of thousands of updates, inserts a day. But what I'm \n> seeing is my server appears to \"deallocate\" memory (for the lack of a \n> better term) and performance goes to heck, slow response, a sub second \n> query takes anywhere from 6-40 seconds to complete when this happens.\n\nGenerally if you have a system doing many updates and inserts that slows \nfor that long, it's because it hit a checkpoint. I'm not sure what your \nmemory-related issues are but it's possible that might be from a backlog \nof sessions using memory that are stuck behind the checkpoint, \nparticularly since you mention simple query connections stacking up during \nthese periods.\n\nIn any case you should prove/disprove this is checkpoint-related behavior \nbefore you chase down something more esoteric. There's a quick intro to \nthis area in the \"Monitoring checkpoints\" section of \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm and the \nlater sections go into what you can do about it.\n\n> I suspect I've outgrown our initial postgres config, or there are more\n> sysctl or other kernel tweaks that need to happen.\n\nYou should post a list of what you're changed from the defaults. You're \nanalyzing from the perspective where you assume it's a memory problem and \na look at your config will give a better idea whether that's possible or \nnot. Other good things to mention: exact 8.2 version, OS, total memory, \noutline of disk configuration.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 23 Jan 2008 19:31:51 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.2 memory weirdness"
},
{
"msg_contents": "On Jan 23, 2008 4:31 PM, Greg Smith <[email protected]> wrote:\n\n> Generally if you have a system doing many updates and inserts that slows\n> for that long, it's because it hit a checkpoint. I'm not sure what your\n> memory-related issues are but it's possible that might be from a backlog\n> of sessions using memory that are stuck behind the checkpoint,\n> particularly since you mention simple query connections stacking up during\n> these periods.\n>\n> In any case you should prove/disprove this is checkpoint-related behavior\n> before you chase down something more esoteric. There's a quick intro to\n> this area in the \"Monitoring checkpoints\" section of\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm and the\n> later sections go into what you can do about it.\n>\n\n>\n\nThanks Greg,\n\nParticulars:\n\nPostg: 8.2.1fc6\nFedora FC6: 2.6.19-1.2911.fc6\nDell 2950, Dual quad core\n8 Gigs of Ram\nLefthand Iscsi; 48 drives\n\n\nPostgres.conf\n\nmax_connections = 300\nshared_buffers = 75000 <--- Believe these need tuning (based on the\nreading last night)\nmax_prepared_transactions = 0\nwork_mem = 102400\nmaintenance_work_mem = 65536\nmax_fsm_pages = 1087500 <-- modified last night, based on warnings in log\nmax_fsm_relations = 430\nfsync = true\ncheckpoint_segments = 50\ncheckpoint_timeout = 300\ncheckpoint_warning = 3600s <--- set this last night and\nalready see instances of\n\n\"2008-01-24 03:54:39 PST LOG: checkpoints are occurring too\nfrequently (89 seconds apart)\n2008-01-24 03:54:39 PST HINT: Consider increasing the\nconfiguration parameter \"checkpoint_segments\".\"\n\neffective_cache_size = 330000 <-- This appears totally wrong and\nsomething I noticed last night. left over from previous versions of\npostgres on different hardware. (thinking to set this to 6-7G)\n\nautovacuum = on\nautovacuum_analyze_threshold = 2000\n\nThanks for the link, I read lots of good information last night and\nwill start pushing forward with some changes in my test area.\n\nAny insight into what my current settings are telling you is appreciated\n\n-Tory\n",
"msg_date": "Thu, 24 Jan 2008 09:46:19 -0800",
"msg_from": "\"Tory M Blue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 8.2 memory weirdness"
},
{
"msg_contents": "On Thu, 24 Jan 2008, Tory M Blue wrote:\n\n> Postg: 8.2.1fc6\n\n8.2.1 has a nasty bug related to statistics collection that causes \nperformance issues exactly in the kind of heavy update situation you're \nin. That's actually why i asked for the exact 8.2 version. You should \nplan an upgrade as soon as feasible to the current release just to \neliminate this as a possible influence on your problems. No need to dump \nthe database or do anything fancy, just get the new version going and \npoint it at the existing database.\n\nTo do a quick check on whether this is impacting things, run top, press \n\"c\" to show the full process lines, and note whether the statistics \ncollector process is taking up a significant amount of CPU time. If it \nis, you're being nailed by the bug, and you really need that ugprade.\n\n> 8 Gigs of Ram\n> shared_buffers = 75000 <--- Believe these need tuning (based on the\n> reading last night)\n\nProbably, but if you're having checkpoint problems now making \nshared_buffers bigger will likely make them worse. Some people with \nupdate-heavy workloads end up reducing this to a very small value (<250MB) \neven with large amounts of RAM because that makes less information to dump \nat checkpoint time.\n\n> checkpoint_segments = 50\n> checkpoint_timeout = 300\n> checkpoint_warning = 3600s <--- set this last night and\n> already see instances of\n>\n> \"2008-01-24 03:54:39 PST LOG: checkpoints are occurring too\n> frequently (89 seconds apart)\n> 2008-01-24 03:54:39 PST HINT: Consider increasing the\n> configuration parameter \"checkpoint_segments\".\"\n\nIf you're getting checkpoints every 89 seconds it's no wonder your system \nis dying. You may need to consider a large increase to \ncheckpoint_segments to get the interval between checkpoints to increase. \nIt should at least be a few minutes between them if you want any \nreasonable performance level.\n\n> effective_cache_size = 330000 <-- This appears totally wrong and\n> something I noticed last night. left over from previous versions of\n> postgres on different hardware. (thinking to set this to 6-7G)\n\nRight, that's where it should be.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 24 Jan 2008 13:49:34 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.2 memory weirdness"
},
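A postgresql.conf sketch of the direction this exchange points in; the numbers are starting points to test one at a time against the workload, not tuned values:

    checkpoint_segments = 100      # spread checkpoints out instead of every ~90 seconds
    effective_cache_size = 786432  # roughly 6 GB expressed in 8 kB pages
    #shared_buffers = 30000        # consider smaller rather than larger (~235 MB)
                                   # while checkpoints are the bottleneck

As the follow-ups below show, the checkpoint_segments increase alone already stopped the too-frequent-checkpoint warnings.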
{
"msg_contents": "On Jan 24, 2008 10:49 AM, Greg Smith <[email protected]> wrote:\n> 8.2.1 has a nasty bug related to statistics collection that causes\n> performance issues exactly in the kind of heavy update situation you're\n> in. That's actually why i asked for the exact 8.2 version. You should\n> plan an upgrade as soon as feasible to the current release just to\n> eliminate this as a possible influence on your problems. No need to dump\n> the database or do anything fancy, just get the new version going and\n> point it at the existing database.\n\nNot seeing any excessive cpu from the stats collector process.. So\nmaybe not being hit with this bug.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P SWAP\nTIME COMMAND\n28445 postgres 15 0 7432 828 408 S 0 0.0 4:15.47 3 6604\n4:15 postgres: stats collector process\n\nWith the above said, we had started sometime ago to move 8.2.5 into\nour environments, so that should be on these servers next week (the\npush is a slow process, but we are really liking what we are seeing\nfor 8.3, so I'm hoping once blessed, i'll push it thru quickly)..\n\n\n> > checkpoint_segments = 50\n> > checkpoint_timeout = 300\n> > checkpoint_warning = 3600s <--- set this last night and\n>\n> If you're getting checkpoints every 89 seconds it's no wonder your system\n> is dying. You may need to consider a large increase to\n> checkpoint_segments to get the interval between checkpoints to increase.\n> It should at least be a few minutes between them if you want any\n> reasonable performance level.\n\nI doubled the checkpoint segments yesterday and have not seen any\nwarnings. Will run with segments of 100 for a while and see how things\nlook.. Anyway to make sure that there is not a number between 50 and\n100 that makes more sense?\n\n\n> > effective_cache_size = 330000 <-- This appears totally wrong and\n> > something I noticed last night. left over from previous versions of\n> > postgres on different hardware. (thinking to set this to 6-7G)\n>\n> Right, that's where it should be.\n\nWe have started some performance analysis and this numvber is sure\naffecting performance in good ways by having it set semi correctly.\nThis has not been pushed (too many changes), but we will continue\nperformance testing and it will probably make it to prod next week.\n\nThanks for some sanity checks here Greg, it's truly appreciated.\n\nTory\n",
"msg_date": "Fri, 25 Jan 2008 09:36:18 -0800",
"msg_from": "\"Tory M Blue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 8.2 memory weirdness"
},
{
"msg_contents": "On Fri, 25 Jan 2008, Tory M Blue wrote:\n\n> I doubled the checkpoint segments yesterday and have not seen any\n> warnings. Will run with segments of 100 for a while and see how things\n> look.. Anyway to make sure that there is not a number between 50 and\n> 100 that makes more sense?\n\nMore segments means more disk space taken up with them and a longer crash \nrecovery. Those are the downsides; if you can live with those there's no \nreason to run at <100 if that works for you. Fine-tuning here isn't \nreally that helpful.\n\nI'm a little confused by your report through because you should still be \nseeing regular checkpoint warnings if you set checkpoint_warning = 3600s , \nthey should just be spaced further apart.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sun, 27 Jan 2008 19:08:51 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 8.2 memory weirdness"
},
{
"msg_contents": "On Jan 27, 2008 4:08 PM, Greg Smith <[email protected]> wrote:\n\n>\n> More segments means more disk space taken up with them and a longer crash\n> recovery. Those are the downsides; if you can live with those there's no\n> reason to run at <100 if that works for you. Fine-tuning here isn't\n> really that helpful.\n>\n> I'm a little confused by your report through because you should still be\n> seeing regular checkpoint warnings if you set checkpoint_warning = 3600s ,\n> they should just be spaced further apart.\n\nI'm not seeing any warnings at all.\n\n[idb01 ~]$ sudo cat /data/logs/pgsql-27.log | grep -i check\n[idb01 ~]$ sudo cat /data/logs/pgsql-26.log | grep -i check\n[idb01 ~]$ sudo cat /data/logs/pgsql-25.log | grep -i check\n[idb01 ~]$ sudo cat /data/logs/pgsql-24.log | grep -i check\n2008-01-24 03:54:39 PST LOG: checkpoints are occurring too\nfrequently (89 seconds apart)\n2008-01-24 03:54:39 PST HINT: Consider increasing the\nconfiguration parameter \"checkpoint_segments\".\n2008-01-24 07:26:25 PST LOG: checkpoints are occurring too\nfrequently (106 seconds apart)\n2008-01-24 07:26:25 PST HINT: Consider increasing the\nconfiguration parameter \"checkpoint_segments\".\n2008-01-24 11:34:18 PST LOG: checkpoints are occurring too\nfrequently (173 seconds apart)\n2008-01-24 11:34:18 PST HINT: Consider increasing the\nconfiguration parameter \"checkpoint_segments\".\n\nSegment config still:\n\n# - Checkpoints -\ncheckpoint_segments = 100 # bumped from 50\ncheckpoint_timeout = 300 # range 30s-1h\n#checkpoint_warning = 30s # 0 is off\ncheckpoint_warning = 3600s # 0 is off\n\nNo warnings in my logs, I see some LOG information but it pertains to\nslon and not postgres directly.\n\nIdeas?!\n\nThanks again\n\nTory\n",
"msg_date": "Sun, 27 Jan 2008 22:09:12 -0800",
"msg_from": "\"Tory M Blue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 8.2 memory weirdness"
}
] |
[
{
"msg_contents": "hi\n\nWe have an installation of Postgres 8.1.2 (32bit on Solaris 9) with a DB\nsize of about 250GB on disk. The DB is subject to fair amount of\ninserts, deletes and updates per day. \n\nRunning VACUUM VERBOSE tells me that I should allocate around 20M pages\nto FSM (max_fsm_pages)! This looks like a really large amount to me. \n\nHas anyone gone ever that high with max_fsm_pages?\n\nThe other question is why such a large number is required in the first\nplace. \nAuto vacuum is enabled. Here are the settings:\n\nautovacuum = true\t\nautovacuum_naptime = 900\nautovacuum_vacuum_threshold = 2000\nautovacuum_analyze_threshold = 1000\nautovacuum_vacuum_scale_factor = 0.25\nautovacuum_analyze_scale_factor = 0.18\nautovacuum_vacuum_cost_delay = 150\nautovacuum_vacuum_cost_limit = 120\n\nA manual vacuum takes very long (around 4 days), so maybe the cost delay\nand limit or too high.\nAny suggestions anyone?\n\nCheers,\n-- Tom.\n",
"msg_date": "Wed, 23 Jan 2008 19:29:16 +0100",
"msg_from": "\"Thomas Lozza\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum and FSM page size"
},
{
"msg_contents": "On Wed, Jan 23, 2008 at 07:29:16PM +0100, Thomas Lozza wrote:\n> hi\n> \n> We have an installation of Postgres 8.1.2 (32bit on Solaris 9) with a DB\n> size of about 250GB on disk. The DB is subject to fair amount of\n> inserts, deletes and updates per day. \n> \n> Running VACUUM VERBOSE tells me that I should allocate around 20M pages\n> to FSM (max_fsm_pages)! This looks like a really large amount to me. \n> \n> Has anyone gone ever that high with max_fsm_pages?\n\nNo, that's telling me that you have a lot of bloat. A 250G database is\nabout 31M pages. If you have 20M pages with free space then you've got a\nlot of bloat. Ideally, with a autovac_vacuum_scale_factor of .25 you\nshould only need 4M FSM pages. At most you should only need 8M.\n\n> The other question is why such a large number is required in the first\n> place. \n> Auto vacuum is enabled. Here are the settings:\n> \n> autovacuum = true\t\n> autovacuum_naptime = 900\nWhy'd you change that? That's pretty high.\n\n> autovacuum_vacuum_threshold = 2000\n> autovacuum_analyze_threshold = 1000\n\nBoth of those seem high...\n\n> autovacuum_vacuum_scale_factor = 0.25\nThat means that 12.5% of your database (on average) will be dead\nspace... I'd probably cut that back to 0.2.\n\n> autovacuum_analyze_scale_factor = 0.18\nThis also seems pretty high.\n\n> autovacuum_vacuum_cost_delay = 150\n\nWoah, that's *really* high. That means at most you'll get 6 vacuum\nrounds in per second; with default cost settings that means you'd be\nable to actually vacuum about 50 dirty pages per second, tops. Of course\nnot all pages will be dirty, but still...\n\nI normally use between 10 and 20 for cost_delay (lower values for faster\ndrive arrays).\n\n> autovacuum_vacuum_cost_limit = 120\nWhy'd you reduce this? I'd put it back to 200...\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828",
"msg_date": "Wed, 23 Jan 2008 13:02:21 -0600",
"msg_from": "Decibel! <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and FSM page size"
},
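Decibel's page arithmetic can be checked in psql; 8192 bytes is the default block size, and 12.5% is the average dead space implied by a 0.25 vacuum scale factor:

    SELECT round(250 * 1024^3 / 8192)         AS pages_in_a_250gb_database,
           round(250 * 1024^3 / 8192 * 0.125) AS dead_pages_expected_on_average;

Roughly 33M total pages (a shade over Decibel's 31M, which assumes decimal gigabytes) and about 4M expected dead pages, against a reported need for 20M FSM pages, is what makes the bloat diagnosis plausible.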
{
"msg_contents": "\nOn Jan 23, 2008, at 1:29 PM, Thomas Lozza wrote:\n\n> We have an installation of Postgres 8.1.2 (32bit on Solaris 9) with \n> a DB\n> size of about 250GB on disk. The DB is subject to fair amount of\n> inserts, deletes and updates per day.\n>\n> Running VACUUM VERBOSE tells me that I should allocate around 20M \n> pages\n> to FSM (max_fsm_pages)! This looks like a really large amount to me.\n>\n> Has anyone gone ever that high with max_fsm_pages?\n\nwow. you must have a *lot* of pages with empty space in them....\n\nit sounds to me like your autovacuum is not running frequently enough.\n\n",
"msg_date": "Wed, 23 Jan 2008 14:22:20 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and FSM page size"
},
{
"msg_contents": "Vivek Khera <[email protected]> writes:\n> On Jan 23, 2008, at 1:29 PM, Thomas Lozza wrote:\n>> We have an installation of Postgres 8.1.2 (32bit on Solaris 9) with \n>> ...\n\n> it sounds to me like your autovacuum is not running frequently enough.\n\nYeah. The default autovac settings in 8.1 are extremely conservative\n(in the direction of not letting autovac eat many cycles), so a\nhigh-traffic installation will need to adjust them to keep from falling\nbehind.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jan 2008 15:25:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and FSM page size "
},
{
"msg_contents": "On Jan 23, 2008 12:29 PM, Thomas Lozza <[email protected]> wrote:\n> Auto vacuum is enabled. Here are the settings:\n>\n> autovacuum = true\n> autovacuum_naptime = 900\n> autovacuum_vacuum_threshold = 2000\n> autovacuum_analyze_threshold = 1000\n> autovacuum_vacuum_scale_factor = 0.25\n> autovacuum_analyze_scale_factor = 0.18\n> autovacuum_vacuum_cost_delay = 150\n> autovacuum_vacuum_cost_limit = 120\n>\n> A manual vacuum takes very long (around 4 days), so maybe the cost delay\n> and limit or too high.\n\nYour autovacuum_vacuum_cost_delay is REALLY high. Try setting it to\n10 or 20 and see if that helps.\n\nWhat is your plain old vacuum_cost_delay set to?\n",
"msg_date": "Thu, 24 Jan 2008 09:48:04 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and FSM page size"
},
{
"msg_contents": "Thanks for the advice. \nI used the default settings before, thought though that vacuum was a bit\naggressive, ie, using too many resources. Now its taking very long. So\nwill have to find reasonable settings in between I guess.\n\nOn the other hand, if I keep the fsm_page number high enough, the system\nshould be fine with a low number of vacuum cycles, right. As memory is\nnot really scarce (16G, 32 bit PG though) an x million fsm_page entry\nshould be ok. Any thoughts on that?\n\ncheers,\n-- tom.\n\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Thursday, 24. January, 2008 10:48\nTo: Thomas Lozza\nCc: [email protected]\nSubject: Re: [PERFORM] Vacuum and FSM page size\n\nOn Jan 23, 2008 12:29 PM, Thomas Lozza <[email protected]>\nwrote:\n> Auto vacuum is enabled. Here are the settings:\n>\n> autovacuum = true\n> autovacuum_naptime = 900\n> autovacuum_vacuum_threshold = 2000\n> autovacuum_analyze_threshold = 1000\n> autovacuum_vacuum_scale_factor = 0.25\n> autovacuum_analyze_scale_factor = 0.18 autovacuum_vacuum_cost_delay = \n> 150 autovacuum_vacuum_cost_limit = 120\n>\n> A manual vacuum takes very long (around 4 days), so maybe the cost \n> delay and limit or too high.\n\nYour autovacuum_vacuum_cost_delay is REALLY high. Try setting it to 10\nor 20 and see if that helps.\n\nWhat is your plain old vacuum_cost_delay set to?\n",
"msg_date": "Mon, 28 Jan 2008 00:01:30 +0100",
"msg_from": "\"Thomas Lozza\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum and FSM page size"
},
{
"msg_contents": "On Jan 27, 2008 5:01 PM, Thomas Lozza <[email protected]> wrote:\n> Thanks for the advice.\n> I used the default settings before, thought though that vacuum was a bit\n> aggressive, ie, using too many resources. Now its taking very long. So\n> will have to find reasonable settings in between I guess.\n>\n> On the other hand, if I keep the fsm_page number high enough, the system\n> should be fine with a low number of vacuum cycles, right. As memory is\n> not really scarce (16G, 32 bit PG though) an x million fsm_page entry\n> should be ok. Any thoughts on that?\n\nThe issue you then run into is bloat, where you have a table or index\nthat is 90% or so empty space, and performance on selects might\nsuffer, especially on larger tables or indexes.\n\nWhat often works best is to let autovacuum handle most of your tables,\nthen schedule individual tables to be vacuumed by cron, setting the\nnap time for vacuum at 20 or 30 milliseconds so they don't chew up all\nof your I/O\n",
"msg_date": "Sun, 27 Jan 2008 17:54:12 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and FSM page size"
}
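One way to schedule what Scott describes using stock tools; the database and table names are placeholders, and 20 is just the nap-time value (in milliseconds) he mentions, passed to the backend at connection time via PGOPTIONS:

    # crontab entry: gentle nightly vacuum of one large, update-heavy table
    30 2 * * * PGOPTIONS='-c vacuum_cost_delay=20' psql -d mydb -c 'VACUUM ANALYZE big_table'

Autovacuum can then be left to look after the rest of the tables, as suggested.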
] |
[
{
"msg_contents": "On a linux box (Linux db1 2.6.18.8-md #1 SMP Wed May 23 17:21:37 EDT \n2007 i686 GNU/Linux)\nI edited postgresql.conf and changed:\n\nshared_buffers = 5000 work_mem = 16384\nmax_stack_depth = 4096\n\nand then restarted postgres. The puzzling part is that postgres \nactually started. When I have done this on other boxes, I had to edit \nkernel settings to allow for more shared memory.\n\nThe kernel settings are currently:\n\nshmmax: 33554432\nshmall: 2097152\n\nI would have expected to need to increase these before postgres would run.\n\nI have a nagging suspicion that something isn't right. Like it's not \nactually reading the conf file, or I might have a problem on reboot.\n\nI'm not sure if I'm worrying about nothing, or if something weird is \ngoing on.\n\n\n",
"msg_date": "Thu, 24 Jan 2008 13:00:24 -0500",
"msg_from": "Rick Schumeyer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuration settings (shared_buffers, etc) in Linux: puzzled"
},
{
"msg_contents": "On Thu, 24 Jan 2008, Rick Schumeyer wrote:\n\n> On a linux box (Linux db1 2.6.18.8-md #1 SMP Wed May 23 17:21:37 EDT 2007 \n> i686 GNU/Linux)\n> I edited postgresql.conf and changed:\n>\n> shared_buffers = 5000 work_mem = 16384\n> max_stack_depth = 4096\n>\n> and then restarted postgres. The puzzling part is that postgres actually \n> started. When I have done this on other boxes, I had to edit kernel settings \n> to allow for more shared memory.\n\nYou can confirm whether the settings are actually taking like this:\n\n$ psql\nWelcome to psql 8.2.4, the PostgreSQL interactive terminal.\n\npostgres=# show shared_buffers;\n shared_buffers\n----------------\n 256MB\n(1 row)\n\nSince your shared_buffers setting should probably be in the hundreds of \nmegabytes range if you want good performance, you may need to update your \nkernel settings anyway, but the above will let you see what the server is \nactually starting with.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 24 Jan 2008 13:52:49 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings (shared_buffers, etc) in Linux:\n puzzled"
},
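For the kernel side of the question, the limits can be inspected and raised like this on Linux; the 2 GB shmmax value is only an example, to be sized against the intended shared_buffers plus some headroom:

    # show the current limits (shmmax is in bytes, shmall in pages)
    sysctl kernel.shmmax kernel.shmall
    # raise shmmax for the running kernel
    sysctl -w kernel.shmmax=2147483648
    # add the same line to /etc/sysctl.conf so it survives a reboot:
    #   kernel.shmmax = 2147483648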
{
"msg_contents": "Em Thu, 24 Jan 2008 13:00:24 -0500\nRick Schumeyer <[email protected]> escreveu:\n\n> On a linux box (Linux db1 2.6.18.8-md #1 SMP Wed May 23 17:21:37 EDT \n> 2007 i686 GNU/Linux)\n> I edited postgresql.conf and changed:\n> \n> shared_buffers = 5000 work_mem = 16384\n> max_stack_depth = 4096\n> \n\n\n This configuration is possibility of the PostgreSQL 8.1. But if don't\nby 8.1, for example: 8.2 or 8.3, follow the talk Greg Smith. :)\n\n\n\nBest Regards,\n-- \nFernando Ike\nhttp://www.midstorm.org/~fike/weblog\n",
"msg_date": "Thu, 24 Jan 2008 23:53:52 -0200",
"msg_from": "Fernando Ike <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings (shared_buffers, etc) in\n Linux: puzzled"
}
] |
[
{
"msg_contents": "A simple update query, over roughly 17 million rows, populating a newly added column in a table, resulted in an out of memory error when the process memory usage reached 2GB. Could this be due to a poor choice of some configuration parameter, or is there a limit on how many rows I can update in a single statement?\r\n\r\nLog:\r\n...\r\n2008-01-25 09:42:08.119 NZDT [3432]: [1-1] LOG: checkpoint starting: time\r\n2008-01-25 09:42:08.697 NZDT [3432]: [2-1] LOG: checkpoint complete: wrote 2 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.218 s, sync=0.047 s, total=0.578 s\r\n...\r\n2008-01-25 10:44:49.011 NZDT [3824]: [1-1] LOG: connection received: host=(removed) port=3207\r\n2008-01-25 10:44:49.042 NZDT [3824]: [2-1] LOG: connection authorized: user=postgres database=(removed)\r\n2008-01-25 10:52:08.204 NZDT [3432]: [3-1] LOG: checkpoint starting: time\r\n2008-01-25 10:52:39.673 NZDT [3432]: [4-1] LOG: checkpoint complete: wrote 275 buffers (6.7%); 1 transaction log file(s) added, 0 removed, 0 recycled; write=27.078 s, sync=1.485 s, total=31.407 s\r\n2008-01-25 11:02:08.055 NZDT [3432]: [5-1] LOG: checkpoint starting: time\r\n2008-01-25 11:02:32.759 NZDT [3432]: [6-1] LOG: checkpoint complete: wrote 222 buffers (5.4%); 0 transaction log file(s) added, 0 removed, 69 recycled; write=22.766 s, sync=0.968 s, total=24.704 s\r\n2008-01-25 11:12:08.344 NZDT [3432]: [7-1] LOG: checkpoint starting: time\r\n2008-01-25 11:12:38.423 NZDT [3432]: [8-1] LOG: checkpoint complete: wrote 268 buffers (6.5%); 0 transaction log file(s) added, 0 removed, 77 recycled; write=27.875 s, sync=1.312 s, total=30.094 s\r\n2008-01-25 11:22:08.088 NZDT [3432]: [9-1] LOG: checkpoint starting: time\r\n2008-01-25 11:22:29.526 NZDT [3432]: [10-1] LOG: checkpoint complete: wrote 188 buffers (4.6%); 0 transaction log file(s) added, 0 removed, 48 recycled; write=18.155 s, sync=1.391 s, total=21.312 s\r\n2008-01-25 11:32:08.362 NZDT [3432]: [11-1] LOG: checkpoint starting: time\r\n2008-01-25 11:33:21.706 NZDT [3432]: [12-1] LOG: checkpoint complete: wrote 672 buffers (16.4%); 0 transaction log file(s) added, 0 removed, 59 recycled; write=70.423 s, sync=1.562 s, total=73.375 s\r\n2008-01-25 11:42:08.244 NZDT [3432]: [13-1] LOG: checkpoint starting: time\r\n2008-01-25 11:42:27.010 NZDT [3432]: [14-1] LOG: checkpoint complete: wrote 175 buffers (4.3%); 0 transaction log file(s) added, 0 removed, 51 recycled; write=17.077 s, sync=1.204 s, total=18.813 s\r\n2008-01-25 11:52:08.299 NZDT [3432]: [15-1] LOG: checkpoint starting: time\r\n2008-01-25 11:52:33.627 NZDT [3432]: [16-1] LOG: checkpoint complete: wrote 233 buffers (5.7%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=23.328 s, sync=1.468 s, total=25.391 s\r\nTopMemoryContext: 49816 total in 6 blocks; 5656 free (6 chunks); 44160 used\r\n RI compare cache: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 used\r\n RI query cache: 8192 total in 1 blocks; 5968 free (0 chunks); 2224 used\r\n TopTransactionContext: 8192 total in 1 blocks; 7792 free (0 chunks); 400 used\r\n Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used\r\n Operator lookup cache: 24576 total in 2 blocks; 14072 free (6 chunks); 10504 used\r\n MessageContext: 40960 total in 3 blocks; 19960 free (5 chunks); 21000 used\r\n smgr relation table: 8192 total in 1 blocks; 2808 free (0 chunks); 5384 used\r\n TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16 used\r\n Portal hash: 8192 total in 1 blocks; 3912 
free (0 chunks); 4280 used\r\n PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\r\n PortalHeapMemory: 1024 total in 1 blocks; 760 free (0 chunks); 264 used\r\n ExecutorState: 2044715008 total in 270 blocks; 21056 free (262 chunks); 2044693952 used\r\n ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\r\n ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\r\n ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\r\n ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\r\n Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used\r\n CacheMemoryContext: 667472 total in 20 blocks; 182800 free (1 chunks); 484672 used\r\n location_ix: 1024 total in 1 blocks; 304 free (0 chunks); 720 used\r\n... [Cut 58 indexes with very similar lines to the above, to save space]\r\n MdSmgr: 8192 total in 1 blocks; 7240 free (0 chunks); 952 used\r\n LOCALLOCK hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\r\n Timezones: 49432 total in 2 blocks; 5968 free (0 chunks); 43464 used\r\n ErrorContext: 8192 total in 1 blocks; 8176 free (3 chunks); 16 used\r\n2008-01-25 11:53:10.315 NZDT [3824]: [3-1] ERROR: out of memory\r\n2008-01-25 11:53:10.362 NZDT [3824]: [4-1] DETAIL: Failed on request of size 28.\r\n2008-01-25 11:53:10.362 NZDT [3824]: [5-1] STATEMENT: UPDATE document_file SET document_type_id = (SELECT document_type_id FROM document d where d.id = document_id);\r\n2008-01-25 12:00:53.571 NZDT [3604]: [1-1] LOG: connection received: host=(removed) port=3399\r\n2008-01-25 12:00:54.274 NZDT [3604]: [2-1] LOG: connection authorized: user=postgres database=(removed)\r\n2008-01-25 12:00:55.727 NZDT [3604]: [3-1] LOG: duration: 1264.999 ms statement: SET DateStyle=ISO;SELECT oid, pg_encoding_to_char(encoding) AS encoding, datlastsysoid\r\n\t FROM pg_database WHERE oid = 16466\r\n2008-01-25 12:02:08.322 NZDT [3432]: [17-1] LOG: checkpoint starting: time\r\n2008-01-25 12:07:03.591 NZDT [3432]: [18-1] LOG: checkpoint complete: wrote 2784 buffers (68.0%); 0 transaction log file(s) added, 0 removed, 92 recycled; write=292.488 s, sync=1.515 s, total=295.473 s\r\n2008-01-25 12:10:07.031 NZDT [3604]: [4-1] LOG: duration: 539646.999 ms statement: select count(*) from document_file;\r\n2008-01-25 12:12:08.048 NZDT [3432]: [19-1] LOG: checkpoint starting: time\r\n2008-01-25 12:15:22.176 NZDT [3432]: [20-1] LOG: checkpoint complete: wrote 949 buffers (23.2%); 0 transaction log file(s) added, 0 removed, 8 recycled; write=193.097 s, sync=0.936 s, total=194.127 s\r\n\r\nEnvironment:\r\nOS: Windows XP\r\nPostgreSQL: 8.3RC1\r\n\r\nNon default Resource and WAL configuration settings:\r\nshared_buffers = 32MB\r\nmax_fsm_pages = 204800\r\ncheckpoint_segments = 300\r\ncheckpoint_timeout = 10min\r\n\r\nThe previous query (not logged due to log_min_duration_statement = 500) had been:\r\nALTER TABLE document_file ADD document_type_id integer;\r\n\r\nThe query plan:\r\nSeq Scan on document_file (cost=0.00..280337907.00 rows=27619541 width=617)\r\n SubPlan\r\n -> Index Scan using pk_document_id on document d (cost=0.00..10.12 rows=1 width=4)\r\n Index Cond: (id = $0)\r\n\r\nStephen Denne\r\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. 
If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n",
"msg_date": "Fri, 25 Jan 2008 12:46:20 +1300",
"msg_from": "\"Stephen Denne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.3rc1 Out of memory when performing update"
},
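One generic way to bound the memory and temp-file footprint of an update like this, not something proposed in the thread itself, is to update in key-range batches so that each statement (and transaction) touches a limited number of rows. The range step below is illustrative and assumes document_file.id is a reasonably dense integer key:

    -- Run each batch as its own transaction; the step size is illustrative.
    UPDATE document_file
       SET document_type_id = (SELECT document_type_id FROM document d WHERE d.id = document_id)
     WHERE id BETWEEN 1 AND 1000000;
    -- then 1000001..2000000, and so on, optionally vacuuming between batches.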
{
"msg_contents": "Em Fri, 25 Jan 2008 12:46:20 +1300\n\"Stephen Denne\" <[email protected]> escreveu:\n\n> A simple update query, over roughly 17 million rows, populating a\n> newly added column in a table, resulted in an out of memory error\n> when the process memory usage reached 2GB. Could this be due to a\n> poor choice of some configuration parameter, or is there a limit on\n> how many rows I can update in a single statement?\n> \n\n I believe that it is plataform problem. Because on *nix this limit\ndon't occur. But I don't specialist Windows.\n\n\n\nKind Regards,\n-- \nFernando Ike\nhttp://www.midstorm.org/~fike/weblog\n",
"msg_date": "Thu, 24 Jan 2008 23:14:33 -0200",
"msg_from": "Fernando Ike <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update"
},
{
"msg_contents": "\"Stephen Denne\" <[email protected]> writes:\n> A simple update query, over roughly 17 million rows, populating a newly added column in a table, resulted in an out of memory error when the process memory usage reached 2GB. Could this be due to a poor choice of some configuration parameter, or is there a limit on how many rows I can update in a single statement?\n\nDo you have any triggers or foreign keys on that table? For that\nmatter, let's see its whole schema definition.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jan 2008 21:31:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
{
"msg_contents": "> \"Stephen Denne\" <[email protected]> writes:\n> > A simple update query, over roughly 17 million rows, \n> > populating a newly added column in a table, resulted in an \n> > out of memory error when the process memory usage reached \n> > 2GB. Could this be due to a poor choice of some configuration \n> > parameter, or is there a limit on how many rows I can update \n> > in a single statement?\n> \n> Do you have any triggers or foreign keys on that table? For that\n> matter, let's see its whole schema definition.\n> \n> \t\t\tregards, tom lane\n\nNo triggers on that table, one primary key, one foreign key, two indexes.\nThe foreign key references a primary key which is also an integer.\nNo other tables which reference document_file.\nNo inherited tables.\nThere are as many document_file rows as there are rows in the document table,\ndocument_file.document_id is unique, though not constrained.\n(Designed as a one to many relationship, but only ever used as one to one.)\n\n\nI altered the update statement slightly, and reran the query.\n\nI disabled autovacuum after a while and cancelled the autovacuum process that was trying to vacuum analyze document_file.\n\nThe altered query has been running over 3 hours now,\nwithout using lots of memory (38M private bytes).\n2046 temp files were created (2.54GB worth), \nwhich have recently changed from slowly growing in size\nto very very slowly reducing in number.\n\n\nAltered query that has not crashed:\nUPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n\nHash Join (cost=674810.80..6701669.63 rows=16972702 width=621)\n Hash Cond: (df.document_id = d.id)\n -> Seq Scan on document_file df (cost=0.00..750298.65 rows=27702365 width=617)\n -> Hash (cost=396352.02..396352.02 rows=16972702 width=8)\n -> Seq Scan on document d (cost=0.00..396352.02 rows=16972702 width=8)\n\n\nc.f. original (re-explained):\nUPDATE document_file SET document_type_id = (SELECT document_type_id FROM document d where d.id = document_id);\n\nSeq Scan on document_file (cost=0.00..281183329.64 rows=27702834 width=617)\n SubPlan\n -> Index Scan using pk_document_id on document d (cost=0.00..10.12 rows=1 width=4)\n Index Cond: (id = $0)\n\n\n\nSchema as reported by pgadmin:\n\nCREATE TABLE document_file\n(\n id integer NOT NULL DEFAULT nextval(('document_file_seq'::text)::regclass),\n document_id integer NOT NULL,\n archive_directory_location character varying(255) NOT NULL,\n mime_type character varying(255),\n file_name character varying(255) NOT NULL,\n life_cycle_status character varying(255),\n version integer DEFAULT 0,\n is_current boolean DEFAULT true,\n file_size integer NOT NULL,\n document_type_id integer,\n CONSTRAINT pk_document_file_id PRIMARY KEY (id),\n CONSTRAINT fk_document_id FOREIGN KEY (document_id)\n REFERENCES document (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (OIDS=FALSE);\nALTER TABLE document_file OWNER TO postgres;\nGRANT ALL ON TABLE document_file TO postgres;\nGRANT ALL ON TABLE document_file TO vapps;\nGRANT ALL ON TABLE document_file TO vrconfig;\n\nCREATE INDEX location_ix\n ON document_file\n USING btree\n (archive_directory_location);\n\nCREATE INDEX tc_file_document\n ON document_file\n USING btree\n (document_id);\n\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. 
If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n",
"msg_date": "Fri, 25 Jan 2008 16:20:49 +1300",
"msg_from": "\"Stephen Denne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
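Since the schema above shows exactly one foreign key on document_file, a sketch of temporarily dropping and re-adding it around the bulk update (an experiment based on the constraint definition shown, not something recommended in the thread so far) would look like the following; note that re-adding the constraint re-validates all 17 million rows:

    ALTER TABLE document_file DROP CONSTRAINT fk_document_id;
    -- ... run the bulk UPDATE ...
    ALTER TABLE document_file
      ADD CONSTRAINT fk_document_id FOREIGN KEY (document_id)
      REFERENCES document (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE CASCADE;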
{
"msg_contents": "\"Stephen Denne\" <[email protected]> writes:\n> I altered the update statement slightly, and reran the query.\n> The altered query has been running over 3 hours now,\n> without using lots of memory (38M private bytes).\n> 2046 temp files were created (2.54GB worth), \n> which have recently changed from slowly growing in size\n> to very very slowly reducing in number.\n\nHmm. I think what that really means is you haven't got to the part of\nthe query where the leak is :-(. In my attempt to reproduce this\nI found that 8.3 has introduced a memory leak into the RI trigger\nsupport, such that even if an UPDATE doesn't change the FK columns\nit's still likely to leak a few dozen bytes per updated row.\n\nPlease see if the attached patch makes it better for you.\n\n\t\t\tregards, tom lane\n\nIndex: ri_triggers.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/ri_triggers.c,v\nretrieving revision 1.101\ndiff -c -r1.101 ri_triggers.c\n*** ri_triggers.c\t3 Jan 2008 21:23:15 -0000\t1.101\n--- ri_triggers.c\t25 Jan 2008 04:45:56 -0000\n***************\n*** 3099,3104 ****\n--- 3099,3106 ----\n \t\telog(ERROR, \"conkey is not a 1-D smallint array\");\n \triinfo->nkeys = numkeys;\n \tmemcpy(riinfo->fk_attnums, ARR_DATA_PTR(arr), numkeys * sizeof(int16));\n+ \tif ((Pointer) arr != DatumGetPointer(adatum))\n+ \t\tpfree(arr);\t\t\t\t/* free de-toasted copy, if any */\n \n \tadatum = SysCacheGetAttr(CONSTROID, tup,\n \t\t\t\t\t\t\t Anum_pg_constraint_confkey, &isNull);\n***************\n*** 3113,3118 ****\n--- 3115,3122 ----\n \t\tARR_ELEMTYPE(arr) != INT2OID)\n \t\telog(ERROR, \"confkey is not a 1-D smallint array\");\n \tmemcpy(riinfo->pk_attnums, ARR_DATA_PTR(arr), numkeys * sizeof(int16));\n+ \tif ((Pointer) arr != DatumGetPointer(adatum))\n+ \t\tpfree(arr);\t\t\t\t/* free de-toasted copy, if any */\n \n \tadatum = SysCacheGetAttr(CONSTROID, tup,\n \t\t\t\t\t\t\t Anum_pg_constraint_conpfeqop, &isNull);\n***************\n*** 3127,3132 ****\n--- 3131,3138 ----\n \t\tARR_ELEMTYPE(arr) != OIDOID)\n \t\telog(ERROR, \"conpfeqop is not a 1-D Oid array\");\n \tmemcpy(riinfo->pf_eq_oprs, ARR_DATA_PTR(arr), numkeys * sizeof(Oid));\n+ \tif ((Pointer) arr != DatumGetPointer(adatum))\n+ \t\tpfree(arr);\t\t\t\t/* free de-toasted copy, if any */\n \n \tadatum = SysCacheGetAttr(CONSTROID, tup,\n \t\t\t\t\t\t\t Anum_pg_constraint_conppeqop, &isNull);\n***************\n*** 3141,3146 ****\n--- 3147,3154 ----\n \t\tARR_ELEMTYPE(arr) != OIDOID)\n \t\telog(ERROR, \"conppeqop is not a 1-D Oid array\");\n \tmemcpy(riinfo->pp_eq_oprs, ARR_DATA_PTR(arr), numkeys * sizeof(Oid));\n+ \tif ((Pointer) arr != DatumGetPointer(adatum))\n+ \t\tpfree(arr);\t\t\t\t/* free de-toasted copy, if any */\n \n \tadatum = SysCacheGetAttr(CONSTROID, tup,\n \t\t\t\t\t\t\t Anum_pg_constraint_conffeqop, &isNull);\n***************\n*** 3155,3160 ****\n--- 3163,3170 ----\n \t\tARR_ELEMTYPE(arr) != OIDOID)\n \t\telog(ERROR, \"conffeqop is not a 1-D Oid array\");\n \tmemcpy(riinfo->ff_eq_oprs, ARR_DATA_PTR(arr), numkeys * sizeof(Oid));\n+ \tif ((Pointer) arr != DatumGetPointer(adatum))\n+ \t\tpfree(arr);\t\t\t\t/* free de-toasted copy, if any */\n \n \tReleaseSysCache(tup);\n }",
"msg_date": "Thu, 24 Jan 2008 23:50:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
{
"msg_contents": "I don't have a PostgreSQL build environment.\n \nIt is now Friday night for me. I left the alternate query running, and will find out on Monday what happened.\n \nIf I drop the fk constraint, and/or its index, would I still be affected by the leak you found?\n \nRegards,\nStephen Denne.\n \n\n________________________________\n\nFrom: Tom Lane [mailto:[email protected]]\nSent: Fri 25/01/2008 5:50 p.m.\nTo: Stephen Denne\nCc: [email protected]\nSubject: Re: [PERFORM] 8.3rc1 Out of memory when performing update \n\n\n\n\"Stephen Denne\" <[email protected]> writes:\n> I altered the update statement slightly, and reran the query.\n> The altered query has been running over 3 hours now,\n> without using lots of memory (38M private bytes).\n> 2046 temp files were created (2.54GB worth),\n> which have recently changed from slowly growing in size\n> to very very slowly reducing in number.\n\nHmm. I think what that really means is you haven't got to the part of\nthe query where the leak is :-(. In my attempt to reproduce this\nI found that 8.3 has introduced a memory leak into the RI trigger\nsupport, such that even if an UPDATE doesn't change the FK columns\nit's still likely to leak a few dozen bytes per updated row.\n\nPlease see if the attached patch makes it better for you.\n\n regards, tom lane\n\n\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\nRe: [PERFORM] 8.3rc1 Out of memory when performing update\n\n\n\n\nI don't have a PostgreSQL build environment.\n \nIt is now Friday night for me. I left the alternate query running, and will find out on Monday what happened.\n \nIf I drop the fk constraint, and/or its index, would I still be affected by the leak you found?\n \nRegards,\nStephen Denne.\n \n\n\nFrom: Tom Lane [mailto:[email protected]]Sent: Fri 25/01/2008 5:50 p.m.To: Stephen DenneCc: [email protected]: Re: [PERFORM] 8.3rc1 Out of memory when performing update \n\n\"Stephen Denne\" <[email protected]> writes:> I altered the update statement slightly, and reran the query.> The altered query has been running over 3 hours now,> without using lots of memory (38M private bytes).> 2046 temp files were created (2.54GB worth),> which have recently changed from slowly growing in size> to very very slowly reducing in number.Hmm. I think what that really means is you haven't got to the part ofthe query where the leak is :-(. In my attempt to reproduce thisI found that 8.3 has introduced a memory leak into the RI triggersupport, such that even if an UPDATE doesn't change the FK columnsit's still likely to leak a few dozen bytes per updated row.Please see if the attached patch makes it better for you. regards, tom lane\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. 
If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________",
"msg_date": "Fri, 25 Jan 2008 23:14:40 +1300",
"msg_from": "\"Stephen Denne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
{
"msg_contents": "> \n> > A simple update query, over roughly 17 million rows, populating a\n> > newly added column in a table, resulted in an out of memory error\n> > when the process memory usage reached 2GB. Could this be due to a\n> > poor choice of some configuration parameter, or is there a limit on\n> > how many rows I can update in a single statement?\n> > \n> \n> I believe that it is plataform problem. Because on *nix this limit\n> don't occur. But I don't specialist Windows.\n\nOn most Windows Servers(except for database edition and a few other\nvariants), 2Gb is the most you can address to a single\nprocess without booting the machine with a special parameter called \\3G\nwhich will allow for allocating up to 3Gb per process. That is the\nlimit unless you get special versions of windows server 2003 as far as I\nknow. If you do a google search on \\3G with windows you will find what\nI am refering too.\n\nHope that helps a bit.\n\nCheers,\n\n---\nCurtis Gallant\[email protected]\n",
"msg_date": "Fri, 25 Jan 2008 13:32:46 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update"
},
{
"msg_contents": "> Subject: Re: [PERFORM] 8.3rc1 Out of memory when performing update\n> \n> >\n> > > A simple update query, over roughly 17 million rows, populating a\n> > > newly added column in a table, resulted in an out of memory error\n> > > when the process memory usage reached 2GB. Could this be due to a\n> > > poor choice of some configuration parameter, or is there a limit\non\n> > > how many rows I can update in a single statement?\n> > >\n> >\n> > I believe that it is plataform problem. Because on *nix this\nlimit\n> > don't occur. But I don't specialist Windows.\n> \n> On most Windows Servers(except for database edition and a few other\n> variants), 2Gb is the most you can address to a single\n> process without booting the machine with a special parameter called\n\\3G\n> which will allow for allocating up to 3Gb per process. That is the\n> limit unless you get special versions of windows server 2003 as far as\nI\n> know. If you do a google search on \\3G with windows you will find\nwhat\n> I am refering too.\n\nWindows 32 bit is limited to 2 or 3 GB as you state but 64 bit Windows\nisn't. 32 bit Linux has similar limits too.\n\n\nJon\n",
"msg_date": "Fri, 25 Jan 2008 12:36:28 -0600",
"msg_from": "\"Roberts, Jon\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update"
},
{
"msg_contents": "Roberts, Jon wrote:\n>> Subject: Re: [PERFORM] 8.3rc1 Out of memory when performing update\n>>\n>>>> A simple update query, over roughly 17 million rows, populating a\n>>>> newly added column in a table, resulted in an out of memory error\n>>>> when the process memory usage reached 2GB. Could this be due to a\n>>>> poor choice of some configuration parameter, or is there a limit\n> on\n>>>> how many rows I can update in a single statement?\n>>>>\n>>> I believe that it is plataform problem. Because on *nix this\n> limit\n>>> don't occur. But I don't specialist Windows.\n>> On most Windows Servers(except for database edition and a few other\n>> variants), 2Gb is the most you can address to a single\n>> process without booting the machine with a special parameter called\n> \\3G\n>> which will allow for allocating up to 3Gb per process. That is the\n>> limit unless you get special versions of windows server 2003 as far as\n> I\n>> know. If you do a google search on \\3G with windows you will find\n> what\n>> I am refering too.\n> \n> Windows 32 bit is limited to 2 or 3 GB as you state but 64 bit Windows\n> isn't. 32 bit Linux has similar limits too.\n\nWell, PostgreSQL on Windows is a 32-bit binary, so the limit applies to \nthis case.\n\n//Magnus\n",
"msg_date": "Fri, 25 Jan 2008 19:45:43 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update"
},
{
"msg_contents": ">>\"Stephen Denne\" <[email protected]> writes:\n>>> I altered the update statement slightly, and reran the query.\n>>> The altered query has been running over 3 hours now,\n>>> without using lots of memory (38M private bytes).\n>>> 2046 temp files were created (2.54GB worth),\n>>> which have recently changed from slowly growing in size\n>>> to very very slowly reducing in number.\n\nTo which Tom Lane replied:\n>>Hmm. I think what that really means is you haven't got to the part of\n>>the query where the leak is :-(.\n\nI then said:\n> It is now Friday night for me. I left the alternate query running, and will find out on Monday what happened.\n\nWell, it is now Monday morning for me, and those temp files are still slowly reducing in number.\nThere are now only 1629 of them left, so I'm guessing that the query is about 20% done.\nThe PC seems to have been steadily but very slowly working away at this very simple query for close to 70 hours.\nI decided not to leave this query running for a fortnight to find out if I then strike the memory leak.\nPrivate Bytes had grown to 685MB\nI cancelled the query.\n\nRough snapshot of what was happening with IO (a single 7200 IDE disk):\n\nThe process for the update query was reading about 500KB/second , writing between 80KB/second to 200KB/second.\nThe stats collector process was writing about 100KB/second\nThe wal writer process was writing about 200KB/second\nThe writer process was writing about 400KB/second\nCheckpoints were 10 minutes apart, taking about 85 seconds to write 1000 buffers.\n\nWhat could cause such poor performance?\nI presume that the disk was being forced to move the head a great deal.\n\nI also asked:\n> If I drop the fk constraint, and/or its index, would I still be affected by the leak you found?\n\nI dropped two indexes and one fk constraint and ran VACUUM FULL VERBOSE ANALYZE document_file;\nAs an indication of the disk performance: at its peak the vacuum process was reading and writing 20MB/seconds (sustained), completing in less than 11 minutes.\n\nI reran the original query.\nIt used constant memory (6.7MB private bytes)\nIt was reading 2 to 3MB/second, writing 3 to 6MB/second.\nThe stats collector process was writing about 100KB/second\nThe wal writer process was writing about 200KB/second\nThe writer process was initially writing about 1MB/second, increasing to about 3MB/second\n\nCheckpoints in the middle of this query were taking up to 13 seconds to write 100 buffers.\n\nThe checkpoint after the query took 300 seconds (exactly half the checkpoint interval), and was writing about 60KB/second. It wrote 2148 buffers.\n\nSo dropping the fk constraint and index results in successful query execution with constant memory usage. Does this confirm that the memory leak you found is the one I was suffering from?\n\nI'd also class the extremely poor performance of the alternate query as a bug.\nWhy take a fortnight when you could take three quarters of an hour? (Granted there where two less indexes to update, but that is still too long.)\n\nAside: I must say that I am impressed with PostgreSQL's handling of this connection. It recovers extremely well from running out of memory, cancelling very long running queries, reloading config (to turn autovacuum off), and continues to work as expected (the 3 day old connection that is).\n\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. 
This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n",
"msg_date": "Mon, 28 Jan 2008 10:55:10 +1300",
"msg_from": "\"Stephen Denne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
{
"msg_contents": "\"Stephen Denne\" <[email protected]> writes:\n> So dropping the fk constraint and index results in successful query execution with constant memory usage. Does this confirm that the memory leak you found is the one I was suffering from?\n\nWell, it confirms that you were suffering from that memory leak. What's\nnot clear is whether the leak completely explains the bad performance\nyou saw. The leak would have resulted in extra swapping, but I wouldn't\nhave thought it would drive the machine completely into swap hell.\nWould the monitoring tools you were using have shown swapping I/O?\n\nIf you want to try a focused experiment, please apply the patch shown\nhere:\nhttp://archives.postgresql.org/pgsql-committers/2008-01/msg00322.php\nand see if the behavior gets better.\n\n> I'd also class the extremely poor performance of the alternate query as a bug.\n\nYeah, we should look into that. The plan you showed before estimated\nabout 16.9M rows in \"document\" --- is that about right? What have you\ngot work_mem set to?\n\n> Aside: I must say that I am impressed with PostgreSQL's handling of this connection. It recovers extremely well from running out of memory, cancelling very long running queries, reloading config (to turn autovacuum off), and continues to work as expected (the 3 day old connection that is).\n\nHey, we've had plenty of opportunity to improve the system's robustness\nin the face of such things ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Jan 2008 23:16:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Stephen Denne\" <[email protected]> writes:\n> > So dropping the fk constraint and index results in \n> successful query execution with constant memory usage. Does \n> this confirm that the memory leak you found is the one I was \n> suffering from?\n> \n> Well, it confirms that you were suffering from that memory \n> leak. What's\n> not clear is whether the leak completely explains the bad performance\n> you saw. The leak would have resulted in extra swapping, but \n> I wouldn't\n> have thought it would drive the machine completely into swap hell.\n\nThe query crashing from out of memory did so after an hour,\nwhich isn't bad performance given the workaround with less indexes to update succeeded after 45 minutes.\n\nIt was the rewritten one which I killed after 3 days.\n\n> Would the monitoring tools you were using have shown swapping I/O?\n\nI was using Process Explorer, which shows page faults and deltas,\nwhich are not included in the read & write IO stats.\nThe query with poor IO performance wasn't swapping.\n\n> > I'd also class the extremely poor performance of the \n> alternate query as a bug.\n> \n> Yeah, we should look into that. The plan you showed before estimated\n> about 16.9M rows in \"document\" --- is that about right? What have you\n> got work_mem set to?\n\nYes, 16894164 rows.\nExactly the same number of rows in document as in document_file.\n[count(*) queries taking 38 and 63 seconds]\n\nwork_mem appears to be left as the default 1MB\n\nI get 1023 temp files created straight away, which take four minutes (250s) to grow to about 448K each\n(reading @ 5MB/s writing @ 2MB/s)\nmemory usage during this first phase slowly increased from 13.4M to 14.4M\nthen 1023 more temp files are created, and they grow to about 2170K each\n(reading @ 2MB/s writing @ 2MB/s until the checkpoint starts, when the speed decreases to 200K/s, and doesn't increase again after the checkpoint finishes.)\nmemory usage during this first phase slowly increased from 22.5M to 26.4M\nMy concern is with what it then does. 
(Spends a fortnight doing really slow IO)\n\nAn hour's worth of logs from during this phase show 6 checkpoints, and 6 temp files reported (which seems to coincide with them being deleted):\n\n2008-01-26 06:02:08.086 NZDT [3432]: [233-1] LOG: checkpoint starting: time\n2008-01-26 06:03:28.916 NZDT [3432]: [234-1] LOG: checkpoint complete: wrote 899 buffers (21.9%); 0 transaction log file(s) added, 0 removed, 11 recycled; write=77.798 s, sync=2.750 s, total=80.830 s\n2008-01-26 06:12:08.094 NZDT [3432]: [235-1] LOG: checkpoint starting: time\n2008-01-26 06:12:23.407 NZDT [3824]: [209-1] LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp3824.1321\", size 2224520\n2008-01-26 06:12:23.407 NZDT [3824]: [210-1] STATEMENT: UPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n2008-01-26 06:12:24.157 NZDT [3824]: [211-1] LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp3824.477\", size 461356\n2008-01-26 06:12:24.157 NZDT [3824]: [212-1] STATEMENT: UPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n2008-01-26 06:13:21.876 NZDT [3432]: [236-1] LOG: checkpoint complete: wrote 724 buffers (17.7%); 0 transaction log file(s) added, 0 removed, 17 recycled; write=71.500 s, sync=2.108 s, total=73.781 s\n2008-01-26 06:22:08.024 NZDT [3432]: [237-1] LOG: checkpoint starting: time\n2008-01-26 06:23:25.415 NZDT [3432]: [238-1] LOG: checkpoint complete: wrote 877 buffers (21.4%); 0 transaction log file(s) added, 0 removed, 11 recycled; write=74.141 s, sync=2.985 s, total=77.391 s\n2008-01-26 06:29:36.311 NZDT [3824]: [213-1] LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp3824.1350\", size 2220990\n2008-01-26 06:29:36.311 NZDT [3824]: [214-1] STATEMENT: UPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n2008-01-26 06:29:36.982 NZDT [3824]: [215-1] LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp3824.516\", size 463540\n2008-01-26 06:29:36.982 NZDT [3824]: [216-1] STATEMENT: UPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n2008-01-26 06:32:08.016 NZDT [3432]: [239-1] LOG: checkpoint starting: time\n2008-01-26 06:33:19.501 NZDT [3432]: [240-1] LOG: checkpoint complete: wrote 872 buffers (21.3%); 0 transaction log file(s) added, 0 removed, 15 recycled; write=69.062 s, sync=2.171 s, total=71.484 s\n2008-01-26 06:42:08.101 NZDT [3432]: [241-1] LOG: checkpoint starting: time\n2008-01-26 06:43:27.431 NZDT [3432]: [242-1] LOG: checkpoint complete: wrote 813 buffers (19.8%); 0 transaction log file(s) added, 0 removed, 14 recycled; write=76.579 s, sync=2.592 s, total=79.329 s\n2008-01-26 06:46:45.558 NZDT [3824]: [217-1] LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp3824.1940\", size 2229130\n2008-01-26 06:46:45.558 NZDT [3824]: [218-1] STATEMENT: UPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n2008-01-26 06:46:46.246 NZDT [3824]: [219-1] LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp3824.631\", size 459564\n2008-01-26 06:46:46.246 NZDT [3824]: [220-1] STATEMENT: UPDATE ONLY document_file AS df SET document_type_id = d.document_type_id FROM document AS d WHERE d.id = document_id;\n2008-01-26 06:52:08.078 NZDT [3432]: [243-1] LOG: checkpoint starting: time\n2008-01-26 06:53:31.173 NZDT [3432]: [244-1] LOG: checkpoint complete: wrote 983 buffers (24.0%); 0 
transaction log file(s) added, 0 removed, 13 recycled; write=78.203 s, sync=4.641 s, total=83.094 s\n\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n",
"msg_date": "Mon, 28 Jan 2008 18:04:00 +1300",
"msg_from": "\"Stephen Denne\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update "
},
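Given that work_mem was left at the 1MB default and the hash join spilled to a couple of thousand temp files, one thing worth trying is raising work_mem for just this session. This is only a sketch with an illustrative value; the 32-bit Windows address-space limit discussed above caps how far it can be pushed, and a hash of ~17M rows may still spill, just into fewer, larger batches:

    SET work_mem = '256MB';   -- illustrative; the default here was 1MB
    UPDATE ONLY document_file AS df
       SET document_type_id = d.document_type_id
      FROM document AS d
     WHERE d.id = df.document_id;
    RESET work_mem;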
{
"msg_contents": "On Jan 25, 2008 5:50 AM, Tom Lane <[email protected]> wrote:\n> Hmm. I think what that really means is you haven't got to the part of\n> the query where the leak is :-(. In my attempt to reproduce this\n> I found that 8.3 has introduced a memory leak into the RI trigger\n> support, such that even if an UPDATE doesn't change the FK columns\n> it's still likely to leak a few dozen bytes per updated row.\n>\n> Please see if the attached patch makes it better for you.\n\nJust FYI, somebody on #postgresql had the exact same problem of\nmemleaks during update yesterday and your patch fixed it for him too.\n\n--\nGuillaume\n",
"msg_date": "Mon, 28 Jan 2008 09:18:08 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.3rc1 Out of memory when performing update"
}
] |
[
{
"msg_contents": "Hi,\n\nI'd appreciate some assistance in working through what would be the \noptimal configuration for the following situation.\n\nWe currently have one large DB (~1.2TB on disk), that essentially \nconsists of 1 table with somewhere in the order of 500 million rows , \nthis database has daily inserts as well as being used for some semi- \ndata mining type operations, so there are a fairly large number of \nindices on the table. The hardware storing this DB (a software RAID6) \narray seems to be very IO bound for writes and this is restricting our \ninsert performance to ~50TPS.\n\nAs we need to achieve significantly faster insert performance I have \nbeen considering splitting the table into 'new' and 'old' data, \nmaking inserts into the 'new' table (which will also help as there are \nreally 1 insert, an update and some selects involved in populating the \ntable), then moving the data over to the 'old' DB on a periodic \nbasis. There would be new hardware involved, I'm thinking of HW RAID \n10 to improve the write performance.\n\nThe question really is, is it best to use two separate servers and \ndatabases (and have to come up with some copy process to move data \nfrom one to another), or to just add the faster storage hardware to \nthe existing server and create a new tablespace for the 'new data' \ntable on that hardware. Doing this would enable us to potentially \nmove data more easily from new to old (we can't use partitioning \nbecause there is some logic involved in when things would need to be \nmoved to 'old'). Are there any global resources that make just adding \nthe faster storage to the existing box a bad idea (the wal_log springs \nto mind - although that could be moved too), that would make adding an \nadditional server instead a better idea?\n\nAlso are there any settings that might boost our TPS on the existing \nhardware (sync=off isn't an option.. (-: ). I have already \nsignificantly increased the various buffers, but this was mainly to \nimprove select performance?\n\nVerson of Postgresql is 8.2.3.\n\nThanks,\n\nDavid.\n\n\n\n",
"msg_date": "Fri, 25 Jan 2008 11:36:52 -0500",
"msg_from": "David Brain <[email protected]>",
"msg_from_op": true,
"msg_subject": "1 or 2 servers for large DB scenario."
},
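For the single-server option, the tablespace part is straightforward. A minimal sketch, with an invented tablespace name, path and table name since none are given in the post (the location must be an existing empty directory owned by the postgres user):

    CREATE TABLESPACE fast_raid10 LOCATION '/mnt/raid10/pgdata';
    -- put the 'new data' table on the new hardware
    CREATE TABLE calls_new (LIKE calls INCLUDING DEFAULTS) TABLESPACE fast_raid10;
    -- existing tables or indexes can also be moved later (this rewrites them):
    ALTER INDEX calls_pkey SET TABLESPACE fast_raid10;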
{
"msg_contents": "On Fri, 25 Jan 2008, David Brain wrote:\n\n> The hardware storing this DB (a software RAID6) array seems to be very \n> IO bound for writes and this is restricting our insert performance to \n> ~50TPS.\n\nIf you're seeing <100TPS you should consider if it's because you're \nlimited by how fast WAL commits can make it to disk. If you really want \ngood insert performance, there is no substitute for getting a disk \ncontroller with a good battery-backed cache to work around that. You \ncould just put the WAL xlog directory on a RAID-1 pair of disks to \naccelerate that, you don't have to move the whole database to a new \ncontroller.\n\n> Also are there any settings that might boost our TPS on the existing hardware \n> (sync=off isn't an option..\n\nHaving a good write cache lets you run almost as fast as when fsync is \noff.\n\n> Verson of Postgresql is 8.2.3.\n\nI'm hearing an echo here...8.2 versions before 8.2.4 have a bug related to \nstatistics that can limit performance in several situations. You should \nconsider an upgrade just to remove that as a potential contributor to your \nissues.\n\nTo do a quick check on whether this is impacting things, run top, press \n\"c\" to show the full process lines, and note whether the statistics \ncollector process is taking up a significant amount of CPU time. If it \nis, you're definately being nailed by the bug, and you really need that \nupgrade.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 25 Jan 2008 11:55:43 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 or 2 servers for large DB scenario."
},
{
"msg_contents": "On Fri, 25 Jan 2008, David Brain wrote:\n> We currently have one large DB (~1.2TB on disk), that essentially consists of \n> 1 table with somewhere in the order of 500 million rows , this database has \n> daily inserts as well as being used for some semi-data mining type \n> operations, so there are a fairly large number of indices on the table. The \n> hardware storing this DB (a software RAID6) array seems to be very IO bound \n> for writes and this is restricting our insert performance to ~50TPS.\n\nAs you have such a complex insert procedure, I'm not so surprised that you \nare getting this kind of performance. Your average discs will do something \nlike 200 seeks per second, so if you are having to perform four seeks per \ntransaction, that would explain it. Remember, on software RAID 6 (without \na battery backed up cache) all the discs will probably need to participate \nin each transaction.\n\nYour suggestion of splitting the data seems hinged around having a smaller \ntable resulting in quicker SELECTs - it might be worth doing an experiment \nto see whether this is actually the case. My guess is that you may not \nactually get much of an improvement.\n\nSo, my suggestion would be to:\n1. Make sure the server has plenty of RAM, so hopefully a lot of the\n SELECT traffic hits the cache.\n2. Upgrade your disc system to hardware RAID, with a battery-backed-up\n cache. This will enable the writes to occur immediately without having\n to wait for the discs each time. RAID 6 sounds fine, as long as there\n is a battery-backed-up cache in there somewhere. Without that, it can\n be a little crippled.\n\nWe don't actually have that much information on how much time Postgres is \nspending on each of the different activities, but the above is probably a \ngood place to start.\n\nHope that helps,\n\nMatthew\n",
"msg_date": "Fri, 25 Jan 2008 16:56:24 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 or 2 servers for large DB scenario."
},
{
"msg_contents": "On Fri, 25 Jan 2008, Greg Smith wrote:\n> If you're seeing <100TPS you should consider if it's because you're limited \n> by how fast WAL commits can make it to disk. If you really want good insert \n> performance, there is no substitute for getting a disk controller with a good \n> battery-backed cache to work around that. You could just put the WAL xlog \n> directory on a RAID-1 pair of disks to accelerate that, you don't have to \n> move the whole database to a new controller.\n\nHey, you *just* beat me to it.\n\nYes, that's quite right. My suggestion was to move the whole thing, but \nGreg is correct - you only need to put the WAL on a cached disc system. \nThat'd be quite a bit cheaper, I'd imagine.\n\nAnother case of that small SSD drive being useful, I think.\n\nMatthew\n",
"msg_date": "Fri, 25 Jan 2008 17:00:01 +0000 (GMT)",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 or 2 servers for large DB scenario."
},
{
"msg_contents": "Matthew wrote:\n> On Fri, 25 Jan 2008, Greg Smith wrote:\n>> If you're seeing <100TPS you should consider if it's because you're \n>> limited by how fast WAL commits can make it to disk. If you really \n>> want good insert performance, there is no substitute for getting a \n>> disk controller with a good battery-backed cache to work around that. \n>> You could just put the WAL xlog directory on a RAID-1 pair of disks to \n>> accelerate that, you don't have to move the whole database to a new \n>> controller.\n> \n> Hey, you *just* beat me to it.\n> \n> Yes, that's quite right. My suggestion was to move the whole thing, but \n> Greg is correct - you only need to put the WAL on a cached disc system. \n> That'd be quite a bit cheaper, I'd imagine.\n> \n> Another case of that small SSD drive being useful, I think.\n\nPostgreSQL 8.3 will have \"asynchronous commits\" feature, which should \neliminate that bottleneck without new hardware, if you can accept the \nloss of last few transaction commits in case of sudden power loss:\n\nhttp://www.postgresql.org/docs/8.3/static/wal-async-commit.html\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sun, 27 Jan 2008 10:47:30 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 or 2 servers for large DB scenario."
},
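A sketch of what using that 8.3 feature looks like; it can be set globally in postgresql.conf (synchronous_commit = off) or per session/transaction:

    SET synchronous_commit TO off;
    BEGIN;
    -- ... the insert/update workload ...
    COMMIT;  -- returns without waiting for the WAL flush; a crash can lose the
             -- last few commits, but does not corrupt the database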
{
"msg_contents": "Hi David,\n\nI have been running few tests with 8.2.4 and here is what I have seen:\n\nIf fysnc=off is not an option (and it should not be an option :-) )\nthen commit_delay=10 setting seems to help a lot in my OLTP runs. \nGranted it will delay your transactions a bit, but the gain is big \nconsidering the WAL writes end up doing bigger writes under high load \nand got a good boost in performance due to that change (IIRC it was \nabout 6-10% depending on load and contention). So that might help out.\n\nCuriosly I did spend why it helps out on write contention. Atleast on \nSolaris my observation is WAL logs then end up getting bigger than 8K \n(Blocksize). This meant an overall reduction in IOPS on the filesystem \nthats holding the logs and hence more IOPS capacity available to do do \nmore Log writes. (Using EAStress type of benchmark, it ended up doing \nsomewhere between 128K-256KB writes on the logs which was pretty \nfascinating since the benchmark does drive fair amount of WAL writes and \nwithout commit_delay, the disks were pretty saturated quickly.\n\nAlso if the load is high, then the delay in transaction is pretty much \nnon existent. (atleast what I observed with commit_delay=10 and \ncommit_siblings left to default)\n\n\nOf course as already replied back, 8.3's async commit helps on top of \ncommit_delay so thats an option if few transactions loss potential is \nacceptable.\n\n-Jignesh\n\n\n\nDavid Brain wrote:\n> Hi,\n>\n> I'd appreciate some assistance in working through what would be the \n> optimal configuration for the following situation.\n>\n> We currently have one large DB (~1.2TB on disk), that essentially \n> consists of 1 table with somewhere in the order of 500 million rows , \n> this database has daily inserts as well as being used for some \n> semi-data mining type operations, so there are a fairly large number \n> of indices on the table. The hardware storing this DB (a software \n> RAID6) array seems to be very IO bound for writes and this is \n> restricting our insert performance to ~50TPS.\n>\n> As we need to achieve significantly faster insert performance I have \n> been considering splitting the table into 'new' and 'old' data, \n> making inserts into the 'new' table (which will also help as there are \n> really 1 insert, an update and some selects involved in populating the \n> table), then moving the data over to the 'old' DB on a periodic \n> basis. There would be new hardware involved, I'm thinking of HW RAID \n> 10 to improve the write performance.\n>\n> The question really is, is it best to use two separate servers and \n> databases (and have to come up with some copy process to move data \n> from one to another), or to just add the faster storage hardware to \n> the existing server and create a new tablespace for the 'new data' \n> table on that hardware. Doing this would enable us to potentially \n> move data more easily from new to old (we can't use partitioning \n> because there is some logic involved in when things would need to be \n> moved to 'old'). Are there any global resources that make just adding \n> the faster storage to the existing box a bad idea (the wal_log springs \n> to mind - although that could be moved too), that would make adding an \n> additional server instead a better idea?\n>\n> Also are there any settings that might boost our TPS on the existing \n> hardware (sync=off isn't an option.. (-: ). 
I have already \n> significantly increased the various buffers, but this was mainly to \n> improve select performance?\n>\n> Verson of Postgresql is 8.2.3.\n>\n> Thanks,\n>\n> David.\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n",
"msg_date": "Sun, 27 Jan 2008 19:53:04 -0500",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 or 2 servers for large DB scenario."
},
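Expressed as settings, what Jignesh describes is roughly the following (commit_delay is in microseconds, commit_siblings defaults to 5, and both can also be set in postgresql.conf):

    SET commit_delay = 10;     -- wait briefly so concurrent commits can share one WAL flush
    SET commit_siblings = 5;   -- only delay if at least this many transactions are active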
{
"msg_contents": "On Jan 25, 2008 11:36 AM, David Brain <[email protected]> wrote:\n> I'd appreciate some assistance in working through what would be the\n> optimal configuration for the following situation.\n>\n> We currently have one large DB (~1.2TB on disk), that essentially\n> consists of 1 table with somewhere in the order of 500 million rows ,\n> this database has daily inserts as well as being used for some semi-\n> data mining type operations, so there are a fairly large number of\n> indices on the table. The hardware storing this DB (a software RAID6)\n> array seems to be very IO bound for writes and this is restricting our\n> insert performance to ~50TPS.\n>\n> As we need to achieve significantly faster insert performance I have\n> been considering splitting the table into 'new' and 'old' data,\n> making inserts into the 'new' table (which will also help as there are\n> really 1 insert, an update and some selects involved in populating the\n> table), then moving the data over to the 'old' DB on a periodic\n> basis. There would be new hardware involved, I'm thinking of HW RAID\n> 10 to improve the write performance.\n>\n> The question really is, is it best to use two separate servers and\n> databases (and have to come up with some copy process to move data\n> from one to another), or to just add the faster storage hardware to\n> the existing server and create a new tablespace for the 'new data'\n> table on that hardware. Doing this would enable us to potentially\n> move data more easily from new to old (we can't use partitioning\n> because there is some logic involved in when things would need to be\n> moved to 'old'). Are there any global resources that make just adding\n> the faster storage to the existing box a bad idea (the wal_log springs\n> to mind - although that could be moved too), that would make adding an\n> additional server instead a better idea?\n>\n> Also are there any settings that might boost our TPS on the existing\n> hardware (sync=off isn't an option.. (-: ). I have already\n> significantly increased the various buffers, but this was mainly to\n> improve select performance?\n\nI would (amalgamating suggestions from others and adding my own):\n*) get off raid 6 asap. raid 6 is wild wild west in database terms\n*) partition this table. if you have a lot of indexes on the table\nyou might be running into random read problems. I'm not a huge fan in\npartitioning in most cases, but your case passes the smell test.\nUnique constraints are a problem, so partition wisely.\n*) move wal to separate device(s). you could see as much as double\ntps, but probably less than that. a single 15k drive will do, or two\nin a raid 1. contrary to the others, I would advise _against_ a ssd\nfor the wal...wal writing is mostly sequential and ssd is unlikely to\nhelp (where ssd is most likely to pay off is in the database volume\nfor faster random reads...likely not cost effective).\n*) and, for heaven's sake, if there is any way for you to normalize\nyour database into more than one table, do so :-)\n\nmerlin\n",
"msg_date": "Sun, 27 Jan 2008 23:25:26 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 or 2 servers for large DB scenario."
}
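A sketch of the inheritance-based partitioning Merlin suggests, using invented table and column names since the actual schema isn't shown in the thread (this is the usual 8.2 approach):

    CREATE TABLE calls_2008_01 (
        CHECK (call_date >= DATE '2008-01-01' AND call_date < DATE '2008-02-01')
    ) INHERITS (calls);
    CREATE INDEX calls_2008_01_call_date_idx ON calls_2008_01 (call_date);
    -- with constraint_exclusion = on, the planner skips partitions whose CHECK
    -- constraint rules them out; inserts are routed by the application or by a
    -- trigger/rule on the parent table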
] |
[
{
"msg_contents": "\nHi,\n\nI've got a pg database, and a batch process that generates some metadata to\nbe inserted into one of the tables. Every 15 minutes or so, the batch script\nre-calculates the meta data (600,000 rows), dumps it to file, and then does\na TRUNCATE table followed by a COPY to import that file into the table.\n\nThe problem is, that whilst this process is happening, other queries against\nthis table time out. I've tried to copy into a temp table before doing an\n\"INSERT INTO table (SELECT * FROM temp)\", but the second statement still\ntakes a lot of time and causes a loss of performance.\n\nSo, what's the best way to import my metadata without it affecting the\nperformance of other queries?\n\nThanks,\n\nAndrew \n-- \nView this message in context: http://www.nabble.com/How-do-I-bulk-insert-to-a-table-without-affecting-read-performance-on-that-table--tp15099164p15099164.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Fri, 25 Jan 2008 15:27:05 -0800 (PST)",
"msg_from": "growse <[email protected]>",
"msg_from_op": true,
"msg_subject": "How do I bulk insert to a table without affecting read\n\tperformance on that table?"
},
{
"msg_contents": "On Jan 25, 2008 5:27 PM, growse <[email protected]> wrote:\n>\n> Hi,\n>\n> I've got a pg database, and a batch process that generates some metadata to\n> be inserted into one of the tables. Every 15 minutes or so, the batch script\n> re-calculates the meta data (600,000 rows), dumps it to file, and then does\n> a TRUNCATE table followed by a COPY to import that file into the table.\n>\n> The problem is, that whilst this process is happening, other queries against\n> this table time out. I've tried to copy into a temp table before doing an\n> \"INSERT INTO table (SELECT * FROM temp)\", but the second statement still\n> takes a lot of time and causes a loss of performance.\n\nCan you import to another table then\n\nbegin;\nalter table realtable rename to garbage;\nalter table loadtable rename to realtable;\ncommit;\n\n?\n\n>\n> So, what's the best way to import my metadata without it affecting the\n> performance of other queries?\n>\n> Thanks,\n>\n> Andrew\n> --\n> View this message in context: http://www.nabble.com/How-do-I-bulk-insert-to-a-table-without-affecting-read-performance-on-that-table--tp15099164p15099164.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n",
"msg_date": "Fri, 25 Jan 2008 20:06:58 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do I bulk insert to a table without affecting read\n\tperformance on that table?"
},
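Fleshing out Scott's swap into a full load cycle might look like this (table names taken from his example; the file path and index definition are invented):

    CREATE TABLE loadtable (LIKE realtable INCLUDING DEFAULTS);
    COPY loadtable FROM '/tmp/metadata.csv' WITH CSV;
    CREATE INDEX loadtable_id_idx ON loadtable (id);  -- whatever the readers need
    ANALYZE loadtable;
    BEGIN;
    ALTER TABLE realtable RENAME TO garbage;
    ALTER TABLE loadtable RENAME TO realtable;
    COMMIT;
    DROP TABLE garbage;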
{
"msg_contents": "\n\nScott Marlowe-2 wrote:\n> \n> On Jan 25, 2008 5:27 PM, growse <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> I've got a pg database, and a batch process that generates some metadata\n>> to\n>> be inserted into one of the tables. Every 15 minutes or so, the batch\n>> script\n>> re-calculates the meta data (600,000 rows), dumps it to file, and then\n>> does\n>> a TRUNCATE table followed by a COPY to import that file into the table.\n>>\n>> The problem is, that whilst this process is happening, other queries\n>> against\n>> this table time out. I've tried to copy into a temp table before doing an\n>> \"INSERT INTO table (SELECT * FROM temp)\", but the second statement still\n>> takes a lot of time and causes a loss of performance.\n> \n> Can you import to another table then\n> \n> begin;\n> alter table realtable rename to garbage;\n> alter table loadtable rename to realtable;\n> commit;\n> \n> ?\n> \n>>\n>> So, what's the best way to import my metadata without it affecting the\n>> performance of other queries?\n>>\n>> Thanks,\n>>\n>> Andrew\n>> --\n>> View this message in context:\n>> http://www.nabble.com/How-do-I-bulk-insert-to-a-table-without-affecting-read-performance-on-that-table--tp15099164p15099164.html\n>> Sent from the PostgreSQL - performance mailing list archive at\n>> Nabble.com.\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n\nThis is a possibility. My question on this is that would an ALTER TABLE real\nRENAME TO garbage be faster than a DROP TABLE real?\n-- \nView this message in context: http://www.nabble.com/How-do-I-bulk-insert-to-a-table-without-affecting-read-performance-on-that-table--tp15099164p15107074.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Sat, 26 Jan 2008 03:42:45 -0800 (PST)",
"msg_from": "growse <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How do I bulk insert to a table without affecting\n\tread performance on that table?"
},
{
"msg_contents": "On Jan 26, 2008 5:42 AM, growse <[email protected]> wrote:\n>\n>\n>\n> Scott Marlowe-2 wrote:\n> >\n> > On Jan 25, 2008 5:27 PM, growse <[email protected]> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I've got a pg database, and a batch process that generates some metadata\n> >> to\n> >> be inserted into one of the tables. Every 15 minutes or so, the batch\n> >> script\n> >> re-calculates the meta data (600,000 rows), dumps it to file, and then\n> >> does\n> >> a TRUNCATE table followed by a COPY to import that file into the table.\n> >>\n> >> The problem is, that whilst this process is happening, other queries\n> >> against\n> >> this table time out. I've tried to copy into a temp table before doing an\n> >> \"INSERT INTO table (SELECT * FROM temp)\", but the second statement still\n> >> takes a lot of time and causes a loss of performance.\n> >\n> > Can you import to another table then\n> >\n> > begin;\n> > alter table realtable rename to garbage;\n> > alter table loadtable rename to realtable;\n> > commit;\n> >\n> > ?\n>\n> This is a possibility. My question on this is that would an ALTER TABLE real\n> RENAME TO garbage be faster than a DROP TABLE real?\n\nI don't know. They're both pretty fast. I'd do a test, with parallel\ncontention on the table and see.\n",
"msg_date": "Sat, 26 Jan 2008 07:39:22 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do I bulk insert to a table without affecting read\n\tperformance on that table?"
},
{
"msg_contents": ">>> On Sat, Jan 26, 2008 at 7:39 AM, in message\n<[email protected]>, \"Scott Marlowe\"\n<[email protected]> wrote: \n> On Jan 26, 2008 5:42 AM, growse <[email protected]> wrote:\n>> Scott Marlowe-2 wrote:\n>> > Can you import to another table then\n>> >\n>> > begin;\n>> > alter table realtable rename to garbage;\n>> > alter table loadtable rename to realtable;\n>> > commit;\n>> >\n>> > ?\n>>\n>> This is a possibility. My question on this is that would an ALTER TABLE real\n>> RENAME TO garbage be faster than a DROP TABLE real?\n> \n> I don't know. They're both pretty fast. I'd do a test, with parallel\n> contention on the table and see.\n \nWe do something similar (using DROP TABLE) on a weekly cycle.\nWe get occasional errors, even with the database transaction.\nI wonder whether we might dodge them by using the rename, and\nthen dropping the old table after a brief delay.\n \n-Kevin\n \n\n\n",
"msg_date": "Thu, 31 Jan 2008 17:06:20 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do I bulk insert to a table without\n\taffecting read performance on that table?"
}
] |
[
{
"msg_contents": "Hi\n\nI am investigating migrating from postgres 743 to postgres 826 but \nalthough the performance in postgres 826 seems to be generally better \nthere are some instances where it seems to be markedly worse, a factor \nof up to 10. The problem seems to occur when I join to more than 4 \ntables. Has anyone else experienced anything similar or got any \nsuggestions as to what I might do? I am running on an intel box with two \nhyper threaded cores and 4GB of RAM. I have tweaked the postgres.conf \nfiles with these values and the query and explain output are below. In \nthis case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n\nThanks in advance for any help.\n\nRegards\nMatthew\n\n8.2.6\nshared_buffers = 500MB\nwork_mem = 10MB\nmaintenance_work_mem = 100MB\neffective_cache_size = 2048MB\ndefault_statistics_target = 1000\n\n7.4.3\nshared_buffers = 51200\nsort_mem = 10240\nvacuum_mem = 81920\neffective_cache_size = 102400\n\nexplain analyze\n SELECT *\n FROM market mrkt\n JOIN market_group_relation mgr USING (market_id)\n JOIN market_group mg USING (market_group_id)\n JOIN market_group_price_relation mgpr USING (market_group_id)\n JOIN accommodation_price_panel app ON \napp.accommodation_price_panel_id = mgpr.price_panel_id\n JOIN daily_rates dr USING (accommodation_price_panel_id)\nWHERE mrkt.live <> 'X'::bpchar AND mg.live <> 'X'::bpchar AND app.live \n<> 'X'::bpchar\nAND dr.min_group_size = 0\nAND MARKET_ID = 10039 \nAND CODE = 'LONHRL'\nAND CODE_TYPE = 'IS'\nAND ROOM_TYPE = 'Zk'\nAND BOARD_TYPE = 'BB'\nAND CONTRACT_ID = '16077'\nAND ( START_DATE BETWEEN '2008-05-22' AND '2008-05-31' OR '2008-05-22' \nBETWEEN START_DATE AND END_DATE )\n\n\"Nested Loop (cost=37.27..48.34 rows=1 width=458) (actual \ntime=1.474..2.138 rows=14 loops=1)\"\n\" -> Nested Loop (cost=37.27..42.34 rows=1 width=282) (actual \ntime=1.428..1.640 rows=2 loops=1)\"\n\" -> Hash Join (cost=37.27..40.68 rows=1 width=199) (actual \ntime=1.367..1.516 rows=2 loops=1)\"\n\" Hash Cond: (\"outer\".market_group_id = \n\"inner\".market_group_id)\"\n\" -> Seq Scan on market_group mg (cost=0.00..3.01 rows=78 \nwidth=81) (actual time=0.004..0.105 rows=80 loops=1)\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Hash (cost=37.27..37.27 rows=1 width=126) (actual \ntime=1.325..1.325 rows=0 loops=1)\"\n\" -> Hash Join (cost=12.66..37.27 rows=1 width=126) \n(actual time=1.051..1.321 rows=2 loops=1)\"\n\" Hash Cond: (\"outer\".market_group_id = \n\"inner\".market_group_id)\"\n\" -> Seq Scan on market_group_relation mgr \n(cost=0.00..24.46 rows=27 width=31) (actual time=0.165..0.641 rows=30 \nloops=1)\"\n\" Filter: (10039 = market_id)\"\n\" -> Hash (cost=12.66..12.66 rows=2 width=95) \n(actual time=0.641..0.641 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..12.66 \nrows=2 width=95) (actual time=0.056..0.593 rows=27 loops=1)\"\n\" -> Index Scan using \naccommodation_price_panel_idx1 on accommodation_price_panel app \n(cost=0.00..6.02 rows=1 width=60) (actual time=0.037..0.200 rows=27 \nloops=1)\"\n\" Index Cond: ((contract_id = \n16077) AND ((code)::text = 'LONHRL'::text) AND (code_type = 'IS'::bpchar))\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Index Scan using \nmarket_group_price_relation_pkey on market_group_price_relation mgpr \n(cost=0.00..6.62 rows=1 width=35) (actual time=0.007..0.008 rows=1 \nloops=27)\"\n\" Index Cond: \n(\"outer\".accommodation_price_panel_id = mgpr.price_panel_id)\"\n\" -> Seq Scan on market mrkt (cost=0.00..1.65 rows=1 width=87) \n(actual time=0.045..0.046 rows=1 loops=2)\"\n\" 
Filter: ((live <> 'X'::bpchar) AND (market_id = 10039))\"\n\" -> Index Scan using daily_rates_pkey on daily_rates dr \n(cost=0.00..5.99 rows=1 width=180) (actual time=0.022..0.113 rows=7 \nloops=2)\"\n\" Index Cond: ((dr.accommodation_price_panel_id = \n\"outer\".price_panel_id) AND (dr.room_type = 'Zk'::bpchar))\"\n\" Filter: ((min_group_size = 0) AND (board_type = 'BB'::bpchar) \nAND (('2008-05-22'::date >= start_date) OR (start_date >= \n'2008-05-22'::date)) AND (('2008-05-22'::date <= end_date) OR \n(start_date >= '2008-05-22'::date)) AND (('2008-05-22'::date >= st (..)\"\n\"Total runtime: 2.332 ms\"\n\n\n\"Nested Loop (cost=0.00..30.39 rows=1 width=458) (actual \ntime=0.123..5.841 rows=14 loops=1)\"\n\" -> Nested Loop (cost=0.00..29.70 rows=1 width=439) (actual \ntime=0.099..4.590 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..29.40 rows=1 width=358) (actual \ntime=0.091..3.243 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..21.07 rows=1 width=327) \n(actual time=0.081..1.571 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..10.40 rows=1 \nwidth=147) (actual time=0.053..0.134 rows=27 loops=1)\"\n\" -> Seq Scan on market mrkt (cost=0.00..2.08 \nrows=1 width=87) (actual time=0.022..0.023 rows=1 loops=1)\"\n\" Filter: ((live <> 'X'::bpchar) AND \n(market_id = 10039))\"\n\" -> Index Scan using \naccommodation_price_panel_idx1 on accommodation_price_panel app \n(cost=0.00..8.31 rows=1 width=60) (actual time=0.027..0.071 rows=27 \nloops=1)\"\n\" Index Cond: ((contract_id = 16077) AND \n((code)::text = 'LONHRL'::text) AND (code_type = 'IS'::bpchar))\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Index Scan using daily_rates_pkey on \ndaily_rates dr (cost=0.00..10.64 rows=1 width=180) (actual \ntime=0.019..0.038 rows=7 loops=27)\"\n\" Index Cond: \n((app.accommodation_price_panel_id = dr.accommodation_price_panel_id) \nAND (dr.room_type = 'Zk'::bpchar) AND (dr.board_type = 'BB'::bpchar) AND \n(dr.min_group_size = 0))\"\n\" Filter: (((start_date >= '2008-05-22'::date) \nAND (start_date <= '2008-05-31'::date)) OR (('2008-05-22'::date >= \nstart_date) AND ('2008-05-22'::date <= end_date)))\"\n\" -> Index Scan using market_group_price_relation_pkey on \nmarket_group_price_relation mgpr (cost=0.00..8.31 rows=1 width=35) \n(actual time=0.005..0.006 rows=1 loops=189)\"\n\" Index Cond: (app.accommodation_price_panel_id = \nmgpr.price_panel_id)\"\n\" -> Index Scan using market_group_pkey on market_group mg \n(cost=0.00..0.28 rows=1 width=81) (actual time=0.003..0.004 rows=1 \nloops=189)\"\n\" Index Cond: (mgpr.market_group_id = mg.market_group_id)\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Index Scan using market_group_relation_idx2 on \nmarket_group_relation mgr (cost=0.00..0.67 rows=1 width=31) (actual \ntime=0.005..0.005 rows=0 loops=189)\"\n\" Index Cond: (mgr.market_group_id = mg.market_group_id)\"\n\" Filter: (10039 = market_id)\"\n\"Total runtime: 6.037 ms\"\n\n",
"msg_date": "Mon, 28 Jan 2008 11:41:41 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issues migrating from 743 to 826"
},
{
"msg_contents": "\"Matthew Lunnon\" <[email protected]> writes:\n\n> In this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n\nThe difference between 2ms and 6ms is pretty negligable. A single context\nswitch or disk cache miss could throw the results off by that margin in either\ndirection.\n\nBut what plan does 7.4.3 come up with if you set enable_hashjoins = off? I'm\ncurious whether it comes up with the same nested loops plan as 8.2 and what\ncost it says it has.\n\nI think you need to find queries which take longer to have any reliable\nperformance comparisons. Note that the configuration parameters here aren't\nthe same at all, it's possible the change of effective_cache_size from 800k to\n2GB is what's changing the cost estimation. I seem to recall a change in the\narithmetic for calculatin Nested loop costs too which made it more aggressive\nin estimating cache effectiveness.\n\nIncidentally, default_statistics_target=1000 is awfully aggressive. I found in\nthe past that that caused the statistics table to become much larger and much\nslower to access. It may have caused some statistics to be toasted or it may\nhave just been the sheer volume of data present. It will also make your\nANALYZEs take a lot longer. I would suggest trying 100 first and incrementally\nraising it rather than jumping straight to 1000. And preferably only on the\ncolumns which really matter.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n",
"msg_date": "Mon, 28 Jan 2008 12:41:03 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues migrating from 743 to 826"
},
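A sketch of the test Gregory suggests, run against the 7.4 box; the query is the one from the original post, lightly reformatted:

    SET enable_hashjoin = off;
    EXPLAIN ANALYZE
    SELECT *
      FROM market mrkt
      JOIN market_group_relation mgr USING (market_id)
      JOIN market_group mg USING (market_group_id)
      JOIN market_group_price_relation mgpr USING (market_group_id)
      JOIN accommodation_price_panel app
           ON app.accommodation_price_panel_id = mgpr.price_panel_id
      JOIN daily_rates dr USING (accommodation_price_panel_id)
     WHERE mrkt.live <> 'X' AND mg.live <> 'X' AND app.live <> 'X'
       AND dr.min_group_size = 0
       AND market_id = 10039
       AND code = 'LONHRL' AND code_type = 'IS'
       AND room_type = 'Zk' AND board_type = 'BB'
       AND contract_id = '16077'
       AND (start_date BETWEEN '2008-05-22' AND '2008-05-31'
            OR '2008-05-22' BETWEEN start_date AND end_date);
    RESET enable_hashjoin;

If 7.4 then produces the same nested-loop plan as 8.2.6, the estimated cost it reports makes the two planners directly comparable.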
{
"msg_contents": "On Jan 28, 2008 5:41 AM, Matthew Lunnon <[email protected]> wrote:\n> Hi\n>\n> I am investigating migrating from postgres 743 to postgres 826 but\n> although the performance in postgres 826 seems to be generally better\n> there are some instances where it seems to be markedly worse, a factor\n> of up to 10. The problem seems to occur when I join to more than 4\n> tables. Has anyone else experienced anything similar or got any\n> suggestions as to what I might do? I am running on an intel box with two\n> hyper threaded cores and 4GB of RAM. I have tweaked the postgres.conf\n> files with these values and the query and explain output are below. In\n> this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n\nIt looks like the data are not the same in these two environments.\n\n> 8.2.6\n> shared_buffers = 500MB\n> work_mem = 10MB\n> maintenance_work_mem = 100MB\n> effective_cache_size = 2048MB\n> default_statistics_target = 1000\n\nThat's very high for the default. Planning times will be increased noticeably\n\nPlan for 7.4:\n\n> \"Nested Loop (cost=37.27..48.34 rows=1 width=458) (actual\n> time=1.474..2.138 rows=14 loops=1)\"\n> \" -> Nested Loop (cost=37.27..42.34 rows=1 width=282) (actual\n> time=1.428..1.640 rows=2 loops=1)\"\n\nThis is processing 2 rows...\n\n> \"Total runtime: 2.332 ms\"\n\nWhile this is processing 189 rows:\n\n> \"Nested Loop (cost=0.00..30.39 rows=1 width=458) (actual\n> time=0.123..5.841 rows=14 loops=1)\"\n> \" -> Nested Loop (cost=0.00..29.70 rows=1 width=439) (actual\n> time=0.099..4.590 rows=189 loops=1)\"\n\nHardly seems a fair comparison.\n",
"msg_date": "Mon, 28 Jan 2008 09:14:00 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues migrating from 743 to 826"
},
{
"msg_contents": "Hi Scott,\n Thanks for your time\nRegards\nMatthew\n\nScott Marlowe wrote:\n> On Jan 28, 2008 5:41 AM, Matthew Lunnon <[email protected]> wrote:\n> \n>> Hi\n>>\n>> I am investigating migrating from postgres 743 to postgres 826 but\n>> although the performance in postgres 826 seems to be generally better\n>> there are some instances where it seems to be markedly worse, a factor\n>> of up to 10. The problem seems to occur when I join to more than 4\n>> tables. Has anyone else experienced anything similar or got any\n>> suggestions as to what I might do? I am running on an intel box with two\n>> hyper threaded cores and 4GB of RAM. I have tweaked the postgres.conf\n>> files with these values and the query and explain output are below. In\n>> this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n>> \n>\n> It looks like the data are not the same in these two environments.\n>\n> \n>> 8.2.6\n>> shared_buffers = 500MB\n>> work_mem = 10MB\n>> maintenance_work_mem = 100MB\n>> effective_cache_size = 2048MB\n>> default_statistics_target = 1000\n>> \n>\n> That's very high for the default. Planning times will be increased noticeably\n> \nI had originally left the default_statistics_target at its default and \nthen increased it to 100, but this did not seem to make much \ndifference. I will reduce this down to something more normal again.\n> Plan for 7.4:\n>\n> \n>> \"Nested Loop (cost=37.27..48.34 rows=1 width=458) (actual\n>> time=1.474..2.138 rows=14 loops=1)\"\n>> \" -> Nested Loop (cost=37.27..42.34 rows=1 width=282) (actual\n>> time=1.428..1.640 rows=2 loops=1)\"\n>> \n>\n> This is processing 2 rows...\n>\n> \n>> \"Total runtime: 2.332 ms\"\n>> \n>\n> While this is processing 189 rows:\n>\n> \n>> \"Nested Loop (cost=0.00..30.39 rows=1 width=458) (actual\n>> time=0.123..5.841 rows=14 loops=1)\"\n>> \" -> Nested Loop (cost=0.00..29.70 rows=1 width=439) (actual\n>> time=0.099..4.590 rows=189 loops=1)\"\n>> \n>\n> Hardly seems a fair comparison.\n> \nThe queries were on exactly the same data. My interpretation of what is \ngoing on here is that 8.2.6 seems to be leaving the filtering of \nmarket_id to the very last point, which is why it ends up with 189 rows \nat this point instead of the 2 that 743 has. 743 seems to do that \nfiltering much earlier and so reduce the number of rows at a much \nearlier point in the execution of the query. I guess that this is \nsomething to do with the planner which is why I tried increasing the \ndefault_statistics_target.\n> _____________________________________________________________________\n> This e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http://www.verizonbusiness.com/uk\n> \n\n\n\n\n\n\n\nHi Scott,\n Thanks for your time\nRegards\nMatthew\n\nScott Marlowe wrote:\n\nOn Jan 28, 2008 5:41 AM, Matthew Lunnon <[email protected]> wrote:\n \n\nHi\n\nI am investigating migrating from postgres 743 to postgres 826 but\nalthough the performance in postgres 826 seems to be generally better\nthere are some instances where it seems to be markedly worse, a factor\nof up to 10. The problem seems to occur when I join to more than 4\ntables. Has anyone else experienced anything similar or got any\nsuggestions as to what I might do? I am running on an intel box with two\nhyper threaded cores and 4GB of RAM. I have tweaked the postgres.conf\nfiles with these values and the query and explain output are below. 
In\nthis case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n \n\n\nIt looks like the data are not the same in these two environments.\n\n \n\n8.2.6\nshared_buffers = 500MB\nwork_mem = 10MB\nmaintenance_work_mem = 100MB\neffective_cache_size = 2048MB\ndefault_statistics_target = 1000\n \n\n\nThat's very high for the default. Planning times will be increased noticeably\n \n\nI had originally left the default_statistics_target at its default and\nthen increased it to 100, but this did not seem to make much\ndifference. I will reduce this down to something more normal again.\n\n\nPlan for 7.4:\n\n \n\n\"Nested Loop (cost=37.27..48.34 rows=1 width=458) (actual\ntime=1.474..2.138 rows=14 loops=1)\"\n\" -> Nested Loop (cost=37.27..42.34 rows=1 width=282) (actual\ntime=1.428..1.640 rows=2 loops=1)\"\n \n\n\nThis is processing 2 rows...\n\n \n\n\"Total runtime: 2.332 ms\"\n \n\n\nWhile this is processing 189 rows:\n\n \n\n\"Nested Loop (cost=0.00..30.39 rows=1 width=458) (actual\ntime=0.123..5.841 rows=14 loops=1)\"\n\" -> Nested Loop (cost=0.00..29.70 rows=1 width=439) (actual\ntime=0.099..4.590 rows=189 loops=1)\"\n \n\n\nHardly seems a fair comparison.\n \n\nThe queries were on exactly the same data. My\ninterpretation of what is going on here is that 8.2.6 seems to be\nleaving the filtering of market_id to the very last point, which is why\nit ends up with 189 rows at this point instead of the 2 that 743 has.\n743 seems to do\nthat filtering much earlier and so reduce the number of rows at a much\nearlier point in the execution of the query. I guess that this is\nsomething to do with\nthe planner which is why I tried increasing the\ndefault_statistics_target.\n\n\n_____________________________________________________________________\nThis e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http://www.verizonbusiness.com/uk",
"msg_date": "Mon, 28 Jan 2008 15:27:50 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues migrating from 743 to 826"
},
{
"msg_contents": "Whatever email agent you're using seems to be quoting in a way that\ndoesn't get along well with gmail, so I'm just gonna chop most of it\nrather than have it quoted confusingly... Heck, I woulda chopped a\nlot anyway to keep it small. :)\n\nOn Jan 28, 2008 9:27 AM, Matthew Lunnon <[email protected]> wrote:\n>\n> Scott Marlowe wrote:\n> On Jan 28, 2008 5:41 AM, Matthew Lunnon <[email protected]> wrote:\n> default_statistics_target = 1000\n> > That's very high for the default. Planning times will be increased\n> > noticeably\n>\n> I had originally left the default_statistics_target at its default and then\n> increased it to 100, but this did not seem to make much difference. I will\n> reduce this down to something more normal again.\n\nYou do know that if you create a column when the default is 10, then\nincrease the default, it won't change the column's stats target,\nright? So, assuming the table was first created, then you changed the\ndefault, you'll now need to do:\n\nalter table xyz alter column abc set statistics 100;\nanalyze xyz;\n\nfor it to make any difference.\n\n> The queries were on exactly the same data. My interpretation of what is\n> going on here is that 8.2.6 seems to be leaving the filtering of market_id\n> to the very last point, which is why it ends up with 189 rows at this point\n> instead of the 2 that 743 has. 743 seems to do that filtering much earlier\n> and so reduce the number of rows at a much earlier point in the execution of\n> the query. I guess that this is something to do with the planner which is\n> why I tried increasing the default_statistics_target.\n\nAhh, I'm guessing it's something that your 7.4 database CAN use an\nindex on and your 8.2 data base can't use an index on. Like text in a\nnon-C locale. Or something... Table def?\n",
"msg_date": "Mon, 28 Jan 2008 09:39:54 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues migrating from 743 to 826"
},
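Following on from Scott's point: a quick way to see which per-column targets are actually in effect (table name taken from the thread; a value of -1 means the column still follows default_statistics_target):

    SELECT attname, attstattarget
      FROM pg_attribute
     WHERE attrelid = 'market_group_relation'::regclass
       AND attnum > 0
       AND NOT attisdropped;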
{
"msg_contents": "Hi Gregory/All,\n\nThanks for your time.\n\nYes the difference is pretty small but does seem to be consistent, the \nproblem that I have is that this is just part of the query, I have tried \nto break things down so that I can see where the time is being spent. I \nset the default_statistics_target to 1000 after going via 100 but it \nseemed to make no difference.\n\nI have a confession to make though, this is not like for like. I did in \nfact have to add a couple of indexes to the data as the performance was \nso bad with 8.2.6. Very sorry for that, it doesn't help. The actual \ndifference if from 2ms to 57ms when these indexes are removed which is \nmuch more significant. Here is the like for like comparison with 8.2.6, \nthe indexes were added to the market_group_relation table since it is \ndoing a seq scan at the very end.\n\n\"Nested Loop (cost=0.00..54.03 rows=1 width=458) (actual \ntime=0.279..57.457 rows=14 loops=1)\"\n\" Join Filter: (mgr.market_group_id = mgpr.market_group_id)\"\n\" -> Nested Loop (cost=0.00..29.19 rows=1 width=439) (actual \ntime=0.102..4.867 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..28.91 rows=1 width=358) (actual \ntime=0.095..3.441 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..20.60 rows=1 width=327) \n(actual time=0.082..1.639 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..9.95 rows=1 width=147) \n(actual time=0.054..0.138 rows=27 loops=1)\"\n\" -> Seq Scan on market mrkt (cost=0.00..1.65 \nrows=1 width=87) (actual time=0.020..0.020 rows=1 loops=1)\"\n\" Filter: ((live <> 'X'::bpchar) AND \n(market_id = 10039))\"\n\" -> Index Scan using \naccommodation_price_panel_idx1 on accommodation_price_panel app \n(cost=0.00..8.30 rows=1 width=60) (actual time=0.029..0.079 rows=27 \nloops=1)\"\n\" Index Cond: ((contract_id = 16077) AND \n((code)::text = 'LONHRL'::text) AND (code_type = 'IS'::bpchar))\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Index Scan using daily_rates_pkey on \ndaily_rates dr (cost=0.00..10.63 rows=1 width=180) (actual \ntime=0.021..0.041 rows=7 loops=27)\"\n\" Index Cond: \n((app.accommodation_price_panel_id = dr.accommodation_price_panel_id) \nAND (dr.room_type = 'Zk'::bpchar) AND (dr.board_type = 'BB'::bpchar) AND \n(dr.min_group_size = 0))\"\n\" Filter: (((start_date >= '2008-05-22'::date) \nAND (start_date <= '2008-05-31'::date)) OR (('2008-05-22'::date >= \nstart_date) AND ('2008-05-22'::date <= end_date)))\"\n\" -> Index Scan using market_group_price_relation_pkey on \nmarket_group_price_relation mgpr (cost=0.00..8.30 rows=1 width=35) \n(actual time=0.005..0.006 rows=1 loops=189)\"\n\" Index Cond: (app.accommodation_price_panel_id = \nmgpr.price_panel_id)\"\n\" -> Index Scan using market_group_pkey on market_group mg \n(cost=0.00..0.27 rows=1 width=81) (actual time=0.003..0.004 rows=1 \nloops=189)\"\n\" Index Cond: (mgpr.market_group_id = mg.market_group_id)\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Seq Scan on market_group_relation mgr (cost=0.00..24.46 rows=30 \nwidth=31) (actual time=0.068..0.259 rows=30 loops=189)\"\n\" Filter: (10039 = market_id)\"\n\"Total runtime: 57.648 ms\"\n\n\n\nGregory Stark wrote:\n> \"Matthew Lunnon\" <[email protected]> writes:\n>\n> \n>> In this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n>> \n>\n> The difference between 2ms and 6ms is pretty negligable. A single context\n> switch or disk cache miss could throw the results off by that margin in either\n> direction.\n>\n> But what plan does 7.4.3 come up with if you set enable_hashjoins = off? 
I'm\n> curious whether it comes up with the same nested loops plan as 8.2 and what\n> cost it says it has.\n> \nI'll investigate and let you know.\n> I think you need to find queries which take longer to have any reliable\n> performance comparisons. Note that the configuration parameters here aren't\n> the same at all, it's possible the change of effective_cache_size from 800k to\n> 2GB is what's changing the cost estimation. I seem to recall a change in the\n> arithmetic for calculatin Nested loop costs too which made it more aggressive\n> in estimating cache effectiveness.\n>\n> Incidentally, default_statistics_target=1000 is awfully aggressive. I found in\n> the past that that caused the statistics table to become much larger and much\n> slower to access. It may have caused some statistics to be toasted or it may\n> have just been the sheer volume of data present. It will also make your\n> ANALYZEs take a lot longer. I would suggest trying 100 first and incrementally\n> raising it rather than jumping straight to 1000. And preferably only on the\n> columns which really matter.\n>\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\nHi Gregory/All,\n\nThanks for your time.\n\nYes the difference is pretty small but does seem to be consistent, the\nproblem that I have is that this is just part of the query, I have\ntried to break things down so that I can see where the time is being\nspent. I set the default_statistics_target to 1000 after going via\n100 but it seemed to make no difference. \n\nI have a confession to make though, this is not like for like. I did\nin fact have to add a couple of indexes to the data as the performance\nwas so bad with 8.2.6. Very sorry for that, it doesn't help. The\nactual difference if from 2ms to 57ms when these indexes are removed\nwhich is much more significant. 
Here is the like for like comparison\nwith 8.2.6, the indexes were added to the market_group_relation table\nsince it is doing a seq scan at the very end.\n\n\"Nested Loop (cost=0.00..54.03 rows=1 width=458) (actual\ntime=0.279..57.457 rows=14 loops=1)\"\n\" Join Filter: (mgr.market_group_id = mgpr.market_group_id)\"\n\" -> Nested Loop (cost=0.00..29.19 rows=1 width=439) (actual\ntime=0.102..4.867 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..28.91 rows=1 width=358)\n(actual time=0.095..3.441 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..20.60 rows=1 width=327)\n(actual time=0.082..1.639 rows=189 loops=1)\"\n\" -> Nested Loop (cost=0.00..9.95 rows=1\nwidth=147) (actual time=0.054..0.138 rows=27 loops=1)\"\n\" -> Seq Scan on market mrkt \n(cost=0.00..1.65 rows=1 width=87) (actual time=0.020..0.020 rows=1\nloops=1)\"\n\" Filter: ((live <> 'X'::bpchar)\nAND (market_id = 10039))\"\n\" -> Index Scan using\naccommodation_price_panel_idx1 on accommodation_price_panel app \n(cost=0.00..8.30 rows=1 width=60) (actual time=0.029..0.079 rows=27\nloops=1)\"\n\" Index Cond: ((contract_id = 16077) AND\n((code)::text = 'LONHRL'::text) AND (code_type = 'IS'::bpchar))\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Index Scan using daily_rates_pkey on\ndaily_rates dr (cost=0.00..10.63 rows=1 width=180) (actual\ntime=0.021..0.041 rows=7 loops=27)\"\n\" Index Cond:\n((app.accommodation_price_panel_id = dr.accommodation_price_panel_id)\nAND (dr.room_type = 'Zk'::bpchar) AND (dr.board_type = 'BB'::bpchar)\nAND (dr.min_group_size = 0))\"\n\" Filter: (((start_date >=\n'2008-05-22'::date) AND (start_date <= '2008-05-31'::date)) OR\n(('2008-05-22'::date >= start_date) AND ('2008-05-22'::date <=\nend_date)))\"\n\" -> Index Scan using market_group_price_relation_pkey\non market_group_price_relation mgpr (cost=0.00..8.30 rows=1 width=35)\n(actual time=0.005..0.006 rows=1 loops=189)\"\n\" Index Cond: (app.accommodation_price_panel_id =\nmgpr.price_panel_id)\"\n\" -> Index Scan using market_group_pkey on market_group mg \n(cost=0.00..0.27 rows=1 width=81) (actual time=0.003..0.004 rows=1\nloops=189)\"\n\" Index Cond: (mgpr.market_group_id = mg.market_group_id)\"\n\" Filter: (live <> 'X'::bpchar)\"\n\" -> Seq Scan on market_group_relation mgr (cost=0.00..24.46\nrows=30 width=31) (actual time=0.068..0.259 rows=30 loops=189)\"\n\" Filter: (10039 = market_id)\"\n\"Total runtime: 57.648 ms\"\n\n\n\nGregory Stark wrote:\n\n\"Matthew Lunnon\" <[email protected]> writes:\n\n \n\nIn this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.\n \n\n\nThe difference between 2ms and 6ms is pretty negligable. A single context\nswitch or disk cache miss could throw the results off by that margin in either\ndirection.\n\nBut what plan does 7.4.3 come up with if you set enable_hashjoins = off? I'm\ncurious whether it comes up with the same nested loops plan as 8.2 and what\ncost it says it has.\n \n\nI'll investigate and let you know.\n\n\nI think you need to find queries which take longer to have any reliable\nperformance comparisons. Note that the configuration parameters here aren't\nthe same at all, it's possible the change of effective_cache_size from 800k to\n2GB is what's changing the cost estimation. I seem to recall a change in the\narithmetic for calculatin Nested loop costs too which made it more aggressive\nin estimating cache effectiveness.\n\nIncidentally, default_statistics_target=1000 is awfully aggressive. 
I found in\nthe past that that caused the statistics table to become much larger and much\nslower to access. It may have caused some statistics to be toasted or it may\nhave just been the sheer volume of data present. It will also make your\nANALYZEs take a lot longer. I would suggest trying 100 first and incrementally\nraising it rather than jumping straight to 1000. And preferably only on the\ncolumns which really matter.\n\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Mon, 28 Jan 2008 15:49:25 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues migrating from 743 to 826"
},
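For reference, an index along these lines is presumably the kind of thing that was added to remove the seq scan on market_group_relation shown at the bottom of this plan (index names are hypothetical):

    CREATE INDEX market_group_relation_market_id_idx
        ON market_group_relation (market_id);
    -- or, covering both the filter and the join column in one index:
    CREATE INDEX market_group_relation_mkt_grp_idx
        ON market_group_relation (market_id, market_group_id);
    ANALYZE market_group_relation;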
{
"msg_contents": "Scott Marlowe wrote:\n> Whatever email agent you're using seems to be quoting in a way that\n> doesn't get along well with gmail, so I'm just gonna chop most of it\n> rather than have it quoted confusingly... Heck, I woulda chopped a\n> lot anyway to keep it small. :)\n> \nThanks again for your time. I'm using Thunderbird, maybe I need to upgrade.\n> On Jan 28, 2008 9:27 AM, Matthew Lunnon <[email protected]> wrote:\n> \n>> Scott Marlowe wrote:\n>> On Jan 28, 2008 5:41 AM, Matthew Lunnon <[email protected]> wrote:\n>> default_statistics_target = 1000\n>> \n>>> That's very high for the default. Planning times will be increased\n>>> noticeably\n>>> \n>> I had originally left the default_statistics_target at its default and then\n>> increased it to 100, but this did not seem to make much difference. I will\n>> reduce this down to something more normal again.\n>> \n>\n> You do know that if you create a column when the default is 10, then\n> increase the default, it won't change the column's stats target,\n> right? So, assuming the table was first created, then you changed the\n> default, you'll now need to do:\n>\n> alter table xyz alter column abc set statistics 100;\n> analyze xyz;\n>\n> for it to make any difference.\n> \nThanks I haven't looked into this yet, I'll look. When I changed the \ndefault_stats_target it did take a very long time to do its analyze so I \nassumed it was doing something.\n> \n>> The queries were on exactly the same data. My interpretation of what is\n>> going on here is that 8.2.6 seems to be leaving the filtering of market_id\n>> to the very last point, which is why it ends up with 189 rows at this point\n>> instead of the 2 that 743 has. 743 seems to do that filtering much earlier\n>> and so reduce the number of rows at a much earlier point in the execution of\n>> the query. I guess that this is something to do with the planner which is\n>> why I tried increasing the default_statistics_target.\n>> \n>\n> Ahh, I'm guessing it's something that your 7.4 database CAN use an\n> index on and your 8.2 data base can't use an index on. Like text in a\n> non-C locale. Or something... Table def?\n> \nThanks, I'll take a look at that, is there any documentation on what \n8.2.6. can't use in an index? It didn't seem to have complained about \nany of my indexes when I generated the database.\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n> _____________________________________________________________________\n> This e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http://www.verizonbusiness.com/uk\n> \n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--\n\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nWhatever email agent you're using seems to be quoting in a way that\ndoesn't get along well with gmail, so I'm just gonna chop most of it\nrather than have it quoted confusingly... Heck, I woulda chopped a\nlot anyway to keep it small. :)\n \n\nThanks again for your time. I'm using Thunderbird, maybe I need to\nupgrade.\n\n\nOn Jan 28, 2008 9:27 AM, Matthew Lunnon <[email protected]> wrote:\n \n\n Scott Marlowe wrote:\n On Jan 28, 2008 5:41 AM, Matthew Lunnon <[email protected]> wrote:\ndefault_statistics_target = 1000\n \n\n That's very high for the default. 
Planning times will be increased\nnoticeably\n \n\n I had originally left the default_statistics_target at its default and then\nincreased it to 100, but this did not seem to make much difference. I will\nreduce this down to something more normal again.\n \n\n\nYou do know that if you create a column when the default is 10, then\nincrease the default, it won't change the column's stats target,\nright? So, assuming the table was first created, then you changed the\ndefault, you'll now need to do:\n\nalter table xyz alter column abc set statistics 100;\nanalyze xyz;\n\nfor it to make any difference.\n \n\nThanks I haven't looked into this yet, I'll look. When I changed the\ndefault_stats_target it did take a very long time to do its analyze so\nI assumed it was doing something.\n\n\n \n\n The queries were on exactly the same data. My interpretation of what is\ngoing on here is that 8.2.6 seems to be leaving the filtering of market_id\nto the very last point, which is why it ends up with 189 rows at this point\ninstead of the 2 that 743 has. 743 seems to do that filtering much earlier\nand so reduce the number of rows at a much earlier point in the execution of\nthe query. I guess that this is something to do with the planner which is\nwhy I tried increasing the default_statistics_target.\n \n\n\nAhh, I'm guessing it's something that your 7.4 database CAN use an\nindex on and your 8.2 data base can't use an index on. Like text in a\nnon-C locale. Or something... Table def?\n \n\nThanks, I'll take a look at that, is there any documentation on what\n8.2.6. can't use in an index? It didn't seem to have complained about\nany of my indexes when I generated the database.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n_____________________________________________________________________\nThis e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http://www.verizonbusiness.com/uk\n \n\n\n-- \nMatthew Lunnon\nTechnical Consultant\nRWA Ltd.\n\n [email protected]\n Tel: +44 (0)29 2081 5056\n www.rwa-net.co.uk\n--",
"msg_date": "Mon, 28 Jan 2008 15:54:07 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issues migrating from 743 to 826"
}
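To illustrate the locale point Scott raises (not necessarily the cause in this thread): in a cluster initialised with a non-C locale, a plain btree index cannot be used for LIKE 'prefix%' searches, but an index built with a pattern operator class can. A hypothetical example against the code column:

    CREATE INDEX accommodation_price_panel_code_like
        ON accommodation_price_panel (code varchar_pattern_ops);

The table definition Scott asks for can be produced with psql's \d accommodation_price_panel, which shows the column types and existing index definitions.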
] |
[
{
"msg_contents": "Hi ms\n\nI have a query which runs pretty quick ( 0.82ms) but when I put it \ninside a stored procedure it takes 10 times as long (11.229ms). Is \nthis what you would expect and is there any way that I can get around \nthis time delay?\n\npostgres.conf changes.\n\nshared_buffers = 500MB\nwork_mem = 10MB\nmaintenance_work_mem = 100MB\neffective_cache_size = 2048MB\ndefault_statistics_target = 1000\n\nThanks for any help.\n\nRegards\nMatthew.\n\n",
"msg_date": "Mon, 28 Jan 2008 12:02:26 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problems inside a stored procedure."
},
{
"msg_contents": "Matthew Lunnon wrote:\n> I have a query which runs pretty quick ( 0.82ms) but when I put it \n> inside a stored procedure it takes 10 times as long (11.229ms). Is \n> this what you would expect and is there any way that I can get around \n> this time delay?\n\nIt depends. You'll need to show us the function. Also, what version of \nPostgres are you running?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 28 Jan 2008 12:06:48 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems inside a stored procedure."
},
{
"msg_contents": "\nAhh, sorry, I have been too aggressive with my cutting, I am running \n8.2.6 and the function is below.\n\nThanks.\nMatthew\n\nCREATE OR REPLACE FUNCTION sp_get_price_panel_id(int4, \"varchar\", \n\"varchar\", \"varchar\", bpchar)\n RETURNS SETOF t_market_price_panel AS\n$BODY$\nSELECT *\n FROM market mrkt\n JOIN market_group_relation mgr USING (market_id)\n JOIN market_group mg USING (market_group_id)\n JOIN market_group_price_relation mgpr USING (market_group_id)\n JOIN accommodation_price_panel app ON \napp.accommodation_price_panel_id = mgpr.price_panel_id\nWHERE mrkt.live <> 'X'::bpchar AND mg.live <> 'X'::bpchar AND app.live \n<> 'X'::bpchar\n AND MARKET_ID = $1 \n AND CODE = $2\n AND CODE_TYPE = $3::CHAR(2)\n AND CONTRACT_ID = $4\n AND ( PRICE_PANEL_TYPE = 'B' OR PRICE_PANEL_TYPE = $5 );\n$BODY$\n LANGUAGE 'sql' VOLATILE;\n\n\nHeikki Linnakangas wrote:\n> Matthew Lunnon wrote:\n>> I have a query which runs pretty quick ( 0.82ms) but when I put it \n>> inside a stored procedure it takes 10 times as long (11.229ms). Is \n>> this what you would expect and is there any way that I can get around \n>> this time delay?\n>\n> It depends. You'll need to show us the function. Also, what version of \n> Postgres are you running?\n>\n",
"msg_date": "Mon, 28 Jan 2008 12:10:24 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problems inside a stored procedure."
},
{
"msg_contents": "Matthew Lunnon wrote:\n\n> Ahh, sorry, I have been too aggressive with my cutting, I am running \n> 8.2.6 and the function is below.\n> \n<snip>\n\n> $BODY$\n> LANGUAGE 'sql' VOLATILE;\n ^^^^^^^^^^\nI suspect that it's because you're using VOLATILE (so no good \noptimizations is done); did you try STABLE? Could you show us the \nEXPLAIN ANALYZE of query and function?\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n",
"msg_date": "Tue, 29 Jan 2008 11:59:19 -0200",
"msg_from": "Euler Taveira de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems inside a stored procedure."
},
{
"msg_contents": "Thanks Euler,\n\nI made the change to STABLE but it didn't seem to make any difference. \nOn closer inspection it seems to have been a casting problem, I was \npassing a varchar into the function and then testing this for equality \nwith an integer. The planner seems to have been unable to use this to \naccess the index and so was returning too many rows and then filtering \nthem. It looks like I still have to take a hit of 2ms or so to call the \nfunction but I guess that is not unreasonable.\n\nThanks for your help and to everyone who answered this thread.\n\nRegards\nMatthew.\n\nEuler Taveira de Oliveira wrote:\n> Matthew Lunnon wrote:\n>\n>> Ahh, sorry, I have been too aggressive with my cutting, I am running \n>> 8.2.6 and the function is below.\n>>\n> <snip>\n>\n>> $BODY$\n>> LANGUAGE 'sql' VOLATILE;\n> ^^^^^^^^^^\n> I suspect that it's because you're using VOLATILE (so no good \n> optimizations is done); did you try STABLE? Could you show us the \n> EXPLAIN ANALYZE of query and function?\n>\n>\n\n",
"msg_date": "Tue, 29 Jan 2008 17:38:13 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problems inside a stored procedure."
},
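Based on Matthew's description, the fix presumably amounts to an explicit cast (or declaring the parameter as an integer) so contract_id is compared integer-to-integer and its index can be used. A sketch against the function shown earlier in the thread, with STABLE as Euler suggested; the exact placement of the cast is an assumption:

    CREATE OR REPLACE FUNCTION sp_get_price_panel_id(int4, varchar, varchar, varchar, bpchar)
      RETURNS SETOF t_market_price_panel AS
    $BODY$
    SELECT *
      FROM market mrkt
      JOIN market_group_relation mgr USING (market_id)
      JOIN market_group mg USING (market_group_id)
      JOIN market_group_price_relation mgpr USING (market_group_id)
      JOIN accommodation_price_panel app
           ON app.accommodation_price_panel_id = mgpr.price_panel_id
     WHERE mrkt.live <> 'X' AND mg.live <> 'X' AND app.live <> 'X'
       AND market_id = $1
       AND code = $2
       AND code_type = $3::char(2)
       AND contract_id = $4::int4      -- explicit cast so the integer index is usable
       AND (price_panel_type = 'B' OR price_panel_type = $5);
    $BODY$
      LANGUAGE sql STABLE;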
{
"msg_contents": "Thanks for your help пїЅпїЅпїЅпїЅпїЅпїЅ your English is easily understandable and \nmuch better than my ... (Russian?). I managed to get the results of an \nanalyze and this showed that an index was not being used correctly. It \nseems that I was passing in a varchar and not casting it to an int and \nthis stopped the index from being used. I suppose this is a change in \nthe implicit casting rules between version 7.4.7 and 8.x.\n\nOnce I added the explicit cast the function now uses the correct plan \nand returns in about 3 ms which I suppose is the performance hit that a \nfunction call has.\n\nAnyway thanks very much for your time.\n\nRegards\nMatthew\n\nпїЅпїЅпїЅпїЅпїЅпїЅ пїЅпїЅпїЅпїЅпїЅ wrote:\n> Hello Matthew,\n>\n> Monday, January 28, 2008, 2:02:26 PM, пїЅпїЅ пїЅпїЅпїЅпїЅпїЅпїЅ:\n>\n> ML> I have a query which runs pretty quick ( 0.82ms) but when I put it\n> ML> inside a stored procedure it takes 10 times as long (11.229ms). Is \n> ML> this what you would expect and is there any way that I can get around \n> ML> this time delay?\n>\n> ML> postgres.conf changes.\n>\n> ML> shared_buffers = 500MB\n> ML> work_mem = 10MB\n> ML> maintenance_work_mem = 100MB\n> ML> effective_cache_size = 2048MB\n> ML> default_statistics_target = 1000\n>\n> ML> Thanks for any help.\n> When you run it outside stored procedure optimizer know about your\n> parameters, and know what rows (estimate count) will be selected, so\n> it can create fine plan. When you put it into SP optimizer don't know\n> nothing about value of your parameters, but MUST create plan for it.\n> If table is frequently updateable plan, what was created for SP\n> became bad, and need replaning.\n>\n> It's sample for obtaining plan (LeXa NalBat):\n>\n> create function f1 ( integer, integer )\n> returns void language plpgsql as $body$\n> declare\n> _rec record;\n> begin\n> for _rec in explain\n>\n> -- put your query here\n> select count(*) from t1 where id between $1 and $2\n>\n> loop\n> raise info '%', _rec.\"QUERY PLAN\";\n> end loop;\n> return;\n> end;\n> $body$;\n>\n> Sorry for bad English.\n>\n> \n",
"msg_date": "Tue, 05 Feb 2008 09:32:21 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problems inside a stored procedure."
}
] |
[
{
"msg_contents": "We are trying to optimize our Database server without spending a \nfortune on hardware. Here is our current setup.\n\nMain Drive array: 8x 750 GB SATA 2 Drives in a RAID 10 Configuration, \nthis stores the OS, Applications, and PostgreSQL Data drives. 3 TB \nArray, 2 TB Parition for PostgreSQL.\nSecondary drive array: 2x 36 GB SAS 15000 RPM Drives in a RAID 1 \nConfiguration: the pg_xlog directory, checkpoints set to use about 18 \nGB max, this way when massive numbers of small writes occur, they \ndon't slow the system down. Drive failure loses no data. Checkpoints \nwill be another matter, hope to keep under control with bgwriter \ntweaking.\n\nNow our \"normal\" activities are really fast. Scanning data, etc., all \nruns pretty quickly. What is NOT fast is some of the massive \nqueries. We have some Updates with joins that match a 100m line table \nwith a 200m line table. Outside of custom coding pl/pgsql code that \ncreates the subfunction on the fly (which is up for consideration) to \ntry to keep the Matching joins to an O(n) problem from the current \nO(n^2) one, we are looking at hardware as an option to help speed up \nthese big batch queries that sometimes run for 5-6 days.\n\nCPU not a problem, 2x Quad-core Xeon, never taxing more than 13%, this \nwill change as more of our database functions are brought over here \nfrom the other servers\nRAM is not upgradable, have 48GB of RAM on there.\nWork_mem shouldn't be the issue, the big processes get Work_mem set to \n10GB, and if they are using temp tables, another 6-8GB for \ntemp_buffers. Maintenance Mem is set to 2 GB.\n\nHowever, the joins of two 50GB tables really just can't be solved in \nRAM without using drive space. My question is, can hardware speed \nthat up? Would putting a 400 GB SAS Drive (15000 RPM) in just to \nhandle PostgreSQL temp files help? Considering it would store \"in \nprocess\" queries and not \"completed transactions\" I see no reason to \nmirror the drive. If it fails, we'd simply unmount it, replace it, \nthen remount it, it could use the SATA space in the mean time.\n\nWould that speed things up, and if so, where in the drive mappings \nshould that partition go?\n\nThank you for your help. I'm mostly interested in if I can speed \nthese things up from 5-6 days to < 1 day, otherwise I need to look at \noptimizing it.\n\nAlex\n",
"msg_date": "Mon, 28 Jan 2008 08:54:30 -0500",
"msg_from": "Alex Hochberger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hard Drive Usage for Speeding up Big Queries"
},
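If the server is (or can be moved to) 8.3 or later, temporary files can be pointed at a dedicated spindle without symlink tricks via temp_tablespaces; the tablespace name, mount point, and database name below are hypothetical:

    CREATE TABLESPACE scratch LOCATION '/mnt/fast_sas/pg_scratch';
    ALTER DATABASE bigdb SET temp_tablespaces = 'scratch';
    -- or just for the big batch session:
    SET temp_tablespaces = 'scratch';

On 8.2 and earlier the usual workaround is to stop the server and symlink the pgsql_tmp directory under the data directory onto the faster drive.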
{
"msg_contents": "On Jan 28, 2008 7:54 AM, Alex Hochberger <[email protected]> wrote:\n> We are trying to optimize our Database server without spending a\n> fortune on hardware. Here is our current setup.\n>\n> Main Drive array: 8x 750 GB SATA 2 Drives in a RAID 10 Configuration,\n> this stores the OS, Applications, and PostgreSQL Data drives. 3 TB\n> Array, 2 TB Parition for PostgreSQL.\n> Secondary drive array: 2x 36 GB SAS 15000 RPM Drives in a RAID 1\n> Configuration: the pg_xlog directory, checkpoints set to use about 18\n> GB max, this way when massive numbers of small writes occur, they\n> don't slow the system down. Drive failure loses no data. Checkpoints\n> will be another matter, hope to keep under control with bgwriter\n> tweaking.\n>\nSNIP\n\n> However, the joins of two 50GB tables really just can't be solved in\n> RAM without using drive space. My question is, can hardware speed\n> that up? Would putting a 400 GB SAS Drive (15000 RPM) in just to\n> handle PostgreSQL temp files help? Considering it would store \"in\n> process\" queries and not \"completed transactions\" I see no reason to\n> mirror the drive. If it fails, we'd simply unmount it, replace it,\n> then remount it, it could use the SATA space in the mean time.\n>\n> Would that speed things up, and if so, where in the drive mappings\n> should that partition go?\n\nDo you have a maintenance window to experiment in? Try putting it on\nthe pg_xlog array to see if it speeds up the selects during one. Then\nyou'll know. I'm thinking it will help a little, but there's only so\nmuch you can do with 50g result sets.\n",
"msg_date": "Mon, 28 Jan 2008 09:19:48 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hard Drive Usage for Speeding up Big Queries"
},
{
"msg_contents": "On Jan 28, 2008 8:54 AM, Alex Hochberger <[email protected]> wrote:\n> We are trying to optimize our Database server without spending a\n> fortune on hardware. Here is our current setup.\n>\n> Main Drive array: 8x 750 GB SATA 2 Drives in a RAID 10 Configuration,\n> this stores the OS, Applications, and PostgreSQL Data drives. 3 TB\n> Array, 2 TB Parition for PostgreSQL.\n> Secondary drive array: 2x 36 GB SAS 15000 RPM Drives in a RAID 1\n> Configuration: the pg_xlog directory, checkpoints set to use about 18\n> GB max, this way when massive numbers of small writes occur, they\n> don't slow the system down. Drive failure loses no data. Checkpoints\n> will be another matter, hope to keep under control with bgwriter\n> tweaking.\n>\n> Now our \"normal\" activities are really fast. Scanning data, etc., all\n> runs pretty quickly. What is NOT fast is some of the massive\n> queries. We have some Updates with joins that match a 100m line table\n> with a 200m line table. Outside of custom coding pl/pgsql code that\n> creates the subfunction on the fly (which is up for consideration) to\n> try to keep the Matching joins to an O(n) problem from the current\n> O(n^2) one, we are looking at hardware as an option to help speed up\n> these big batch queries that sometimes run for 5-6 days.\n>\n\nWell, you have already put some thought into your hardware...but the\nawful truth is that the sata drives are just terrible at seeking once\nyou start seeing significant numbers of page faults to disk, and\ngetting killed on sorting on top of it. Maybe the best plan of attack\nhere is to post some explain times, and the relevant query. Perhaps\nthere are some optimizations in indexing strategies or other query\ntactics (the pl/pgsql function smells suspicious as you have already\nnoted), and hopefully the giant sort can be optimized out. You have a\nnasty problem that may require some out of the box thinking, so the\nmore information you can provide the better.\n\nmerlin\n",
"msg_date": "Mon, 28 Jan 2008 21:56:40 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hard Drive Usage for Speeding up Big Queries"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am experiencing a strange performance issue with Postgresql (7.4.19) \n+ PostGIS. (I posted to the PostGIS list but got no response, so am \ntrying here.)\n\nWe have a table of entries that contains latitude, longitude values \nand I have a simple query to retrieve all entries within a specified 2- \nD box.\n\nThe latitude, longitude are stored as decimals, plus a trigger stores \nthe corresponding geometry object.\n\nWhen I do an EXPLAIN ANALYZE on one query that returns 3261 rows, it \nexecutes in a reasonable 159ms:\n\nEXPLAIN ANALYZE SELECT DISTINCT latitude, longitude, color FROM \nNewEntries\n\t WHERE groupid = 57925 AND\n\t\t location @ SetSRID(MakeBox2D(SetSRID(MakePoint(-123.75, \n36.597889), 4326),\n \t\t SetSRID(MakePoint(-118.125, \n40.979898), 4326)), 4326);\n\n\nUnique (cost=23.73..23.74 rows=1 width=30) (actual \ntime=143.648..156.081 rows=3261 loops=1)\n -> Sort (cost=23.73..23.73 rows=1 width=30) (actual \ntime=143.640..146.214 rows=3369 loops=1)\n Sort Key: latitude, longitude, color\n -> Index Scan using group_index on newentries \n(cost=0.00..23.72 rows=1 width=30) (actual time=0.184..109.346 \nrows=3369 loops=1)\n Index Cond: (groupid = 57925)\n Filter: (\"location\" @ \n'0103000020E610000001000000050000000000000000F05EC0000000A0874C42400000000000F05EC0000000406D7D44400000000000885DC0000000406D7D44400000000000885DC0000000A0874C42400000000000F05EC0000000A0874C4240 \n'::geometry)\nTotal runtime: 159.430 ms\n(7 rows)\n\nIf I issue the same query over JDBC or use a PSQL stored procedure, it \ntakes over 3000 ms, which, of course is unacceptable!\n\nFunction Scan on gettilelocations (cost=0.00..12.50 rows=1000 \nwidth=30) (actual time=3311.368..3319.265 rows=3261 loops=1)\nTotal runtime: 3322.529 ms\n(2 rows)\n\nThe function gettilelocations is defined as:\n\nCREATE OR REPLACE FUNCTION GetTileLocations(Integer, real, real, real, \nreal)\n RETURNS SETOF TileLocation\nAS\n'\n DECLARE\n R TileLocation;\n BEGIN\n FOR R IN SELECT DISTINCT latitude, longitude, color FROM \nNewEntries\n WHERE groupid = $1 AND\n location @ SetSRID(MakeBox2D(SetSRID(MakePoint($2, \n$3), 4326),\n SetSRID(MakePoint($4, $5), 4326)),\n 4326) LOOP\n RETURN NEXT R;\n END LOOP;\n RETURN;\n END;\n'\nLANGUAGE plpgsql STABLE RETURNS NULL ON NULL INPUT;\n\nCan someone please tell me what we are doing wrong? Any help would be \ngreatly appreciated.\n\nThanks\n\nClaire\n\n --\n Claire McLister [email protected]\n 21060 Homestead Road Suite 150\n Cupertino, CA 95014 408-733-2737(fax)\n\n http://www.zeemaps.com\n\n\n\n",
"msg_date": "Mon, 28 Jan 2008 09:59:07 -0800",
"msg_from": "Claire McLister <[email protected]>",
"msg_from_op": true,
"msg_subject": "JDBC/Stored procedure performance issue"
},
{
"msg_contents": "Claire McLister <[email protected]> writes:\n> When I do an EXPLAIN ANALYZE on one query that returns 3261 rows, it \n> executes in a reasonable 159ms:\n> ...\n> If I issue the same query over JDBC or use a PSQL stored procedure, it \n> takes over 3000 ms, which, of course is unacceptable!\n\nI suspect that the problem is with \"groupid = $1\" instead of\n\"groupid = 57925\". The planner is probably avoiding an indexscan\nin the parameterized case because it's guessing the actual value will\nmatch so many rows as to make a seqscan faster. Is the distribution\nof groupid highly skewed? You might get better results if you increase\nthe statistics target for that column.\n\nSwitching to something newer than 7.4.x might help too. 8.1 and up\nsupport \"bitmap\" indexscans which work much better for large numbers\nof hits, and correspondingly the planner will use one in cases where\nit wouldn't use a plain indexscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jan 2008 15:51:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC/Stored procedure performance issue "
},
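Applying Tom's suggestion on the 7.4 box might look like this (the target value of 200 is only a guess; ANALYZE must be re-run for the new target to take effect):

    ALTER TABLE newentries ALTER COLUMN groupid SET STATISTICS 200;
    ANALYZE newentries;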
{
"msg_contents": "Hi Tom,\n\nIs there any way to work out what plan the query is using in side the \nfunction? I think I have a similar problem with a query taking much \nlonger from inside a function than it does as a select statement.\n\nRegards\nMatthew\n\nTom Lane wrote:\n> Claire McLister <[email protected]> writes:\n> \n>> When I do an EXPLAIN ANALYZE on one query that returns 3261 rows, it \n>> executes in a reasonable 159ms:\n>> ...\n>> If I issue the same query over JDBC or use a PSQL stored procedure, it \n>> takes over 3000 ms, which, of course is unacceptable!\n>> \n>\n> I suspect that the problem is with \"groupid = $1\" instead of\n> \"groupid = 57925\". The planner is probably avoiding an indexscan\n> in the parameterized case because it's guessing the actual value will\n> match so many rows as to make a seqscan faster. Is the distribution\n> of groupid highly skewed? You might get better results if you increase\n> the statistics target for that column.\n>\n> Switching to something newer than 7.4.x might help too. 8.1 and up\n> support \"bitmap\" indexscans which work much better for large numbers\n> of hits, and correspondingly the planner will use one in cases where\n> it wouldn't use a plain indexscan.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> _____________________________________________________________________\n> This e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http://www.verizonbusiness.com/uk\n> \n\n\n\n\n\n\nHi Tom,\n\nIs there any way to work out what plan the query is using in side the\nfunction? I think I have a similar problem with a query taking much\nlonger from inside a function than it does as a select statement.\n\nRegards\nMatthew\n\nTom Lane wrote:\n\nClaire McLister <[email protected]> writes:\n \n\nWhen I do an EXPLAIN ANALYZE on one query that returns 3261 rows, it \nexecutes in a reasonable 159ms:\n...\nIf I issue the same query over JDBC or use a PSQL stored procedure, it \ntakes over 3000 ms, which, of course is unacceptable!\n \n\n\nI suspect that the problem is with \"groupid = $1\" instead of\n\"groupid = 57925\". The planner is probably avoiding an indexscan\nin the parameterized case because it's guessing the actual value will\nmatch so many rows as to make a seqscan faster. Is the distribution\nof groupid highly skewed? You might get better results if you increase\nthe statistics target for that column.\n\nSwitching to something newer than 7.4.x might help too. 8.1 and up\nsupport \"bitmap\" indexscans which work much better for large numbers\nof hits, and correspondingly the planner will use one in cases where\nit wouldn't use a plain indexscan.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n_____________________________________________________________________\nThis e-mail has been scanned for viruses by Verizon Business Internet Managed Scanning Services - powered by MessageLabs. For further information visit http://www.verizonbusiness.com/uk",
"msg_date": "Tue, 29 Jan 2008 10:12:49 +0000",
"msg_from": "Matthew Lunnon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC/Stored procedure performance issue"
},
{
"msg_contents": "Matthew Lunnon <[email protected]> writes:\n> Is there any way to work out what plan the query is using in side the \n> function? I think I have a similar problem with a query taking much \n> longer from inside a function than it does as a select statement.\n\nStandard approach is to PREPARE a statement that has parameters in the\nsame places where the function uses variables/parameters, and then use\nEXPLAIN [ANALYZE] EXECUTE to test it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jan 2008 13:08:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC/Stored procedure performance issue "
},
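A concrete version of that approach for the query in this thread, with parameter markers in the same positions the function uses them:

    PREPARE tile_q (integer, real, real, real, real) AS
      SELECT DISTINCT latitude, longitude, color
        FROM newentries
       WHERE groupid = $1
         AND location @ SetSRID(MakeBox2D(SetSRID(MakePoint($2, $3), 4326),
                                          SetSRID(MakePoint($4, $5), 4326)), 4326);

    EXPLAIN ANALYZE EXECUTE tile_q (57925, -123.75, 36.597889, -118.125, 40.979898);
    DEALLOCATE tile_q;

If the plan shown here is the slow seqscan variant, that confirms the parameterized form is what the function (and the JDBC prepared statement) is running.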
{
"msg_contents": "Thanks, Tom. Looks like that was the issue.\n\nI changed the function to use groupid = 57925 instead of groupid = $1 \n(I can do the same change in the JDBC prepare statement), and the \nperformance is much better.\n\nIt is still more than twice that of the simple query: 401.111 ms vs. \n155.544 ms, which, however, is more acceptable than 3000ms.\n\nWill upgrade to 8.1 at some point, but would like to get reasonable \nperformance with 7.4 until then. I did increase the statistics target \nto 1000.\n\nClaire\n\nOn Jan 28, 2008, at 12:51 PM, Tom Lane wrote:\n\n> Claire McLister <[email protected]> writes:\n>> When I do an EXPLAIN ANALYZE on one query that returns 3261 rows, it\n>> executes in a reasonable 159ms:\n>> ...\n>> If I issue the same query over JDBC or use a PSQL stored procedure, \n>> it\n>> takes over 3000 ms, which, of course is unacceptable!\n>\n> I suspect that the problem is with \"groupid = $1\" instead of\n> \"groupid = 57925\". The planner is probably avoiding an indexscan\n> in the parameterized case because it's guessing the actual value will\n> match so many rows as to make a seqscan faster. Is the distribution\n> of groupid highly skewed? You might get better results if you \n> increase\n> the statistics target for that column.\n>\n> Switching to something newer than 7.4.x might help too. 8.1 and up\n> support \"bitmap\" indexscans which work much better for large numbers\n> of hits, and correspondingly the planner will use one in cases where\n> it wouldn't use a plain indexscan.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Thu, 31 Jan 2008 11:29:52 -0800",
"msg_from": "Claire McLister <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JDBC/Stored procedure performance issue "
}
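For the statistics-target change mentioned above, a sketch of the usual commands, again with the hypothetical events.groupid standing in for the real column; the larger target only affects planning after the column has been re-analyzed:

    -- Collect a larger sample for this column (1000 was the maximum target in 7.4 through 8.3).
    ALTER TABLE events ALTER COLUMN groupid SET STATISTICS 1000;
    ANALYZE events;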
] |
[
{
"msg_contents": "Hi\n\nWe are looking to buy a new server and I am wondering what kind of hardware\nwe should buy and how to configure it.\n\nWe are either getting 8x 2.5\" 15k rpm sas disks or 6x 3.5\" 15k rpm sas\ndisks.\nIf we go with the 2.5\" disks I think we should run 6 disks in raid 10 for\nthe database and 2 disks in raid 1 for os/wal\nIf we go with the 3.5\" disks I think we should run 4 disks in raid 10 for\nthe database and 2 disks in raid 1 for os/wal\n\nThe database has currently about 40gb data and 20gb indexes. The work\nconsists mainly of small transactions, with\nthe occasional report. Currently we have 10gb ram, which is enough to hold\nthe working set in memory 99% of the time.\nThe database server will only be running postgres, nothing else.\n\nSo, my question is: should I go for the 2.5\" disk setup or 3.5\" disk setup,\nand does the raid setup in either case look correct?\n\nHiWe are looking to buy a new server and I am wondering what kind of hardware we should buy and how to configure it.We are either getting 8x 2.5\" 15k rpm sas disks or 6x 3.5\" 15k rpm sas disks.\nIf we go with the 2.5\" disks I think we should run 6 disks in raid 10 for the database and 2 disks in raid 1 for os/walIf we go with the 3.5\" disks I think we should run 4 disks in raid 10 for the database and 2 disks in raid 1 for os/wal\nThe database has currently about 40gb data and 20gb indexes. The work consists mainly of small transactions, withthe occasional report. Currently we have 10gb ram, which is enough to hold the working set in memory 99% of the time.\nThe database server will only be running postgres, nothing else.So, my question is: should I go for the 2.5\" disk setup or 3.5\" disk setup, and does the raid setup in either case look correct?",
"msg_date": "Mon, 28 Jan 2008 20:25:59 +0100",
"msg_from": "\"Christian Nicolaisen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "On 28-1-2008 20:25 Christian Nicolaisen wrote:\n> So, my question is: should I go for the 2.5\" disk setup or 3.5\" disk \n> setup, and does the raid setup in either case look correct?\n\nAfaik they are about equal in speed. With the smaller ones being a bit \nfaster in random access and the larger ones a bit faster for sequential \nreads/writes.\n\nMy guess is that the 8x 2.5\" configuration will be faster than the 6x \n3.5\", even if the 3.5\"-drives happen to be faster they probably aren't \n50% faster... So since you don't need the larger storage capacities that \n3.5\" offer, I'd go with the 8x 2.5\"-setup.\n\nBest regards,\n\nArjen\n",
"msg_date": "Mon, 28 Jan 2008 21:28:21 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "> I missed the initial post in this thread, but I haven't seen any 15K rpm\n> 2.5\" drives, so if you compare 10K rpm 2.5\" drives with 15K rpm 3.5\"\n> drives you will see differences (depending on your workload and controller\n> cache)\n\nI have some 15K rpm 2.5\" sas-drives from HP. Other vendors have them as well.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n",
"msg_date": "Tue, 29 Jan 2008 09:06:49 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "On Mon, 28 Jan 2008, Arjen van der Meijden wrote:\n\n> On 28-1-2008 20:25 Christian Nicolaisen wrote:\n>> So, my question is: should I go for the 2.5\" disk setup or 3.5\" disk setup, \n>> and does the raid setup in either case look correct?\n>\n> Afaik they are about equal in speed. With the smaller ones being a bit faster \n> in random access and the larger ones a bit faster for sequential \n> reads/writes.\n\nI missed the initial post in this thread, but I haven't seen any 15K rpm \n2.5\" drives, so if you compare 10K rpm 2.5\" drives with 15K rpm 3.5\" \ndrives you will see differences (depending on your workload and controller \ncache)\n\nDavid Lang\n\n",
"msg_date": "Tue, 29 Jan 2008 00:32:25 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "There are several suppliers who offer Seagate's 2.5\" 15k rpm disks, I \nknow HP, Dell are amongst those. So I was actually refering to those, \nrather than to the 10k one's.\n\nBest regards,\n\nArjen\n\[email protected] wrote:\n> On Mon, 28 Jan 2008, Arjen van der Meijden wrote:\n> \n>> On 28-1-2008 20:25 Christian Nicolaisen wrote:\n>>> So, my question is: should I go for the 2.5\" disk setup or 3.5\" disk \n>>> setup, and does the raid setup in either case look correct?\n>>\n>> Afaik they are about equal in speed. With the smaller ones being a bit \n>> faster in random access and the larger ones a bit faster for \n>> sequential reads/writes.\n> \n> I missed the initial post in this thread, but I haven't seen any 15K rpm \n> 2.5\" drives, so if you compare 10K rpm 2.5\" drives with 15K rpm 3.5\" \n> drives you will see differences (depending on your workload and \n> controller cache)\n> \n> David Lang\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n",
"msg_date": "Tue, 29 Jan 2008 11:29:23 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "You don't mention the capacity of the disks you are looking at. Here is\nsomething you might want to consider. \n\n \n\nI've seen a few performance posts on using different hardware\ntechnologies to gain improvements. Most of those comments are on raid,\ninterface and rotation speed. One area that doesn't seem to have been\nmentioned is to run your disks empty.\n\n \n\nOne of the key roadblocks in disk performance is the time for the disk\nheads to seek, settle and find the start of the data. Another is the\ntime to transfer from disk to interface. Everyone may instinctively\nknow this but its often ignored.\n\n \n\nHard disks are CRV ( constant rotational velocity) = they spin at the\nsame speed all the time\n\nHard disk drives use a technology called ZBR = Zone Bit Recording = a\nlot more data on the outside tracks than the inner ones.\n\nHard disk fill up from outside track to inside track generally unless\nyou've done some weird partitioning. \n\n \n\nOn the outside of the disk you get a lot more data per seek than on the\ninside. Double whammy you get it faster.\n\nPerformance can vary more than 100% between the outer and inner tracks\nof the disk. So running a slower disk twice as big may give you more\nbenefit than running a small capacity 15K disk full. The slower disks\nare also generally more reliable and mostly much cheaper. \n\n \n\nThe other issue for full disks especially with lots of random small\ntransactions is the heads are seeking and settling across the whole\ndisk but typically with most of those seeks being on the latest\ntransactions which are placed nicely towards the middle of the disk.\n\n \n\nI know of a major bank that has a rule of thumb 25% of the disk\npartioned as a target maximum for high performance disk systems in a\nkey application. They also only pay for used capacity from their disk\nvendor. \n\n \n\nThis is not very green as you need to buy more disks for the same amount\nof data and its liable to upset your purchasing department who won't\nunderstand why you don't want to fill your disks up.\n\n \n\nMike \n\n \n\n\n\n\n\n\n\n\n\n\n\nYou don’t mention the capacity of the disks you are looking\nat. Here is something you might want to consider. \n \nI’ve seen a few performance posts\non using different hardware technologies to gain improvements. Most of those\ncomments are on raid, interface and rotation speed. One area that\ndoesn’t seem to have been mentioned is to run your\ndisks empty.\n \nOne of the key roadblocks in disk\nperformance is the time for the disk heads to seek, settle and find the start\nof the data. Another is the time to transfer from disk to interface. Everyone\nmay instinctively know this but its often ignored.\n \nHard disks are CRV ( constant\nrotational velocity) = they spin at the same speed all the time\nHard disk drives use a\ntechnology called ZBR = Zone Bit Recording = a lot more data on the\noutside tracks than the inner ones.\nHard disk fill up from outside track\nto inside track generally unless you’ve done some weird\npartitioning. \n \nOn the outside of the disk you get\na lot more data per seek than on the inside. Double whammy you get it faster.\nPerformance can vary more\nthan 100% between the outer and inner tracks of the disk. So running\na slower disk twice as big may give you more benefit than running a small capacity\n15K disk full. The slower disks are also generally more reliable and mostly\nmuch cheaper. 
\n \nThe other issue for full disks\nespecially with lots of random small transactions is the heads are seeking and\nsettling across the whole disk but typically with most of those\nseeks being on the latest transactions which are placed nicely towards the\nmiddle of the disk.\n \nI know of a major bank that has a rule\nof thumb 25% of the disk partioned as a target maximum for high\nperformance disk systems in a key application. They also only pay for used capacity\nfrom their disk vendor. \n \nThis is not very green as you need\nto buy more disks for the same amount of data and its liable to upset your\npurchasing department who won’t understand why you don’t want to\nfill your disks up.\n \nMike",
"msg_date": "Tue, 29 Jan 2008 06:43:15 -0500",
"msg_from": "\"Mike Smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "On Jan 29, 2008 5:43 AM, Mike Smith <[email protected]> wrote:\n>\n> You don't mention the capacity of the disks you are looking at. Here is\n> something you might want to consider.\n>\n> I've seen a few performance posts on using different hardware technologies\n> to gain improvements. Most of those comments are on raid, interface and\n> rotation speed. One area that doesn't seem to have been mentioned is to\n> run your disks empty.\n>\n> One of the key roadblocks in disk performance is the time for the disk heads\n> to seek, settle and find the start of the data. Another is the time to\n> transfer from disk to interface. Everyone may instinctively know this but\n> its often ignored.\n>\n> Hard disks are CRV ( constant rotational velocity) = they spin at the same\n> speed all the time\n>\n> Hard disk drives use a technology called ZBR = Zone Bit Recording = a lot\n> more data on the outside tracks than the inner ones.\n>\n> Hard disk fill up from outside track to inside track generally unless\n> you've done some weird partitioning.\n\nThis really depends on your file system. While NTFS does this, ext2/3\ncertainly does not. Many unix file systems use a more random method\nto distribute their writes.\n\nThe rest of what you describe is called \"short stroking\" in most\ncircles. It's certainly worth looking into no matter what size drives\nyou're using.\n",
"msg_date": "Tue, 29 Jan 2008 09:00:22 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
{
"msg_contents": "Mike Smith wrote:\n> I�ve seen a few performance posts on using different hardware \n> technologies to gain improvements. Most of those comments are on raid, \n> interface and rotation speed. One area that doesn�t seem to have \n> been mentioned is to run your disks empty.\n> ...\n> On the outside of the disk you get a lot more data per seek than on the \n> inside. Double whammy you get it faster.\n> \n> Performance can vary more than 100% between the outer and inner tracks \n> of the disk. So running a slower disk twice as big may give you more \n> benefit than running a small capacity 15K disk full. The slower disks \n> are also generally more reliable and mostly much cheaper.\n> ...\n> This is not very green as you need to buy more disks for the same amount \n> of data and its liable to upset your purchasing department who won�t \n> understand why you don�t want to fill your disks up.\n\nSo presumably the empty-disk effect could also be achieved by partitioning, say 25% of the drive for the database, and 75% empty partition. But in fact, you could use that \"low performance 75%\" for rarely-used or static data, such as the output from pg_dump, that is written during non-peak times.\n\nPretty cool.\n\nCraig\n",
"msg_date": "Tue, 29 Jan 2008 07:06:23 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
},
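Within Postgres itself, the fast-outer/slow-inner split Craig suggests can be acted on with tablespaces (available since 8.0), sketched here with hypothetical mount points and table names; the partitioning of the physical disks into those mount points is assumed to have been done beforehand:

    -- Hypothetical mount points for the outer (fast) and inner (slow) partitions of the same array.
    CREATE TABLESPACE fast_outer LOCATION '/mnt/outer/pgdata';
    CREATE TABLESPACE slow_inner LOCATION '/mnt/inner/pgdata';

    -- Keep hot data on the outer region, push static or archive data inward.
    CREATE TABLE orders (id integer PRIMARY KEY, placed_at timestamptz) TABLESPACE fast_outer;
    ALTER TABLE orders_archive SET TABLESPACE slow_inner;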
{
"msg_contents": "[presumably the empty-disk effect could also be achieved by partitioning, say 25% of the drive for the database, and 75% empty partition. But in fact, you could use that \"low performance 75%\" for rarely-used or static data, such as the output from pg_dump, that is written during non-peak times] \n \nLarry Ellison financed a company called Pillar Data Systems which was founded on the principle that you can tier the disk according to the value and performance requirements of the data. They planned to put the most valuable in performance terms on the outside of SATA disks and use the empty space in the middle for slower stuff..\n(This is not an advert. I like the idea but I dont know if it works well and I dont have anything to do with Pillar other than EnterpriseDB compete against Larry's other little company). \nProbably the way to go is flash drives for primary performance data . EMC and others have announced Enterprise Flash Drives (they claim 30 times performance of 15K disks although at 30 times the cost of standard disk today ). Flash should also have pretty much consistent high performance across the whole capacity.\nWithin a couple of years EFD should be affordable for mainstream use.\n\nRe: [PERFORM] 8x2.5\" or 6x3.5\" disks\n\n\n\n\n\n\n\n[presumably the empty-disk effect could also be achieved by partitioning, say 25% of the drive for the database, and 75% empty partition. But in fact, you could use that \"low performance 75%\" for rarely-used or static data, such as the output from pg_dump, that is written during non-peak times]\n \nLarry Ellison financed a company called Pillar Data Systems which was founded on the principle that you can tier the disk according to the value and performance requirements of the data. They planned to put the most valuable in performance terms on the outside of SATA disks and use the empty space in the middle for slower stuff..\n\n(This is not an advert. I like the idea but I dont know if it works well and I dont have anything to do with Pillar other than EnterpriseDB compete against Larry's other little company). \nProbably the way to go is flash drives for primary performance data . EMC and others have announced Enterprise Flash Drives (they claim 30 times performance of 15K disks although at 30 times the cost of standard disk today ). Flash should also have pretty much consistent high performance across the whole capacity.\nWithin a couple of years EFD should be affordable for mainstream use.",
"msg_date": "Tue, 29 Jan 2008 13:40:25 -0500",
"msg_from": "\"Mike Smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8x2.5\" or 6x3.5\" disks"
}
] |
[
{
"msg_contents": "Hi\n\nHow long is a piece of string?\n\nWhile we're at it, how often do I vacuum analyze?\n\nSeriously though, how do I try measure this?\n\n-- \nAdrian Moisey\nSystem Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n",
"msg_date": "Tue, 29 Jan 2008 16:28:45 +0200",
"msg_from": "Adrian Moisey <[email protected]>",
"msg_from_op": true,
"msg_subject": "analyze"
},
{
"msg_contents": "On Tue, Jan 29, 2008 at 04:28:45PM +0200, Adrian Moisey wrote:\n> \n> Seriously though, how do I try measure this?\n\nIs autovacuum not going to work for your case? \n\nA\n\n",
"msg_date": "Tue, 29 Jan 2008 13:09:26 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: analyze"
},
{
"msg_contents": "On Jan 29, 2008 8:28 AM, Adrian Moisey <[email protected]> wrote:\n> Hi\n>\n> How long is a piece of string?\n>\n> While we're at it, how often do I vacuum analyze?\n>\n> Seriously though, how do I try measure this?\n\n1: Turn on autovacuum.\n2: Look up the thread on nagios plugins for pgsql and rip the query\nfor checking bloat out of it, or alternatively, use NAGIOS I guess. :)\n\nmonitor your db for bloat, and if autovacuum isn't keeping up either\nset it to be more aggresive, or schedule manual vacuums for the tables\nthat need it.\n",
"msg_date": "Tue, 29 Jan 2008 12:36:46 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: analyze"
}
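A rough way to watch for bloat without the full Nagios plugin query, assuming PostgreSQL 8.3 or later for the n_live_tup/n_dead_tup columns in pg_stat_user_tables (older releases need the plugin's estimate built on pg_class and pg_statistic):

    -- Tables with the most dead rows; persistent offenders are vacuuming candidates.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / greatest(n_live_tup + n_dead_tup, 1), 1) AS pct_dead
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;

Tables that stay near the top between autovacuum runs are the ones to target with more aggressive autovacuum settings or a scheduled manual VACUUM ANALYZE.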
] |